\section{Introduction} The energy output of actively accreting supermassive black holes (active galactic nuclei, AGN) has become a critical ingredient in modern galaxy formation theories. Powerful AGN (quasars, $L_{bol} > 10^{45}$~erg/s) can heat and photo-ionise gas tens of kiloparsecs away, and even well into the circum-galactic medium \citep{john15, rudie17}, in a process known as radiative feedback. Furthermore, radiatively driven nuclear winds \citep{murr95} or jets can launch galaxy-wide outflows. Such mechanical feedback processes can aid in establishing the black hole vs.\ bulge correlations, can effectively quench star formation activity, and -- most importantly -- set the upper limit to the masses of galaxies \citep[e.g.][]{ferr00, crot06, fabi12, korm13}. However, constraining the power and reach of such feedback processes exerted by black holes onto their hosts remains a major unresolved issue in modern extragalactic astrophysics. The critical role of quasars in galaxy formation was hypothesised two decades ago \citep{silk98}, yet this paradigm only recently obtained observational support, much of it on the basis of IFU observations \citep{rupk11,liu13b,harr14,carn15}. There is increasing evidence that in powerful AGN the main interaction with the gas is through winds, which are inhomogeneous, complex multi-phase phenomena, with different gas phases observable in different spectral domains \citep{heck90, veil05}. Most of our current knowledge about AGN-driven outflows comes from mapping the kinematics of the warm ionised gas phase via optical emission lines such as [O\,III]~5007~\AA. The signature of galaxy-wide winds is that gas on galactic scales moves with velocities inconsistent with dynamical equilibrium with the host galaxy or with disk rotation \citep{rupk13,liu13b, wyle16a, wyle18}. Integral field unit (IFU) surveys now offer new possibilities for characterising outflow signatures in statistically significant samples. The SDSS-IV \citep{Blanton_2017} survey Mapping Nearby Galaxies at APO \citep[MaNGA;][]{Bundy_2015, Drory_2015, Law_2015, Yan_2016a, Yan_2016b, Wake_2017} is a new optical fibre-bundle IFU survey that will obtain IFU observations of 10,000 galaxies at $z \lesssim 0.1$ over the next few years, allowing an extensive investigation of the spatial dimension of galaxy evolution. The goals of the survey are to improve our understanding of the processes involved in galaxy formation and evolution over time. MaNGA also makes it possible to take full advantage of the spatial dimension of AGN ionisation signatures \citep{penny18, rembold17, Sanchez18, wyle17a, wyle18}. \citet{wyle18} have recently developed spatially resolved techniques tailored to the MaNGA data for identifying signatures of AGN. Out of 2778 galaxies in the parent sample, they identify 303 AGN candidates which show signatures of gas ionised by relatively hard radiation fields inconsistent with star formation. While the authors show that $\sim 10$\% of low redshift galaxies currently host low- to intermediate-luminosity AGN based on photoionisation diagnostics, it remains unclear if and to what extent these AGN impact the gas kinematics through AGN-driven winds. Additionally, \cite{wyle18} show that about a third to half of the MaNGA-selected AGN candidates would not have been selected based on the SDSS-III single-fibre observations since AGN ionisation signatures are only prevalent beyond the 3 arcsec coverage of the single-fibre spectra.
Reasons for such signatures can be manifold (heavy circumnuclear obscuration, off-nuclear AGN, dominant nuclear SF signatures, relic AGN) and are currently under investigation. A particularly intriguing possibility is that some of the AGN candidates are relic AGN. In such objects the nuclear activity subsided some time ago, but the photo-ionisation signatures at large distances persist for $10^4-10^5$ years due to light-travel delays and the radiative timescales of the emitting gas \citep{lint09, Schawinski_2015, Sartori_2016, Keel_2017}. In relic AGN, kinematic signatures of previous AGN activity may be longer lived than nuclear photoionisation signatures \citep{Ishibashi15}. Investigating the ionised gas kinematics in currently active AGN with nuclear AGN signatures and in relic AGN candidates is therefore of great interest with respect to outflow timescales and outflow propagation. In this paper, we investigate the prevalence of ionised gas outflow signatures in MaNGA-selected AGN. This work focuses on a detailed kinematic analysis of the [O\,III]$\lambda\lambda$4959,5007\AA~doublet. We first develop a spaxel-based fitting algorithm that allows broad secondary components in the emission line profile to be accounted for. Our goal is to improve on the kinematic measurements previously made by the survey pipeline and to use them in conjunction with a sample of independently identified AGN candidates from \citet{wyle18}. The paper is organised as follows: Section 2 introduces the MaNGA survey and the structure of the available data. Section 3 presents the spectroscopic fitting procedure, while Section 4 presents the kinematic analysis. In Section 5 we discuss the identification of ionised gas outflow signatures and their prevalence in MaNGA-selected AGN and non-AGN. In Section 6 we present our conclusions. To statistically compare distributions, we use the two-sample Kolmogorov-Smirnov test and report $p$, the probability of the null hypothesis that the two samples are drawn from the same distribution. Low $p$ values ($p < 0.01$) mean that the two samples are statistically different. Throughout the paper we use $H_{0} = 72$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$. \section{Data} \subsection{The MaNGA Survey and Data Products} Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) is a spectroscopic survey conducted as part of the Sloan Digital Sky Survey-IV (SDSS-IV). MaNGA uses Integral Field Unit (IFU) observations to obtain spatially resolved spectroscopy of each galaxy in the 3,600 -- 10,000\textup{\AA}\ range using the BOSS Spectrograph \citep{Gunn_2006, smee13} at $R \sim 2000$. Fibres are arranged into hexagonal groups, with bundle sizes ranging from 19 -- 127 fibres, depending on the apparent size of the target galaxy (corresponding to diameters ranging between 12\arcsec\ and 32\arcsec), leading to an average footprint of $400-500$~arcsec$^{2}$ per IFU. The fibres have an aperture of 2\arcsec\ (2.5\arcsec\ separation between fibre centres), which at $z\sim 0.05$ corresponds to $\sim 2$ kpc, although with dithering the effective sampling improves to $1.4$\arcsec. The current data release DR14 \citep{dr14_2017} contains 2778 galaxies at $0.01 < z < 0.15$ with a mean $z \sim 0.05$. Over the next three years, MaNGA will obtain observations of $\sim$10,000 galaxies at $z\la 0.15$ with stellar masses $>10^9 M_{\odot}$.
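For reference, the conversion between the angular and physical scales quoted above follows directly from the adopted cosmology ($H_{0} = 72$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_m=0.3$). A minimal sketch using \texttt{astropy} (the variable names are illustrative):
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology adopted in this paper (Section 1)
cosmo = FlatLambdaCDM(H0=72, Om0=0.3)

# Proper scale at the mean survey redshift
z = 0.05
scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)

# A 2 arcsec fibre aperture then subtends roughly 2 kpc
print(scale * 2 * u.arcsec)
\end{verbatim}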
The MaNGA Data Reduction Pipeline (DRP) produces sky-subtracted, spectrophotometrically calibrated spectra and rectified three-dimensional data cubes that combine the individual dithered observations \citep[for details on the MaNGA data reduction see][]{Law_2016} with a spatial pixel scale of 0.5\arcsec\ pixel$^{-1}$. The median spatial resolution of the MaNGA data is 2.54\arcsec\ FWHM, while the median spectral resolution is $\sim 72$ km/s \citep{Law_2016}. The MaNGA Data Analysis Pipeline \citep[DAP, ][]{Yan_2016a, Westfall_2017} is a project-led software package used to analyse the data products provided by the MaNGA DRP, providing the collaboration and the public with survey-level quantities such as stellar-population parameters, kinematics and emission-line properties for 21 different emission lines. To make these calculations, the DAP first fits the stellar continuum using the Penalized Pixel-Fitting method \citep[pPXF, ][]{Cappellari_2004, Cappellari_2017} and then subtracts the best-fitting stellar continuum from the observed data before fitting single Gaussians to the emission lines, allowing for additional subtraction of a non-zero baseline. The final fitting model, emission line fit, and baseline fit are all available via the `MODEL', `EMLINE', and `EMLINE\_BASE' extensions, respectively, in the DAP logcube files. \subsection{Samples} The work in this paper is based on Data Release 14 (DR14), which consists of data cubes for 2778 galaxies corresponding to 2727 unique objects. The aim of this work is to compare the kinematic characteristics of the [OIII] emission line for MaNGA-selected AGN and non-AGN in the MaNGA sample. In optical surveys, emission line flux ratios and diagnostic diagrams are the most common way to identify AGN \citep{bald81,oste89,zaka03,kauf03a,kewl06,reye08,yuan16}. A major caveat of large optical spectroscopic surveys such as the Sloan Digital Sky Survey (prior to MaNGA), however, is the small size of the optical fibres which, at 3$\arcsec$ diameter (in the case of the SDSS-I to SDSS-III surveys), cover only a fraction of the footprint of a galaxy and are only sensitive to processes close to the galactic center. \cite{wyle18} recently developed spatially resolved techniques for identifying signatures of active galactic nuclei tailored to MaNGA IFU data, identifying 303 AGN candidates. A minor update to the selection code\footnote{This minor update regards the inclusion of a few `borderline' objects related to the precision with which $d_{BPT}$, the distance between Seyfert/LINER-classified spaxels in the BPT diagram and the star formation demarcation line, is measured (see \citet{wyle18} for more details).} has increased the sample to 308 sources, which we adopt as the `AGN' sample in this work (see Table \ref{table_measurements}). We furthermore refer to the remaining MaNGA galaxies as `non-AGN'. While LINER-like (`low ionisation nuclear emission line region') galaxies can be associated with a number of ionisation mechanisms, such as weakly ionising AGN \citep{heck80}, shock ionisation (either related to star-forming processes in inactive galaxies or to AGN activity) or photo-ionisation by hot evolved stars, the algorithm developed by \citet{wyle18} was tailored to select the most likely AGN among LINER-like galaxies.
This has been achieved using a combination of spatially resolved line diagnostic diagrams, assessing the significance of the deviation from the star formation locus in line diagnostic diagrams, and applying additional cuts on H$\alpha$ surface brightness and H$\alpha$ equivalent width. Some of the AGN candidates selected by \citet{wyle18} would not have been identified based on the single-fibre nuclear spectra alone. This is either because the AGN is hidden behind large columns of dust in the galactic center, because the AGN has recently turned off and relic AGN signatures are only visible at larger distances, because the AGN is offset after a recent galaxy merger, or because a circumnuclear starburst overwhelms the nuclear AGN signatures \citep{wyle18}. Based on the MaNGA measurements in the inner 3$\arcsec$ (similar to the single-fibre classifications of SDSS I-III), 109 out of the 308 MaNGA-selected AGN would be classified as star-forming (SF) galaxies, 84 sources would be classified as `Seyfert' galaxies, i.e. AGN, and 91 sources would be classified as LINER-like galaxies. In the remainder of the paper, we refer to these subsamples as Seyfert-AGN (84 sources), SF-AGN (109 sources) and LINER-AGN (91 sources). The 24 remaining galaxies could not be classified based on their central emission line signatures because one or more of the required emission lines did not fulfil the S/N criteria \citep[see ][for the details of the selection]{wyle18}. In the remainder of the paper, we do not include these galaxies when assessing the differences between the individual `types' of AGN. We note that this AGN selection is different from the ones used in \citet{rembold17} or \citet{Sanchez18}, who both present results on AGN in MaNGA. The main difference in the sample selection is that both of these works only use the photoionisation signatures in the central region of the galaxies to classify a galaxy as an AGN candidate. \citet{rembold17} use the SDSS-III spectroscopic data from DR12 based on the 3\arcsec\ single-fibre measurements, whereas \citet{Sanchez18} use the MaNGA spectroscopic information in the central $3\arcsec \times 3\arcsec$ region analysed and measured with the Pipe3D pipeline \citep{Sanchez_2016}. Their AGN samples contain 62 and 98 AGN candidates, respectively. The differences in the sample selection lie in the exact choice of diagnostic diagrams and equivalent width cuts. For example, while \citet{rembold17} employ a cut of 3\AA\ on the equivalent width of H$\alpha$, \citet{Sanchez18} use a more relaxed criterion of only 1.5\AA. These differences lead to somewhat different samples, with fewer or more sources being selected as AGN candidates. The overlap between the \citet{wyle18} and \citet{rembold17} samples is 37 sources, the overlap between the \citet{wyle18} and \citet{Sanchez18} samples is 44 sources, and the overlap between the \citet{rembold17} and \citet{Sanchez18} samples is 37 sources. In Comerford et al. (submitted), we explore the overlap between optically and radio selected AGN in MaNGA, and we explore the implications for BPT and H$\alpha$ selection of AGN in a subsequent paper (Negus et al., in prep.). \section{Methods} \subsection{Spectral Fitting} In this paper, all kinematic calculations are based on fits to the [OIII] emission line doublet at 4960,5008\textup{\AA}. All SDSS data, including MaNGA, are stored at vacuum wavelengths \citep{Morton_1991}, but we use air wavelengths to identify emission lines, following the long-standing convention.
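As a concrete reference point, the vacuum-to-air conversion quoted in the SDSS documentation (based on \citealt{Morton_1991}) can be sketched as follows; the function name is illustrative:
\begin{verbatim}
import numpy as np

def vac_to_air(wave_vac):
    """Convert vacuum wavelengths (Angstrom) to air wavelengths,
    following the conversion quoted in the SDSS documentation."""
    wave_vac = np.asarray(wave_vac, dtype=float)
    return wave_vac / (1.0 + 2.735182e-4
                       + 131.4182 / wave_vac**2
                       + 2.76249e8 / wave_vac**4)

# The [OIII] doublet: vacuum 4960.295, 5008.240 A -> air ~4958.9, ~5006.8 A
print(vac_to_air([4960.295, 5008.240]))
\end{verbatim}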
In AGN, a large fraction of the [OIII] emission originates in the narrow-line regions surrounding the AGN \citep[e.g.][]{kewl06, liu13b}. Being a forbidden line, it can trace the low-density AGN-ionised gas even out to galaxy-wide scales of several kiloparsecs \citep{liu13b}. [OIII] is easily observable from the ground for low to intermediate redshift AGN and it is widely used as a gas (outflow) tracer in low- and intermediate-redshift AGN \citep{cren10a, cren15, Lena_2015, Fischer_2017, rupk17}, making our measurements easily comparable with other works. We develop a customised fitting procedure to model the [OIII] doublet and potential secondary and/or broad components in the line. We first extract the spectra for each spaxel from the DAP Logcube files using the `FLUX' extension, providing the flux density in units of $10^{-17}$~erg\,s$^{-1}$\,cm$^{-2}$\,\AA$^{-1}$\,spaxel$^{-1}$, and then subtract the modelled stellar continuum (see Section 2.2) using the other Logcube extensions. We also measure the flux level blue- and red-ward of the H$\beta$ + [OIII] line complex and subtract, using a linear function of wavelength, any additional continuum contribution that might not have been accounted for. We further adopt the spectroscopic redshifts from the NASA-Sloan Atlas (NSA) catalogues, which are based on the single-fibre measurements, to correct the spectra to the rest frame of the galaxy. The MaNGA Data Analysis Pipeline performs single-Gaussian fitting on a number of selected, bright emission lines in all spaxels. However, a single Gaussian is often insufficient for describing the profile of the [OIII] line, and in particular such a fit would fail to capture a secondary broad component characteristic of outflowing gas. To evaluate the prevalence of additional kinematic components in MaNGA-selected AGN, we therefore allow multiple Gaussian components to be fit to the emission lines. In the fitting process, we fit the two transitions of the [OIII] doublet simultaneously and assume they share the same kinematics. We furthermore fix the ratio between the amplitude of the 5008\textup{\AA}\ peak and the 4960\textup{\AA}\ peak to its theoretical value of 2.98. The fitting procedure uses least-squares regression to return best-fit parameters for the single-Gaussian and double-Gaussian models. We evaluate the goodness of each fit based on its $\chi^2$ value and adopt the model with the fewest Gaussian components that still minimises the $\chi^2$. While in the two-dimensional maps we show all spaxels where the signal-to-noise ratio of the [OIII] emission lines is $S/N > 3$, we only use spaxels with $S/N > 10$ in the subsequent analysis part of this paper. In Figure \ref{8715-3702_fit} we show an example fit to a spaxel in the MaNGA galaxy 8715-3702 where our multi-component Gaussian fitting describes the line profile more accurately than the standard single-Gaussian fit. \begin{figure} \centering \includegraphics[width = 0.4\textwidth, trim = 4cm 3.2cm 6cm 2cm, clip= true]{example_spectrum.pdf} \caption{An example fit for a spaxel in MaNGA galaxy 8715-3702. The spectrum after subtraction of the stellar continuum is shown in red, the single-Gaussian fit is shown in blue, and the multi-Gaussian fit is shown in green.
While the single-Gaussian fit (blue) misses the broad wings in the line profile, the multi-Gaussian fit provides a much more accurate description of the line profile and its associated kinematic properties.} \label{8715-3702_fit} \end{figure} \subsection{Non-Parametric Values} When spectra can be well described by single-Gaussian fits alone, best-fit parameters such as the velocity dispersion $\sigma$, the full width at half maximum (FWHM) and the amplitude are sufficient to describe the kinematic properties of the emission line in that spaxel. This is not the case when multiple Gaussians are used to describe the line profile. Because the sum of multiple Gaussians is used in some spaxels, we calculate non-parametric values based on percentages of the total integrated flux and follow the measurement strategy presented in \citet{zaka14} and \citet{liu13b} \citep[see also][]{whit85a} to determine amplitudes, centroid velocities and emission line widths. Such non-parametric measurements do not strongly depend on a specific fitting procedure. The cumulative flux as a function of velocity is \begin{equation} \Phi(v) = \int_{-\infty}^{v} F_{v}(v') dv' \end{equation} and the total line flux is given by $\Phi(\infty)$. In practice, we use the interval $[-2000,2000]$~km~s$^{-1}$ in the rest-frame of the galaxy for the integration. For each spaxel, we compute the line-of-sight velocity $v_{med}$ where $\Phi(v_{med}) = 0.5 \cdot \Phi(\infty)$, i.e. this is the velocity that bisects the total area underneath the emission-line profile. Because the fitting is performed in the rest frame of the galaxy as determined by its stellar component, $v_{med}$ is measured relative to that rest frame. We use the W$_{80}$ parameter to parameterise the velocity width of the line. W$_{80}$ refers to the velocity width that encloses 80\% of the total flux. For a purely Gaussian profile, W$_{80}$ is close to the FWHM, but the non-parametric velocity width measurements are more sensitive to the weak broad bases of non-Gaussian emission line profiles \citep{liu13b}. We first determine $v_{90}$ such that $\Phi(v_{90}) = 0.9 \cdot \Phi(\infty)$ and $v_{10}$ such that $\Phi(v_{10}) = 0.1 \cdot \Phi(\infty)$ and then calculate W$_{80} = v_{90} - v_{10}$. \subsection{Kinematic and Division Maps} Having performed the fitting and analysis procedure described in Sections 3.1 and 3.2 in all spaxels of the MaNGA-selected AGN candidates, we create two-dimensional maps for the following quantities: \begin{enumerate} \item The total flux measured for [OIII]5008\textup{\AA}, \item The non-parametric line-of-sight velocity $v_{med}$, \item The non-parametric velocity width W$_{80}$, \item The number of Gaussians used for each fit, determined based on the $\chi^2$ analysis, \item The reduced $\chi^2$ statistic of the best fit in each spaxel. \end{enumerate} \begin{figure*} \begin{center} \includegraphics[width = 0.75\textwidth, trim = 0cm 0.2cm 0cm 0cm, clip= true]{"Map_Plot_8715-3702_FINAL"} \vspace{0.5cm} \includegraphics[width = 0.75\textwidth, trim = 0cm 0.2cm 0cm 0cm, clip= true]{"Map_Plot_8459-6102_FINAL"} \vspace{0.5cm} \includegraphics[width = 0.75\textwidth, trim = 0cm 0.2cm 0cm 0cm, clip= true]{"Map_Plot_8978-9101_FINAL"} \caption{Example maps for three MaNGA galaxies 8715-3702, 8459-6102 and 8978-9101.
We show the SDSS $gri$ composite image (top left), the [OIII] flux density (logarithmic, top center), the median velocity $v_{med}$ (top right), the multi-Gaussian-based W$_{80}$ measurements (lower left), the reference single-Gaussian-based W$_{80, DAP}$ values based on the MaNGA Data Analysis Pipeline fits (lower center), and the divisional values $\frac{\rm{W}_{80}}{\rm{W}_{80,DAP}}$ (lower right). All images and maps are orientated North-up, East-left.} \label{maps} \end{center} \end{figure*} The MaNGA survey team has already made velocity dispersion calculations $\sigma_{DAP}$ based on single-Gaussian fits to the [OIII] emission lines in each spaxel. In order to assess how well these single-Gaussian fits describe the lines and how the velocity dispersion based on the single-Gaussian fits compares to the dispersion derived in this work, we generate `division maps' for every galaxy in the survey. The `division maps' report the fractional difference in velocity width when comparing the single-Gaussian fits to our measurements. To achieve a fair comparison, we first compute the non-parametric velocity width W$_{80}$ for the single-Gaussian fits from the MaNGA DAP. As mentioned above, for a purely Gaussian profile, W$_{80}$ is closely related to the FWHM and therefore to its velocity dispersion, such that: \begin{equation} W_{80, \rm{DAP}} = 1.088 \cdot FWHM_{\rm{DAP}} = 1.088 \cdot 2.35 \cdot \sigma_{\rm{DAP}} = 2.56 \cdot\sigma_{\rm{DAP}} \label{convert} \end{equation} We then divide the W$_{80}$ value derived from our customised multi-Gaussian fitting procedure described above by the corresponding DAP single-Gaussian-based W$_{80, \rm{DAP}}$ value in each spaxel. This results in a two-dimensional map for each galaxy reporting divisional values (i.e. fractional differences in velocity line width measurements) across the galaxy. Values of $\sim 1$ in this map identify the spaxels where our fits, either single or double, are similar to the DAP ones, while higher values flag the spaxels where the double-Gaussian fit captured a secondary component in the emission line missed by the DAP. In the subsequent analysis, we utilise these maps to assess the prevalence of additional kinematic ionised gas components in MaNGA galaxies and MaNGA-selected AGN. In Figure \ref{maps} we show three examples of these maps, including the galaxy 8715-3702 already shown in Figure \ref{8715-3702_fit}. For each source, we show the SDSS composite image, the [OIII] flux density, the median velocity $v_{med}$, the multi-Gaussian-based W$_{80}$ measurements, the single-Gaussian-based W$_{80,\rm{DAP}}$ and the divisional values. We note that the number of spaxels with valid values might differ between the W$_{80,\rm{DAP}}$ map and the maps based on the fitting routine developed here. This is because we only report measured quantities in spaxels where the [OIII] emission line has $S/N > 3$, which might differ slightly from the DAP `good' spaxels. Object 8715-3702 (top source) is classified as a Seyfert-AGN with high W$_{80}$ measurements of $\sim 1000$~km/s, while the W$_{80, \rm{DAP}}$ measurements are significantly lower. This difference is reflected in the `division map', where the enhanced velocity line width in the North-West and East of the galaxy is apparent.
Objects 8459-6102 (centre source, classified as a regular star-forming galaxy) and 8978-9101 (bottom source, classified as a LINER-AGN) both show a low gas velocity width and little difference between the W$_{80}$ and W$_{80, \rm{DAP}}$ measurements. This means that most fits are consistent with the single-Gaussian fit results, suggesting that little or no enhanced gas kinematics are present in these sources. \section{Results} \subsection{Absolute Kinematic Comparison} We first perform an absolute kinematic comparison by assessing the distribution of W$_{80}$ values in all MaNGA galaxies and in the AGN samples. Figure \ref{big_hist} shows the distribution of all W$_{80}$ measurements in every fitted spaxel of every MaNGA galaxy. We also separately show the distributions for the MaNGA-selected AGN and the non-AGN galaxies. The distribution peaks at $\sim 200$~km~s$^{-1}$, which corresponds to typical gas velocity widths in galaxies with masses of $10^{10-11}$~M$_{\odot}$. Additionally, we observe a heavily skewed tail towards large velocity widths, which is enhanced in MaNGA-selected AGN and indicative of kinematic peculiarities \citep{nels00, kewl06}. In Figure \ref{W80_hist_mean} we show the distribution of the mean W$_{80}$ measurements $\langle$W$_{80}\rangle$ for all galaxies, for the MaNGA-selected AGN and for the three MaNGA-AGN subsamples, SF-AGN, LINER-AGN and Seyfert-AGN. For this analysis we only include galaxies in which at least 10\% of the spaxels have valid [OIII] emission line measurements, i.e. a peak $S/N > 10$. This cut was chosen to ensure that [OIII] is detected in a sufficiently large number of spaxels to be able to draw meaningful conclusions about the [OIII] behaviour in these galaxies. In total, 1116 MaNGA galaxies and 159 MaNGA-selected AGN fulfill this criterion. For every galaxy, we furthermore measure the 75th percentile of its W$_{80}$ distribution, W$_{80, 75th}$. In Figure \ref{W80_hist_75} we show the distribution of the W$_{80, 75th}$ measurements for all galaxies, for the MaNGA-selected AGN and for the three MaNGA-AGN subsamples, SF-AGN, LINER-AGN and Seyfert-AGN. A two-sample Kolmogorov-Smirnov (KS) test shows that the $\langle$W$_{80}\rangle$ distributions of the MaNGA-selected AGN, the SF-AGN, the LINER-AGN and the Seyfert-AGN are significantly different from the total distribution. We repeat the analysis comparing the distributions of the MaNGA-selected AGN subsamples with only the non-AGN MaNGA galaxies and find similarly significant results. We report the measurements of $\langle$W$_{80}\rangle$ and W$_{80, 75th}$ in Table \ref{table_measurements} and the $p$-statistic values of the KS tests in Table \ref{table_statistics}. To investigate if these results are driven by any unaccounted-for bias, we randomly select 100 galaxies from the full sample and repeat the statistical comparison. A KS test comparing the full and the random galaxy sample results in a high $p$-value of $p = 0.47$, showing that the W$_{80}$ distribution of the randomly selected galaxy sample is consistent with being drawn from the same distribution as the full MaNGA sample. We conclude that the statistical difference between the AGN and AGN subsamples and the full MaNGA W$_{80}$ distribution is indeed intrinsic and that the kinematic properties of the MaNGA-selected AGN are distinct from the overall MaNGA distribution. To further illustrate this point, we compute the number of galaxies in which at least 10\% of the spaxels show W$_{80}$ values of W$_{80} > 500/800/1000$~km~s$^{-1}$.
A total of 257/112/37 galaxies pass this cut. In the MaNGA-selected AGN sample, 77/21/7 (25/7/2\%) of the galaxies pass this cut, while only 180/91/30 (7/4/1\%) of the remaining MaNGA galaxies do. Similarly, there are 13 MaNGA-selected AGN with $\langle$W$_{80}\rangle > 500$~km~s$^{-1}$ (4\%) and only 20 non-AGN with $\langle$W$_{80}\rangle > 500$~km~s$^{-1}$ ($<1$\%). The fractions were computed using the full MaNGA and MaNGA AGN samples as baselines, i.e. 2778 and 308 sources, respectively. Using the numbers of galaxies that initially passed our quality cut (at least 10\% of the spaxels need to have an [OIII] line measurement with S/N~$>10$) as baselines, i.e. 1116 and 159 sources, respectively, leads to the same conclusion: two to three times as many MaNGA-selected AGN show enhanced [OIII] kinematics compared to the non-AGN in MaNGA. \begin{figure} \includegraphics[ width=0.45\textwidth, trim = 0cm 0.2cm 1.3cm 1.54cm, clip= true]{W80_hist_all.pdf} \caption{The distribution of W$_{80}$ measurements across all spaxels in all 2778 MaNGA galaxies (black), the MaNGA-selected AGN (red) and the non-AGN (green). The small number of values below 150 km/s is removed as these are at the limit of the instrumental resolution of the survey and not physically meaningful. Of particular interest is the heavily skewed tail towards values above $\sim$500 km/s, which is significantly enhanced for the MaNGA-selected AGN ($p$-value of the KS test $< 10^{-200}$), indicative of enhanced kinematics potentially related to current or previous AGN activity.} \label{big_hist} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth, trim = 4cm 1cm 4cm 2cm, clip= true]{W80.pdf} \caption{Normalised logarithmic distributions of the mean W$_{80}$ measurements for each galaxy for the whole MaNGA sample, the MaNGA-selected AGN, the AGN subsamples and the non-AGN. The small number of values below 150 km/s is removed as these are at the limit of the instrumental resolution of the survey and not physically meaningful. We show the distributions for all MaNGA galaxies (top left), non-AGN (top center), AGN candidates (top right), and the AGN subsamples Seyfert-AGN (lower left), SF-AGN (lower center), and LINER-AGN (lower right). For reference, we also show the non-AGN distribution (black dashed histogram) in the AGN and AGN subsample panels. In the AGN and all AGN subsamples, the distributions show larger contributions from high W$_{80}$ measurements, indicative of enhanced kinematics potentially related to current or previous AGN activity.} \label{W80_hist_mean} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth, trim = 4cm 1cm 4cm 2cm, clip= true]{W80_75.pdf} \caption{Normalised logarithmic distributions of the 75th percentile W$_{80}$ measurements for each galaxy for the whole MaNGA sample, the MaNGA-selected AGN, the AGN subsamples and the non-AGN. The small number of values below 150 km/s is removed as these are at the limit of the instrumental resolution of the survey and not physically meaningful. We show the distributions for all MaNGA galaxies (top left), non-AGN (top center), AGN candidates (top right), and the AGN subsamples Seyfert-AGN (lower left), SF-AGN (lower center), and LINER-AGN (lower right). For reference, we also show the non-AGN distribution (black dashed histogram) in the AGN and AGN subsample panels.
In the AGN and all AGN subsamples, the distributions show larger contributions from high W$_{80}$ measurements, indicative of enhanced kinematics potentially related to current or previous AGN activity.} \label{W80_hist_75} \end{center} \end{figure*} {\footnotesize \begin{table*} \caption{Mean and 75th percentile [OIII] velocity width measurements $\langle$W$_{80}\rangle$ and W$_{80, 75th}$ of all MaNGA galaxies analysed in this work. The column f$_{spx}$ reports the fraction of spaxels with high signal-to-noise [OIII] emission line measurements (S/N~$> 10$). The last two columns report whether the source is identified as a MaNGA-selected AGN (flag1 = 1) or a MaNGA non-AGN (flag1 = 0). If identified as a MaNGA-selected AGN, the flag2 column denotes whether this source is a SF-AGN (flag2 = 2), a LINER-AGN (flag2 = 3) or a Seyfert-AGN (flag2 = 4; see Section 2.2 for more details on the subsample definitions). } \begin{tabular}{|l|r|r|r|r|r|r|r|r|} \hline \multicolumn{1}{|c|}{MaNGA ID} & \multicolumn{1}{c|}{R.A.} & \multicolumn{1}{c|}{Dec.} & \multicolumn{1}{c|}{z} & \multicolumn{1}{c|}{$\langle$W$_{80}\rangle$} & \multicolumn{1}{c|}{W$_{80, 75th}$} & \multicolumn{1}{c|}{f$_{spx}$} & \multicolumn{1}{c|}{flag1} & \multicolumn{1}{c|}{flag2} \\ \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{(degrees)} & \multicolumn{1}{c|}{(degrees)} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{(km/s)} & \multicolumn{1}{c|}{(km/s)} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \hline 1-593159 & 217.629970676 & 52.7071590288 & 0.0825072 & 375 & 419 & 0.26 & 1 & 2\\ 1-592984 & 215.718400019 & 40.6225967971 & 0.0922799 & 198 & 207 & 0.15 & 0 & 0\\ 1-592881 & 214.647824001 & 44.1224317199 & 0.0220107 & & & 0.0 & 0 & 0\\ 1-592881 & 214.647824001 & 44.1224317199 & 0.0362875 & & & 0.0 & 0 & 0\\ 1-592743 & 213.110482013 & 45.6904101161 & 0.0311029 & 224 & 228 & 0.87 & 0 & 0\\ 1-592049 & 206.299565786 & 23.0718639552 & 0.0189259 & & & 0.0 & 0 & 0\\ 1-591474 & 197.580687349 & 47.1240556946 & 0.0441288 & 295 & 325 & 0.04 & 1 & 3\\ 1-591248 & 195.01782 & 28.15501 & 0.0305763 & & & 0.0 & 0 & 0\\ 1-591183 & 194.806227507 & 27.7746020056 & 0.0416588 & & & 0.0 & 0 & 0\\ 1-591068 & 194.181347734 & 27.0347535248 & 0.0314488 & 250 & 254 & 0.14 & 0 & 0\\ 1-591006 & 193.579232526 & 27.0680204421 & 0.13901 & & & 0.0 & 0 & 0\\ 1-590159 & 187.063503724 & 44.4531321941 & 0.0489803 & 464 & 560 & 0.20 & 1 & 2\\ 1-590053 & 186.11866 & 46.01868 & 0.0435213 & 210 & 210 & 0.68 & 0 & 0\\ 1-589908 & 184.55356586 & 44.1732422277 & 0.127436 & 331 & 332 & 0.41 & 0 & 0\\ \hline\end{tabular} \\ Only a portion of this table is shown here to demonstrate its form and content. A machine-readable version of the full table is available as online material. \label{table_measurements} \end{table*} } \subsection{Proportional Comparison} While the `Absolute Kinematic Comparison' presented above is sensitive to sources in which large values of W$_{80}$ are observed, many outflow candidates and kinematically peculiar sources would not be selected this way. In low- and intermediate-luminosity AGN driving outflows, the measured gas velocity widths are often of the order of the velocity widths expected from regular disk rotation \citep{gree05o3, barb09, Fischer_2017, wyle17a}. In such sources, outflowing components can be better identified when broad, blue-shifted components are present in the relevant emission lines, [OIII] in our case.
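Both comparisons rest on the fitting and non-parametric machinery of Sections 3.1 and 3.2, which the following minimal sketch summarises. It assumes a continuum-subtracted rest-frame spectrum (\texttt{wave}, \texttt{flux}, \texttt{err}); the initial guesses and the model-selection threshold are illustrative rather than the exact values used in our pipeline:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

C = 2.998e5                          # speed of light, km/s
OIII_B, OIII_R = 4958.91, 5006.84    # [OIII] air wavelengths, Angstrom

def doublet(wave, *params):
    """[OIII] doublet with n Gaussian kinematic components.
    params = (amp, v, sigma) per component (v, sigma in km/s); both
    transitions share each component's kinematics, and the amplitude
    of the 4960 A line is fixed to 1/2.98 of the 5008 A line."""
    model = np.zeros_like(wave)
    for amp, v, sig in np.reshape(params, (-1, 3)):
        for lam0, f in ((OIII_R, 1.0), (OIII_B, 1.0 / 2.98)):
            mu = lam0 * (1.0 + v / C)
            model += amp * f * np.exp(-0.5 * ((wave - mu) / (sig * lam0 / C)) ** 2)
    return model

def fit_spaxel(wave, flux, err):
    """Fit single- and double-component models; keep the model with the
    fewest components that still gives an acceptable reduced chi^2."""
    fits = {}
    for p0 in ([1.0, 0.0, 150.0], [1.0, 0.0, 150.0, 0.3, -100.0, 400.0]):
        popt, _ = curve_fit(doublet, wave, flux, p0=p0, sigma=err)
        chi2 = np.sum(((flux - doublet(wave, *popt)) / err) ** 2)
        fits[len(p0) // 3] = (popt, chi2 / (wave.size - len(p0)))
    # illustrative threshold: keep the second component only if it
    # clearly improves the reduced chi^2
    return fits[1] if fits[1][1] < 1.2 * fits[2][1] else fits[2]

def nonparametric(popt, v=np.linspace(-2000.0, 2000.0, 4001)):
    """v_med and W80 (Section 3.2): rebuild the 5008 A line profile in
    velocity space from the fitted components and integrate."""
    profile = np.zeros_like(v)
    for amp, vc, sig in np.reshape(popt, (-1, 3)):
        profile += amp * np.exp(-0.5 * ((v - vc) / sig) ** 2)
    cdf = np.cumsum(profile)
    cdf /= cdf[-1]
    v10, vmed, v90 = np.interp([0.1, 0.5, 0.9], cdf, v)
    return vmed, v90 - v10   # for a single Gaussian, W80 = 2.56 sigma
\end{verbatim}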
To quantify the prevalence of such components, we analyse the difference between the velocity width maps provided by the MaNGA Data Analysis Pipeline (based on single-Gaussian fits) and the velocity width maps derived in this work, using the `division maps' that report the ratio of the velocity width derived here to the pipeline velocity width. \begin{figure*} \begin{center} \includegraphics[width = 0.95\textwidth, trim = 4cm 0cm 4cm 1cm, clip= true]{prop_comp.pdf} \caption{Distribution of spaxel fractions for which the ratio between the multi-Gaussian-based and single-Gaussian-based velocity width exceeds a defined threshold $c$. We show the normalised distributions for all MaNGA galaxies (left), non-AGN (center) and MaNGA-selected AGN candidates (right) for a $c$ value of 1.25.} \label{c plots} \end{center} \end{figure*} In order to quantitatively assess the differences between the single-Gaussian-based and multi-Gaussian-based velocity widths, we measure the fraction of spaxels in each galaxy for which the ratio between the multi-Gaussian-based and single-Gaussian-based velocity width exceeds a defined threshold. We define $c$ as the threshold constant and show the distribution of spaxel fractions for $c = 1.25$ in Figure \ref{c plots}. The figure shows the distribution of the fraction of spaxels $F$ per galaxy with $\rm{W}_{80}> 1.25 \cdot \rm{W}_{80, DAP}$. Due to low number statistics, we do not repeat this analysis for the different AGN subsamples. The results of the two-sided KS test comparing the non-AGN and AGN distributions to that of the full MaNGA sample are reported in Table \ref{table_statistics}. The KS test shows very low $p$-values when comparing the AGN distribution to the full MaNGA and/or non-AGN distribution, showing that the kinematic properties of these sources are distinct. Visually inspecting the distributions, we note that indeed most of the difference between the AGN distribution and the overall MaNGA distribution lies in the high spaxel fraction tail. This shows that not only does the distribution of the AGN sample differ from the overall MaNGA distribution, but that MaNGA-selected AGN show on average more spaxels where multiple Gaussian components were needed to describe the [OIII] emission line profile. This trend is further quantitatively validated by introducing the following cut. We select galaxies where at least 25\% of the host galaxy's spaxels demonstrate at least a 25\% increase (corresponding to $c = 1.25$) in their measured W$_{80}$ values compared to W$_{80, DAP}$. A 25\% increase is chosen to reflect areas of significant change, and 25\% of spaxels are required to limit the number of galaxies to a manageable amount while still identifying interesting sources. A total of 273 galaxies pass this cut out of 2778 in total: 36 (11.6\%) of the galaxies in the AGN sample pass this cut, compared to only 237 (9.5\%) of the remaining non-AGN. To test for any potential biases in our analysis, we again draw a sample of 100 randomly selected galaxies from the overall MaNGA galaxy sample and repeat the analysis. A KS test comparing the random distribution with the overall MaNGA distribution shows that the two distributions are statistically indistinguishable. \begin{table*} \caption{Results of the two-sided Kolmogorov-Smirnov test comparing the distributions of the `absolute kinematic comparison' shown in Figure~\ref{W80_hist_mean} and Figure~\ref{W80_hist_75} and the `proportional kinematic comparison' shown in Figure \ref{c plots}.
We report the returned $p$-values when comparing the full MaNGA distribution and the non-AGN distribution to the AGN and AGN subsample distributions. Low $p$-values ($< 0.01$) show that the distributions are significantly different from one another.} \begin{tabular}{ l|c|c|c|c|c|c } \hline Absolute Kinematic Comparison (mean W$_{80}$): & & & & & \\ & AGN & Seyfert-AGN & SF-AGN & LINER-AGN & Random \\ \hline Full MaNGA & 1.0e-23 & 3.4e-12 & 6.8e-10 & 4.7e-7 & 0.47\\ non-AGN & 1.2e-31 & 2.4e-15 & 4.0e-13 & 1.5e-9 & 0.69\\ \hline & & & & & \\ Absolute Kinematic Comparison (75th percentile W$_{80}$): & & & & & \\ & AGN & Seyfert-AGN & SF-AGN & LINER-AGN & Random \\ \hline Full MaNGA & 4.9e-23 & 1.3e-11 & 4.2e-12 & 1.7e-7 & 0.71\\ non-AGN & 9.8e-31 & 2.4e-14 & 9.5e-16 & 5.9e-10 & 0.98\\ \hline & & & & & \\ Proportional Kinematic Comparison: & & & & & \\ & AGN & & & & Random \\ \hline Full MaNGA & 1.5e-4 & & & & 0.30\\ non-AGN & 5.9e-6 & & & & 0.52\\ \hline \end{tabular} \label{table_statistics} \end{table*} \section{Discussion} \subsection{The prevalence of ionised outflow signatures in MaNGA galaxies} With the goal of assessing the prevalence of ionised outflow signatures in MaNGA galaxies and MaNGA-selected AGN, we have shown that MaNGA-selected AGN candidates more frequently show enhanced [OIII] emission line kinematics than non-AGN in MaNGA. The difference in the gas kinematics between AGN and non-AGN galaxies is apparent in a variety of tests. By measuring W$_{80}$, the velocity width of the [OIII] emission line at $5007$\AA, we have first shown that the $\langle$W$_{80}\rangle$ and W$_{80, 75th}$ distributions of the full MaNGA sample are significantly different from those of the MaNGA-selected AGN sample and of the individual AGN subsamples. MaNGA-selected AGN candidates tend to have higher $\langle$W$_{80}\rangle$ and W$_{80, 75th}$, and two to three times as many MaNGA-selected AGN candidates show enhanced [OIII] kinematics. In particular, we observe $2-3$ times as many AGN with a significant fraction of spaxels with $\langle$W$_{80}\rangle > 500$~km/s compared to the non-AGN sample. These gas velocity values are significantly higher than what is expected from regular disk rotation. The typical gas velocity dispersion of SDSS galaxies at $z < 0.15$ and $\log(M_{*}/M_{\odot}) < 11$ is $\lesssim 150$~km/s, corresponding to W$_{80}$ values of $\lesssim 380$~km/s \citep{Thomas_2013, Beifiori_2014, Cicone_2016, Ubler_2019}. This suggests that a large fraction of the high velocity gas detected in the MaNGA-selected AGN is due to non-gravitational motions, potentially due to radiatively or mechanically AGN-driven outflows. Most MaNGA-selected AGN are low-/intermediate-luminosity sources with $L_{[OIII]} \sim 10^{40}$~erg/s \citep{wyle18}. Since the [OIII] luminosity can be used as an indicator of the AGN bolometric luminosity if an AGN is present in the galaxy \citep{heck04,reye08}, this corresponds to a bolometric luminosity $L_{bol, AGN} \sim 10^{43}$~erg/s. Recently, \citet{fior17} have collected AGN wind observations for nearly 100 AGN and shown that the outflow velocity strongly depends on the AGN bolometric luminosity, confirming previous studies. In many intermediate-luminosity AGN, such as the AGN in this work, the velocities of any AGN-driven outflow therefore tend to be low \citep[$v < 500$~km/s, ][]{Lena_2015, Fischer_2017, wyle17a, wyle18b}, and single-Gaussian fits or velocity width measurements alone usually do not capture such low-velocity outflow activity.
But additional kinematic components (such as outflows) often leave an imprint on the overall velocity profile of the relevant emission line ([OIII] in our case) which can be identified by a multi-component analysis of the emission lines. By identifying MaNGA galaxies in which the [OIII] emission line is better described by a two-component fit -- compared to the single-component fit performed by the MaNGA Data Analysis Pipeline -- we have shown that MaNGA-selected AGN more often require a multi-component model to describe their [OIII] emission line profiles, suggesting that broad secondary components are more prevalent in these sources. The number of sources with outflow signatures identified in this paper is likely a lower limit. Since the analysis in this paper only captures sources with high [OIII] velocity widths or [OIII] velocity profiles with clear deviations from a Gaussian profile, weak and physically small outflow signatures below the spatial resolution of the MaNGA survey, such as the ones found in \cite{wyle17a}, would not be identified. A thorough modelling and subtraction of the gas velocity fields, which is beyond the scope of this paper, would be necessary to identify outflows through their residual signatures. Our analysis shows that enhanced gas kinematics are clearly more prevalent in MaNGA-selected AGN than in non-AGN, but what are possible driving mechanisms? In low- and intermediate-luminosity AGN, in which AGN-driven outflows are not expected to reach such high velocities that the AGN can be considered the only possible driver, the effects of stellar feedback and merger-induced gas flows have to be considered as potentially contributing to the observed signatures. To examine the contribution of star formation to the outflow activity, we show the relation between $\langle$W$_{80}\rangle$ and the star formation rate (SFR) in Figure \ref{sfr_w80}. \citet{heck15} have shown that there is a strong correlation between the star formation rates and the outflow velocities of starburst-driven winds, which can be explained by a model of a population of clouds accelerated by the combined forces of gravity and the momentum flux from the starburst. While \citet{heck15} measure the outflow velocity as the flux-weighted line centroid defined relative to the systemic velocity of the galaxy, a different definition (such as a W$_{80}$ measurement) would not affect the qualitative sense of their results. We cross-match the MaNGA catalogue with the MPA-JHU catalogue and use the star formation rates reported there. The SFRs are derived using the 4000 \AA\ break of the SDSS single-fibre spectra, following the method described in \citet{Brinchmann_2004}. We do not observe any significant positive correlation between SFR and $\langle$W$_{80}\rangle$ for either the total MaNGA distribution or the AGN subsample. Especially for the sources with $\langle$W$_{80}\rangle > 500$~km/s, the star formation rates are not high enough to explain the observed high velocity widths. We also test whether there is any correlation between the specific star formation rates and $\langle$W$_{80}\rangle$ and do not observe any. We furthermore visually inspect all MaNGA-selected AGN with $\langle$W$_{80}\rangle > 500$~km/s and find the merger fraction to be $\sim 10$\%. We find a similar fraction for the total MaNGA AGN candidate sample. In this analysis we have included all galaxies in close pairs, interacting galaxies and galaxies with visible merger signatures such as tidal tails.
\citet{rembold17} find an even lower merger fraction in their AGN host galaxy sample of only a few percent. While we cannot fully exclude that starbursts and mergers contribute partially to the enhanced [OIII] kinematics observed here, these results suggest that stellar-driven winds and merger-induced flows are not the dominant reason why we observe high [OIII] velocity widths in the MaNGA-selected AGN candidates with $\langle$W$_{80}\rangle > 500$~km/s. Furthermore, spatially resolved inflows in isolated galaxies are usually associated with low velocity dispersions of a few tens to 100~km/s \citep{Storchi_2019}, so it seems unlikely that the high velocity dispersions observed here are related to any inflows. The signatures are consistent with being due to radiatively or mechanically driven AGN outflows. This type of analysis is more difficult for the AGN outflow candidates that show [OIII] velocity profiles with clear deviations from a Gaussian profile (the ones identified in Section 4.2), but which do not necessarily have $\langle$W$_{80}\rangle > 500$~km/s. In the following section, we assess whether the kinematics of the additional components with which these profiles have been modelled agree with expectations from outflow models. \subsection{Inflow vs. Outflow?} Both types of comparison presented in Sections 4.1 and 4.2 reveal that the [OIII] kinematics of MaNGA-selected AGN are different from those of the non-AGN in MaNGA. In the previous section we have shown that the kinematics of the MaNGA AGN with $\langle$W$_{80}\rangle > 500$~km/s are consistent with being due to radiatively or mechanically driven AGN outflows. In AGN with an enhanced prevalence of secondary kinematic components but not necessarily high [OIII] velocity widths, the kinematic signatures could be due to either outflows or inflows. In this section we determine which scenario is more plausible. We measure the velocity offset $\Delta v$ between $v_{DAP}$ and $v_{med}$, where $v_{med}$ is the median velocity of the [OIII] profile from our multi-Gaussian fit and $v_{DAP}$ is the median velocity from the single-Gaussian DAP fit. For a pure Gaussian profile or a profile with an insignificant secondary component, $v_{DAP} \simeq v_{med}$ and $\Delta v \simeq 0$. In Figure \ref{v_med_figure} we show the distribution of $\Delta v$ values for all MaNGA spaxels that have a velocity width W$_{80}$ greater than the mean W$_{80}$ of their galaxy. This selection ensures that we only consider spaxels where the velocity width is indicative of out- or inflowing components. Figure \ref{v_med_figure} shows that the distribution of $\Delta v$ is slightly skewed towards negative $\Delta v$ values when considering all MaNGA galaxies or only non-AGN in MaNGA. However, the distribution of $\Delta v$ values for the MaNGA-selected AGN candidates is much more heavily skewed towards negative values. Although redshifted emission line components have sometimes been associated with (AGN-driven) outflows \citep[e.g.][]{Mueller_Sanchez_2011, Fischer_2013}, blue-shifted emission line profiles are, because of dust attenuation, a better probe and signature of outflowing gas. We cannot fully exclude that in some galaxies the enhanced prevalence of secondary kinematic components in the [OIII] profile is not due to AGN-driven outflows, but rather due to signatures from mergers, bars or even inflows.
But Figure \ref{v_med_figure} shows that most secondary kinematic components in MaNGA-selected AGN are blue-shifted with respect to the [OIII] profile. This is an indication that we predominantly observe the signatures of outflows. \begin{figure*} \begin{center} \includegraphics[scale = 0.45, trim = 0cm 0cm 0cm 0cm, clip= true]{delta_v.pdf} \caption{Distribution of the [OIII] velocity offsets $\Delta v$, measuring the difference between the Gaussian-measured line-of-sight velocity and the non-parametric line-of-sight velocity as determined from our multi-Gaussian fitting, and indicating whether the emission line is blue- or redshifted with respect to the Gaussian-measured line-of-sight velocity. We show the distributions for all galaxies in MaNGA (upper left), the non-AGN in MaNGA (upper center), the MaNGA-selected AGN candidates (upper right) and the three AGN subsamples (lower row). The distributions of the MaNGA-selected AGN candidates and the three AGN subsamples are skewed towards negative $\Delta v$ values, suggesting that we predominantly observe the signatures of outflows (rather than inflows) in these sources.} \label{v_med_figure} \end{center} \end{figure*} \subsection{Non-AGN galaxies with outflow signatures} While we have shown that $\sim 2-3$ times as many MaNGA-selected AGN show enhanced [OIII] kinematics compared to MaNGA galaxies not selected as AGN, a significant number of MaNGA non-AGN also show enhanced [OIII] kinematics. We visually inspect the spatially resolved MaNGA-BPT maps and the corresponding maps of the H$\alpha$ equivalent width (EW(H$\alpha$)) of all MaNGA non-AGN with $\langle$W$_{80}\rangle > 500$~km/s. All of those show BPT diagnostics consistent with AGN and/or LI(N)ER-like line ratios. However, they exhibit extremely low H$\alpha$ equivalent widths with EW(H$\alpha$)~$ < 3$~\AA. \citet{Cid-Fernandes_2010} have shown that invoking the H$\alpha$ equivalent width allows one to differentiate between the different ionisation mechanisms that lead to the overlap in the LI(N)ER region of traditional diagnostic diagrams. Based on the bimodal distribution of EW(H$\alpha$), \citet{Cid-Fernandes_2010} suggest that EW(H$\alpha$)~$> 3$~\AA\ optimally separates true AGN from `fake' AGN in which the LI(N)ER emission is due to hot evolved stars. As described in detail in \citet{wyle18}, the MaNGA AGN selection takes this additional criterion into account, which is why such sources have not been selected as AGN candidates. We now observe that, in addition to the true AGN-selected candidates, these `fake' AGN make up the high velocity width tail in Figure \ref{W80_hist_mean}. This raises the question of what is driving these kinematic peculiarities. Typical mechanisms that have to be considered when observing [OIII] velocity widths exceeding $500$~km/s are mergers \citep{alag12, harr12} and outflows driven by stellar processes \citep{heck15}. Alternatively, these sources may host AGN that have not been identified as such in the optical selection \citep{wyle18} or relic AGN \citep{Ishibashi15} in which outflow signatures of a previous AGN episode are still imprinted on the gas kinematics. But a visual inspection of the optical images of these sources and of the star formation rates (see Fig. \ref{sfr_w80}) does not suggest that mergers or stellar-driven outflows are a major contributor to the observed kinematic peculiarities. In Figure \ref{sfr_w80} we furthermore mark objects that have been identified as `passive radio sources' (Wylezalek et al. in prep.).
These sources have infrared Vega colors, measured with the \textit{Wide-field Infrared Survey Explorer (WISE)} satellite, of $-0.2 < \rm{W1-W2} < 0.3$ and $0 < \rm{W2-W3} < 2$, indicative of a passive galaxy population and no AGN activity \citep{ster12b}. But these objects have high radio-to-IR flux ratios. Objects of this type, potentially low-luminosity radio AGN, are sometimes associated with late feedback from AGN, which is required to be strong enough to suppress the late cooling flows of hot gas and keep quiescent galaxies red and dead \citep{heck14}. We now find that some of these objects make up the low SFR, high $\langle$W$_{80}\rangle$ tail in Figure \ref{sfr_w80}, suggesting that some of these identified high $\langle$W$_{80}\rangle$ sources are indeed active AGN missed in the optical selection. In Wylezalek et al. (in prep.), we explore in detail the relation between the multi-wavelength properties of AGN identified using various selection criteria and their gas kinematics. \begin{figure} \begin{center} \includegraphics[width = 0.5\textwidth, trim = 1cm 0.2cm 0cm 2cm, clip= true]{sfr_w80.pdf} \caption{Relation between the mean W$_{80}$ measurements $\langle$W$_{80}\rangle$ and the star formation rates in all MaNGA galaxies (blue circles) and in the MaNGA-selected AGN (red diamonds). No significant correlation is observed in either sample, suggesting that stellar-driven winds are not the dominant reason for high $\langle$W$_{80}\rangle$ in either the full MaNGA sample or the MaNGA-selected AGN. Only sources that pass the quality cut described in Section 4.1 are plotted here.} \label{sfr_w80} \end{center} \end{figure} \subsection{AGN with no outflow signatures} It is also notable that many MaNGA-selected AGN candidates in this analysis do not show striking outflow signatures. \citet{Nedelchev17} investigated nearly 10,000 Seyfert galaxies selected from SDSS DR7 to look for cold-gas outflows traced by Na D, finding outflow signatures in only 0.5\% of the population compared to 0.8\% of the control galaxies. They conclude that nearby optical AGN rarely drive kpc-scale cold-gas outflows and that such outflows are not more frequent than in non-AGN. Similarly, \citet{Roberts-Borsani_2018} recently conducted a stacking analysis of the Na D doublet of 240,567 inactive galaxies and 67,753 AGN hosts from the SDSS DR7 survey to probe the prevalence and characteristics of cold galactic-scale flows in local galaxies. They find little variation in either the outflow or the inflow detection rate between the AGN and non-AGN hosts. While this appears somewhat at odds with the findings in this paper, there are several plausible reasons for these different findings. The first and most obvious one is that we may be observing neither the same type of AGN nor the same type of outflows. As described in detail in \citet{wyle18}, the AGN selection utilised in this paper is sensitive to a more nuanced picture of AGN activity and, due to the IFU nature of the MaNGA survey, makes it possible to discover AGN signatures at large distances from the galaxy center. In particular, the selection allows the identification of AGN candidates that have been missed in the previous SDSS single-fibre surveys. Additionally, the analysis in this paper focuses on ionised gas signatures, whereas both studies cited above investigate the prevalence of neutral outflow/inflow signatures in local galaxies and AGN.
While both simulations and observations show that AGN-driven outflows are multiphase phenomena, the actual link between the different gas phases participating in the outflow remains unknown \citep{guil12, rupk13a, Costa_2015, Santoro_2018}, especially because the AGN may ionise significant fractions of the neutral gas reservoir. Therefore, single-phase studies often lead to wrong or incomplete estimates of the prevalence, extent, mass, and energetics of outflows and may therefore lead to misinterpreting their relevance in galaxy evolution \citep{Cicone_2018}. Na~D absorption can, for example, be subject to significant uncertainties because it can be probed only where there is enough background stellar light. Studying molecular outflows and their connection to the ionised and neutral atomic phases in a sample of 45 local galaxies, \citet{Fluetsch_2018} compute the ratio of the molecular to atomic neutral outflow rates $\dot{M}_{H_{2}}/ \dot{M}_{HI}$. They use both Na~D absorption to compute neutral mass outflow rates and the fine-structure line of C$^{+}$, [CII]$\lambda$157.74$\mu$m, which is an alternative way to probe atomic neutral outflows. They find $\dot{M}_{H_{2}}/ \dot{M}_{HI}$ to be an order of magnitude larger when using Na~D absorption as the tracer for $\dot{M}_{HI}$ than when using [CII]. This comparison suggests that outflow detections and outflow mass measurements based on Na~D absorption are likely lower limits and that a multiwavelength, multi-phase assessment would lead to more complete and higher values/rates \citep{Roberts-Borsani_2018}. In this work, we do find a significant difference between the prevalence of ionised outflow signatures in MaNGA-selected AGN and in non-AGN, with larger prevalence fractions than reported in the works cited above. For example, we find that 25/7/2\% of MaNGA-selected AGN candidates show W$_{80} > 500/800/1000$~km/s in at least 10\% of their spaxels (see Section 4.1). However, the outflow prevalence fractions are still quite low. One of the reasons for this is certainly connected to the type of AGN we are probing, which are primarily weak AGN. Theoretical models \citep{zubo12} suggest that AGN need to provide sufficient luminosity to be able to push the gas out of the galactic potential. This `threshold' nature of AGN feedback was recently also suggested by molecular gas \citep{veil13a} and radio observations \citep{zaka14}. Additionally, our observations are only sensitive to outflows on galaxy-wide scales. Small-scale outflows with sizes of a few kpc or less would be missed in this analysis due to the limitations in spatial resolution \citep{wyle17a}. In Figure \ref{oiii_w80}, we show the relation between $\langle$W$_{80}\rangle$ and the total [OIII] luminosity L$_{[OIII]}$ as measured from the MaNGA observations. Although $L_{[OIII]}$ can be affected by extinction, most often from dust in the narrow-line region, [OIII] luminosities are a good indicator of the total bolometric AGN luminosity \citep{reye08, lama10}. We observe a significant positive correlation between $\langle$W$_{80}\rangle$ and $L_{[OIII]}$ for the MaNGA-selected AGN in this work: a Spearman rank test results in a $p$-value of $1.3 \times 10^{-10}$. We do not observe such a significant correlation for the whole MaNGA sample ($p$-value $=0.7$). This result confirms previous observations that AGN luminosity plays a dominant role in the launching and detection of outflows \citep{zaka14}.
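The correlation test used here can be reproduced in a few lines; the arrays below are placeholder stand-ins for the per-galaxy $\langle$W$_{80}\rangle$ and [OIII] luminosity measurements (Table \ref{table_measurements}), not our actual data:
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Placeholder stand-ins for the 159 AGN passing the quality cut
log_loiii = rng.uniform(39.0, 41.0, 159)
w80_mean = 300.0 + 80.0 * (log_loiii - 39.0) + rng.normal(0.0, 50.0, 159)

rho, p = spearmanr(log_loiii, w80_mean)
print(f"rho = {rho:.2f}, p = {p:.2e}")  # p < 0.01 -> significant correlation
\end{verbatim}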
Furthermore, \citet{zubo18} suggests that in galaxies with low gas fractions, typically low redshift galaxies, AGN are fed by intermittent gas reservoirs, and thus the typical AGN episode duration is short \citep[for low-z AGN it is expected to be on the order of $10^5$ yrs, ][]{schawinski15, king15}. Since such host galaxies are mostly devoid of gas, any outflow inflated by the AGN is difficult to detect because it is faint. With the MaNGA survey primarily targeting low redshift, low- and intermediate-luminosity AGN, it is not unexpected that only a small fraction of MaNGA-selected AGN exhibit clear and significant outflow signatures. \begin{figure} \begin{center} \includegraphics[width = 0.5\textwidth, trim = 0cm 0.4cm 0cm 2cm, clip= true]{oiii_w80.pdf} \caption{Relation between the mean W$_{80}$ measurements $\langle$W$_{80}\rangle$ and the total [OIII] luminosities $L_{[OIII]}$ (an indicator of the bolometric AGN luminosity) in all MaNGA galaxies (blue circles) and in the MaNGA-selected AGN (red diamonds). We observe a significant positive correlation between $\langle$W$_{80}\rangle$ and $L_{[OIII]}$ for the MaNGA-selected AGN, suggesting that the AGN luminosity plays a dominant role in the launching and detection of winds. Only sources that pass the quality cut described in Section 4.1 are plotted here.} \label{oiii_w80} \end{center} \end{figure} \subsection{Spatial distribution of the [OIII] velocity widths} We furthermore investigate the spatial distributions of the W$_{80}$ measurements. As above, we limit our analysis to MaNGA galaxies in which at least 10\% of the spaxels have valid [OIII] emission line measurements, i.e. a peak $S/N > 10$. We then measure the mean W$_{80}$ as a function of projected distance from the galaxies' centres in units of effective radii R$_{\rm{eff}}$. By design, all MaNGA galaxies are covered by the MaNGA footprint out to at least 1.5 R$_{\rm{eff}}$. In Figure \ref{radial_profiles}, we show the resulting radial profiles for the AGN and non-AGN samples, split by $\langle$W$_{80}\rangle$ and L$_{\rm{[OIII]}}$, respectively. We observe a strong dependence of the AGN radial profiles on L$_{\rm{[OIII]}}$: AGN with L$_{\rm{[OIII]}} > 2 \times 10^{40}$~erg/s exhibit a steep rise in W$_{80}$ within the inner 0.4~R$_{\rm{eff}}$, indicative of enhanced [OIII] kinematics in the nuclei of luminous MaNGA AGN (Figure \ref{radial_profiles}, top panel). While the less luminous MaNGA AGN candidates do not show this steep rise in the centre, both the low and higher luminosity AGN samples show higher W$_{80}$ measurements at all radii than the non-AGN samples. The non-AGN samples show mostly flat radial W$_{80}$ profiles. We observe a tentative rise at $\rm{R}< 0.4~\rm{R}_{\rm{eff}}$ in the high luminosity non-AGN sample with L$_{\rm{[OIII]}}~>~2~\times~10^{40}$~erg/s, showing that the non-AGN sample includes either AGN that have been missed in the optical selection (see Section 5.2) and/or sources with strong nuclear starbursts driving enhanced [OIII] kinematics. Separating the AGN and non-AGN samples by their overall mean [OIII] velocity width $\langle$W$_{80}\rangle$ (Figure \ref{radial_profiles}, bottom panel), we observe that the radial profiles of both the AGN and non-AGN samples with $\langle$W$_{80}\rangle > 500$~km/s are similar across all radii. Interestingly, neither of these profiles exhibits a significant rise towards the centre. Rather, both profiles are relatively flat out to $\rm{R} = 1.5~\rm{R}_{\rm{eff}}$.
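The measurements entering these profiles are straightforward to reproduce: W$_{80}$ is the velocity interval enclosing the central 80\% of the line flux (i.e. $v_{90}-v_{10}$ of the cumulative flux distribution of the fitted profile), averaged in radial bins. A minimal Python sketch, assuming hypothetical per-spaxel arrays:

\begin{verbatim}
import numpy as np

def w80(velocity, flux_density):
    """Velocity width enclosing the central 80% of the line flux.

    velocity     : velocities (km/s), monotonically increasing
    flux_density : fitted [OIII] model profile on that grid
    """
    cdf = np.cumsum(flux_density)
    cdf /= cdf[-1]                         # normalised cumulative flux
    v10 = np.interp(0.10, cdf, velocity)   # 10th flux percentile
    v90 = np.interp(0.90, cdf, velocity)   # 90th flux percentile
    return v90 - v10                       # = 2.563 sigma for a Gaussian

def radial_profile(r_over_reff, w80_values, bins=np.linspace(0, 1.5, 7)):
    """Mean W80 of valid spaxels in bins of projected R/R_eff."""
    idx = np.digitize(r_over_reff, bins)
    centres = 0.5 * (bins[:-1] + bins[1:])
    means = np.array([w80_values[idx == i].mean() if np.any(idx == i)
                      else np.nan for i in range(1, len(bins))])
    return centres, means
\end{verbatim}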
Visually investigating the [OIII] velocity width maps of these high $\langle$W$_{80}\rangle$ sources, we indeed confirm that the high velocity width signatures span the full MaNGA footprint, indicative of large-scale enhanced [OIII] kinematics, possibly related to galaxy-wide outflows. In these cases the radial extent of the outflows likely exceeds the probed 1.5 R$_{\rm{eff}}$, such that the outflow size probed here can only be regarded as a lower limit. These are not necessarily the most L$_{\rm{[OIII]}}$-luminous sources, as the difference between their radial profiles and those of the AGN and non-AGN samples with L$_{\rm{[OIII]}} > 2 \times 10^{40}$~erg/s shows. While we have shown in Figure \ref{oiii_w80} that L$_{\rm{[OIII]}}$ generally correlates with $\langle$W$_{80}\rangle$, consistent with what has been found in other works \citep[e.g.][]{zaka14}, there are some sources within both the AGN and non-AGN MaNGA samples with low- to intermediate [OIII] luminosities that show evidence of large-scale disturbed [OIII] kinematics. As discussed in Section 5.2 and shown in Figure \ref{sfr_w80}, the `passive radio sources', possibly missed low-luminosity radio AGN, make up a large fraction of the high $\langle$W$_{80}\rangle$ population within the non-AGN sample. The radial velocity width profiles of the AGN and non-AGN samples with $\langle$W$_{80}\rangle < 500$~km/s show significantly lower W$_{80}$ measurements across all radii compared to the $\langle$W$_{80}\rangle > 500$~km/s samples. However, the AGN sample with $\langle$W$_{80}\rangle < 500$~km/s consistently shows higher W$_{80}$ measurements than the corresponding non-AGN sample, with a slight rise of the radial profile at $\rm{R}< 0.4~\rm{R}_{\rm{eff}}$. This confirms our previous observation that the MaNGA-selected AGN in this work show [OIII] kinematics distinct from the non-AGN sample, with a higher prevalence of enhanced [OIII] kinematics. Several attempts have been made in the literature to quantify the size of AGN outflow regions and its relation to AGN power or the galactic potential. However, this proves to be a difficult exercise, as it is unclear what defines the `size' of an outflow. For example, outflow sizes can be defined as the spatial extent of a region above a certain velocity width threshold \citep{sun17} or as the edges of outflowing bubbles \citep[see][and references therein]{Harrison_2018}. Alternatively, outflow models assuming certain geometrical shapes of the outflowing region can be used to infer the maximum extent of the outflowing regions and to deproject velocities. For example, \citet{wyle17a} used a bi-cone model described in \citet{Mueller_Sanchez_2011}, which consists of two symmetrical hollow cones with interior and exterior walls whose apexes coincide with a central point source, to model the outflow in an intermediate-luminosity MaNGA-selected AGN. However, this detailed modelling was only enabled by higher resolution follow-up IFU observations and is not available for all our sources. Here, we quantify the size of the outflow signatures in AGN in the following way. In Figure \ref{radial_profiles_kpc} we show the averaged W$_{\rm{80}}$ radial profiles for the different L$_{\rm{[OIII]}}$ regimes as a function of absolute distance in kpc. Due to the different sizes of MaNGA galaxies, these radial profiles only provide a qualitative measure of the difference in `sizes' of the outflow region.
We observe that in AGN with L$_{\rm{[OIII]}} < 2 \times 10^{40}$~erg/s the averaged radial profile reaches the level of the non-AGN at $\sim 8$~kpc, whereas the radial profile of the AGN with L$_{\rm{[OIII]}} > 2 \times 10^{40}$~erg/s only starts to reach the level of the non-AGN samples at $\sim 15$~kpc. We do not show the radial profiles as a function of distance in kpc for the low and high $\langle$W$_{80}\rangle$ populations (as we did in Figure \ref{radial_profiles}) because the small sample size of the high $\langle$W$_{80}\rangle$ population leads to noisy radial profiles beyond 8~kpc. The average [OIII] luminosities L$_{\rm{[OIII]}}$ of the two AGN samples shown in Figure \ref{radial_profiles_kpc} are $9\times10^{39}$~erg/s and $2\times10^{41}$~erg/s, respectively, while the average W$_{\rm{80}}$ measurements of the two AGN samples are 256~km/s and 335~km/s, respectively. For an outflow with a velocity $v_{gas}$ given by these averaged W$_{\rm{80}}$ measurements, we can estimate the total kinetic energy injection rate over the lifetime $\tau$ of the ionised gas nebulae to be \begin{equation} \dot{E}_{kin} = \frac{E_{kin}}{\tau} = \frac{M_{gas} v_{gas}^2}{2 \tau} \end{equation} where $M_{gas}$ can be estimated from the [OIII] luminosity under the `Case B' assumption \citep{oste06, nesv11}, assuming that [OIII]/H$_{\beta} \sim 10$ \citep{liu13b} and adopting an electron density $n_e = 100~\rm{cm}^{-3}$ \citep{gree11}. The gas mass can then be expressed as $M_{gas} = \left( \frac{L_{[OIII]}}{10^{44} \rm{erg~s^{-1}}}\right) \cdot 2.82 \times 10^9~\rm{M}_{\odot}$ \citep{liu13b}. The lifetime $\tau$ of the episode of AGN wind activity may be estimated as the travel time of clouds to reach the observed distances of 8/15~kpc from the centre traveling with the average outflow velocities $v_{gas}$ quoted above. These calculations yield total kinetic energy injection rates of $\dot{E}_{kin} \sim 2 \times 10^{38}$~erg/s and $\sim 5 \times 10^{39}$~erg/s for the low and high luminosity AGN samples, respectively. Although these calculations provide order-of-magnitude estimates at best, they show that \textit{(i)} even in low/intermediate luminosity AGN, the positive correlation between AGN power and outflow energetics persists, as expected from models \citep{Dempsey_2018} and seen in observations \citep[][and references therein]{fior17}, but that \textit{(ii)} the feedback processes in such low/intermediate luminosity, low redshift AGN probed by MaNGA are unlikely to have a significant impact on the evolution of their host galaxies, i.e. to fully suppress star formation, as the kinetic coupling efficiencies $\dot{E}_{kin}/L_{AGN}$ are $\ll 1$\% and likely even $\ll 0.1$\% for most sources. Although some theoretical models suggest that efficiencies as low as 0.5\% may be sufficient to substantially suppress star formation in the host \citep[e.g.][]{Hopkins_2010}, most models require efficiencies of $\sim 5$\% for feedback to be effective \citep[see][and references therein]{Harrison_2018}. However, our calculations are based on ionised gas observations, which only probe the fraction of the gas that is in the warm ionised phase. This gas is likely in dense clouds that remain largely optically thick to the AGN ionising radiation, and only a thin shell on the surface of these ionisation-bounded clouds produces emission lines \citep{Dempsey_2018}. Therefore, there may be additional outflow components that are not captured by our calculations.
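The arithmetic behind these order-of-magnitude rates follows directly from Eq. (1) and the quoted sample averages; a minimal Python sketch in cgs units:

\begin{verbatim}
import numpy as np

M_SUN, KPC, KMS = 1.989e33, 3.086e21, 1.0e5    # g, cm, cm/s

def e_kin_rate(l_oiii, v_kms, r_kpc):
    """Kinetic energy injection rate (erg/s) from Eq. (1).

    Gas mass from the Case B scaling M_gas = 2.82e9 Msun (L_[OIII]/1e44);
    lifetime tau = travel time to radius r at velocity v.
    """
    m_gas = 2.82e9 * (l_oiii / 1e44) * M_SUN   # g
    v = v_kms * KMS                            # cm/s
    tau = r_kpc * KPC / v                      # s
    return 0.5 * m_gas * v**2 / tau

print(e_kin_rate(9e39, 256, 8))    # low-luminosity sample:  ~2e38 erg/s
print(e_kin_rate(2e41, 335, 15))   # high-luminosity sample: ~5e39 erg/s
\end{verbatim}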
Over the lifetime of the AGN (typically $\sim 10^7$~yr) and depending on the amount of cool gas present in the individual host galaxies, such continuous and ubiquitous energy injection and outflow activity may still heat a fraction of the cool gas and delay or suppress star formation in individual cases \citep{Cheung_2016}. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth, trim = 1.8cm 2.2cm 2cm 2cm, clip= true]{oiii_dist_resolved_2.pdf} \caption{Averaged radial dependence of W$_{80}$ as a function of projected distance from the galaxies' centres in units of effective radii R$_{\rm{eff}}$. \textbf{Top Panel}: Averaged W$_{80}$ radial profiles for the AGN and non-AGN samples, split by L$_{\rm{[OIII]}}$. \textbf{Bottom Panel}: Averaged W$_{80}$ radial profiles for the AGN and non-AGN samples, split by $\langle$W$_{80}\rangle$.} \label{radial_profiles} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth, trim = 1.8cm 0.5cm 2cm 2cm, clip= true]{oiii_dist_resolved_2_kpc.pdf} \caption{Averaged radial dependence of W$_{80}$ as a function of projected distance from the galaxies' centres in units of kpc. We show the radial profiles for the AGN and non-AGN samples, split by L$_{\rm{[OIII]}}$.} \label{radial_profiles_kpc} \end{center} \end{figure} \subsection{Differences between AGN subsamples} In Section 4.1, we analysed the three different AGN subsamples introduced previously. We have shown that the distributions of the mean W$_{80}$ measurements $\langle$W$_{80}\rangle$ of all three AGN subsamples are significantly different from the overall MaNGA distribution and the non-AGN distribution (see Figure \ref{W80_hist_mean} and Table \ref{table_statistics}), with the AGN distributions being more skewed towards higher $\langle$W$_{80}\rangle$ measurements. The same is true when comparing the distributions of the 75th percentile W$_{80}$ measurements W$_{80, 75th}$ (Figure \ref{W80_hist_75}). The Seyfert-AGN and SF-AGN are different from the full MaNGA and non-AGN distributions at a higher significance level (lower $p$-values) than the LINER-AGN distribution in both the $\langle$W$_{80}\rangle$ and the W$_{80, 75th}$ comparisons (Table \ref{table_statistics}). This observation, albeit tentative, might be explained by multiple factors. While the AGN selection method in \citet{wyle18} was developed such that `fake' AGN with LINER-like signatures would not be selected as AGN candidates, the authors noted that some of the LINER-AGN may be sources where the LINER-like photoionization signatures are connected to AGN-unrelated shocks. If indeed some of the selected LINER-AGN are related to shocks, while the shocks themselves may be due to AGN, stellar or merger activity \citep[see also][]{wyle17a}, then one would expect the LINER-AGN sample not to be as `clean' as the SF-AGN and Seyfert-AGN samples; some of the LINER-AGN may in fact not be AGN. Therefore, LINER-AGN may show a lower rate of, or different, outflow signatures. Generally, however, we observe little difference between the [OIII] kinematics of the three AGN subsamples. In particular, the Seyfert-AGN and SF-AGN subsamples show very similar characteristics in their [OIII] kinematics. The exact nature of the SF-AGN remains elusive; various possibilities, including recently turned-off AGN, AGN dominated by a nuclear starburst, and off-nuclear AGN, are currently being investigated (Wylezalek et al. in prep.).
Our kinematic analysis suggests that the rate and nature of weak outflows is similar in both subsamples. \section{Conclusion} In this work we have examined the kinematics of the [OIII]$\lambda 5007$~\AA\ emission line in each spatial element of 2778 low redshift galaxies observed as part of the SDSS-IV MaNGA survey. Specifically, we have developed a customised fitting method that allows us to account for potential secondary kinematic components in the emission line profile, as opposed to the MaNGA pipeline measurements that are based on single-Gaussian fitting. We first model the [OIII] emission line profile using both single and double-Gaussian profiles and evaluate the goodness of the fit based on its $\chi^{2}$ value. We then utilise the non-parametric measurements W$_{80}$ and $v_{med}$ to quantify the width of the emission line profile and the line-of-sight velocity. The main purpose of this work is to assess the prevalence of ionised gas outflow signatures in MaNGA-selected AGN candidates. Since MaNGA-observed AGN tend to be low- to intermediate-luminosity AGN, faint broad secondary components and/or deviations from a simple Gaussian profile need to be carefully assessed. Our strategy to do so is twofold. Only considering well-detected emission lines with a signal-to-noise ratio $S/N > 10$, we first measure the mean $\langle$W$_{80}\rangle$ for each galaxy by averaging over all valid spaxels in that galaxy. We do not consider galaxies in which less than 10\% of the spaxels have valid [OIII] emission line measurements. We also determine the W$_{80}$ value that marks the 75th percentile of each galaxy's W$_{80}$ distribution, W$_{80, 75th}$. Both $\langle$W$_{80}\rangle$ and W$_{80, 75th}$ indicate in an absolute fashion which galaxies show large velocity widths. This type of analysis may miss galaxies with slow/moderate outflows that are imprinted as small deviations from a pure Gaussian [OIII] line profile. We therefore also assess the fractional change between the pipeline velocity width and the [OIII] velocity width derived in this work. Based on these derived quantities, we compare the [OIII] kinematics in the full MaNGA sample to the non-AGN in MaNGA, the MaNGA-selected AGN candidates and three subsamples of the MaNGA-selected AGN candidates. We find the following: \begin{itemize} \item The $\langle$W$_{80}\rangle$ and W$_{80, 75th}$ distributions of the full MaNGA sample are significantly different from the MaNGA-selected AGN sample and from the individual AGN subsamples, with MaNGA-selected AGN candidates tending to show higher $\langle$W$_{80}\rangle$ and W$_{80, 75th}$. \item Two to three times as many MaNGA-selected AGN candidates show enhanced [OIII] kinematics compared to non-AGN, based on determining how many galaxies/AGN show W$_{80} > 500/800/1000$~km/s. \item MaNGA-selected AGN more often require a multi-component model to describe their [OIII] emission line profiles compared to the non-AGN in MaNGA. While this result is more tentative, it suggests an enhanced presence of broad secondary components. \item Comparing the line-of-sight velocities measured in this work to those measured by the MaNGA pipeline, we find that the emission lines are predominantly blue-shifted, suggesting that the kinematic peculiarities observed in this work are indeed related to outflows (rather than inflows).
\item While AGN generally show a higher prevalence of ionised outflow signatures compared to the non-AGN in MaNGA, there are sources not selected as AGN that do show enhanced [OIII] kinematics. These sources do not display an enhanced merger fraction or indications that stellar processes might be driving these outflow indicators. Such sources may host AGN or AGN relics that have not been identified by the optical selection methods used here. A thorough multi-wavelength analysis is required to determine the cause of these enhanced [OIII] kinematics. \item We observe a significant correlation between the [OIII] luminosity and $\langle$W$_{80}\rangle$ in the MaNGA-selected AGN, confirming similar measurements in other works using other AGN samples. Based on these results, it seems that AGN need to provide sufficient luminosity to be able to launch outflows and push the gas out of the galactic potential. Since most AGN in MaNGA are of low/intermediate luminosity, it is therefore no surprise that we detect outflow signatures in only $\sim 25$\% of the MaNGA-selected AGN. \item We find significant differences in the radial extent of broad [OIII] velocity components between the MaNGA-selected AGN and non-AGN sources. Higher luminosity AGN are able to drive larger scale outflows than lower luminosity AGN, in agreement with previous studies. The kinetic coupling efficiencies $\dot{E}_{kin}/L_{AGN}$ in MaNGA-selected AGN, which are predominantly low- and intermediate-luminosity sources, are $\ll 1$\%, which might imply that these AGN are unlikely to have a significant impact on the evolution of their host galaxies. However, these estimates are lower limits, since we are likely missing a large fraction of the outflowing gas that is not in the warm ionised gas phase probed here. Over the lifetime of the AGN, continuous energy injection and outflow activity may still heat a fraction of the cool gas and delay or suppress star formation in individual cases, even when the AGN is weak. \end{itemize} This work shows that ionised outflow signatures are more prevalent in MaNGA-selected AGN than in non-AGN. Much of this work has only been possible due to the added spatial dimension provided by the MaNGA IFU data and shows that outflow and feedback signatures in low-luminosity, low-redshift AGN may previously have been underestimated. \section*{Acknowledgements} A.M.F. acknowledges the support of the NASA Maryland Space Grant Consortium. R.A.R. acknowledges partial financial support from CNPq and FAPERGS. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. \bibliographystyle{mnras}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Introduction} Viruses are relatively simple in structure as they consist of a container and genetic material. The container material ranges from solid-like proteins to fluid-like lipids \cite{Bruinsma2015, Perotti2016}. For the latter type of container, evolutionary pressures have led to the emergence of filamentous spike proteins that protrude out from the container. Viruses with such spike-decorated, fluid-like containers include the coronaviruses, responsible for, e.g., the common cold, SARS-CoV-1, and SARS-CoV-2, as well as other enveloped viruses such as HIV \cite{Einav2019, Ayora-Talavera2018, Laue2021}. The number of spike proteins may indeed vary from virus to virus (for a given type of virus) \cite{Einav2019, Ayora-Talavera2018, Laue2021}. Moreover, the average spike protein density may change from one type of virus to the next. For instance, the average spike protein density is typically two orders of magnitude lower in HIV as compared to coronaviruses \cite{Stano2017a}. Of course, spike protein density is merely one aspect of their characterization. Much work has been done to characterize their binding affinities, and some work has been done to characterize their mechanics \cite{Moreno2022, Kiss2021, Moreira2020, Hu2020, Bosch2003}. In other words, the spike proteins have their own mechanical/conformational degrees of freedom that serve as additional knobs for viruses to optimize their function, which is presumably to replicate. Since the contents of a virus are minimal, in order to replicate, viruses must enter cells to hijack their biological machinery. Viruses enter cells via multiple pathways. Two of the dominant pathways for coronaviruses are membrane fusion and endocytosis \cite{Jackson2022a, Grove2011}. In the former, there exists an appropriate receptor on the cell surface, and an additional player arrives just below/at the cell surface to assist the virus in releasing its genetic contents. In the latter, the cell surface deforms to wrap around the virus and ultimately pinches off, such that the virus enters the cell as a virus-containing vesicle. Clathrin and dynamin are major players in one endocytotic pathway; other pathways involve cytoskeletal filaments, such as actin \cite{Yamauchi2013, Laliberte2011}. Of course, a cell surface contains complex structures with multiple coreceptors interacting with the spikes of the virus~\cite{Ripa2021, Maginnis2018}. An identified receptor of the SARS-CoV-2 virus is ACE2; however, other coreceptors such as membrane rafts~\cite{Ripa2021}, extracellular vimentin \cite{Suprewicz2022, Amraei2022}, and glycans~\cite{Hao2021} also play a role, in endocytosis in particular. A subset of such coreceptors, such as extracellular vimentin, can be filamentous with their own degrees of freedom~\cite{Ramos2020} and can therefore take on a life of their own, if you will, to affect viral uptake by the host cell. Following our initial, mostly experimental study of how extracellular vimentin impacts viral uptake \cite{Suprewicz2022}, here we present a computational model in which {\it both} the cell-like construct and the virus-like construct contain their own filamentous structures protruding from their surfaces, and we study the impact of these additional degrees of freedom on viral wrapping. Indeed, much research has focused on how the size and shape of the virus impact viral wrapping \cite{Shen2019, Shi2011, Chen2016a}, with spherocylindrical objects wrapping (and pinching off) more efficiently than spheres \cite{Vacha2011}.
In addition, a recent genetic algorithm approach demonstrated that spheres with patchy sites that are arranged in lines along the sphere, as opposed to randomly, endocytose more efficiently \cite{Forster2020}. Filamentous structures on viral surfaces have been found to lower energy barriers for binding to the cell \cite{Cao2021a, Stencel-Baerenwald2014}. However, given the presence of filamentous proteins emanating from, or bound to, cell surfaces, how do filamentous objects on the {\it outside} of a cell assist in viral wrapping? To begin to answer the above question, we have organized the manuscript as follows. We first present the computational model with several simplifying assumptions. The first is that the cell surface is modeled as a deformable sheet with bending and stretching energy costs, representing the underlying cell cortex supporting the lipid bilayer. Second, extracellular vimentin is our main filamentous protein candidate; vimentin self-assembles into filamentous, hierarchical structures whose tetramers are six times longer than the thickness of a cell membrane~\cite{ULF2012}. There is some evidence for extracellular vimentin adhering to the cell surface via glycans~\cite{glycans}; however, vimentin may stick directly to the lipid bilayer or to the cell cortex via plectins. Since many of the details regarding how extracellular vimentin interacts with the surfaces of cells have yet to be discovered, we assume that the extracellular filamentous protein structures, or extracellular components, tether directly to the cell surface. Finally, the third simplifying assumption is that the viral container is a deformable shell with elastic interactions, with filamentous spikes emanating from it. Even with such simplifying assumptions, the model is still very rich in its viral wrapping phase space. We then present the results of the modeling after varying the density of the filamentous extracellular components and the filamentous extra-viral components, as well as the mechanics of each component and of the deformable sheet, to explore their implications for viral wrapping. We then conclude with a discussion of the evolutionary pressures on viral structure and mechanics, given our results. \begin{figure*}[t] \includegraphics{Model.pdf} \caption{ { \it Schematic illustration of the components of the coarse-grained computational model} (a) The cell surface is constructed as a network of equilateral triangles (blue particles and edges) (b) Extracellular components are modeled as single polymers with bending and stretching rigidity (c) The virus surface (gray) is composed of particles on a sphere connected to each other via springs, with spikes (red) emanating from the surface (d) Spike proteins are modeled as single polymers with bending and stretching rigidity (e) Combining all the components of the computational model yields a virus with spikes interacting with a cell surface with extracellular components. } \end{figure*} \section*{Computational model and methods} To understand the role of the filamentous structures on the outside of both a cell and a virus in viral uptake, we model viral wrapping via coarse-grained Brownian dynamics simulations. Here are the relevant players, how they are incorporated into the model, and their respective interactions with the other players.\\ \textit{Cell surface}: We model the cell surface as a deformable sheet, as shown in Fig. 1(a).
The sheet is made up of particles, shown in blue, that are connected via springs. Nearest-neighbor particles in the sheet interact via a harmonic spring potential $V_{Spring}=\frac{K^{Cell Surface}_{NN}}{2}\left ( r_{ij} -l_{o} \right )^{2}$, where $K^{Cell Surface}_{NN}$ is the strength of the spring interaction, $r_{ij}$ is the distance between nearest neighbors, and $l_{o}$ is the edge length of the equilateral triangle. Bending rigidity is introduced in the surface with a spring interaction between second nearest neighbors, $V_{Spring}=\frac{K^{Cell Surface}_{SNN}}{2}\left ( r_{ik} -{l_{o}} \sqrt{3} \right )^{2}$, where $K^{Cell Surface}_{SNN}$ is the strength of the second spring interaction, $r_{ik}$ is the distance between second nearest neighbors, and $l_{o}\sqrt{3}$ is the second nearest-neighbor distance. This second nearest-neighbor interaction acts as a brace constraining the distance between the two more distant particles, such that there is an effective hinge between the two triangles spanning the brace; given the additional braces, we can explore the bending cost of the deformable sheet. This interpretation is further supported by an additional soft-core repulsion between particles (detailed below), with the particle size relative to the smallest equilateral triangle size set such that particles cannot pass through this triangle, which suppresses twisting. Moreover, the presence of the filamentous structures on one side of the sheet also helps to minimize twisting.
\begin{table}[H]
\centering
\begin{tabularx}{0.5\textwidth} { >{\raggedright\arraybackslash}X >{\centering\arraybackslash}X >{\raggedleft\arraybackslash}X }
\hline
Parameter & Value & Reference \\
\hline \hline
$K_{NN}^{Cell Surface}$ & $1\,\,pN/nm$ & \\
$K_{SNN}^{Cell Surface}$ & $0/3.5/7/10.5/14\,\, k_{B}T$ & \\
$K_{Spring}^{Virus}$ & $0.05/0.5/1/5/10/50\,\,pN/nm$ & \cite{Kiss2021}\cite{Zeng2017}\cite{DePablo2019} \\
$K_{Spring}^{Spike}$ & $10^{-2}/10^{-1}/10^{0}/10^{1}/10^{2}\,\,pN/nm$ & \cite{Moreira2020} \\
$K_{Spring}^{ECC}$ & $10^{-2}/10^{-1}/10^{0}/10^{1}/10^{2}\,\,pN/nm$ & \cite{Schepers2020rr} \\
$K_{Spring}^{Receptor}$ & $50\,\,pN/nm$ & \cite{Cao2021} \\
$K_{Spring}^{ECC-Receptor}$ & $5\,\,pN/nm$ & \cite{Bai2020} \\
$K_{Bending}^{Receptor}$ & $120\,\,k_{B}T$ & - \\
$K_{Bending}^{ECC}$ & $24\,\,k_{B}T \times 10^{-2}/10^{-1}/10^{0}/10^{1}/10^{2}$ & - \\
$K_{Bending}^{Spike}$ & $24\,\,k_{B}T \times 10^{-2}/10^{-1}/10^{0}/10^{1}/10^{2}$ & - \\
${\epsilon}_{Attractive}^{ECC-Spike}$ & $25\,\,k_{B}T$ & - \\
$K_{Soft-Repulsion}$ & $1\,\,pN/nm$ & - \\
$D$ & $1\,\,\mu m^{2}/s$ & - \\
\hline
\end{tabularx}
\caption{Table of parameters used unless otherwise specified.}
\label{table:1}
\end{table}
Again, we model the cell surface as a deformable sheet. With this assumption, we can also explore local, nontrivial shapes of the underlying cortex, which, in turn, drives the plasma membrane shape. On the other hand, since earlier work demonstrates that the head domain of vimentin can also associate with a plasma membrane~\cite{plasmamembrane}, we will address in the discussion the potential changes if the sheet were instead fluid-like. \\ \textit{Extra-Cellular Components (ECC)}: The filamentous ECCs are semiflexible polymers, also modeled as particles connected with springs, as shown in Fig. 1(b). Each ECC consists of four particles, with the first one connected to a cell surface particle, as shown in Fig. 1(e).
Since the cell surface microenvironment may be random, as opposed to patterned, we place the ECCs randomly on the deformable sheet. While a major candidate ECC we consider is extracellular vimentin, given its increasingly pronounced role in viral infection \cite{Ramos2020}, one can also consider heparan sulfate~\cite{Zhang2020}, proteoglycans, glycolipids \cite{Marsh2006}, or sialic acids \cite{Sieczkarski2002}. Each of these candidates may have different mechanical properties based on the protein's physical properties and the cell type. To model such properties, each of these co-receptors has a bending rigidity given by $V_{Bending}=\frac{K_{Bending}^{ECC}}{2}\left ( \cos (\theta_{lmn})-1 \right )^{2}$, where $\theta_{lmn}$ is the angle between three consecutive particles in a co-receptor and $K_{Bending}^{ECC}$ is the strength of the bending force. We set the equilibrium angle to zero, i.e. a straight polymer, such that there is an energy cost for bending. The stretching energetic cost is governed by $V_{Spring}=\frac{K_{Spring}^{ECC}}{2}\left ( r_{ij} -\sigma_{o} \right )^{2}$, where $K_{Spring}^{ECC}$ is the strength of the spring interaction, $r_{ij}$ is the distance between nearest neighbors, and $\sigma_{o}$ is the diameter of an ECC particle. Finally, we assume that these filamentous extracellular components are bound to the cell cortex just beneath the lipid bilayer \cite{Pinto2022, Liu2021b, Zhang2020}. \\ \textit{Receptor}: A short receptor is placed in the middle of the cell surface; it is constructed in the same way as an ECC but with different values for $K_{Spring}^{Receptor}$ and $K_{Bending}^{Receptor}$, as specified in Table 1. The initial condition is the virus-like particle bound to this short receptor. \textit{Virus Surface}: The deformable virus surface is initialized by generating particles on a sphere. The particles are arranged in a Fibonacci lattice. We then implement a Delaunay triangulation to identify the edges between these particles. All particles on the sphere are connected with a harmonic spring potential $V_{Spring}=\frac{K_{Spring}^{Virus}}{2}\left ( r_{ij} -r_{o} \right )^{2}$, where $K_{Spring}^{Virus}$ is the strength of the spring and $r_{o}$ is the equilibrium distance found by the triangulation. This construction of a deformable, elastic virus surface, ultimately with filamentous structures emanating from it, is a simplifying assumption, as viruses with spike proteins typically consist of fluid-like containers, but it nonetheless provides a starting point for the modeling. \textit{Spikes on Virus:} Spikes are two particles joined by harmonic springs, as shown in Fig. 1(d), which are also joined to the virus surface by another harmonic interaction, as shown in Fig. 1(c). Since spikes can take on multiple configurations \cite{Turonova2020a, Romer2021}, they also have a bending potential, $V_{Bending}=\frac{K_{Bending}^{Spike}}{2}\left ( \cos (\theta_{pqr})-1 \right )^{2}$. The connecting spring potential is given by $V_{Spring}=\frac{K_{Spring}^{Spike}}{2}\left ( r_{ij} -r_{o} \right )^{2}$, where $K_{Spring}^{Spike}$ is the strength of the spring and $r_{o}$ is the equilibrium separation, set by the particle diameter $\sigma_{o}$.
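As an illustration of this construction, a minimal sketch of initializing the virus shell (Python with \texttt{numpy} and \texttt{scipy}; the helper names are ours, and we use the fact that the convex hull of points on a sphere is equivalent to their Delaunay triangulation):

\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

def fibonacci_sphere(n, radius=50.0):
    """n quasi-uniform points on a sphere of given radius (nm)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i       # golden-angle turns
    z = 1.0 - 2.0 * (i + 0.5) / n                # uniform in z
    rho = np.sqrt(1.0 - z**2)
    return radius * np.column_stack(
        [rho * np.cos(phi), rho * np.sin(phi), z])

points = fibonacci_sphere(1000)                  # virus surface particles
hull = ConvexHull(points)                        # triangulates the sphere
edges = {tuple(sorted(pair))
         for tri in hull.simplices
         for pair in zip(tri, np.roll(tri, 1))}
# Equilibrium spring lengths r_o are the initial edge lengths.
rest_length = {e: np.linalg.norm(points[e[0]] - points[e[1]])
               for e in edges}
\end{verbatim}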
We place the spikes on the virus's surface such that the typical distance between two neighboring spikes is between 3-15 nm, the range found in experiments \cite{Kiss2021, Yao2020}.\\ \textit{Spike-receptor interaction}: The virus is connected to the primary receptor via a harmonic spring interaction, $V_{Spring}=\frac{K_{spring}^{Spike-Receptor}}{2}\left ( r_{ij} -\sigma_{o} \right )^{2}$, where $K_{spring}^{Spike-Receptor}$ is the strength of the spring and $\sigma_{o}$ is the equilibrium distance. The receptor is made up of four particles connected via harmonic springs and has some bending rigidity.\\ \textit{Spike-ECC interaction}: Virus spikes and extracellular components may interact via specific binding domains and charge interactions \cite{Stencel-Baerenwald2014, Nguyen2020, Zhang2020}. We model all the possible interactions with a simple attractive potential, given by Eq. 1, which attracts within a finite range with strength $\epsilon_{Attractive}^{ECC-Spike}$. Although virus spikes may have multiple receptor binding domains \cite{Lan2020, Maginnis2018}, we take a conservative approach and allow only one ECC to bind to one spike protein, modeling lock-and-key interactions \cite{Powezka2020}. Thus, our sticky potential only attracts the top ECC particles and top spike particles to each other. Therefore, $r_{ij}$ is the distance between the $i$th top spike particle and the $j$th top ECC particle, and $\sigma_{o}$ is the diameter of the particle, or \begin{equation} V_{LJ}= \begin{cases} 4 \epsilon_{Attractive}^{ECC-Spike} \left [ \left ( \frac{\sigma_{o}}{r_{ij}} \right )^{12} - \left ( \frac{\sigma_{o}}{r_{ij}} \right )^{6} \right ] & r_{ij}\leqslant 2\sigma_{o}\\ 0 & r_{ij}> 2\sigma_{o}. \end{cases} \end{equation} \textit{Soft-core repulsion}: All components of the model have volume exclusion due to a soft-core repulsion given by Eq. 2, where $K_{Soft-Repulsion}$ is the strength of the soft repulsion, $r_{ij}$ is the distance between the centers of two particles, and $\sigma_{o}$ is the diameter of a particle. The repulsive force acts only when the distance between particles is smaller than the particle diameter, or \begin{equation}\ V_{Soft-Repulsion}= \begin{cases} \frac{K_{Soft-Repulsion}}{2}\left ( r_{ij}-\sigma_{o} \right )^{2} & r_{ij}\leqslant \sigma_{o}\\ 0 & r_{ij}> \sigma_{o}. \end{cases} \end{equation} \textit{Method:} We implement Brownian dynamics to quantify the dynamics of the model. The equation of motion is given by Eq. 3 below, where $r_{i}$ is the position of the $i$th particle, $F_{i}^{c}$ is the sum of conservative forces acting on the $i$th particle, and $\xi(t)$ is Gaussian white noise simulating thermal fluctuations, with $\langle \xi_{i}(t) \rangle=0$ and $\langle \xi_{i}(t)\xi_{j}(t') \rangle=\delta_{ij}\delta(t-t')$. The sum of conservative forces on the $i$th particle is $F_{i}^{c}=F_{i}^{Spring}+F_{i}^{Bending}+F_{i}^{LJ}+F_{i}^{Soft-Repulsion}$. We obtain these forces by taking derivatives of the above potentials with respect to the $i$th particle coordinate ($F_{i}=- \frac{\partial V }{\partial r_{i}} $). We integrate the equation of motion with the Euler-Maruyama method, or \begin{equation} \dot{r}_{i}= \mu F_{i}^{c} + \sqrt{2D}\,\xi(t). \end{equation} \begin{figure*}[t] \includegraphics{Fig_2_Oct_5.pdf} \caption{ {\it Optimal coverage of extra-cellular components (co-receptors) is required for maximal wrapping} (a) Simulation snapshots of the virus wrapping as a function of the number of virus spikes and the ECC coreceptor percent of area coverage.
(b) Virus wrapping is defined as the ratio of the virus surface area covered by the cell surface to the entire virus surface area (c) Virus wrapping as a function of percent of ECC coverage: for lower spike numbers there is no appreciable wrapping, but for 200 spikes non-monotonic viral wrapping behaviour emerges. Note that 200 spikes correspond to a spike area density of 6.3 $\times 10^{-3}$ nm$^{-2}$ and 100 $\%$ ECC coverage corresponds to an area density of 5.4 $\times 10^{-3}$ nm$^{-2}$. } \end{figure*} \textit{Scales:} Our Brownian dynamics simulation is coarse-grained. All simulation quantities are normalized via length, time, and force scales. We can convert simulation quantities to biologically relevant quantities via the following definitions: one simulation length unit is defined as 10 nm, one simulation time unit is 1 $\mu$s, and one force unit is $10^{-1}$ pN. All simulation quantities are expressed in terms of these basic units. We run the total simulation with a time step of $10^{-4}$ for $10^{8}$ simulation steps, or 50 ms, recording positions every 25 $\mu$s. The total run time is comparable to the typical viral endocytosis time \cite{Imoto2022, Chanaday2018}. To find the optimal conditions for endocytosis in our system, we vary the densities of the ECCs and virus spikes, the spring strengths and bending rigidities of the ECCs and virus spikes, and the spring strength of the virus. Finally, the bending rigidity of the cell surface can be written in terms of $k_{B}T$. Since the bending rigidity is encoded in the second nearest-neighbor springs, multiplying that spring constant by the area of the triangle made up of those springs, with side $l_{o}\sqrt{3}$, gives the bending rigidity. Converting to a dimensionless bending rigidity $\tilde{K}_{SNN}^{Cell Surface}$ using our normalized length and force scales, we get $K_{bending}^{Cell Surface}=\tilde{K}_{SNN}^{Cell Surface} \times 0.7\, k_{B}T$, which yields the values shown in Table 1 and allows a comparison with measured bending rigidities of the cell membrane including the underlying cell cortex.\\ The simulation box contains up to 8624 particles. Each particle has a diameter of 10 nm. Since the typical size of a virus is tens to hundreds of nanometers while cells are typically tens of micrometers across, the virus interacts with only a small part of the cell surface. We therefore simulate only a small patch of the cell surface, of size 550 nm $\times$ 480.6 nm, made up of 1444 particles. The cell surface boundary is free in our simulations. Each ECC has a length of 40 nm, which is in the range of many cell surface proteins \cite{Patteson2020}. Since the ECC density, or percentage of coverage, is a parameter, the total number of ECC particles varies from 1156 to 5776. The primary receptor has four particles as well. The virus surface has a typical diameter of 100 nm~\cite{Sahin2020, Ke2020} and is made up of 1000 particles. The virus spikes consist of two particles, i.e. they are 20 nm long, which corresponds to many virus types~\cite{Kiss2021, Bosch2003}. The total number of spike particles varies from 50 to 400. The virus spikes and ECCs attract each other only if the distance between their top particles is less than 8.7 nm, which is the interaction range found in experiments~\cite{Xie2020}. We computed ten realizations for each parameter set and averaged them for plotting purposes. Error bars are the standard deviation of the mean.
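To make the integration scheme concrete, a minimal sketch of the overdamped update of Eq. 3 together with a harmonic bond force (Python; a schematic of the scheme under our stated units, not the production code):

\begin{verbatim}
import numpy as np

def harmonic_forces(x, bonds, k, rest):
    """Forces from harmonic bonds, V = (k/2)(|r_ij| - r_o)^2 per bond.

    x     : (N, 3) particle positions
    bonds : list of (i, j) index pairs; rest : equilibrium lengths r_o
    """
    f = np.zeros_like(x)
    for (i, j), r0 in zip(bonds, rest):
        d = x[j] - x[i]
        r = np.linalg.norm(d)
        fij = k * (r - r0) * d / r     # pulls i toward j when stretched
        f[i] += fij
        f[j] -= fij
    return f

def euler_maruyama_step(x, total_force, mu=1.0, D=1.0, dt=1e-4, rng=None):
    """One update of Eq. 3: dx = mu F dt + sqrt(2 D dt) N(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    return x + mu * total_force(x) * dt \
             + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.shape)
\end{verbatim}

In the full model, \texttt{total\_force} would sum the spring, bending, attractive, and soft-core contributions listed above.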
\begin{figure*}[t] \includegraphics{Membrane_Oct_6_New_Final.pdf} \caption{{\it Cell surface rigidity helps generate folds} (a) Bottom view of virus wrapping by the cell surface (b) Cell surface wrapping is defined as the ratio of the cell surface area inside the cylinder to the total cell surface area. Crumple-like wrapping typically leads to more cell surface area inside the cylinder than fold-like wrapping. (c) Cell surface wrapping is minimal at an optimal coreceptor coverage, or area density, which corresponds to folds. Folds use relatively less cell surface area and maximize viral wrapping. (d) The shape index of the cell surface shows that folds have a lower value than crumples. Folds form at optimal cell surface coreceptor percent coverage with non-zero cell surface rigidity. } \end{figure*} \section*{Results} \subsection{Optimal co-receptor percentage of coverage yields maximum wrapping} We first investigate the effects of varying the extracellular component (ECC) coverage and the virus spike density on wrapping. We vary the ECC coverage of the cell surface from 20 percent to full coverage and the number of spikes from 25 to 200. In Fig. 2a, we show typical side views of the final wrapping configurations of the system as a function of the number of virus spikes (horizontal axis) and the percent coverage of extracellular components (vertical axis). We observe that the cell surface wraps poorly for lower spike numbers; in other words, there is little interaction between the virus and the cell. However, for 200 spikes, there is substantial wrapping, so we will focus, for the most part, on this part of the parameter space for the number of spikes. Note that 200 spikes uniformly distributed on the surface of the virus-like particle of radius 50 nm leads to an approximate area density of 6.3 $\times 10^{-3}$ nm$^{-2}$, while an ECC coverage of 100 $\%$ corresponds to an area density of 5.4 $\times 10^{-3}$ nm$^{-2}$. To quantify the wrapping behavior, we define virus wrapping as the ratio of the virus surface area covered by the cell surface to the whole surface area of the virus, as shown in Fig. 2b. To find the surface area covered by the cell surface, we identify all virus spikes adhering to ECCs and then add up their patch areas on the virus surface. The plotted quantity is this fractional area, with unity denoting that the entire viral surface is bound to the cell surface. From Fig. 2c, we observe that viral wrapping does not change appreciably for viruses with fewer than 200 spikes. Given that only one-to-one interactions are allowed between spikes and ECCs, this indicates that having enough ECCs alone does not ensure wrapping by the cell surface; viruses must also have enough spikes to attach to the cell surface. We find non-monotonic behavior of the wrapping as a function of the percent coverage of extracellular components for 200 spikes, as shown in Fig. 2c. Specifically, there is less virus wrapping at low ECC percent coverage (20 $\%$). Viral wrapping increases at medium percent coverage (40 and 60 $\%$), only to decrease again at high ECC percent coverage (80 and 100 $\%$). Since we have the constraint that a spike can only adhere to one ECC at a time, low virus wrapping at low ECC coverage is somewhat expected: with few coreceptors to stick to, such viruses cannot form many attachments.
On the other hand, for higher percent coverages of coreceptors, there is a shielding effect that reduces the viral wrapping. Once the coreceptors near the virus attach to the spikes, faraway coreceptors cannot interact with virus spikes due to volume exclusion. Thus, having many coreceptors does {\it not} lead to higher virus wrapping. Virus wrapping is maximal at intermediate coreceptor density, where virus spikes have enough coreceptors to interact with and there is enough space between spikes not to invoke shielding effects. Thus, we have found an optimal percent coverage of cell surface coreceptors for maximal wrapping. The notion of an optimal percent coverage for maximal wrapping is rather reasonable if one considers an effective cell surface bending rigidity that depends on such a quantity. More specifically, as the ECC percent coverage increases, the sheet stiffens such that it eventually can no longer wrap around the virus. At lower ECC percent coverage, some stiffening of the sheet gives rise to a more coordinated wrapping, which we explore in more detail in the next subsection. \subsection{Cell surface rigidity can drive folds} Next, we investigate the role of the cell surface bending rigidity in viral wrapping. Since there exists heterogeneity in the structure and mechanics of the cell cortex, viruses invading cells may encounter different cellular surface rigidities, which may affect the rate of uptake. In Fig. 3a, we plot the percent coverage of ECCs on the vertical axis and the cell surface bending rigidity on the horizontal axis, showing the typical morphology the cell surface takes during viral uptake. These simulation snapshots are taken from the inside of the cell surface, with the virus on the outside of the cell. At lower and mid ECC percent coverage, we observe that the cell surface exhibits crumples, with more cell surface undulations, for zero bending rigidity. However, we observe fold-like structures of the cell surface for non-zero bending rigidity. Furthermore, we do not observe a significant change in cell surface morphology at high ECC percent coverage, irrespective of cell surface bending rigidity, as the ECC percent coverage presumably dominates the effective bending rigidity. Given that we have two general types of wrapping configurations that the cell surface takes on, crumple-like wrapping and fold-like wrapping, how do we distinguish between the two types? We say ``crumple-like'' because the crumpling is very localized near the virus, unlike the crumpling of a sheet of paper, for example~\cite{crumpling}. Quantifying folds versus crumples for a triangulated mesh is typically an exercise in discrete geometry. Generically, folds have fewer changes in sign of the local Gaussian curvature than crumples, as the latter consist of shorter, randomly oriented creases. Edge effects combined with noise in the local Gaussian curvature in our system make it difficult to quantitatively distinguish between the two types~\cite{Magid}. So, instead, we measure a dimensionless area of the sheet, much like a gyrification index (GI), as well as a cell surface shape index. As for the former, within a given cylindrical region just beneath the virus, a smoother surface has a smaller area, indicating folds, whereas a rougher surface has a larger area, indicating crumples (assuming similar heights). We calculate the cell surface wrapping by taking a circle of radius $R$, centered at the center of mass of the virus.
Starting with the highest spike on the virus that is adhered to an ECC, we construct a cylinder around the virus and identify all the cell surface particles in this cylinder for a given radius. As these particles are part of a triangulated lattice, we obtain the total cell surface area inside the cylinder by adding up all the triangle areas it contains. Here, the cylinder radius $R$ sets how close to or far from the virus we sample the surface area. To arrive at a dimensionless area, we divide the obtained surface area by the total area of the sheet. We cannot take $R$ large compared to the virus size, as there is a finite amount of cell surface; by taking $R$ too small, on the other hand, we may miss some of the wrapped surface. Therefore, we set $R$ to 1.5 times the virus diameter, or 150 nm; the quantification is robust over a range of $R$ (see SI Fig. S1). For calculating the cell surface shape index, we trace the boundary of the cell surface inside the cylinder and determine the perimeter $P$ as well as the cell surface area $A$ inside the cylinder. We then define a dimensionless quantity called the shape index as $\sqrt{A}/P$. With this definition, a hemispherical surface has a shape index of approximately $0.399$, which serves as a lower bound for wrapped configurations. Larger shape indices denote less spherically symmetric wrapping and, therefore, less efficient wrapping in terms of the use of cell surface area. \begin{figure*}[t] \includegraphics{Time_Snap_Oct_6_New.pdf} \caption{ { \it Cell surface folds wrap the virus faster than cell surface crumples} (a) The cell surface crumples around the virus while wrapping at low ECC percent coverage (b) The cell surface creates folds while wrapping the virus at optimal ECC percent coverage. (c) Virus wrapping is fast initially but slows down as fewer ECCs are available to attach; wrapping is eventually highest at optimal ECC percent coverage. (d) The virus velocity in the z-direction shows the highest acceleration with cell surface folds around the virus at optimal ECC percent coverage. The simulation velocity can be converted to a biologically relevant velocity by multiplying by 200 $\mu$m/s. Biologically relevant time units can be obtained by multiplying by 125 $\mu$s, which follows from the time units defined previously. } \end{figure*} In Fig. 3c, we plot the cell surface wrapping as a heat map, where the color bar represents the ratio of the cell surface area near the virus to the total cell surface area. At high ECC percent coverage, the cell surface near the virus does not change significantly with cell surface bending rigidity and comprises less cell surface area. This is due to an effective cell bending rigidity, which arises from volume exclusion among the ECC particles and keeps the cell surface shape near-flat; this effect is more pronounced at high density, as more particles lead to a higher effective bending rigidity. If the cell surface were perfectly flat, the ratio of the area under the circle to the total surface area would be 0.27, which is very near the value we find at higher ECC percent coverage. We also observe that for low and mid ECC densities at zero bending rigidity, it takes more cell surface area to wrap viruses compared to non-zero bending rigidity. From Fig. 3a, we see crumples forming at low-mid ECC coverage with zero bending rigidity and folds forming with non-zero bending rigidity. Therefore, crumples take more cell surface area than folds to wrap the virus.
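Both metrics can be computed directly from the triangulated mesh; a minimal sketch (Python; for simplicity the boundary perimeter is approximated here by the cylinder's circle, whereas our analysis traces the actual mesh boundary):

\begin{verbatim}
import numpy as np

def triangle_areas(points, triangles):
    a = points[triangles[:, 0]]
    b = points[triangles[:, 1]]
    c = points[triangles[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def wrap_metrics(points, triangles, virus_xy, R=150.0):
    """Dimensionless wrapped area and shape index sqrt(A)/P.

    points    : (N, 3) cell surface particle positions (nm)
    triangles : (M, 3) integer array of mesh vertex indices
    virus_xy  : (2,) projected center of mass of the virus
    """
    areas = triangle_areas(points, triangles)
    centroids = points[triangles].mean(axis=1)
    inside = np.linalg.norm(centroids[:, :2] - virus_xy, axis=1) < R
    A = areas[inside].sum()
    gi = A / areas.sum()           # GI-like dimensionless area
    P = 2.0 * np.pi * R            # circle approximation to the boundary
    return gi, np.sqrt(A) / P      # hemisphere gives ~0.399
\end{verbatim}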
The difference is more pronounced at the optimal ECC percent coverage of 40 percent, where crumples use about 10 percent more cell surface area than folds. While a 10 percent difference is not substantial for the uptake of one virus, it may become more significant for multiple viruses, in addition to the usual material that is endocytosed. Furthermore, we plot the cell surface shape index as a function of ECC percent coverage and cell surface bending rigidity in Fig. 3d. Just as with the dimensionless GI, at low or mid ECC percent coverage and zero bending rigidity, the larger shape index (red) indicates crumples. However, for non-zero bending rigidity, the shape index decreases (blue), indicating more efficient wrapping. This is most pronounced at the optimal ECC percent coverage of 40 percent. Note that for the higher ECC percent coverage, the shape index decreases from the crumple value; however, the overall curvature of the cell surface begins to change near the virus to head towards anti-wrapping, if you will. \subsection{Folds wrap faster than crumples} We now explore the dynamics of viral wrapping. From Figs. 4a and 4b, we can see crumple and fold formation, respectively, as the cell surface wraps around the virus. In Fig. 4a, the cell surface has a lower ECC percent coverage (20 percent), leading to crumpling around the virus; in Fig. 4b, the cell surface has the optimal ECC coverage of 40 percent and folds towards the virus. We analyze these configurations by plotting the virus wrapping as a function of simulation time for multiple ECC densities. From Fig. 4c, at lower ECC percent coverage the virus is wrapped slowly, but for higher percent coverage of coreceptors the cell surface wraps quickly initially. To be specific, we observe two regimes of virus wrapping with time: an initial faster regime, before the dashed line in Fig. 4c, where spikes are attaching to many coreceptors, and a slower regime at later times, after the dashed line, when the virus attaches to fewer new coreceptors. At lower coreceptor density, virus spikes do not initially bind many coreceptors. However, as the cell surface crumples, spikes find more coreceptors to bind, increasing virus wrapping with time, though the final wrapping remains low. For higher percent coverage, spikes initially find many coreceptors to bind, but since only one-to-one interactions are allowed, spikes located on the upper side of the virus struggle to find more later, as the shielding effect sets in due to crowding imposed by volume exclusion. Finally, at optimal percent coverage, we see that even though wrapping starts relatively slowly, it catches up as more and more coreceptors come into range of the virus spikes due to the folding of the cell surface, leading to the highest virus wrapping, with crowding effects minimized. We also investigate the virus's engulfing velocity to further understand the role of folds and crumples in viral uptake. We plot the virus velocity in the z-direction, perpendicular to the cell surface (Fig. 4d). Here, we also observe two regimes: an initial fast regime, before the dashed line, where the virus velocity is similar for all densities except the lowest, as in this regime viral spikes are attaching to many coreceptors.
In the second regime, after the dashed line, the virus engulfing velocity is distinct depending on the coreceptor percent coverage. For lower percent coverage, the virus engulfing velocity is slower because of the low availability of coreceptors; with higher percent coverage, the virus velocity increases only slowly with time even though coreceptors are available, because crowding prevents additional virus spikes from attaching to them. For the optimal coreceptor percent coverage, we see an increasing velocity over time as spikes keep finding more coreceptors to attach to. Thus, a cell surface that folds catches the virus faster than one that crumples around it. \begin{figure*}[t] \includegraphics{Streching_New_Final.pdf} \caption{{ \it Changes in ECC and spike stretching strength can lead to crumples (at optimal ECC density).} (a) Simulation snapshot for $K^{spike}_{spring}=10^{0}$ and $K^{ECC}_{spring}=10^{0}$, showing a crumpled cell surface (b) log-log plot: heat map of virus wrapping, showing that low spike and ECC spring strengths give low wrapping compared to high spike and ECC spring strength values (c) and (f) Viral wrapping as a function of time for $K^{Spike}_{spring}=10^{0}$ and $K^{Spike}_{spring}=10^{4}$ for various values of $K^{ECC}_{spring}$ (d) Simulation snapshot for $K^{spike}_{spring}=10^{4}$ and $K^{ECC}_{spring}=10^{4}$, showing a folded cell surface (e) log-log plot: heat map of the cell surface shape index, which is higher at low ECC and spike stretching strengths, indicating crumple formation, and lower at high ECC and spike stretching strengths, pointing to fold formation. Biologically relevant time units can be obtained by multiplying by 125 $\mu$s, which follows from the time units defined previously. For spring constants, the conversion factor is $10^{-2}\,\,pN/nm$.} \end{figure*} \subsection{ECC and spike stretching drives the system from folds to crumples} In this subsection, we investigate the effects of the mechanical properties of the spikes and ECCs on viral wrapping. Indeed, filaments on the cell surface have varied mechanical properties depending on the type of filament. We therefore vary the stretching strengths of the coreceptors and virus spikes and examine their effect on viral uptake. Given this additional mechanical variation, we ask: does the optimal percent coverage of coreceptors always guarantee a fold? To answer this question, we maintain the percent coverage of coreceptors at 40$\%$, which yielded maximal virus wrapping for 200 spikes, as shown in Fig. 2c (for the set of mechanical parameters stated previously). From Figs. 5a and 5d, we observe that changing the stretching strength can drive the cell surface morphology to crumples, even at the optimal percent coverage of coreceptors. In Fig. 5a, the coreceptors and spikes have a low stretching strength of $10^{-2}$ pN/nm. With a low stretching energy cost, spikes and ECCs are more accessible to each other, leading to many attachments between them. Even though a significant number of spikes are connected to coreceptors, the cell surface does not form a fold but a crumple: these low-stretching-strength coreceptors cannot transfer enough force to the cell surface to make a more coordinated fold. On the other hand, in Fig. 5d, the spikes and ECCs have a stretching strength of $10^{2}$ pN/nm, which is enough to generate forces that fold the cell surface around the virus.
Thus, we found that an optimal ECC percent coverage determined for one set of mechanical parameters does not guarantee that efficient virus wrapping can always be achieved. The stretching rigidity of spikes and filaments on the cell surface also constrains the formation of folds. We vary the stretching strength of spikes and coreceptors over four orders of magnitude, as given in Table 1. In Fig. 5b, we plot the heat map (on a log-log scale) of virus wrapping with respect to the ECC and spike stretching strength. We observe two regions: first, a low-wrapping region at low spike stretching strength, which persists irrespective of the coreceptor stretching strength; likewise, for low coreceptor stretching strength we find low wrapping irrespective of the spike stretching strength, as depicted in blue on the heat map. The second region has high virus wrapping for high spike and coreceptor stretching strength, as shown in red. Our findings imply that the coreceptors and spikes must have high stretching strength to achieve high virus wrapping. To identify the folds and crumples qualitatively, we plotted the cell surface shape index in Fig. 5e. We find that for low spike and ECC stretching strength, we measure a higher shape index, consistent with cell surface crumpling around the virus. At higher spike and ECC stretching strength, we measure a lower value of the cell surface shape index, indicating fold formation. These results are consistent with the previously observed shape index behavior in Fig. 3c. We also examine time series to understand how the fold and crumple formation mechanism changes with the stretching strength of spikes and coreceptors. In Fig. 5c, we hold the stretching strength of the spikes constant at the lowest explored value of $10^{-2}$pN/nm and vary the coreceptor stretching strength from $10^{-2}$pN/nm to $10^{2}$pN/nm. We observe that changing the coreceptor stretching strength does not contribute much to the virus wrapping behavior, as wrapping simply increases with time with similar trends for all coreceptor stretching strength values. On the other hand, from Fig. 5f, with a high spike stretching strength of $10^{2}$pN/nm, we see that low coreceptor stretching values are associated with slower wrapping and less total virus wrapping than high coreceptor stretching strength. This is because, with higher stretching strength of the spikes and coreceptors, the cell surface folds, leading to faster wrapping of the virus. We demonstrate that having optimal coreceptor percent coverage is not enough for efficient wrapping via folds. Coreceptors and spikes also require a threshold stretching strength above which the cell surface folds to achieve more wrapping. We also investigate the effects of the bending rigidity of filamentous ECCs and virus spikes. Bending rigidity does not impact the cell surface morphologies as strongly as stretching; see Fig. 2 of the SI. Finally, as for varying viral rigidity, we obtain results consistent with previous work \cite{Shen2019} that softer viruses are harder to wrap than more rigid viruses; see Fig. 3 of the SI. \section*{Discussion} Our study suggests that cells whose surfaces are optimally populated with filamentous protein structures acting as coreceptors are more likely to be infected, as they take up the virus faster and use relatively less cell surface area per individual virus, so that more virus-like particles can be taken up.
At the optimal percent coverage, the cell surface makes folds around the virus, and folds are faster and more efficient at wrapping the virus than crumple-like wrapping. Our study also finds that cell surface bending rigidity helps generate folds, as bending rigidity enhances force transmission across the surface. We also conclude that such an optimal percent coverage does not always ensure fold formation, as changing mechanical parameters, such as the stretching stiffness of the ECC or the virus spikes, can drive crumple-like deformation of the cell surface. There has been much work exploring the role of virus or nanoparticle spikes and how their mechanical properties affect endocytosis. However, these studies treat receptors as sticky particles on the cell surface without any degrees of freedom \cite{Moreno2022, Lin2020, Li2020c, Shen2019, Xia2017, Ding2012, Li2012}. On the other hand, there have also been many studies of how sticky sites embedded in a cell membrane affect endocytosis, though, again, the sticky sites do not have their own degrees of freedom \cite{Zhang2014, Li2021a}. To our knowledge, this work is the first to consider the physical properties of receptors, including density, stretching, and bending energetic costs on both the cell and viral surface, in viral wrapping. In doing so, we identify a key quantity to focus on: an effective stiffness of the cell surface that depends on the density of attached filaments and their own intrinsic mechanics. Further work will be needed to quantify this property in larger sheets without the virus-like particles. Moreover, it is interesting to determine how material on the outside of a cell, both the virus and the extracellular filaments, can reshape the cell cortex in nontrivial ways in terms of folds versus crumples, which then drives shape changes in the connected plasma membrane. Interestingly, earlier work on endocytosis has focused on how the underlying cell cortex can modify the shape of the plasma membrane from spherical wrapping to more cylindrical wrapping in yeast \cite{Zhang2015b}. In this work, we unlock a much broader range of shapes for further study. The earlier experimental finding that extracellular vimentin enhances SARS-CoV-2 uptake is intriguing \cite{Suprewicz2022, Amraei2022}. It turns out that extracellular vimentin helps other viruses and bacteria to enter the cell as well \cite{Ramos2020}. As there exist vimentin-null mice (but not actin-null nor microtubule-null mice)~\cite{vimentinnull}, the fact that viruses and bacteria have evolved to interact with vimentin is no surprise when hijacking the machinery of a cell: in doing so, they interact with a higher-order, optimizing construct instead of an essential, functioning construct, allowing them to replicate and redeploy without dramatically altering cell function. Presumably, this is yet another evolutionary pressure on viruses to optimize their interaction with vimentin on both the outside and inside of cells. Therefore, our focus on constructs outside and/or attached to the cell emphasizes the importance of the microenvironment of a cell, even for endocytosis. The importance of the tumor microenvironment has now become a cornerstone for understanding cancer, with many modeling efforts underway to make predictions \cite{DiVirgilio2018, Lim2018, DePalma2017, Parker2020}.
Given the work here, we now argue that the notion of the microenvironment also has an impact on viral and bacterial infections and will contribute towards our understanding of the variability of health impacts of such infections. While our inspiration here has been extracellular vimentin, glycolipids can bind the protein lectin to form a filamentous-like complex emanating from the cell surface, which can play a role in clathrin-independent endocytosis~\cite{lectin}. Our results point to a potential mechanical role for this complex in endocytosis. We have assumed that the virus-like particle is already attached via a small receptor to the cell surface and quantified viral wrapping. However, it would also be interesting to study how the cellular microenvironment affects the trajectory of a nanoparticle searching for that initial receptor, in terms of search strategy \cite{Marbach2022}. We have also assumed a one-to-one interaction between ECC and spike without any kinetics, i.e., no attachment or detachment rates are considered in this model. Finally, as our cell surface is a deformable sheet to which filamentous structures attach, we will investigate how the nanoparticle enters the cell via a pinch-off mechanism by extending our work to include a cellular fluid membrane. In the earlier work quantifying cylindrical endocytosis in yeast, the proposed pinch-off mechanism is a pearling instability driven by BAR proteins acting on both the cortex and the plasma membrane \cite{Zhang2015b}. As additional morphologies of the cell surface are proposed in mammalian cells \cite{Abouelezz2022, Jin2022, Lanzetti2001}, perhaps a pearling instability or additional mechanisms will be discovered. To make such predictions, the richness of biology must be reflected in analytical or computational modeling in at least some minimal manner. \section*{Acknowledgements} AEP and JMS acknowledge an NSF RAPID grant 2032861. AEP acknowledges NIH R35 GM142963. CDS acknowledges NSF DMR 2217543. SG acknowledges a dissertation graduate fellowship from Syracuse University. The authors also acknowledge the Syracuse University HTC Campus Grid and NSF award ACI-1341006 for providing computing resources.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:intro} Correlations among physical quantities of clusters of galaxies are very useful tools for studying the formation of clusters and cosmological parameters. In particular, the luminosity ($L_{\rm X}$)-temperature ($T$) relation in X-ray clusters has been studied by many authors. Observations show that clusters of galaxies exhibit a correlation of approximately $L_{\rm X} \propto T^3$ (Edge \& Stewart \markcite{es1991}1991; David et al. \markcite{dsj1993}1993; Allen \& Fabian \markcite{fa1998}1998; Markevitch \markcite{m1998}1998; Arnaud \& Evrard \markcite{ae1998}1998). On the other hand, a simple theoretical model predicts $L_{\rm X} \propto T^2$ on the assumptions that (1) the internal structures of clusters of different mass are similar; in particular, the ratio of gas mass to virial mass in the clusters ($f=M_{\rm gas}/M_{\rm vir}$) is constant; (2) all clusters identified at some redshift have the same characteristic density, which scales with the mean density of the universe (e.g. Kaiser \markcite{k1986}1986; Navarro, Frenk, \& White \markcite{nfw1995}1995; Eke, Navarro, \& Frenk \markcite{enf1998}1998). This discrepancy remains one of the most important problems in clusters of galaxies. The discrepancy in the $L_{\rm X}-T$ relation is not easily resolved even if we relax one of these basic assumptions. We show just one example, in which assumption (1) is relaxed. The X-ray luminosity of clusters is approximately given by $L_{\rm X} \propto \rho_0^2 R^3 T^{1/2}$, where $\rho_0$ is the characteristic gas density, and $R$ is the core radius. Thus, the observed relation $L_{\rm X} \propto T^3$ indicates that $\rho_0^2 R^3 \propto T^{5/2}$. If the gravitational matter has the same core radius as the gas, the baryon mass fraction is given by $f\propto \rho_0 R^3/RT \propto \rho_0 R^2 T^{-1}$. If we assume $f\propto T^{\alpha}$, we obtain $\rho_0\propto T^{2-3\alpha}$, $R\propto T^{-1/2+2\alpha}$, $M_{\rm gas}\propto T^{1/2+3\alpha}$, $M_{\rm vir}\propto T^{1/2+2\alpha}$, and the characteristic density of gravitational matter $\rho_{\rm vir} \propto M_{\rm vir}/R^3 \propto T^{2-4\alpha}$. Assuming that $\rho_{\rm vir}$ is constant in the spirit of the above assumption (2) (the so-called recent-formation approximation), we should take $\alpha=1/2$. Thus, this model predicts a correlation of $\rho_0\propto R \propto T^{1/2}$. However, such a correlation has not been found, although many authors have investigated relations among the physical quantities of clusters (e.g. Edge \& Stewart \markcite{es1991}1991; Mohr \& Evrard \markcite{me1997}1997; Arnaud \& Evrard \markcite{ae1998}1998; Mohr, Mathiesen, \& Evrard \markcite{mme1999}1999). It is to be noted that in the spirit of the above assumption (1), it is favorable to use core radii when comparing clusters with different masses, although some previous studies use isophotal radii instead of core radii in the analysis (e.g. Mohr \& Evrard \markcite{me1997}1997). Some other studies use a 'virial' radius, defined as the radius of a sphere of which the mean interior density is proportional to the critical density of the universe at the observed redshift of the cluster ($z\sim 0$). However, these radii are derived from the temperatures of clusters, and are not independent of the temperatures (e.g. Mohr et al. \markcite{mme1999}1999). Moreover, $L_{\rm X}$ is mainly determined by the structure around the core region, which preserves the information of the background universe when the cluster collapsed (e.g.
Navarro, Frenk, \& White \markcite{nfw1997}1997; Salvador-Sol\'{e}, Solanes, \& Manrique \markcite{ssm1998}1998). Thus, we adopt the core radius as the characteristic scale of a cluster. Since most previous works implicitly assumed that clusters form a one-parameter family, the failure of finding the correlations including core radii suggests that clusters form a two-parameter family instead. In this Letter, we reanalyze the observational data of X-ray clusters and study the relations in detail based on the idea of fundamental plane. Originally, the word, 'fundamental plane', represents a relation among effective radius, surface brightness, and velocity dispersion of elliptical and S0 galaxies (e.g. Faber et al. \markcite{fdd1987}1987; Djorgovski \& Davis \markcite{dd1987}1987). In this study, we apply the notion of the fundamental plane to X-ray clusters and discuss relations among $\rho_0$, $R$, and $T$. In \S\ref{sec:data}, results are presented and in \S\ref{sec:dis}, their implications are discussed. Throughout the paper we assume $H_0 = 50\;\rm km\; s^{-1}\; Mpc^{-1}$. \section{Data} \label{sec:data} We use the observational data of the central density, $\rho_0$, core radius, $R$, and temperature, $T$, of 45 clusters in the catalogue of Mohr et al. \markcite{mme1999}(1999). We have confirmed that the results in this section are almost identical to those based on the catalogue of Jones \& Forman \markcite{jf1984}(1984). Mohr et al. \markcite{mme1999}(1999) gathered the temperature data of previous {\em ASCA}, {\em Ginga} and {\em Einstein} observations. On the other hand, they obtained central densities and core radii using {\em ROSAT} data; they fitted surface brightness profiles by the conventional $\beta$ model, \begin{equation} \label{eq:beta} \rho_{\rm gas, 1}(r) = \frac{\rho_1}{[1+(r/R_1)^2]^{3\beta/2}} \:, \end{equation} where $r$ is the distance from the cluster center, and $\rho_1$, $R_1$, and $\beta$ are fitting parameters. If an excess in emission (so-called cooling flow) is seen in the innermost region, Mohr et al. \markcite{mme1999}(1999) fitted this component by an additional $\beta$ model, \begin{equation} \label{eq:beta2} \rho_{\rm gas, 2}(r) = \frac{\rho_2}{[1+(r/R_2)^2]^{3\beta/2}} \:. \end{equation} Since we are interested in global structure of clusters, we use $\rho_1$ and $R_1$ as $\rho_0$ and $R$, respectively. Since Mohr et al. \markcite{mme1999}(1999) presented only $\rho_2$ for the clusters with central excess, we calculate $\rho_1$ by \begin{equation} \label{eq:rho1} \rho_1 = \left(\frac{I_1 R_2}{I_2 R_1}\right)^{1/2}\rho_2 \:, \end{equation} where $I_1$ and $I_2$ are the central surface brightness corresponding to the components (\ref{eq:beta}) and (\ref{eq:beta2}), respectively. Although $R$ and $\beta$ are correlated, each of them was determined exactly enough for our analysis (see Fig.4 in Mohr et al. \markcite{mme1999}[1999]) The data plotted in the $(\log \rho_0, \log R, \log T)$ space are fitted with a plane, \begin{equation} \label{eq:plane} A\log{\rho_0} + B\log{R} + C\log{T} + D = 0 \:. \end{equation} The result of the least square fitting with equal weight for simplicity is $A:B:C=1:1.39:-1.29$. The scatter about the plane is 0.06 dex. This amounts to a scatter of about 15\%, which is a typical observational error. We call the plane 'the fundamental plane', hereafter. The ratio $A:B:C$ is close to $2:3:-2.5$, which is expected when $L_{\rm X} \propto T^3 \propto \rho_0^2 R^3 T^{1/2}$. 
Thus, the observed relation, $L_{\rm X} \propto T^3$, basically corresponds to a cross section of the fundamental plane. In order to study this more closely, we further investigate the distribution of the observational data on the fundamental plane. We fit the data to another plane, \begin{equation} \label{eq:nplane} a\log{\rho_0} + b\log{R} + c\log{T} + d = 0 \:, \end{equation} under the constraint, \begin{equation} \label{eq:const} Aa+Bb+Cc=0 \:. \end{equation} This means that the plane (\ref{eq:nplane}) is perpendicular to the fundamental plane (\ref{eq:plane}). The result is $a:b:c=1:1.18:2.04$. The scatter about the plane is 0.2 dex. We call this plane 'the vertical plane'. For convenience, two unit vectors in the $(\log \rho_0, \log R, \log T)$ space are defined by, \begin{equation} \label{eq:e1} \mbox{\boldmath $e_1$} = \frac{1}{\sqrt{A^2+B^2+C^2}}(A,B,C) = (0.47,0.65,-0.60)\:, \end{equation} \begin{equation} \label{eq:e2} \mbox{\boldmath $e_2$} = \frac{1}{\sqrt{a^2+b^2+c^2}}(a,b,c) = (0.39,0.46,0.80)\:. \end{equation} Moreover, one of the unit vectors perpendicular to both $\mbox{\boldmath $e_1$}$ and $\mbox{\boldmath $e_2$}$ is defined as $\mbox{\boldmath $e_3$} =(0.79,-0.61,-0.039)$. The set of three vectors is one of the bases of the $(\log \rho_0, \log R, \log T)$ space. Thus, the quantities $X=\rho_0^{0.47} R^{0.65} T^{-0.60}$, $Y=\rho_0^{0.39} R^{0.46} T^{0.80}$, and $Z=\rho_0^{0.79} R^{-0.61} T^{-0.039}$ are mutually orthogonal. Figure 1 shows the cross section of the fundamental plane viewed from the $Y$ axis. Figure 2 shows the data on the $(Y,Z)$ plane, i.e., the fundamental plane. As can be seen, a clear correlation exists on the plane, that is, clusters form a band in the $(\log \rho_0, \log R, \log T)$ space. The major axis of the band is the intersection of the fundamental and vertical planes, and a vector along the major axis is proportional to $\mbox{\boldmath $e_3$}$. We refer to this band as the 'fundamental band' hereafter. Note that the line determined by the least-squares method directly from the three-dimensional data is almost parallel to the vector $\mbox{\boldmath $e_3$}$. The vector $\mbox{\boldmath $e_3$}$ implies that \begin{equation} \label{eq:nR} \rho_0 \propto R^{-1.3\pm 0.2} \:, \end{equation} \begin{equation} \label{eq:TR} T \propto R^{0.06\pm 0.1} \propto \rho_0^{-0.05\pm 0.1} \:. \end{equation} Relation (\ref{eq:TR}) indicates that the major axis of the fundamental band is nearly parallel to the $\log \rho_0 - \log R$ plane, i.e., temperature varies very little along the fundamental band. Thus, the observed relation $L_{\rm X}\propto T^3$ should be the correlation along the minor axis of the band on the fundamental plane, as is explicitly shown in the next section. \section{Discussion} \label{sec:dis} The results presented in the previous section demonstrate that the clusters of galaxies are seen to populate a planar distribution in the global parameter space $(\log \rho_0, \log R, \log T)$. Therefore, clusters turn out to be a two-parameter family. The observed relation $L_{\rm X} \propto T^3$ is a cross section of this 'fundamental plane'. Moreover, there is a correlation among the data on the fundamental plane although the dispersion is relatively large. This 'fundamental band' is a newly found correlation between density and radius at fixed temperature.
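For reference, the basis vectors and the band direction can be reconstructed directly from the fitted plane coefficients; a minimal sketch is:
\begin{verbatim}
import numpy as np

e1 = np.array([1.0, 1.39, -1.29]); e1 /= np.linalg.norm(e1)  # fundamental plane
e2 = np.array([1.0, 1.18,  2.04]); e2 /= np.linalg.norm(e2)  # vertical plane
e3 = np.cross(e1, e2)                     # major axis of the fundamental band
print(np.round(e1, 2), np.round(e2, 2), np.round(e3, 2))
# For data rows v = (log rho0, log R, log T), the new coordinates are
# log X = v @ e1,  log Y = v @ e2,  log Z = v @ e3.
\end{verbatim}
This reproduces $\mbox{\boldmath $e_1$}=(0.47,0.65,-0.60)$, $\mbox{\boldmath $e_2$}=(0.39,0.46,0.80)$, and $\mbox{\boldmath $e_3$}=(0.79,-0.61,-0.039)$.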
In order to further investigate the relation between physical quantities and the data distribution in the $(\log \rho_0, \log R, \log T)$ space, we represent $L_{\rm X}$, $M_{\rm gas}$, $M_{\rm vir}$, $f$, and $\rho_{\rm vir}$ by $X$, $Y$, and $Z$, using the obtained relations \begin{equation} \label{eq:n_0} \rho_0 \propto X^{0.47} Y^{0.39} Z^{0.79}\:, \end{equation} \begin{equation} \label{eq:R} R \propto X^{0.65} Y^{0.46} Z^{-0.61}\:, \end{equation} \begin{equation} \label{eq:T} T \propto X^{-0.60} Y^{0.80} Z^{-0.039}\:. \end{equation} The results are \begin{equation} \label{eq:lx} L_{\rm X} \propto \rho_0^2 R^3 T^{1/2} \propto X^{2.6} Y^{2.6} Z^{-0.27}\;, \end{equation} \begin{equation} \label{eq:gas} M_{\rm gas} \propto \rho_0 R^3 \propto X^{2.4} Y^{1.8} Z^{-1.0}\:, \end{equation} \begin{equation} \label{eq:vir} M_{\rm vir} \propto R T \propto X^{0.05} Y^{1.3} Z^{-0.65}\:, \end{equation} \begin{equation} \label{eq:frac} f = M_{\rm gas}/M_{\rm vir} \propto X^{2.4} Y^{0.51} Z^{-0.39}\:, \end{equation} \begin{equation} \label{eq:rho_vir} \rho_{\rm vir} \propto M_{\rm vir} R^{-3} \propto X^{-1.9} Y^{-0.12} Z^{1.2}\:. \end{equation} Strictly speaking, $M_{\rm gas}$ and $M_{\rm vir}$ represent the core masses rather than the masses of the whole cluster. In relation (\ref{eq:vir}), we assume that clusters of galaxies are in dynamical equilibrium. The scatters of $X$, $Y$, and $Z$ are $\Delta \log X = 0.06$, $\Delta \log Y = 0.2$, and $\Delta \log Z = 0.5$, respectively. Thus, $Z$ is the major axis of the fundamental band and is the primary parameter of the data distribution. On the other hand, relation (\ref{eq:T}) indicates that the scatter of $Y$ nearly corresponds to a variation of $T$ because $\Delta \log T = - 0.60\Delta \log X + 0.80\Delta \log Y - 0.039\Delta \log Z$. It can also be shown that a variation of $L_{\rm X}$ is dominated by the scatter of $Y$. Since $Y$ corresponds to the minor axis of the fundamental band, this means that the $L_{\rm X}-T$ relation is well represented by only the secondary parameter $Y$, but not by the primary parameter $Z$. To put it differently, $L_{\rm X} (\propto \rho_0^2 R^3 T^{1/2})$ depends only on $T$, which is consistent with previous findings. The result reflects the fact that a combination of $\rho_0$ and $R$ like $\rho_0^2 R^3$ behaves as a function of $T$ (relation [\ref{eq:nR}]), while $\rho_0$ or $R$ varies almost independently of $T$ (relations [\ref{eq:TR}]). If we safely ignore the scatter of $X$ and $Z$ in relations (\ref{eq:T}) and (\ref{eq:lx}), we obtain $T\propto Y^{0.80}$, and $L_{\rm X} \propto T^{3.3}$. This slope of the $L_{\rm X}-T$ relation is close to the observed one, although slightly larger. On the other hand, $M_{\rm gas}$, $M_{\rm vir}$, and $f$ are not represented by any one of the parameters $X$, $Y$, and $Z$; both $Y$ and $Z$ contribute to their variations. Note that $\rho_{\rm vir}$ is mainly governed by $Z$, as relation (\ref{eq:rho_vir}) shows. The above analysis raises two questions. The first question is why a combination like $\rho_0^2 R^3$ behaves as a function of only $T$, or equivalently why $X$ is nearly constant. In the following arguments, we assume that the scatter of $X$ is due to observational errors, that is, $\Delta \log X$ is essentially zero. The behavior of $f$ may be a clue to the question. Since we allow two parameters, $f$ can be expressed in terms of any two physical parameters.
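These decompositions follow from elementary linear algebra: for any quantity $Q\propto\rho_0^{c_1}R^{c_2}T^{c_3}$, the exponents with respect to $(X,Y,Z)$ are the components of $E\mathbf{c}$, where the rows of $E$ are $\mbox{\boldmath $e_1$}$, $\mbox{\boldmath $e_2$}$, and $\mbox{\boldmath $e_3$}$. A minimal cross-check reads:
\begin{verbatim}
import numpy as np

E = np.array([[0.47,  0.65, -0.600],
              [0.39,  0.46,  0.800],
              [0.79, -0.61, -0.039]])
for name, c in [("L_X",   [2, 3, 0.5]),   # rho0^2 R^3 T^(1/2)
                ("M_gas", [1, 3, 0.0]),   # rho0 R^3
                ("M_vir", [0, 1, 1.0]),   # R T
                ("f",     [1, 2, -1.0])]: # M_gas / M_vir
    print(name, np.round(E @ np.array(c), 2))
\end{verbatim}
which reproduces the exponents quoted in relations (\ref{eq:lx})-(\ref{eq:frac}) up to rounding.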
For example, if we express $f$ in terms of $M_{\rm vir}$ and $\rho_{\rm vir}$, $f$ turns out to be determined by $f\propto M_{\rm vir}^{0.4} \rho_{\rm vir}^{-0.1}$. This means that the baryon fraction in clusters is an increasing function of $M_{\rm vir}$. If we adopt the relation $f\propto M_{\rm vir}^{0.4}$ by hand and ignore $\rho_{\rm vir}^{-0.1}$ hereafter, we obtain $\rho_0^2 R^{3.2} \propto T^{2.8}$ (relations [\ref{eq:lx}]-[\ref{eq:frac}]), which is roughly consistent with the shape of the fundamental plane and the $L_{\rm X}-T$ relation. Such a relation of $f$ may be realized if supernovae in the cluster galaxies heat the intracluster medium. In other words, the behavior of $f$ is likely to originate from the thermal history of clusters of galaxies. The second question is why clusters form a two-parameter family. We think that one natural parameter is $M_{\rm vir}$. As another physically meaningful parameter, we may choose $\rho_{\rm vir}$. Relation (\ref{eq:rho_vir}) implies that $\rho_{\rm vir}$ is not constant, which is inconsistent with the simple theoretical prediction, and that it varies nearly independently of temperature. Since $\rho_{\rm vir}$ is supposed to reflect the critical density of the universe when the cluster, especially around the core region, collapsed, this suggests that present-day clusters consist of objects with a range of collapse redshifts. In a separate paper, we investigate cosmological implications of the results presented in this paper (Fujita \& Takahara \markcite{ft1999}1999). Finally, we show that the results of this paper reproduce the size-temperature relation found by Mohr \& Evrard \markcite{me1997}(1997) and the gas mass-temperature relation found by Mohr et al. \markcite{mme1999}(1999). Since surface brightness profiles of clusters in the envelope region are given by $I(r) \propto \rho_0^2 T^{1/2} R (r/R)^{-3}$ when $\beta=2/3$, the isophotal size, $r=R_{\rm I}$, obeys the relation $R_{\rm I} \propto \rho_0^{2/3} R^{4/3} T^{1/6}$. Eliminating $\rho_0$ by the relation of the fundamental plane, $\rho_0 R^{1.39}\propto T^{1.29}$, we obtain the relation $R_{\rm I}\propto R^{0.41} T^{1.03}\propto Y^{1.0}Z^{-0.3}$. This is consistent with the size-temperature relation $R_{\rm I}\propto T^{0.93}$, although the coefficient $R^{0.4}$ induces scatter of $\lesssim 30$\% for a given $T$. The correlation corresponds to a cross section of the fundamental plane seen slightly inclined from the direction of the $Z$ axis. Next, the consistency with the gas mass-temperature relation is explained as follows: as in Mohr et al. \markcite{mme1999}(1999), let us define $R_{\rm vir, m} \propto T^{1/2}$, $M_{\rm vir, m} \propto T^{3/2}$, and $M_{\rm gas, m}\propto f_{\rm m}\rho_{\rm vir, m} R_{\rm vir, m}^3$, where $R_{\rm vir, m}$, $M_{\rm vir, m}$, and $M_{\rm gas, m}$ are the virial radius, the virial mass, and the gas mass of a cluster, respectively; the index m refers to the quantities for $r<R_{\rm vir, m}$. When $\beta \sim 2/3$, we can show that $f_{\rm m}\propto f$, because $f_{\rm m}\propto M_{\rm gas, m}/M_{\rm vir, m} \propto \rho_0 R^2 R_{\rm vir, m}/M_{\rm vir, m}\propto \rho_0 R^2 T^{-1}\propto f$. Since we find $f\propto M_{\rm vir}^{0.4}\propto (RT)^{0.4}$, and since $\rho_{\rm vir, m}$ is nearly constant by definition, we obtain the relation $M_{\rm gas, m} \propto R^{0.4} T^{1.9}\propto Y^{1.7}Z^{0.3}$. This is consistent with the relation $M_{\rm gas, m}\propto T^{1.98}$ found by Mohr et al. \markcite{mme1999}(1999).
Note that the scatter originating from $R^{0.4}$ is not conspicuous when the observational data are plotted, because of the steepness of the relation ($\propto T^2$). This correlation also corresponds to a cross section of the fundamental plane seen from a direction very close to the $Z$ axis. \acknowledgments This work was supported in part by the JSPS Research Fellowship for Young Scientists.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} How to create, transport, and manipulate spin currents is a central problem in the multidisciplinary field of spintronics.\cite{Zutic04} The key theoretical concepts there include the current-driven spin-transfer torque\cite{Slonczewski96} and the resultant force acting on a domain wall (DW)\cite{Aharonov-Stern} in metallic ferromagnetic/nonmagnetic multilayers, the dissipationless spin currents in paramagnetic spin-orbit coupled systems,\cite{Rashba60} and magnon transport in textured magnetic structures.\cite{Bruno05} A fundamental query behind the issue is how to describe transport magnetic currents.\cite{Rashba05} Conventionally, the charge current is defined by the product of the carrier density and the drift velocity related via the continuity equation. In the case of spin current, the deviation of the spin projection from its equilibrium value plays the role of a charge. Then, the emergence of transport magnetic currents may be expected in a non-equilibrium state as a manifestation of dynamical off-diagonal long range order (ODLRO).\cite{Volovik07} Historically, D\"oring first pointed out that the longitudinal component of the slanted magnetic moment inside the Bloch DW emerges as a consequence of translational motion of the DW.\cite{Doring48} An additional magnetic energy associated with the resultant demagnetization field is interpreted as the kinetic energy of the wall. Recent progress in material synthesis sheds new light on this problem. In a series of magnets belonging to chiral space groups without any rotoinversion symmetry elements, the crystallographic chirality gives rise to the asymmetric Dzyaloshinskii interaction that stabilizes either left-handed or right-handed chiral magnetic structures.\cite{Dzyaloshinskii58} In these chiral helimagnets, a magnetic field applied perpendicular to the helical axis stabilizes a periodic array of DWs with definite spin chirality, forming a kink crystal or chiral soliton lattice.\cite{Kishine_Inoue_Yoshida2005} In this paper, we demonstrate that magnetic transport analogous to the D\"oring effect\cite{Doring48} occurs in the moving kink crystal of chiral helimagnets and serves as an example of dynamical ODLRO in a non-equilibrium state. An essential point is that the kink crystal state has a degeneracy originating from the translational symmetry. Consequently, the transport momentum has the form $M\dot{X}$, where $M$ and $X$ represent the kink mass and the collective coordinate of the kink in the laboratory frame. The kink crystal behaves as a heavy object with the inertial mass $M$. We start with a spin Hamiltonian describing the chiral helimagnet, \begin{eqnarray} {\cal{H}}&=&-J\sum_{<i,j>}\mathbf{S}_i\cdot\mathbf{S}_{j}+\mathbf{D}\cdot\sum_{<i,j>}\mathbf{S}_i\times\mathbf{S}_{j} -\tilde{\mathbf{H}}\cdot\sum_{i}\mathbf{S}_i,\nonumber\\\label{lattH} \end{eqnarray} where the first term represents the ferromagnetic coupling with the strength $J>0$ between the nearest-neighbor spins $\mathbf{S}_i=S(\cos\theta_i,\sin\theta_i\cos\varphi_i,\sin\theta_i\sin\varphi_i)$ and $\mathbf{S}_j$, where $\theta_i$ and $\varphi_i$ denote the local polar coordinates. The second term represents the parity-violating Dzyaloshinskii interaction restricted to the nearest-neighbor pairs of the adjacent ferromagnetic planes, characterized by the mono-axial vector $\mathbf{D}=D\hat{\mathbf{e}}_x$ along a certain crystallographic chiral axis (taken as the $x$-axis).
The third term represents the Zeeman coupling with the magnetic field $\tilde{\mathbf{H}}=2\mu_B H\hat{\mathbf{e}}_y$ applied {\it perpendicular} to the chiral axis. When $H=0$, the long-period incommensurate helimagnetic structure is stabilized with the definite chirality (left-handed or right-handed) fixed by the direction of the mono-axial $\bm{D}$-vector. In the continuum limit, the Hamiltonian density corresponding to the lattice Hamiltonian (\ref{lattH}) is written as \begin{eqnarray} {\cal{H}}&=& {1\over 2} \left({\partial_x\theta}\right)^2 +{1\over 2}\sin^2\theta\left({\partial_x\varphi}\right)^2\nonumber\\ &-&q_0\sin^2\theta\left({\partial_x\varphi}\right)-\beta\sin\theta\cos\varphi,\label{eqn1} \end{eqnarray} where the energy is measured in units of $JS^2$, and $\beta=\tilde{H}/{{J}S}$. The semi-classical spin variable is represented as $ \mathbf{S}= S(\cos\theta, \sin\theta\cos\varphi, \sin\theta\sin\varphi ) $ by using the slowly varying polar angles $\theta(x)$ and $\varphi(x)$ [see Fig.~\ref{fig:CSL}(a)]. The helical pitch for zero field ($\beta=0$) is given by $q_0=\tan^{-1}(D/J)\approx D/J$. Under the transverse field, a regular array of magnetic kinks is formed.\cite{Dzyaloshinskii64} Each kink corresponds to a phase winding in the left-handed ($\Delta\varphi=+2\pi$) or right-handed ($\Delta\varphi=-2\pi$) manner. Since we assume a uniform mono-axial Dzyaloshinskii vector $\bm{D}$, only kinks with positive (left-handed) or negative (right-handed) charge are energetically favored. Kinks with the same charge repel each other, just as in the case of Coulomb repulsion. Thus, the magnetic kink crystal (soliton lattice) is formed, as shown in Figs.~\ref{fig:CSL}(b) and (c). \begin{figure}[h] \includegraphics[width=85mm]{CSLd.eps} \caption{(a) Polar coordinates in the laboratory frame. (b) Formation of the magnetic kink crystal in the chiral helimagnets under the transverse magnetic field, and (c) concomitant phase modulation. In (b), we depict a linear array of the spins along one chiral axis that is ferromagnetically coupled to the neighboring arrays. }\label{fig:CSL} \end{figure} The magnetic kink crystal phase is described by the stationary soliton solution, $\theta=\pi/2$ and $ \cos\left(\varphi_0(x)/ 2\right)={\rm{sn}}({m} x/\kappa), $ where $ m=\sqrt{\beta} $ corresponds to the first breather mass and \lq\lq$\rm{sn}$\rq\rq denotes a Jacobian elliptic function.\cite{Dzyaloshinskii64} The period of the kink crystal is given by $ l_0={2\kappa K(\kappa)}/{\sqrt{\beta}}. $ The elliptic modulus $\kappa$ ($0<\kappa<1$) is determined by the energy minimization condition $ {E(\kappa)/ \kappa}={\pi q_0/ 4m}. $ Here, $K(\kappa)$ and $E(\kappa)$ denote the elliptic integrals of the first and second kind, respectively. Now, we consider the fluctuations around the classical solution and write $ \theta(x)=\pi/2+u(x)$, and $ \varphi(x)=\varphi_0(x)+v(x). $ When we consider only the tangential $\varphi$-mode, our problem is reduced to the one first investigated by Sutherland.\cite{Sutherland73} The $\varphi$-mode has been fully studied in the context of the chiral helimagnet.\cite{Izyumov-Laptev86,Aristov-Luther03} In the present work, however, it is essential to take into account not only the $\varphi$-mode but also the $\theta$-mode in order to discuss the longitudinal magnetic current.
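Before proceeding, we note that the minimization condition above is transcendental but easily solved numerically. A minimal sketch (with hypothetical values of $q_0$ and $\beta$; note that scipy's \texttt{ellipk}/\texttt{ellipe} take the parameter $\kappa^2$ rather than the modulus $\kappa$) reads:
\begin{verbatim}
import numpy as np
from scipy.special import ellipk, ellipe
from scipy.optimize import brentq

q0, beta = 0.2, 0.01            # assumed helical pitch and reduced field
m = np.sqrt(beta)               # first breather mass
f = lambda kap: ellipe(kap**2) / kap - np.pi * q0 / (4 * m)
kappa = brentq(f, 1e-6, 1 - 1e-12)    # solve E(kappa)/kappa = pi*q0/(4m)
l0 = 2 * kappa * ellipk(kappa**2) / np.sqrt(beta)  # kink crystal period
print(f"kappa = {kappa:.4f}, l0 = {l0:.2f}")
\end{verbatim}
Since $E(\kappa)/\kappa$ decreases monotonically to $1$ as $\kappa\to 1$, a solution exists only for $\pi q_0/4m>1$, i.e., below the critical field of the incommensurate-commensurate transition.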
Expanding (\ref{eqn1}) up to $u^2$ and $v^2$, we have $ H=\int dx({\cal{H}}_0+{\cal{H}}_u+{\cal{H}}_v+{\cal{H}}_{\rm{int}}), $ where ${\cal{H}}_0$ gives the classical solution and $ {\cal{H}}_u= u{{\cal{L}}}_u u$, ${\cal{H}}_v= v{{\cal{L}}}_v v, $ where the differential operators are defined by \begin{eqnarray} {\cal{L}}_u&=& -{1\over 2}\partial_x^2-{1\over 2}(\partial_x\varphi_0)^2 +q_0(\partial_x\varphi_0)+{1\over 2}\beta\cos\varphi_0,\\ {\cal{L}}_v&=&-{1\over 2}\partial_x^2+{1\over 2}\beta\cos\varphi_0. \end{eqnarray} The lowest-order coupling between the $u$ and $v$ modes comes from $ {\cal{H}}_{\rm{int}}=-u^2(\partial_x v)^2/2, $ which is neglected here. In the zero-field case, $\beta=0$, we have $ {\cal{L}}_u= -\partial_x^2/2+q_0^2/2$, and $ {\cal{L}}_v=-\partial_x^2/2. $ Therefore we see that the $u$-mode acquires the mass $q_0$ (scaled by $JS^2$), while the $v$-mode becomes massless. This situation naturally arises, because the $v$-mode is a Goldstone mode, but the $u$-mode is not. Even after switching on the perpendicular field, the $u$-mode ($v$-mode) remains massive (massless). From now on, we argue that the massive $\theta$-fluctuations carry the magnetic current. First, we perform the mode expansions, $v(x,t)=\sum_n\eta _n(t)v_n(x)$ and $ u(x,t)=\sum_n\xi _n(t)u_n(x) $, and seek the energy dispersions for the normal vibrational modes, satisfying ${\cal{L}}_u u_n(x)=\lambda_nu_n(x)$, and ${\cal{L}}_v v_n(x)=\rho_n v_n(x)$, respectively. Introducing $\tilde{x}=mx/\kappa$, we have the Schr\"odinger-type equations, \begin{eqnarray} {d^2u_n(x)/ d\tilde{x}^2}&=&[2\kappa^2 {\rm{sn\,}}^2\tilde{x}\nonumber\\ &&\!\!-{\kappa^2}(1+{2\lambda_n/ \beta}) -4+4 \kappa \tau ]u_n(x),\label{eveq:u}\\ {d^2v_n(x)/ d\tilde{x}^2}&=& [2\kappa^2 {\rm{sn\,}}^2\tilde{x}-{\kappa^2} (1+{2\rho_n/ \beta}) ]v_n(x),\label{eveq:v} \end{eqnarray} with $\tau=q_0/m$. In Eq.~(\ref{eveq:u}) we consider the weak-field case corresponding to small $\kappa$, for which ${\rm{dn\,}} x\approx 1$. Now, the equations (\ref{eveq:u}) and (\ref{eveq:v}) reduce to the Jacobi form of the Lam$\acute{\rm{e}}$ equation,\cite{WW} $ {d^2{\Lambda}_{{\alpha}}(x)/ dx^2}=\left\{\ell(\ell+1)\kappa^2{\rm{sn\,}}^2 x+A\right\}{\Lambda}_{{\alpha}}(x),\label{lame} $ with $\ell=1$. It is known that the solution is parameterized by a single continuous complex parameter $\alpha$ as,\cite{Sutherland73,Izyumov-Laptev86} \begin{equation} \displaystyle {\Lambda}_{{\alpha}}(x)=N { \vartheta_4\left(\pi (x-x_0) / 2K \right) \over \vartheta_4\left(\pi x/2K \right) }e^{-i{Q}x} ,\label{LamesolI} \end{equation} where $N$ is a normalizing factor and $\vartheta _i$ ($i=1,2,3,4$) denote the Theta functions\cite{WW} with $Q$ being the Floquet index.\cite{Sutherland73,Izyumov-Laptev86,Aristov-Luther03} The energy dispersion is obtained by determining $A=-\kappa^2(1+\tilde{A})$ as a function of the Floquet index $Q$, which labels eigenstates instead of $n$.
It is known\cite{Sutherland73} that the dispersion consists of two (generally $\ell+1$) bands specified by the acoustic branch $\tilde{A}_1=\kappa ^{\prime 2}/\kappa ^2\,{\rm{sn\,}}^2\left( \alpha ,\kappa ^{\prime }\right) $, $Q_1=\pi \alpha /2KK^{\prime }+Z\left( \alpha ,\kappa ^{\prime }\right) $, and the optical branch $\tilde{A}_2=1/[\kappa ^2{\rm{sn\,}}^2\left( \alpha ,\kappa ^{\prime }\right)] $, $Q_2=\pi \alpha /2KK^{\prime }+Z\left( \alpha ,\kappa ^{\prime }\right) +{\rm{dn\,}}\left( \alpha ,\kappa ^{\prime }\right) {\rm{cn\,}}\left( \alpha ,\kappa ^{\prime }\right) /{\rm{sn\,}}\left( \alpha ,\kappa ^{\prime }\right) $, where $\alpha \in \left( -K^{\prime },K^{\prime }\right] $. Here, $K^{\prime }$ denotes the elliptic integral of the first kind with the complementary modulus $\kappa'=\sqrt{1-\kappa^2}$, and $Z$ denotes the Zeta function.\cite{WW} The complex parameter $x_0$ in Eq.~(\ref{LamesolI}) is given by $i\alpha +K$ and $i\alpha $ for the acoustic and optical branches, respectively. We have the acoustic branch, $ 0\leq \tilde{A}_1 <{\kappa'^2/ \kappa^2}$, for $ 0\leq |{Q}_1| \leq {\pi/ 2K} $, and the optical branch, ${1/ \kappa^2}\leq \tilde{A}_2 <\infty$, for ${\pi/ 2K}\leq |{Q}_2|$. The energy gap $\Delta =1$ opens at $|Q|={\pi/ 2K}$. We present the dispersions $\omega_Q$ in Fig.~\ref{fig:dispersion}, where the gapless acoustic $\sqrt{\mu_BHS}\tilde{A}_1^{1/2}$ and optical $\sqrt{\mu_BHS}\tilde{A}_2^{1/2}$ bands of $\varphi$-excitations are depicted together with the gapped acoustic $\sqrt{\mu_BHS} [\tilde{A}_1+4 \tau/\kappa -4/\kappa^2]^{1/2}$ and optical $\sqrt{\mu_BHS} [ \tilde{A}_2+4 \tau/\kappa -4/\kappa^2]^{1/2}$ bands of $\theta$-excitations. \begin{figure} \includegraphics[width=70mm]{excitat.eps} \caption{ The energy dispersions of the eigen modes for (a) the tangential $\varphi$-fluctuations ($\omega_Q=\sqrt{\rho}$) and (b) the longitudinal $\theta$-fluctuations ($\omega_Q=\sqrt{\lambda}$). } \label{fig:dispersion} \end{figure} Next we consider the collective dynamics of the kink crystal. For this purpose, we carry out the canonical formulation by using the collective coordinate method.\cite{Christ-Lee75} We start with the corresponding Lagrangian, \begin{eqnarray} {{L}}=\int dx \left[ S( 1-\cos \theta) \varphi _t-{\cal{V}}\left( \theta,\varphi \right)\right] , \label{L0} \end{eqnarray} where the Berry phase term is taken into account. The mode expansion leads to the vibrational term, ${\cal{V}}\left[ \theta ,\varphi \right] =\sum_n\lambda _n\xi _n^2+\sum_n\rho _n\eta _n^2$. Elevating the position of the kink center $X$ to a dynamical variable, we write the solution in the form \begin{eqnarray} \left. \begin{array}{c} \varphi =\varphi _0\left[ x-X(t)\right] +\sum_{n=1}^\infty \eta _n(t)v_n\left( x-X(t)\right) , \\ \theta =\pi /2+\sum_{n=1}^\infty \xi _n(t)u_n\left( x-X(t)\right). \end{array} \right\} \end{eqnarray} Plugging these expressions into the Lagrangian (\ref{L0}), we have $ L=-S\sum_n\dot{\eta}_n(t)K_{1n}+S\dot{X}\sum_nK_{2n}\xi _n(t) -S\sum_{n,m}K_{3nm}\xi _n(t)\dot{\eta}_m(t) -\sum_n\lambda _n\xi_n^2-\sum_n\rho _n\eta _n^2, $ with the coefficients $ K_{1n}=\int dxv_n\left( x\right) $, $ K_{2n}=\int dx\left({\partial \varphi _0}/{\partial x}\right) u_n\left( x\right)$, and $ K_{3nm}=\int dxu_m\left( x\right) v_n\left( x\right) . $ This Lagrangian is {\it singular} because the determinant of the matrix of second derivatives of the Lagrangian with respect to the velocities (the Hessian) turns out to be zero.
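As an aside, the two branches are easy to evaluate numerically; the following minimal sketch (with an assumed modulus $\kappa=0.6$; the Zeta function is built from scipy's incomplete elliptic integrals as $Z(u,m)=E({\rm am}(u),m)-uE(m)/K(m)$) traces them out:
\begin{verbatim}
import numpy as np
from scipy.special import ellipj, ellipk, ellipe, ellipeinc

kappa = 0.6                          # assumed elliptic modulus
kp2 = 1 - kappa**2                   # parameter for the modulus kappa'
K, Kp = ellipk(kappa**2), ellipk(kp2)

def jacobi_zeta(u, m):
    sn, cn, dn, am = ellipj(u, m)
    return ellipeinc(am, m) - u * ellipe(m) / ellipk(m)

alpha = np.linspace(1e-3, Kp, 200)
sn, cn, dn, _ = ellipj(alpha, kp2)
A1 = kp2 / kappa**2 * sn**2                          # acoustic branch
Q1 = np.pi * alpha / (2 * K * Kp) + jacobi_zeta(alpha, kp2)
A2 = 1.0 / (kappa**2 * sn**2)                        # optical branch
Q2 = Q1 + dn * cn / sn
print(Q1[-1] * 2 * K / np.pi)        # -> 1, i.e. the band edge |Q| = pi/2K
\end{verbatim}
At $\alpha=K'$ the acoustic branch indeed terminates at $|Q_1|=\pi/2K$ with $\tilde{A}_1=\kappa'^2/\kappa^2$, as stated above. We now return to the collective dynamics.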
Since the Hessian vanishes, we need to construct the Hamiltonian by using Dirac's prescription for constrained Hamiltonian systems. The canonical momenta conjugate to the coordinates $X$, $\xi _n$, and $\eta _n$, i.e., $ p_1={\partial L}/{\partial \dot{X}}=S\sum_nK_{2n}\xi _n, $ $ p_{2n}={\partial L}/{\partial \dot{\xi}_n}=0, $ $ p_{3n}={\partial L}/{\partial \dot{\eta}_n}=-SK_{1n}-S\sum_mK_{3mn}\xi _m, $ lead to the extended Hamiltonian, $ H^{*}=p_1\dot{X}+\sum_np_{2n}\dot{\xi}_n+\sum_np_{3n}\dot{\eta}_n-L $ with a set of primary constraints, \begin{eqnarray} \left. \begin{array}{c} \Phi _1^{(1)}=p_1-S \sum_n K_{2n}\xi _n=0, \\ \Phi _{2n}^{(1)}=p_{2n}=0, \\ \Phi _{3n}^{(1)}=p_{3n}+SK_{1n}+S\sum_mK_{3mn}\xi _m=0. \end{array} \right\} \end{eqnarray} Because of the lack of primary expressible velocities, the Hamiltonian with the imposed constraints \[ H^{(1)}=\Phi _1^{(1)}\dot{X}+\sum_n\Phi _{2n}^{(1)}\dot{\xi}_n+\sum_n\Phi _{3n}^{(1)}\dot{\eta}_n+H_{ph}, \] coincides with $H^{*}$, where $H _{ph}=\sum_n\lambda _n\xi _n^2+\sum_n\rho _n\eta _n^2$. It governs the equations of motion of the constrained system, i.e., the constraints hold at all times. This leads to a set of dynamical equations, \begin{eqnarray} \left. \begin{array}{c} \sum_nK_{2n}\dot{\xi}_n=0, \label{me1}\\ -2\lambda _n\xi _n+\dot{X}SK_{2n}-S\sum_mK_{3nm}\dot{\eta}_m=0, \label{me2}\\ -2\rho _n\eta _n+S\sum_mK_{3mn}\dot{\xi}_m=0, \label{me3} \end{array} \right\} \label{c2} \end{eqnarray} which give $\dot{\xi}_n=0$ and $\eta_n=0$. Requiring the secondary constraints $\Phi _{n}^{(2)}=\eta_n=0$ to be conserved in time, we obtain $\dot{\eta}_n=0$. Together with the second constraint in (\ref{c2}), this yields $ \xi _n=({SK_{2n}}/{2\lambda _n})\dot{X}, $ and we reach the final form of the physical Hamiltonian, $ H_{ph}=p_1^2/2M, $ where $p_1=M\dot{X}$ involves the soliton mass \begin{equation} M=S^2\sum_n\frac{K_{2n}^2}{2\lambda _n}. \label{mass} \end{equation} Now we are ready to define the longitudinal spin current. We start with the linear momentum carried by the kink crystal, \begin{equation} P=S\int_0^{L_0}\left( 1-\cos \theta \right) \varphi _x\,dx, \label{macroP} \end{equation} where $L_0$ is the system size. By using $\theta =\pi /2+u$ and $\varphi =\varphi _0$, for a steady current, we obtain \begin{eqnarray} P&\approx& S\left[ \varphi _0({L_0})-\varphi _0(0)\right] +S\int_0^{L_0}u(x)\frac{\partial \varphi _0}{\partial x}\,dx\nonumber\\ &=& 2\pi {\cal{Q}} S+S\sum\limits_n\xi _nK_{2n}. \end{eqnarray} Here we introduced the topological charge, ${\cal{Q}}=[\varphi _0(L_0)-\varphi _0(0)]/2\pi$. Using the result $ \xi _n=({SK_{2n}}/{2\lambda _n})\dot{X} $ and Eq.~(\ref{mass}), we obtain the important formula \begin{eqnarray} P= 2\pi S {\cal{Q}} +M\dot{X}, \label{momentum} \end{eqnarray} which plays an essential role in this paper. The first term is associated with the equilibrium background momentum, and the second one corresponds to the transport current carried by the $\theta $-fluctuations. The transverse magnetic field increases the period of the kink crystal lattice and diminishes the topological charge ${\cal{Q}}$; therefore it affects only the background linear momentum. The physical momentum, related to mass transport due to the excitations around the kink crystal state, is generated by the steady movement. The \lq\lq superfluid mass current\rq\rq \,is accompanied by the \lq\lq superfluid magnetic current\rq\rq \,transferred by the $\theta $-fluctuations.
It is determined through the definition of the magnetic density,\cite{Volovik07} $ {\cal{N}}=S( 1 - \cos\theta). $ By using $\theta =\pi /2+u(x,t)$, we have $ {\partial \cal{N}}/{\partial t}=S\sin \theta \,{\partial \theta }/{\partial t}\approx S{\partial u}/{\partial t}, $ with $ u(x,t)=\sum_n\xi _n(t)u_n\left[ x-X(t)\right]. $ Therefore, for a steady current, we obtain the continuity equation \begin{eqnarray} \frac{\partial \cal{N}}{\partial t}=-\dot{X}^2\frac \partial {\partial x}\left( \sum_n\frac{S^2K_{2n}}{2\lambda _n}u_n\right) =-\frac{\partial j^x}{\partial x}, \end{eqnarray} where we introduced the time-even magnon current carried by the $\theta $-fluctuations, \begin{eqnarray} j^x=S^2\dot{X}^2\sum_n\frac{K_{2n}}{2\lambda _n}u_n.\label{lc} \end{eqnarray} Here we used the fact that $\dot{\xi}_n=0$ because of the constraint. The time evenness is manifested by the appearance of $\dot{X}^2$ rather than $\dot{X}$. The important point to note is that {\it only the massive $\theta$-mode can carry the longitudinal magnon current, as a manifestation of ordering in a non-equilibrium state, i.e., dynamical off-diagonal long range order. } The final stage is to explicitly compute (\ref{lc}). We can prove exactly that the optical branch does not contribute to the magnetic current because of orthogonality (the proof will be detailed in a later paper). After a lengthy but straightforward manipulation, the contribution of the acoustic branch is obtained as a function of $\tilde{x}=mx/\kappa$, \begin{eqnarray} j^x(\tilde{x}) &=& \frac{8E^2(\kappa ){\dot{X}^2}}{J\pi ^2q_0^{2}\left( 4E(\kappa )/\pi -1\right) }\;{\rm{dn\,}}\left(\tilde{x}\right) \approx \frac{2{\dot{X}^2}}{Jq_0^{2}}\;{\rm{dn\,}}\left(\tilde{x}\right) ,\nonumber\\ \label{AcSpCur} \end{eqnarray} for the weak-field case corresponding to small $\kappa$, for which $E(\kappa )\approx \pi /2$. On the other hand, the background spin current\cite{Heurich03} is shown to become $j_{\rm{bg}}(x)\approx\partial \varphi_0(x)/\partial x-q_0 \propto {\rm{dn\,}}(\tilde{x})-{2} E(\kappa)/\pi$. We stress that the physical meaning of $j_{\rm{bg}}$ is completely different from that of the current described by Eq.~(\ref{AcSpCur}) in the {\it non-equilibrium} state.\cite{Xiao} We present a schematic view of an instantaneous distribution of spins in the current-carrying state in Fig.~\ref{Volovik}(a). In Fig.~\ref{Volovik}(b), we present a snapshot of the position dependence of the current density $j^x(\tilde{x})=j^x_{\rm ac}(\tilde{x})$ in the weak-field limit, given by Eq.~(\ref{AcSpCur}). \begin{figure}[h] \includegraphics[width=75mm]{Volovikc.eps} \caption{(a) A schematic view of an instantaneous distribution of spins in the current-carrying state. This picture corresponds to the case of intermediate field strength. (b) A snapshot of the position dependence of the current density $j^x(\tilde{x})$ in the weak-field limit, treated exactly in this paper. $j^x(\tilde{x})$ is scaled by its maximum $j_{\rm{max}}^x=j^x(0)$. } \label{Volovik} \end{figure} In realizing the bulk magnetic current proposed here, a single crystal of a chiral magnet serves as a spintronics device. The mechanism involves no spin-orbit coupling, and the effect is not hindered by dephasing. Finally, we propose possible experimental methods to trigger the spin current considered here.
{\it Spin torque mechanism}: the spin-polarized electric current can exert a torque on ferromagnetic moments through direct transfer of spin angular momentum.\cite{Slonczewski96} This effect, related to the Aharonov-Stern effect\cite{Aharonov-Stern} for the classical motion of a magnetic moment in an inhomogeneous magnetic field, can excite the sliding motion of the kink crystal by injecting the spin-polarized current (polarized electron beam) in a direction either perpendicular or oblique to the chiral axis. The spin current transported by the soliton lattice may amplify the spin current of the injected carriers. {\it XMCD}: to detect the longitudinal magnetic currents accompanied by the dynamical ODLRO, x-ray magnetic circular dichroism (XMCD) may be used. The photon angular momentum may be aligned either parallel or anti-parallel to the direction of the longitudinal net magnetization. {\it Ultrasound}: a further possibility to control and detect the spin current is to use a coupling between spins and chiral torsion.\cite{Fedorov} Ultrasound with its wavelength adjusted to the period of the kink crystal may excite the periodic chiral torsion and resonantly supply kinetic energy to the kink crystal. Consequently, ultrasound attenuation may occur.\cite{Hu} {\it TOF technique}: the most direct way of detecting the traveling magnon density may be to wind a sample with a pick-up coil and perform a time-of-flight (TOF) experiment. Then, the coil should detect a periodic signal induced by the magnetic current. \begin{acknowledgments} We acknowledge helpful discussions with Yu.~A.~Izyumov, K.~Inoue, I.~Fomin and M.~Sigrist. J.~K. acknowledges Grant-in-Aid for Scientific Research (A)(No.~18205023) and (C) (No.~19540371) from the Ministry of Education, Culture, Sports, Science and Technology, Japan. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The most important continuous alphabet channel in communication systems is the discrete-time additive white Gaussian noise (AWGN) channel in which at each time $i$, the output of the channel $Y_i$ is the sum of the input $X_i$ and Gaussian noise $Z_i$. In his original paper~\cite{Shannon48}, which launched the field of information theory, Shannon showed that the capacity of the AWGN channel is \begin{equation} \mathsf{C}(P) = \frac{1}{2}\log(1+P), \label{eqn:cap} \end{equation} where $P$ is the signal-to-noise ratio (SNR). More precisely, let $M^*(W^n,\varepsilon,P)$ be the maximum number of codewords that can be transmitted over $n$ independent uses of an AWGN channel with SNR $P$ and average error probability not exceeding $\varepsilon\in (0,1)$. Then, combining the direct part in \cite{Shannon48} and the strong converse by Shannon in~\cite{Sha59b} (also see Yoshihara~\cite{Yoshihara} and Wolfowitz~\cite{Wolfowitz}), one sees that \begin{equation} \lim_{n\to\infty}\frac{1}{n}\log M^*(W^n,\varepsilon,P)=\mathsf{C}(P) \quad \mbox{bits per channel use} \end{equation} holds for every $\varepsilon\in (0,1)$. Recently, there has been significant renewed interest in studying the higher-order terms in the asymptotic expansion of non-asymptotic fundamental limits such as $\log M^*(W^n,\varepsilon,P)$. This line of analysis was pioneered by Strassen \cite[Theorem~1.2]{Strassen} for discrete memoryless channels (DMCs) and is useful because it provides key insights into the amount of backoff from channel capacity for block codes of finite length $n$. For the AWGN channel, Hayashi \cite[Theorem~5]{Hayashi09} showed that \begin{equation} \log M^*(W^n,\varepsilon,P) = n\mathsf{C}(P) + \sqrt{n\mathsf{V}(P)}\Phi^{-1}(\varepsilon) + o(\sqrt{n}) \label{eqn:hayashi} \end{equation} where $\Phi^{-1}(\cdot)$ is the inverse of the Gaussian cumulative distribution function and \begin{equation} \mathsf{V}(P) = \log^2 \mathrm{e}\cdot\frac{P(P+2)}{2(P+1)^2} \quad \mbox{bits$^2$ per channel use} \label{eqn:disp} \end{equation} is termed the {\em Gaussian dispersion function}~\cite{PPV10}. The first two terms in the expansion in \eqref{eqn:hayashi} are collectively known as the {\em normal approximation}. The functional form of $\mathsf{V}(P)$ was already known to Shannon~\cite[Section~X]{Sha59b} who analyzed the behavior of the reliability function of the AWGN channel at rates close to capacity. Subsequently, the $o(\sqrt{n})$ remainder term in the expansion in~\eqref{eqn:hayashi} was refined by Polyanskiy-Poor-Verd\'u~\cite[Theorem~54, Eq.~(294)]{PPV10} who showed that \begin{equation} O(1)\le\log M^*(W^n,\varepsilon,P) - \Big( n\mathsf{C}(P) + \sqrt{n\mathsf{V}(P)}\Phi^{-1}(\varepsilon) \Big) \le\frac{1}{2}\log n + O(1). \label{eqn:ppv_gauss} \end{equation} The same bounds hold under the maximum probability of error formalism. Despite these impressive advances in the fundamental limits of coding over a Gaussian channel, the gap in the third-order term beyond the normal approximation in \eqref{eqn:ppv_gauss} calls for further investigation. The authors of the present paper showed for DMCs with positive $\varepsilon$-dispersion that the third-order term is no larger than $\frac{1}{2}\log n + O(1)$ \cite[Theorem~1]{TomTan12}, matching a lower bound by Polyanskiy~\cite[Theorem~53]{Pol10} for non-singular channels (also called channels with positive reverse dispersion~\cite[Eq.~(3.296)]{Pol10}).
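Before surveying results for other channel classes, it is instructive to see the size of these terms numerically. A minimal sketch (hypothetical blocklength, SNR, and error probability; the $O(1)$ remainder is dropped) evaluating the normal approximation in~\eqref{eqn:hayashi} together with the $\frac{1}{2}\log n$ correction is:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def normal_approx_bits(n, P, eps):
    C = 0.5 * np.log2(1 + P)                      # capacity, bits/use
    V = (np.log2(np.e) ** 2) * P * (P + 2) / (2 * (P + 1) ** 2)  # dispersion
    return n * C + np.sqrt(n * V) * norm.ppf(eps) + 0.5 * np.log2(n)

n, P, eps = 1000, 1.0, 1e-3
print(normal_approx_bits(n, P, eps) / n)          # ~0.42 bits per use
\end{verbatim}
For these parameters the approximation gives roughly $0.42$ bits per channel use, a noticeable backoff from the capacity of $0.5$ bits per channel use.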
Altu\u{g} and Wagner~\cite{altug13} showed for singular, symmetric DMCs that the third-order term is $O(1)$. Moulin~\cite{mou13b} recently showed for a large class of channels (but {\em not} the AWGN channel) that the third-order term is $\frac{1}{2}\log n + O(1)$. In light of these existing results for DMCs, a reasonable conjecture would be that the third-order term for the Gaussian case is either $O(1)$ or $\frac{1}{2}\log n + O(1)$. In this paper, we show that in fact, the lower bound in \eqref{eqn:ppv_gauss} is loose. In particular, we establish that it can be improved to match the upper bound $\frac{1}{2}\log n + O(1)$. Our proof technique is similar to that developed by Polyanskiy~\cite[Theorem 53]{Pol10} to show that $\frac{1}{2}\log n + O(1)$ is achievable for non-singular DMCs. However, our proof is more involved due to the presence of power constraints on the codewords. \section{Problem Setup and Definitions} Let $W$ be an AWGN channel where the noise variance\footnote{The assumption that the noise variance is $1$ does not entail any loss of generality because we can simply scale the admissible power accordingly to ensure that the SNR is $P$.} is $1$, i.e. \begin{equation} W(y|x)=\frac{1}{\sqrt{2\pi}}\exp\Big(-\frac{(y-x)^2}{2} \Big). \end{equation} Let $\mathbf{x}=(x_1,\ldots,x_n)$ and $\mathbf{y}=(y_1,\ldots, y_n)$ be two vectors in $\mathbb{R}^n$. Let $W^n(\mathbf{y}|\mathbf{x})=\prod_{i=1}^n W(y_i|x_i)$ be the $n$-fold memoryless extension of $W$. An {\em $(n,M,\varepsilon,P)_{\mathrm{av}}$-code} for the AWGN channel $W$ is a system $\{ (\mathbf{x}(m), \mathcal{D}_m)\}_{m=1}^M$ where $\mathbf{x}(m)\in\mathbb{R}^n,m \in\{ 1,\ldots, M\}$, are the codewords satisfying the maximal power constraint $\|\mathbf{x}(m)\|_2^2\le n P$, the sets $\mathcal{D}_m\subset\mathbb{R}^n$ are disjoint decoding regions and the {\em average probability of error} does not exceed $\varepsilon$, i.e.\ \begin{equation} \frac{1}{M}\sum_{m=1}^M W^n\big( \mathcal{D}_m^c \,\big|\, \mathbf{x}(m)\big)\le \varepsilon. \end{equation} Define $M^*(W^n,\varepsilon,P) :=\max\big\{ M \in\mathbb{N} : \exists \, \mbox{ an } (n,M,\varepsilon,P)_{\mathrm{av}}\mbox{-code for } W\big\}$. We also employ the Gaussian cumulative distribution function \begin{equation} \Phi(a) := \int_{-\infty}^a \frac{1}{\sqrt{2\pi}}\exp\Big( -\frac{u^2}{2}\Big)\,\mathrm{d} u \end{equation} and define its inverse as $\Phi^{-1}(\varepsilon): =\sup\{a\in\mathbb{R}: \Phi(a)\le \varepsilon\}$, which evaluates to the usual inverse for $0 <\varepsilon < 1$ and continuously extends to take values $\pm\infty$ outside that range. \section{Main Result and Remarks} Let us reiterate our main result. \begin{theorem} \label{thm:ach3} For all $0<\varepsilon<1$ and $P \in (0,\infty)$, \begin{equation} \log M^*(W^n,\varepsilon,P) \ge n \mathsf{C}(P)+\sqrt{n\mathsf{V}(P)}\Phi^{-1} (\varepsilon) + \frac{1}{2}\log n + O(1) \label{eqn:sizeM_star} \end{equation} where $\mathsf{C}(P)$ and $\mathsf{V}(P)$ are the Gaussian capacity and dispersion functions respectively. \end{theorem} We make the following remarks before proving the theorem in the following section. \begin{enumerate} \item As mentioned in the Introduction, the upper bound on $\log M^*(W^n,\varepsilon,P)$ in \eqref{eqn:ppv_gauss} was first established by Polyanskiy-Poor-Verd\'u~\cite[Theorem~65]{PPV10}. They evaluated the meta-converse~\cite[Theorem~28]{PPV10} and appealed to the spherical symmetry in the Gaussian problem. 
The third-order term in the normal approximation was shown to be upper bounded by $\frac{1}{2}\log n+O(1)$ (under the average or maximum error probability formalism). Thus, one has \begin{equation} \log M^*(W^n,\varepsilon,P) = n \mathsf{C}(P)+\sqrt{n\mathsf{V}(P)}\Phi^{-1} (\varepsilon) + \frac{1}{2}\log n + O(1) . \label{eqn:sizeM_eq} \end{equation} The technique developed by the present authors in \cite{TomTan12} can also be used to prove the $\frac{1}{2}\log n + O(1)$ upper bound on the third-order term. \item Our strategy for proving \eqref{eqn:sizeM_star} parallels that for non-singular DMCs without cost constraints by Polyanskiy~\cite[Theorem~53]{Pol10}. It leverages the random-coding union (RCU) bound~\cite[Theorem~16]{PPV10} and uses the log-likelihood ratio as the decoding metric, i.e.\ we do maximum likelihood decoding. However, our proof is more involved due to the presence of power constraints on the codewords. The Gaussian problem involves cost (power) constraints and our random codebook generation strategy (which is similar to Shannon's~\cite{Sha59b}) involves drawing codewords independently and uniformly at random from the power sphere. Thus, a more delicate analysis (vis-\`a-vis~\cite[Theorem~53]{Pol10}) is required. In particular, one cannot directly employ the refined large-deviations result stated in \cite[Lemma~47]{PPV10}, which is crucial in showing the achievability of $\frac{1}{2}\log n+O(1)$. This is because \cite[Lemma~47]{PPV10} requires independence of a collection of random variables whereas the independence structure is lacking in the AWGN problem. \item In Theorem~\ref{thm:ach3}, we considered a maximal power constraint on the codewords, i.e.\ $\|\mathbf{x}(m)\|_2^2\le nP$ for all $m$. It is easy to show that the third-order term is the same for the case of equal power constraints, i.e.\ $\|\mathbf{x}(m)\|_2^2= nP$ for all $m$. However, the strong converse does not even hold~\cite[Theorem~77]{Pol10} under the {\em average probability of error} formalism and the {\em average power constraint across the codebook}, i.e.\ $\frac{1}{M}\sum_{m=1}^M\|\mathbf{x}(m)\|_2^2\le nP$. The $\varepsilon$-capacity depends on $\varepsilon$. We do not consider this case in this paper. Nonetheless, the strong converse and normal approximation do hold~\cite[Theorem~54]{PPV10} under the {\em maximum probability of error} formalism and average power constraint across the codebook but we do not consider this setup here. It is known~\cite[Eq.~(295)]{PPV10} that the third-order term is sandwiched between $O(1)$ and $\frac{3}{2}\log n + O(1)$. \item A straightforward extension of our proof technique (in particular, the application of Lemma~\ref{lem:boundU} in Section~\ref{sec:interval}) shows that the achievability of $\frac{1}{2}\log n+O(1)$ also holds for the problem of information transmission over {\em parallel Gaussian channels} \cite[Section~9.4]{Cov06} in which the capacity is given by the well-known {\em water-filling} solution. See Appendix \ref{app:parallel} for a description of the modifications to the proof of Theorem~\ref{thm:ach3} for this setting. This improves on the result in \cite[Theorem~81]{Pol10} by $\frac{1}{2}\log n$. However, this third-order achievability result does not match the converse bound given in \cite[Theorem~80]{Pol10} in which it is shown that the third-order term is upper bounded by $\frac{k+1}{2}\log n + O(1)$ where $k\ge 1$ is the number of parallel Gaussian channels. We leave the closing of this gap for future research.
\item Finally, we make an observation concerning the relation between prefactors in the error exponents regime and the third-order terms in the normal approximation. In \cite{Sha59b}, Shannon derived exponential bounds on the average error probability of optimal codes over a Gaussian channel using geometric arguments. For {\em high rates} (i.e.\ rates above the critical rate and below capacity), he showed that \cite[Eqs.~(4)--(5)]{Sha59b} \begin{equation} \mathrm{P}_{\mathrm{e}}^*(M,n)=\Theta\Big( \frac{\exp(-n F(\varphi))}{\sqrt{n}}\Big) \label{eqn:shannon_exponent} \end{equation} where $\mathrm{P}_{\mathrm{e}}^*(M,n)$ is the optimal average probability of error of a length-$n$ block code of size $M \in\mathbb{N}$, $\varphi=\varphi(R)$ is a cone angle related to the signaling rate $R :=\frac{1}{n}\log M$ as follows \cite[Eq.~(28)]{Sha59b} \begin{align} \exp(-nR ) = \frac{\big(1+O\big(\frac{1}{n}\big)\big)\sin^n\varphi}{\sqrt{2\pi n} \, \sin\varphi \, \cos\varphi} , \label{eqn:theta1_Rn} \end{align} and the exponent in \eqref{eqn:shannon_exponent} is defined as \begin{align} F(\varphi)&:=\frac{P }{2}-\frac{\sqrt{P} \,G \, \cos\varphi}{2}-\log\big(G\sin\varphi\big), \quad\mbox{where}\\ G=G(\varphi ) & := \frac{1}{2}\big(\sqrt{P} \cos\varphi+ \sqrt{P\cos^2 \varphi+4}\big). \end{align} Furthermore for high rates, the error exponent (reliability function) of an AWGN channel is known and equals the sphere-packing exponent~\cite[Eq.~(7.4.33)]{gallagerIT} \begin{equation} E(R) = \frac{P}{4\beta}\bigg( (\beta+1) -(\beta-1)\sqrt{ 1+\frac{4\beta}{P(\beta-1)}} \bigg)+\frac{1}{2}\log\bigg( \beta-\frac{P(\beta-1)}{2}\bigg[\sqrt{ 1+\frac{4\beta}{P(\beta-1)}}-1\bigg]\bigg) \label{eqn:ER} \end{equation} where $\beta :=\exp(2R)$. Simple algebra shows that $F(\theta )= E(\tilde{R}(\theta))$ when $\tilde{R}(\theta):= -\log\sin\theta$. Thus, \begin{align} F\big(\varphi(R) \big) &= E\big( \tilde{R}(\varphi(R) )\big) \\ &= E\big(-\log\sin (\varphi(R)) \big) \\ & = E\Big(R-\frac{\log n}{2n}+ \Theta\Big(\frac{1}{n}\Big)\Big) \label{eqn:use_theta1_Rn}\\ &= E(R) - E'(R) \frac{\log n}{2n}+\Theta\Big(\frac{1}{n}\Big), \label{eqn:taylor_ER} \end{align} where \eqref{eqn:use_theta1_Rn} follows from \eqref{eqn:theta1_Rn} and \eqref{eqn:taylor_ER} follows by Taylor expanding the continuously differentiable function $E(R)$. Note that $E'(R)\le 0$. This leads to the conclusion that for high rates, \begin{equation} \mathrm{P}_{\mathrm{e}}^*(M,n)=\Theta\Big( \frac{\exp(-nE(R))}{n^{(1+ | E'(R)| )/2}}\Big). \end{equation} Thus, the prefactor of the AWGN channel is $\Theta(n^{-(1+ | E'(R) | )/2})$. We showed in Theorem~\ref{thm:ach3} that the third-order term is $\frac{1}{2}\log n+O(1)$. Somewhat surprisingly, this is analogous to the symmetric, discrete memoryless case. Indeed for non-singular, symmetric DMCs (such as the binary symmetric channel) the prefactor in the error exponents regime for high rates is $\Theta(n^{-(1+ |E'(R) |)/2})$ \cite{altug11,altug12, altug12a, Sca13} and for DMCs with positive $\varepsilon$-dispersion, the third-order term is $\frac{1}{2}\log n+O(1)$ (combining \cite[Theorem~1]{TomTan12} and \cite[Theorem~53]{Pol10}). (Actually symmetry is not required for the third-order term to be $\frac{1}{2}\log n+O(1)$.) On the other hand, for singular, symmetric DMCs (such as the binary erasure channel), the prefactor is $\Theta(n^{-1/2})$ \cite{altug12,altug11, altug12a, Sca13} and the third-order term is $O(1)$ (combining \cite[Proposition~1]{altug13} and \cite[Theorem~45]{PPV10}). 
Also see~\cite[Theorem~23]{Pol13}. These results suggest a connection between prefactors and third-order terms. Indeed, a precise understanding of this connection is a promising avenue for further research.
\end{enumerate}
\section{Proof of Theorem~\ref{thm:ach3}}
The proof, which is based on random coding, is split into several steps.
\subsection{Random Codebook Generation And Encoding}
We start by defining the random coding distribution
\begin{equation}
f_{\mathbf{X}} (\mathbf{x}) := \frac{\delta ( \|\mathbf{x}\|_2^2 -nP )}{S_n(\sqrt{nP})} \label{eqn:rc_dist}
\end{equation}
where $\delta(\cdot)$ is the Dirac delta and $S_n(r) = \frac{2\pi^{n/2}}{\Gamma(n/2)}r^{n-1}$ is the surface area of a radius-$r$ sphere in $\mathbb{R}^n$. We sample $M$ length-$n$ codewords independently from $f_{\mathbf{X}}$. In other words, we draw codewords uniformly at random from the surface of the sphere in $\mathbb{R}^n$ with radius $\sqrt{nP}$. The number of codewords $M$ will be specified at the end of the proof in \eqref{eqn:logM}. These codewords are denoted as $\mathbf{x}(m) = (x_1(m),\ldots, x_n(m))$ for $m \in \{ 1,\ldots, M\}$. To send message $m$, transmit codeword~$\mathbf{x}(m)$.
\subsection{Maximum-Likelihood Decoding}
Let the induced output density be $f_{\mathbf{X}}W^n$, i.e.\
\begin{equation}
f_{\mathbf{X}}W^n(\mathbf{y}):=\int_{\mathbf{x}'} f_{\mathbf{X}}(\mathbf{x}')W^n(\mathbf{y}|\mathbf{x}')\,\mathrm{d}\mathbf{x}' .
\end{equation}
Given $\mathbf{y}=(y_1,\ldots, y_n)$, the decoder selects the message $m$ satisfying
\begin{equation}
q(\mathbf{x}(m),\mathbf{y})> \max_{ \tilde{m} \in \{1,\ldots, M\}\setminus \{m\}} q(\mathbf{x}(\tilde{m}),\mathbf{y}) , \label{eqn:decode-rule}
\end{equation}
where the decoding metric is the log-likelihood ratio defined as
\begin{equation}
q(\mathbf{x},\mathbf{y}) := \log \frac{W^n(\mathbf{y}|\mathbf{x})}{f_{\mathbf{X}}W^n(\mathbf{y})} . \label{eqn:decoding_metric}
\end{equation}
If there is no unique $m\in\{1,\ldots, M\}$ satisfying \eqref{eqn:decode-rule}, declare an error. (This happens with probability zero.) Since the denominator in \eqref{eqn:decoding_metric}, namely $f_{\mathbf{X}}W^n(\mathbf{y})$, is constant across all codewords, this is simply maximum-likelihood or, in this Gaussian case, minimum-Euclidean-distance decoding. We will take advantage of the latter observation in our proof, more precisely the fact that
\begin{equation}
q(\mathbf{x},\mathbf{y}) = \frac{n}{2} \log \frac{1}{2\pi} + \langle \mathbf{x}, \mathbf{y} \rangle - \frac{nP}{2} - \frac{\|\mathbf{y}\|_2^2}{2} - \log f_{\mathbf{X}}W^n(\mathbf{y}) \label{eqn:inner_product}
\end{equation}
only depends on the codeword through the inner product $\langle \mathbf{x}, \mathbf{y} \rangle=\sum_{i=1}^n x_i y_i$. In fact, $q(\mathbf{x},\mathbf{y})$ is equal to $\langle \mathbf{x}, \mathbf{y} \rangle$ up to a shift that only depends on $\|\mathbf{y}\|_2^2$. Note that because $f_{\mathbf{X}}W^n$ is not a product density, $q(\mathbf{x},\mathbf{y})$ is {\em not separable} (into a sum of $n$ terms) unlike in the i.i.d.\ random coding case~\cite[Theorem~53]{Pol10}.
\subsection{The Random Coding Union (RCU) Bound}
All the randomly drawn codewords satisfy the cost constraints with probability one.
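Although it plays no role in the formal development, the codebook construction and decoding rule just described are straightforward to simulate. The following Python sketch (purely illustrative; the parameter values, variable names and the use of \texttt{numpy} are our own choices) draws codewords uniformly from the power sphere by normalising i.i.d.\ Gaussian vectors, and decodes by maximising the inner product $\langle \mathbf{x}(m),\mathbf{y}\rangle$, which, by \eqref{eqn:inner_product}, is equivalent to maximum-likelihood decoding since all codewords have the same norm.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, P, M = 128, 1.0, 64   # blocklength, admissible power, codebook size

# Uniform draws from the sphere of radius sqrt(nP): normalise
# i.i.d. Gaussian vectors (spherical symmetry of the Gaussian law).
G = rng.standard_normal((M, n))
X = np.sqrt(n * P) * G / np.linalg.norm(G, axis=1, keepdims=True)

y = X[0] + rng.standard_normal(n)  # transmit codeword 0 over the AWGN channel

# Equal-norm codewords: ML = min Euclidean distance = max inner product.
m_hat = int(np.argmax(X @ y))
print(m_hat == 0)   # correct with high probability at this low rate
\end{verbatim}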
By using the same proof technique as that for the RCU bound~\cite[Theorem~16]{PPV10}, we may assert that there exists an $(n,M,\varepsilon', P)_{\mathrm{av}}$-code satisfying \begin{equation} \varepsilon'\le \mathbb{E}\left[ \min\big\{1,M \Pr \big( q(\bar{\mathbf{X}} , \mathbf{Y} ) \ge q(\mathbf{X} , \mathbf{Y} ) |\mathbf{X},\mathbf{Y} \big)\big\}\right] \label{eqn:rcu} \end{equation} where the random variables $(\bar{\mathbf{X}},\mathbf{X},\mathbf{Y})$ are distributed as $f_{\mathbf{X}}(\bar{\mathbf{x}})\times f_{\mathbf{X}}(\mathbf{x})\times W^n(\mathbf{y}|\mathbf{x})$. Now, introduce the function \begin{equation} g(t,\mathbf{y}) := \Pr\big(q(\bar{\mathbf{X}} , \mathbf{Y} ) \ge t \,\big|\, \mathbf{Y}=\mathbf{y} \big). \label{eqn:gty} \end{equation} Since $\bar{\mathbf{X}}$ is independent of $\mathbf{X}$, the probability in \eqref{eqn:rcu} can be written as \begin{equation} \Pr \big( q(\bar{\mathbf{X}} , \mathbf{Y} ) \ge q(\mathbf{X} , \mathbf{Y} ) |\mathbf{X},\mathbf{Y} \big) = g(q(\mathbf{X} , \mathbf{Y} ) ,\mathbf{Y}). \end{equation} Furthermore, by Bayes rule, we have $f_{\mathbf{X}|\mathbf{Y}}(\mathbf{x}|\mathbf{y})\times f_{\mathbf{X}}W^n(\mathbf{y})= f_{\mathbf{X}}(\mathbf{x}) \times W^n(\mathbf{y}|\mathbf{x})$ and so \begin{equation} f_{\mathbf{X}}(\bar{\mathbf{x}})=f_{\mathbf{X}}(\bar{\mathbf{x}})\frac{f_{\mathbf{X}|\mathbf{Y}}(\bar{\mathbf{x}}|\mathbf{y})}{f_{\mathbf{X}|\mathbf{Y}}(\bar{\mathbf{x}}|\mathbf{y})} = f_{\mathbf{X}|\mathbf{Y}}(\bar{\mathbf{x}}|\mathbf{y})\exp(-q(\bar{\mathbf{x}} , \mathbf{y})). \end{equation} For a fixed sequence $\mathbf{y} \in \mathbb{R}^n$ and a constant $t\in\mathbb{R}$, multiplying both sides by $\mathbf{1}\{q (\bar{\mathbf{x}} , \mathbf{y})\ge t\}$ and integrating over all $\bar{\mathbf{x}}$ yields the following alternative representation of $g(t,\mathbf{y})$: \begin{equation} g(t,\mathbf{y}) =\mathbb{E}\big[ \exp(-q( \mathbf{X},\mathbf{Y}))\mathbf{1}\{ q( \mathbf{X},\mathbf{Y})\ge t \} \,\big|\, \mathbf{Y}=\mathbf{y}\big]. \label{eqn:integrating} \end{equation} \subsection{A High-Probability Set} Consider the set of ``typical'' channel outputs whose norms are approximately $\sqrt{n(P+1)}$. More precisely, define \begin{equation} \mathcal{F}:=\Big\{ \mathbf{y}\in\mathbb{R}^n : \frac{1}{n} \|\mathbf{y}\|_2^2 \in [ P+1 -\delta , P+1+\delta]\Big\}. \end{equation} We claim that the probability of $\mathbf{Y}\in\mathcal{F}$ is large. First the union bound yields \begin{align} \Pr(\mathbf{Y}\in\mathcal{F}^c) \le \Pr\bigg( \frac{1}{n}\|\mathbf{X}+\mathbf{Z}\|_2^2 > P+1+\delta\bigg) + \Pr\bigg( \frac{1}{n}\|\mathbf{X}+\mathbf{Z}\|_2^2 < P+1 - \delta\bigg) . \end{align} Since the bounding of both probabilities can be done in a similar fashion, we focus on the first which may be written as \begin{equation} \Pr\bigg( \frac{1}{n}\|\mathbf{X}+\mathbf{Z}\|_2^2 > P+1+\delta\bigg)= \Pr\bigg( \frac{1}{n}\big(2\langle\mathbf{X},\mathbf{Z}\rangle + \|\mathbf{Z}\|_2^2\big) > 1+\delta\bigg).\label{eqn:dropP} \end{equation} Define the following ``typical'' set of noises \begin{equation} \mathcal{G}:=\Big\{ \mathbf{z}\in\mathbb{R}^n:\frac{1}{n} \|\mathbf{z}\|_2^2 \le 1+\frac{\delta}{2} \Big\} . \end{equation} Since $\mathbf{Z}=(Z_1,\ldots, Z_n)\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{n\times n})$, by the Chernoff bound (or, more precisely, by Cramer's theorem~\cite[Theorem~2.2.3]{Dembo} for $\chi_1^2$ random variables), the probability that $\mathbf{Z}\in\mathcal{G}^c$ is upper bounded by $\exp(-\kappa_1 n \delta^2)$ for some constant $\kappa_1>0$. 
Now, we continue bounding the probability in \eqref{eqn:dropP} as follows: \begin{align} \Pr\bigg( \frac{1}{n}\big(2\langle\mathbf{X},\mathbf{Z}\rangle + \|\mathbf{Z}\|_2^2\big) > 1+\delta\bigg) &\le\Pr\bigg( \frac{1}{n}\big(2\langle\mathbf{X},\mathbf{Z}\rangle \!+\! \|\mathbf{Z}\|_2^2\big) > 1+\delta\,\bigg|\, \mathbf{Z}\in\mathcal{G}\bigg)\Pr(\mathbf{Z}\in\mathcal{G}) \! +\!\Pr(\mathbf{Z}\in\mathcal{G}^c) \\ &\le \Pr\bigg( \frac{2}{n} \langle\mathbf{X},\mathbf{Z}\rangle > \frac{\delta}{2}\,\bigg|\,\mathbf{Z}\in \mathcal{G}\bigg)\Pr(\mathbf{Z}\in\mathcal{G}) +\Pr(\mathbf{Z}\in\mathcal{G}^c) \label{eqn:usedefG}\\ &\le \Pr\bigg( \frac{1}{n}\sum_{i=1}^n X_i Z_i> \frac{\delta}{4}\bigg) +\Pr(\mathbf{Z}\in\mathcal{G}^c) , \end{align} where in~\eqref{eqn:usedefG} we used the definition of $\mathcal{G}$. By spherical symmetry, we may take $\mathbf{X}$ to be any point on the power sphere $\{\mathbf{x}:\|\mathbf{x}\|_2^2 = nP\}$. We take $\mathbf{X}$ to be equal to $(\sqrt{nP}, 0, \ldots, 0)$. Then the first term reduces to \begin{equation} \Pr\bigg( Z_1 > \frac{\delta}{4} \cdot\sqrt{\frac{n}{P}} \, \bigg) =1-\Phi\bigg( \frac{\delta}{4} \cdot\sqrt{\frac{n}{P}} \, \bigg)\le\exp(-\kappa_2 n \delta^2) , \end{equation} where $\kappa_2 >0$ is a constant. By putting all the bounds together and setting $\delta=n^{-1/3}$, we deduce that \begin{equation} \Pr(\mathbf{Y}\in\mathcal{F})\ge 1-\xi_n\label{eqn:chernoff} \end{equation} where $\xi_n := \exp(-\kappa_3 n^{1/3})$ for some $\kappa_3>0$. Note that $\xi_n$ decays faster than any polynomial. \subsection{Probability Of The Log-Likelihood Ratio Belonging To An Interval} \label{sec:interval} We would like to upper bound $g(t,\mathbf{y})$ in \eqref{eqn:gty} to evaluate the RCU bound. This we do in the next section. As an intermediate step, we consider the problem of upper bounding \begin{equation} h(\mathbf{y}; a, \mu) := \Pr\big( q(\mathbf{X},\mathbf{Y})\in [a,a+\mu] \, \big| \, \mathbf{Y} = \mathbf{y}\big) , \end{equation} where $a \in \mathbb{R}$ and $\mu > 0$ are some constants. Because $\mathbf{Y}$ is fixed to some constant vector $\mathbf{y}$ and $\|\mathbf{X}\|_2^2$ is also constant, $h(\mathbf{y}; a, \mu)$ can be rewritten using~\eqref{eqn:inner_product} as \begin{equation} h(\mathbf{y};a,\mu) := \Pr\big( \langle\mathbf{X},\mathbf{Y}\rangle\in [a', a' + \mu] \, \big| \, \mathbf{Y} = \mathbf{y}\big) , \label{eqn:inner_prod} \end{equation} for some other constant $a' \in \mathbb{R}$. It is clear that $h(\mathbf{y}; a, \mu)$ depends on $\mathbf{y}$ through its norm and so we may define (with an abuse of notation), \begin{equation} h(s;a,\mu) : = h(\mathbf{y};a,\mu) ,\quad\mbox{if}\quad s = \frac{1}{n}\|\mathbf{y}\|_2^2. \end{equation} In the rest of this section, we assume that $\mathbf{y}\in\mathcal{F}$ or, equivalently, $s\in [ P+1 -\delta , P+1+\delta]$. 
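The estimate \eqref{eqn:chernoff} can also be probed empirically. The sketch below (again illustrative only, with parameters of our own choosing) uses the fact that, with $\mathbf{X} = (\sqrt{nP},0,\ldots,0)$ as above, $\|\mathbf{Y}\|_2^2$ is a noncentral $\chi^2$ random variable with $n$ degrees of freedom and noncentrality parameter $nP$. The empirical escape probability decreases with $n$, although the exponential decay $\exp(-\kappa_3 n^{1/3})$ is only felt for large $n$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
P, trials = 1.0, 100_000

for n in (10**3, 10**4, 10**5):
    delta = n ** (-1 / 3)
    # ||Y||_2^2 = ||x0 + Z||_2^2 ~ noncentral chi-square(df=n, nonc=nP)
    s = rng.noncentral_chisquare(n, n * P, size=trials) / n
    print(n, np.mean(np.abs(s - (P + 1)) > delta))  # decreasing in n
\end{verbatim}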
\begin{figure}[t] \centering \begin{overpic}[width=.6\columnwidth]{sphere4} \put(93,41.5){$z_1$} \put(67,86){$z_2$} \put(64,35.5){$0$} \put(42,34.5){$-\sqrt{nP}$} \put(43.5,41.6){$-\mathbf{x}_0$} \put(72,62.5){Q} \put(80,35){$\sqrt{ns}\!-\!\sqrt{nP}$} \put(-1,35){$-\!\sqrt{ns}\!-\!\sqrt{nP}$} \put(56,50){\rotatebox{45}{$\sqrt{ns}$}} \put(83,58){$\sqrt{ns}\sin\psi$} \put(73,17){$\sqrt{ns}\cos\psi-\sqrt{nP}$} \put(58,43){$\psi$} \put(79.5,39.5){\circle*{2}} \put(20.5,39.5){\circle*{2}} \put(50.5,39.5){\circle*{2}} \put(72,60.5){\circle*{2}} \put(28,73){$\{\mathbf{z}:\|\mathbf{x}_0+\mathbf{z}\|_2^2=ns\}$} \linethickness{1.2mm} \put(66.5,39.8){\line(1,0){5}} \end{overpic} \caption{Illustration of the relation between $Z_1$ and $\Psi$ in \eqref{eqn:ZTheta} in two dimensions. The transformation of this figure to the $U$ coordinate system via~\eqref{eqn:UTheta} translates the sphere to the origin and scales its radius to be $1$. } \label{fig:z1} \end{figure}
By introducing the standard Gaussian random vector $\mathbf{Z}=(Z_1, \ldots, Z_n)\sim \mathcal{N}(\mathbf{0},\mathbf{I}_{n\times n})$, we have
\begin{align}
h(s;a,\mu) &= \Pr\left( \langle\mathbf{X},\mathbf{X}+\mathbf{Z}\rangle\in [a',a'+\mu] \, \Big|\, \|\mathbf{X}+\mathbf{Z}\|_2^2 = ns\right) \\
&= \Pr\bigg( \sum_{i=1}^n X_i Z_i + nP \in [a',a'+\mu] \, \bigg|\, \|\mathbf{X}+\mathbf{Z}\|_2^2 = ns \bigg) \label{eqn:a_prime}
\end{align}
where~\eqref{eqn:a_prime} follows by the observation that $\langle\mathbf{X},\mathbf{X}\rangle = nP$ with probability one. Now, define
\begin{equation}
\mathbf{x}_0 := \big( \sqrt{nP}, 0,\ldots, 0\big)
\end{equation}
to be a fixed vector on the power sphere. By spherical symmetry, we may pick $\mathbf{X}$ in \eqref{eqn:a_prime} to be equal to $\mathbf{x}_0$. Thus, we have
\begin{equation}
h(s;a,\mu) = \Pr\bigg( Z_1 + \sqrt{nP} \in \Big[ \frac{a'}{\sqrt{nP}},\frac{a'+\mu}{\sqrt{nP}}\Big] \, \bigg|\, \|\mathbf{x}_0+\mathbf{Z}\|_2^2 =ns \bigg) . \label{eqn:x0}
\end{equation}
In other words, we are conditioning on the event that the random vector $\mathbf{Z} \sim \mathcal{N}(\mathbf{0},\mathbf{I}_{n\times n})$ lands on the surface of a sphere of radius $\sqrt{ns}$ centered at $-\mathbf{x}_0 = (-\sqrt{nP}, 0, \ldots, 0)$. See Fig.~\ref{fig:z1}. We are then asking for the probability that the first component plus $\sqrt{nP}$ belongs to the prescribed interval of length proportional to $\mu/\sqrt{n}$. Let us now derive the conditional density of $Z_1$ given the event $\mathcal{E} :=\{ \|\mathbf{x}_0+\mathbf{Z}\|_2^2 = ns\}$. Denote this density as $f_{Z_1|\mathcal{E}} (z_1)$. Note that the support of $f_{Z_1|\mathcal{E}}(z_1)$ is $[-\sqrt{ns}-\sqrt{nP},\sqrt{ns}-\sqrt{nP}]$. It is easier to find the conditional density of the angle $\Psi \in [0,\pi]$ given the event $\mathcal{E}$, where $\Psi$ and $Z_1$ are related as follows:
\begin{equation}
Z_1 = \sqrt{ns} \cos\Psi - \sqrt{nP} . \label{eqn:ZTheta}
\end{equation}
Again see Fig.~\ref{fig:z1}. Now, we have
\begin{equation}
f_{\Psi|\mathcal{E}}(\psi)\, \mathrm{d} \psi\propto \left(\sin^{n-2} \psi\right) \exp\left( -\frac{n}{2} \left[ (\sqrt{s}\cos\psi - \sqrt{P} )^2 + s\sin^2\psi\right]\right)\, \mathrm{d} \psi.
\end{equation}
This follows because the area element (an $(n-1)$-dimensional annulus of radius $\sqrt{ns}\sin\psi$ and width $\mathrm{d}\psi$) is proportional to $\sin^{n-2}\psi$ (similar to Shannon's derivation in~\cite[Eq.~(21)]{Sha59b}) and the Gaussian weighting is proportional to $\exp\big( -\frac{n}{2} \big[ (\sqrt{s}\cos\psi - \sqrt{P} )^2 + s\sin^2\psi\big]\big)$. This is just $\exp(-d^2 / 2)$ where $d$ is the distance of the point described by $\psi$ (point Q in Fig.~\ref{fig:z1}) to the origin. Here we are relying heavily on the radial symmetry of the problem around the first axis. Now, we consider the change of variables
\begin{equation}
U = \cos\Psi \label{eqn:UTheta}
\end{equation}
resulting in
\begin{equation}
f_{U|\mathcal{E}} (u) \, \mathrm{d} u \propto (1-u^2)^{(n-3)/2}\exp \big( n \sqrt{Ps} u \big) \, \mathrm{d} u.
\end{equation}
Note that $U$ takes values in $[-1,1]$. More precisely, the conditional density of $U$ given $\mathcal{E}$ is
\begin{equation}
f_{U|\mathcal{E}} (u) =\frac{1}{F_n} (1-u^2)^{(n-3)/2} \exp\big( n \sqrt{Ps} u \big) \mathbf{1}\{ u \in [-1,1]\} ,\label{eqn:densityU}
\end{equation}
where the normalization constant is
\begin{equation}
F_n := \int_{-1}^1 (1-u^2)^{(n-3)/2} \exp\big( n \sqrt{Ps} u \big) \, \mathrm{d} u \label{eqn:Fn} .
\end{equation}
The conditional density we have derived in \eqref{eqn:densityU}--\eqref{eqn:Fn} reduces to that by Stam~\cite[Eq.~(3)]{Stam} for the limiting case $P=0$, i.e.\ the sphere is centered at the origin. It is of paramount importance to analyze how $\sup_{u \in [-1,1]} f_{U|\mathcal{E}}(u)$ scales with $n$. The answer turns out to be $O(\sqrt{n})$. More formally, we state the following lemma whose proof is provided in Appendix~\ref{app:boundU}.
\begin{lemma} \label{lem:boundU}
Define the function
\begin{equation}
L(P,s) := \frac{(2Ps)^2}{\sqrt{2\pi}}\cdot \sqrt{\frac{1+4Ps-\sqrt{1+4Ps}}{ ( \sqrt{1+4Ps}-1)^5} } .\label{eqn:defL}
\end{equation}
The following bound holds:
\begin{align}
\limsup_{n\to\infty}\frac{1}{\sqrt{n}} \sup_{u\in [-1,1]} f_{U|\mathcal{E}}(u) \le L(P,s).
\end{align}
\end{lemma}
Equipped with this lemma, let us consider the probability $h(s;a,\mu)$ in \eqref{eqn:x0}. We have
\begin{align}
h(s;a,\mu) & = \Pr\bigg( \sqrt{ns} \, U \in \Big[ \frac{a'}{\sqrt{nP}}, \frac{a' + \mu}{\sqrt{nP}} \Big] \,\bigg|\,\, \mathcal{E}\bigg) \label{eqn:ch_to_U} \\
&= \int_{ a'/ (n\sqrt{Ps}) }^{(a'+\mu)/ (n\sqrt{Ps} ) } f_{U|\mathcal{E}}(u) \, \mathrm{d} u \label{eqn:aprime2}\\
&\le \int_{ a'/ (n\sqrt{Ps}) }^{(a'+\mu)/ (n\sqrt{Ps} ) } 2 \, L(P,s) \, \sqrt{n} \, \mathrm{d} u \label{eqn:use_lem} \\
&= \frac{2\, L(P,s) \, \mu}{\sqrt{n Ps}}, \label{eqn:final_bd}
\end{align}
where \eqref{eqn:ch_to_U} follows from the fact that $Z_1 = \sqrt{ns} \, U -\sqrt{nP}$ due to \eqref{eqn:ZTheta} and \eqref{eqn:UTheta}, and~\eqref{eqn:use_lem} holds for all sufficiently large $n$ (depending only on $P$ and $s$) on account of Lemma~\ref{lem:boundU}. Since $s\in [ P+1 -\delta , P+1+\delta]$ and $\delta = n^{-1/3}\to 0$, we deduce that for all $\mathbf{y}\in\mathcal{F}$ and $n$ sufficiently large (depending only on $P$),
\begin{equation}
h(\mathbf{y};a,\mu) \le K(P) \cdot\frac{ \mu }{\sqrt{n }} , \label{eqn:root_n}
\end{equation}
for some function $K(P)$. In fact, by the continuity of $s\mapsto L(P,s)$, the constant $K(P)$ can be taken to be
\begin{equation}
K(P)=\frac{3\, L(P,P+1)}{\sqrt{P (P+1)}}.
\end{equation} \subsection{Probability That The Decoding Metric Exceeds $t$ For An Incorrect Codeword} We now return to bounding $g(t,\mathbf{y})$ defined in \eqref{eqn:gty}. Again, we assume $\mathbf{y}\in\mathcal{F}$. The idea here is to consider the second form of $g(t,\mathbf{y})$ in \eqref{eqn:integrating} and to slice the interval $[t,\infty)$ into non-overlapping segments $\{ [t+l\eta, t+(l+1)\eta): l \in \mathbb{N}\cup\{0\}\}$ where $\eta>0$ is a constant. Then we apply \eqref{eqn:root_n} to each segment. This is modelled after the proof of \cite[Lemma~47]{PPV10}. Indeed, we have \begin{align} g(t,\mathbf{y})& =\mathbb{E}\big[ \exp(-q( \mathbf{X},\mathbf{Y}))\mathbf{1}\{ q( \mathbf{X},\mathbf{Y})\ge t \} \,\big|\, \mathbf{Y}=\mathbf{y}\big]\nn\\ &\le \sum_{l=0}^{\infty} \exp(-t -l\eta) \Pr\left(t+l\eta\le q(\mathbf{X},\mathbf{Y}) < t+(l+1)\eta \,\big|\, \mathbf{Y} = \mathbf{y}\right) \label{eqn:slices} \\ &\le\sum_{l=0}^{\infty} \exp(-t -l\eta) \cdot \frac{ K(P)\, \eta}{ \sqrt{n}} \label{eqn:use_previous}\\ &= \frac{\exp(-t )}{1-\exp(-\eta)} \cdot\frac{K(P)\, \eta}{ \sqrt{n}} . \label{eqn:geom} \end{align} Since $\eta$ is a free parameter, we may choose it to be $\log 2$ yielding \begin{equation} g(t,\mathbf{y}) \le \frac{G \, \exp(-t )}{\sqrt{n}} \label{eqn:Lambda_bd} \end{equation} where $G=G(P)=(2\log 2)\, K(P)$. \subsection{Evaluating The RCU Bound} We now have all the necessary ingredients to evaluate the RCU bound in \eqref{eqn:rcu}. Consider, \begin{align} \varepsilon'&\le \mathbb{E}\left[ \min\big\{1,Mg(q(\mathbf{X},\mathbf{Y}) ,\mathbf{Y}) \big\}\right]\nn\\ &\le\Pr(\mathbf{Y}\in \mathcal{F}^c)+ \mathbb{E}\left[ \min\big\{1,Mg(q(\mathbf{X},\mathbf{Y}) ,\mathbf{Y}) \big\}\,\Big|\, \mathbf{Y}\in\mathcal{F} \right] \cdot \Pr(\mathbf{Y} \in \mathcal{F}) \\ & \le\Pr(\mathbf{Y}\in \mathcal{F}^c) + \mathbb{E}\left[ \min\left\{1, \frac{M G \exp(-q(\mathbf{X},\mathbf{Y}) ) }{\sqrt{n}} \right\} \,\bigg|\, \mathbf{Y}\in\mathcal{F} \right] \cdot \Pr(\mathbf{Y} \in \mathcal{F})\label{eqn:use_integrating}\\ & \le\xi_n + \mathbb{E}\left[ \min\left\{1, \frac{MG\exp(-q(\mathbf{X},\mathbf{Y}) ) }{\sqrt{n}} \right\} \,\bigg|\, \mathbf{Y}\in\mathcal{F}\right] \cdot \Pr(\mathbf{Y} \in \mathcal{F})\label{eqn:use_F_bound} \end{align} where \eqref{eqn:use_integrating} is due to~\eqref{eqn:Lambda_bd} with $t = q(\mathbf{X},\mathbf{Y})$ and \eqref{eqn:use_F_bound} uses the bound in \eqref{eqn:chernoff}. Now we split the expectation into two parts depending on whether $q(\mathbf{x},\mathbf{y})> \log (MG/\sqrt{n})$ or otherwise, i.e.\ \begin{align} & \mathbb{E}\left[ \min\left\{1, \frac{MG\exp(-q(\mathbf{X},\mathbf{Y}) ) }{\sqrt{n}} \right\} \,\bigg|\, \mathbf{Y}\in\mathcal{F}\right] \nn\\ &\le \Pr\left( q(\mathbf{X},\mathbf{Y}) \le \log \frac{MG}{\sqrt{n}} \,\bigg|\, \mathbf{Y}\in\mathcal{F}\right) + \frac{MG}{\sqrt{n }} \mathbb{E}\left[ \mathbf{1}\left\{ q(\mathbf{X},\mathbf{Y}) > \log\frac{MG}{\sqrt{n}} \right\}\exp(-q(\mathbf{X},\mathbf{Y})) \,\bigg|\, \mathbf{Y}\in\mathcal{F}\right] \label{eqn:expand_min} . \end{align} By applying \eqref{eqn:Lambda_bd} with $t = \log (MG/\sqrt{n})$, we know that the second term can be bounded as \begin{equation} \frac{MG}{\sqrt{n }} \mathbb{E}\left[ \mathbf{1}\left\{ q(\mathbf{X},\mathbf{Y}) > \log\frac{MG}{\sqrt{n}} \right\}\exp(-q(\mathbf{X},\mathbf{Y})) \,\bigg|\, \mathbf{Y}\in\mathcal{F}\right]\le \frac{G}{\sqrt{n}}. 
\end{equation} Now let $f_{Y}^*(y) = \mathcal{N}(y;0,P+1)$ be the capacity-achieving output distribution and $f_{\mathbf{Y}}^* (\mathbf{y})=\prod_{i=1}^n f_{Y}^*(y_i)$ its $n$-fold memoryless extension. In Step 1 of the proof of Lemma~61 in~\cite{PPV10}, Polyanskiy-Poor-Verd\'u showed that on $\mathcal{F}$, the ratio of the induced output density $f_{\mathbf{X}}W^n(\mathbf{y})$ and $f_{\mathbf{Y}}^*(\mathbf{y})$ can be bounded by a finite constant $J$, i.e.\ \begin{equation} \sup_{\mathbf{y}\in\mathcal{F}} \frac{ f_{\mathbf{X}}W^n(\mathbf{y})}{ f_{\mathbf{Y}}^*(\mathbf{y})} \le J .\label{eqn:change_meas} \end{equation} Also see \cite[Proposition~2]{Mol13}. We return to bounding the first term in \eqref{eqn:expand_min}. Using the definition of $q(\mathbf{x},\mathbf{y})$ in~\eqref{eqn:decoding_metric} and applying the bound in~\eqref{eqn:change_meas} yields \begin{align} \Pr\left( q(\mathbf{X},\mathbf{Y}) \le \log \frac{MG}{\sqrt{n}} \,\bigg|\, \mathbf{Y}\in\mathcal{F} \right) &=\Pr\left( \log\frac{W^n(\mathbf{Y}|\mathbf{X})}{f_{\mathbf{X}}W^n(\mathbf{Y})} \le \log \frac{MG}{\sqrt{n}} \,\bigg|\, \mathbf{Y}\in\mathcal{F} \right) \\ & \le \Pr\left( \log\frac{W^n(\mathbf{Y}|\mathbf{X})}{f_{\mathbf{Y}}^*(\mathbf{Y})} \le \log \frac{MGJ}{\sqrt{n}} \,\bigg|\, \mathbf{Y}\in\mathcal{F}\right) . \end{align} Thus, when we multiply the first term in \eqref{eqn:expand_min} by $\Pr(\mathbf{Y} \in \mathcal{F})$, use Bayes rule and drop the event $\{\mathbf{Y}\in\mathcal{F}\}$, we see that the product can be bounded as follows: \begin{align} \Pr\left( q(\mathbf{X},\mathbf{Y}) \le \log \frac{MG}{\sqrt{n}} \,\bigg|\, \mathbf{Y}\in\mathcal{F} \right) \cdot\Pr(\mathbf{Y} \in \mathcal{F})\le\Pr \left( \log\frac{W^n(\mathbf{Y}|\mathbf{X})}{f_{\mathbf{Y}}^*(\mathbf{Y})} \le \log \frac{MGJ}{\sqrt{n}} \right)\label{eqn:first_term} . \end{align} The right-hand-side of \eqref{eqn:first_term} can be written as an average over $\mathbf{X}\sim f_{\mathbf{X}}$, i.e.\ \begin{align} \Pr \left( \log\frac{W^n(\mathbf{Y}|\mathbf{X})}{f_{\mathbf{Y}}^*(\mathbf{Y})} \le \log \frac{MGJ}{\sqrt{n}} \right)= \int_{\mathbf{x}}f_{\mathbf{X}}(\mathbf{x}) \Pr\left( \log\frac{W^n(\mathbf{Y}|\mathbf{X})}{f_{\mathbf{Y}}^*(\mathbf{Y})} \le \log \frac{M G J }{\sqrt{n}} \, \bigg|\, \mathbf{X}=\mathbf{x} \right)\,\mathrm{d} \mathbf{x} \label{eqn:pick_X}. \end{align} By noting that $f_{\mathbf{Y}}^*(\mathbf{y})$ is a product density, \begin{equation} \Pr\bigg( \log\frac{W^n(\mathbf{Y}|\mathbf{X})}{f_{\mathbf{Y}}^*(\mathbf{Y})} \le \log \frac{M G J }{\sqrt{n}} \, \bigg|\, \mathbf{X}=\mathbf{x} \bigg) = \Pr\bigg(\sum_{i=1}^n \log\frac{W (Y_i| X_i)}{f_{Y}^*(Y_i)} \le \log \frac{M G J }{\sqrt{n}} \, \bigg|\, \mathbf{X}=\mathbf{x} \bigg). \end{equation} The above probability does not depend on $\mathbf{x}$ as long as it is on the power sphere $\{\mathbf{x}:\|\mathbf{x}\|_2^2=nP\}$ because of spherical symmetry. Hence we may take $\mathbf{x}=(\sqrt{P},\ldots, \sqrt{P})$. It is then easy to check that the first two central moments of the information density are \begin{align} \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^n \log\frac{W (Y_i| \sqrt{P})}{f_{Y}^*(Y_i)} \right] = \mathsf{C}(P) , \quad\mbox{and}\quad \var\left[ \frac{1}{n}\sum_{i=1}^n \log\frac{W (Y_i| \sqrt{P})}{f_{Y}^*(Y_i)}\right] = \frac{ \mathsf{V}(P) }{n}. 
\end{align}
Furthermore, the following third absolute moment
\begin{equation}
\mathsf{T}(P) := \frac{1}{n} \sum_{i=1}^n \mathbb{E} \left[\left| \log\frac{W (Y_i| \sqrt{P})}{f_{Y}^*(Y_i)}- \mathbb{E}\bigg[\log\frac{W (Y_i| \sqrt{P})}{f_{Y}^*(Y_i)}\bigg]\right|^3\right]
\end{equation}
is obviously bounded (note the scaling). See \cite[Lemma~10 and Appendix~A]{ScarlettTan} for a precise analysis of third absolute moments of information densities involving Gaussians. This allows us to apply the Berry-Esseen theorem~\cite[Theorem~2 in Section~XVI.5]{feller}, which implies that
\begin{equation}
\Pr\left( \log\frac{W^n(\mathbf{Y}|\mathbf{X})}{f_{\mathbf{Y}}^*(\mathbf{Y})} \le \log \frac{M G J }{\sqrt{n}} \, \bigg|\, \mathbf{X}= (\sqrt{P},\ldots, \sqrt{P})\right)\le\Phi\left( \frac{\log \frac{M G J }{\sqrt{n}} -n\mathsf{C}(P)}{\sqrt{n\mathsf{V}(P)}} \right) + \frac{6\, \mathsf{T}(P)}{\sqrt{n\mathsf{V}(P)^3}} .\label{eqn:be}
\end{equation}
Let $B=B(P) := 6\,\mathsf{T}(P) / \mathsf{V}(P)^{3/2}$. We deduce that
\begin{equation}
\Pr\left( \log\frac{W^n(\mathbf{Y}|\mathbf{X})}{f_{\mathbf{Y}}^*(\mathbf{Y})} \le \log \frac{MGJ}{\sqrt{n}} \right)\le \Phi\left( \frac{\log \frac{M G J }{\sqrt{n}} -n\mathsf{C}(P)}{\sqrt{n\mathsf{V}(P)}} \right) + \frac{B}{\sqrt{n}} .
\end{equation}
Putting all the bounds together, we obtain
\begin{equation}
\varepsilon'\le \Phi\left( \frac{\log \frac{M G J }{\sqrt{n}} -n\mathsf{C}(P)}{\sqrt{n\mathsf{V}(P)}} \right) + \frac{B}{\sqrt{n}} + \frac{G}{\sqrt{n}}+\xi_n.
\end{equation}
Now choose
\begin{align}
\log M = n\mathsf{C}(P) + \sqrt{n \mathsf{V}(P)} \Phi^{-1}\left( \varepsilon - \frac{B+G}{\sqrt{n}}-\xi_n \right) + \frac{1}{2}\log n-\log(GJ) \label{eqn:logM}
\end{align}
ensuring that
\begin{equation}
\varepsilon'\le \varepsilon.
\end{equation}
Hence, there exists an $(n,M,\varepsilon,P)_{\mathrm{av}}$-code, where $M$ is given by \eqref{eqn:logM}. It is easily seen by Taylor expanding $\Phi^{-1}(\cdot)$ around $\varepsilon$ that
\begin{equation}
\log M = n\mathsf{C}(P) + \sqrt{n \mathsf{V}(P)} \Phi^{-1}(\varepsilon) + \frac{1}{2}\log n+O(1).
\end{equation}
This completes the proof of Theorem~\ref{thm:ach3}.\qed
\appendices \numberwithin{equation}{section}
\section{Modifications of the Proof to the Parallel Gaussian Channels Setting} \label{app:parallel}
In this appendix, we give a sketch of how the proof of Theorem~\ref{thm:ach3} can be used for the scenario where information is to be transmitted across $k$ parallel Gaussian channels. See Section 9.4 of \cite{Cov06} for the precise problem setting. Let the input and output to the channel be $(\mathbf{X}_1, \ldots, \mathbf{X}_k)$ and $(\mathbf{Y}_1, \ldots, \mathbf{Y}_k)$ respectively. Let the independent noises of each of the channels have variances $N_1,\ldots, N_k$ and denote the total admissible power as $P$. Let $|\cdot|^+:=\max\{0,\cdot\}$ and let $P_1,\ldots, P_k$ be the power assignments that maximize the information capacity expression, i.e.\
\begin{equation}
P_j = |\nu-N_j|^+ \label{eqn:water}
\end{equation}
where the Karush-Kuhn-Tucker multiplier $\nu$ is chosen to satisfy the total power constraint
\begin{equation}
\sum_{j=1}^k |\nu-N_j|^+=P. \label{eqn:kkt}
\end{equation}
Let $\mathcal{P}^+:=\{j \in\{1,\ldots, k\}: P_j>0\}$. Clearly, \eqref{eqn:water} and \eqref{eqn:kkt} imply that $\mathcal{P}^+$ is non-empty if $P>0$.
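Since the left-hand side of \eqref{eqn:kkt} is continuous and non-decreasing in $\nu$, the water level (and hence the power assignments \eqref{eqn:water}) is easily computed numerically by bisection. The following Python sketch (ours, for illustration only; the function name and tolerance are our own choices) does precisely this; the solution is bracketed by $[\min_j N_j, \min_j N_j + P]$.
\begin{verbatim}
import numpy as np

def water_fill(N, P, tol=1e-12):
    """Solve sum_j max(nu - N_j, 0) = P for the water level nu."""
    N = np.asarray(N, dtype=float)
    lo, hi = N.min(), N.min() + P  # bracket: sum is 0 at lo and >= P at hi
    while hi - lo > tol:
        nu = (lo + hi) / 2
        if np.maximum(nu - N, 0.0).sum() < P:
            lo = nu
        else:
            hi = nu
    nu = (lo + hi) / 2
    return nu, np.maximum(nu - N, 0.0)

nu, Pj = water_fill([0.5, 1.0, 2.0], P=1.0)
print(nu, Pj)  # nu = 1.25, P = (0.75, 0.25, 0): channel 3 gets no power
\end{verbatim}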
We use the random coding distribution $f_{\mathbf{X}_1}\times\ldots\times f_{\mathbf{X}_k}$ where each constituent distribution $f_{\mathbf{X}_j}$ is given by~\eqref{eqn:rc_dist} with $P_j$ in place of $P$ there. Close inspection of the proof of Theorem~\ref{thm:ach3} shows that the only estimate that needs to be verified is~\eqref{eqn:final_bd}. For this, we consider the analogue of \eqref{eqn:a_prime} which can be written as
\begin{equation}
h(s_1,\ldots, s_k;a,\mu)=\Pr\bigg(\sum_{j=1}^k \sqrt{P_j}\, Z_{j1}\in \Big[\frac{a_2}{\sqrt{n}}, \frac{a_2+\mu}{\sqrt{n}}\Big] \,\bigg|\, \|\mathbf{X}_j+\mathbf{Z}_j\|_2^2 = ns_j,\,\forall\, j \in\{1,\ldots, k\} \bigg) , \label{eqn:Zk}
\end{equation}
where $a_2$ is related to $a'$ in \eqref{eqn:a_prime} by a constant shift. Note that the sum of the inner products $\sum_{j=1}^k\langle\mathbf{X}_j,\mathbf{Y}_j\rangle$ in the analogue of \eqref{eqn:inner_prod} reduces to $\sum_{j=1}^k\sqrt{P_j} Z_{j1}= \sum_{j\in \mathcal{P}^+}\sqrt{P_j}Z_{j1}$ once we have exploited spherical symmetry to choose $\mathbf{X}_j=\mathbf{x}_{j0}:=(\sqrt{nP_j},0,\ldots, 0)$ and moved all the constants to the right-hand side. Let $\mathcal{E}$ be the event $\{ \|\mathbf{x}_{j0}+\mathbf{Z}_j\|_2^2 = ns_j,\,\forall\, j \in\{1,\ldots, k\} \}$. By introducing the independent random variables $\{U_j: j\in\mathcal{P}^+\}$ that are related to $\{Z_{j1} : j\in\mathcal{P}^+\}$ analogously to~\eqref{eqn:ZTheta}, we see that \eqref{eqn:Zk} reduces to
\begin{equation}
h(s_1,\ldots, s_k;a,\mu)=\Pr\bigg(\sum_{j \in\mathcal{P}^+} \sqrt{P_js_j}\, U_{j }\in \Big[\frac{a_3}{n}, \frac{a_3+\mu}{n}\Big] \,\bigg|\, \mathcal{E}\bigg), \label{eqn:reduce_inner_prod}
\end{equation}
where $a_3$ is related to $a_2$ by a constant shift. In principle, since the $U_j$'s are independent, we can use their densities, each of the form \eqref{eqn:densityU}, to find the distribution of $\sum_{j \in\mathcal{P}^+} \sqrt{P_js_j}\, U_{j }$ by convolution and bound the probability using the steps that led to \eqref{eqn:final_bd}. However, the following method proves to be easier.
Let $l$ be any element in $\mathcal{P}^+$ and consider
\begin{align}
&h(s_1,\ldots, s_k;a,\mu) \nn\\
& =\int \Pr\bigg(\sum_{j \in\mathcal{P}^+} \sqrt{P_js_j} \,U_{j }\in \Big[\frac{a_3}{n}, \frac{a_3+\mu}{n}\Big] \,\bigg|\, \mathcal{E},\,\big\{\forall j \in\mathcal{P}^+\setminus \{l\} , U_j=u_j\big\}\bigg)\, \prod_{ j \in\mathcal{P}^+\setminus \{l\} }f_{U_j|\mathcal{E}}(u_j)\, \mathrm{d} u_j \label{eqn:law_tp}\\
& =\int \Pr\bigg( \sqrt{P_{l } s_{l }} \,U_{l }\in \Big[\frac{a_4}{n}, \frac{a_4+\mu}{n}\Big] \,\bigg|\, \mathcal{E}, \, \big\{\forall j \in\mathcal{P}^+\setminus \{l\} , U_j=u_j\big\} \bigg)\, \prod_{ j \in\mathcal{P}^+\setminus\{l\} } f_{U_j|\mathcal{E}}(u_j)\, \mathrm{d} u_j \label{eqn:U_indep0}\\
& =\int \Pr\bigg( \sqrt{P_{l } s_{l }}\, U_{l }\in \Big[\frac{a_4}{n}, \frac{a_4+\mu}{n}\Big] \,\bigg|\, \mathcal{E} \bigg)\, \prod_{ j \in\mathcal{P}^+\setminus \{l\} } f_{U_j|\mathcal{E}}(u_j)\, \mathrm{d} u_j \label{eqn:U_indep}\\
& \le \int \frac{2\, L(P_{l }, s_{l }) \, \mu}{\sqrt{nP_{l }s_{l }}} \, \prod_{ j \in\mathcal{P}^+\setminus \{l\}} f_{U_j|\mathcal{E}}(u_j)\, \mathrm{d} u_j \label{eqn:U_indep1}\\
&= \frac{2\, L(P_{l }, s_{l }) \, \mu}{\sqrt{nP_{l }s_{l }}} ,
\end{align}
where \eqref{eqn:law_tp} follows from the law of total probability; \eqref{eqn:U_indep0} follows by noting that $\{u_j:j\in\mathcal{P}^+\setminus \{l\} \}$ are constants and defining $a_4$ to be related to $a_3$ by a constant shift; \eqref{eqn:U_indep} is due to the joint independence of the random variables $\{U_j: j \in\mathcal{P}^+\}$; and finally~\eqref{eqn:U_indep1}, which holds for $n$ sufficiently large, follows by the same reasoning in the steps that led to~\eqref{eqn:final_bd}. Since $l\in\mathcal{P}^+$ is arbitrary,
\begin{equation}
h(s_1,\ldots, s_k;a,\mu)\le\min_{l\in\mathcal{P}^+} \frac{2\, L(P_{l } , s_{l }) \, \mu}{\sqrt{nP_{l }s_{l }}} .
\end{equation}
We conclude that, just as in \eqref{eqn:root_n}, the probability $h(\mathbf{y}_1,\ldots, \mathbf{y}_k; a,\mu)$ is still bounded above by a constant multiple of $\mu/\sqrt{n}$ and the constant does not depend on $a$. The rest of the proof proceeds {\em mutatis mutandis}.
\section{Proof of Lemma~\ref{lem:boundU}} \label{app:boundU}
We first find a lower bound for the normalization constant $F_n$ defined in \eqref{eqn:Fn}. Using the fact that $(1-u^2)^{-3/2} \geq 1$, we have
\begin{equation}
F_n \ge \underline{F}_n:=\int_{-1}^1 \exp( n \alpha(u) ) \, \mathrm{d} u \label{eqn:underF}
\end{equation}
where the exponent is
\begin{equation}
\alpha(u) := \frac{1}{2}\log (1-u^2) + \sqrt{Ps} u .
\end{equation}
This exponent is maximized at
\begin{equation}
u^* = \frac{\sqrt{1+ 4Ps}-1}{2\sqrt{Ps}}, \label{eqn:u_star}
\end{equation}
which is in the interior of $[-1,1]$ for finite $P$. Furthermore, the second derivative of $\alpha$ is
\begin{equation}
\alpha''(u) = -\frac{(1+u^2)}{(1-u^2)^2}
\end{equation}
which is always negative. Now we use Laplace's method to lower bound the definite integral in \eqref{eqn:underF} with that of a Gaussian~\cite{Tierney86, Shun95}. We provide the details for the reader's convenience. Let $\epsilon\in(0, -\alpha''(u^*))$. By the continuity of $\alpha''(u)$ at $u^*$ and Taylor's theorem, there exists a $\zeta \in (0, 1-u^*)$ such that for any $u \in (u^*-\zeta,u^*+\zeta)\subset [-1, 1]$, we have $\alpha(u)\ge \alpha(u^*)+\frac{1}{2}(\alpha''(u^*)-\epsilon)(u-u^*)^2$.
The following lower bounds hold:
\begin{align}
\underline{F}_n &\ge\int_{u^*-\zeta}^{u^*+\zeta} \exp(n\alpha(u))\, \mathrm{d} u \\
&\ge \exp(n\alpha(u^*))\int_{u^*-\zeta}^{u^*+\zeta} \exp\Big( \frac{n}{2}(\alpha''(u^*)-\epsilon) (u-u^*)^2\Big)\, \mathrm{d} u \\
&=\exp(n\alpha(u^*)) \sqrt{ \frac{1}{n( -\alpha''(u^*)+\epsilon)}} \int_{ -\zeta\sqrt{ n ( -\alpha''(u^*)+\epsilon) }}^{ \zeta\sqrt{ n ( -\alpha''(u^*)+\epsilon) }} \mathrm{e}^{-v^2/2} \, \mathrm{d} v . \label{eqn:change_vars}
\end{align}
We used the change of variables $v=\sqrt{n ( -\alpha''(u^*)+\epsilon)}(u-u^*)$ in the final step. The integral in \eqref{eqn:change_vars} tends to $\sqrt{2\pi}$ as $n$ becomes large, so
\begin{equation}
\liminf_{n\to\infty}\frac{\underline{F}_n}{ \sqrt{ \frac{2\pi}{n |\alpha''(u^*)|}} \exp(n\alpha(u^*))}\ge \sqrt{\frac{- \alpha''(u^*) }{-\alpha''(u^*)+\epsilon }} . \label{eqn:take_liminf}
\end{equation}
Since $\epsilon>0$ is arbitrary, we can rewrite \eqref{eqn:take_liminf} as
\begin{equation}
\underline{F}_n \ge \gamma_n \, \sqrt{\frac{2\pi}{n |\alpha''(u^*)|}}\exp(n \alpha(u^*)), \label{eqn:laplace}
\end{equation}
for some sequence $\gamma_n$ that converges to $1$ as $n \to \infty$. Furthermore, the numerator of $f_{U|\mathcal{E}}(u)$ in \eqref{eqn:densityU} can be upper bounded as
\begin{equation}
(1-u^2)^{(n-3)/2} \exp\big( n \sqrt{Ps} u \big) = \exp (n \beta_n(u)) \le\exp( n \beta_n(u_n^*)) \label{eqn:upper_bd_num}
\end{equation}
where the exponent is
\begin{equation}
\beta_n(u):= \Big(\frac{1}{2}-\frac{3}{2n}\Big)\log (1-u^2) + \sqrt{Ps} u
\end{equation}
and the maximizer of $\beta_n(u)$ is
\begin{equation}
u_n^* := \frac{ \sqrt{ (1-\frac{3}{n} )^2 + 4Ps }-(1-\frac{3}{n})}{2 \sqrt{Ps}}.
\end{equation}
Clearly, $u_n^*\to u^*$ as $n\to\infty$. We have, by combining \eqref{eqn:laplace} and \eqref{eqn:upper_bd_num}, that
\begin{equation}
\sup_{u \in [-1, 1]} f_{U|\mathcal{E}}(u) \le \frac1{\gamma_n} \sqrt{\frac{n |\alpha''(u^*)|}{2\pi}} \exp\big( n [\beta_n(u_n^*)-\alpha(u^*) ] \big) .\label{eqn:sup_u}
\end{equation}
Now, we examine the exponent $\beta_n(u_n^*)-\alpha(u^*)$ above. We have
\begin{align}
\beta_n(u_n^*)-\alpha(u^*) \le \beta_n(u_n^*)-\alpha(u_n^* ) = \frac{3}{2n}\log\frac{1}{1-(u_n^*)^2} \label{eqn:def_alpha_beta}
\end{align}
where the inequality follows because $u^*$ maximizes $\alpha$ and so $\alpha(u_n^*)\le\alpha(u^*)$ and the equality is due to the definitions of $\alpha(u)$ and $\beta_n(u)$. Thus, \eqref{eqn:sup_u} can be further upper bounded as
\begin{equation}
\sup_{u \in [-1, 1]} f_{U|\mathcal{E}}(u) \le \frac1{\gamma_n} \cdot\sqrt{\frac{n |\alpha''(u^*)|}{2\pi}} \cdot\frac{1}{(1-(u_n^*)^2)^{3/2}} .
\end{equation}
Dividing both sides by $\sqrt{n}$ and taking the $\limsup$ shows that the upper bound can be chosen to be
\begin{equation}
L(P,s) = \frac{1}{ (1-(u^*)^2)^{3/2}} \cdot\sqrt{\frac{ |\alpha''(u^*)|}{2\pi}} = \sqrt{ \frac{1+ (u^*)^2}{ 2\pi (1-(u^*)^2)^5} }\label{eqn:defB} .
\end{equation}
This concurs with \eqref{eqn:defL} after we substitute for the value of $u^*$ in \eqref{eqn:u_star}.\qed
\subsection*{Acknowledgements}
VT sincerely thanks Shaowei Lin (I$^2$R, A*STAR) for many helpful explanations concerning approximation of integrals in high dimensions. The authors also thank Jonathan Scarlett (Cambridge) and Y\"ucel Altu\u{g} (Cornell) for discussions and constructive comments on the manuscript.
\bibliographystyle{ieeetr}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:intro}
The simplest and most widely studied class of expanding interval maps, and the ones with which we will concern ourselves, are intermediate $\beta$-transformations, namely transformations of the form $T_{\beta, \alpha} : x \mapsto \beta x + \alpha\mod 1$ acting on $[0,1]$, where $(\beta, \alpha) \in \Delta \coloneqq \{ (b, a) \in \mathbb{R}^{2} \colon b \in (1, 2) \; \text{and} \; a \in [0, 2-b]\}$. This class of transformations has motivated a wealth of results, providing practical solutions to a variety of problems. They arise as Poincar\'e maps of the geometric model of Lorenz differential equations~\cite{MR681294}; Daubechies \textsl{et al.}\ \cite{1011470} proposed a new approach to analog-to-digital conversion using \mbox{$\beta$-transformations}; and Jitsumatsu and Matsumura \cite{Jitsumatsu2016AT} developed a random number generator using \mbox{$\beta$-transformations}. (This random number generator passed the NIST statistical test suite.) Through their study, many new phenomena have appeared, revealing rich combinatorial and topological structures, and unexpected connections to probability theory, ergodic theory and aperiodic order; see for instance \cite{bezuglyi_kolyada_2003,Komornik:2011,ArneThesis}.
Intermediate $\beta$-transformations also have an intimate link to metric number theory in that they give rise to \mbox{non-integer} based expansions of real numbers. Given $\beta \in (1, 2)$ and $x \in [0, 1/(\beta-1)]$, an infinite word $\omega = \omega_{1}\omega_{2}\cdots$ with letters in the alphabet $\{0, 1\}$ is called a \textsl{$\beta$-expansion} of $x$ if
\begin{align*}
x = \sum_{n \in \mathbb{N}} \omega_{n} \, \beta^{-n}.
\end{align*}
Through iterating the map $T_{\beta, \alpha}$ one obtains a subset of $\{ 0, 1\}^{\mathbb{N}}$ known as the intermediate $\beta$-shift $\Omega_{\beta, \alpha}$, where each $\omega \in \Omega_{\beta, \alpha}$ is a $\beta$-expansion and corresponds to a unique point in $[\alpha/(\beta-1), 1+\alpha/(\beta-1)]$; see \eqref{eq:commutative_diag} and the commentary following it for further details. By equipping $\Omega_{\beta, \alpha}$ with the left shift map $\sigma$, one obtains a dynamical system which is topologically conjugate to the dynamical system $\mathcal{S}_{\beta,\alpha}=(T_{\beta, \alpha}, [0, 1])$, namely one obtains a symbolic system which possesses the same ergodic properties as $\mathcal{S}_{\beta,\alpha}$. Note, in this article, by topologically conjugate we mean that the conjugacy is one-to-one everywhere except on a countable set on which the conjugacy is at most finite-to-one.
Open dynamical systems, namely systems with holes in the state space through which mass can leak away, have received a lot of attention; see \cite{Ur86,Schme,N09,BBF2014,KKLL} and references therein. We prove the following correspondence connecting $\mathcal{S}_{\beta,\alpha}$ and open dynamical systems driven by greedy $\beta$-transformations, namely intermediate $\beta$-transformations with no rotation factor, or equivalently when $\alpha = 0$.
\begin{theorem}\label{thm:main}
Given $(\beta, \alpha) \in \Delta$, there exist $t \in [0, 1]$ and $\beta' \in (1, 2)$ with $(T_{\beta, \alpha}, [0, 1])$ topologically conjugate to the open dynamical system $(T_{\beta', 0}\vert_{K^{+}_{\beta',0}(t)}, K^{+}_{\beta',0}(t))$, where
\begin{align*}
K^{+}_{\beta', 0}(t) \coloneqq \{ x \in[0, 1) \colon T_{\beta',0}^{n}(x)\not \in [0,t) \; \textup{for all} \; n \in \mathbb{N}_{0} \}.
\end{align*}
However, the converse does not hold, namely, there exist $t \in [0, 1]$ and $\beta' \in (1, 2)$ such that there does not exist a topological conjugacy between $(T_{\beta', 0}\vert_{K^{+}_{\beta',0}(t)}, K^{+}_{\beta', 0}(t))$ and $(T_{\beta, \alpha}, [0, 1])$ for any $(\beta, \alpha) \in \Delta$. Moreover, given $\beta' \in (1, 2)$ with $T_{\beta',0}^{n}(1) = 0$, for some $n \in \mathbb{N}$, there exists $\delta \in (0, \beta'^{-1})$ such that to each $t < \delta$ in the bifurcation set
\begin{align*}
E_{\beta',0}^{+} \coloneqq \{ t \in[0,1) \colon T_{\beta', 0}^{n}(t) \not\in [0, t) \; \textup{for all} \; n \in \mathbb{N}_{0} \}
\end{align*}
one may associate a unique $(\beta, \alpha) \in \Delta$ with $(T_{\beta, \alpha}, [0, 1])$ topologically conjugate to $(T_{\beta', 0}\vert_{K^{+}_{\beta',0}(t)}, K^{+}_{\beta',0}(t))$.
\end{theorem}
This result complements \cite[Proposition 3.1 and Theorem 3.5]{bundfuss_kruger_troubetzkoy_2011}. There, it is shown that every subshift of finite type and any greedy $\beta$-shift encodes a survivor set of $x \mapsto mx \bmod 1$, for some $m \in \mathbb{N}$ with $m \geq 2$. With this and \Cref{thm:main} at hand, we have that any intermediate $\beta$-shift encodes a survivor set of the doubling map.
We employ our characterisation given in \Cref{thm:main} to (1) build a Krieger embedding theorem for intermediate \mbox{$\beta$-transformations}, and (2) obtain new metric and topological results on survivor sets of intermediate \mbox{$\beta$-transformations}.
\begin{enumerate}[label={\rm(\arabic*)},leftmargin=*]
\item {\bfseries A Krieger embedding theorem for intermediate \mbox{$\beta$-transformations.}} Subshifts, such as $\Omega_{\beta, \alpha}$, are to dynamical systems what shapes like polygons and curves are to geometry. Subshifts which can be described by a finite set of forbidden words are called \textsl{subshifts of finite type} and play an essential role in the study of dynamical systems. One reason why subshifts of finite type are so useful is that they have a simple representation using a finite directed graph. Questions concerning the subshift can then often be phrased as questions about the graph's adjacency matrix, making them more tangible; see for instance \cite{LM,brin_stuck_2002} for further details on subshifts of finite type. Moreover, in the case of greedy $\beta$-shifts (that is, when $\alpha = 0$), one often first derives results for greedy $\beta$-shifts of finite type, and then one uses an approximation argument to determine the result for a general greedy $\beta$-shift; see for example \cite{DavidFarm2010,LL16}. Here we prove a Krieger embedding theorem for intermediate $\beta$-shifts. Namely, we show the following, complementing the work of \cite{LSSS} where the same result is proven except that the containment property given in Part~(iii) is reversed. Due to this reversed containment, our proof and that of \cite{LSSS}, although both of a combinatorial flavour, are substantially different.
%
\begin{corollary}\label{Cor_1}
Given $(\beta, \alpha) \in \Delta$, there exists a sequence $\{ (\beta_{n}, \alpha_{n}) \}_{n \in \mathbb{N}}$ in $\Delta$ with $\lim_{n\to \infty} (\beta_{n}, \alpha_{n}) = (\beta, \alpha)$ such that
\begin{enumerate}[label={\rm(\roman*)}]
\item $\Omega_{\beta_{n}, \alpha_{n}}$ is a subshift of finite type,
\item the Hausdorff distance between $\Omega_{\beta, \alpha}$ and $\Omega_{\beta_{n}, \alpha_{n}}$ converges to zero as $n$ tends to infinity, and
\item $\Omega_{\beta_{n}, \alpha_{n}} \subseteq \Omega_{\beta, \alpha}$.
\end{enumerate}
\end{corollary}
%
\noindent These results together with the results of \cite{LSSS} complement the corresponding result for the case when $\alpha = 0$ proven in \cite{P1960}, which asserts that any greedy $\beta$-shift can be approximated from \textsl{above} and \textsl{below} by a greedy $\beta$-shift of finite type.
\vspace{1em}
%
\item {\bfseries Metric and topological results on survivor sets of intermediate \mbox{$\beta$-transformations}.} Via our correspondence theorem (\Cref{thm:main}), we are able to transfer the results of \cite{KKLL} obtained for open dynamical systems driven by greedy $\beta$-transformations to general intermediate $\beta$-transformations. Specifically, we show the following, extending the results of \cite{KKLL} and complementing those of \cite{Ur86,N09}. Here we follow the notation used in \cite{KKLL}, and recall that an infinite word in the alphabet $\{0, 1\}$ is \textsl{balanced} if and only if the numbers of ones in any two subwords of the same length differ by at most $1$.
%
\begin{corollary}\label{Cor_2}
The bifurcation set $E_{\beta,\alpha}^{+} \coloneqq \{ t \in[0,1) \colon T_{\beta, \alpha}^{n}(t) \not\in [0, t) \; \textup{for all} \; n \in \mathbb{N}_{0} \}$ is a Lebesgue null set. Moreover, if the largest lexicographic word in $\Omega_{\beta, \alpha}$ is balanced, then $E_{\beta,\alpha}^{+}$ contains no isolated points.
%
\end{corollary}
\noindent If the largest lexicographic word in $\Omega_{\beta, \alpha}$ is not balanced, then under an additional technical assumption, in \Cref{cor:isoloated_pts}, we show that there exists a $\delta > 0$, such that $E_{\beta,\alpha}^{+} \cap [0, \delta]$ contains no isolated points. Further, letting $K^{+}_{\beta, \alpha}(t)$ denote the survivor set $\{ x \in[0,1) \colon T_{\beta,\alpha}^{n}(x)\not \in [0,t) \; \text{for all} \; n \in \mathbb{N}_{0} \}$, we have:
%
\begin{corollary}\label{Cor_3}
The dimension function $\eta_{\beta, \alpha} \colon t \mapsto \dim_{\mathcal{H}}(K_{\beta,\alpha}^{+}(t))$ is a devil's staircase function, that is, $\eta_{\beta,\alpha}(0) = 1$, $\eta_{\beta,\alpha}((1-\alpha)/\beta) = 0$, $\eta_{\beta, \alpha}$ is decreasing, and $\eta_{\beta,\alpha}$ is constant Lebesgue almost everywhere.
\end{corollary}
%
\noindent With \Cref{Cor_1,Cor_3} at hand, we can also prove the following.
%
\begin{corollary}\label{Cor_E_beta_alpha}
The bifurcation set $E_{\beta,\alpha}^{+}$ has full Hausdorff dimension.
\end{corollary}
%
\end{enumerate}
The sets $K^{+}_{\beta,\alpha}(t)$ can be seen as level sets of the set of badly approximable numbers in non-integer bases, that is,
\begin{align*}
\mathrm{BAD}_{\beta, \alpha}(0) \coloneqq \{ x \in [0,1] \colon 0 \not\in \overline{\{T_{\beta, \alpha}^n(x) \colon n\geq 0\}}\} = \bigcup_{t \in (0, 1)} K^{+}_{\beta, \alpha}(t).
\end{align*}
Moreover, for $\xi \in [0,1]$ one can study the more general set
\begin{align*}
\mathrm{BAD}_{\beta, \alpha}(\xi) \coloneqq \{ x \in [0,1] \colon \xi \not\in \overline{\{T_{\beta, \alpha}^n(x) \colon n\geq 0\}}\},
\end{align*}
which, by \Cref{Cor_3}, is a set of full Hausdorff dimension. When $\alpha = 0$, F\"arm, Persson and Schmeling \cite{DavidFarm2010} and later Hu and Yu \cite{HY} studied these sets and showed that they are winning, and hence that they have the large intersection property. To our knowledge, the present work is the first to consider the case $\alpha \neq 0$. Before stating our results on $\mathrm{BAD}_{\beta, \alpha}(\xi)$, we recall the notion of a winning set. In the 1960s Schmidt~\cite{S} introduced a topological game in which two players take turns choosing balls, each a subset of the previously chosen ball. There is a target set $S$ and the objective of Player~$1$ is to make sure that the point that is present in every ball chosen during the game is in $S$. The objective of Player~$2$ is to prevent this. A set is called winning when Player~$1$ can always build a winning strategy no matter how Player~2 plays.
\begin{definition}
Let $\alpha$ and $\gamma\in (0,1)$ be fixed and suppose we have two players, Player~1 and Player~2. Let Player~2 choose a closed initial interval $B_1\subset [0,1]$ and let Player~1 and Player~2 choose nested closed intervals such that $B_1 \supset W_1 \supset B_2 \supset W_2 \supset \ldots$ and $|W_{n}|=\alpha |B_n|$ and $|B_{n+1}|=\gamma |W_n|$. A set $S$ is called $(\alpha,\gamma)$-winning if there is a strategy for Player~1 to ensure that $\bigcap_{i\in \mathbb{N}} W_i \subset S$. The set $S$ is called $\alpha$-winning if it is $(\alpha,\gamma)$-winning for all $\gamma\in (0,1)$ and is called winning if it is $\alpha$-winning for some $\alpha\in (0,1)$.
\end{definition}
A key attribute of winning which makes it an interesting property to study is that winning sets have full Hausdorff dimension~\cite{S}. Another is that winning persists under taking intersections, that is, the intersection of two winning sets is again winning, and hence of full Hausdorff dimension~\cite{S}; this is not true in general for sets of full Hausdorff dimension. We also note that the property of winning is preserved under bijective affine transformations.
\begin{theorem}\label{thm:main_2}
Given $(\beta, \alpha) \in \Delta$ with $\Omega_{\beta, \alpha}$ a subshift of finite type, and $\xi \in [0, 1]$, the set $\mathrm{BAD}_{\beta,\alpha}(\xi)$ is winning.
\end{theorem}
We remark that in \cite{JT2009} a similar result for $C^{2}$-expanding Markov circle maps was proven, but that intermediate $\beta$-transformations do not fall into this regime. Further, with \Cref{thm:main_2} at hand and with \cite[Theorem 1]{DavidFarm2010} in mind, we conjecture that $\mathrm{BAD}_{\beta,\alpha}(\xi)$ is winning for all $(\beta, \alpha) \in \Delta$ and $\xi \in [0, 1]$.
Our work is organised as follows. In \Cref{sec:prelim} we present necessary definitions, preliminaries and auxiliary results. \Cref{sec:proof_thm_1_1,sec:proof_thm_1_6} are respectively devoted to proving \Cref{thm:main,thm:main_2}, and \Cref{sec:proof_cor_1_2,sec:proof_cor_1_3_4} respectively contain the proofs of \Cref{Cor_1} and of \Cref{Cor_2,Cor_3}. Additionally, in \Cref{sec:proof_cor_1_3_4}, we demonstrate how our theory may be used to numerically compute the Hausdorff dimension of $K_{\beta,\alpha}^{+}(t)$.
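To make the objects just introduced concrete, the following Python sketch (illustrative only: the parameter values are our own, the finite orbit length makes the test one-sided, and floating-point iteration of an expanding map drifts from the true orbit, so such experiments are heuristic) iterates $T_{\beta,\alpha}$ and checks whether an orbit avoids the hole $[0,t)$ for a prescribed number of steps, giving a crude finite-time probe of membership in the survivor set $K^{+}_{\beta,\alpha}(t)$.
\begin{verbatim}
def T(x, beta, alpha):
    """One step of the intermediate beta-transformation T_{beta,alpha}."""
    return (beta * x + alpha) % 1.0

def avoids_hole(x, t, beta, alpha, n_iter=10_000):
    """True if the first n_iter iterates of x avoid [0, t).
    Only a one-sided check for x in K^+_{beta,alpha}(t): the orbit
    might still enter the hole after n_iter steps."""
    for _ in range(n_iter):
        if 0.0 <= x < t:
            return False
        x = T(x, beta, alpha)
    return True

beta, alpha = 1.6, 0.2
print(avoids_hole(0.3, 0.05, beta, alpha))
\end{verbatim}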
\section{Notation and preliminaries}\label{sec:prelim} \subsection{Subshifts} Let $m \geq 2$ denote a natural number and set $\Lambda = \{0, 1, \ldots, m-1\}$. We equip the space $\Lambda^\mathbb{N}$ of infinite sequences indexed by $\mathbb{N}$ with the topology induced by the \textsl{word metric} $\mathscr{D} \colon \Lambda^\mathbb{N} \times \Lambda^\mathbb{N} \to \mathbb{R}$ given by \begin{align*} \mathscr{D}(\omega, \nu) \coloneqq \begin{cases} 0 & \text{if} \; \omega = \nu,\\ 2^{- \lvert\omega \wedge \nu\rvert + 1} & \text{otherwise}. \end{cases} \end{align*} Here, $\rvert \omega \wedge \nu \lvert \coloneqq \min \, \{ \, n \in \mathbb{N} \colon \omega_{n} \neq \nu_n \}$, for $\omega$ and $\nu \in \Lambda^{\mathbb{N}}$ with $\omega \neq \nu$, where for an element $\omega \in \Lambda^{\mathbb{N}}$ we write $\omega=\omega_1\omega_2\cdots$. Note, when equipping $\Lambda$ with the discrete topology, the topology induced by $\mathscr{D}$ on $\Lambda^{\mathbb{N}}$ coincides with the product topology on $\Lambda^{\mathbb{N}}$. We let $\sigma \colon \Lambda^{\mathbb{N}} \to \Lambda^{\mathbb{N}}$ denote the \textsl{left-shift map} defined by $\sigma(\omega_{1} \omega_{2} \cdots) \coloneqq \omega_{2} \omega_{3} \cdots$, and for $n \in \mathbb{N}$, we set $\omega\rvert_{n} = \omega_{1} \omega_{2} \cdots \omega_{n}$. A \textsl{subshift} is any closed set $\Omega \subseteq \Lambda^\mathbb{N}$ with $\sigma(\Omega) \subseteq \Omega$. Given a subshift $\Omega$, we set $\Omega\vert_{0} = \{ \varepsilon\}$, where $\varepsilon$ denotes the empty word, and for $n \in \mathbb{N}$, we set \begin{align*} \Omega\lvert_{n} \coloneqq \left\{ \omega_{1} \cdots \omega_{n} \in \Lambda^{n} \colon \,\text{there exists} \; \xi \in \Omega \; \text{with} \; \xi|_n = \omega_{1} \cdots \omega_{n} \right\} \end{align*} and write $\Omega^{*} \coloneqq \bigcup_{n \in \mathbb{N}_{0}} \Omega\lvert_{n}$ for the collection of all finite words. We denote by $\lvert \Omega\vert_{n} \rvert$ the cardinality of $\Omega\vert_{n}$, and for $\omega \in \Omega\vert_{n}$, we set $\lvert \omega \rvert = n$. We extend the domain of $\sigma$ to $\Omega^{*}$, by setting $\sigma(\varepsilon) \coloneqq \varepsilon$, and for $n \in \mathbb{N}$, letting \begin{align*} \sigma(\omega_{1} \omega_{2} \cdots \omega_{n}) \coloneqq \begin{cases} \omega_{2} \omega_{3} \cdots \omega_{n} & \text{if} \; n \neq 1,\\ \varepsilon & \text{otherwise}. \end{cases} \end{align*} For $\omega = \omega_{1} \cdots \omega_{\lvert \omega \rvert} \in \Omega^{*}$ and $\xi = \xi_{1} \xi_{2} \cdots \in \Omega \cup \Omega^{*}$ we denote the concatenation $\omega_{1} \cdots \omega_{\lvert \omega \rvert} \ \xi_{1} \xi_{2} \cdots$ by $\omega \ \xi$. \begin{definition} A subshift $\Omega$ is said to be \textsl{of finite type} if there exists $M \in \mathbb{N}$ such that, $\omega_{n - M + 1} \cdots \omega_{n} \ \xi_{1} \cdots \xi_{m} \in \Omega^{*}$, for all $\omega_{1} \cdots \omega_{n}$ and $\xi_{1} \cdots \xi_{m} \in \Omega^{*}$ with $n, m \in \mathbb{N}$ and $n \geq M$, if and only if $\omega_{1} \cdots \omega_{n} \ \xi_{1} \cdots \xi_{m} \in \Omega^{*}$. \end{definition} The following result gives an equivalent condition for when a subshift is of finite type. 
\begin{theorem}[{\cite[Theorem 2.1.8]{LM}}]
A subshift $\Omega \subseteq \Lambda^{\mathbb{N}}$ is of finite type if and only if there exists a finite set $F \subset \Omega^{*}$ with $\Omega = \mathcal{X}_{F}$, where $\mathcal{X}_{F} \coloneqq \{ \omega \in \Lambda^{\mathbb{N}} \colon \sigma^{m}(\omega)\vert_{\lvert \xi \rvert} \neq \xi \; \text{for all} \; \xi \in F \; \text{and} \; m \in \mathbb{N}\}$.
\end{theorem}
Two subshifts $\Omega$ and $\Psi$ are said to be \textsl{topologically conjugate} if there exists a $\phi \colon \Omega \to \Psi$ that is surjective, one-to-one everywhere except on a countable set on which it is at most finite-to-one, and which satisfies $\sigma \circ \phi(\omega) = \phi \circ \sigma(\omega)$ for all $\omega \in \Omega$. We call $\phi$ the \textsl{conjugacy}. In the case that $m = 2$, a particular conjugacy which we will make use of is the \textsl{reflection map} $R$ defined by $R(\omega_{1} \omega_{2} \cdots) = (1-\omega_{1})(1-\omega_{2})\cdots$ for $\omega = \omega_{1} \omega_{2} \cdots \in \{0,1\}^{\mathbb{N}}$. This concept of two subshifts being topologically conjugate naturally extends to general dynamical systems, see for instance \cite{LM,brin_stuck_2002}.
An infinite word $\omega = \omega_{1} \omega_{2} \cdots \in \Lambda^{\mathbb{N}}$ is called \textsl{periodic} with \textsl{period} $n \in \mathbb{N}$ if and only if, for all $m \in \mathbb{N}$, we have $\omega_{1} \cdots \omega_{n} = \omega_{(m - 1)n + 1} \cdots \omega_{m n}$, in which case we write $\omega = \omega\vert_{n}^{\infty}$, and denote the smallest period of $\omega$ by $\operatorname{per}(\omega)$. Similarly, an infinite word $\omega = \omega_{1} \omega_{2} \cdots \in \Lambda^{\mathbb{N}}$ is called \textsl{eventually periodic} with \textsl{period} $n \in \mathbb{N}$ if there exists $k \in \mathbb{N}$ such that, for all $m \in \mathbb{N}$, we have $\omega_{k+1} \cdots \omega_{k+n} = \omega_{k+(m - 1)n + 1} \cdots \omega_{k+ m n}$, in which case we write $\omega = \omega_{1} \cdots \omega_{k} (\omega_{k+1} \cdots \omega_{k+n})^\infty$.
\subsection{Intermediate \texorpdfstring{$\beta$}{beta}-shifts}\label{sec:beta-shifts}
For $(\beta, \alpha) \in \Delta$ we set $p = p_{\beta, \alpha} = (1-\alpha)/\beta$ and define the \textsl{upper $T_{\beta, \alpha}$-expansion} $\tau_{\beta, \alpha}^{+}(x)$ of $x \in [0, 1]$ to be the infinite word $\omega_{1} \omega_{2} \cdots \in \{ 0, 1\}^{\mathbb{N}}$, where, for $n \in \mathbb{N}$,
\begin{align}\label{eq:upper_kneading}
\omega_{n} \coloneqq
\begin{cases}
0 & \quad \text{if } T_{\beta,\alpha}^{n-1}(x) < p,\\
1 & \quad \text{otherwise,}
\end{cases}
\end{align}
and define the \textsl{lower $T_{\beta, \alpha}$-expansion} of $x$ to be $\tau^{-}_{\beta, \alpha}(x) \coloneqq \lim_{y \nearrow x} \tau_{\beta,\alpha}^{+}(y)$. Note, one can also define $\tau^{-}_{\beta, \alpha}(x)$ analogously to $\tau^{+}_{\beta, \alpha}(x)$ by using the map $T_{\beta,\alpha}^{-} \colon x \mapsto \beta x + \alpha$ if $x \leq p$, and $x \mapsto \beta x + \alpha - 1$ otherwise, in place of $T_{\beta, \alpha}$, and by changing the \textsl{less than} to \textsl{less than or equal to} in \eqref{eq:upper_kneading}, see \cite[Section 2.2]{LSSS}. With this in mind, and for ease of notation, sometimes we may write $T_{\beta,\alpha}^{+}$ for $T_{\beta, \alpha}$.
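The upper expansion defined via \eqref{eq:upper_kneading} is readily computed for any given point. The sketch below (our own illustration; the usual floating-point caveats for expanding maps apply, so for rigorous work one would use exact arithmetic or the lexicographic characterisations below) produces the first few letters of $\tau^{+}_{\beta,\alpha}(x)$, and in particular of the word $\tau^{+}_{\beta,\alpha}(p)$, which plays a distinguished role in what follows.
\begin{verbatim}
def upper_expansion(x, beta, alpha, n_digits=20):
    """First n_digits letters of tau^+_{beta,alpha}(x): emit 0 while the
    orbit under T_{beta,alpha} lies below p = (1 - alpha)/beta, else 1."""
    p = (1.0 - alpha) / beta
    word = []
    for _ in range(n_digits):
        word.append(0 if x < p else 1)
        x = (beta * x + alpha) % 1.0
    return word

beta, alpha = 1.6, 0.2
p = (1.0 - alpha) / beta
print(upper_expansion(p, beta, alpha))  # upper kneading invariant, truncated
\end{verbatim}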
We denote the image of $[0,1)$ under $\tau_{\beta, \alpha}^{+}$ by $\Omega^{+}_{\beta, \alpha}$, the image of $(0,1]$ under $\tau_{\beta, \alpha}^{-}$ by $\Omega^{-}_{\beta, \alpha}$, and set $\Omega_{\beta, \alpha} \coloneqq \Omega_{\beta, \alpha}^{+} \cup \Omega_{\beta, \alpha}^{-}$. We refer to $\Omega_{\beta, \alpha}$ as an intermediate $\beta$-shift and define the \textsl{upper} and \textsl{lower kneading invariants} of $\Omega_{\beta,\alpha}$ to be the infinite words $\tau^{\pm}_{\beta, \alpha}(p)$, respectively. The following result shows that $\tau^{\pm}_{\beta, \alpha}(p)$ completely determine $\Omega_{\beta,\alpha}$.
\begin{theorem}[{\cite{P1960,HS:1990,AM:1996,KS:2012,BHV:2011}}]\label{thm:Structure}
For $(\beta, \alpha) \in \Delta$, the spaces $\Omega_{\beta, \alpha}^{\pm}$ are completely determined by the upper and lower kneading invariants of $\Omega_{\beta, \alpha}$, namely
\begin{align*}
\Omega_{\beta, \alpha}^{+} &= \{ \omega \in \{ 0, 1\}^{\mathbb{N}} \colon \tau_{\beta, \alpha}^{+}(0) \preceq \sigma^{n}(\omega) \prec \tau_{\beta, \alpha}^{-}(p) \; \textup{or} \; \tau_{\beta, \alpha}^{+}(p) \preceq \sigma^{n}(\omega) \prec \tau_{\beta, \alpha}^{-}(1) \; \textup{for all} \; n \in \mathbb{N}_{0} \},\\
\Omega_{\beta, \alpha}^{-} &= \{ \omega \in \{ 0, 1\}^{\mathbb{N}} \colon \tau_{\beta, \alpha}^{+}(0) \prec \sigma^{n}(\omega) \preceq \tau_{\beta, \alpha}^{-}(p) \; \textup{or} \; \tau_{\beta, \alpha}^{+}(p) \prec \sigma^{n}(\omega) \preceq \tau_{\beta, \alpha}^{-}(1) \; \textup{for all} \; n \in \mathbb{N}_{0} \}.
\end{align*}
Here, $\prec$, $\preceq$, $\succ$ and $\succeq$ denote the lexicographic orderings on $\{ 0 ,1\}^{\mathbb{N}}$. Moreover, the cardinality of $\Omega_{\beta, \alpha}^{\pm}$ is equal to that of the continuum, and $\Omega_{\beta, \alpha}$ is closed with respect to the metric $\mathscr{D}$. Hence, $\Omega_{\beta, \alpha}$ is a subshift.
\end{theorem}
This result establishes the importance of the kneading invariants of $\Omega_{\beta, \alpha}$ for a given $(\beta, \alpha) \in \Delta$, and so it is natural to ask, for a fixed $\beta \in (1, 2)$, whether they are monotonic or continuous in $\alpha$. The following proposition answers this.
\begin{proposition}[{\cite{BHV:2011,Cooperband2018ContinuityOE}}]\label{prop:mon_cont_kneading}
Let $\beta \in (1,2)$ be fixed.
\begin{enumerate}[label={\rm(\arabic*)}]
\item The maps $a \mapsto \tau_{\beta,a}^{\pm}(p_{\beta, a})$ are strictly increasing with respect to the lexicographic ordering.
\item The map $a \mapsto \tau_{\beta, a}^{+}(p_{\beta, a})$ is right continuous, and the map $a \mapsto \tau_{\beta,a}^{-}(p_{\beta, a})$ is left continuous.
\item If $\alpha \neq 0$ and $\tau_{\beta, \alpha}^{+}(p_{\beta, \alpha})$ is not periodic, then $a \mapsto \tau_{\beta,a}^{+}(p_{\beta, a})$ is continuous at $\alpha$, and if $\alpha \neq \beta-1$ and $\tau_{\beta,\alpha}^{-}(p_{\beta, \alpha})$ is not periodic, then $a \mapsto \tau_{\beta,a}^{-}(p_{\beta, a})$ is continuous at $\alpha$.
\item If $\tau_{\beta,\alpha}^{+}(p_{\beta, \alpha})$ is periodic with period $M$, for a given $\alpha \in (0, 2-\beta]$, then, given $m \in \mathbb{N}$, there exists a real number $\delta > 0$ such that $\tau_{\beta,\alpha-\delta'}^{+}(p_{\beta, \alpha-\delta'})\vert_{m} = \tau_{\beta,\alpha}^{+}(p_{\beta, \alpha})\vert_{M}\tau_{\beta,\alpha}^{-}(p_{\beta, \alpha})\vert_{m-M}$ for all $\delta' \in (0, \delta)$.
\item If $\tau_{\beta,\alpha}^{-}(p_{\beta, \alpha})$ is periodic with period $M$, for a given $\alpha \in [0, 2-\beta)$, then, given $m \in \mathbb{N}$, there exists a real number $\delta > 0$ such that $\tau_{\beta,\alpha+\delta'}^{-}(p_{\beta, \alpha+\delta'})\vert_{m} = \tau_{\beta,\alpha}^{-}(p_{\beta, \alpha})\vert_{M}\tau_{\beta,\alpha}^{+}(p_{\beta, \alpha})\vert_{m-M}$ for all $\delta' \in (0, \delta)$.
\end{enumerate}
\end{proposition}
Another natural question is when two infinite words in the alphabet $\{0, 1\}$ arise as the kneading invariants of an intermediate $\beta$-shift. This question was addressed in \cite{barnsley_steiner_vince_2014}, where the following solution was derived; to state it we require the following notation. Given $\omega$ and $\nu \in \{0,1\}^\mathbb{N}$ with $\sigma(\nu) \preceq \omega \preceq \nu \preceq \sigma(\omega)$ we set
\begin{align*}
\Omega^{+}(\omega,\nu) &\coloneqq \{ \xi \in \{0,1\}^{\mathbb{N}} \colon \sigma(\nu) \preceq \sigma^{n}(\xi) \prec \omega \; \text{or} \; \nu \preceq \sigma^{n}(\xi) \prec \sigma(\omega) \; \text{for all} \; n \in \mathbb{N}_{0} \},\\
\Omega^{-}(\omega,\nu) &\coloneqq \{ \xi \in \{0,1\}^{\mathbb{N}} \colon \sigma(\nu) \prec \sigma^n(\xi) \preceq \omega \; \text{or} \; \nu \prec \sigma^{n} (\xi) \preceq \sigma(\omega) \; \text{for all} \; n \in \mathbb{N}_{0} \}.
\end{align*}
\begin{theorem}[{\cite{barnsley_steiner_vince_2014}}]\label{thm:BSV14}
Two infinite words $\omega = \omega_{1}\omega_{2}\cdots$ and $\nu=\nu_{1}\nu_{2}\cdots \in \{0,1\}^\mathbb{N}$ are kneading invariants of an intermediate $\beta$-shift $\Omega_{\beta, \alpha}$, for some $(\beta, \alpha) \in \Delta$, if and only if the following four conditions hold.
\begin{enumerate}[label={\rm(\arabic*)}]
\item $\omega_1=0$ and $\nu_1=1$,
\item $\omega\in\Omega^-(\omega, \nu) $ and $\nu\in\Omega^+(\omega, \nu)$,
\item $\lim_{n\to\infty} \log(\lvert \Omega^{+}(\omega, \nu)\vert_{n} \rvert)/n>0$, and
\item if $\omega,\nu\in\{\xi,\zeta \}^\mathbb{N}$ for two finite words $\xi$ and $\zeta$ in the alphabet $\{0,1\}$ with length greater than or equal to three, such that $\xi_1\xi_2=01$, $\zeta_1\zeta_2=10$, $\xi^\infty \in \Omega^-(\xi^\infty, \zeta^\infty)$ and $\zeta^\infty \in \Omega^{+}(\xi^\infty, \zeta^\infty)$, then $\omega=\xi^\infty$ and $\nu=\zeta^\infty$.
\end{enumerate}
\end{theorem}
This result together with \Cref{thm:Structure} can be seen as a generalisation of the following seminal result of Parry.
\begin{theorem}[{\cite[Corollary 1]{P1960}}]\label{thm:Parry_converse}
If $\omega \in \{0, 1\}^{\mathbb{N}}$ with $\sigma^{n}(\omega) \neq 0^\infty$ for all $n \in \mathbb{N}$, then there exists a $\beta \in (1,2)$ with $\omega = \tau_{\beta, 0}^{-}(1)$ if and only if $\sigma^{m}(\omega) \preceq \omega$ for all $m \in \mathbb{N}$.
\end{theorem}
Combining this result with \Cref{thm:Structure}, we obtain the following, which will be utilised in our proof of \Cref{thm:main}.
\begin{corollary}\label{cor:From_greedy_to_intermediate}
Given $(\beta, \alpha) \in \Delta$, there exists $\beta' \in (1,2)$ such that $\tau_{\beta,\alpha}^{-}(1) = \tau_{\beta',0}^{-}(1)$.
\end{corollary}
In the sequel we will also make use of the projection $\pi_{\beta, \alpha} \colon \{ 0, 1 \}^{\mathbb{N}} \to [0, 1]$ defined by
\begin{align*}
\pi_{\beta, \alpha}(\omega_{1} \omega_{2} \cdots) \coloneqq \frac{\alpha}{1 - \beta} + \sum_{k \in \mathbb{N}} \frac{\omega_{k}}{\beta^k}.
\end{align*}
We note that $\pi_{\beta, \alpha}$ is linked to the iterated function system $([0, 1]; f_{0} \colon x \mapsto \beta^{-1}x, \, f_{1} \colon x \mapsto \beta^{-1}(x+1))$ via the equality
\begin{align*}
\pi_{\beta, \alpha}(\omega_{1} \omega_{2} \cdots) = \alpha / (1-\beta) + \lim_{n \to \infty} f_{\omega_{1}} \circ \cdots \circ f_{\omega_{n}}([0, 1]),
\end{align*}
and the iterated function system $([0, 1]; f_{\beta, \alpha, 0} \colon x \mapsto \beta^{-1}x - \alpha\beta^{-1}, \, f_{\beta, \alpha, 1} \colon x \mapsto \beta^{-1}x - (\alpha-1)\beta^{-1})$ via the equality
\begin{align}\label{eq:alt_IFS}
\pi_{\beta, \alpha}(\omega_{1} \omega_{2} \cdots) =\lim_{n \to \infty} f_{\beta,\alpha,\omega_{1}} \circ \cdots \circ f_{\beta,\alpha,\omega_{n}}([0, 1]).
\end{align}
We refer the reader to \cite{F:1990} for further details on iterated function systems. An important property of $\pi_{\beta, \alpha}$ is that the following diagrams commute.
\begin{align}\label{eq:commutative_diag}
\begin{aligned}
\xymatrix@C+2pc{ \Omega^{+}_{\beta, \alpha} \ar@/_/[d]_{\pi_{\beta, \alpha}} \ar[r]^{\sigma} & \Omega_{\beta, \alpha}^{+} \ar@/^/[d]^{\pi_{\beta, \alpha}} \\ {[0, 1)} \ar@/_/[u]_{\tau_{\beta,\alpha}^{+}} \ar[r]_{T_{\beta, \alpha}} & \ar@/^/[u]^{\tau_{\beta,\alpha}^{+}} [0, 1)}
\end{aligned}
\qquad\qquad
\begin{aligned}
\xymatrix@C+2pc{ \Omega^{-}_{\beta, \alpha} \ar@/_/[d]_{\pi_{\beta, \alpha}} \ar[r]^{\sigma} & \Omega_{\beta, \alpha}^{-} \ar@/^/[d]^{\pi_{\beta, \alpha}} \\ {(0, 1]} \ar@/_/[u]_{\tau_{\beta,\alpha}^{-}} \ar[r]_{T^{-}_{\beta, \alpha}} & \ar@/^/[u]^{\tau_{\beta,\alpha}^{-}} (0, 1]}
\end{aligned}
\end{align}
This result is verifiable from the definitions of the involved maps, and a sketch of a proof can be found in \cite{BHV:2011}. It also yields that each $\omega \in \Omega_{\beta, \alpha}$ is a $\beta$-expansion, and corresponds to the unique point
\begin{align*}
\sum_{k \in \mathbb{N}} \frac{\omega_{k}}{\beta^{k}} = \pi_{\beta,\alpha}(\omega) - \frac{\alpha}{1-\beta}
\end{align*}
in $[\alpha/(\beta-1), 1+\alpha/(\beta-1)]$. A particular expansion which we will make use of is $\tau_{\beta,0}^{-}(1)$, which is referred to as the \textsl{quasi-greedy $\beta$-expansion of $1$}. The commutativity of the diagrams given in \eqref{eq:commutative_diag} also implies that the dynamical systems $(\Omega_{\beta, \alpha}, \sigma)$ and $([0, 1), T_{\beta, \alpha})$ are topologically conjugate. This, in tandem with \Cref{thm:Structure}, yields that the upper and lower kneading invariants completely determine the dynamics of $T_{\beta, \alpha}$. Additionally, we have the following.
\begin{theorem}[{\cite{GH,BHV:2011}}]\label{thm:Laurent}
Let $(\beta, \alpha) \in \Delta$ be fixed. If $\omega = \omega_{1}\omega_{2} \cdots $ and $\nu = \nu_{1} \nu_{2} \cdots $, respectively, denote the upper and lower kneading invariants of $\Omega_{\beta, \alpha}$, then $\beta$ is the maximal real root of the Laurent series
\begin{align*}
\pi_{z,\alpha}(\omega) - \pi_{z,\alpha}(\nu) = \sum_{k \in \mathbb{N}} (\omega_{k} - \nu_{k})\,z^{-k}.
\end{align*}
\end{theorem}
The above allows us to transfer results on the dynamical system $(\Omega_{\beta, \alpha}, \sigma)$ to $([0, 1], T_{\beta, \alpha})$ and vice versa. We will utilise this in the proofs of our main results, and thus will make use of the following symbolic representations of $K_{\beta, \alpha}^{+}(t)$ and $E_{\beta, \alpha}^{+}$ defined in \Cref{sec:intro}.
For $(\beta, \alpha) \in \Delta$ and $t \in [0,1]$, we set
\begin{align*}
\mathcal{K}^{+}_{\beta,\alpha}(t) &\coloneqq \{ \omega \in \{0,1\}^\mathbb{N} \colon \tau_{\beta,\alpha}^{+}(t) \preceq \sigma^{n}(\omega) \prec \tau_{\beta,\alpha}^{-}(1) \; \text{for all} \; n \in \mathbb{N}_{0} \}
\intertext{and we let}
\mathcal{E}_{\beta,\alpha}^{+} &\coloneqq \{ \omega \in \{0,1\}^\mathbb{N} \colon \omega \preceq \sigma^{n}(\omega) \prec \tau_{\beta,\alpha}^{-}(1) \; \text{for all} \; n \in \mathbb{N}_{0} \}.
\end{align*}
In addition to this, we will utilise the following Ledrappier-Young formula due to Raith \cite{R}. For $t \in (0,1]$,
\begin{align}\label{eq:entopen}
\dim_{H}(K_{\beta,\alpha}(t)) = \frac{h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K_{\beta,\alpha}(t)})}{\log(\beta)},
\end{align}
where $K_{\beta,\alpha}(t) \coloneqq \{ x \in[0,1] \colon T_{\beta,\alpha}^{n}(x)\not \in (0,t) \; \text{for all} \; n \in \mathbb{N}_{0} \}$ and where $h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K_{\beta,\alpha}(t)})$ denotes the topological entropy of the dynamical system $(K_{\beta,\alpha}(t), T_{\beta,\alpha}\vert_{K_{\beta,\alpha}(t)})$. Here, for a given subset $L \subseteq [0, 1]$ we set
\begin{align*}
h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{L}) \coloneqq \lim_{n \to \infty} \frac{\log (\lvert \tau_{\beta,\alpha}^{+}(L)\vert_{n}\rvert)}{n};
\end{align*}
see \cite{LM,brin_stuck_2002,Walters_1982}, and references therein, for further details on topological entropy. More specifically, in \Cref{sec:proof_cor_1_3_4}, we apply the following result, which is a consequence of Raith's Ledrappier-Young formula \eqref{eq:entopen} and \cite[Proposition 2.6]{KKLL}.
\begin{proposition}\label{prop:ent}
For $(\beta, \alpha) \in \Delta$ and $t \in [0,1]$,
\begin{align}\label{eq:entropy}
h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K_{\beta,\alpha}(t)}) = h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K^{+}_{\beta,\alpha}(t)})
\end{align}
and hence
\begin{align}\label{eq:ent}
\dim_{H}(K^{+}_{\beta,\alpha}(t)) = \frac{h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K^{+}_{\beta,\alpha}(t)})}{\log(\beta)}.
\end{align}
\end{proposition}
\begin{proof}
Since the set $K_{\beta,\alpha}(t)\setminus K^{+}_{\beta,\alpha}(t)$ is countable, we have $\dim_H(K_{\beta,\alpha}(t))=\dim_H(K^{+}_{\beta,\alpha}(t))$. The Ledrappier-Young formula given in \eqref{eq:ent} is therefore a direct consequence of \eqref{eq:entopen} and \eqref{eq:entropy}. The proof of \eqref{eq:entropy} follows from a small adaptation of the proof of \cite[Proposition 2.6]{KKLL}, which we present below. If $0 \not\in K_{\beta,\alpha}(t)$ or $t=0$, then $\mathcal{K}^{+}_{\beta,\alpha}(t) =\mathcal{K}_{\beta,\alpha}(t)$, and if $t\not \in E^{+}_{\beta,\alpha}$, then there exists a $t^*>t$ with $K^{+}_{\beta,\alpha}(t)=K^{+}_{\beta,\alpha}(t^*)$. The first of these two statements follows directly from the definitions of $K_{\beta,\alpha}(t)$ and $K_{\beta,\alpha}^{+}(t)$, and the second can be seen to hold as follows. For every $t \not\in E^{+}_{\beta,\alpha}$ there exists a smallest natural number $N$ such that $T_{\beta,\alpha}^N(t) \in [0,t)$. Let $\delta = \min \{ p_{\beta, \alpha} - T_{\beta,\alpha}^{n}(t) \colon n \in \{ 0, 1, \dots, N-1 \} \; \text{and} \; T_{\beta,\alpha}^{n}(t) < p_{\beta, \alpha} \}$ and set $\varepsilon = \min\{t-T_{\beta,\alpha}^N(t), \delta\}/\beta^{N}$.
By construction, for all $s\in (t,t+\varepsilon)$, we have that $T_{\beta,\alpha}^N(s)\in [0,t) \subset [0, s)$, and hence that $s\not\in K^{+}_{\beta,\alpha}(s)$ and $s\not\in K^{+}_{\beta,\alpha}(t)$. This implies that $K^{+}_{\beta,\alpha}(t)= K^{+}_{\beta,\alpha}(s)$ for all $s\in (t,t+\varepsilon)$. In fact, letting $t^{*} = \inf \{ s \in E^{+}_{\beta,\alpha} \colon s > t \}$, a similar justification yields that $K^{+}_{\beta,\alpha}(t)= K^{+}_{\beta,\alpha}(s)$ for all $s \in (t,t^*)$. Altogether, this implies that $t^* \in E^{+}_{\beta,\alpha}$. Therefore, it suffices to show that, if $t \in E^{+}_{\beta,\alpha} \setminus \{0\}$ and $0\in K_{\beta,\alpha}(t)$, then $h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K_{\beta,\alpha}(t)})= h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K^{+}_{\beta,\alpha}(t)})$. To this end, suppose that $t\in E^{+}_{\beta,\alpha} \setminus \{0\}$, in which case $\sigma^{n}(\tau^{+}_{\beta,\alpha}(t)) \neq \tau^{+}_{\beta,\alpha}(0)$ for all $n \in \mathbb{N}_{0}$. Setting
\begin{align*}
\mathcal{K}_{\beta,\alpha}^{0}(t) &\coloneqq \{ \omega \in \{0,1\}^{\mathbb{N}} \colon \text{there exists} \; m \in \mathbb{N}_{0} \; \text{with} \; \sigma^{m}(\omega) = \tau^{+}_{\beta,\alpha}(0) \; \text{and} \; \tau^{+}_{\beta,\alpha}(t) \prec \sigma^{n}(\omega) \prec \tau^{-}_{\beta,\alpha}(1) \; \text{for all} \; n \in \mathbb{N}_{0} \}
\end{align*}
and letting $\mathcal{K}_{\beta,\alpha}(t)$ denote the symbolic representation of $K_{\beta,\alpha}(t)$, namely letting
\begin{align*}
\mathcal{K}_{\beta,\alpha}(t) &\coloneqq \{ \omega \in \{0,1\}^{\mathbb{N}} \colon \sigma^{n}(\omega) = \tau^{+}_{\beta,\alpha}(0) \; \text{or} \; \tau^{+}_{\beta,\alpha}(t) \preceq \sigma^{n}(\omega) \prec \tau^{-}_{\beta,\alpha}(1) \; \text{for all} \; n \in \mathbb{N}_{0} \},
\end{align*}
we have that $\mathcal{K}_{\beta,\alpha}(t)\setminus \mathcal{K}_{\beta,\alpha}^{+}(t) = \mathcal{K}_{\beta,\alpha}^{0}(t)$. Let $k \in \mathbb{N}$ be fixed, and let $\zeta \in \mathcal{K}_{\beta,\alpha}^{0}(t)\vert_{k}$ with $\zeta \neq \tau^{+}_{\beta,\alpha}(0)\vert_{k}$. By construction, there exists $\omega \in \mathcal{K}_{\beta,\alpha}^{0}(t)$ with $\omega\vert_{k} = \zeta$. Let $j$ be the smallest natural number such that $\sigma^{j}(\omega) = \tau^{+}_{\beta,\alpha}(0)$ and set $\nu = \omega\vert_{j-1}\xi_{j}$, where $\xi_{j}$ denotes the $j$-th letter of $\tau^{+}_{\beta,\alpha}(0)$. Observe that
\begin{align}\label{eq:ent_proof}
\tau^{+}_{\beta,\alpha}(t)\vert_{j-i} \preceq \sigma^{i}(\nu) \preceq \tau^{+}_{\beta,\alpha}(1)\vert_{j-i}
\end{align}
for all $i \in \{0, 1, \dots, j-1\}$. Let $i^{*} \in \{0, 1, \dots, j-1\}$ be the smallest integer such that $\sigma^{i^{*}}(\nu) = \tau^{+}_{\beta,\alpha}(t)\vert_{j-i^{*}}$, and if strict inequality holds in the lower bound of \eqref{eq:ent_proof} for all $i \in \{0, 1, \dots, j-1\}$, then set $i^{*}=j$. By the minimality of $i^{*}$, we have $\nu \, \sigma^{j-i^{*}+1}(\tau^{+}_{\beta,\alpha}(t)) = \nu\vert_{i^{*}} \tau^{+}_{\beta,\alpha}(t) \in \mathcal{K}_{\beta,\alpha}^{+}(t)$. Noting that $\nu\vert_{j-1} = \zeta\vert_{j-1}$ if $j \leq k$, and $\nu\vert_{k} = \zeta\vert_{k}$ if $j \geq k+1$, we have that
\begin{align}\label{eq:ent_proof_2}
\lvert \mathcal{K}_{\beta,\alpha}^{0}(t)\vert_{k} \rvert \leq 1 + 2 \sum_{j = 0}^{k} \, \lvert \mathcal{K}_{\beta,\alpha}^{+}(t)\vert_{j} \rvert \leq 3 (k+1) \lvert \mathcal{K}_{\beta,\alpha}^{+}(t)\vert_{k} \rvert.
\end{align}
Since $\mathcal{K}_{\beta,\alpha}(t)\setminus \mathcal{K}_{\beta,\alpha}^{+}(t) = \mathcal{K}_{\beta,\alpha}^{0}(t)$, and since by definition,
\begin{align*}
h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K_{\beta,\alpha}(t)}) = \lim_{n \to \infty} \frac{\log(\lvert \mathcal{K}_{\beta,\alpha}(t)\vert_{n} \rvert)}{n} \quad \text{and} \quad h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K^{+}_{\beta,\alpha}(t)}) = \lim_{n \to \infty} \frac{\log(\lvert \mathcal{K}^{+}_{\beta,\alpha}(t)\vert_{n} \rvert)}{n},
\end{align*}
the inequality given in \eqref{eq:ent_proof_2} implies that $h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K^{+}_{\beta,\alpha}(t)}) \geq h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K_{\beta,\alpha}(t)})$. As $K^{+}_{\beta,\alpha}(t) \subseteq K_{\beta,\alpha}(t)$ and as the entropy of a subsystem cannot exceed that of its parent system, the result follows.
\end{proof}
\subsection{\texorpdfstring{$\beta$}{beta}-shifts of finite type} In \cite{LSS16} a study of when an intermediate $\beta$-shift is of finite type was carried out. This work was continued in \cite{LSSS}, where it was shown that any intermediate $\beta$-shift can be \textsl{approximated from below} by an intermediate $\beta$-shift of finite type. These results are summarised in the following theorem.
\begin{theorem}[{\cite{LSSS,LSS16}}]\label{thm:LSSS}
An intermediate $\beta$-shift $\Omega_{\beta,\alpha}$ is a subshift of finite type if and only if the kneading invariants $\tau_{\beta,\alpha}^{+}(p_{\beta,\alpha})$ and $\tau_{\beta,\alpha}^{-}(p_{\beta,\alpha})$ are periodic. Moreover, given $(\beta, \alpha) \in \Delta$ and $\epsilon > 0$, there exists a $(\beta', \alpha') \in \Delta$, with $0 \leq \beta' - \beta < \epsilon$ and $\lvert \alpha - \alpha' \rvert < \epsilon$ and such that
\begin{enumerate}[label={\rm(\arabic*)}]
\item $\Omega_{\beta', \alpha'}$ is a subshift of finite type,
\item the Hausdorff distance between $\Omega_{\beta, \alpha}$ and $\Omega_{\beta', \alpha'}$ is less than $\epsilon$, and
\item $\Omega_{\beta, \alpha} \subseteq \Omega_{\beta', \alpha'}$.
\end{enumerate}
\end{theorem}
\subsection{Transitivity of intermediate \texorpdfstring{$\beta$}{beta}-transformations} An interval map $T \colon [0,1] \to [0,1]$ is said to be \textsl{transitive} if for any open subinterval $U$ of $[0,1]$ there exists an $m \in \mathbb{N}$ with $\bigcup_{k = 1}^{m} T^{k}(U) = (0,1)$. The property of transitivity will play an important part in our proof of \Cref{thm:main_2}, and thus we will utilise the following result of \cite{G1990,Palmer79} on non-transitive intermediate $\beta$-transformations. Note the contrast in the structure of the set of $(\beta, \alpha) \in \Delta$ with $T_{\beta,\alpha}$ transitive and the set of $(\beta, \alpha) \in \Delta$ with $\Omega_{\beta, \alpha}$ of finite type, namely that the former has positive two-dimensional Lebesgue measure, while the latter is countable.
\begin{theorem}[{\cite{G1990,Palmer79}}]\label{thm:G1990+Palmer79}
Let $\Delta_{\operatorname{trans}}$ denote the set of $(\beta, \alpha) \in \Delta$ with $T_{\beta,\alpha}$ transitive. The sets $\Delta_{\operatorname{trans}}$ and $\Delta \setminus \Delta_{\operatorname{trans}}$ have positive Lebesgue measure.
Moreover, given $(\beta, \alpha) \in \Delta \setminus \Delta_{\operatorname{trans}}$, there exist \begin{enumerate}[label={\rm(\roman*)}] \item a natural number $n \geq 2$ and $k \in \{1,\ldots, n-1\}$ with $n$ and $k$ co-prime, \item a sequence of points $\{ b_{0}, b_{1}, \ldots, b_{2n-1}\}$ in $(0,1)$ with $b_{i} < b_{i+1}$ for all $i \in \{0, 1, \ldots, 2n-2\}$, and \item an $\tilde{\alpha} \in [0, 2-\beta^{n}]$, \end{enumerate} such that \begin{enumerate}[label={\rm(\arabic*)}] \item the transformation $T_{\beta^{n},\tilde{\alpha}}$ is transitive, \item $T_{\beta,\alpha}^{n}(J_{i}) = J_{i}$ and $T_{\beta,\alpha}(J_{i}) = J_{i+k \bmod{n}}$, for all $i \in \{ 0, 1, \ldots, n-1 \}$, and \item $T_{\beta,\alpha}^{n}\vert_{J_{i}}$ is topologically conjugate to $T_{\beta^{n},\tilde{\alpha}}$, for all $i \in \{ 0, 1, \ldots, n-1 \}$, where the conjugation is linear. \end{enumerate} Here, $J_{0} = [0, b_{0}] \cup [b_{2n-1}, 1]$, and $J_{i} = [b_{2i-1}, b_{2i}]$, for all $i \in \{1, 2, \ldots, n-1\}$. Further, there exists a $T_{\beta,\alpha}$-periodic point $q$ in $\mathscr{J}=\bigcup_{i = 0}^{n-2} [b_{2i}, b_{2i+1}]$, such that the orbit of $q$ under $T_{\beta,\alpha}$ is contained in $\mathscr{J}$, and for all $x$ in $\mathscr{J}$ but not in the orbit of $q$, there exists an $m \in \mathbb{N}$ such that $T_{\beta,\alpha}^{m}(x) \in \bigcup_{i = 0}^{n-1} J_{i}$. \end{theorem} \subsection{A sufficient condition for a dynamical set to be winning}\label{sec:HY} To prove \Cref{thm:main_2} we not only appeal to the results of \cite{G1990,Palmer79}, but also to \cite[Theorem 2.1]{HY}, where a sufficient condition for certain dynamical sets to be winning is given. In order to state \cite[Theorem 2.1]{HY} we require the following notation. A partition of $[0,1]$ is a collection of finitely many intervals $\{ I(i) \}_{i \in \Lambda}$, where $\Lambda = \{0, 1, \ldots, m-1\}$ for some $m \in \mathbb{N}$, with pairwise disjoint interiors such that $[0,1] = \bigcup_{i \in \Lambda} I(i)$. Here, we assume that the intervals are ordered, namely, that if $i$ and $j \in \{0, 1, \ldots, m-1\}$ with $i < j$, then, for all $x \in I(i)$ and $y \in I(j)$, we have that $x \leq y$. Let $T \colon [0, 1] \to [0,1]$ and let $\{ I(i) \}_{i \in \Lambda}$ denote a partition of $[0, 1]$, such that $T$ restricted to $I(i)$ is monotonic and continuous for all $i \in \Lambda$. For $\xi = \xi_{1} \xi_{2} \cdots \xi_{n} \in \Lambda^{n}$, for some $n \in \mathbb{N}$, we set \begin{align*} I(\xi) = \bigcap_{i = 1}^{n} \ \{ x \in [0, 1] \colon T^{i-1}(x) \in I(\xi_{i}) \}. \end{align*} If $I(\xi)$ is non-empty, we call $I(\xi)$ a \textsl{level $n$ cylinder set} of $T$, and $\xi$ an \textsl{admissible word of length $n$} with respect to the partition $\{ I(i)\}_{i \in \Lambda}$. For $n \in \mathbb{N}_{0}$, we denote by $\Omega_{T}\vert_{n}$ the set of all admissible words of length $n$, where by convention $\Omega_{T}\vert_{0} = \{ \varepsilon \}$, and set $\Omega_{T}^{*} = \bigcup_{n \in \mathbb{N}_{0}} \Omega_{T}\vert_{n}$. When $T = T_{\beta, \alpha}$ and when our partition is $\{ I(0) = [0,p_{\beta, \alpha}), I(1)=[p_{\beta, \alpha},1] \}$, for some $(\beta, \alpha) \in \Delta$, we have $\Omega_{T}\vert_{n} =\Omega_{\beta, \alpha}\vert_{n}$. 
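To make the cylinder sets concrete in this special case, note that the closure of $I(\xi)$ for $T = T_{\beta, \alpha}$ can be computed by pulling $[0,1]$ back through the inverse branches $y \mapsto (y - \alpha)/\beta$ and $y \mapsto (y + 1 - \alpha)/\beta$, intersecting at each step with the image of the corresponding branch. The following Python sketch carries this out; the function name is ours, floating-point arithmetic is used purely for illustration, and endpoints are treated up to closure.
\begin{verbatim}
# Sketch: the closure of the cylinder set I(xi) of T_{beta,alpha} with
# respect to the partition I(0) = [0, p), I(1) = [p, 1], computed by
# pulling [0, 1] back through the inverse branches.  Returns None when
# the word xi is not admissible (an empty cylinder, up to endpoints).
def cylinder(xi, beta, alpha):
    lo, hi = 0.0, 1.0
    for digit in reversed(xi):              # pull back one branch at a time
        if digit == 0:
            lo = max(lo, alpha)             # intersect with T(I(0)) = [alpha, 1)
            lo, hi = (lo - alpha) / beta, (hi - alpha) / beta
        else:
            hi = min(hi, beta + alpha - 1)  # intersect with T(I(1)) = [0, beta+alpha-1]
            lo, hi = (lo + 1 - alpha) / beta, (hi + 1 - alpha) / beta
        if lo >= hi:
            return None
    return (lo, hi)

# Example: the level-2 cylinders of T_{beta,alpha} for beta = 1.8, alpha = 0.1.
for xi in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(xi, cylinder(xi, 1.8, 0.1))
\end{verbatim}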
Further, for $\xi \in \{0, 1\}^{*}$, we have that $I(\xi)$ is non-empty if and only if there exists an $\omega \in \Omega_{\beta, \alpha}$ with $\omega\vert_{\lvert \xi \rvert} = \xi$, and $\overline{I(\xi)} = \pi_{\beta, \alpha}( \{ \omega \in \Omega_{\beta, \alpha} \colon \omega\vert_{\lvert \xi \rvert} = \xi \} )$, where $\overline{I(\xi)}$ denotes the closure of $I(\xi)$. For $\xi$ and $\nu \in \Omega_{T}^{*}$ and $c > 0$ a real number, we say that $\xi$ is \textsl{$\nu$-extendable} if the concatenation $\xi\nu$ is admissible, and say that the cylinder sets $I(\xi)$ and $I(\nu)$ are \textsl{$c$-comparable} if $c \leq \lvert I(\xi) \rvert / \lvert I(\nu) \rvert \leq 1/c$. We say that $T$ is \textsl{piecewise locally $C^{1+\delta}$ expanding} if $T$ restricted to $I(i)$ is differentiable for all $i \in \Lambda$, and
\begin{enumerate}[label={\rm(\arabic*)}]
\item there exists a real number $\eta > 0$, such that $\lvert T'(x) \rvert > \eta$ for all $x \in I(i)$ and $i \in \Lambda$, and there exists $k \in \mathbb{N}$ and a real number $\lambda > 1$ such that $\lvert (T^{k})'(x) \rvert \geq \lambda$ for all $\xi \in \Omega_{T}\vert_{k}$ and $x \in I(\xi)$, and
\item there exist two positive constants $\delta$ and $c$ such that for all $i \in \Lambda$ and all $x$ and $y \in I(i)$,
\begin{align*}
\left\lvert \frac{T'(x)}{T'(y)} - 1 \right\rvert \leq c \lvert x - y \rvert^{\delta}.
\end{align*}
\end{enumerate}
We call $T$ \textsl{Markov} if, for all $i$ and $j \in \Lambda$, either $T(I(i)) \cap I(j) = \emptyset$, or $I(j) \subseteq T(I(i))$. Letting $(\beta, \alpha) \in \Delta$ with $\Omega_{\beta, \alpha}$ a subshift of finite type, we set $A = \{ a_{1}, a_{2}, \ldots, a_{n} \}$ to be the set of ordered points of
\begin{align*}
\{ \pi_{\beta, \alpha}(\sigma^{k}(\tau_{\beta,\alpha}^{+}(p_{\beta, \alpha}))) \colon k \in \mathbb{N}\} \cup \{ \pi_{\beta, \alpha}(\sigma^{k}(\tau_{\beta,\alpha}^{-}(p_{\beta, \alpha}))) \colon k \in \mathbb{N} \}.
\end{align*}
The transformation $T_{\beta, \alpha}$ is a piecewise locally $C^{1+\delta}$ expanding Markov map with respect to the partition
\begin{align}\label{eq:Markov_Partition}
P_{\beta,\alpha} = \{[a_{1}, a_{2}), \ldots, [a_{n-2}, a_{n-1}), [a_{n-1}, a_{n}]\}.
\end{align}
If $T$ is a piecewise locally $C^{1+\delta}$ expanding map with respect to the partition $\{ I(i) \}_{i \in \Lambda}$, then, for each $x \in [0, 1]$, there exists an infinite word $\omega = \omega_{1} \omega_{2} \cdots \in \Lambda^{\mathbb{N}}$, with $\omega \vert_{k} \in \Omega_{T}^{*}$ for all $k \in \mathbb{N}$, such that $\{ x \} = \bigcap_{k \in \mathbb{N}} \overline{I(\omega\vert_{k})}$. We call $\omega$ a \textsl{symbolic representation} of $x$ with respect to the partition $\{ I(i)\}_{i \in \Lambda}$. In the case that $T = T_{\beta,\alpha}$, for some $(\beta,\alpha) \in \Delta$, the words $\tau_{\beta, \alpha}^{\pm}(x)$ are symbolic representations of a point $x \in [0, 1]$ with respect to the partition $\{ [0,p_{\beta, \alpha}), [p_{\beta,\alpha},1] \}$. Note that all but countably many points have a unique symbolic representation; we denote this countable exceptional set by $E = E_{T}$. For a fixed $x \in [0,1]$ and $\gamma \in (0, 1)$, let $\omega$ denote a symbolic representation of $x$.
We denote the following geometric condition by $H_{x, \gamma}$:
\begin{align}\label{eq:condition_1}
\adjustlimits \lim_{i \to \infty} \sup_{\; u\,:\,u\;\text{and}\;u\omega\vert_{i}\in\Omega_{T}^{*}} \frac{\lvert I(u\omega\vert_{i}) \rvert}{\lvert I(u) \rvert} = 0
\end{align}
and there exists a natural number $i^{*}$ and a real number $c>0$ such that, if $i \in \mathbb{N}$ with $i \geq i^{*}$, then, for all $\nu$ and $\eta \in \Omega_{T}^{*}$ which are $\omega\vert_{i}$-extendable and for which $I(\nu)$ and $I(\eta)$ are $\gamma/4$-comparable, either
\begin{align}\label{eq:condition_2}
\operatorname{dist}(I(\nu\omega\vert_{i}), I(\eta\omega\vert_{i})) = 0 \quad \text{or} \quad \operatorname{dist}(I(\nu\omega\vert_{i}), I(\eta\omega\vert_{i})) \geq c \operatorname{dist}(I(\nu), I(\eta)).
\end{align}
\begin{theorem}[{\cite{HY}}]\label{thm_HY_Thm_2.1}
Let $T$ be a piecewise locally $C^{1+\delta}$ expanding map with respect to the partition $\{ I(i) \}_{i \in \Lambda}$, and let $x \in [0, 1]$ with symbolic representation $\omega$.
\begin{enumerate}[label={\rm(\arabic*)}]
\item If $H_{x, \gamma}$ is satisfied for some $\gamma \in (0, 1)$, then the set $\{ y \in [0,1] \colon T^{k}(y) \not\in I(\omega\vert_{m}) \; \text{for all} \; k \in \mathbb{N}_{0} \} \cup E$ is $(1/2, \gamma)$-winning for some natural number $m$.
\item If $x \not\in E$ and if $H_{x, \gamma}$ is satisfied for every $\gamma \in (0, 1)$, then the set ${\rm BAD}_{T}(x) = \{ y \in [0, 1] \colon x \not\in \overline{\{T^k(y) \colon k \in \mathbb{N}_{0}\}}\}$ is $1/2$-winning.
\end{enumerate}
\end{theorem}
We conclude this section with the following proposition, which we use in conjunction with \Cref{thm:G1990+Palmer79,thm_HY_Thm_2.1}, and with the fact that the property of being winning is preserved under bijective affine transformations, to prove \Cref{thm:main_2}.
\begin{proposition}\label{prop:alpha-winning_transport}
Let $T$ be a piecewise locally $C^{1+\delta}$ expanding interval map, let $x \in [0, 1]$ and set
\begin{align*}
\mathrm{BAD}(T,x) \coloneqq \{ y \in [0,1] \colon x \not\in \overline{\{T^{n}(y) \colon n \in \mathbb{N}\}}\}.
\end{align*}
If, for a fixed $k \in \mathbb{N}$, we have that $\mathrm{BAD}(T^k, T^m(x))$ is winning for all $m \in \{ 0, 1, \ldots, k\}$, then $\mathrm{BAD}(T,x)$ is winning.
\end{proposition}
\begin{proof}
This follows from the fact that the property of being winning is preserved under taking countable intersections, together with the observation that, by construction, $\mathrm{BAD}(T^k,x)\cap \mathrm{BAD}(T^k,T(x))\cap \cdots \cap \mathrm{BAD}(T^k,T^k(x))\subset \mathrm{BAD}(T,x)$.
\end{proof}
\section{Intermediate \texorpdfstring{$\beta$}{beta}-shifts as greedy \texorpdfstring{$\beta$}{beta}-shifts: Proof of \texorpdfstring{\Cref{thm:main}}{Theorem 1.1}}\label{sec:proof_thm_1_1} In proving \Cref{thm:main} and \Cref{Cor_2,Cor_3}, we will investigate the following question: given a fixed $\beta \in (1, 2)$, for which $\omega \in \{0,1\}^\mathbb{N}$ does there exist $(\beta', \alpha') \in \Delta$ with $\Omega^+_{\beta', \alpha'} = \{ \nu \in \{0,1\}^\mathbb{N} \colon \omega \preceq \sigma^{n}(\nu) \prec \tau_{\beta,0}^{-}(1) \; \text{for all} \; n \in \mathbb{N} \}$? Not only is this question interesting in its own right, but in classifying such words we will be able to transfer the results of \cite{KKLL} to the setting of intermediate $\beta$-transformations. With this in mind, we let $\mathcal{A}_\beta$ denote the set of all such words and set $\rho = \inf_{n \in \mathbb{N}_{0}} \pi_{\beta,0}(\sigma^{n}(\tau_{\beta,0}^{-}(1)))$.
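Observe that, by the commutativity of the diagrams in \eqref{eq:commutative_diag}, $\pi_{\beta,0}(\sigma^{n}(\tau_{\beta,0}^{-}(1)))$ equals the $n$-th iterate $(T_{\beta,0}^{-})^{n}(1)$, so that $\rho$ is the infimum of the orbit of $1$ under the quasi-greedy map. The following Python sketch, an illustration only in which the function name is ours, estimates $\rho$ from finitely many floating-point iterates; exact arithmetic would be needed to certify the value of $\rho$ for a specific algebraic $\beta$.
\begin{verbatim}
# Sketch: estimating rho = inf_n pi_{beta,0}(sigma^n(tau^-_{beta,0}(1))),
# which equals the infimum of the orbit of 1 under the quasi-greedy map
#   T^-_{beta,0}(x) = beta*x     if x <= 1/beta,
#                     beta*x - 1 otherwise.
def rho_estimate(beta, iterations=10000):
    p = 1.0 / beta                  # the critical point p_{beta,0}
    x, rho = 1.0, 1.0
    for _ in range(iterations):
        x = beta * x if x <= p else beta * x - 1.0
        rho = min(rho, x)
    return rho
\end{verbatim}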
Further, we utilise the following notation. We let $t_{\beta, 0, c} \in (0,1)$ be such that $\dim_H(K^{+}_{\beta,0}(t))>0$ for all $t < t_{\beta, 0, c}$ and $\dim_H(K^{+}_{\beta, 0}(t)) = 0$ for all $t > t_{\beta, 0, c}$, and set $\mathcal{T}_{\beta, 0, c} = \tau_{\beta,0}^{+}(t_{\beta, 0, c})$.
\begin{proof}[Proof of \texorpdfstring{\Cref{thm:main}}{Theorem 1.1}]
For $\beta\in(1,2)$, let $\mathcal{B}_{\beta} \coloneqq \{ \omega \in \mathcal{E}_{\beta,0}^{+} \colon \pi_{\beta,0}(\omega) \leq \rho \; \text{and} \; \omega \prec \mathcal{T}_{\beta, 0, c}\}$. By \Cref{cor:From_greedy_to_intermediate} and the commutativity of the diagrams given in \eqref{eq:commutative_diag}, observe that it is sufficient to show that
\begin{enumerate}
\item $\mathcal{A}_\beta\subseteq \mathcal{B}_\beta$, with equality holding for Lebesgue almost every $\beta\in(1,2)$,
\item there exists a $\beta\in (1,2)$ such that $\mathcal{A}_\beta \neq \mathcal{B}_\beta$, and
\item if the quasi-greedy $\beta$-expansion of $1$ is periodic, then $\mathcal{A}_\beta = \mathcal{B}_\beta$.
\end{enumerate}
To show $\mathcal{A}_\beta\subseteq \mathcal{B}_\beta$, let $\beta \in (1, 2)$ and $\eta \in \mathcal{A}_\beta$ be fixed. Setting $\omega = 0\tau_{\beta,0}^{-}(1)$ and $\nu = 1\eta$, we observe that $\omega$ and $\nu$ meet Conditions~(1)--(4) of \Cref{thm:BSV14}. Condition~(2) of \Cref{thm:BSV14} gives $\nu \in \Omega^{+}(\omega, \nu)$, and so, for a given $n \in \mathbb{N}_{0}$,
\begin{align*}
\eta \preceq \sigma^{n}(\eta) \prec 0 \tau_{\beta,0}^{-}(1) \quad \text{or} \quad 1\eta \preceq \sigma^{n}(\eta) \prec \tau_{\beta,0}^{-}(1),
\end{align*}
yielding that $\eta \preceq \sigma^{n}(\eta) \prec \tau_{\beta,0}^{-}(1)$ for all $n \in \mathbb{N}_{0}$, namely that $\eta \in \mathcal{E}_{\beta,0}^{+}$. Condition~(2) of \Cref{thm:BSV14} also gives $\omega\in \Omega^{-}(\omega, \nu)$, and so, for a given $n \in \mathbb{N}_{0}$,
\begin{align*}
\eta \prec \sigma^{n}(\tau_{\beta,0}^{-}(1)) \preceq 0 \tau_{\beta,0}^{-}(1) \quad \text{or} \quad 1\eta \prec \sigma^{n}(\tau_{\beta,0}^{-}(1)) \preceq \tau_{\beta,0}^{-}(1).
\end{align*}
This implies that $\eta \prec \sigma^{n}(\tau_{\beta,0}^{-}(1))$ for all $n \in \mathbb{N}_{0}$, and so $\pi_{\beta,0}(\eta)\leq \rho$. Since Condition~(3) of \Cref{thm:BSV14} holds, the topological entropy of $(\Omega^{+}(\omega,\nu), \sigma)$ is positive, and thus $\eta \prec \mathcal{T}_{\beta, 0, c}$. Therefore, $\eta \in \mathcal{B}_\beta$, and hence $\mathcal{A}_\beta\subseteq \mathcal{B}_\beta$. To see that $\mathcal{A}_\beta = \mathcal{B}_\beta$ for Lebesgue almost every $\beta \in (1,2)$, from the concluding remarks of \cite{Schme} we know that, for Lebesgue almost all $\beta \in (1,2)$, there is no bound on the length of blocks of consecutive zeros in the quasi-greedy $\beta$-expansion of $1$, namely $\tau_{\beta,0}^-(1)$. This implies that $\rho = 0$, and hence that $\mathcal{B}_\beta=\{ 0^\infty \}$. Since $\tau_{\beta,0}^{\pm}(0) = 0^\infty$, it follows that $0^\infty \in \mathcal{A}_\beta$, and thus that $\mathcal{B}_\beta \subseteq \mathcal{A}_\beta$ for Lebesgue almost every $\beta \in (1,2)$. This, in tandem with the fact that $\mathcal{A}_\beta\subseteq \mathcal{B}_\beta$ for all $\beta \in (1,2)$, yields that $\mathcal{A}_\beta = \mathcal{B}_\beta$ for Lebesgue almost every $\beta \in (1,2)$. Let $\beta \in (1,2)$ denote the algebraic number with minimal polynomial $x^5-x^4-x^3-2x^2+x+1$. An elementary calculation yields that $\tau_{\beta,0}^{-}(1)=11(100)^{\infty}$.
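This calculation can be checked numerically by locating the relevant root of the minimal polynomial and iterating the quasi-greedy map $T_{\beta,0}^{-}$ on $1$, as in the following Python sketch; the helper names are ours, the root is found by naive bisection, and the floating-point orbit eventually loses the periodic pattern to rounding error.
\begin{verbatim}
# Sketch: checking that tau^-_{beta,0}(1) = 11(100)^infty for the root
# beta in (1, 2) of x^5 - x^4 - x^3 - 2x^2 + x + 1.
def bisect(f, a, b, iterations=200):
    for _ in range(iterations):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

f = lambda x: x**5 - x**4 - x**3 - 2*x**2 + x + 1
beta = bisect(f, 1.5, 2.0)     # beta is approximately 1.8713
p, x, digits = 1 / beta, 1.0, []
for _ in range(14):            # quasi-greedy digits of 1
    if x <= p:
        digits.append(0)
        x = beta * x
    else:
        digits.append(1)
        x = beta * x - 1
print(digits)                  # expected: 1, 1, then 1, 0, 0 repeating
\end{verbatim}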
We claim that $\xi = 00(011)^\infty \in \mathcal{B}_\beta$, but that $\xi \not\in \mathcal{A}_\beta$; namely, $\mathcal{A}_\beta \subsetneq \mathcal{B}_\beta$. It is readily verifiable that $\xi \in \mathcal{E}_{\beta,0}^+$ and also, since $\rho=\pi_{\beta,0}((001)^\infty)$, that $\pi_{\beta,0}(\xi) < \rho$. This yields that $\{001, 011 \}^\mathbb{N} \subset \mathcal{K}^{+}_{\beta,0}(\pi_{\beta,0}(\xi))$, and hence that $h_{\operatorname{top}}(\sigma\vert_{\mathcal{K}^{+}_{\beta,0}(\pi_{\beta,0}(\xi))}) > 0$. In other words, we have $\xi \prec \mathcal{T}_{\beta,0,c}$, and so $\xi \in \mathcal{B}_\beta $. By way of contradiction, suppose that $\xi \in \mathcal{A}_\beta$. Set $\omega=0\tau_{\beta,0}^{-}(1)=011(100)^\infty$ and $\nu = 1\xi = 100(011)^\infty$. In this case, $\omega$ and $\nu \in \{ \chi, \zeta\}^\mathbb{N}$ with $\chi=011$ and $\zeta=100$. Note that $\chi$ and $\zeta$ are words of length three in the alphabet $\{0,1\}$, that $\chi\vert_{2} = 01$ and $\zeta\vert_{2} = 10$, and that $\chi^\infty \in \Omega^{-}(\chi^\infty, \zeta^\infty)$ and $\zeta^\infty \in \Omega^{+}(\chi^\infty, \zeta^\infty)$; however, $\omega=\chi\zeta^\infty \neq \chi^\infty$, contradicting Condition~(4) of \Cref{thm:BSV14}. It remains to prove that if $\tau_{\beta,0}^-(1)$ is periodic, then $\mathcal{A}_\beta = \mathcal{B}_\beta$. To this end, fix $\beta \in (1, 2)$ with $\tau_{\beta,0}^{-}(1)$ periodic. Let $\xi \in \mathcal{B}_\beta $ and set $\nu=1 \xi$ and $\omega = 0\tau_{\beta,0}^-(1)$. By assumption, $\xi \in \mathcal{E}_{\beta,0}^+$, and so $\sigma(\nu) \preceq \sigma^{n}(\nu) \prec \sigma(\omega)$ for all $n \in \mathbb{N}$; therefore $\nu \in \Omega^{+}(\omega, \nu)$. Since $\tau_{\beta,0}^{-}(1)$ is the quasi-greedy expansion of $1$ in base $\beta$, we have $\sigma^{n}(\omega) \preceq \sigma(\omega)$ for all $n \in \mathbb{N}_{0}$. As $\pi_{\beta,0}(\xi) < \rho$, we have $\sigma(\nu) \prec \sigma^n(\omega)$, and so $\omega \in \Omega^{-}(\omega, \nu)$. Thus, $\omega$ and $\nu$ satisfy Conditions~(1) and~(2) of \Cref{thm:BSV14}. Condition~(3) of \Cref{thm:BSV14} follows from $\xi \prec \mathcal{T}_{\beta,0,c}$. To conclude the proof, it suffices to show that $\omega$ and $\nu$ satisfy Condition~(4) of \Cref{thm:BSV14}. Suppose there exist $\chi$ and $\zeta \in \{0,1\}^{*}$ of length at least three with
\begin{align*}
\chi\vert_{2} = 01, \quad \zeta\vert_{2} = 10, \quad \chi^{\infty} \in \Omega^{-}(\chi^{\infty},\zeta^{\infty}), \quad \text{and} \quad \zeta^{\infty} \in \Omega^{+}(\chi^{\infty},\zeta^{\infty}),
\end{align*}
and such that $\omega$ and $\nu \in \{ \chi, \zeta \}^{\mathbb{N}}$. By our assumption and construction, in particular since $\omega$ is periodic and $\chi^{\infty} \in \Omega^{-}(\chi^{\infty},\zeta^{\infty})$, we have $\omega = \chi^\infty$. By way of contradiction, suppose that $\nu \neq \zeta^\infty$. In this case, there exists an $n \in \mathbb{N}_{0}$ such that $\sigma^{n}(\nu)\vert_{\lvert \chi \rvert + \lvert \zeta \rvert} = \chi \zeta$. Noting that $\chi\vert_{2} = 01$ and $\zeta\vert_{2} = 10$, this yields $\sigma(\omega) \prec \sigma^{n+1}(\nu)$, contradicting the fact that $\omega$ and $\nu$ satisfy Condition~(2) of \Cref{thm:BSV14}.
\end{proof}
\section{A Krieger embedding theorem for intermediate \texorpdfstring{$\beta$}{beta}-transformations: Proof of \texorpdfstring{\Cref{Cor_1}}{Corollary 1.2}}\label{sec:proof_cor_1_2} To prove \Cref{Cor_1} we first show the following special case.
\begin{theorem}\label{thm:one_perioidc}
Let $(\beta,\alpha)\in\Delta$ be such that $\nu = \tau^{+}_{\beta, \alpha}(p_{\beta,\alpha})$ is not periodic and $\omega = \tau^{-}_{\beta, \alpha}(p_{\beta,\alpha})$ is periodic. There exists a sequence $((\beta_{n}, \alpha_{n}))_{n\in \mathbb{N}}$ in $\Delta$ with $\lim_{n\to \infty}\beta_{n} = \beta$ and $\lim_{n \to \infty}\alpha_{n} = \alpha$ and such that
\begin{enumerate}[label={\rm(\arabic*)}]
\item $\Omega_{\beta_{n},\alpha_n}$ is a subshift of finite type,
\item the Hausdorff distance between $\Omega_{\beta, \alpha}$ and $\Omega_{\beta_{n}, \alpha_{n}}$ converges to zero as $n$ tends to infinity, and
\item $\Omega_{\beta_{n},\alpha_n}\subseteq\Omega_{\beta,\alpha}$.
\end{enumerate}
\end{theorem}
\begin{proof}
We prove this using \Cref{thm:main} and the results of \cite{KKLL}. By \Cref{thm:main}, there exist $\beta' \in (1, 2)$ and $t \in E_{\beta',0}^+$ such that $\mathcal{K}_{\beta',0}^+(t)=\Omega^+_{\beta,\alpha}$ with $\tau_{\beta',0}^{+}(t) = \sigma(\nu)$. Our goal is to find a monotonically decreasing sequence $(t_i)_{i \in \mathbb{N}}$ converging to $t$ such that, for each $i \in \mathbb{N}$, $t_i\in E_{\beta',0}^+$ and $t_i$ is a $T_{\beta', 0}$-periodic point. We will first prove that $t$ is not isolated from above. For this we use the following. We call a finite word $s \in \{0, 1\}^{*}$ \textsl{Lyndon} if $s^\infty \prec \sigma^n(s^\infty)$ for all $n \in \mathbb{N}$ with $n \not\equiv 0 \bmod \lvert s \rvert$, and we set $L_{\beta} \coloneqq \{ s\in \{0, 1\}^* \colon s \; \text{is a Lyndon word and} \; s^\infty \in \Omega_{\beta', 0} \}$. For $s \in L_{\beta}$, let $I_{s}$ denote the half-open interval $[\pi_{\beta',0}(s0^\infty), \pi_{\beta',0}(s^\infty))$. \Cref{thm:Structure}, in combination with our hypothesis that $\tau_{\beta,\alpha}^{-}(p_{\beta,\alpha}) = 0\tau_{\beta,\alpha}^{-}(1)$ is periodic, yields that there exists a shortest finite word $\zeta$ with $\tau_{\beta,\alpha}^{-}(1) = \zeta^\infty$. Letting $n$ be the length of $\zeta$, we set $\zeta^\prime$ to be the lexicographically smallest element of the set $\{ \zeta_{k} \cdots \zeta_{n} \zeta_{1} \cdots \zeta_{k-1} \colon k \in \{2, \ldots, n \}\}$, and set $y = \pi_{\beta',0}(\zeta^\prime 0^\infty)$. By construction, $\zeta^{\prime}$ is a Lyndon word. Since, by our hypothesis, $\nu = \tau^{+}_{\beta, \alpha}(p_{\beta,\alpha})$ is not periodic and since $t < \pi_{\beta',0}({\zeta^{\prime}}^{\infty})$, we observe that $t < y$. For $s \in L_{\beta}$, by the Lyndon property of $s$, if $x \in I_s$ then $x \not\in E_{\beta',0}^{+}$, which implies $E_{\beta',0}^{+} \cap (0,y) \subseteq (0,y) \setminus \bigcup_{s\in L_\beta} I_s$. In fact, we claim that $(0,y) \setminus \bigcup_{s\in L_\beta} I_s = E_{\beta',0}^+ \cap (0,y)$. In order to prove this, let $x\in (0,y) \setminus \bigcup_{s\in L_\beta} I_s$ and suppose that $x \notin E_{\beta',0}^+$. Under this hypothesis, there exists a minimal $n \in \mathbb{N}$ such that $\sigma^{n}(\tau_{\beta', 0}^{+}(x)) \prec \tau_{\beta', 0}^{+}(x)$. By the minimality of $n$, we have that $\xi = \tau_{\beta', 0}^{+}(x)\vert_{n}$ is a Lyndon word, and that $\tau_{\beta', 0}^{+}(x) \prec \xi^{\infty}$. If $\xi^\infty \not\in \Omega_{\beta',0}$, then there exists $j \in \{ 1, 2, \ldots, n \}$ such that $\tau_{\beta',0}^{-}(1)\prec\sigma^j(\xi^\infty)$, where equality is excluded since $x<y$. Set $k = \lvert \tau_{\beta',0}^{-}(1) \wedge \sigma^j(\xi^\infty) \rvert$, and notice that $k>n-j$; otherwise $\tau_{\beta', 0}^{+}(x)$ would not be admissible.
This yields $\tau_{\beta',0}^{-}(1)=\xi_{j+1}\xi_{j+2}\cdots \xi_n (\xi_1\cdots \xi_n)^l \omega_1 \omega_2 \cdots$, where $l$ is possibly $0$ but is chosen so that $\omega_1 \cdots \omega_n \neq \xi_1 \cdots \xi_n$; note that this is possible since $\nu$ is not periodic. Thus, $\sigma^{n-j+ln}(\tau_{\beta',0}^{-}(1)) \prec \sigma^{n-j+ln}(\sigma^{j}(\xi^\infty))=\xi^\infty$ and $\omega_1\cdots \omega_n \prec \xi$. Hence, $\sigma^{n-j+ln}(\tau_{\beta',0}^{-}(1)) \prec \xi 0^\infty \prec \tau_{\beta',0}^{-}(x)$, contradicting the fact that we chose $x\in (0,y) \setminus \bigcup_{s\in L_\beta} I_s$. It therefore follows that $E_{\beta',0}^{+} \cap (0,y) = (0,y) \setminus \bigcup_{s\in L_\beta} I_s$, as required. Suppose that $t$ cannot be approximated from above by elements of $E_{\beta',0}^+$; that is, there exists a real number $\epsilon > 0$ with $(t,t+\epsilon)\cap E_{\beta',0}^+=\emptyset$. Since $E_{\beta',0}^{+} \cap (0,y) = (0,y) \setminus \bigcup_{s\in L_\beta} I_s$, there exists a Lyndon word $s$ with $(t,t+\epsilon) \subset I_s$; but as $I_s$ is closed from the left, $t\in I_s$, contradicting our hypothesis that $t\in E_{\beta',0}^{+}$. This implies that $t$ can be approximated from above by elements of $E_{\beta',0}^{+}$; namely, there exists a monotonically decreasing sequence $(t_i^\prime)_{i\in \mathbb{N}}$ of real numbers converging to $t$ with $t_i^\prime \in E_{\beta',0}^+$ for all $i \in \mathbb{N}$. If $t_i^\prime$ is not $T_{\beta',0}$-periodic for some $i \in \mathbb{N}$, then, by \cite[Lemmata~3.4 and~3.5]{KKLL}, there exists a monotonically increasing sequence of $T_{\beta',0}$-periodic points $(s_{i, j}^\prime)_{j \in \mathbb{N}}$ converging to $t_i^\prime$ with $s_{i, j}^\prime \in E_{\beta',0}^+$. For $i \in \mathbb{N}$, setting $t_i=t_i^\prime$ whenever $t_i^\prime$ is $T_{\beta',0}$-periodic, and otherwise setting $t_i=s_{i,j}^\prime$, where $s_{i,j}^\prime$ is chosen so that $t < s_{i,j}^\prime < t_i^\prime$, the sequence $(t_i)_{i \in \mathbb{N}}$ converges to $t$ from above and each $t_i$ is $T_{\beta',0}$-periodic. Since $\omega$ is periodic with respect to the left-shift map, \Cref{thm:main} implies that, for each $i \in \mathbb{N}$, there exists $(\beta_{i}, \alpha_{i}) \in \Delta$ with $\mathcal{K}_{\beta',0}^+(t_i)=\Omega^+_{\beta_{i},\alpha_i}$. Since both $\omega$ and $\tau^{+}_{\beta',0}(t_i)$ are periodic, \Cref{thm:LSSS} yields that $\Omega_{\beta_{i},\alpha_i}$ is a subshift of finite type. Further, since $\mathcal{K}_{\beta',0}^{+}(t_i) \subseteq \mathcal{K}^{+}_{\beta',0}(t)$, it follows that $\Omega_{\beta_{i},\alpha_{i}} \subseteq \Omega_{\beta,\alpha}$ for all $i \in \mathbb{N}$.
\end{proof}
\begin{proof}[{Proof of \Cref{Cor_1}}]
Assume the setting of \Cref{Cor_1} and, for ease of notation, set $p = p_{\beta, \alpha}$, $\nu = \tau_{\beta, \alpha}^{+}(p)$ and $\omega = \tau_{\beta, \alpha}^{-}(p)$. By \Cref{thm:LSSS}, we have that $\Omega_{\beta, \alpha}$ is a subshift of finite type if and only if $\omega$ and $\nu$ are periodic. Since the subshift of finite type property is preserved by topological conjugation, and observing that $\Omega^{\pm}_{\beta, \alpha}$ and $\Omega^{\mp}_{\beta, 2-\beta-\alpha}$ are topologically conjugate with conjugacy map $R$, without loss of generality we may assume that $\nu$ is not periodic. We consider separately the case when $\omega$ is periodic and the case when $\omega$ is not periodic. The former of these two cases follows from \Cref{thm:one_perioidc}, and so all that remains is to show the result in the latter case, namely when $\omega$ is not periodic.
To this end, assume that $\omega$ and $\nu$ are both not periodic. Let $n \in \mathbb{N}$ be fixed, set $O_{n}^{\pm}(p) = \{ (T_{\beta, \alpha}^{\pm})^{k}(p) \colon k \in \{ 0, 1, \ldots, n-1 \} \}$, and let $\beta' \in (1, \beta)$ be such that
\begin{align}\label{eq:def_beta_prime}
(1-\alpha)/\beta' + (\beta+1)^{n}(\beta - \beta') < \min \{ x \in O_{n}^{+}(p) \cup O_{n}^{-}(p) \colon x > p \}.
\end{align}
(As defined in \Cref{sec:beta-shifts}, we let $T_{\beta,\alpha}^{-} \colon x \mapsto \beta x + \alpha$ if $x \leq p$, and $x \mapsto \beta x + \alpha - 1$ otherwise, and for ease of notation, we write $T_{\beta,\alpha}^{+}$ for $T_{\beta, \alpha}$.) Setting $p' = (1-\alpha)/\beta'$, we claim, for all $k \in \{ 1, \ldots, n-1 \}$, that either
\begin{align}\label{eq:desired_inequalities}
(T_{\beta', \alpha}^{\pm})^{k}(p') \leq (T_{\beta, \alpha}^{\pm})^{k}(p) \leq p \leq p' \quad \text{or} \quad (T_{\beta, \alpha}^{\pm})^{k}(p) \geq (T_{\beta', \alpha}^{\pm})^{k}(p') \geq p' \geq p.
\end{align}
Hence, by definition and since $\omega$ and $\nu$ are not periodic, $\omega\vert_{n} = \tau_{\beta', \alpha}^{-}(p')\vert_{n}$ and $\nu\vert_{n} = \tau_{\beta', \alpha}^{+}(p')\vert_{n}$. To prove this claim, note that, for all $k \in \{ 1, \ldots, n-1 \}$, either
\begin{align}\label{eq:either_or}
(T_{\beta, \alpha}^{\pm})^{k}(p) < p \quad \text{or} \quad (T_{\beta, \alpha}^{\pm})^{k}(p) \geq \min \{ x \in O_{n}^{+}(p) \cup O_{n}^{-}(p) \colon x > p \}.
\end{align}
If $0 \leq y \leq x \leq p$, or if $p' \leq y \leq x \leq 1$, then
\begin{align}\label{eq:orbit_bound}
0 \leq T^{\pm}_{\beta, \alpha}(x) - T^{\pm}_{\beta', \alpha}(y) = \beta x - \beta'y = \beta x - \beta y + \beta y - \beta'y \leq \beta(x-y) + (\beta-\beta').
\end{align}
Observe that $T^{\pm}_{\beta, \alpha}(p) = T^{\pm}_{\beta', \alpha}(p')$ and
\begin{align*}
0 \leq (T^{\pm}_{\beta, \alpha})^{2}(p) - (T^{\pm}_{\beta', \alpha})^{2}(p') \leq \beta - \beta' \leq (\beta + 1)(\beta - \beta') \leq (\beta + 1)^{2}(\beta - \beta') \leq (\beta+1)^{n}(\beta - \beta').
\end{align*}
Suppose, by way of induction on $m$, that
\begin{align*}
0 \leq (T^{\pm}_{\beta, \alpha})^{m}(p) - (T^{\pm}_{\beta', \alpha})^{m}(p') \leq (\beta+1)^{m}(\beta-\beta') \leq (\beta+1)^{n}(\beta-\beta'),
\end{align*}
for some $m \in \{2, \dots, n-2\}$. Combining \eqref{eq:def_beta_prime}, \eqref{eq:either_or} and \eqref{eq:orbit_bound} with our inductive hypothesis, we have
\begin{align*}
0 \leq (T^{\pm}_{\beta, \alpha})^{m+1}(p) - (T^{\pm}_{\beta', \alpha})^{m+1}(p') &\leq \beta((T^{\pm}_{\beta, \alpha})^{m}(p) - (T^{\pm}_{\beta', \alpha})^{m}(p')) + (\beta-\beta')\\
&\leq \beta(\beta+1)^{m}(\beta-\beta') + (\beta-\beta') \leq (\beta+1)^{m+1}(\beta-\beta') \leq (\beta+1)^{n}(\beta-\beta').
\end{align*}
In other words, for all $k \in \{ 1, \ldots, n-1 \}$,
\begin{align*}
0 \leq (T^{\pm}_{\beta, \alpha})^{k}(p) - (T^{\pm}_{\beta', \alpha})^{k}(p') \leq (\beta+1)^{k}(\beta-\beta') \leq (\beta+1)^{n}(\beta-\beta').
\end{align*}
This, in tandem with \eqref{eq:def_beta_prime} and \eqref{eq:either_or}, proves the claim. We observe that $(\omega, \nu) \neq (\tau_{\beta', \alpha}^{-}(p'), \tau_{\beta', \alpha}^{+}(p'))$, for if not, then, since $\beta' < \beta$, this would contradict \Cref{thm:Laurent}. This implies that $\omega \neq \tau_{\beta', \alpha}^{-}(p')$ or $\nu \neq \tau_{\beta', \alpha}^{+}(p')$. We claim that $\omega \succ \tau_{\beta', \alpha}^{-}(p')$ and $\nu \succ \tau_{\beta', \alpha}^{+}(p')$. Consider the case when $\omega \neq \tau_{\beta', \alpha}^{-}(p')$.
This implies that there exists a smallest integer $m \geq n$ such that neither
\begin{align*}
(T_{\beta', \alpha}^{-})^{m}(p') \leq (T_{\beta, \alpha}^{-})^{m}(p) \leq p \quad \text{nor} \quad (T_{\beta, \alpha}^{-})^{m}(p) \geq (T_{\beta', \alpha}^{-})^{m}(p') \geq p'.
\end{align*}
Using the fact that if $0 \leq y \leq x < p$ or if $p' < y \leq x \leq 1$, then $T^{-}_{\beta', \alpha}(y) \leq T^{-}_{\beta, \alpha}(x)$, in tandem with \eqref{eq:desired_inequalities}, and noting that $p<p'$, we have that
\begin{align*}
\tau_{\beta', \alpha}^{-}(p')\vert_{m-2}=\omega\vert_{m-2}, \quad (T^{-}_{\beta',\alpha})^{m}(p')<p', \quad (T^{-}_{\beta,\alpha})^{m}(p) > p \quad \text{and} \quad (T^{-}_{\beta',\alpha})^{m}(p')\leq (T^{-}_{\beta,\alpha})^{m}(p).
\end{align*}
Thus, $\tau_{\beta', \alpha}^{-}(p')\vert_{m-1} \prec \omega\vert_{m-1}$, and hence $\tau_{\beta', \alpha}^{-}(p') \prec \omega$. An analogous argument proves the claim when $\nu \neq \tau_{\beta', \alpha}^{+}(p')$. Hence, we have shown, given an $n \in \mathbb{N}$, that there exists a positive real number $\delta$ such that, for all $\beta' \in (\beta-\delta, \beta)$,
\begin{align}\label{smaller}
\tau_{\beta', \alpha}^{\pm}(p')\vert_{n} = \tau_{\beta, \alpha}^{\pm}(p)\vert_{n} \quad \text{and} \quad \tau_{\beta', \alpha}^{\pm}(p') \prec \tau_{\beta, \alpha}^{\pm}(p),
\end{align}
where $p'=(1-\alpha)/\beta'$. Further, by using the fact that $\Omega^{\pm}_{\beta, \alpha}$ and $\Omega^{\mp}_{\beta, 2-\beta-\alpha}$ are topologically conjugate, with conjugating map $R$, together with \eqref{smaller}, we have that there exists a positive real number $\delta'$ such that, for all $\beta' \in (\beta-\delta', \beta)$,
\begin{align*}
\tau_{\beta', \alpha+\beta-\beta'}^{\pm}(p_{\beta', \alpha+\beta-\beta'})\vert_{n} = \tau_{\beta, \alpha}^{\pm}(p)\vert_{n} \quad \text{and} \quad \tau_{\beta', \alpha+\beta-\beta'}^{\pm}(p_{\beta', \alpha+\beta-\beta'}) \succ \tau_{\beta, \alpha}^{\pm}(p).
\end{align*}
Letting $\beta' \in (\beta-\min(\delta,\delta'), \beta)$ be fixed and setting
\begin{align*}
q_{1} = \sup \{ a \in (\alpha, \alpha+\beta-\beta') \colon \tau_{\beta', a}^{\pm}(p_{\beta', a}) \preceq \tau_{\beta, \alpha}^{\pm}(p)\} \quad \text{and} \quad q_{2} = \inf \{ a \in (\alpha, \alpha+\beta-\beta') \colon \tau_{\beta', a}^{\pm}(p_{\beta', a}) \succeq \tau_{\beta, \alpha}^{\pm}(p)\},
\end{align*}
by \Cref{prop:mon_cont_kneading}, we have $\alpha \leq q_{1} \leq q_{2} \leq \alpha+\beta-\beta'$ and $\tau_{\beta', a}^{\pm}(p_{\beta',a})\vert_{n} = \tau_{\beta, \alpha}^{\pm}(p)\vert_{n}$ for all $a \in [q_{1}, q_{2}]$. Moreover, $\tau_{\beta', a}^{-}(p_{\beta',a}) \preceq \tau_{\beta, \alpha}^{-}(p) \prec \tau_{\beta, \alpha}^{+}(p) \preceq \tau_{\beta', a}^{+}(p_{\beta',a})$ for all $a \in [q_{1}, q_{2}]$, implying that one of the following orderings holds.
\begin{align*}
\tau_{\beta', a}^{-}(p_{\beta',a}) \prec \tau_{\beta, \alpha}^{-}(p) &\prec \tau_{\beta, \alpha}^{+}(p) \prec \tau_{\beta', a}^{+}(p_{\beta', a})\\
\tau_{\beta', a}^{-}(p_{\beta',a}) = \tau_{\beta, \alpha}^{-}(p) &\prec \tau_{\beta, \alpha}^{+}(p) \prec \tau_{\beta', a}^{+}(p_{\beta', a})\\
\tau_{\beta', a}^{-}(p_{\beta',a}) \prec \tau_{\beta, \alpha}^{-}(p) &\prec \tau_{\beta, \alpha}^{+}(p) = \tau_{\beta', a}^{+}(p_{\beta', a})
\end{align*}
If the first case occurs, if the second case occurs and $\tau_{\beta', a}^{+}(p_{\beta',a})$ is not periodic, or if the third case occurs and $\tau_{\beta', a}^{-}(p_{\beta',a})$ is not periodic, then an application of \Cref{prop:mon_cont_kneading} and \Cref{thm:LSSS} yields the required result. This leaves two remaining sub-cases, namely when $\tau_{\beta', a}^{-}(p_{\beta',a}) = \tau_{\beta, \alpha}^{-}(p) \prec \tau_{\beta, \alpha}^{+}(p) \prec \tau_{\beta', a}^{+}(p_{\beta', a})$ with $\tau_{\beta', a}^{+}(p_{\beta', a})$ periodic, and when $\tau_{\beta', a}^{-}(p_{\beta',a}) \prec \tau_{\beta, \alpha}^{-}(p) \prec \tau_{\beta, \alpha}^{+}(p) = \tau_{\beta', a}^{+}(p_{\beta', a})$ with $\tau_{\beta', a}^{-}(p_{\beta', a})$ periodic. Let us consider the first of these two sub-cases; the second follows by an analogous argument. For ease of notation, let $\nu' = \tau_{\beta', a}^{+}(p_{\beta',a})$ and note that, by assumption, $\omega = \tau_{\beta', a}^{-}(p_{\beta',a})$ and $\omega \prec \nu \prec \nu'$. If the map $s \mapsto \tau_{\beta', s}^{+}(p_{\beta',s})$ is continuous at $s = a$, then an application of \Cref{prop:mon_cont_kneading} and \Cref{thm:LSSS} yields the required result. If we do not have continuity at $s = a$, then by \Cref{prop:mon_cont_kneading} we have that $\nu'$ is periodic with period $N$, for some $N \in \mathbb{N}$, and thus an application of \Cref{thm:one_perioidc} completes the proof; alternatively, we may proceed as follows. We claim that $\nu \prec \nu'\vert_{N}\omega$. Indeed, if $\nu\vert_{N} \prec \nu'\vert_{N}$, the claim follows immediately, and so let us suppose that $\nu\vert_{N} = \nu'\vert_{N}$. If $\nu_{N+1} = 0$, the claim follows from \Cref{thm:Structure}. On the other hand, by \Cref{thm:Structure}, if $\nu_{N+1} = 1$, then $\sigma^{N}(\nu) \succeq \nu$. If $\sigma^{N}(\nu)\vert_{N} \succ \nu\vert_{N} = \nu'\vert_{N}$, then $\nu \succ \nu'$, contradicting our assumption that $\nu \prec \nu'$, and so $\sigma^{N}(\nu)\vert_{N} = \nu\vert_{N}$. This implies that there exists a minimal integer $m$ such that $\nu_{m N + 1} = 0$ and $\nu\vert_{mN} = \nu'\vert_{mN}$; otherwise, $\nu$ would be periodic. However, this, together with \Cref{thm:Structure}, yields that $\nu \preceq \sigma^{(m-1)N}(\nu) = \nu\vert_{N}\sigma^{mN}(\nu) \preceq \nu\vert_{N}\omega = \nu'\vert_{N}\omega$, as required. To complete the proof of this sub-case, we appeal once more to \Cref{prop:mon_cont_kneading}, which together with the above implies that there exists a real number $\delta > 0$ such that, for all $a' \in (a-\delta,a)$, we have $\tau_{\beta',a'}^{-}(p_{\beta',a'}) \prec \omega$ and $\nu \prec \tau_{\beta',a'}^{+}(p_{\beta',a'}) \prec \nu'\vert_{N}\omega \prec \nu'$. An application of \Cref{thm:LSSS} yields the required result.
\end{proof}
In the above proof, it is critical that $\omega$ and $\nu$ are not periodic, as this allows us to construct $\beta'$, $q_{1}$ and $q_{2}$ so that $\omega$ and $\nu$ are sufficiently close to $\tau_{\beta',a'}^{-}(p_{\beta',a'})$ and $\tau_{\beta',a'}^{+}(p_{\beta',a'})$, respectively, for all $a' \in [q_{1}, q_{2}]$. However, under the assumption that $\nu$ is periodic, we may not use our construction to build such a $\beta'$, and hence such $q_{1}$ and $q_{2}$. Indeed, the strict inequalities in \eqref{eq:either_or} no longer hold, and thus the ordering given in \eqref{smaller} fails.
\section{Survivor sets of intermediate \texorpdfstring{$\beta$}{beta}-transformations: Proof of \texorpdfstring{\Cref{Cor_2,Cor_3}}{Corollaries 1.3 and 1.4}}\label{sec:proof_cor_1_3_4} Here, we examine open dynamical systems on the unit interval with a hole at zero, in which the dynamics is driven by an intermediate \mbox{$\beta$-transformation}. With the aid of \Cref{thm:main} we can relate such open dynamical systems to open dynamical systems driven by greedy \mbox{$\beta$-transformations}. This allows us to transfer the results of \cite{KKLL} and \cite{AK} on isolated points in $E_{\beta,\alpha}^+$, the Hausdorff dimension of survivor sets, and the critical point of the dimension function from the greedy case to the intermediate case. For readability, we omit the $0$ from the notation and write $\pi_{\beta}$ for $\pi_{\beta,0}$, $E_{\beta}^{+}$ for $E_{\beta,0}^{+}$, and so forth. By \Cref{thm:Parry_converse,thm:Structure}, given $(\beta,\alpha) \in \Delta$, there exists a unique $\beta^\prime \in (1, 2)$ with $\tau_{\beta,\alpha}^-(1)=\tau_{\beta^\prime}^-(1)$. Thus, we define a function $u \colon \Delta \to (1, 2)$ by $u(\beta,\alpha) \coloneqq \beta^\prime$, and let $\tilde{\pi}_{\beta,\alpha} \coloneqq \pi_{u(\beta,\alpha)} \circ \tau_{\beta,\alpha}^{+}$. The relationship between the systems $(T_{\beta,\alpha},[0,1])$ and $(T_{u(\beta,\alpha)},K_{u(\beta,\alpha)}^+(\tilde{\pi}_{\beta,\alpha}(0)))$ is expressed in the following proposition.
\begin{proposition}\label{prop:char}
Let $(\beta,\alpha)\in\Delta$ and let $\beta^\prime=u(\beta,\alpha)$.
\begin{enumerate}
\item $\tilde{\pi}_{\beta,\alpha}([0,1])=K_{\beta^\prime}^+(\tilde{\pi}_{\beta,\alpha}(0))$ and $\tilde{\pi}_{\beta,\alpha}(E_{\beta,\alpha}^+)=E_{\beta^\prime}^+\cap[\tilde{\pi}_{\beta,\alpha}(0),1]$.
\item For every $x\in E_{\beta,\alpha}^+$ we have that $x$ is isolated in $E_{\beta,\alpha}^+$ if and only if $\tilde{\pi}_{\beta,\alpha}(x)$ is isolated in $E_{\beta^\prime}^+$.
\item For $t\in(0,1)$, we have $ \tilde{\pi}_{\beta,\alpha}(K_{\beta,\alpha}^+(t))=K_{\beta^\prime}^+(\tilde{\pi}_{\beta,\alpha}(t)) $.
\item For $t\in(0,1)$, we have $\dim_H(K_{\beta,\alpha}^+(t))=(\log(\beta^\prime)/\log(\beta)) \dim_H(K_{\beta^\prime}^+(\tilde{\pi}_{\beta,\alpha}(t)))$.
\end{enumerate}
\end{proposition}
\newpage
\begin{proof}
Let us begin by proving Part~(1). To this end, observe that $\tilde{\pi}_{\beta,\alpha}$ is monotonic, since it is a composition of monotonic functions, and so $\tilde{\pi}_{\beta,\alpha}(T_{\beta,\alpha}^n(x)) \geq \tilde{\pi}_{\beta,\alpha}(0)$ for all $x \in [0,1]$ and $n \in \mathbb{N}_{0}$.
By the fact that the diagrams in \eqref{eq:commutative_diag} commute, we have for all $x \in [0,1]$ and $n \in \mathbb{N}$, that \begin{align*} T_{\beta'}^{n}(\tilde{\pi}_{\beta,\alpha}(x)) = T_{\beta'}^{n}(\pi_{\beta'}(\tau^{+}_{\beta,\alpha}(x))) = \pi_{\beta'}(\sigma^{n}(\tau^{+}_{\beta,\alpha}(x))) = \pi_{\beta'}(\tau^{+}_{\beta,\alpha}(T^{n}_{\beta,\alpha}(x))) = \tilde{\pi}_{\beta,\alpha}(T^{n}_{\beta,\alpha}(x)). \end{align*} Combining the above, we may conclude that $\tilde{\pi}_{\beta,\alpha}([0,1]) \subseteq K_{\beta^\prime}^{+}(\tilde{\pi}_{\beta,\alpha}(0))$. To prove that equality holds, namely that $\tilde{\pi}_{\beta,\alpha}([0,1]) = K_{\beta^\prime}^{+}(\tilde{\pi}_{\beta,\alpha}(0))$, using the commutativity of the diagrams in \eqref{eq:commutative_diag}, we observe that $x \in K_{\beta^\prime}^{+}(\tilde{\pi}_{\beta,\alpha}(0))$, if and only if, $\pi_{\beta'}(\tau^{+}_{\beta,\alpha}(0)) \leq \pi_{\beta'}(\sigma^{n}(\tau^{+}_{\beta'}(x))) \leq \pi_{\beta'}(\tau_{\beta'}^{-}(1))$ for all $n \in \mathbb{N}_{0}$. Since $\pi_{\beta'}$ is injective on $\Omega_{\beta'}$ and monotonic on $\{0,1\}^{\mathbb{N}}$, and since $\tau_{\beta,\alpha}^-(1)=\tau_{\beta^\prime}^-(1)$, it follows that $\tau_{\beta'}^{+}(x) \in \Omega^{+}_{\beta, \alpha}$. In other words, there exists a $y \in [0,1]$ such that $\tilde{\pi}_{\beta,\alpha}(y) = \pi_{\beta'}(\tau^{+}_{\beta,\alpha}(y)) = x$, yielding the first statement of Part~(1). Let us now prove the second statement. If $x \in \tilde{\pi}_{\beta,\alpha}(E_{\beta,\alpha}^{+})$, then there exists a $y \in E_{\beta,\alpha}^{+} \subset [0,1]$ with $\tilde{\pi}_{\beta,\alpha}(y) = x$. This, in tandem with the fact that the diagrams in \eqref{eq:commutative_diag} are commutative and that the maps $\tau_{\beta,\alpha}^{+}$ and $\pi_{\beta'}$ are monotonic, yields \begin{align*} T_{\beta'}^{n}(x) = T_{\beta'}^{n}( \pi_{\beta'}(\tau_{\beta,\alpha}^{+}(y))) = \pi_{\beta'}(\sigma^{n}(\tau_{\beta,\alpha}^{+}(y))) = \pi_{\beta'}(\tau_{\beta,\alpha}^{+}(T_{\beta,\alpha}^{n}(y))) \geq \pi_{\beta'}(\tau_{\beta,\alpha}^{+}(y)) = x. \end{align*} Since $y \in [0,1]$ and $\tilde{\pi}_{\beta,\alpha}(y) = x$, and since $\tilde{\pi}_{\beta,\alpha}$ is monotonic, $x \geq \tilde{\pi}_{\beta,\alpha}(0)$. This, together with the fact that $E_{\beta'}^{+} \subseteq [0, 1]$, yields $\tilde{\pi}_{\beta,\alpha}(E_{\beta,\alpha}^+) \subseteq E_{\beta^\prime}^+\cap[\tilde{\pi}_{\beta,\alpha}(0),\tilde{\pi}_{\beta,\alpha}(1)]$. To see that $\tilde{\pi}_{\beta,\alpha}(E_{\beta,\alpha}^+) \supseteq E_{\beta^\prime}^+\cap[\tilde{\pi}_{\beta,\alpha}(0),1]$, let $x \in E_{\beta'}^{+}$ with $x \geq \tilde{\pi}_{\beta,\alpha}(0)$. By the definition of $E_{\beta'}^{+}$ and the commutativity of the diagrams in \eqref{eq:commutative_diag}, \begin{align*} \pi_{\beta,\alpha}(\tau_{\beta,\alpha}^{+}(0)) \leq \pi_{\beta,\alpha}(\tau_{\beta'}^{+}(x)) \leq T^{n}_{\beta,\alpha}(\pi_{\beta,\alpha}(\tau_{\beta'}^{+}(x))) \leq \pi_{\beta,\alpha} (\tau_{\beta'}^{-}(1)) \leq \pi_{\beta,\alpha} (\tau_{\beta,\alpha}^{-}(1)). \end{align*} In other words, $\pi_{\beta,\alpha}(\tau_{\beta'}^{+}(x)) \in E_{\beta,\alpha}^+$. Since $\tilde{\pi}_{\beta,\alpha}$ is invertible on $[0,1)$ with inverse $\pi_{\beta,\alpha} \circ \tau^{+}_{\beta'}$, the result follows. Part~(2) follows from Part~(1) using the fact that $\tilde{\pi}_{\beta,\alpha}$ is monotonic and injective on $[0,1)$.
Part~(3) follows using arguments analogous to those used above to prove Part~(1), and Part~(4) follows from \Cref{prop:ent} and Part~(3) in the following way. \[ \dim_H(K_{\beta,\alpha}^+(t))=\frac{h_{\operatorname{top}}(T_{\beta,\alpha}\vert_{K^+_{\beta,\alpha}(t)})}{\log(\beta)}=\frac{h_{\operatorname{top}}(T_{\beta^\prime}\vert_{K^+_{\beta^\prime}(\tilde{\pi}_{\beta,\alpha}(t))})}{\log(\beta)}=\frac{\log(\beta^\prime)}{\log(\beta)}\dim_H(K_{\beta^\prime}^+(\tilde{\pi}_{\beta,\alpha}(t))). \qedhere \] \end{proof} For the next proposition we will require the following analogue of the map $\tilde{\pi}_{\beta,\alpha}$, namely $\tilde{\pi}_{\beta,\alpha}^{-} \coloneqq \pi_{u(\beta,\alpha)} \circ \tau_{\beta,\alpha}^{-}$. Note that in the previous proposition we could have also used the map $\tilde{\pi}_{\beta,\alpha}^{-}$ instead of $\tilde{\pi}_{\beta,\alpha}$, since they coincide on all points considered in (1)--(4). However, in the proof of \Cref{prop:char} we would need to replace $T_{\beta,\alpha}$ by $T_{\beta,\alpha}^{-}$, $T_{\beta'}$ by $T_{\beta'}^{-}$, $\tau_{\beta,\alpha}^{\pm}$ by $\tau_{\beta,\alpha}^{\mp}$ and $\tau_{\beta'}^{\pm}$ by $\tau_{\beta'}^{\mp}$, making it notationally heavy; thus, for ease of notation, we use $\tilde{\pi}_{\beta,\alpha}$. \begin{proposition}\label{prop:char(5)} For all $(\beta,\alpha)\in\Delta$, we have that $\tilde{\pi}^{-}_{\beta,\alpha}(t_{\beta,\alpha,c})=t_{u(\beta,\alpha),c}$. \end{proposition} \begin{proof} Observe that there exists a sequence of real numbers $(t_{n})_{n\in \mathbb{N}}$ with $t_n \in E_{\beta,\alpha}^{+}$ such that $t_n < t_{\beta,\alpha,c}$ and $\lim_{n\to \infty} t_{n} = t_{\beta,\alpha,c}$; otherwise the dimension function would be constant around $t_{\beta,\alpha,c}$, contradicting its definition. Define $\hat{t}_n =\tilde{\pi}^{-}_{\beta,\alpha}(t_n)$. By Proposition \ref{prop:char} Part~(3), for all $n\in \mathbb{N}$, we have $\tilde{\pi}^{-}_{\beta,\alpha}(K_{\beta,\alpha}^+(t_n))=K_{u(\beta,\alpha)}^+(\hat{t}_n)$. An application of Proposition \ref{prop:char} Part~(4), together with our remarks directly preceding this proposition, yields, for $n \in \mathbb{N}$, \begin{align*} \dim_H(K_{u(\beta,\alpha)}^+(\hat{t}_n))>0, \quad \dim_H(K_{\beta,\alpha}^+(t_{\beta,\alpha,c}))=0, \quad \text{and} \quad \dim_H(K_{u(\beta,\alpha)}^+(\tilde{\pi}^{-}_{\beta,\alpha}(t_{\beta,\alpha,c})))=0. \end{align*} As $\pi_{u(\beta,\alpha)}$ is continuous and $\tau_{\beta,\alpha}^{-}$ is left-continuous, $\tilde{\pi}^{-}_{\beta,\alpha}$ is left-continuous, and so $\tilde{\pi}^{-}_{\beta,\alpha}(\lim_{n\to \infty} t_n)= \lim_{n\to \infty} \tilde{\pi}^{-}_{\beta,\alpha}(t_n)$. This implies that $\tilde{\pi}^{-}_{\beta,\alpha}(t_{\beta,\alpha,c}) = \lim_{n\to \infty} \hat{t}_n$, and hence that $\tilde{\pi}^{-}_{\beta,\alpha}(t_{\beta,\alpha,c})=t_{u(\beta,\alpha),c}$. \end{proof} The value of $t_{u(\beta,\alpha),c}$ is explicitly given in \cite{KKLL} when $\tau_{\beta,\alpha}^-(1)$ is balanced; for all other cases see \cite{AK}. A word $\omega = \omega_{1}\omega_{2} \cdots \in \{0,1\}^{\mathbb{N}}$ is called \textsl{balanced} if $\lvert(\omega_{n} + \omega_{n+1} + \cdots + \omega_{n+m}) - (\omega_{k+n} + \omega_{k+n+1} + \cdots + \omega_{k+n+m}) \rvert \leq 1$ for all $k$, $n$ and $m \in \mathbb{N}_{0}$ with $n \geq 1$.
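For instance, the word $(10)^\infty$ is balanced, since any factor of length $m+1$ contains either $\lceil (m+1)/2 \rceil$ or $\lfloor (m+1)/2 \rfloor$ ones, and these two counts differ by at most one. In contrast, the word $(1100)^\infty$ is not balanced, as it contains the factors $11$ and $00$, whose digit sums differ by two.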
Following the notation of \cite{KKLL} and \cite{Schme}, we let \begin{align*} C_3 \coloneqq\{ \beta\in(1,2) \colon \text{the length of consecutive zeros in} \; \tau_\beta^-(1) \; \text{is bounded} \} \;\; \text{and} \;\; C \coloneqq \{ \beta \in (1,2) \colon \tau_\beta^-(1) \; \text{is balanced} \}. \end{align*} For every $(\beta,\alpha) \in \Delta$ with $\alpha>0$, we have that $u(\beta,\alpha)\in C_3$. By \cite[Theorem 3.12]{KKLL}, for $\beta \in C_3$, there exists $\delta>0$ such that $E_{\beta}^+\cap[0,\delta]$ contains no isolated points. With this in mind, and setting, for $\beta \in (1,2)$, $\delta(\beta) \coloneqq \sup \{ \delta \in [0, 1] \colon E_{\beta}^+\cap[0,\delta] \; \text{contains no isolated points} \}$, we have the following corollary of Proposition \ref{prop:char}. \begin{corollary}\label{cor:isoloated_pts} Let $(\beta,\alpha)\in \Delta$ with $\alpha>0$. If $\tilde{\pi}_{\beta,\alpha}(0)<\delta(u(\beta,\alpha))$, then there exists a $\delta>0$ such that $E_{\beta,\alpha}^+\cap[0,\delta]$ contains no isolated points. Further, if $u(\beta,\alpha) \in C$, then $\delta(u(\beta,\alpha))=1$ and $E_{\beta,\alpha}^+$ contains no isolated points. \end{corollary} \begin{proof} The first statement follows from \Cref{prop:char} Parts (1) and (2), and \cite[Theorem 3.12]{KKLL}. The second statement follows from \cite[Theorem 3]{KKLL}, which states that if $\beta\in C$, then $E_\beta^+$ does not contain any isolated points. \end{proof} \begin{proof}[Proof of \texorpdfstring{\Cref{Cor_2}}{Corollary 1.3}] In \cite{MR0166332} an absolutely continuous invariant measure of $T_{\beta,\alpha}$ is constructed, and in \cite{Hof} it is shown that this measure is ergodic; in fact, it is shown to be the unique measure of maximal entropy. This yields, given an $m \in \mathbb{N}$, that for almost all $x \in [0,1]$ there exists an $n_{x} \in \mathbb{N}_{0}$ such that $T_{\beta,\alpha}^{n_{x}}(x) \in [0, 1/m)$, and hence that $K_{\beta,\alpha}^{+}(1/m)$ is a Lebesgue null set. Since $E_{\beta,\alpha}^{+} \setminus \{0\} \subseteq \cup_{m=1}^{\infty} K_{\beta,\alpha}^{+}(1/m)$, by countable subadditivity of the Lebesgue measure, it follows that $E_{\beta,\alpha}^{+}$ is a Lebesgue null set. The statement on the isolated points of $E_{\beta,\alpha}^{+}$ follows from \Cref{cor:isoloated_pts}. \end{proof} \begin{proof}[Proof of \texorpdfstring{\Cref{Cor_3}}{Corollary 1.4}] This is a direct consequence of \Cref{prop:char} Part~(4) and \cite[Theorem~A (ii)]{KKLL}. \end{proof} \begin{proof}[Proof of \texorpdfstring{\Cref{Cor_E_beta_alpha}}{Corollary 1.5}] Let $(\beta, \alpha) \in \Delta$ with $\Omega_{\beta,\alpha}$ a subshift of finite type and $T_{\beta,\alpha}$ transitive, let $P_{\beta,\alpha}$ denote the Markov partition of $T_{\beta,\alpha}$ defined in \eqref{eq:Markov_Partition}, and for $n\in \mathbb{N}$, let $\Omega_{T_{\beta,\alpha}}\vert_{n}$ denote the set of all length $n$ admissible words of $T_{\beta,\alpha}$ with respect to the partition $P_{\beta,\alpha}$. Fix $n \in \mathbb{N}$ sufficiently large, let $\omega$ be the lexicographically smallest word in $\Omega_{T_{\beta,\alpha}}\vert_{n}$, let $a_{n} = a_{\beta, \alpha, n} = \sup I(\omega)$ and let $\nu \in \Omega_{T_{\beta,\alpha}}\vert_{n}$ with $\nu \succ \omega$.
By transitivity and the Markov property, there exist $k \in \mathbb{N}$ and $\xi \in \Omega_{T_{\beta,\alpha}}\vert_{k}$ with $k > n$, $I(\xi) \subset I(\omega)$, $T_{\beta,\alpha}^{k-n}(I(\xi)) = I(\nu)$, and $T_{\beta,\alpha}^j(I(\xi))$ an interval and $T_{\beta,\alpha}^{j}(x) \geq a_{n}$ for all $j \in \{1,2, \dots, k - n\}$ and $x \in I(\xi)$. In other words, there exists a linearly scaled copy of $K_{\beta,\alpha}^{+}(a_{n}) \cap I(\nu)$ in $I(\xi) \cap E_{\beta,\alpha}^{+}$. Namely, we have \begin{align}\label{eq:scaling_func} f_{\beta,\alpha,\nu,n}(K_{\beta,\alpha}^{+}(a_{n}) \cap I(\nu)) \subseteq I(\xi) \cap E_{\beta,\alpha}^{+}, \end{align} where $f_{\beta, \alpha,\nu, n} = f_{\beta,\alpha,\chi(\xi_{1})} \circ f_{\beta,\alpha,\chi(\xi_{2})} \circ \cdots \circ f_{\beta,\alpha,\chi(\xi_{k-n})}$ and $\chi \colon \Omega_{T_{\beta,\alpha}}\vert_{1} \to \{ 0, 1\}$ is defined by \begin{align*} \chi(a) = \begin{cases} 0 & \text{if} \; I(a) \subseteq [0, p_{\beta,\alpha}],\\ 1 & \text{otherwise.} \end{cases} \end{align*} Here, we recall that $f_{\beta,\alpha,0}(x) = \beta^{-1}x-\alpha\beta^{-1}$ and $f_{\beta,\alpha, 1}(x) = \beta^{-1}x-(\alpha-1)\beta^{-1}$ for $x \in [0,1]$. With the above at hand, we may conclude that \begin{align*} \dim_H (E_{\beta,\alpha}^{+}) \geq \max \{ \dim_H(K_{\beta,\alpha}^{+}(a_{n}) \cap I(\nu)) \colon \nu \in \Omega_{T_{\beta,\alpha}}\vert_{n} \} = \dim_H(K_{\beta,\alpha}^{+}(a_{n})). \end{align*} Since $n$ was chosen sufficiently large but otherwise arbitrarily, and since $a_n$ converges to zero as $n$ tends to infinity, this in tandem with \Cref{Cor_3} implies that $\dim_H (E_{\beta,\alpha}^{+}) = 1$. Since Hausdorff dimension is preserved under linear transformations, an application of \Cref{thm:G1990+Palmer79,thm:LSSS} yields, for $(\beta, \alpha) \in \Delta$ with $\Omega_{\beta,\alpha}$ a subshift of finite type, that $\dim_H (E_{\beta,\alpha}^{+}) = 1$. To conclude, let $(\beta,\alpha) \in \Delta$ be chosen arbitrarily, and let $((\beta_{n},\alpha_{n}))_{n \in \mathbb{N}}$ denote the sequence of tuples given in \Cref{Cor_1} converging to $(\beta, \alpha)$. Set $\tilde{\pi}_{\beta, \alpha}^{(n)} = \pi_{\beta,\alpha} \circ \tau^{+}_{\beta_{n},\alpha_{n}}$, and for $t$ and $s \in [0,1]$ with $t < s$, let \begin{align*} K_{\beta,\alpha}^{+}(t, s) \coloneqq \{ x \in[0, 1) \colon T_{\beta,\alpha}^{n}(x) \not \in [0,t) \cup (s, 1] \; \textup{for all} \; n \in \mathbb{N}_{0} \}. \end{align*} By \Cref{Cor_1}, \Cref{thm:Structure}, and the commutativity of the diagram in \eqref{eq:commutative_diag}, we may choose $((\beta_{n},\alpha_{n}))_{n \in \mathbb{N}}$ so that $(\tilde{\pi}_{\beta,\alpha}^{(n)}(0))_{n \in \mathbb{N}}$ is a monotonically decreasing sequence converging to zero, and $(\tilde{\pi}_{\beta,\alpha}^{(n)}(1))_{n \in \mathbb{N}}$ is a monotonically increasing sequence converging to one. Thus, by construction, for $n$ and $l \in \mathbb{N}$ with $l \geq n$, \begin{align*} K_{\beta,\alpha}^{+}(\tilde{\pi}_{\beta,\alpha}^{(n)}(0), \tilde{\pi}_{\beta,\alpha}^{(n)}(1)) \subseteq K_{\beta,\alpha}^{+}(\tilde{\pi}_{\beta,\alpha}^{(n)}(0), \tilde{\pi}_{\beta,\alpha}^{(l)}(1)), \quad \text{and} \quad K_{\beta,\alpha}^{+}(\tilde{\pi}_{\beta,\alpha}^{(n)}(0)) = \bigcup_{m \in \mathbb{N}} K_{\beta,\alpha}^{+}(\tilde{\pi}_{\beta,\alpha}^{(n)}(0), \tilde{\pi}_{\beta,\alpha}^{(m)}(1)). \end{align*} Hence, by countable stability of the Hausdorff dimension and \Cref{Cor_3}, \begin{align}\label{eq:limit_hausdorff_proj_0,1} \lim_{n \to \infty} \dim_{H}(K_{\beta,\alpha}^{+}(\tilde{\pi}_{\beta,\alpha}^{(n)}(0),\tilde{\pi}_{\beta,\alpha}^{(n)}(1))) = 1.
\end{align} Via analogous arguments to those given in the proof of \Cref{prop:char}, we have the following. \begin{enumerate} \item[($1^{*}$)] $\tilde{\pi}_{\beta,\alpha}^{(n)}([0,1])=K_{\beta,\alpha}^+(\tilde{\pi}_{\beta,\alpha}^{(n)}(0), \tilde{\pi}_{\beta,\alpha}^{(n)}(1))$ and $\tilde{\pi}_{\beta,\alpha}^{(n)}(E_{\beta_{n},\alpha_{n}}^+)=E_{\beta, \alpha}^+\cap[\tilde{\pi}_{\beta,\alpha}^{(n)}(0),\tilde{\pi}_{\beta,\alpha}^{(n)}(1)]$. \item[($3^{*}$)] For $t\in(0,1)$, we have $ \tilde{\pi}_{\beta,\alpha}^{(n)}(K_{\beta_{n},\alpha_{n}}^+(t))=K_{\beta,\alpha}^+(\tilde{\pi}_{\beta,\alpha}^{(n)}(t), \tilde{\pi}_{\beta,\alpha}^{(n)}(1)) $. \end{enumerate} For $k \in \mathbb{N}$ sufficiently large and $\nu \in \Omega_{T_{\beta_{n},\alpha_{n}}}\vert_{k}$, setting $a_{n,k} = a_{\beta_{n},\alpha_{n},k}$, from the equalities given in \eqref{eq:alt_IFS} and \eqref{eq:scaling_func}, the commutativity of the diagram in \eqref{eq:commutative_diag} and ($1^{*}$), we have that \begin{align*} f_{\beta,\alpha, \nu, k}( \tilde{\pi}_{\beta,\alpha}^{(n)}( K_{\beta_{n},\alpha_{n}}^{+}(a_{n,k}) \cap I(\nu))) =\tilde{\pi}_{\beta,\alpha}^{(n)}( f_{\beta_{n},\alpha_{n}, \nu, k}( K_{\beta_{n},\alpha_{n}}^{+}(a_{n,k}) \cap I(\nu))) \subseteq \tilde{\pi}_{\beta,\alpha}^{(n)}(E_{\beta_{n},\alpha_{n}}^{+}) \subseteq E_{\beta,\alpha}^{+}. \end{align*} This in tandem with ($3^{*}$), the fact that there exists $\nu \in \Omega_{T_{\beta_{n},\alpha_{n}}}\vert_{k}$ with \begin{align*} \dim_{H}(\tilde{\pi}_{\beta,\alpha}^{(n)}( K_{\beta_{n},\alpha_{n}}^{+}(a_{n,k}) \cap I(\nu))) = \dim_{H}(\tilde{\pi}_{\beta,\alpha}^{(n)}( K_{\beta_{n},\alpha_{n}}^{+}(a_{n, k}))), \end{align*} and the fact that Hausdorff dimension is invariant under linear scaling, implies that \begin{align*} \dim_{H}(K_{\beta,\alpha}^{+}(\tilde{\pi}_{\beta,\alpha}^{(n)}(a_{n,k}), \tilde{\pi}_{\beta,\alpha}^{(n)}(1))) =\dim_{H}(\tilde{\pi}_{\beta,\alpha}^{(n)}( K_{\beta_{n},\alpha_{n}}^{+}(a_{n,k}))) \leq \dim_{H}(E_{\beta,\alpha}^{+}). \end{align*} This in tandem with \Cref{Cor_3}, the equality given in \eqref{eq:limit_hausdorff_proj_0,1}, the observations that, for $n \in \mathbb{N}$, the sequence $(\tilde{\pi}_{\beta,\alpha}^{(n)}(a_{n,k}))_{k \in \mathbb{N}}$ is monotonically decreasing with $\lim_{k \to \infty} \tilde{\pi}_{\beta,\alpha}^{(n)}(a_{n,k}) =\tilde{\pi}_{\beta,\alpha}^{(n)}(0)$, and that, for $l$ and $m \in \mathbb{N}$ with $l \geq m$, \begin{align*} K_{\beta,\alpha}^{+}(\tilde{\pi}_{\beta,\alpha}^{(n)}(a_{n,m}), \tilde{\pi}_{\beta,\alpha}^{(n)}(1)) \subseteq K_{\beta,\alpha}^{+}(\tilde{\pi}_{\beta,\alpha}^{(n)}(a_{n,l}), \tilde{\pi}_{\beta,\alpha}^{(n)}(1)) \quad \text{and} \quad K_{\beta,\alpha}^{+}(\tilde{\pi}_{\beta,\alpha}^{(n)}(0), \tilde{\pi}_{\beta,\alpha}^{(n)}(1)) = \bigcup_{k \in \mathbb{N}} K_{\beta,\alpha}^{+}(\tilde{\pi}_{\beta,\alpha}^{(n)}(a_{n,k}), \tilde{\pi}_{\beta,\alpha}^{(n)}(1)), \end{align*} and the countable stability of the Hausdorff dimension, yields the required result.
\end{proof} \subsection*{Examples and applications} \begin{figure}[t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{dimhbeta110alpha0.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{dimhsymbetagoldenmean.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{dimhbeta10alpha0001.png} \end{subfigure} \caption{Graphs of $\eta_{\beta,\alpha}$: on the left, $\eta_{\beta}(t)$ with $\beta$ such that $\tau_\beta^-(1)=(110)^\infty$; in the middle, $\eta_{\beta,\alpha}$ for $(\beta, \alpha) \in \Delta$ with $\tau_{\beta,\alpha}^-(1)=(110)^\infty$ and $\tau_{\beta,\alpha}^+(0)=(001)^\infty$; on the right, $\eta_{\beta,\alpha}$ for $(\beta, \alpha) \in \Delta$ such that $\tau_{\beta,\alpha}^-(1)=(10)^\infty$ and $\tau_{\beta,\alpha}^+(0)=(0001)^\infty $.}\label{fig:dim} \end{figure} Let $(\beta,\alpha) \in \Delta$ be such that $\tau^-_{\beta,\alpha}(1)=(10)^\infty$. In this case, $u(\beta,\alpha)$ is equal to the golden mean, which we denote by $G$, and belongs to the set $C$. Thus, $E^{+}_{\beta,\alpha}$ contains no isolated points. From \cite[Proposition 5.2]{KKLL} and by an elementary calculation, we have that $t_{u(\beta,\alpha),c} = G^{-2}$ and $\tau_{G}^{-}(G^{-2}) = 00(10)^\infty$. This, in tandem with \Cref{prop:char(5)}, yields $t_{\beta,\alpha,c} = \pi_{\beta, \alpha}(\tau_{G}^{-}(G^{-2})) = \pi_{\beta, \alpha}(00(10)^\infty) = \alpha/(1-\beta)+1/(\beta^{3}-\beta)$, which one can show is equal to $(1-\alpha-\beta\alpha)\beta^{-2}$ using the fact that $\tau^-_{\beta,\alpha}(1)=(10)^\infty$. Moreover, by \Cref{prop:char} Part~(4), \begin{align*} \dim_H(K_{\beta,\alpha}^{+}(t))=(\log(G)/\log(\beta)) \dim_H(K_{G}^{+}(\tilde{\pi}_{\beta,\alpha}(t))), \end{align*} for all $t \in (0, 1)$. We now show that for a given $\beta\in(1,G)$ there exists a unique $\alpha\in(0,1/2)$ with $G = u(\beta,\alpha)$, or equivalently, that for a given $\beta\in(1,G)$ there exists a unique $\alpha\in(0,1/2)$ with $T_{\beta,\alpha}(1) = p_{\beta,\alpha}$. Using the definitions of the involved terms, $T_{\beta,\alpha}(1) = p_{\beta,\alpha}$ if and only if $\alpha=1-\beta^2/(\beta+1)$. Noting that $\alpha=0$ when $\beta = G$, and that $\alpha =1-\beta^2/(\beta+1)$ converges to $1/2$ as $\beta$ approaches $1$ from above, yields the required result. By \Cref{thm:LSSS}, under the assumption that $\tau^-_{\beta,\alpha}(1)=(10)^\infty$, if $\tau_{\beta,\alpha}^+(0)$ is periodic, then $\Omega_{\beta,\alpha}$ is a subshift of finite type. We now find $(\beta, \alpha) \in \Delta$ such that $\tau_{\beta,\alpha}^+(0)=(0001)^\infty$ and $\tau^-_{\beta,\alpha}(1)=(10)^\infty$. For this, we observe that \begin{align*} T_{\beta,\alpha}(0) = \alpha, \quad T_{\beta,\alpha}^2(0) = \beta\alpha+\alpha, \quad T_{\beta,\alpha}^3(0) = \beta(\beta\alpha+\alpha)+\alpha, \quad \text{and} \quad T_{\beta,\alpha}^4(0) = \beta(\beta(\beta\alpha+\alpha)+\alpha)+\alpha-1=0. \end{align*} Substituting $\alpha=1-\beta^2/(\beta+1)$ into the last equality gives \begin{align*} \beta(\beta(\beta(1-\beta^2/(\beta+1))+(1-\beta^2/(\beta+1)))+(1-\beta^2/(\beta+1)))+(1-\beta^2/(\beta+1))-1 = 0. \end{align*} This reduces to $\beta(\beta^4 -\beta^2- \beta -1) =0$. Thus, if $\beta$ is the positive real root of $\beta^4 -\beta^2- \beta -1 =0$ and $\alpha=1-\beta^2/(\beta+1)$, then $\tau_{\beta,\alpha}^+(0)=(0001)^\infty$ and $\tau^-_{\beta,\alpha}(1)=(10)^\infty$.
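The following minimal Python sketch (illustrative only, and not part of the formal argument) computes this root and the corresponding $\alpha$, and verifies that the orbit of $0$ under $T_{\beta,\alpha}$ follows the itinerary $(0001)^\infty$, i.e., that $T_{\beta,\alpha}^{4}(0)=0$: \begin{verbatim}
import numpy as np

# beta: positive root of beta^4 - beta^2 - beta - 1 = 0
beta = max(r.real for r in np.roots([1, 0, -1, -1, -1])
           if abs(r.imag) < 1e-10)
alpha = 1 - beta**2 / (beta + 1)
p = (1 - alpha) / beta                  # critical point of T_{beta,alpha}

x = 0.0
for _ in range(4):                      # itinerary of 0 should read 0, 0, 0, 1
    digit = 0 if x < p - 1e-12 else 1   # tau^+ assigns the digit 1 at x = p
    x = beta * x + alpha - digit
print(round(beta, 4), round(alpha, 4), abs(x) < 1e-10)
# -> 1.4656 0.1288 True
\end{verbatim}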
Numerically approximating $\beta$ and $\alpha$ yields $\beta\approx 1.4656$ and $\alpha\approx 0.1288$. We utilise the above, in particular \Cref{prop:char}, in studying the dimension function $\eta_{\beta,\alpha}$. Recall that if $t\not \in E^{+}_{\beta,\alpha}$, then there exists $t^*>t$ with $K^{+}_{\beta,\alpha}(t)=K^{+}_{\beta,\alpha}(t^*)$. Thus, it suffices to study $K^{+}_{\beta,\alpha}(t)$ for $t\in E^{+}_{\beta,\alpha}$. For a fixed $t\in E^{+}_{\beta,\alpha}$, with the aid of \Cref{thm:BSV14}, we find $(\beta^\prime,\alpha^\prime) \in \Delta$ with $\tau^{+}_{\beta,\alpha}(t)=\tau^{+}_{\beta^\prime,\alpha^\prime}(0)$ and $\tau^{-}_{\beta,\alpha}(1) = \tau^{-}_{\beta^\prime,\alpha^\prime}(1)$. By \Cref{prop:char} Part~(4), \begin{align*} \eta_{\beta,\alpha}(t)=\frac{\log(u(\beta,\alpha))}{\log(\beta)} \dim_H(K_{u(\beta,\alpha)}^+(\tilde{\pi}_{\beta,\alpha}(t))) \quad \text{and} \quad \eta_{\beta^\prime,\alpha^\prime}(0)=\frac{\log(u(\beta^\prime,\alpha^\prime))}{\log(\beta^\prime)} \dim_H(K_{u(\beta^\prime,\alpha^\prime)}^+(\tilde{\pi}_{\beta^\prime,\alpha^\prime}(0))). \end{align*} Since $u(\beta^\prime,\alpha^\prime)=u(\beta,\alpha)$, $\tilde{\pi}_{\beta,\alpha}(t)=\tilde{\pi}_{\beta^\prime,\alpha^\prime}(0)$ and $\eta_{\beta^\prime,\alpha^\prime}(0)=1$, we have $\eta_{\beta,\alpha}(t)=\log(\beta^\prime)/\log(\beta)$. In summary, determining the value of $\eta_{\beta, \alpha}(t)$ reduces to finding such $\alpha^\prime$ and $\beta^\prime$. This can be performed numerically with the aid of the monotonicity and continuity of the projection maps; see Figure \ref{fig:dim} for sample numerical outputs. \section{Winning sets of intermediate \texorpdfstring{$\beta$}{beta}-transformations: Proof of \texorpdfstring{\Cref{thm:main_2}}{Theorem 1.6}}\label{sec:proof_thm_1_6} To show that the conditions of \Cref{thm_HY_Thm_2.1} are satisfied when $T=T_{\beta, \alpha}$, for all $(\beta, \alpha) \in \Delta$ with $T_{\beta, \alpha}$ transitive and $\Omega_{\beta, \alpha}$ of finite type, we use the following lemma on the geometric lengths of cylinder sets together with the subsequent proposition. \begin{lemma}\label{lem:geometric_lengths_of_cylinders} Let $(\beta, \alpha) \in \Delta$ be such that $T_{\beta, \alpha}$ is transitive and $\Omega_{\beta, \alpha}$ is a subshift of finite type. If $\nu = \nu_{1} \cdots \nu_{\lvert \nu \rvert}$ is an admissible word with respect to the partition $P_{\beta,\alpha}$, then $\rho \beta^{-\lvert \nu \rvert} \leq \lvert I(\nu) \rvert \leq \beta^{-\lvert \nu \rvert}$, where $\rho = \min \{ \lvert I(i) \rvert \colon i \in \Lambda \}$. \end{lemma} \begin{proof} If $\lvert \nu \rvert = 1$, the result is a consequence of the fact that $\max\{ p_{\beta,\alpha}, 1 - p_{\beta,\alpha}\} \leq \beta^{-1}$, and that $I(\nu) \subseteq [0, p_{\beta,\alpha}]$ or $I(\nu) \subseteq [p_{\beta,\alpha},1]$. Therefore, we may assume that $\lvert \nu \rvert \geq 2$. Since $T_{\beta, \alpha}$ is Markov with respect to the partition $P_{\beta,\alpha}$, for $j \in \{ 0, 1, \ldots, \lvert \nu \rvert \}$, we have that $T_{\beta,\alpha}^{j}(I(\nu))$ is an interval and that $T_{\beta,\alpha}^{\lvert \nu \rvert - 1}(J(\nu)) = J(\nu_{\lvert \nu \rvert})$, where for a given admissible finite word $\omega$, we denote by $J(\omega)$ the interior of $I(\omega)$.
This implies that $\lvert I(\nu) \rvert = \beta^{-\lvert \nu \rvert + 1}\lvert I(\nu_{\lvert \nu \rvert}) \rvert$, and hence that $\rho \beta^{-\lvert \nu \rvert} \leq \rho \beta^{-\lvert \nu \rvert + 1} \leq \lvert I(\nu_{\lvert \nu \rvert}) \rvert \beta^{-\lvert \nu \rvert + 1} = \lvert I(\nu) \rvert \leq \beta^{-1} \beta^{-\lvert \nu \rvert + 1} = \beta^{-\lvert \nu \rvert}$. \end{proof} \begin{proposition}\label{prop:thm_SFT+Transitive_implies_winning} Under the hypotheses of \Cref{lem:geometric_lengths_of_cylinders}, for all $x \in [0, 1]$ and $\gamma \in (0, 1)$, we have that the geometric condition $H_{x, \gamma}$, with $T = T_{\beta, \alpha}$ and the partition $P_{\beta,\alpha}$, is satisfied. \end{proposition} \begin{proof} \Cref{lem:geometric_lengths_of_cylinders} yields \eqref{eq:condition_1} of $H_{x, \gamma}$; thus it suffices to show that \eqref{eq:condition_2} of $H_{x, \gamma}$ is satisfied. To this end, let $n-1$ denote the cardinality of $P_{\beta,\alpha}$, and observe that, since by assumption $T_{\beta, \alpha}$ is transitive, there exists an $m = m_{\beta, \alpha} \in \mathbb{N}$ such that $J(l) \subseteq T_{\beta,\alpha}^{m}(J(k))$ for all $l$ and $k \in \{0, 1, \ldots, n-1\}$, where $J(l)$ and $J(k)$ are as defined in the proof of \Cref{lem:geometric_lengths_of_cylinders}. Further, if for two admissible words $\nu$ and $\eta$ we have that $I(\nu)$ and $I(\eta)$ are $\gamma/4$-comparable, then by \Cref{lem:geometric_lengths_of_cylinders} there exists $k_{0} \in \mathbb{N}$ with $\lvert \lvert \nu \rvert - \lvert \eta \rvert \rvert \leq k_{0}$. Letting $\omega$ denote the symbolic representation of $x$ generated by $T_{\beta, \alpha}$ with respect to the partition $P_{\beta,\alpha}$, set \begin{align*} M = M_{\beta, \alpha} = \begin{cases} \max \{ \sigma^{k}(\omega) \wedge \omega \colon k \in \{1, 2, \ldots, k_{0} \}\} & \text{if} \; \omega \; \text{is not periodic},\\ \max \{ \sigma^{k}(\omega) \wedge \omega \colon k \in \{1, 2, \ldots, \operatorname{per}(\omega) -1 \}\} & \text{if} \; \omega \; \text{is periodic.} \end{cases} \end{align*} Our aim is to show that \eqref{eq:condition_2} of $H_{x, \gamma}$ is satisfied for all admissible words $\nu$ and $\eta$ with $I(\nu)$ and $I(\eta)$ $\gamma/4$-comparable and all integers $i > i^{*} = k_{0} + m + M$. For this, suppose $\nu$ and $\eta$ are $\omega\vert_{i}$-extendable with $0 \leq \lvert \lvert \nu \rvert - \lvert \eta \rvert \rvert \leq k_{0}$ and $\operatorname{dist}(I(\nu\omega\vert_{i}), I(\eta\omega\vert_{i})) > 0$. We consider the cases when $\nu$ is not a prefix of $\eta$ and when $\nu$ is a prefix of $\eta$ separately. For both of these cases we use the following facts. For $l \in \{ 0, 1, \ldots, n-1\}$ there exists a minimal $j \in \{ 1, 2, \ldots, m \}$ such that $T_{\beta, \alpha}^{j}(I(l))$ contains the interiors of at least two elements of $P_{\beta,\alpha}$. For $k \in \{1, 2, \ldots, \lvert \nu \rvert + i -1\}$ and $l \in \{1, 2, \ldots, \lvert \eta \rvert + i -1\}$, we have that $T_{\beta, \alpha}^{k}(I(\nu \omega\vert_{i}))$ and $T_{\beta, \alpha}^{l}(I(\eta \omega\vert_{i}))$ are intervals, and that $T_{\beta, \alpha}^{k}(J(\nu \omega\vert_{i})) = J(\sigma^{k}(\nu \omega\vert_{i}))$ and $T_{\beta, \alpha}^{l}(J(\eta \omega\vert_{i})) = J(\sigma^{l}(\eta \omega\vert_{i}))$. Let us consider the first of our two cases, namely when $\nu$ is not a prefix of $\eta$.
The above two facts imply that there exist $l \in \{1, 2, \ldots, m-1\}$ and $F \subseteq \{0, 1, \ldots, n-1\}$ with $\lvert F \rvert \geq 2$ such that \begin{align}\label{eq:HY-splitting_of_cylinders} I(\nu \omega\vert_{l}) = \bigcup_{k \in F} I(\nu \omega\vert_{l} k) \quad \text{and} \quad I(\eta \omega\vert_{l}) = \bigcup_{k \in F} I(\eta \omega\vert_{l} k). \end{align} Since $\nu$ is not a prefix of $\eta$, there exists $j \in \{ 1, 2, \ldots, \min\{\lvert \nu \rvert, \lvert \eta \rvert \} - 1 \}$ such that $\nu\vert_{j} = \eta\vert_{j}$ and $\nu\vert_{j+1} \prec \eta\vert_{j+1}$, or $\nu\vert_{j} = \eta\vert_{j}$ and $\nu\vert_{j+1} \succ \eta\vert_{j+1}$. Suppose that $\nu\vert_{j} = \eta\vert_{j}$ and $\nu\vert_{j+1} \prec \eta\vert_{j+1}$, and that $\omega_{1} = \max F$. Letting $k \in F \setminus \{\omega_{1}\}$, we have, for all $x \in I(\nu \omega\vert_{i})$, $y \in I(\eta \omega\vert_{l} k)$ and $z \in I(\eta \omega\vert_{i})$, that $x \leq y \leq z$. In other words, $\operatorname{dist}(I(\nu\omega\vert_{i}), I(\eta\omega\vert_{i})) \geq \lvert I(\eta \omega\vert_{l} k)\rvert$, and hence by \Cref{lem:geometric_lengths_of_cylinders}, \begin{align*} \operatorname{dist}(I(\nu\omega\vert_{i}), I(\eta\omega\vert_{i})) \geq \lvert I(\eta \omega\vert_{l} k)\rvert \geq \rho \beta^{-(\lvert \eta \rvert + l + 1 )} \geq \rho \beta^{-(\lvert \eta \rvert + m + 1)} \geq \rho \beta^{-(m + 1)} \lvert I(\eta) \rvert. \end{align*} Similarly, if $\omega_{1} \neq \max F$, setting $k = \max F$, we obtain that \begin{align*} \operatorname{dist}(I(\nu\omega\vert_{i}), I(\eta\omega\vert_{i})) \geq \lvert I(\nu \omega\vert_{l} k)\rvert \geq \rho \beta^{-(\lvert \nu \rvert + l + 1 )} \geq \rho \beta^{-(\lvert \nu \rvert + m + 1)} \geq \rho \beta^{-(m + 1)} \lvert I(\nu) \rvert. \end{align*} An analogous argument yields the result when $\nu\vert_{j} = \eta\vert_{j}$ and $\nu\vert_{j+1} \succ \eta\vert_{j+1}$. When $\nu$ is a prefix of $\eta$, the result follows using similar reasoning to the case when $\nu$ is not a prefix of $\eta$, but where we replace the first line of the argument, namely \eqref{eq:HY-splitting_of_cylinders}, by the following observation. By construction, there exists a $j \in \{ 1, 2, \ldots, \lvert \eta \rvert - \lvert \nu \rvert + M -1 \}$ such that $\nu \omega\vert_{j} = (\eta \omega\vert_{i})\vert_{j + \lvert \nu \rvert}$ but $\nu \omega\vert_{j+1} \neq (\eta \omega\vert_{i})\vert_{j + \lvert \nu \rvert + 1}$. In which case, by our two facts, there exist an $l \in \{0, 1, 2, \ldots, m-1\}$ and a subset $F$ of $\{0, 1, \ldots, n-1\}$ with $\lvert F \rvert \geq 2$ such that \[ I(\nu \omega\vert_{j+l}) = \bigcup_{k \in F} I(\nu \omega\vert_{j+l} k) \quad \text{and} \quad I((\eta \omega\vert_{i})\vert_{j + \lvert \nu \rvert + l}) = \bigcup_{k \in F} I((\eta \omega\vert_{i})\vert_{j + \lvert \nu \rvert +l} k). \qedhere \] \end{proof} \begin{proof}[{Proof of \Cref{thm:main_2}}] This is a direct consequence of \Cref{thm:G1990+Palmer79,thm_HY_Thm_2.1}, and \Cref{prop:alpha-winning_transport,prop:thm_SFT+Transitive_implies_winning}. \end{proof} \bibliographystyle{alpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction and literature review}~\label{sec:literature} In dynamic infrastructure sciences, the sensor placement (SP) problem is concerned with the time-varying selection or one-time placement of sensors, while optimizing desired objective functions. This problem exists widely in dynamic networks such as transportation systems, electric power systems, and water distribution networks (WDN). The optimal placement of water quality (WQ) sensors is a crucial issue in WDN due to the dangers posed by accidental or intentional contamination, the high cost of sensors and their installation, and their potential in performing real-time feedback control of water quality---control that requires high-frequency WQ sensor data. WQ sensor placement in WDN serves different purposes. The high-level one is minimizing the potential public health impacts of a contamination incident given a limited number of sensors. To quantify this high-level objective, the WQ literature considers various mathematical objectives and metrics. Specifically, the SP problem has been studied in~\cite{Krause2008,ostfeld2008battle,preis2008multiobjective,schal2013water,Eliades2014,Shastri2006,Aral2010} considering different contamination risks, optimization objectives, optimization formulations, uncertainty, the solution methodology and its computational feasibility, and the use of mobile sensors. Rathi and Gupta~\cite{Rathi2014} classify methodologies from over forty studies into two categories: single- and multi-objective SP problems. Two other comprehensive surveys focusing on optimization strategies are conducted in~\cite{Hart2010,Hu2018}. As mentioned above, the most common objective of sensor placement in WDN is to minimize the potential public health impact caused by a contamination incident, and it can be formulated as maximizing the coverage of water with a minimum number of sensors. Lee and Deininger introduce the concept of ``Demand Coverage'' and solve the problem using a mixed integer programming (MIP) method~\cite{lee1992optimal}. Kumar et al.~\cite{kumar1997identification} and Kansal et al.~\cite{kansal2012identification} propose heuristic methods that find optimal sensor locations one by one, selecting one optimal location first and then selecting the next by modifying the coverage matrix. To consider nodes with lower water quality, Woo et al. modify the objective by placing weights on each term and normalizing the concentrations~\cite{woo2001optimal}. Alzahrani et al.~\cite{al2003optimizing} and Afshar and Marino~\cite{afshar2012multi} use a genetic algorithm (GA) and ant colony optimization (ACO), respectively, to find optimal placement strategies that maximize demand coverage. Ghimire et al.~\cite{Ghimire2006} and Rathi and Gupta~\cite{rathi2014locations} also suggest heuristic methods to solve the problem. We briefly summarize the more recent literature on this problem, followed by identifying the key research gap. Recently, He \textit{et al.}~\cite{He2018} propose a multi-objective SP method to explicitly account for contamination probability variations. Hooshmand \textit{et al.}~\cite{Hooshmand2020} address the SP problem with the identification criterion, assuming that a limited sensor budget is available, followed by minimizing the number of vulnerable nodes using mixed integer programming (MIP). A combined management strategy for monitoring WDN is proposed in~\cite{Ciaponi2019} based on the application of water network partitioning and the installation of WQ sensors.
Winter \textit{et al.}~\cite{Winter2019} investigate optimal sensor placements by introducing two greedy algorithms in which the imperfection of sensors and multiple objectives are taken into account. Giudicianni \textit{et al.}~\cite{Giudicianni2020} present a method that relies on a priori clustering of the WDN and on the installation of WQ sensors at the most central nodes of each cluster---selected according to different topological centrality metrics. Hu \textit{et al.}~\cite{Hu2020} propose a customized genetic algorithm to solve multi-objective SP in WDN. Based on graph spectral techniques that take advantage of spectral properties of the adjacency matrix of the WDN graph, a sensor placement strategy is discussed in Di Nardo \textit{et al.}~\cite{DiNardo2018}. Different objective functions lead to different placement strategies, and Tinelli \textit{et al.}~\cite{tinelli2018impact} discuss the impact of objective function selection on the optimal placement of sensors. Zhang \textit{et al.}~\cite{zhang2020assessing} investigate global resilience by considering all likely sensor failures, an aspect that has rarely been explored. The research community has thoroughly investigated water quality sensor placement strategies considering various socio-technical objectives (as briefly discussed above). The objective of this paper is \textit{not} to develop a computational method to solve such SP problems with the aforementioned metrics/objectives. The objective herein is to find the optimal placement of water quality sensors considering an overlooked, yet significant metric: the state observability and estimation metric jointly with Kalman filtering.\footnote{In dynamic systems, the Kalman filter is a widely used algorithm that computes unmeasured state estimates of a system given a dynamic model and data from sensor measurements subject to noise.} In short, this metric maps sensor placements, given a fixed hydraulic profile, to a scalar value to be minimized. This value quantifies the observability of unmeasured WQ states (i.e., concentrations of chlorine) in the entire water network. The observability quantification metric is expressed as a state estimation error measuring the difference between the actual WQ states and their estimates. Accordingly, this proposed metric finds the optimal WQ sensor placement that minimizes the state estimation error via the vintage Kalman filter for noisy WQ dynamics and measurement models. To the best of our knowledge, this is the first attempt to find the optimal sensor placement jointly with optimizing Kalman filter performance for WQ dynamics. The most closely related research is the \textit{ensemble Kalman filter}-based technique by Rajakumar \textit{et al.}~\cite{Rajakumar2019}, where the authors explore the impact of sensor placement on the final state estimation performance. However, the study \textit{(i)} does not provide a sensor placement strategy, \textit{(ii)} mainly focuses on estimating water quality states and reaction parameters, and \textit{(iii)} does not present a dynamic WQ model to guide the optimal SP. To that end, the objective of this study is to provide a control- and network-theoretic method that determines the optimal geographic placements of water quality sensors while optimizing the Kalman filter performance. The specific paper contributions are: \begin{itemize} \item The state-space, control-theoretic dynamics depicting the evolution of WQ states, i.e., concentrations of chlorine, are shown.
Specifically, we model and track chlorine concentrations as a surrogate for contamination---this has been showcased in various studies depicting rapid depletion of chlorine upon the introduction of contaminants~\cite{yang2008modeling}. The dynamics of chlorine concentrations are represented as a time-varying state-space model. This model is then utilized to formulate the water quality sensor placement (WQSP) problem that optimizes the Kalman filter state estimation performance. This formulation \textit{(i)} takes into account and builds a mapping between a WDN observability metric and the performance of the Kalman filter and \textit{(ii)} is a set function optimization (an optimization problem that takes sets as variables) that is difficult to solve for large networks. \item To account for the time-varying nature of the dynamic WQ model (due to the changes in the hydraulic profiles that are caused by demand profiles), an algorithm that computes a sensor placement for the most common hydraulic profiles is presented. Furthermore, the scalability of this algorithm is investigated. The algorithm is based on an important theoretical property of set function optimization called submodularity. This property has been identified in recent control-theoretic studies for sensor placement strategies~\cite{Tzoumas2016,Zhang2017}. In particular, the developed approach is based on a greedy algorithm which returns a suboptimal placement strategy with guarantees on the distance to optimality. An efficient implementation of the algorithm is also presented. Compared to~\cite{Tzoumas2016,Zhang2017}, the proposed algorithm takes into account the time-varying nature of WQ dynamics. \item Thorough case studies on three water distribution networks under different conditions are presented. The case studies consider varying scales of water networks, significant demand variability, different numbers of allocated sensors, and their impact on the state estimation performance and the WQSP solution. Important observations and recommendations for water system operators are given. Github codes are included for reproducibility. \end{itemize} The rest of the paper is organized as follows. Section~\ref{sec:Ctrl-WQM} introduces the network-oriented water quality dynamic model by presenting the models of each component in detail. An abstract, linear, state-space format for the water quality model is given first, considering the first-order reaction model with known reaction rate coefficients. WQ observability and its metric (the observability Gramian) are introduced in Section~\ref{sec:WQSP}, and then the WQSP problem is formulated and solved in Section~\ref{sec:WQSPformuation} by taking advantage of the submodularity property of set function optimization. A scalable implementation of the problem is showcased. Section~\ref{sec:test} presents case studies to support the computational algorithms. Appendix~\ref{sec:appa} outlines components of the scalable implementation of the presented computational methods. The notation for this paper is introduced next. \vspace{1em} \noindent \textit{\textbf{Paper's Notation}} $\;$ Italicized, boldface upper and lower case characters represent matrices and column vectors: $a$ is a scalar, $\boldsymbol a$ is a vector, and $\boldsymbol A$ is a matrix. Matrix $\boldsymbol I_n$ denotes an identity square matrix of dimension $n$-by-$n$, whereas $\boldsymbol 0_{m \times n}$ denotes a zero matrix with size $m$-by-$n$. The notations $\mathbb{R}$ and $\mathbb{R}_{++}$ denote the sets of real and positive real numbers.
The notations $\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$ denote a column vector with $n$ elements and an $m$-by-$n$ matrix in $\mathbb{R}$. For any two matrices $\boldsymbol A$ and $\boldsymbol B$ with the same number of columns, the notation $\{\boldsymbol A, \boldsymbol B\}$ denotes $[\boldsymbol A^\top \ \boldsymbol B^\top]^\top$. For a random variable $\boldsymbol x \in \mathbb{R}^n$, $\mathbb{E}(\boldsymbol x)$ is its expected value, and its covariance is denoted by $\mathbb{C}(\boldsymbol x) = \mathbb{E}\left( (\boldsymbol x - \mathbb{E}(\boldsymbol x))(\boldsymbol x - \mathbb{E}(\boldsymbol x))^\top \right)$. \section{State-Space Water Quality Dynamic Model}~\label{sec:Ctrl-WQM} We model the WDN by a directed graph $\mathcal{G} = (\mathcal{W},\mathcal{L})$. Set $\mathcal{W}$ defines the nodes and is partitioned as $\mathcal{W} = \mathcal{J} \bigcup \mathcal{T} \bigcup \mathcal{R}$, where $\mathcal{J}$, $\mathcal{T}$, and $\mathcal{R}$ are the collections of junctions, tanks, and reservoirs. For the $i$-th node, set $\mathcal{N}_i$ collects its neighboring nodes (any two nodes connected by a link) and is partitioned as $\mathcal{N}_i = \mathcal{N}_i^\mathrm{in} \bigcup \mathcal{N}_i^\mathrm{out}$, where $\mathcal{N}_i^\mathrm{in}$ and $\mathcal{N}_i^\mathrm{out}$ are the collections of inflow and outflow nodes. Let $\mathcal{L} \subseteq \mathcal{W} \times \mathcal{W}$ be the set of links, and define the partition $\mathcal{L} = \mathcal{P} \bigcup \mathcal{M} \bigcup \mathcal{V}$, where $\mathcal{P}$, $\mathcal{M}$, and $\mathcal{V}$ represent the collections of pipes, pumps, and valves. In this paper, we use the Lax-Wendroff scheme~\cite{lax1964difference} to space-discretize pipes, and each pipe with length $L$ is split into $s_{L}$ segments. The numbers of junctions, reservoirs, tanks, pipes, pumps, and valves are denoted by $n_{\mathrm{J}}$, $n_{\mathrm{R}}$, $n_{\mathrm{TK}}$, $n_{\mathrm{P}}$, $n_{\mathrm{M}}$, and $n_{\mathrm{V}}$. Hence, the numbers of nodes and links are $n_\mathrm{N} = n_{\mathrm{J}}+n_{\mathrm{R}}+n_{\mathrm{TK}}$ and $n_\mathrm{L} = n_{\mathrm{P}} \cdot s_{L} +n_{\mathrm{M}}+n_{\mathrm{V}}$. The principal component of the presented state-space, control-theoretic water quality modeling is a state vector defining the concentrations of the disinfectant (chlorine) in the network. Concentrations at nodes such as junctions, reservoirs, and tanks are collected in the vector $\boldsymbol c_\mathrm{N} \triangleq \{\boldsymbol c_\mathrm{J}, \boldsymbol c_\mathrm{R}, \boldsymbol c_\mathrm{T} \} $; concentrations at links such as pipes and pumps are collected in $\boldsymbol c_\mathrm{L} \triangleq \{\boldsymbol c_\mathrm{P}, \boldsymbol c_\mathrm{M}, \boldsymbol c_\mathrm{V} \} $. We define the WQ state $\boldsymbol x(t) \triangleq \boldsymbol x$ at time $t$ as: $\boldsymbol x(t) = \{\boldsymbol c_\mathrm{N},\boldsymbol c_\mathrm{L} \} = \{\boldsymbol c_\mathrm{J}, \boldsymbol c_\mathrm{R}, \boldsymbol c_\mathrm{T}, \boldsymbol c_\mathrm{P}, \boldsymbol c_\mathrm{M}, \boldsymbol c_\mathrm{V}\} \in \mbb{R}^{n_x}, n_x = n_\mathrm{N} + n_\mathrm{L}. $ We also make two assumptions: \textit{(i)} the mixing of the solute is complete and instantaneous at junctions and in tanks, following a continuously stirred tank reactor (CSTR) model~\cite{rossman2000epanet}, and \textit{(ii)} a first-order single-species reaction describing disinfectant decay both in the bulk flow and at the pipe wall is assumed herein.
These assumptions are widely used in the literature~\cite{rossman1996numerical,basha2007eulerian,shang2008epanet}. \subsection{Conservation of mass}\label{sec:conservation} The water quality model represents the movement of all chemical and/or microbial species (contaminants, disinfectants, DBPs, metals, etc.) within a WDN as they traverse various components of the network. Specifically, we consider the single-species interaction and dynamics of chlorine. This movement or time-evolution is based on three principles: \textit{(i)} \textit{mass balance in pipes}, which is represented by chlorine transport in differential pipe lengths by advection in addition to its decay/growth due to reactions; \textit{(ii)} \textit{mass balance at junctions}, which is represented by complete and instantaneous mixing of all inflows, that is, of the concentrations of chlorine in the links flowing into a junction; and \textit{(iii)} \textit{mass balance in tanks}, which is represented by a continuously stirred tank reactor (CSTR)~\cite{rossman2000epanet} model with complete and instantaneous mixing and growth/decay reactions. The modeling of each component is introduced next. \subsubsection{Chlorine transport and reaction in pipes} The water quality modeling for pipes involves modeling the chlorine transport and reaction by the 1-D advection-reaction (A-R) equation. For any Pipe $i$, the 1-D A-R model is given by the PDE \begin{equation} ~\label{equ:adv-reac} {\partial_t c_\mathrm{P}} = -v_{i}(t) {\partial_x c_\mathrm{P}} + r_{i} c_\mathrm{P} , \end{equation} \noindent where $v_{i}(t)$ is the flow velocity and $r_{i}$ is the first-order reaction rate, which remains constant and is related to the bulk and wall reaction rates and the mass transfer coefficient between the bulk flow and the pipe wall~\cite{basha2007eulerian,rossman1996numerical}. Here, the Lax-Wendroff (L-W) scheme~\cite{lax1964difference} shown in Fig.~\ref{fig:lax} is used to approximate the solution of the PDE~\eqref{equ:adv-reac} in space and time; this model has been used and accepted in the literature~\cite{rossman1996numerical,morais2012fast,fabrie2010quality}. Pipe $i$ with length $L_{i}$ is split into $s_{L_{i}}$ segments, and the discretized form for segment $s$ is given by \begin{equation}~\label{equ:adv-reac-lax} \hspace{-1em} c_{i,s}(t\hspace{-1pt}+\hspace{-1pt}\Delta t) = \underline{\alpha} c_{i,s-1}(t) + (\alpha + r_i)c_{i,s}(t) +\bar{\alpha} c_{i,s+1}(t), \end{equation} where the L-W coefficients for the previous, current, and next segments are $\underline{\alpha} = 0.5 \beta (1+\beta)$, ${\alpha} = 1- \beta^2 $, and $\bar{\alpha} = -0.5 \beta (1-\beta)$. Note that $\beta \in \left(0,1\right]$ for Pipe $i$ at time $t$ is a constant related to the stability condition of the L-W scheme, given by ${v_{i}(t) \Delta t}(\Delta x_{i})^{-1}$, where $\Delta t$ and $\Delta x_{i}$ are the time step and the space-discretization step in Fig.~\ref{fig:lax}. Hence, to ensure stability of the L-W scheme, the water quality time step must satisfy $\Delta t \leq \min({\Delta x_{i}}/v_{i}(t))$ for all $i \in \mathcal{P}$. The L-W scheme coefficients $\underline{\alpha}$, $\alpha$, and $\bar{\alpha}$ are functions of time but vary much more slowly than $\boldsymbol x(t)$; once $\Delta t$ and $\Delta x_i$ are fixed, they change only when $v_i(t)$ changes, that is, once per hydraulic time step.
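To make the discretization concrete, the following minimal Python sketch (illustrative only; the variable names are ours, and the downstream boundary treatment is a simplifying assumption, since in the full model the boundary segments couple to the node concentrations through $\boldsymbol A_\mathrm{N}$) advances the segment concentrations of a single pipe by one WQ time step using~\eqref{equ:adv-reac-lax}: \begin{verbatim}
import numpy as np

def lw_pipe_step(c, c_up, v, dt, dx, r):
    """One step of eq. (2) for the segment concentrations c of one pipe.

    c_up is the upstream node concentration; the downstream boundary is
    approximated here with a zero-gradient ghost segment (an assumption --
    the paper couples boundaries through the junction balances instead).
    r is the discrete first-order reaction term as it appears in eq. (2).
    """
    b = v * dt / dx                 # Courant number; stability needs b <= 1
    assert 0 < b <= 1
    a_prev = 0.5 * b * (1 + b)      # weight of segment s-1
    a_curr = 1.0 - b**2             # weight of segment s
    a_next = -0.5 * b * (1 - b)     # weight of segment s+1
    ext = np.concatenate(([c_up], c, [c[-1]]))
    return a_prev * ext[:-2] + (a_curr + r) * ext[1:-1] + a_next * ext[2:]
\end{verbatim}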
Equation~\eqref{equ:adv-reac-lax} can be lumped in a matrix-vector form for all segments $s$ of all Pipes $i \in \mc{P}$ as: \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{Fig_1.pdf} \caption{Time-space discretization of Pipe $i$ based on the L-W scheme. } \label{fig:lax} \vspace{-1.em} \end{figure} \begin{equation}~\label{equ:abs-pipe-mass} \boldsymbol c_\mathrm{P}(t+ \Delta t) = \boldsymbol A_\mathrm{P}(t) \boldsymbol c_\mathrm{P}(t) + \boldsymbol A_\mathrm{N}(t) \boldsymbol c_\mathrm{N}(t), \end{equation} where the matrices $\boldsymbol A_\mathrm{P}$ and $\boldsymbol A_\mathrm{N}$ map the scalar equation~\eqref{equ:adv-reac-lax} into the vector form~\eqref{equ:abs-pipe-mass}. The Github codes of this paper~\cite{wang_2020} show how these matrices are computed. \subsubsection{Chlorine mass balance at junctions}~\label{sec:mbjunction} Mass conservation of the disinfectant (i.e., chlorine) for Junction $i$ at time $t$ can be described by \begin{equation} \hspace{-1.5em} \textstyle \sum_{k = 1}^{|\mathcal{N}_i^\mathrm{in}|} q_{ki}(t)c_{ki}(t) = d_i(t) c_{i}(t) + \textstyle \sum_{j = 1}^{|\mathcal{N}_i^\mathrm{out}|} q_{ij}(t) c_{ij}(t),~\notag \end{equation} where $ \{ ki : k \in \mathcal{N}_i^\mathrm{in} \} $ and $ \{ ij : j \in \mathcal{N}_i^\mathrm{out} \} $ represent the sets of links with inflows and outflows of Junction $i$; $d_i$ is its demand; $q_{ki}(t)$ and $q_{ij}(t)$ are the flow rates in Links $ki$ and $ij$; and $c_{ki}(t)$ and $c_{ij}(t)$ are the corresponding concentrations. Specifically, when the links are pipes, $c_{ki}(t)$ and $c_{ij}(t)$ are the concentrations of the last segment of Pipe $ki$ and the first segment of Pipe $ij$. The matrix form when considering all junctions is given as \begin{equation} ~\label{equ:abs-junction-mass} \boldsymbol c_\mathrm{J}(t+ \Delta t) = \boldsymbol A_\mathrm{J}(t) \boldsymbol c_\mathrm{J}(t) + \boldsymbol A_\mathrm{L}(t) \boldsymbol c_\mathrm{L}(t). \end{equation} \subsubsection{Chlorine mass balance at tanks} Akin to junctions, we can express the mass balance equations for each tank; the details are similar and omitted for brevity and ease of exposition. With that in mind, the provided Github codes present all of the necessary details that are required to arrive at the high-fidelity state-space description. We directly give the matrix form for all tanks as \begin{equation} ~\label{equ:abs-tank-mass} \boldsymbol c_\mathrm{T}(t+ \Delta t) = \boldsymbol A_\mathrm{T}(t) \boldsymbol c_\mathrm{T}(t) + \boldsymbol A'_\mathrm{P}(t) \boldsymbol c_\mathrm{P}(t), \end{equation} where $\boldsymbol A_\mathrm{T}$ is expressed in terms of the tank volumes $\boldsymbol V_\mathrm{T}$, the time step $\Delta t$, and the flow rates into and out of the tanks. \subsubsection{Chlorine mass balance at reservoirs} Without loss of generality, it is assumed that the chlorine sources are located only at reservoirs and that the concentration at a reservoir is constant. That is, \begin{equation}\label{equ:abs-reservoir} \boldsymbol c_\mathrm{R}(t + \Delta t) = \boldsymbol c_\mathrm{R}(t) . \end{equation} \subsubsection{Chlorine transport in pumps and valves}~\label{sec:PumpandValve} We consider the \textit{lengths} of pumps to be null, i.e., the distance between a pump's upstream node and downstream node is zero; hence pumps neither store any water nor are discretized into segments. Therefore, the concentration at a pump equals the concentration of the upstream node (a reservoir) it is connected to.
That is, \begin{equation} \hspace{-1em} c_{j}(t+\Delta t) = c_i(t + \Delta t) = c_i(t) = c_j(t), i \in \mathcal{R}, j \in \mathcal{M},~\notag \end{equation} and the corresponding matrix form for pumps is \begin{equation}~\label{equ:abs-pump} \boldsymbol c_\mathrm{M}(t+\Delta t) = \boldsymbol c_\mathrm{M}(t). \end{equation} As for valves installed on pipes, a valve is simply treated as a segment of the pipe it is installed on. In this case, the concentration in a valve equals the corresponding segment concentration in the pipe. We next show how these matrix forms yield the state-space formulation of the water quality model. \subsection{Water quality modeling in state-space form}~\label{sec:space-state} The briefly summarized water quality model of each component from the previous section can be written as a state-space Linear Difference Equation (LDE) as in~\eqref{equ:de-abstract1}, where $\boldsymbol I$ is an identity matrix of appropriate dimension. \begin{figure}[h] \begin{equation}~\label{equ:de-abstract1} \hspace{-2em}\includegraphics[width=0.92\linewidth,valign=c]{Fig_2.pdf}. \end{equation} \end{figure} For ease of exposition, we set $\Delta t = 1 \sec$ and replace the time-index $t$ with the time-index $k$. The state-space form of the water quality model is presented as a linear time-varying (LTV) system: \begin{equation}~\label{equ:ltv} {\boldsymbol{x}}(k+1) =\boldsymbol A(k) \boldsymbol{x}(k)+\boldsymbol {w}(k), \;\; \boldsymbol y (k)=\boldsymbol C \boldsymbol{x}(k)+ \boldsymbol{v}(k), \end{equation} where $\boldsymbol x(k) \in \mbb{R}^{n_x}$ is the state vector defined above; $\boldsymbol y(k) \in \mbb{R}^{n_y}$ represents a vector of data from WQ sensors; $\boldsymbol {w}(k) \in \mbb{R}^{n_x}$ and $\boldsymbol{v}(k) \in \mbb{R}^{n_y}$ are the process and measurement noise; and $\boldsymbol C \in \mbb{R}^{n_y \times n_x}$ is a matrix depicting the location of the placed sensors, where $n_y \ll n_x$. We note the following. First, although the argument of the state-space matrix $\boldsymbol A(k)$ is in terms of $k$, this is somewhat of an abuse of notation, seeing that $\boldsymbol A(k)$ encodes the hydraulic profile (heads and flow rates), which does not change with the same frequency as the water quality states $\boldsymbol x(k)$. Hence, the state-space model~\eqref{equ:ltv} is time-varying in that the system matrix $\boldsymbol A(k)$ changes across hydraulic simulations, but it remains the same matrix $\boldsymbol A$ within a single simulation. Second, and without loss of generality, the input vector from booster stations is implicitly embedded within the state-space matrix $\boldsymbol A$. Third, for all $k \geq 0$, it is assumed that the initial condition, the process noise $\boldsymbol {w}(k)$, and the measurement noise $\boldsymbol v(k)$ are uncorrelated, and that the noise variance of each sensor is $\sigma^2$. Finally, we would like to point out that extensive details for the above state-space model can be found in our recent work on model predictive control of water quality dynamics~\cite{wang2020effective}. \section{Observability Metrics for WQ Dynamics}~\label{sec:WQSP} The objective of this section is two-fold. First, to introduce water system engineers and researchers to control-theoretic approaches for ensuring or optimizing the observability of the water quality dynamics. Herein, observability is defined as the ability to estimate water quality model states $\boldsymbol x(k)$ from available measurements $\boldsymbol y(k)$ via a state estimation routine.
This provides situational awareness for the operator given data from a few water quality sensors. Second, to define a simple observability metric that maps the number and locations of fixed sensors to a scalar metric acting as a proxy for the quality of state estimation. \subsection{Metrics for observability and its interpretations}~\label{sec:metric} In dynamic systems theory, observability is a measure of how well the system state vector $\boldsymbol{x}(k) \in \mbb{R}^{n_x}$ can be inferred from knowledge of its output $\boldsymbol{y}(k) \in \mbb{R}^{n_y}$ over either finite- or infinite-time horizons. In particular, given sensor data $\boldsymbol y(0), \boldsymbol y(1), \ldots, \boldsymbol y(k_f-1)$ for a finite number $k_{f} = k_{final}$ of time-steps, observability is concerned with reconstructing or estimating the initial unknown state vector $\boldsymbol x(0)$ from the $k_f$ measurements, and subsequently computing $\boldsymbol x(1), \ldots, \boldsymbol x(k_f)$ assuming a noiseless system. Accordingly, a linear dynamic system (such as the water quality model~\eqref{equ:ltv}) is observable if and only if the observability matrix for $k_f$ time-steps \begin{equation} \mathcal{O}(k_f) = \{\boldsymbol C, \boldsymbol C \boldsymbol A, \hdots, \boldsymbol C \boldsymbol A^{k_f-1}\} \in \mbb{R}^{k_f n_y \times n_x }~\notag \end{equation} is full column rank~\cite{hespanha2018linear}, i.e., $\rank(\mc{O}(k_f))=n_x$, assuming that $k_fn_y > n_x$. In this section, and for brevity, we assume that the hydraulic variables are not changing during each hydraulic simulation period, and hence $\boldsymbol A(k)= \boldsymbol A$. With that in mind, the proposed sensor placement formulations consider changing hydraulic simulations. For the infinite-time horizon case with $k_f = \infty$ (that is, data have been collected over a long period of time), a system is observable if and only if the observability matrix $\mathcal{O}(k_f=n_x) \in \mbb{R}^{n_x n_y \times n_x }$ is full column rank~\cite{hespanha2018linear}. However, observability is a binary metric---it cannot indicate \textit{how observable} a dynamic system is. Due to the complexity and dimension of the water quality model~\eqref{equ:ltv}, this dynamic model is \textit{not} observable, i.e., it fails the aforementioned rank condition for various water networks and hydraulic simulation profiles. Specifically, it is virtually impossible to accurately reconstruct all chlorine concentrations (states $\boldsymbol x(k)$) unless water quality sensors are ubiquitously available and widespread in the network, i.e., installed at each junction. To that end, a more elaborate, non-binary quantitative metric for observability is needed for the water quality model and the sensor placement problem. One metric is based on the \textit{observability Gramian}~\cite{hespanha2018linear}, defined as the $k_f$-step sum of matrices \begin{equation} \boldsymbol W(k_f)=\sum_{\tau=0}^{k_f}\left(\boldsymbol A^{\top}\right)^{\tau} \boldsymbol C^{\top} \boldsymbol C \boldsymbol A^{\tau}.~\notag \end{equation} The system is observable at time-step $k_f$ if the matrix $\boldsymbol W(k_f)$ is nonsingular and unobservable if $\boldsymbol W(k_f)$ is singular. Similarly, this definition extends to the infinite-horizon case with $k_f = \infty$. However, $\boldsymbol W$ is still a matrix, and the aforementioned observability-singularity discussion is still binary. As a result, various non-binary metrics have been explored in the literature~\cite{Summers2013,Summers2016}.
These non-binary metrics include the minimum eigenvalue $\lambda_{\mathrm{min}}(\boldsymbol W)$, the log-determinant $\log \operatorname{det} (\boldsymbol W)$, the $\operatorname{trace} (\boldsymbol W)$, and the sums or products of the first $m$ eigenvalues $\lambda_1,\hdots,\lambda_m$ of $\boldsymbol W$. These metrics differ in their practical application, interpretation, and theoretical properties; the reader is referred to~\cite{Summers2016} for a thorough discussion. In this paper, we utilize the $\log \operatorname{det} (\boldsymbol W)$ metric for reasons outlined in the ensuing sections, but the formulations presented in the paper can be extended to other metrics.
\subsection{Metrics for water quality observability matrix}~\label{sec:aug}
In this section, we discuss the observability metric utilized for the sensor placement problem. To do so, we consider the time-invariant state-space matrices for a single hydraulic simulation $k \in [0,k_f]$, i.e., a single hydraulic simulation and demand profile. That is, to ease the ensuing exposition, we assume that the state-space matrix $\boldsymbol A(k) = \boldsymbol A$ is fixed rather than time-varying (the actual methods consider time-varying demand patterns). The objective of this section is to formulate a water quality observability metric that maps the collection of water quality data $\boldsymbol y(k)$ from a specific number of sensors $n_y$ to a scalar observability measure under the noise of the water quality dynamics and measurement models. First, consider the augmented measurement vector $\bar{\boldsymbol y}(k_f) \triangleq \{\boldsymbol y(0), \ldots,\boldsymbol y(k_f) \}$ for $k_f+1$ time-steps. Given~\eqref{equ:ltv}, this yields:
\begin{equation}~\label{equ:ymeasurement} \hspace{-1.47em}\includegraphics[width=0.92\linewidth,valign=c]{Fig_3.pdf}, \end{equation}
where $\boldsymbol z(k_f)$ lumps the unknown initial state $\boldsymbol x_0 = \boldsymbol x(0)$ and the process noise $\boldsymbol w(k_f)$, and $\bar{\boldsymbol v}(k_f)$ collects all measurement noise. Note that the left-hand side of~\eqref{equ:ymeasurement} is known, whereas the vectors $\boldsymbol z(k_f)$ and $\bar{\boldsymbol v}(k_f)$ are unknown. To that end, the problem of estimating $\boldsymbol z(k_f) \triangleq \boldsymbol z \in \mbb{R}^{n_z}$, where $n_z= (k_f+1)n_x$, is important to gain network-wide observability of the water quality state $\boldsymbol x$, which in turn guides the real-time estimation. As a probabilistic surrogate for estimating $\boldsymbol z$, we utilize the minimum mean square error (MMSE) estimate $\hat{\boldsymbol z}$, which minimizes $\mbb{E}\big(\|\boldsymbol z - \hat{\boldsymbol z}\|_2^2\big)$, and its corresponding posterior error covariance matrix $\Sigma_{\boldsymbol z}$. These two quantities provide estimates of the mean and variance of the unknown variable $\boldsymbol z(k_f)$.
Interestingly, these can be written in terms of the sensor noise variance $\sigma^2$, the collected sensor data $\bar{\boldsymbol y}(k_f)$, the observability-like matrix $\mathcal{\boldsymbol O}(k_f)$ in~\eqref{equ:ymeasurement}, and the expectation and covariance of the unknown variable $\boldsymbol z(k_f)$ given by
\begin{equation} \mathbb{E}(\boldsymbol z(k_f)), \; \mathbb{C}(\boldsymbol z(k_f)) = \mathbb{E}\left( (\boldsymbol z - \mathbb{E}(\boldsymbol z))(\boldsymbol z - \mathbb{E}(\boldsymbol z))^\top \right). ~\notag \end{equation}
Given these developments, and to guide the sensor placement problem formulation, a metric is needed to map the covariance matrix $\Sigma_{\boldsymbol z}$ to a \textit{scalar} value. In particular, the metric $\log \operatorname{det}\left(\Sigma_{\boldsymbol z}\right)$, which maps the $n_z$-by-$n_z$ matrix $\Sigma_{\boldsymbol z}$ to a scalar, achieves that. Fortunately, $\log \det (\Sigma_{\boldsymbol z})$ has a closed-form expression given by
\begin{equation}~\label{equ:closedform} \hspace{-0.352cm}\log \det (\Sigma_{\boldsymbol z}) = 2 n_z \log (\sigma)\hspace{-2pt}-\hspace{-2pt}\log \det \left(\sigma^{2} \mathbb{C}^{-1}\left(\boldsymbol z\right)+\boldsymbol W_o \right),
\end{equation}
where $\boldsymbol W_o = \mathcal{\boldsymbol O}^{\top}(k_f)\mathcal{\boldsymbol O}(k_f)$. The reader is referred to~\cite{Tzoumas2016} for the derivation of~\eqref{equ:closedform}. We note the following: \textit{(i)} the closed-form expression of $\log \operatorname{det}\left(\Sigma_{\boldsymbol z}\right)$ in~\eqref{equ:closedform} assumes a \textit{fixed} sensor placement while associating a scalar measure of water quality observability with a collection of sensor data and the system's parameters. This closed-form expression is too complex to be incorporated directly within a sensor placement formulation and does not allow for near real-time state estimation; the next section discusses simple solutions to these issues. \textit{(ii)} We use the $\log \det (\cdot)$ metric here as it is endowed with desirable theoretical properties (namely super/sub-modularity) that make it amenable to large-scale networks, it exhibits a closed-form expression as in~\eqref{equ:closedform}, and it has been used in various sensor placement studies in the literature. With that in mind, other metrics can be used, including the $\mathrm{trace}$ operator.
\subsection{Relationship with the Kalman filter}
The above discussions yield a metric that can be used for quantifying observability of the water quality model~\eqref{equ:ltv}, in addition to probabilistically estimating the unknown initial state vector $\boldsymbol x(0)$. A related problem is real-time state estimation via the Kalman filter, which reconstructs or estimates the states $\boldsymbol x(k)$ from output measurements $\boldsymbol y(k)$ in real time. This is in contrast with the batch state estimation in~\eqref{equ:ymeasurement}. While the initial state estimation problem discussed in the previous section provides a starting point for reconstructing $\boldsymbol x$, the Kalman filter presents a more general approach to the estimation problem. In fact, ignoring the process noise $\boldsymbol w$ and setting the sensor noise variances to $\sigma^2 = 1$, the Kalman filter becomes equivalent to a real-time version of the above probabilistic estimator.
Most importantly, the metric $\log \operatorname{det} (\cdot)$ then degenerates to
\begin{align} \hspace{-0.4cm} \log \operatorname{det}\left(\Sigma_{\boldsymbol z}\right) &= - \log \operatorname{det} ( \boldsymbol I_{n_{x}} + \boldsymbol W(k_f) ) ~\label{equ:obsmetric} \\ &= - \log \operatorname{det} \left( \boldsymbol I_{n_{x}} + \sum_{\tau=0}^{k_f}\left(\boldsymbol A^{\top}\right)^{\tau} \boldsymbol C^{\top} \boldsymbol C \boldsymbol A^{\tau} \right), \notag \end{align}
where $\boldsymbol I_{n_{x}}$ is an identity matrix of size $n_x$; this is shown in the recent control-theoretic literature~\cite{Jawaid2015,Zhang2017}. In short, this is a simple metric that maps the number of installed or placed sensors (i.e., the number of rows of matrix $\boldsymbol C$) to a quantity that defines the quality of the state estimates. When no sensor is installed and $\boldsymbol C$ is a zero matrix, the observability Gramian $\boldsymbol W(k_f)$ is also a zero matrix, and the $\log \det (\cdot)$ metric defined above attains its maximum value of $0$, corresponding to the largest estimation error. When the network is fully sensed---that is, $n_y = n_x$, $\boldsymbol C = \boldsymbol I_{n_{x}}$, and all states are measured---then $\boldsymbol W(k_f) = \boldsymbol I_{n_x} + \boldsymbol A^\top \boldsymbol A + \hdots + (\boldsymbol A^{\top})^{k_f}\boldsymbol A^{k_f}$ and the smallest error is achieved. Building on that, the control-theoretic literature has thoroughly investigated bounds for the estimation error and the corresponding metric with respect to the number of sensors; see~\cite[Theorem 1]{Tzoumas2016}. The objective of this paper is to build on these developments and investigate how this metric relates to the performance of the Kalman filter. The next section formulates the water quality sensor placement problem using the introduced metric.
\section{Water Quality Sensor Placement Formulation}~\label{sec:WQSPformuation}
The objective of the presented water quality sensor placement (WQSP) formulation is to minimize the error covariance of the Kalman filter while using at most $r$ water quality sensors. In WDN, water quality sensors are installed at nodes; that is, at most $r$ sensors are selected from the set $\mathcal{W} = \mathcal{J} \bigcup \mathcal{T} \bigcup \mathcal{R}$, where $|\mathcal{W}| = n_N$, i.e., the set $\mc{W}$ contains $n_N$ possible locations at the various junctions, tanks, and reservoirs. This forms a sensor set $\mathcal{S} \subset \mathcal{W}$ with $|\mathcal{S}| = n_{\mathcal{S}} \leq r$. The specific geographic placement and locations of these $n_{\mathcal{S}}$ sensors are encoded in matrix $\boldsymbol C$ of~\eqref{equ:ltv} through binary indicators. In short, the presented WQSP seeks the optimal set $\mc{S}^*_{r}$ that optimizes the state estimation performance with at most $r$ WQ sensors. The metric discussed in the previous section was derived for a fixed state-space matrix $\boldsymbol A$; in reality, $\boldsymbol A$ (encoding network and hydraulic simulation parameters) is time-varying due to varying demand and flow/head profiles. In short, the metric~\eqref{equ:obsmetric} yields a time-varying value and hence a different state estimation performance for each hydraulic simulation, reflected by a different $\boldsymbol A(k)$ matrix. As a result, considering a varying hydraulic simulation profile within the sensor placement problem is important, i.e., the sensor placement solution needs to be {aware} of the most probable demand and hydraulic scenarios.
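To make the mapping from a candidate sensor set to the scalar metric~\eqref{equ:obsmetric} concrete, the following minimal sketch (Python/NumPy; the helper \texttt{selection\_matrix} and the stand-in dynamics are hypothetical and not part of the released implementation~\cite{wang_2020}) evaluates $f(\mc{S}) = -\log\det(\boldsymbol I_{n_x} + \boldsymbol W(k_f))$ for a set of sensed state indices:
\begin{verbatim}
import numpy as np

def selection_matrix(S, n_x):
    # One binary-indicator row per sensed state index in the set S.
    C = np.zeros((len(S), n_x))
    for row, idx in enumerate(sorted(S)):
        C[row, idx] = 1.0
    return C

def f_metric(S, A, kf):
    # f(S; A) = -log det(I + W(kf)): 0 with no sensors (maximum error),
    # decreasing as informative sensors are added.
    n_x = A.shape[0]
    C = selection_matrix(S, n_x)
    W, Atau = np.zeros((n_x, n_x)), np.eye(n_x)
    for _ in range(kf + 1):
        W += Atau.T @ (C.T @ C) @ Atau
        Atau = Atau @ A
    return -np.linalg.slogdet(np.eye(n_x) + W)[1]

rng = np.random.default_rng(1)
n_x, kf = 20, 20
A = 0.9 * rng.random((n_x, n_x)) / n_x   # stand-in for A(D_ik)
print(f_metric(set(), A, kf))            # 0.0: no sensors placed
print(f_metric({3, 11}, A, kf))          # < 0: two sensors reduce the error
\end{verbatim}
Note that $f(\emptyset) = 0$ and $f$ decreases as sensors are added, consistent with the estimation error interpretation above.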
To encode these scenarios, we define $\boldsymbol D_i \in\mathbb{R}^{n_{\mathrm{J}} \times T_h k_f}, \forall i \in \{1,\ldots,n_d\}$, for all $n_{\mathrm{J}}$ junctions during $T_h$ distinct hydraulic simulations, each lasting $k_f \, \sec$. The notation $\boldsymbol D_{i,k}$ denotes the $k$th column vector of matrix $\boldsymbol D_i$. Parameter $n_d$ reflects the number of potential demand patterns; concrete examples are given in the case study section. The demand profiles $\boldsymbol D_i \in \mathcal{D}$ define the most common varying demand profiles experienced by the system operator, obtained from historical data. Each demand profile results in a different hydraulic profile and hence a different state-space matrix\footnote{We defined $\boldsymbol A(k)$ earlier due to the change in the hydraulic and demand profiles. The notation $\boldsymbol A(\boldsymbol D_{i,k})$ is equivalent to $\boldsymbol A(k)$ but offers more clarity.} $\boldsymbol A(\boldsymbol D_{i,k})\triangleq \boldsymbol A(k)$. Given these definitions and for an a priori defined $\boldsymbol D_{i,k} \in \mc{D}$, one useful instance of the WQSP problem can be abstractly formulated as:
\begin{equation}\label{equ:WSQP} \begin{split} \mathrm{{WQSP:}} \;\;\;\; \minimize \;\; \; & f(\mathcal{S}; \boldsymbol A(\boldsymbol D_{i,k})) \\ \subjectto \;\;\;& {\mathcal{S} \subset \mathcal{W}, \;\; |\mathcal{S}| = n_\mathcal{S}} \leq r. \end{split} \end{equation}
The design variable in the optimization problem $\mathrm{WQSP}$ is the location of the installed sensors, reflected via the set $\mc{S}$ defined earlier. The objective function $f(\cdot;\cdot): \mathbb{R}^{n_{\mc{S}}} \times \mathbb{R}^{n_x \times n_x} \to \mathbb{R}$ maps a candidate sensor placement $\mc{S}$ and a given hydraulic demand profile $\boldsymbol D_{i,k}$, with its corresponding matrix $\boldsymbol A(\boldsymbol D_{i,k})$, to the state estimation (Kalman filter) performance. We note that when the objective function takes a set as its variable (i.e., $\mc{S}$ in $f(\cdot;\cdot)$), it is often referred to as a \textit{set function}; we use these terms interchangeably. In this paper, the set (objective) function takes the form of~\eqref{equ:obsmetric}, which explicitly depends on the sensor placement set $\mc{S}$ through matrix $\boldsymbol C$, as well as on the a priori known hydraulic profiles and the corresponding state-space matrices $\boldsymbol A(\boldsymbol D_{i,k})$. The constraint set of $\mathrm{WQSP}$ encodes the number of utilized sensors and their locations in the network. For small-scale water networks, one may solve the set function optimization~\eqref{equ:WSQP} via brute force, but this is impossible for large-scale networks---such problems are known to be NP-hard, i.e., computational problems for which no polynomial-time algorithm is known to produce the optimal solution. To address this computational challenge, we resort to a widely used approach in combinatorial optimization: exploiting a special property of the set function $f(\mathcal{S};\boldsymbol A(\boldsymbol D_{i,k}))$, namely sub/super-modularity, defined as follows. A set function $f(\cdot)$ is submodular if and only if $ f(\mc{A} \cup\{a\})-f(\mc{A}) \geq f(\mc{B} \cup\{a\})-f(\mc{B})$ for any subsets $\mc{A} \subseteq \mc{B} \subseteq \mc{V}$ and $a \in \mc{V} \backslash \mc{B}$. A set function $f(\cdot)$ is supermodular if $-f(\cdot)$ is submodular.
Intuitively, submodularity is a diminishing returns property whereby adding an element to a smaller set yields a larger gain than adding it to a larger set~\cite{lovasz1983submodular}. The computational framework of submodular set function optimization allows one to use greedy algorithms~\cite{cormen2009introduction} with desirable performance while remaining computationally tractable. Although greedy algorithms are known to return suboptimal solutions in general, they are also known to deliver excellent performance when the set function is sub/super-modular. Interestingly, the set function in $\mathrm{WQSP}$ given in~\eqref{equ:obsmetric} is indeed supermodular~\cite[Theorem 2]{Tzoumas2016}. Given this property, a vintage greedy algorithm---applied to solve the NP-hard problem $\mathrm{WQSP}$---returns a solution $\mc{S}$ with objective function value $f(\mc{S})$ \textit{at least} 63\% of the optimal value $f(\mathcal{S}^{*})$~\cite{Tzoumas2016}. Empirically, a large body of work~\cite{Tzoumas2016,Zhang2017,Cortesi2014} shows that the solutions provided by such greedy algorithms can be near-optimal, rather than merely 63\%-optimal.
\begin{algorithm}[t]
\small
\DontPrintSemicolon
\KwIn{Number of sensors $r$, all demand profiles $\mathcal{D}$, water network parameters, $k = i = 1$, $\tilde{\mc{S}} = \emptyset$}
\KwOut{Optimal sensor set $\mathcal{S^{\star}}$}
\textbf{Compute:} $\boldsymbol A(\boldsymbol D_{i,k})=\boldsymbol A, \forall i \in \{1,\ldots,n_d\}, \forall k \in \{1,\ldots, T_h k_f\}$\;
\For{$k\leq T_hk_f$ }{
\textcolor{blue}{// \textbf{For each single hydraulic simulation interval} $k$} \;
$i = 1$, $\bar{\mc{S}}=\emptyset $\;
\For{$i \leq n_d$ }{
\textcolor{blue}{// \textbf{For each demand profile}} \;
$j = 1, \mathcal{S}_j = \emptyset $\;
\While { $j \leq r$ }{
$e_{j} \leftarrow \mathrm{argmax} _{e \in \mc{W} \backslash \mathcal{S}_j }\left[f(\mathcal{S}_j;\boldsymbol A )-f(\mathcal{S}_j \cup\{e\};\boldsymbol A)\right]$\;
$\mathcal{S}_j \leftarrow \mathcal{S}_j \cup\left\{e_{j}\right\}$\;
$j \leftarrow j+1$
}
$\bar{\mc{S}} \leftarrow \bar{\mc{S}} \bigcup {\mc{S}}_{j}$, $i \leftarrow i+1$
}
$ {\mc{S}}^{(k)} \leftarrow \arg \max_{\mc{S} \in \bar{\mc{S}}} f(\mathcal{S}; \boldsymbol A)$\;
$\tilde{\mc{S}} \leftarrow \tilde{\mc{S}} \bigcup {\mc{S}}^{(k)}$ \;
$k \leftarrow k+k_f$
}
$\mathcal{S}^* \leftarrow \arg \max_{\mc{S} \in \tilde{\mc{S}}} {T}(\mathcal{S})$ \textcolor{blue}{\textbf{// \textbf{Greedy-optimal sensor placement}}}\;
\caption{Greedy algorithm to solve the WQSP problem.}
\label{alg:greedy}
\end{algorithm}
We apply a greedy algorithm to solve the WQSP for various hydraulic profiles; the details are given in Algorithm~\ref{alg:greedy}. The notation $\mc{S}_j$ denotes the sensor set with $j$ placed sensors, and $\mc{S}^{(k)}$ denotes the sensor set at iteration $k$. The sets $\tilde{\mc{S}}$ and $\bar{\mc{S}}$ are super-sets that collect various sets $\mc{S}$. Variable $e$ denotes an element (i.e., a candidate sensor node) of a set. The inputs to the algorithm are the number of sensors $r$, all demand profiles $\boldsymbol D_{i,k} \in \mc{D}$, and the WDN parameters; the output is the greedy-optimal sensor set $\mc{S}^*$. The first step of the algorithm is to compute all state-space matrices $\boldsymbol A(\boldsymbol D_{i,k})$ for the various demand profiles $\boldsymbol D_{i,k}$; the core greedy selection inside the while loop is sketched in the snippet below.
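The following minimal sketch illustrates Steps 8--12 of Algorithm~\ref{alg:greedy} for a single fixed $\boldsymbol A$, reusing the hypothetical \texttt{f\_metric} set function from the previous snippet; it is a Python illustration under those stand-in assumptions, not the released MATLAB implementation~\cite{wang_2020}:
\begin{verbatim}
def greedy_placement(r, candidates, A, kf):
    # Repeatedly add the element with the largest marginal decrease
    # of the supermodular set function f (Step 9 of Algorithm 1).
    S = set()
    for _ in range(r):
        gain = lambda e: f_metric(S, A, kf) - f_metric(S | {e}, A, kf)
        S.add(max(candidates - S, key=gain))
    return S

# e.g., pick r = 3 sensors out of 20 candidate node indices:
# S_star = greedy_placement(3, set(range(20)), A, kf)
\end{verbatim}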
In more detail, given a fixed hydraulic simulation interval $k$, a fixed demand profile $i$, and a fixed number of sensors $j$, Step 9 computes the optimal element in the set $\mc{W} \backslash \mc{S}_j$ that yields the best improvement in the set function reflecting the Kalman filter performance---the core component of the greedy algorithm and supermodular optimization. At each iteration inside the while loop, the algorithm finds the optimal element $e_j$ (i.e., the sensor through a junction ID) that results in the best improvement in the state estimation performance metric. Then, the $n_d$ sets $\mc{S}_j$ (the optimal sensor sets for all $n_d$ demand profiles) are stored in a master set $\bar{\mc{S}}$. This is followed by finding the optimal sensor sets from $\bar{\mc{S}}$ for all $T_h$ hydraulic simulations; these are all included in another master set $\tilde{\mc{S}}$. Finally, the algorithm terminates by computing the final {optimal sensor locations $\mc{S}^*$}, picking the combination that maximizes the occupation time ${T}(\mc{S})$ over all $\mc{S} \in \tilde{\mc{S}}$, i.e., a metric that defines how frequently a specific sensor placement is selected. We note that this algorithm returns the \textit{greedy-optimal} solution, which is not necessarily the optimal solution, as discussed above with the 63\% optimality guarantee. Thorough case studies are given in the ensuing section.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{Fig_4.pdf}
\caption{(a) Three-node network, (b) Net1, and (c) Net3.}
\label{fig:setup}
\end{figure}
\section{Case Studies}~\label{sec:test}
We present three simulation examples (the three-node network, Net1, and the Net3 network~\cite{rossman2000epanet,shang2002particle}) to illustrate the applicability of our approach. The three-node network is designed to illustrate the details of the proposed method and help readers understand the physical meaning of the results intuitively. Then, we test Net1, which has a looped network topology, considering the impacts on the final WQSP solution of \textit{(i)} the length of a single hydraulic simulation $k_f$, \textit{(ii)} the L-W scheme time-step $\Delta t$ (or, equivalently, the dynamic number of segments), \textit{(iii)} different base demands, and \textit{(iv)} different patterns. The Net3 network is used to test the scalability of the proposed algorithm and to verify our findings further. Considering that the LDE model~\eqref{equ:ltv} produces accurate state evolution, we eliminate the process noise and set the sensor noise standard deviation to $\sigma = 0.1$. The simulations are performed via the EPANET Matlab Toolkit~\cite{Eliades2016} on Windows 10 Enterprise with an Intel(R) Xeon(R) CPU E5-1620 v3 @3.50 GHz. All codes, parameters, tested networks, and results are available on Github~\cite{wang_2020}, which includes an efficient and scalable implementation of Algorithm~\ref{alg:greedy}; the details of this implementation are included in Appendix~\ref{sec:appa}.
\subsection{Three-node network}\label{sec:3-node}
The three-node network shown in Fig.~\ref{fig:setup}{a} includes one junction, one pipe, one pump, one tank, and one reservoir. A chlorine source ($ c_\mathrm{R1} = 0.8$ mg/L) is installed at Reservoir 1. The initial chlorine concentrations at or in the other components are $0$ mg/L. Only Junction 2 consumes water, and its base demand is $d_{\mathrm{base}} = 2000\ \mathrm{GPM}$. The corresponding pattern $\mathrm{Pattern\ I}$ (viewed as a row vector) for Junction 2 over 24 hours is presented in Fig.~\ref{fig:demandpattern}.
Hence, only one demand profile for a day is computed as $\boldsymbol D = d_{\mathrm{base}} \times \mathrm{Pattern\ I}$. The pipe is split into a fixed number of $s_{L_{23}} = 150$ segments, the single hydraulic simulation interval is set to $k_f = 300 \sec$, and $T_h = 24$ hydraulic simulations are considered. To help readers build intuition for the water quality model in state-space form and its observability (Gramian), illustrative code with step-by-step comments for this small three-node network is available on our Github~\cite{wang_2020}.
\begin{figure}[t]
\centering
\includegraphics[width=0.94\linewidth]{Fig_5.pdf}
\caption{Pattern for Three-node and Net1 networks.}
\label{fig:demandpattern}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[\label{fig:net1_basedemand}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_6.pdf}}{}
\subfloat[\label{fig:net1_demandpattern}]{\includegraphics[keepaspectratio=true,scale=0.20]{Fig_7.pdf}}{}
\caption{Different base demands (a) and demand patterns (b) for nodes in Net1.}
\label{fig:net1_demand}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[\label{fig:SS_3_NET1_a}]{\includegraphics[keepaspectratio=true,scale=0.07]{Fig_8.png}}{}
\subfloat[\label{fig:SS_3_NET1_b}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_9.png}}{}
\caption{Sensor placement results for the three-node network (a) and Net1 (b) in 24 hours with $k_f = 300 \sec$, $\Delta t = 5\sec$, Pattern I, Base demand $1$.}
\label{fig:SS_3_NET1}
\end{figure}
For the three-node network, there are three possible sensor locations ($\mathrm{R}1$, $\mathrm{J}2$, and $\mathrm{T}3$); therefore, $r$ is set to $1$ or $2$ in Algorithm~\ref{alg:greedy}. The final sensor placement results are presented in Fig.~\ref{fig:SS_3_NET1_a}. When $r = 1$, $\mathrm{J}2$ is the best location or the \textit{center} of the network, and when $r = 2$, locations $\mathrm{J}2$ and $\mathrm{T}3$ are selected. To quantify the centrality or importance of a specific location during 24 hours, the \textit{occupation time} ${T}(\mc{S}) = \frac{\mathrm{Selected\ time}}{\mathrm{Total\ time}}$ is defined as the percentage of time a location is selected by Algorithm~\ref{alg:greedy} during a day; this measure indicates the importance of the selected sensor locations. If the sensor location does not change during the 24 hours, the occupation time is 100\%; see Tab.~\ref{tab:sensor}. With that in mind, a 100\% sensor occupation time rarely happens for any junction in larger networks---its occurrence in the three-node network is due to the network's simple topology. We show more interesting results with varying occupation times in the next sections.
\subsection{Looped Net1 network}\label{sec:net1}
The Net1 network~\cite{rossman2000epanet,shang2002particle} shown in Fig.~\ref{fig:setup}{b} is composed of 9 junctions, 1 reservoir, 1 tank, 12 pipes, and 1 pump. Beyond optimal sensor placements, here we investigate the impact of the length of a single hydraulic simulation $k_f$, the L-W scheme time-step $\Delta t$, and the demand profile on the final sensor placement result. This network is more complex than the three-node network because its flow directions change and its flow rates (velocities) vary dramatically every hour. To balance the accuracy of the L-W scheme against the computational burden, $s_{L_i}$ for each pipe is set to the integer ceiling of $\frac{L_i}{ v_i(t) \Delta t}$; this dynamic number-of-segments rule is applied with $\Delta t = 5\sec$, and a minimal sketch of the computation is given below.
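The segment-count rule can be sketched as follows (Python; the pipe lengths and velocities are illustrative values, not Net1 data):
\begin{verbatim}
import math
# Illustrative pipe lengths [m] and flow velocities [m/s].
lengths    = [300.0, 450.0, 120.0]
velocities = [0.8, 1.2, 0.5]
dt = 5.0  # L-W time step [s]
# Paper's rule s_L = ceil(L / (v * dt)), keeping beta = v*dt/dx near 1.
segments = [math.ceil(L / (v * dt)) for L, v in zip(lengths, velocities)]
print(segments)  # [75, 75, 48]
\end{verbatim}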
Furthermore, if $\Delta t = 10\sec$ is needed instead, this can be achieved conveniently by halving $s_{L_i}$ for each pipe.
\subsubsection{Base case scenario and its result}
The base case is considered with the following settings: $\Delta t = 5\sec$, a single hydraulic simulation of $k_f = 300 \sec$, and the demand profile for a single interval $\boldsymbol D_k = \mathrm{Base \ demand \ 1} \times \mathrm{Pattern\ I}$ shown in Fig.~\ref{fig:net1_demand}. There are 11 possible sensor locations (see Fig.~\ref{fig:setup}{b}), and the number of sensor locations $r$ in~\eqref{equ:WSQP} is chosen from $\{1, 3, 5\}$. Similarly, we consider $24$ hours in Algorithm~\ref{alg:greedy}. The final result is presented in Fig.~\ref{fig:SS_3_NET1_b}, and the sensor placement results in terms of occupation time $T$ are presented in Tab.~\ref{tab:sensor}. From Fig.~\ref{fig:SS_3_NET1_b} and Tab.~\ref{tab:sensor}, when $r = 1$, $\mathrm{J}10$ is the best sensor location most of the time ($T_{\mathrm{J}10} = 66.4\%$), and the best location switches to $\mathrm{J}12$ or $\mathrm{J}21$ occasionally ($T_{\mathrm{J}12} = 18.6\%$, $T_{\mathrm{J}21} = 14.8\%$). Hence, the solution of WQSP is $\mathcal{S}_{r=1}^* = \{\mathrm{J}10\}$ (marked in blue in Tab.~\ref{tab:sensor}). Similarly, the locations with the $r$ largest occupation times are selected as the final results when $r = 3$ and $5$. These greedy-optimal placements are given by $\mathcal{S}_{r=3}^* = \{\mathrm{J}10, \mathrm{J}12, \mathrm{J}21\}$ and $\mathcal{S}_{r=5}^* = \mathcal{S}_{r=3}^* \bigcup \{\mathrm{T}2,\mathrm{J}31\}$. The nesting $\mathcal{S}_{r=3}^* \subset \mathcal{S}_{r=5}^*$ showcases the supermodularity of the set function optimization.
\begin{table}[t]
\fontsize{8}{8}\selectfont
\vspace{-0.05cm}
\centering
\setlength\tabcolsep{2 pt}
\renewcommand{\arraystretch}{1.2}
\makegapedcells
\setcellgapes{1.0pt}
\caption{Sensor placement results with detailed occupation times (Base case of Net1: $\Delta t = 5\sec$, $k_f= 300\sec$, Pattern I, Base demand 1).}
~\label{tab:sensor}
\begin{tabular}{c|c|c}
\hline
\textit{Network} & $r$ & \textit{Result (selected positions are in blue)} \\ \hline \hline
\multirow{2}{*}{\makecell{\textit{Three-}\\ \textit{node}}} & $1$ & $T_{\textcolor{blue}{\mathrm{J}2}}= 100\%$ \\ \cline{2-3}
& $2$ & $T_{\textcolor{blue}{\mathrm{J}2, \mathrm{T}3}}= 100\%$ \\ \hline
\multirow{3}{*}{\makecell{\\ \textit{Net1} \\ \\ \textit{(Base case)}}} & $1$ & $T_{\textcolor{blue}{\mathrm{J}10}} = 66.4\%,$ $T_{\mathrm{J}12} = 18.6\%,$ $T_{\mathrm{J}21} = 14.8\%$ \\ \cline{2-3}
& $3$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 68.5\%,$ $T_{\textcolor{blue}{\mathrm{J}12}} = 56.7\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 83.4\%$ \\ $T_{\mathrm{T}2} = 53.6\%$ } \\ \cline{2-3}
& $5$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 69.5\%,$ $T_{\textcolor{blue}{\mathrm{J}12}} = 87.8\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 100\%$ \\ $T_{\textcolor{blue}{\mathrm{T}2\ }} = 65.7\%,$ $T_{\textcolor{blue}{\mathrm{J}31}} = 82.1\%,$ $T_{\mathrm{J}11} = 49.4\%$ } \\ \hline \hline
\end{tabular}%
\end{table}
\begin{table}[t]
\fontsize{8}{8}\selectfont
\vspace{-0.05cm}
\centering
\setlength\tabcolsep{3pt}
\renewcommand{\arraystretch}{1.2}
\makegapedcells
\setcellgapes{1.0pt}
\caption{Sensor placement results considering the impacts of the L-W scheme time-step $\Delta t$ and the length of a single hydraulic simulation $k_f$ (Case A: $\Delta t = 10\sec$, $k_f = 300\sec$; Case B: $\Delta t = 5\sec$, $k_f = 60\sec$).}
~\label{tab:sensor_impact}
\begin{tabular}{c|c|c}
\hline
\textit{Network} & $r$ & \textit{Result (selected positions are in blue)} \\ \hline \hline
\multirow{3}{*}{\makecell{\\ \textit{Net1} \\ \\ \textit{(Case A)} }} & $1$ & $T_{\textcolor{blue}{\mathrm{J}10}} = 71.6\%,$ $T_{\mathrm{J}12} = 12.4\%,$ $T_{\mathrm{J}21} = 14.5\%$ \\ \cline{2-3}
& $3$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 71.6\%,$ $T_{\mathrm{J}12} = 48.4\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 61.2\%$ \\ $T_{\textcolor{blue}{\mathrm{T}2}} = 68.8\%$ } \\ \cline{2-3}
& $5$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 74.7\%,$ $T_{\textcolor{blue}{\mathrm{J}12}} = 82.7\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 100\%$ \\ $T_{\textcolor{blue}{\mathrm{T}2\ }} = 69.2\%,$ $T_{\textcolor{blue}{\mathrm{J}31}} = 73.0\%,$ $T_{\mathrm{J}11} = 62.6\%$} \\ \hline
\multirow{3}{*}{\makecell{\\ \textit{Net1} \\ \\ \textit{(Case B)} }} & $1$ & $T_{\textcolor{blue}{\mathrm{J}10}} = 86.5\%,$ $T_{\mathrm{J}12} = 8.3\%,$ $T_{\mathrm{J}21} = 5.1\%$ \\ \cline{2-3}
& $3$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 100\%,$ $T_{\mathrm{J}12} = 40.4\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 53.6\%$ \\ $T_{\textcolor{blue}{\mathrm{T}2}} = 74.2\%$ } \\ \cline{2-3}
& $5$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 100\%,$ $T_{\textcolor{blue}{\mathrm{J}12}} = 85.7\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 100\%$ \\ $T_{\textcolor{blue}{\mathrm{T}2\ }} = 78.1\%,$ $T_{\textcolor{blue}{\mathrm{J}31}} = 39.2\%,$ $T_{\mathrm{J}11} = 36.5\%$ } \\ \hline \hline
\end{tabular}%
\end{table}
\subsubsection{The impacts of the L-W scheme time-step and the length of a single hydraulic simulation}
Here, we study the impact of the L-W scheme time-step $\Delta t$ and the length of a single hydraulic simulation $k_f$ on the final WQSP results, in comparison with the base case from the previous section. First, only $\Delta t$ is increased from $5\sec$ (the base case) to $10\sec$ (Case A). Accordingly, the number of segments of all pipes is reduced by $50\%$, while still maintaining the accuracy of the LDE state-space model relative to the EPANET water quality simulation. We also define Case B by reducing $k_f$ from $300\sec$ (base case) to $60 \sec$. The results for this experiment are shown in Tab.~\ref{tab:sensor_impact}. We observe the following: \textit{(i)} the final results for $r = 1$ and $5$ are exactly the same as those under the base case, with differences materializing only in slightly changed occupation times; \textit{(ii)} the results for $r = 3$ differ from the base case, as the solution changes from $\mathcal{S} = \{\mathrm{J}10, \mathrm{J}12, \mathrm{J}21\}$ (base case) to $\mathcal{S} = \{\mathrm{J}10, \mathrm{T}2, \mathrm{J}21\}$ (Cases A and B). This is due to the fact that the base case did not produce a clear winner in terms of sensor placement---the occupation times ($T_{\textcolor{blue}{\mathrm{J}12}} = 56.7\%,$ $T_{\mathrm{T}2} = 53.6\%$) are similar. We note that even though the sensor placement strategy changes for $r = 3$, the final performance of the three cases is comparable: the relative error in the Kalman filter performance~\eqref{equ:WSQP} between the base case and Case A (Case B) is an acceptable $17.2\%$ ($7.9\%$), even though $\Delta t$ differs by a factor of two and $k_f$ by a factor of five.
Hence, one could draw this preliminary conclusion: the impacts of the L-W scheme time-step $\Delta t$ and the length of a single hydraulic simulation $k_f$ on the final sensor placement results are negligible, assuming that the number of pipe segments (the space-discretization parameter of the partial differential equation) is large enough to ensure the accuracy of the LDE model.
\begin{figure}[t]
\centering
\subfloat[\label{fig:net1_pattern1_base1_b}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_10.png}}{}
\subfloat[\label{fig:net1_pattern1_base1_c}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_11.png}}{}
\caption{Sensor placement results for Net1 with $k_f = 300\sec$, $\Delta t = 5\sec$, Pattern I under (a) Base demand $2$, (b) Base demand $3$.}
\label{fig:net1_base}
\end{figure}
\subsubsection{The impact of various demand patterns}
In this section, the impact of demand profiles on the final sensor placement result is explored. Note that the demand at a node over 24 hours is determined jointly by its base demand and the corresponding pattern. Furthermore, other demand patterns could reflect other days of the week, such as a weekend, rather than assuming a week-long demand curve. First, Pattern I is fixed as the stair shape in Fig.~\ref{fig:demandpattern} (the dotted line in Fig.~\ref{fig:net1_demandpattern}), and base demands 1, 2, and 3 in Fig.~\ref{fig:net1_basedemand} are used. That is, we have $n_d = 3$ different demand profiles as an input for Algorithm~\ref{alg:greedy}. Note that these base demands are generated for illustrative purposes. Base demand 1 assigns a nearly identical base demand to each node. Base demand 2 assigns larger base demands to the nodes in the right half of the network in Fig.~\ref{fig:setup}{b}, such as $\{\mathrm{J}12, \mathrm{J}13, \mathrm{J}22,\mathrm{J}23,\mathrm{J}32\}$. Base demand 3 assigns larger base demands to the nodes in the left half of the topology in Fig.~\ref{fig:setup}{b}, such as $\{\mathrm{J}11, \mathrm{J}21, \mathrm{J}31\}$.
\begin{figure}[t]
\centering
\subfloat[\label{fig:net1_pattern1_b}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_12.png}}{}
\subfloat[\label{fig:net1_pattern1_c}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_13.png}}{}
\caption{Sensor placement results for Net1 with $k_f= 300\sec$, $\Delta t = 5\sec$, and Base demand $1$ under (a) Pattern II, (b) Pattern III.}
\label{fig:net1_pattern}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.98\linewidth]{Fig_14.png}
\caption{Final sensor placement results for Net1 with $k_f= 300\sec$, $\Delta t = 5\sec$ considering five different demand profiles (fusion of Fig.~\ref{fig:SS_3_NET1_b}, Fig.~\ref{fig:net1_base}, and Fig.~\ref{fig:net1_pattern}).}
\label{fig:fusion}
\end{figure}
The final sensor placement strategies under base demands 1, 2, and 3 are shown in Fig.~\ref{fig:SS_3_NET1_b}, Fig.~\ref{fig:net1_pattern1_base1_b}, and Fig.~\ref{fig:net1_pattern1_base1_c}; the corresponding detailed occupation times are omitted for brevity.
It can be observed that the greedy-optimal location switches from $\mathcal{S}_{r=1}^* = \{ \mathrm{J}10 \}$ (under base demand 1) to $\mathcal{S}_{r=1}^* = \{ \mathrm{J}12 \}$ (under base demand 3) as the base demand changes; when $r=3$, it switches from $\mathcal{S}_{r=3}^* = \{ \mathrm{J}10, \mathrm{J}12, \mathrm{J}21 \}$ (under base demand 1) to $\mathcal{S}_{r=3}^* = \{ \mathrm{J}11, \mathrm{J}12, \mathrm{J}32 \}$ (under base demand 3); when $r=5$, it switches from $\mathcal{S}_{r=5}^* = \{ \mathrm{J}10, \mathrm{J}12, \mathrm{J}21, \mathrm{J}31, \mathrm{T}2\}$ (under base demand 1) to $\mathcal{S}_{r=5}^* = \{ \mathrm{J}11, \mathrm{J}12, \mathrm{J}23, \mathrm{J}31, \mathrm{J}32 \}$ (under base demand 2). This showcases that changing base demands, i.e., different demand profiles, indeed affects the sensor placement, but Algorithm~\ref{alg:greedy} still returns the best placement according to the chosen metric. Second, to test the impact of patterns, Patterns I, II, and III in Fig.~\ref{fig:net1_demandpattern} are used with base demand 1 fixed (see Fig.~\ref{fig:net1_basedemand}); this provides another group of $n_d = 3$ demand profiles. Again, these patterns are only used for illustrative purposes to test the algorithm's performance. Pattern I is relatively flat compared with the other patterns, while Patterns II and III vary dramatically and are complementary to each other. The final sensor placement strategies under Patterns I, II, and III are shown in Fig.~\ref{fig:SS_3_NET1_b}, Fig.~\ref{fig:net1_pattern1_b}, and Fig.~\ref{fig:net1_pattern1_c}, which can also be viewed as three corresponding matrices with only zero and one entries (not selected or selected). It can be observed that the greedy-optimal location switches from $\mathcal{S}_{r=1}^* = \{ \mathrm{J}10 \}$ to $\mathcal{S}_{r=1}^* = \{ \mathrm{J}21 \}$ and from $\mathcal{S}_{r=3}^* = \{ \mathrm{J}10, \mathrm{J}12, \mathrm{J}21 \}$ to $\mathcal{S}_{r=3}^* = \{ \mathrm{J}21, \mathrm{J}22, \mathrm{J}31 \}$. With the above comparisons, we claim that both base demands and patterns impact the final sensor placement solution in Net1. In order to quantify the similarity between two sensor placement strategies $S_1^*$ and $S_2^*$ (viewed as matrices with only zeros and ones), we define a similarity metric as $-\sum \sum \oplus (S_1^*,S_2^*)$, where $\oplus$ stands for the element-wise logical xor operator; a minimal sketch of this computation is given after this paragraph. Note that this similarity metric is always non-positive, and when the two matrices are identical, the largest similarity value of $0$ is attained. Applying this similarity metric, Fig.~\ref{fig:net1_pattern} is closer (more similar) to Fig.~\ref{fig:SS_3_NET1_b} than Fig.~\ref{fig:net1_base} is; that is, the pattern tends to cause less impact than the base demand in the Net1 case. This conclusion may extend to other networks, and it is always safe to claim that varying demand profiles at each node have a significant impact on the sensor placement strategy. If we consider all $n_d = 5$ discussed demand profiles $\boldsymbol D_i$, $i = 1, \ldots, n_d$, and run Algorithm~\ref{alg:greedy}, the final sensor placement results considering Pattern I with Base demands 1, 2, and 3, together with Patterns II and III, are shown in Fig.~\ref{fig:fusion}, which is the fusion of Fig.~\ref{fig:SS_3_NET1_b}, Fig.~\ref{fig:net1_base}, and Fig.~\ref{fig:net1_pattern}.
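The xor-based similarity metric referenced above can be sketched as follows (Python/NumPy; the 0/1 placement matrices below are illustrative only):
\begin{verbatim}
import numpy as np

def similarity(S1, S2):
    # -sum of the element-wise xor of two 0/1 placement matrices:
    # 0 for identical strategies, more negative for less similar ones.
    return -int(np.sum(np.logical_xor(S1, S2)))

# Illustrative 3 x 11 placement matrices (rows: r = 1, 3, 5; cols: nodes).
S1 = np.zeros((3, 11), dtype=int); S1[0, 0] = S1[1, 2] = 1
S2 = np.zeros((3, 11), dtype=int); S2[0, 0] = S2[1, 5] = 1
print(similarity(S1, S2))  # -2: the strategies differ in two entries
\end{verbatim}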
The final solution is $\mathcal{S}_{r=1}^* = \{ \mathrm{J}10\}$, $\mathcal{S}_{r=3}^* = \mc{S}_{r=1}^* \bigcup \{\mathrm{J}21, \mathrm{T}2\}$, and $\mathcal{S}_{r=5}^* = \mc{S}_{r=3}^* \bigcup \{\mathrm{J}12, \mathrm{J}31\}$, thereby showcasing the nested, greedy-optimal solutions of Algorithm~\ref{alg:greedy}, which exploits the supermodularity of the set function optimization.
\subsection{Net3 network}\label{sec:net3}
In this section, the conclusions drawn from the looped Net1 network in the previous section are further corroborated via the Net3 water network shown in Fig.~\ref{fig:setup}{c}, with 90 junctions, 2 reservoirs, 3 tanks, 117 pipes, and 2 pumps. The base demands of all junctions are assumed fixed, and a relatively flat pattern (varying only slightly) is tested. The results of selecting $r = 2,8,14$ from 95 node locations are shown in Fig.~\ref{fig:net3SS}, and the detailed locations are presented in Tab.~\ref{tab:net3_Result}; the nesting $\mc{S}^*_{r=2} \subset \mc{S}^*_{r=8} \subset \mc{S}^*_{r=14}$ again indicates the supermodularity property of the solution for this Net3 network. This showcases the property for an even larger network, further reaffirming the performance of the greedy algorithm. Besides that, testing the Net3 network is motivated by two practical questions: \textit{(i)} is adding extra sensors effective in reducing the Kalman filter estimation error? and \textit{(ii)} is the strategy from Algorithm~\ref{alg:greedy} better than random strategies?
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{Fig_15.png}
\caption{Sensor placement results for Net3 with $r = 2,8,14$ (95 node IDs are not shown for brevity).}
\label{fig:net3SS}
\end{figure}
\begin{table}[t]
\fontsize{8}{8}\selectfont
\vspace{-0.05cm}
\centering
\setlength\tabcolsep{5 pt}
\renewcommand{\arraystretch}{1.2}
\makegapedcells
\setcellgapes{1.7pt}
\caption{Sensor placement results for Net3.}
~\label{tab:net3_Result}
\begin{tabular}{c|c|c}
\hline
\textit{Network} & $r$ & \textit{Result} \\ \hline \hline
\multirow{3}{*}{\makecell{\\ \textit{Net3} \\ }} & $2$ & $\mc{S}_{r=2}^* = \{ \mathrm{J}203,\mathrm{J}204 \}$ \\ \cline{2-3}
& $8$ & \makecell{ $\mc{S}_{r=2}^* \bigcup \{\mathrm{J}261,\mathrm{J}163, \mathrm{J}169,\mathrm{J}173, \mathrm{J}184,\mathrm{J}206 \}$ } \\ \cline{2-3}
& $14$ & \makecell{ $\mc{S}_{r=8}^* \bigcup \{ \mathrm{J}208,\mathrm{J}177, \mathrm{J}179,\mathrm{J}209, \mathrm{J}20,\mathrm{J}121\}$ } \\ \hline \hline
\end{tabular}%
\end{table}
\subsection{Estimation performance and comparison with random SP}
This section computationally investigates two important issues closely related to the two motivating questions above. The first is the performance of the state estimation (Kalman filter) as the number of utilized sensors $r$ in the water network varies. The second is whether a uniform (i.e., placing a sensor at every other junction) or random sensor placement strategy yields comparable performance---in terms of the state estimation metric---when compared with the greedy-optimal placement from Algorithm~\ref{alg:greedy}. Both issues are investigated for the larger network Net3. First, the relationship between the Kalman filter performance $f(\mc{S}_r)$~\eqref{equ:obsmetric} and the number of sensors is shown in Fig.~\ref{fig:net3Reached}. Interestingly, the Kalman filter performance $f(\mc{S}_r)$ decreases roughly linearly as the number of sensors $r$ increases from $1$ to $14$ for three different hydraulic simulations.
Specifically, Fig.~\ref{fig:net3Reached} showcases the performance of the greedy-optimal solution when $r$ is fixed in Algorithm~\ref{alg:greedy} with fixed hydraulic profiles (the $T_h = 0^\mathrm{th}, 10^\mathrm{th}, 20^\mathrm{th}$ hour); the three panels in Fig.~\ref{fig:net3Reached} show a similar trend for the three different hydraulic profiles. The best performance, or lower bound, for each case is reached when all sensor locations are selected ($r = 95$). This indicates that one should not expect a large improvement in Kalman filter performance from increasing the number of sensors, even when the locations of the added sensors are all greedy-optimal. Furthermore, the time-varying Kalman filter performance $f(\mc{S}^*_{r = 14})$ over 24 hours is depicted by the blue line in Fig.~\ref{fig:net3Random}. The performance value easily reaches the $10^{5}$ level for this relatively large-scale network due to \textit{(i)} the large dimension of $\boldsymbol z$ ($n_z = 3.066 \times 10^6$), \textit{(ii)} the covariance matrix $\mathbb{C}$ having tiny diagonal elements (i.e., $5 \times 10^{-3}$), and \textit{(iii)} the typical value of $k_f$ being 200 in Net3, resulting in a $\boldsymbol W_o$ with very large elements in~\eqref{equ:closedform}. Moreover, the trend of the blue line is determined by the hydraulic profile, e.g., the flow rates over 24 hours; the flow rate plots are omitted for brevity. To address the second issue, we showcase the performance of random sensor placements with a fixed number of sensors $r=14$. Specifically, ten random sensor placements are generated for every hydraulic simulation. To quantify the performance of the proposed optimal placement, we define the relative performance of a random placement strategy $\hat{\mc{S}}$ as $\Delta f(\hat{\mc{S}}_{r=14}) = f(\hat{\mc{S}}_{r=14}) - f(\mc{S}^*_{r=14})$; a positive value implies that the greedy-optimal placement outperforms the random one. The red lines in Fig.~\ref{fig:net3Random} show the relative performance of the ten different randomizations---all of them are greater than zero, showcasing a much better state estimation performance through the greedy algorithm. Even though the differences in performance are only 100--200 on average, the actual Kalman filter performance is orders of magnitude better due to the fact that the $\log \det$ function is used to quantify the state estimation accuracy. That is, the $\mc{S}^*_{r=14}$ obtained from Algorithm~\ref{alg:greedy} performs significantly better than any random strategy $\hat{\mc{S}}_{r=14}$.
\begin{figure}[t]
\centering
\subfloat[\label{fig:net3Reached}]{\includegraphics[keepaspectratio=true,scale=0.33]{Fig_16.pdf}}{}
\subfloat[\label{fig:net3Random}]{\includegraphics[keepaspectratio=true,scale=0.23]{Fig_17.pdf}}{}
\caption{Kalman filter performance $f(\mc{S}_{r}^*)$ with $r = \{1,\ldots,14,95\}$ when $T_h = 0^\mathrm{th}, 10^\mathrm{th}, 20^\mathrm{th}$ hour (a), performance $f(\mc{S}^*_{r=14})$ for 24 hours (blue line in (b)), and the relative performance of ten randomized sensor placements $\Delta f(\hat{\mc{S}}_{r=14})$ (red lines in (b)).}
\label{fig:net3_performance}
\end{figure}
\section{Conclusions, Paper Limitations, and Future Directions}
The paper presents a new computational method for placing WQ sensing devices in water networks. The method focuses exclusively on WDN observability with regard to the WQ dynamics. After thoroughly testing three networks, we summarize the findings.
First, the impacts of the L-W scheme time-step $\Delta t$ (or the number of segments $s_L$) and the length of a single hydraulic simulation $k_f$ on the sensor placement strategy are minor and can be neglected. Second, our proposed method can be applied to practical networks to determine a sensor placement strategy because, in practice, historical data for demand patterns are available, thereby furnishing the sensor placement with the most common demand patterns. Hence, the optimal sensor placement obtained in terms of occupation time is likely to be relatively \textit{time-invariant}. Third, the algorithm verifies the supermodular nature of the advocated set function optimization, as corroborated via different test cases on three different networks. Fourth, even if demand patterns change significantly, the algorithm can still be used to obtain a sensor placement that optimizes the sensor occupation time. The paper \textit{does not} advocate \textit{only} looking at state estimation metrics for water quality sensor placement. As mentioned in Section~\ref{sec:literature}, a plethora of social and engineering objectives are typically considered in the literature to solve the WQSP. To that end, it is imperative that the proposed approach in this paper be studied in light of the other metrics and objectives discussed in the literature (such as minimizing the expected affected population and the amount of contaminated water following an intrusion event). Consequently, an investigation balancing engineering and state estimation objectives with more social-driven ones is needed. Hence, the objective of this paper is to provide a tool for the system operator that quantifies network observability vis-a-vis water quality sensor placements; the water system operator can then balance the objective of obtaining network-wide observability with these other metrics. Future work will focus on this limitation of the present work, in addition to considering multi-species dynamics that are nonlinear in the process model and necessitate alternative approaches to quantify the observability of nonlinear dynamic networks. This will also allow examining the potential reactions between contaminants and the chlorine residuals that the sensors are monitoring.
\section*{Data Availability Statement}
Some or all data, models, or code used during the study were provided by a third party. Specifically, we provide a Github link that includes all the models, the data, and the results from the case study~\cite{wang_2020}.
\section*{Acknowledgment}
This material is based upon work supported by the National Science Foundation under Grants 1728629, 1728605, 2015671, and 2015603.
\section{Introduction and literature review}~\label{sec:literature}
In dynamic infrastructure sciences, the sensor placement (SP) problem is concerned with the time-varying selection or one-time placement of sensors, while optimizing desired objective functions. This problem exists widely in dynamic networks such as transportation systems, electric power systems, and water distribution networks (WDN). The optimal placement of water quality (WQ) sensors is a crucial issue in WDN due to the dangers posed by accidental or intentional contamination, the expense of sensors and their installation, and their potential for performing real-time feedback control of water quality---control that requires high-frequency WQ sensor data. WQ sensor placement in WDN serves different purposes.
The high-level purpose is minimizing the potential public health impacts of a contamination incident given a limited number of sensors. To quantify this high-level objective, the WQ literature considers various mathematical objectives and metrics. Specifically, the SP problem has been studied in~\cite{Krause2008,ostfeld2008battle,preis2008multiobjective,schal2013water,Eliades2014,Shastri2006,Aral2010} considering different contamination risks, optimization objectives, optimization formulations, uncertainty, solution methodologies and their computational feasibility, and the use of mobile sensors. Rathi and Gupta~\cite{Rathi2014} classify methodologies from over forty studies into two categories: single- and multi-objective SP problems. Two other comprehensive surveys focusing on optimization strategies are given in~\cite{Hart2010,Hu2018}. As mentioned, the most common objective of sensor placement in WDN is to minimize the potential public health impact caused by a contamination incident, which can be formulated as maximizing the coverage of water with a minimum number of sensors. Lee and Deininger introduce the concept of ``demand coverage'' and solve the problem using a mixed integer programming (MIP) method~\cite{lee1992optimal}. Kumar et al.~\cite{kumar1997identification} and Kansal et al.~\cite{kansal2012identification} propose heuristic methods that find optimal sensor locations one by one, selecting one optimal location first and then selecting the next by modifying the coverage matrix. To account for nodes with lower water quality, Woo et al. modify the objective by weighting each term and normalizing the concentrations~\cite{woo2001optimal}. Alzahrani et al.~\cite{al2003optimizing} and Afshar and Marino~\cite{afshar2012multi} use a genetic algorithm (GA) and ant colony optimization (ACO), respectively, to find placement strategies that maximize demand coverage. Ghimire et al.~\cite{Ghimire2006} and Rathi and Gupta~\cite{rathi2014locations} also suggest heuristic methods to solve the problem. We briefly summarize the more recent literature on this problem, followed by identifying the key research gap. Recently, He \textit{et al.}~\cite{He2018} propose a multi-objective SP method to explicitly account for contamination probability variations. Hooshmand \textit{et al.}~\cite{Hooshmand2020} address the SP problem with an identification criterion assuming a limited sensor budget, followed by minimizing the number of vulnerable nodes using mixed integer programming (MIP). A combined management strategy for monitoring WDN is proposed in~\cite{Ciaponi2019}, based on water network partitioning and the installation of WQ sensors. Winter \textit{et al.}~\cite{Winter2019} investigate optimal sensor placements by introducing two greedy algorithms that take sensor imperfection and multiple objectives into account. Giudicianni \textit{et al.}~\cite{Giudicianni2020} present a method that relies on a priori clustering of the WDN and on the installation of WQ sensors at the most central nodes of each cluster---selected according to different topological centrality metrics. Hu \textit{et al.}~\cite{Hu2020} propose a customized genetic algorithm to solve multi-objective SP in WDN. Based on graph spectral techniques that take advantage of spectral properties of the adjacency matrix of the WDN graph, a sensor placement strategy is discussed by Di Nardo \textit{et al.}~\cite{DiNardo2018}.
Different objective functions lead to different placement strategies, and Tinelli \textit{et al.}~\cite{tinelli2018impact} discuss the impact of objective function selection on the optimal placement of sensors. Zhang \textit{et al.}~\cite{zhang2020assessing} investigate global resilience considering all likely sensor failures, an aspect that has rarely been explored. The research community has thus thoroughly investigated water quality sensor placement strategies considering various socio-technical objectives (as briefly discussed above). The objective of this paper is \textit{not} to develop a computational method to solve such SP problems with the aforementioned metrics/objectives. The objective herein is to find the optimal SP of water quality sensors considering an overlooked, yet significant metric: a state observability and estimation metric used jointly with Kalman filtering.\footnote{In dynamic systems, the Kalman filter is a widely used algorithm that computes unmeasured state estimates of a system given a dynamic model and data from sensor measurements subject to noise.} In short, this metric maps sensor placements, given a fixed hydraulic profile, to a scalar value to be minimized. This value quantifies the observability of unmeasured WQ states (i.e., concentrations of chlorine) in the entire water network. The observability quantification metric is expressed as a state estimation error measuring the difference between the actual WQ states and their estimates. Accordingly, the proposed metric finds the optimal WQ sensor placement that minimizes the state estimation error of the vintage Kalman filter for noisy WQ dynamics and measurement models. To the best of our knowledge, this is the first attempt to find the optimal sensor placement jointly with optimizing Kalman filter performance for WQ dynamics. The most closely related research is the \textit{ensemble Kalman filter}-based technique of Rajakumar \textit{et al.}~\cite{Rajakumar2019}, where the authors explore the impact of sensor placement on the final state estimation performance. However, that study \textit{(i)} does not provide a sensor placement strategy, \textit{(ii)} mainly focuses on estimating water quality states and reaction parameters, and \textit{(iii)} does not present a dynamic WQ model to guide the optimal SP. To that end, the objective of this study is to provide a control- and network-theoretic method that determines the optimal geographic placements of water quality sensors while optimizing the Kalman filter performance. The specific paper contributions are:
\begin{itemize}
\item The state-space, control-theoretic dynamics depicting the evolution of WQ states, i.e., concentrations of chlorine, are presented. Specifically, we model and track chlorine concentrations as a surrogate for contamination---this has been showcased in various studies depicting rapid depletion of chlorine upon the introduction of contaminants~\cite{yang2008modeling}. The dynamics of chlorine concentrations are represented as a time-varying state-space model. This model is then utilized to formulate the water quality sensor placement (WQSP) problem that optimizes the Kalman filter state estimation performance. This formulation \textit{(i)} takes into account and builds a mapping between a WDN observability metric and the performance of the Kalman filter and \textit{(ii)} is a set function optimization (an optimization problem that takes sets as variables) that is difficult to solve for large networks.
\item To account for the time-varying nature of the dynamic WQ model (due to changes in the hydraulic profiles caused by demand profiles), an algorithm that computes a sensor placement for the most common hydraulic profiles is presented, and its scalability is investigated. The algorithm is based on an important theoretical feature of set function optimization called submodularity. This feature has been identified in recent control-theoretic studies for sensor placement strategies~\cite{Tzoumas2016,Zhang2017}. In particular, the developed approach is based on a greedy algorithm which returns a suboptimal placement strategy with guarantees on the distance to optimality. An efficient implementation of the algorithm is also presented. Compared to~\cite{Tzoumas2016,Zhang2017}, the proposed algorithm takes into account the time-varying nature of WQ dynamics.
\item Thorough case studies on three water distribution networks under different conditions are presented. The case studies consider varying scales of water networks, significant demand variability, different numbers of allocated sensors, and their impact on the state estimation performance and the WQSP solution. Important observations and recommendations for water system operators are given. Github codes are included for reproducibility.
\end{itemize}
The rest of the paper is organized as follows. Section~\ref{sec:Ctrl-WQM} introduces the network-oriented water quality dynamic model by presenting the model of each component in detail. An abstract, linear, state-space format for the water quality model is given first, considering the first-order reaction model with known reaction rate coefficients. WQ observability and its metric (the observability Gramian) are introduced in Section~\ref{sec:WQSP}, and the WQSP problem is then formulated and solved in Section~\ref{sec:WQSPformuation} by taking advantage of the submodularity property of set function optimization; a scalable implementation of the problem is showcased. Section~\ref{sec:test} presents case studies to support the computational algorithms. Appendix~\ref{sec:appa} outlines the components of the scalable implementation of the presented computational methods. The notation for this paper is introduced next.
\vspace{1em}
\noindent \textit{\textbf{Paper's Notation}} $\;$ Italicized, boldface upper- and lower-case characters represent matrices and column vectors: $a$ is a scalar, $\boldsymbol a$ is a vector, and $\boldsymbol A$ is a matrix. Matrix $\boldsymbol I_n$ denotes an identity matrix of dimension $n$-by-$n$, whereas $\boldsymbol 0_{m \times n}$ denotes a zero matrix of size $m$-by-$n$. The notations $\mathbb{R}$ and $\mathbb{R}_{++}$ denote the sets of real and positive real numbers. The notations $\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$ denote the sets of real-valued column vectors with $n$ elements and of $m$-by-$n$ real matrices. For any two matrices $\boldsymbol A$ and $\boldsymbol B$ with the same number of columns, the notation $\{\boldsymbol A, \boldsymbol B\}$ denotes $[\boldsymbol A^\top \ \boldsymbol B^\top]^\top$. For a random variable $\boldsymbol x \in \mathbb{R}^n$, $\mathbb{E}(\boldsymbol x)$ is its expected value, and its covariance is denoted by $\mathbb{C}(\boldsymbol x) = \mathbb{E}\left( (\boldsymbol x - \mathbb{E}(\boldsymbol x))(\boldsymbol x - \mathbb{E}(\boldsymbol x))^\top \right)$.
\section{State-Space Water Quality Dynamic Model}~\label{sec:Ctrl-WQM}
We model a WDN by a directed graph $\mathcal{G} = (\mathcal{W},\mathcal{L})$.
Set $\mathcal{W}$ defines the nodes and is partitioned as $\mathcal{W} = \mathcal{J} \bigcup \mathcal{T} \bigcup \mathcal{R}$, where $\mathcal{J}$, $\mathcal{T}$, and $\mathcal{R}$ are the collections of junctions, tanks, and reservoirs. For the $i$-th node, set $\mathcal{N}_i$ collects its neighboring nodes (any two nodes connected by a link) and is partitioned as $\mathcal{N}_i = \mathcal{N}_i^\mathrm{in} \bigcup \mathcal{N}_i^\mathrm{out}$, where $\mathcal{N}_i^\mathrm{in}$ and $\mathcal{N}_i^\mathrm{out}$ are the collections of inflow and outflow nodes. Let $\mathcal{L} \subseteq \mathcal{W} \times \mathcal{W}$ be the set of links, and define the partition $\mathcal{L} = \mathcal{P} \bigcup \mathcal{M} \bigcup \mathcal{V}$, where $\mathcal{P}$, $\mathcal{M}$, and $\mathcal{V}$ represent the collections of pipes, pumps, and valves. In this paper, we use the Lax-Wendroff scheme~\cite{lax1964difference} to space-discretize pipes; each pipe with length $L$ is split into $s_{L}$ segments. The numbers of junctions, reservoirs, tanks, pipes, pumps, and valves are denoted by $n_{\mathrm{J}}$, $n_{\mathrm{R}}$, $n_{\mathrm{TK}}$, $n_{\mathrm{P}}$, $n_{\mathrm{M}}$, and $n_{\mathrm{V}}$. Hence, the numbers of nodes and links are $n_\mathrm{N} = n_{\mathrm{J}}+n_{\mathrm{R}}+n_{\mathrm{TK}}$ and $n_\mathrm{L} = n_{\mathrm{P}} \cdot s_{L} +n_{\mathrm{M}}+n_{\mathrm{V}}$. The principal component of the presented state-space, control-theoretic water quality modeling is a state vector defining the concentrations of the disinfectant (chlorine) in the network. Concentrations at nodes such as junctions, reservoirs, and tanks are collected in vector $\boldsymbol c_\mathrm{N} \triangleq \{\boldsymbol c_\mathrm{J}, \boldsymbol c_\mathrm{R}, \boldsymbol c_\mathrm{T} \} $; concentrations at links such as pipes and pumps are collected in $\boldsymbol c_\mathrm{L} \triangleq \{\boldsymbol c_\mathrm{P}, \boldsymbol c_\mathrm{M}, \boldsymbol c_\mathrm{V} \} $. We define the WQ state $\boldsymbol x(t) \triangleq \boldsymbol x$ at time $t$ as: $\boldsymbol x(t) = \{\boldsymbol c_\mathrm{N},\boldsymbol c_\mathrm{L} \} = \{\boldsymbol c_\mathrm{J}, \boldsymbol c_\mathrm{R}, \boldsymbol c_\mathrm{T}, \boldsymbol c_\mathrm{P}, \boldsymbol c_\mathrm{M}, \boldsymbol c_\mathrm{V}\} \in \mbb{R}^{n_x}, n_x = n_\mathrm{N} + n_\mathrm{L}. $ We also make two assumptions: \textit{(i)} the mixing of the solute is complete and instantaneous at junctions and in tanks, following a continuously stirred tank reactor (CSTR) model~\cite{rossman2000epanet}, and \textit{(ii)} a first-order, single-species reaction describes disinfectant decay both in the bulk flow and at the pipe wall. These assumptions are widely used in the literature~\cite{rossman1996numerical,basha2007eulerian,shang2008epanet}. \subsection{Conservation of mass}\label{sec:conservation} The water quality model represents the movement of all chemical and/or microbial species (contaminants, disinfectants, DBPs, metals, etc.) within a WDN as they traverse various components of the network. Specifically, we consider the single-species interaction and dynamics of chlorine.
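To make this bookkeeping concrete, the following minimal Python sketch assembles the stacked state vector $\boldsymbol x$ from hypothetical component counts (loosely mirroring a small looped network); it is included for illustration only and is not the implementation in our Github codes.
\begin{verbatim}
# Minimal sketch: dimensions and stacking of the WQ state vector
# x = {c_J, c_R, c_T, c_P, c_M, c_V}. All counts below are hypothetical.
import numpy as np

n_J, n_R, n_TK = 9, 1, 1            # junctions, reservoirs, tanks
n_P, n_M, n_V = 12, 1, 0            # pipes, pumps, valves
s_L = 150                           # L-W segments per pipe (uniform here)

n_N = n_J + n_R + n_TK              # number of nodes
n_L = n_P * s_L + n_M + n_V         # links, with pipes counted per segment
n_x = n_N + n_L                     # dimension of the WQ state x(t)

c_J, c_R, c_T = np.zeros(n_J), np.zeros(n_R), np.zeros(n_TK)
c_P, c_M, c_V = np.zeros(n_P * s_L), np.zeros(n_M), np.zeros(n_V)
x = np.concatenate([c_J, c_R, c_T, c_P, c_M, c_V])
assert x.size == n_x                # here n_x = 11 + 1801 = 1812
\end{verbatim}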
This movement, or time-evolution, of chlorine is based on three principles: \textit{(i)} \textit{mass balance in pipes}, which is represented by chlorine transport in differential pipe lengths by advection in addition to its decay/growth due to reactions; \textit{(ii)} \textit{mass balance at junctions}, which is represented by complete and instantaneous mixing of all inflows, that is, of the chlorine concentrations in the links flowing into the junction; and \textit{(iii)} \textit{mass balance in tanks}, which is represented by a continuously stirred tank reactor (CSTR)~\cite{rossman2000epanet} model with complete and instantaneous mixing and growth/decay reactions. The modeling of each component is introduced next. \subsubsection{Chlorine transport and reaction in pipes} The water quality modeling for pipes involves modeling the chlorine transport and reaction by the 1-D advection-reaction (A-R) equation. For any Pipe $i$, the 1-D A-R model is given by the PDE: \begin{equation} ~\label{equ:adv-reac} {\partial_t c_\mathrm{P}} = -v_{i}(t) {\partial_x c_\mathrm{P}} + r_{i} c_\mathrm{P} , \end{equation} \noindent where $v_{i}(t)$ is the flow velocity and $r_{i}$ is the constant first-order reaction rate, which is related to the bulk and wall reaction rates and the mass transfer coefficient between the bulk flow and the pipe wall~\cite{basha2007eulerian,rossman1996numerical}. Here, the Lax-Wendroff (L-W) scheme~\cite{lax1964difference} shown in Fig.~\ref{fig:lax} is used to approximate the solution of the PDE~\eqref{equ:adv-reac} in space and time; this model has been used and accepted in the literature~\cite{rossman1996numerical,morais2012fast,fabrie2010quality}. Pipe $i$ with length $L_{i}$ is split into $s_{L_{i}}$ segments, and the discretized form for segment $s$ is given by \begin{equation}~\label{equ:adv-reac-lax} \hspace{-1em} c_{i,s}(t\hspace{-1pt}+\hspace{-1pt}\Delta t) = \underline{\alpha} c_{i,s-1}(t) + (\alpha + r_i)c_{i,s}(t) +\bar{\alpha} c_{i,s+1}(t), \end{equation} where the L-W coefficients for the previous, current, and next segments are $\underline{\alpha} = 0.5 \beta (1+\beta)$, ${\alpha} = 1- \beta^2 $, and $\bar{\alpha} = -0.5 \beta (1-\beta)$. Note that $\beta \in \left(0,1\right]$ for Pipe $i$ at time $t$ is a constant related to the stability condition of the L-W scheme and is given by ${v_{i}(t) \Delta t}(\Delta x_{i})^{-1}$, where $\Delta t$ and $\Delta x_{i}$ are the time step and the space-discretization step in Fig.~\ref{fig:lax}. Hence, to stabilize the L-W scheme, the water quality time step must satisfy $\Delta t \leq \min({\Delta x_{i}}/v_{i}(t))$ for all $i \in \mathcal{P}$. The L-W coefficients $\underline{\alpha}$, $\alpha$, and $\bar{\alpha}$ are functions of time but vary much more slowly than $\boldsymbol x(t)$: once $\Delta t$ and $\Delta x_i$ are fixed, they change only when $v_i(t)$ changes. That is, they update only at each hydraulic time step. Equation~\eqref{equ:adv-reac-lax} can be lumped into a matrix-vector form for all segments $s$ of all Pipes $i \in \mc{P}$ as: \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{Fig_1.pdf} \caption{Time-space discretization of Pipe $i$ based on the L-W scheme.
} \label{fig:lax} \vspace{-1.em} \end{figure} \begin{equation}~\label{equ:abs-pipe-mass} \boldsymbol c_\mathrm{P}(t+ \Delta t) = \boldsymbol A_\mathrm{P}(t) \boldsymbol c_\mathrm{P}(t) + \boldsymbol A_\mathrm{N}(t) \boldsymbol c_\mathrm{N}(t), \end{equation} where matrices $\boldsymbol A_\mathrm{P}$ and $\boldsymbol A_\mathrm{N}$ map the scalar equation~\eqref{equ:adv-reac-lax} into the vector form~\eqref{equ:abs-pipe-mass}. The Github codes of this paper~\cite{wang_2020} show how these matrices are computed. \subsubsection{Chlorine mass balance at junctions}~\label{sec:mbjunction} Mass conservation of the disinfectant (i.e., chlorine) for Junction $i$ at time $t$ can be described by \begin{equation} \hspace{-1.5em} \textstyle \sum_{k = 1}^{|\mathcal{N}_i^\mathrm{in}|} q_{ki}(t)c_{ki}(t) = d_i(t) c_{i}(t) + \textstyle \sum_{j = 1}^{|\mathcal{N}_i^\mathrm{out}|} q_{ij}(t) c_{ij}(t),~\notag \end{equation} where $ \{ ki : k \in \mathcal{N}_i^\mathrm{in} \} $ and $ \{ ij : j \in \mathcal{N}_i^\mathrm{out} \} $ represent the sets of links with inflows and outflows of Junction $i$; $d_i$ is its demand; $q_{ki}(t)$ and $q_{ij}(t)$ are the flow rates in Links $ki$ and $ij$; and $c_{ki}(t)$ and $c_{ij}(t)$ are the corresponding concentrations. Specifically, when the links are pipes, $c_{ki}(t)$ and $c_{ij}(t)$ are the concentrations in the last segment of Pipe $ki$ and the first segment of Pipe $ij$. The matrix form considering all junctions is given as \begin{equation} ~\label{equ:abs-junction-mass} \boldsymbol c_\mathrm{J}(t+ \Delta t) = \boldsymbol A_\mathrm{J}(t) \boldsymbol c_\mathrm{J}(t) + \boldsymbol A_\mathrm{L}(t) \boldsymbol c_\mathrm{L}(t). \end{equation} \subsubsection{Chlorine mass balance at tanks} Akin to junctions, we can express the mass balance equations for each tank; the details are similar and omitted for brevity and ease of exposition. With that in mind, the provided Github codes present all of the necessary details required to arrive at the high-fidelity state-space description. We directly give the matrix form for all tanks as \begin{equation} ~\label{equ:abs-tank-mass} \boldsymbol c_\mathrm{T}(t+ \Delta t) = \boldsymbol A_\mathrm{T}(t) \boldsymbol c_\mathrm{T}(t) + \boldsymbol A'_\mathrm{P}(t) \boldsymbol c_\mathrm{P}(t), \end{equation} where $\boldsymbol A_\mathrm{T}$ is expressed in terms of the tank volumes $\boldsymbol V_\mathrm{T}$, the time step $\Delta t$, and the flow rates into and out of the tanks. \subsubsection{Chlorine mass balance at reservoirs} Without loss of generality, it is assumed that the chlorine sources are only located at reservoirs and that the concentration at a reservoir is constant. That is, \begin{equation}\label{equ:abs-reservoir} \boldsymbol c_\mathrm{R}(t + \Delta t) = \boldsymbol c_\mathrm{R}(t) . \end{equation} \subsubsection{Chlorine transport in pumps and valves}~\label{sec:PumpandValve} We consider the \textit{lengths} of pumps to be null, i.e., the distance between a pump's upstream and downstream nodes is zero; hence, pumps neither store any water nor are discretized into segments. Therefore, the concentrations at pumps or valves equal the concentrations of the upstream nodes (e.g., a reservoir) they connect to. That is, \begin{equation} \hspace{-1em} c_{j}(t+\Delta t) = c_i(t + \Delta t) = c_i(t) = c_j(t), i \in \mathcal{R}, j \in \mathcal{M},~\notag \end{equation} and the corresponding matrix form for pumps is \begin{equation}~\label{equ:abs-pump} \boldsymbol c_\mathrm{M}(t+\Delta t) = \boldsymbol c_\mathrm{M}(t).
\end{equation} As for a valve installed on a pipe, it is simply treated as a segment of that pipe; in this case, the concentration in the valve equals the concentration of the corresponding pipe segment. We next show how these matrix forms yield the state-space formulation of the water quality model. \subsection{Water quality modeling in state-space form}~\label{sec:space-state} The water quality models of each component summarized in the previous section can be written as a state-space linear difference equation (LDE) as in~\eqref{equ:de-abstract1}, where $\boldsymbol I$ is an identity matrix of appropriate dimension. \begin{figure}[h] \begin{equation}~\label{equ:de-abstract1} \hspace{-2em}\includegraphics[width=0.92\linewidth,valign=c]{Fig_2.pdf}. \end{equation} \end{figure} For ease of exposition, we consider $\Delta t = 1 \sec$ and replace the time index $t$ with the time index $k$. The state-space form of the water quality model is presented as a linear time-varying (LTV) system: \begin{equation}~\label{equ:ltv} {\boldsymbol{x}}(k+1) =\boldsymbol A(k) \boldsymbol{x}(k)+\boldsymbol {w}(k), \;\; \boldsymbol y (k)=\boldsymbol C \boldsymbol{x}(k)+ \boldsymbol{v}(k), \end{equation} where $\boldsymbol x(k) \in \mbb{R}^{n_x}$ is the state vector defined above; $\boldsymbol y(k) \in \mbb{R}^{n_y}$ represents a vector of data from WQ sensors; $\boldsymbol {w}(k) \in \mbb{R}^{n_x}$ and $\boldsymbol{v}(k) \in \mbb{R}^{n_y}$ are the process and measurement noise; and $\boldsymbol C \in \mbb{R}^{n_y \times n_x}$ is a matrix depicting the locations of the placed sensors, where $n_y \ll n_x$. We note the following. First, although the state-space matrix $\boldsymbol A(k)$ is written in terms of $k$, this is somewhat of an abuse of notation, seeing that $\boldsymbol A(k)$ encodes the hydraulic profile (heads and flow rates), which does not change with the same frequency as the water quality states $\boldsymbol x(k)$. Hence, the state-space model~\eqref{equ:ltv} is time-varying in the sense that the system matrix $\boldsymbol A(k)$ changes across hydraulic simulations, but it remains the same $\boldsymbol A$ within a single simulation. Second, and without loss of generality, the input vector from booster stations is implicitly embedded within the state-space matrix $\boldsymbol A$. Third, for all $k \geq 0$, it is assumed that the initial condition, the process noise $\boldsymbol {w}(k)$, and the measurement noise $\boldsymbol v(k)$ are uncorrelated and that the noise variance of each sensor is $\sigma^2$. Finally, we would like to point out that extensive details about the above state-space model can be found in our recent work on model predictive control of water quality dynamics~\cite{wang2020effective}. \section{Observability Metrics for WQ Dynamics}~\label{sec:WQSP} The objective of this section is two-fold. First, to introduce water system engineers and researchers to control-theoretic approaches for ensuring or optimizing the observability of the water quality dynamics. Herein, observability is defined as the ability to estimate the water quality model states $\boldsymbol x(k)$ from available measurements $\boldsymbol y(k)$ via a state estimation routine. This provides situational awareness for the operator given data from a few water quality sensors. Second, to define a simple observability metric that maps the number and locations of fixed sensors to a scalar metric acting as a proxy for the state estimation performance.
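Before formalizing such metrics, the following minimal Python sketch simulates the LTV model~\eqref{equ:ltv} and collects noisy sensor data. The matrix $\boldsymbol A$, the sensor locations, and the noise levels are hypothetical stand-ins for the quantities assembled from hydraulics.
\begin{verbatim}
# Minimal sketch: propagate x(k+1) = A x(k) + w(k), y(k) = C x(k) + v(k).
# A and the sensor locations are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_x, k_f, sigma = 20, 50, 0.1
A = rng.uniform(0.0, 1.0, (n_x, n_x))
A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))   # stabilize the stand-in

sensors = [2, 7, 15]                      # hypothetical sensor node indices
C = np.zeros((len(sensors), n_x))
C[range(len(sensors)), sensors] = 1.0     # selection rows of the identity

x = rng.uniform(0.0, 1.0, n_x)            # unknown initial WQ state x(0)
Y = []
for k in range(k_f):
    v = sigma * rng.standard_normal(len(sensors))
    Y.append(C @ x + v)                   # noisy measurement y(k)
    w = 0.01 * rng.standard_normal(n_x)
    x = A @ x + w                         # state update under process noise
\end{verbatim}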
\subsection{Metrics for observability and its interpretations}~\label{sec:metric} In dynamic systems theory, observability is a measure of how well the system state vector $\boldsymbol{x}(k) \in \mbb{R}^{n_x}$ can be inferred from knowledge of its output $\boldsymbol{y}(k) \in \mbb{R}^{n_y}$ over either finite- or infinite-time horizons. In particular, given sensor data $\boldsymbol y(0), \boldsymbol y(1), \ldots, \boldsymbol y(k_f-1)$ for a finite number $k_{f}$ of time-steps, observability is concerned with reconstructing or estimating the unknown initial state vector $\boldsymbol x(0)$ from the $k_f$ measurements, and subsequently computing $\boldsymbol x(1), \ldots, \boldsymbol x(k_f)$, assuming a noiseless system. Accordingly, a linear dynamic system (such as the water quality model~\eqref{equ:ltv}) is observable if and only if the observability matrix for $k_f$ time-steps \begin{equation} \mathcal{O}(k_f) = \{\boldsymbol C, \boldsymbol C \boldsymbol A, \hdots, \boldsymbol C \boldsymbol A^{k_f-1}\} \in \mbb{R}^{k_f n_y \times n_x }~\notag \end{equation} is full column rank~\cite{hespanha2018linear}, i.e., $\rank(\mc{O}(k_f))=n_x$, assuming that $k_fn_y > n_x$. In this section, and for brevity, we assume that the hydraulic variables are not changing during each hydraulic simulation period and hence $\boldsymbol A(k)= \boldsymbol A$; with that in mind, the proposed sensor placement formulations do consider changing hydraulic simulations. For the infinite-time horizon case with $k_f = \infty$ (that is, data have been collected over a long period of time), a system is observable if and only if the observability matrix $\mathcal{O}(k_f=n_x) \in \mbb{R}^{n_x n_y \times n_x }$ is full column rank~\cite{hespanha2018linear}. However, observability is a binary metric---it cannot indicate \textit{how observable} a dynamic system is. Due to the complexity and dimension of the water quality model~\eqref{equ:ltv}, this dynamic model is \textit{not} observable, i.e., it fails the aforementioned rank condition for various water networks and hydraulic simulation profiles. Specifically, it is virtually impossible to accurately reconstruct all chlorine concentrations (states $\boldsymbol x(k)$) unless water quality sensors are ubiquitously available and widespread in the network, i.e., installed at each junction. To that end, a more elaborate, non-binary quantitative metric for observability is needed for the water quality model and the sensor placement problem. One metric is based on the \textit{observability Gramian}~\cite{hespanha2018linear}, defined over $k_f$ time-steps as the sum of matrices \begin{equation} \boldsymbol W(k_f)=\sum_{\tau=0}^{k_f}\left(\boldsymbol A^{\top}\right)^{\tau} \boldsymbol C^{\top} \boldsymbol C \boldsymbol A^{\tau}.~\notag \end{equation} The system is observable at time-step $k_f$ if matrix $\boldsymbol W(k_f)$ is nonsingular and is unobservable if $\boldsymbol W(k_f)$ is singular. This definition extends similarly to the infinite-horizon case with $k_f = \infty$. However, $\boldsymbol W$ is still a matrix, and the aforementioned observability-singularity discussion is still binary. As a result, various non-binary metrics have been explored in the literature~\cite{Summers2013,Summers2016}. These include the minimum eigenvalue $\lambda_{\mathrm{min}}(\boldsymbol W)$, the log determinant $\log \operatorname{det} (\boldsymbol W)$, the $\operatorname{trace} (\boldsymbol W)$, and the sums or products of the first $m$ eigenvalues $\lambda_1,\hdots,\lambda_m$ of $\boldsymbol W$.
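These quantities are straightforward to compute for a given pair $(\boldsymbol A, \boldsymbol C)$. A minimal Python sketch follows, in which $\boldsymbol A$ and $\boldsymbol C$ are hypothetical stand-ins for the WQ model matrices.
\begin{verbatim}
# Minimal sketch: observability Gramian W(k_f) and scalar metrics.
# A and C are hypothetical stand-ins for the WQ model matrices.
import numpy as np

rng = np.random.default_rng(0)
n_x = 20
A = rng.uniform(0.0, 1.0, (n_x, n_x))
A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))
C = np.zeros((3, n_x)); C[[0, 1, 2], [2, 7, 15]] = 1.0

def obs_gramian(A, C, k_f):
    # W(k_f) = sum_{tau=0}^{k_f} (A^T)^tau C^T C A^tau
    W, A_tau = np.zeros_like(A), np.eye(A.shape[0])
    for _ in range(k_f + 1):
        W += A_tau.T @ (C.T @ C) @ A_tau
        A_tau = A @ A_tau
    return W

W = obs_gramian(A, C, k_f=50)
lam_min = np.min(np.linalg.eigvalsh(W))     # minimum-eigenvalue metric
trace_W = np.trace(W)                       # trace metric
# log det is evaluated on I + W so it stays finite even when W is singular
sign, logdet = np.linalg.slogdet(np.eye(n_x) + W)
\end{verbatim}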
The aforementioned observability metrics differ in their practical application, interpretation, and theoretical properties; the reader is referred to~\cite{Summers2016} for a thorough discussion. In this paper, we utilize the $\log \operatorname{det} (\boldsymbol W)$ metric for the reasons outlined in the ensuing sections, but the formulations presented in the paper can be extended to other metrics. \subsection{Metrics for water quality observability matrix}~\label{sec:aug} In this section, we discuss the metric for observability utilized in the sensor placement problem. To do so, we consider the time-invariant state-space matrices for a single hydraulic simulation $k \in [0,k_f]$, i.e., a single instance of the hydraulic simulation and demand profile. That is, to ease the ensuing exposition, we assume that the state-space matrix $\boldsymbol A(k) = \boldsymbol A$ is fixed rather than time-varying (the actual methods consider time-varying demand patterns). The objective of this section is to formulate a water quality observability metric that maps a collection of water quality data $\boldsymbol y(k)$ from a specific number of sensors $n_y$ to a scalar observability measure under the noise from the water quality dynamics and measurement models. First, consider the augmented measurement vector $\bar{\boldsymbol y}(k_f) \triangleq \{\boldsymbol y(0), \ldots,\boldsymbol y(k_f) \}$ for $k_f+1$ time-steps. Given~\eqref{equ:ltv}, this yields: \begin{equation}~\label{equ:ymeasurement} \hspace{-1.47em}\includegraphics[width=0.92\linewidth,valign=c]{Fig_3.pdf}, \end{equation} where $\boldsymbol z(k_f)$ lumps the unknown initial state $\boldsymbol x_0 = \boldsymbol x(0)$ and the process noise $\boldsymbol w(k_f)$, and $\bar{\boldsymbol v}(k_f)$ collects all measurement noise. Note that the left-hand side of~\eqref{equ:ymeasurement} is known, whereas the vectors $\boldsymbol z(k_f)$ and $\bar{\boldsymbol v}(k_f)$ are unknown. To that end, the problem of estimating $\boldsymbol z(k_f) \triangleq \boldsymbol z \in \mbb{R}^{n_z}$, where $n_z= (k_f+1)n_x$, is important to gain network-wide observability of the water quality state $\boldsymbol x$, which will guide the real-time estimation. As a probabilistic surrogate to estimating $\boldsymbol z$, we utilize the minimum mean square estimate (MMSE), which minimizes $\mbb{E}\left(\|\boldsymbol z - \hat{\boldsymbol z}\|_2^2\right)$, and its corresponding posterior error covariance matrix $\Sigma_{\boldsymbol z}$. These two quantities provide estimates of the means and variances of the unknown variable $\boldsymbol z(k_f)$. Interestingly, these can be written in terms of the sensor noise variance $\sigma^2$, the collected sensor data $\bar{\boldsymbol y}(k_f)$, the observability-like matrix $\mathcal{\boldsymbol O}(k_f)$ in~\eqref{equ:ymeasurement}, and the expectation and covariance of the unknown variable $\boldsymbol z(k_f)$ given by \begin{equation} \mathbb{E}(\boldsymbol z(k_f)), \; \mathbb{C}(\boldsymbol z(k_f)) = \mathbb{E}\left( (\boldsymbol z - \mathbb{E}(\boldsymbol z))(\boldsymbol z - \mathbb{E}(\boldsymbol z))^\top \right). ~\notag \end{equation} Given these developments, and to guide the sensor placement problem formulation, a metric is needed to map the covariance matrix $\Sigma_{\boldsymbol z}$ to a \textit{scalar} value. In particular, the metric $\log \operatorname{det}\left(\Sigma_{\boldsymbol z}\right)$, which maps the $n_z$-by-$n_z$ matrix $\Sigma_{\boldsymbol z}$ to a scalar value, can be used to achieve that.
Fortunately, $\log \det (\Sigma_{\boldsymbol z})$ has a closed-form expression given by: \begin{equation}~\label{equ:closedform} \hspace{-0.352cm}\log \det (\Sigma_{\boldsymbol z}) = 2 n_z \log (\sigma)\hspace{-2pt}-\hspace{-2pt}\log \det \left(\sigma^{2} \mathbb{C}^{-1}\left(\boldsymbol z\right)+\boldsymbol W_o \right) \end{equation} where $\boldsymbol W_o = \mathcal{\boldsymbol O}^{\top}(k_f)\mathcal{\boldsymbol O}(k_f)$. The reader is referred to~\cite{Tzoumas2016} for the derivation of~\eqref{equ:closedform}. We note the following: \textit{(i)} The closed-form expression of $\log \operatorname{det}\left(\Sigma_{\boldsymbol z}\right)$ in~\eqref{equ:closedform} assumes a \textit{fixed} sensor placement while associating a scalar measure of water quality observability with a collection of sensor data and the system's parameters. This closed-form expression is too complex to be incorporated within a sensor placement formulation and does not allow for near real-time state estimation; the next section discusses simple solutions to these issues. \textit{(ii)} We use the $\log \det (\cdot)$ metric here as it is endowed with desirable theoretical properties (namely, super/sub-modularity) that make it amenable to large-scale networks, it exhibits a closed-form expression as in~\eqref{equ:closedform}, and it has been used in various sensor placement studies in the literature. With that in mind, other metrics can be used, including the $\mathrm{trace}$ operator. \subsection{Relationship with the Kalman filter} The above discussions yield a metric that can be used for quantifying the observability of the water quality model~\eqref{equ:ltv}, in addition to probabilistically estimating the unknown initial state vector $\boldsymbol x(0)$. A relevant problem is real-time state estimation via the Kalman filter, which essentially reconstructs or estimates the states $\boldsymbol x(k)$ in real time from output measurements $\boldsymbol y(k)$. This is in contrast with the batch state estimation as in~\eqref{equ:ymeasurement}. While the initial state estimation problem discussed in the previous section provides a starting point for reconstructing $\boldsymbol x$, the Kalman filter presents a more general approach to the estimation problem. In fact, ignoring the process noise $\boldsymbol w$ and setting the variances of the sensor data to $\sigma^2 = 1$, the Kalman filter becomes equivalent to a real-time version of the above probabilistic estimator. Most importantly, the metric $\log \operatorname{det} (\cdot)$ degenerates to: \begin{align} \hspace{-0.4cm} \log \operatorname{det}\left(\Sigma_{\boldsymbol z}\right) &= - \log \operatorname{det} ( \boldsymbol I_{n_{x}} + \boldsymbol W(k_f) ) ~\label{equ:obsmetric} \\ &= - \log \operatorname{det} \left( \boldsymbol I_{n_{x}} + \sum_{\tau=0}^{k_f}\left(\boldsymbol A^{\top}\right)^{\tau} \boldsymbol C^{\top} \boldsymbol C \boldsymbol A^{\tau} \right) \notag \end{align} where $\boldsymbol I_{n_{x}}$ is an identity matrix of size $n_x$. This is shown in the recent control-theoretic literature~\cite{Jawaid2015,Zhang2017}. In short, this is a simple metric that maps the number of installed or placed sensors (i.e., the number of rows of matrix $\boldsymbol C$) to a value that defines the quality of the state estimates. When no sensor is installed, i.e., $\boldsymbol C$ is a zero matrix, the observability Gramian $\boldsymbol W(k_f)$ is also a zero matrix, and intuitively the $\log \det (\cdot)$ metric defined above attains its maximum error of $0$.
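As a sanity check of this limiting behavior, consider a hypothetical scalar system, $n_x = 1$, with $\boldsymbol A = a$ and a single sensor ($\boldsymbol C = 1$): the metric in~\eqref{equ:obsmetric} evaluates to
\begin{equation}
-\log \operatorname{det}\left(1 + \sum_{\tau=0}^{k_f} a^{2\tau}\right) < 0, ~\notag
\end{equation}
whereas removing the sensor ($\boldsymbol C = 0$) yields $-\log(1+0) = 0$, the maximum error.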
When the network is fully sensed---that is, $n_y = n_x$, $\boldsymbol C = \boldsymbol I_{n_{x}}$, and all states are measured---then $\boldsymbol W(k_f) = \sum_{\tau=0}^{k_f} \left(\boldsymbol A^{\top}\right)^{\tau} \boldsymbol A^{\tau}$ and the smallest error is achieved. Building on that, the control-theoretic literature has thoroughly investigated bounds for the estimation error and the corresponding metric with respect to the number of sensors; see~\cite[Theorem 1]{Tzoumas2016}. The objective of this paper is to build on these developments and investigate how such a metric relates to the performance of the Kalman filter. The next section formulates the water quality sensor placement problem using the introduced metric. \section{Water Quality Sensor Placement Formulation}~\label{sec:WQSPformuation} The objective of the presented water quality sensor placement (WQSP) formulation is to minimize the error covariance of the Kalman filter while using at most $r$ water quality sensors. In WDNs, water quality sensors are installed at nodes; that is, at most $r$ sensors are selected from the set $\mathcal{W} = \mathcal{J} \bigcup \mathcal{T} \bigcup \mathcal{R}$, where the cardinality $|\mathcal{W}| = n_\mathrm{N}$, i.e., the set $\mc{W}$ contains $n_\mathrm{N}$ possible locations at various junctions, tanks, and reservoirs. This forms a sensor set $\mathcal{S} \subset \mathcal{W}$ where $|\mathcal{S}| = n_{\mathcal{S}} \leq r$. The specific geographic placements and locations of these $n_{\mathcal{S}}$ sensors are encoded in matrix $\boldsymbol C$ of~\eqref{equ:ltv} through binary indicators. In short, the presented WQSP seeks to find the optimal set $\mc{S}^*_{r}$ that optimizes the state estimation performance with at most $r$ WQ sensors. The metric discussed in the previous section depends on the state-space matrix $\boldsymbol A$ (encoding network and hydraulic simulation parameters), which is time-varying due to varying demand and flow/head profiles. In short, the metric~\eqref{equ:obsmetric} yields a time-varying value, and hence a different state estimation performance, for each hydraulic simulation reflected in a different $\boldsymbol A(k)$ matrix. As a result, considering a varying hydraulic simulation profile within the sensor placement problem is important, i.e., the sensor placement solution needs to be {aware} of the most probable demand and hydraulic scenarios. Consequently, we define $\boldsymbol D_i \in\mathbb{R}^{n_{\mathrm{J}} \times T_h k_f}, \forall i \in \{1,\ldots,n_d\}$ for all $n_{\mathrm{J}}$ junctions during $T_h$ distinct hydraulic simulations, each lasting $k_f \, \sec$. The notation $\boldsymbol D_{i,k}$ defines the $k$th column vector of matrix $\boldsymbol D_i$. Parameter $n_d$ reflects the number of potential demand patterns; concrete examples are given in the case study section. Demand profiles $\boldsymbol D_i \in \mathcal{D}$ essentially define the most common varying demand profiles experienced by the system operator, drawn from historical data. Each demand profile results in a different hydraulic profile and hence a different state-space matrix\footnote{We defined $\boldsymbol A(k)$ earlier due to the change in the hydraulic and demand profiles. The notation $\boldsymbol A(\boldsymbol D_{i,k})$ is equivalent to $\boldsymbol A(k)$ but offers more clarity.} $\boldsymbol A(\boldsymbol D_{i,k})\triangleq \boldsymbol A(k)$.
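To make the demand-profile bookkeeping concrete, the minimal sketch below builds hypothetical profiles $\boldsymbol D_i$ as outer products of per-junction base demands and an hourly pattern; the map from a demand column to $\boldsymbol A(\boldsymbol D_{i,k})$ is a crude stand-in for the hydraulic solver and the LDE assembly.
\begin{verbatim}
# Minimal sketch: hypothetical demand profiles D_i (n_J x T_h*k_f) and a
# stand-in map from a demand column to the state-space matrix A(D_{i,k}).
import numpy as np

rng = np.random.default_rng(1)
n_J, T_h, k_f, n_d = 9, 24, 300, 3
base = rng.uniform(500.0, 2000.0, (n_d, n_J))   # base demands (e.g., GPM)
pattern = 1.0 + 0.3 * np.sin(2.0 * np.pi * np.arange(T_h) / T_h)

# Each hydraulic hour repeats its multiplier for k_f water quality steps
D = [np.outer(base[i], np.repeat(pattern, k_f)) for i in range(n_d)]

def A_from_demand(d_col, n_x=30):
    # Stand-in for EPANET hydraulics + L-W assembly yielding A(D_{i,k})
    local = np.random.default_rng(int(d_col.sum()) % 2**32)
    A = local.uniform(0.0, 1.0, (n_x, n_x))
    return 0.95 * A / np.max(np.abs(np.linalg.eigvals(A)))

A_k = A_from_demand(D[0][:, 0])     # A for the first step of profile 1
\end{verbatim}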
Given these definitions and for an a priori defined $\boldsymbol D_{i,k} \in \mc{D}$, one useful instance of the WQSP problem can be abstractly formulated as: \begin{equation}\label{equ:WSQP} \begin{split} \mathrm{{WQSP:}} \;\;\;\; \minimize \;\; \; & f(\mathcal{S}; \boldsymbol A(\boldsymbol D_{i,k})) \\ \subjectto \;\;\;& {\mathcal{S} \subset \mathcal{W}, \;\; |\mathcal{S}| = n_\mathcal{S}} \leq r. \end{split} \end{equation} The design variable in the optimization problem $\mathrm{WQSP}$ is the location of the installed sensors, reflected via the set $\mc{S}$ defined earlier. The objective function $f(\cdot;\cdot): \mathbb{R}^{n_{\mc{S}}} \times \mathbb{R}^{n_x \times n_x} \to \mathbb{R}$ maps a sensor placement candidate $\mc{S}$, together with a given hydraulic demand profile $\boldsymbol D_{i,k}$ and its corresponding matrix $\boldsymbol A(\boldsymbol D_{i,k})$, to the state estimation (Kalman filter) performance. We note that when the objective function has a set as its variable (i.e., $\mc{S}$ in $f(\cdot;\cdot)$), the objective function is often referred to as a \textit{set function}; we use these terms interchangeably. In this paper, the set (objective) function takes the form of~\eqref{equ:obsmetric}, which explicitly depends on the sensor placement set $\mc{S}$ through matrix $\boldsymbol C$, as well as on the a priori known hydraulic profiles and the corresponding state-space matrices $\boldsymbol A(\boldsymbol D_{i,k})$. The constraint set of $\mathrm{WQSP}$ represents the number of utilized sensors and their locations in the network. For small-scale water networks, one may solve the set function optimization~\eqref{equ:WSQP} via brute force, but this is impossible for large-scale networks---such problems are known to be NP-hard, i.e., computational problems suspected to have no polynomial-time algorithm that solves them optimally. To address this computational challenge, we resort to a widely used approach in combinatorial optimization: exploiting a special property of the set function $f(\mathcal{S};\boldsymbol A(\boldsymbol D_{i,k}))$, namely sub/super-modularity, defined as follows. A set function $f(\cdot)$ is submodular if and only if $ f(\mc{A} \cup\{a\})-f(\mc{A}) \geq f(\mc{B} \cup\{a\})-f(\mc{B})$ for any subsets $\mc{A} \subseteq \mc{B} \subseteq \mc{V}$ and $\{a\} \in \mc{V} \backslash \mc{B}$. A set function $f(\cdot)$ is supermodular if $-f(\cdot)$ is submodular. Intuitively, submodularity is a diminishing returns property whereby adding an element to a smaller set gives a larger gain than adding one to a larger set~\cite{lovasz1983submodular}. The computational framework of submodular set function optimization allows one to use greedy algorithms~\cite{cormen2009introduction} with desirable performance while remaining computationally tractable. Although greedy algorithms are known to return suboptimal solutions, they are also known to return excellent performance when the set function is sub/super-modular. Interestingly, the set function in $\mathrm{WQSP}$ given in~\eqref{equ:obsmetric} is indeed supermodular~\cite[Theorem 2]{Tzoumas2016}. Given this property, a vintage greedy algorithm---applied to solve the NP-hard problem $\mathrm{WQSP}$---can return a solution $\mc{S}$ with objective function value $f(\mc{S})$ \textit{at least} 63\% of the optimal solution $f(\mathcal{S}^{*})$~\cite{Tzoumas2016}.
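A minimal Python sketch of this vintage greedy selection for a single, fixed hydraulic profile is given below; $\boldsymbol A$ is a hypothetical stand-in, and $f$ implements the metric in~\eqref{equ:obsmetric}.
\begin{verbatim}
# Minimal sketch: greedy selection of r sensors minimizing
# f(S) = -log det(I + W(k_f)) for one fixed (hypothetical) matrix A.
import numpy as np

rng = np.random.default_rng(0)
n_x, k_f, r = 20, 30, 3
A = rng.uniform(0.0, 1.0, (n_x, n_x))
A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))

def f(S, A, k_f):
    C = np.zeros((len(S), A.shape[0]))
    C[range(len(S)), sorted(S)] = 1.0
    W, A_tau = np.zeros_like(A), np.eye(A.shape[0])
    for _ in range(k_f + 1):
        W += A_tau.T @ (C.T @ C) @ A_tau
        A_tau = A @ A_tau
    return -np.linalg.slogdet(np.eye(A.shape[0]) + W)[1]

S = set()
while len(S) < r:
    # equivalently, maximize the marginal decrease f(S) - f(S + {e})
    e_best = min(set(range(n_x)) - S, key=lambda e: f(S | {e}, A, k_f))
    S |= {e_best}
print(sorted(S))        # greedy-optimal locations for this profile
\end{verbatim}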
Empirically, a large body of work~\cite{Tzoumas2016,Zhang2017,Cortesi2014} shows that the solutions provided by such greedy algorithms can be near-optimal, rather than merely within 63\% of the optimum. \begin{algorithm}[t] \small \DontPrintSemicolon \KwIn{Number of sensors $r$, all demand profiles $\mathcal{D}$, water network parameters, $k = i = 1$, $\tilde{\mc{S}} = \emptyset$} \KwOut{Optimal sensor set $\mathcal{S^{\star}}$} \textbf{Compute:} $\boldsymbol A(\boldsymbol D_{i,k})=\boldsymbol A, \forall i, k \in \{1,\ldots,n_d\}, \{1,\ldots, T_h k_f\}$\; \For{$k\leq T_hk_f$ }{ \textcolor{blue}{// \textbf{For each single hydraulic simulation interval} $k$} \; $i = 1$, $\bar{\mc{S}}=\emptyset $\; \For{$i \leq n_d$ }{ \textcolor{blue}{// \textbf{For each demand profile}} \; $j = 1, \mathcal{S}_j = \emptyset $\; \While { $j \leq r$ }{ $e_{j} \leftarrow \mathrm{argmax} _{e \in \mc{W} \backslash \mathcal{S}_j }\left[f(\mathcal{S}_j;\boldsymbol A )-f(\mathcal{S}_j \cup\{e\};\boldsymbol A)\right]$\; $\mathcal{S}_j \leftarrow \mathcal{S}_j \cup\left\{e_{j}\right\}$\; $j \leftarrow j+1$ } $\bar{\mc{S}} \leftarrow \bar{\mc{S}} \bigcup {\mc{S}}_{j}$, $i \leftarrow i+1$ } $ {\mc{S}}^{(k)} \leftarrow \arg \max_{\mc{S} \in \bar{\mc{S}}} f(\mathcal{S}; \boldsymbol A)$\; $\tilde{\mc{S}} \leftarrow \tilde{\mc{S}} \bigcup {\mc{S}}^{(k)}$ \; $k \leftarrow k+k_f$ } $\mathcal{S}^* \leftarrow \arg \max_{\mc{S} \in \tilde{\mc{S}}} {T}(\mathcal{S})$ \textcolor{blue}{\textbf{// \textbf{Greedy-optimal sensor placement}}}\; \caption{Greedy algorithm to solve WQSP problem.} \label{alg:greedy} \end{algorithm} We apply a greedy algorithm to solve the WQSP for various hydraulic profiles; the details are given in Algorithm~\ref{alg:greedy}. The notation $\mc{S}_j$ denotes the sensor set with $j$ placed sensors, and $\mc{S}^{(k)}$ denotes the sensor set at iteration $k$. The sets $\tilde{\mc{S}}$ and $\bar{\mc{S}}$ are super-sets that include various sets $\mc{S}$. Variable $e \in \mc{S}$ defines an element (i.e., a junction) in the set $\mc{S}$. The inputs for the algorithm are the number of sensors $r$, all demand profiles $\boldsymbol D_{i,k} \in \mc{D}$, and the WDN parameters; the output of the algorithm is the greedy-optimal sensor set $\mc{S}^*$. The first step of the algorithm is to compute all state-space matrices $\boldsymbol A(\boldsymbol D_{i,k})$ for the various demand profiles $\boldsymbol D_{i,k}$. Then, given a fixed hydraulic simulation interval $k$, a fixed demand profile $i$, and a fixed number of sensors $j$, Step 9 computes the optimal element in the set $\mc{W} \backslash \mc{S}_j$ that yields the best improvement in the set function reflecting the Kalman filter performance---a core component of the greedy algorithm and supermodular optimization. At each iteration inside the while loop, the algorithm finds the optimal element $e_j$ (i.e., a sensor identified by a junction ID) that results in the best improvement in the state estimation performance metric. Then, the $n_d$ sets $\mc{S}_j$ (the optimal sensor sets for all $n_d$ demand profiles) are stored in a master set $\bar{\mc{S}}$. This is followed by finding the optimal sensor sets from $\bar{\mc{S}}$ for all $T_h$ hydraulic simulations; these are all included in another master set $\tilde{\mc{S}}$.
Finally, the algorithm terminates by computing the final {optimal sensor locations $\mc{S}^*$} via picking the combination that maximizes the occupation time ${T}(\mc{S})$ over all $\mc{S} \in \tilde{\mc{S}}$, i.e., a metric that defines how frequently a specific sensor set is selected (formally defined in the case studies section). We note that this algorithm returns the \textit{greedy-optimal} solution; this solution is not necessarily the optimal one, as discussed above with the 63\% optimality guarantees. Thorough case studies are given in the ensuing section. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{Fig_4.pdf} \caption{(a) Three-node network, (b) Net1, and (c) Net3.} \label{fig:setup} \end{figure} \section{Case Studies}~\label{sec:test} We present three simulation examples (the three-node network, Net1, and the Net3 network~\cite{rossman2000epanet,shang2002particle}) to illustrate the applicability of our approach. The three-node network is designed to illustrate the details of the proposed method and help readers understand the physical meaning of the results intuitively. Then, we test Net1, with its looped network topology, considering the impacts on the final WQSP from choosing \textit{(i)} the length of a single hydraulic simulation $k_f$, \textit{(ii)} the L-W scheme time-step $\Delta t$ (or, equivalently, a dynamic number of segments), \textit{(iii)} different base demands, and \textit{(iv)} different patterns. The Net3 network is used to test the scalability of the proposed algorithm and to further verify our findings. Considering that the LDE model~\eqref{equ:ltv} produces accurate state evolution, we eliminate the process noise and set the sensor noise standard deviation to $\sigma = 0.1$. The simulations are performed via the EPANET Matlab Toolkit~\cite{Eliades2016} on Windows 10 Enterprise with an Intel(R) Xeon(R) CPU E5-1620 v3 @3.50 GHz. All codes, parameters, tested networks, and results are available on Github~\cite{wang_2020}, which includes an efficient and scalable implementation of Algorithm~\ref{alg:greedy}; the details of this implementation are included in Appendix~\ref{sec:appa}. \subsection{Three-node network}\label{sec:3-node} The three-node network shown in Fig.~\ref{fig:setup}a includes one junction, one pipe, one pump, one tank, and one reservoir. A chlorine source ($ c_\mathrm{R1} = 0.8$ mg/L) is installed at Reservoir 1. The initial chlorine concentrations at or in the other components are $0$ mg/L. Only Junction 2 consumes water, and its base demand is $d_{\mathrm{base}} = 2000\ \mathrm{GPM}$. The corresponding pattern, $\mathrm{Pattern\ I}$ (viewed as a row vector), for Junction 2 over 24 hours is presented in Fig.~\ref{fig:demandpattern}. Hence, only one demand profile for a day is computed as $\boldsymbol D = d_{\mathrm{base}} \times \mathrm{Pattern\ I}$. The pipe is split into $s_{L_{23}} = 150$ segments, the single hydraulic simulation interval is set to $k_f = 300 \sec$, and $T_h = 24$ hydraulic simulations are considered. To help readers develop intuition about the water quality model in state-space form and the observability Gramian, illustrative code with step-by-step comments for this small three-node network is available on our Github~\cite{wang_2020}.
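For reference, the L-W bookkeeping behind such a setup (segment count, the coefficient $\beta$, and the coefficients of~\eqref{equ:adv-reac-lax}) can be reproduced in a few lines; the pipe length and velocity below are hypothetical stand-ins, since the true values come from the hydraulic solver.
\begin{verbatim}
# Minimal sketch: L-W segment count, stability check, and coefficients.
# Pipe length and velocity are hypothetical stand-ins.
import math

L, v, dt = 3000.0, 4.0, 5.0    # length (ft), velocity (ft/s), WQ step (s)
s_L = math.ceil(L / (v * dt))  # number of segments (150 for these values)
dx = L / s_L                   # space-discretization step
beta = v * dt / dx             # must satisfy 0 < beta <= 1 for stability
assert 0.0 < beta <= 1.0, "reduce dt (or refine segments) to stabilize L-W"

alpha_prev = 0.5 * beta * (1.0 + beta)    # multiplies c_{i,s-1}(t)
alpha_curr = 1.0 - beta ** 2              # multiplies c_{i,s}(t)
alpha_next = -0.5 * beta * (1.0 - beta)   # multiplies c_{i,s+1}(t)
\end{verbatim}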
\begin{figure}[t] \centering \includegraphics[width=0.94\linewidth]{Fig_5.pdf} \caption{Pattern for the three-node and Net1 networks.} \label{fig:demandpattern} \end{figure} \begin{figure}[t] \centering \subfloat[\label{fig:net1_basedemand}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_6.pdf}}{} \subfloat[\label{fig:net1_demandpattern}]{\includegraphics[keepaspectratio=true,scale=0.20]{Fig_7.pdf}}{} \caption{Different base demands (a) and demand patterns (b) for nodes in Net1.} \label{fig:net1_demand} \end{figure} \begin{figure}[t] \centering \subfloat[\label{fig:SS_3_NET1_a}]{\includegraphics[keepaspectratio=true,scale=0.07]{Fig_8.png}}{} \subfloat[\label{fig:SS_3_NET1_b}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_9.png}}{} \caption{Sensor placement results for the three-node network (a) and Net1 (b) over 24 hours with $k_f = 300 \sec$, $\Delta t = 5\sec$, Pattern I, and Base demand $1$.} \label{fig:SS_3_NET1} \end{figure} For the three-node network, there are three possible sensor locations ($\mathrm{R}1$, $\mathrm{J}2$, and $\mathrm{T}3$); therefore, $r$ is set to $1$ or $2$ in Algorithm~\ref{alg:greedy}. The final sensor placement results are presented in Fig.~\ref{fig:SS_3_NET1_a}. When $r = 1$, $\mathrm{J}2$ is the best location, or the \textit{center} of the network; when $r = 2$, locations $\mathrm{J}2$ and $\mathrm{T}3$ are selected. To quantify the centrality or importance of a specific location during the 24 hours, the \textit{occupation time} ${T}(\mc{S}) = \frac{\mathrm{Selected\ time}}{\mathrm{Total\ time}}$ is defined as the percentage of the day during which a location is selected by Algorithm~\ref{alg:greedy}. This measure indicates the importance of the selected sensor locations. If the sensor location does not change during the 24 hours, the occupation time is 100\%; see Tab.~\ref{tab:sensor}. With that in mind, a 100\% sensor occupation time rarely happens for any junction in larger networks---its occurrence in the three-node network is due to the simple topology. We show more interesting results with varying occupation times in the next sections. \subsection{Looped Net1 network}\label{sec:net1} The Net1 network~\cite{rossman2000epanet,shang2002particle} shown in Fig.~\ref{fig:setup}b is composed of 9 junctions, 1 reservoir, 1 tank, 12 pipes, and 1 pump. Beyond optimal sensor placements, here we investigate the impact of the length of a single hydraulic simulation $k_f$, the L-W scheme time-step $\Delta t$, and the demand profile on the final sensor placement result. This network is more complex than the three-node network because its flow directions change and its flow rates (velocities) vary dramatically every hour. To balance the accuracy of the L-W scheme and the computational burden, $s_{L_i}$ for each pipe is set to the integer ceiling of $\frac{L_i}{ v_i(t) \Delta t}$; this dynamic segment-number setting corresponds to $\Delta t = 5\sec$. Furthermore, if $\Delta t = 10\sec$ is needed, this can be achieved conveniently by halving $s_{L_i}$ for each pipe. \subsubsection{Base case scenario and its result} The base case is considered with the following settings: $\Delta t = 5\sec$, a single hydraulic simulation of $k_f = 300 \sec$, and a demand profile for a single interval $\boldsymbol D_k = \mathrm{Base \ demand \ 1} \times \mathrm{Pattern\ I}$, shown in Fig.~\ref{fig:net1_demand}. There are 11 possible sensor locations (see Fig.~\ref{fig:setup}b), and the number of sensors $r$ is chosen as $[1, 3, 5]$ in~\eqref{equ:WSQP}.
Similarly, we consider $24$ hours in Algorithm~\ref{alg:greedy}. The final result is presented in Fig.~\ref{fig:SS_3_NET1_b}, and the sensor placement results in terms of occupation time $T$ are presented in Tab.~\ref{tab:sensor}. From Fig.~\ref{fig:SS_3_NET1_b} and Tab.~\ref{tab:sensor}, when $r = 1$, $\mathrm{J}10$ in Fig.~\ref{fig:SS_3_NET1_b} is the best sensor location most of the time ($T_{\mathrm{J}10} = 66.4\%$), and the best location switches to $\mathrm{J}12$ or $\mathrm{J}21$ occasionally ($T_{\mathrm{J}12} = 18.6\%$, $T_{\mathrm{J}21} = 14.8\%$). Hence, the solution of the WQSP is $\mathcal{S}_{r=1}^* = \{\mathrm{J}10\}$ (marked in blue in Tab.~\ref{tab:sensor}). Similarly, the $r$ locations with the largest occupation times are selected as the final results for $r = 3$ and $5$. These greedy-optimal placements are given by $\mathcal{S}_{r=3}^* = \{\mathrm{J}10, \mathrm{J}12, \mathrm{J}21\}$ and $\mathcal{S}_{r=5}^* = \mathcal{S}_{r=3}^* \bigcup \{\mathrm{T}2,\mathrm{J}31\}$. This showcases the supermodularity of the set function optimization, seeing that $\mathcal{S}_{r=3}^* \subset \mathcal{S}_{r=5}^*$. \begin{table}[t] \fontsize{8}{8}\selectfont \vspace{-0.05cm} \centering \setlength\tabcolsep{2 pt} \renewcommand{\arraystretch}{1.2} \makegapedcells \setcellgapes{1.0pt} \caption{Sensor placement results with detailed occupation times (base case of Net1: $\Delta t = 5\sec$, $k_f= 300\sec$, Pattern I, Base demand 1).} ~\label{tab:sensor} \begin{tabular}{c|c|c} \hline \textit{Network} & $r$ & \textit{Result (selected positions are in blue)} \\ \hline \hline \multirow{2}{*}{\makecell{\textit{Three-}\\ \textit{node}}} & $1$ & $T_{\textcolor{blue}{\mathrm{J}2}}= 100\%$ \\ \cline{2-3} & $2$ & $T_{\textcolor{blue}{\mathrm{J}2, \mathrm{T}3}}= 100\%$ \\ \hline \multirow{3}{*}{\makecell{\\ \textit{Net1} \\ \\ \textit{(Base case)}}} & $1$ & $T_{\textcolor{blue}{\mathrm{J}10}} = 66.4\%,$ $T_{\mathrm{J}12} = 18.6\%,$ $T_{\mathrm{J}21} = 14.8\%$ \\ \cline{2-3} & $3$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 68.5\%,$ $T_{\textcolor{blue}{\mathrm{J}12}} = 56.7\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 83.4\%$ \\ $T_{\mathrm{T}2} = 53.6\%$ } \\ \cline{2-3} & $5$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 69.5\%,$ $T_{\textcolor{blue}{\mathrm{J}12}} = 87.8\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 100\%$ \\ $T_{\textcolor{blue}{\mathrm{T}2\ }} = 65.7\%,$ $T_{\textcolor{blue}{\mathrm{J}31}} = 82.1\%,$ $T_{\mathrm{J}11} = 49.4\%$ } \\ \hline \hline \end{tabular}% \end{table} \begin{table}[t] \fontsize{8}{8}\selectfont \vspace{-0.05cm} \centering \setlength\tabcolsep{3pt} \renewcommand{\arraystretch}{1.2} \makegapedcells \setcellgapes{1.0pt} \caption{Sensor placement results considering the impacts of the L-W scheme time-step $\Delta t$ and the length of a single hydraulic simulation $k_f$ (Case A: $\Delta t = 10\sec$, $k_f = 300\sec$; Case B: $\Delta t = 5\sec$, $k_f = 60\sec$).} ~\label{tab:sensor_impact} \begin{tabular}{c|c|c} \hline \textit{Network} & $r$ & \textit{Result (selected positions are in blue)} \\ \hline \hline \multirow{3}{*}{\makecell{\\ \textit{Net1} \\ \\ \textit{(Case A)} }} & $1$ & $T_{\textcolor{blue}{\mathrm{J}10}} = 71.6\%,$ $T_{\mathrm{J}12} = 12.4\%,$ $T_{\mathrm{J}21} = 14.5\%$ \\ \cline{2-3} & $3$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 71.6\%,$ $T_{\mathrm{J}12} = 48.4\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 61.2\%$ \\ $T_{\textcolor{blue}{\mathrm{T}2}} = 68.8\%$ } \\ \cline{2-3} & $5$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 74.7\%,$ $T_{\textcolor{blue}{\mathrm{J}12}} = 82.7\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 100\%$ \\ $T_{\textcolor{blue}{\mathrm{T}2\ }} = 69.2\%,$ $T_{\textcolor{blue}{\mathrm{J}31}} = 73.0\%,$ $T_{\mathrm{J}11} = 62.6\%$} \\ \hline \multirow{3}{*}{\makecell{\\ \textit{Net1} \\ \\ \textit{(Case B)} }} & $1$ & $T_{\textcolor{blue}{\mathrm{J}10}} = 86.5\%,$ $T_{\mathrm{J}12} = 8.3\%,$ $T_{\mathrm{J}21} = 5.1\%$ \\ \cline{2-3} & $3$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 100\%,$ $T_{\mathrm{J}12} = 40.4\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 53.6\%$ \\ $T_{\textcolor{blue}{\mathrm{T}2}} = 74.2\%$ } \\ \cline{2-3} & $5$ & \makecell{$T_{\textcolor{blue}{\mathrm{J}10}} = 100\%,$ $T_{\textcolor{blue}{\mathrm{J}12}} = 85.7\%,$ $T_{\textcolor{blue}{\mathrm{J}21}} = 100\%$ \\ $T_{\textcolor{blue}{\mathrm{T}2\ }} = 78.1\%,$ $T_{\textcolor{blue}{\mathrm{J}31}} = 39.2\%,$ $T_{\mathrm{J}11} = 36.5\%$ } \\ \hline \hline \end{tabular}% \end{table} \subsubsection{The impacts of the L-W scheme time-step and the length of a single hydraulic simulation} Here, we study the impact of the L-W scheme time-step and the length of a single hydraulic simulation on the final WQSP results---in comparison with the base case from the previous section. First, only $\Delta t$ is increased from $5\sec$ (the base case) to $10\sec$ (Case A). Accordingly, the number of segments of all pipes is reduced by $50\%$, while still maintaining the accuracy of the LDE state-space model compared to the EPANET water quality simulation. We also define Case B by reducing $k_f$ from $300\sec$ (the base case) to $60 \sec$. The results for this experiment are shown in Tab.~\ref{tab:sensor_impact}. We observe the following: \textit{(i)} the final results are exactly the same as those under the base case for $r = 1,5$, and the differences materialize only in slightly changed occupation times; \textit{(ii)} the results under $r = 3$ are different from the base case, as the solution changes from $\mathcal{S} = \{\mathrm{J}10, \mathrm{J}12, \mathrm{J}21\}$ (base case) to $\mathcal{S} = \{\mathrm{J}10, \mathrm{T}2, \mathrm{J}21\}$ (Cases A and B). This is due to the fact that the base case did not produce a clear winner in terms of the sensor placement---the occupation times ($T_{\textcolor{blue}{\mathrm{J}12}} = 56.7\%,$ $T_{\mathrm{T}2} = 53.6\%$) are similar. We note that even though the sensor placement strategy changes when $r = 3$, the final performances of these three cases are comparable: the relative error of the Kalman filter performance in~\eqref{equ:WSQP} between the base case and Case A (Case B) is $17.2\%$ ($7.9\%$), which is acceptable given that $\Delta t$ differs by a factor of two and $k_f$ by a factor of five. Hence, one could draw this preliminary conclusion: the impacts of the L-W scheme time-step $\Delta t$ and the length of a single hydraulic simulation $k_f$ on the final sensor placement results are negligible, assuming that the number of pipe segments (the PDE space-discretization parameter) is large enough to ensure the accuracy of the LDE model.
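The occupation times reported in Tab.~\ref{tab:sensor} and Tab.~\ref{tab:sensor_impact} can be tallied directly from the per-interval greedy solutions. A minimal sketch with hypothetical hourly solutions follows.
\begin{verbatim}
# Minimal sketch: occupation time T as the percentage of hydraulic
# intervals in which a location is selected. Hourly sets are hypothetical.
from collections import Counter

hourly = [{"J10", "J12", "J21"}] * 16 + [{"J10", "T2", "J21"}] * 8
T_h = len(hourly)                          # 24 hydraulic intervals
counts = Counter(loc for S in hourly for loc in S)
occupation = {loc: 100.0 * n / T_h for loc, n in counts.items()}
# -> J10: 100.0, J21: 100.0, J12: 66.7, T2: 33.3 (percent)
\end{verbatim}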
\begin{figure}[t] \centering \subfloat[\label{fig:net1_pattern1_base1_b}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_10.png}}{} \subfloat[\label{fig:net1_pattern1_base1_c}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_11.png}}{} \caption{Sensor placement results for Net1 with $k_f = 300\sec$, $\Delta t = 5\sec$, and Pattern I under (a) Base demand $2$ and (b) Base demand $3$.} \label{fig:net1_base} \end{figure} \subsubsection{The impact of various demand patterns} In this section, the impact of demand profiles on the final sensor placement result is explored. Note that the demand over 24 hours at a node is determined by its base demand and the corresponding pattern simultaneously. Furthermore, other demand patterns could reflect other days of the week, such as a weekend, rather than assuming a week-long demand curve. First, Pattern I is fixed as the stair shape in Fig.~\ref{fig:demandpattern} (the dotted line in Fig.~\ref{fig:net1_demandpattern}), and Base demands 1, 2, and 3 in Fig.~\ref{fig:net1_basedemand} are used. That is, we have $n_d = 3$ different demand profiles, which are an input for Algorithm~\ref{alg:greedy}. Note that these base demands are generated for illustrative purposes. Base demand 1 is designed to assign a nearly identical base demand to each node. Base demand 2 assigns larger base demands to the nodes on the right half of the network in Fig.~\ref{fig:setup}b, such as $\{\mathrm{J}12, \mathrm{J}13, \mathrm{J}22,\mathrm{J}23,\mathrm{J}32\}$. Base demand 3 assigns larger base demands to the nodes on the left half of the topology in Fig.~\ref{fig:setup}b, such as $\{\mathrm{J}11, \mathrm{J}21, \mathrm{J}31\}$. \begin{figure}[t] \centering \subfloat[\label{fig:net1_pattern1_b}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_12.png}}{} \subfloat[\label{fig:net1_pattern1_c}]{\includegraphics[keepaspectratio=true,scale=0.21]{Fig_13.png}}{} \caption{Sensor placement results for Net1 with $k_f= 300\sec$, $\Delta t = 5\sec$, and Base demand $1$ under (a) Pattern II and (b) Pattern III.} \label{fig:net1_pattern} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{Fig_14.png} \caption{Final sensor placement results for Net1 with $k_f= 300\sec$ and $\Delta t = 5\sec$, considering five different demand profiles (fusion of Fig.~\ref{fig:SS_3_NET1_b}, Fig.~\ref{fig:net1_base}, and Fig.~\ref{fig:net1_pattern}).} \label{fig:fusion} \end{figure} The final sensor placement strategies under the three Base demands 1, 2, and 3 are shown in Fig.~\ref{fig:SS_3_NET1_b}, Fig.~\ref{fig:net1_pattern1_base1_b}, and Fig.~\ref{fig:net1_pattern1_base1_c}; the corresponding detailed occupation times are not shown for brevity. It can be observed that the greedy-optimal location switches from $\mathcal{S}_{r=1}^* = \{ \mathrm{J}10 \}$ (under Base demand 1) to $\mathcal{S}_{r=1}^* = \{ \mathrm{J}12 \}$ (under Base demand 3) along with the changing base demand; when $r=3$, it switches from $\mathcal{S}_{r=3}^* = \{ \mathrm{J}10, \mathrm{J}12, \mathrm{J}21 \}$ (under Base demand 1) to $\mathcal{S}_{r=3}^* = \{ \mathrm{J}11, \mathrm{J}12, \mathrm{J}32 \}$ (under Base demand 3); and when $r=5$, it switches from $\mathcal{S}_{r=5}^* = \{ \mathrm{J}10, \mathrm{J}12, \mathrm{J}21, \mathrm{J}31, \mathrm{T}2\}$ (under Base demand 1) to $\mathcal{S}_{r=5}^* = \{ \mathrm{J}11, \mathrm{J}12, \mathrm{J}23, \mathrm{J}31, \mathrm{J}32 \}$ (under Base demand 2).
This showcases that changing base demands, i.e., different demand profiles, indeed has an impact on the sensor placement, but Algorithm~\ref{alg:greedy} still returns the best placement according to the chosen metric. Second, to test the impact of patterns, Patterns I, II, and III in Fig.~\ref{fig:net1_demandpattern} are used while Base demand 1 is fixed (see Fig.~\ref{fig:net1_basedemand}); we thus have another group of $n_d = 3$ demand profiles. Again, these patterns are only used for illustrative purposes to test the algorithm's performance. It can be seen that Pattern I is relatively flat compared with the other patterns, while Patterns II and III vary dramatically and are complementary to each other. The final sensor placement strategies under Patterns I, II, and III are shown in Fig.~\ref{fig:SS_3_NET1_b}, Fig.~\ref{fig:net1_pattern1_b}, and Fig.~\ref{fig:net1_pattern1_c}, which can also be viewed as three corresponding binary matrices whose zero/one entries indicate whether a location is not selected or selected. It can be observed that the greedy-optimal location switches from $\mathcal{S}_{r=1}^* = \{ \mathrm{J}10 \}$ to $\mathcal{S}_{r=1}^* = \{ \mathrm{J}21 \}$ and from $\mathcal{S}_{r=3}^* = \{ \mathrm{J}10, \mathrm{J}12, \mathrm{J}21 \}$ to $\mathcal{S}_{r=3}^* = \{ \mathrm{J}21, \mathrm{J}22, \mathrm{J}31 \}$. With the above comparisons, we conclude that both base demands and patterns have an impact on the final sensor placement solution in Net1. In order to quantify the similarity between two sensor placement strategies $S_1^*$ and $S_2^*$ (viewed as matrices with only zero and one entries), we define a similarity metric as $-\sum \sum \oplus (S_1^*,S_2^*)$, where $\oplus$ stands for the element-wise logical XOR operator. Note that this similarity metric is always non-positive; when the two matrices are identical, the largest similarity value of $0$ is reached. For example, if two placements differ only in a single time slot, where one selects $\mathrm{J}10$ and the other selects $\mathrm{J}12$, the XOR matrix contains exactly two nonzero entries and the similarity is $-2$. Applying this similarity metric, Fig.~\ref{fig:net1_pattern} is more similar to Fig.~\ref{fig:SS_3_NET1_b} than Fig.~\ref{fig:net1_base} is; that is, the pattern tends to cause less impact than the base demand in the Net1 case. This conclusion may extend to other networks, and it is always safe to claim that varying demand profiles at each node have a significant impact on the sensor placement strategy. If we consider all $n_d = 5$ discussed demand profiles $\boldsymbol D_i \in \mathbb{R}^{5 \times k_f}$, where $i = 1, \ldots, n_d$, and run Algorithm~\ref{alg:greedy}, the final sensor placement results considering Pattern I with Base demands 1, 2, and 3, and Patterns II and III, are shown in Fig.~\ref{fig:fusion}, which is the fusion of Fig.~\ref{fig:SS_3_NET1_b}, Fig.~\ref{fig:net1_base}, and Fig.~\ref{fig:net1_pattern}. The final solutions are $\mathcal{S}_{r=1}^* = \{ \mathrm{J}10\}$, $\mathcal{S}_{r=3}^* = \mc{S}_{r=1}^* \bigcup \{\mathrm{J}21, \mathrm{T}2\}$, and $\mathcal{S}_{r=5}^* = \mc{S}_{r=3}^* \bigcup \{\mathrm{J}12, \mathrm{J}31\}$, thereby showcasing the greedy-optimal solution of Algorithm~\ref{alg:greedy} that exploits the supermodularity of the set function optimization. \subsection{Net3 network}\label{sec:net3} In this section, the conclusions drawn from the looped Net1 network in the previous section are further corroborated via the Net3 water network shown in Fig.~\ref{fig:setup}c, with 90 junctions, 2 reservoirs, 3 tanks, 117 pipes, and 2 pumps. The base demands of all junctions are assumed fixed, and a relatively flat pattern (varying only slightly) is tested.
The results for $r = 2,8,14$ sensors selected from 95 node locations are shown in Fig.~\ref{fig:net3SS}, and the detailed locations are presented in Tab.~\ref{tab:net3_Result}; the nesting $\mc{S}^*_{r=2} \subset \mc{S}^*_{r=8} \subset \mc{S}^*_{r=14}$ indicates the supermodularity property of the solution for this Net3 network. This showcases the property even for a larger network, further reaffirming the performance of the greedy algorithm. Besides that, the motivation behind testing the Net3 network from a practical point of view is two-fold: \textit{(i)} is adding extra sensors effective in reducing the Kalman filter estimation error? and \textit{(ii)} is the strategy from Algorithm~\ref{alg:greedy} better than random strategies? \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{Fig_15.png} \caption{Sensor placement results for Net3 with $r = 2,8,14$ (the 95 node IDs are not shown for brevity).} \label{fig:net3SS} \end{figure} \begin{table}[t] \fontsize{8}{8}\selectfont \vspace{-0.05cm} \centering \setlength\tabcolsep{5 pt} \renewcommand{\arraystretch}{1.2} \makegapedcells \setcellgapes{1.7pt} \caption{Sensor placement results for Net3.} ~\label{tab:net3_Result} \begin{tabular}{c|c|c} \hline \textit{Network} & $r$ & \textit{Result} \\ \hline \hline \multirow{3}{*}{\makecell{\\ \textit{Net3} \\ }} & $2$ & $\mc{S}_{r=2}^* = \{ \mathrm{J}203,\mathrm{J}204 \}$ \\ \cline{2-3} & $8$ & \makecell{ $\mc{S}_{r=8}^* = \mc{S}_{r=2}^* \bigcup \{\mathrm{J}261,\mathrm{J}163, \mathrm{J}169,\mathrm{J}173, \mathrm{J}184,\mathrm{J}206 \}$ } \\ \cline{2-3} & $14$ & \makecell{ $\mc{S}_{r=14}^* = \mc{S}_{r=8}^* \bigcup \{ \mathrm{J}208,\mathrm{J}177, \mathrm{J}179,\mathrm{J}209, \mathrm{J}20,\mathrm{J}121\}$ } \\ \hline \hline \end{tabular}% \end{table} \subsection{Estimation performance and comparison with random SP} This section computationally investigates two important issues closely related to the two motivations above. The first is the performance of the state estimation and Kalman filter as the number of utilized sensors $r$ in the water network varies. The second is whether a uniform (i.e., placing a sensor at every other junction) or random sensor placement strategy yields a comparable performance---in terms of the state estimation metric---when compared with the greedy-optimal placement presented in Algorithm~\ref{alg:greedy}. Both issues are investigated for the larger network Net3. First, the relationship between the Kalman filter performance $f(\mc{S}_r)$~\eqref{equ:obsmetric} and the number of sensors is shown in Fig.~\ref{fig:net3Reached}. Interestingly, the Kalman filter performance $f(\mc{S}_r)$ decreases roughly linearly as the number of sensors $r$ increases from $1$ to $14$ for three different hydraulic simulations. Specifically, Fig.~\ref{fig:net3Reached} showcases the performance of the greedy-optimal solution when $r$ is fixed in Algorithm~\ref{alg:greedy} with fixed hydraulic profiles ($T_h = 0^\mathrm{th}, 10^\mathrm{th}, 20^\mathrm{th}$ hour), i.e., the three panels in Fig.~\ref{fig:net3Reached} show a similar trend for three different hydraulic profiles. The best performance, or lower bound, under the corresponding cases is reached when all sensor locations are selected ($r = 95$). This indicates that one should not expect a large improvement in Kalman filter performance from increasing the number of sensors, even if the locations of the added sensors are all greedy-optimal.
Furthermore, the time-varying Kalman filter performance $f(\mc{S}^*_{r = 14})$ for 24 hours is depicted via the blue line in Fig.~\ref{fig:net3Random}. The performance value can easily reach the $10^{5}$ level for this relatively large-scale network due to \textit{(i)} the large dimension of $\boldsymbol z$ ($n_z = 3.066 \times 10^6$), \textit{(ii)} the covariance matrix $\mathbb{C}$ having tiny diagonal elements (i.e., $5 \times 10^{-3}$), and \textit{(iii)} the typical value of $k_f$ being 200 in Net3, resulting in $\boldsymbol W_o$ in \eqref{equ:closedform} having elements of large magnitude. Moreover, the trend of the blue line is determined by the hydraulic profile, e.g., the flow rates over the 24 hours; the plot of the flow rates is not shown for brevity. To address the second issue, we showcase the performance of a random sensor placement with a fixed number of sensors $r=14$. Specifically, ten random sensor placements are generated for every hydraulic simulation. To quantify the performance of the proposed optimal placement, we define the relative performance of a random placement strategy $\hat{\mc{S}}$ as $\Delta f(\hat{\mc{S}}_{r=14}) = f(\hat{\mc{S}}_{r=14}) - f(\mc{S}^*_{r=14})$. A positive value of $\Delta f(\hat{\mc{S}}_{r=14})$ implies that the greedy-optimal placement outperforms the random one. The red lines in Fig.~\ref{fig:net3Random} are the relative performance of ten different randomizations---all of them are greater than zero, showcasing a much better state estimation performance through the greedy algorithm. Even though the differences in performance are only 100--200 on average, the actual Kalman filter performance is orders of magnitude better due to the fact that the $\log \det$ function is used to quantify the state estimation accuracy. That is, the $\mc{S}^*_{r=14}$ obtained from Algorithm~\ref{alg:greedy} performs significantly better than any random strategy $\hat{\mc{S}}_{r=14}$. \begin{figure}[t] \centering \subfloat[\label{fig:net3Reached}]{\includegraphics[keepaspectratio=true,scale=0.33]{Fig_16.pdf}}{} \subfloat[\label{fig:net3Random}]{\includegraphics[keepaspectratio=true,scale=0.23]{Fig_17.pdf}}{} \caption{Kalman filter performance $f(\mc{S}_{r}^*)$ with $r = \{1,\ldots,14,95\}$ when $T_h = 0^\mathrm{th}, 10^\mathrm{th}, 20^\mathrm{th}$ hour (a), performance $f(\mc{S}^*_{r=14})$ for 24 hours (blue line in (b)), and the relative performance of ten randomized sensor placements $\Delta f(\hat{\mc{S}}_{r=14})$ (red lines in (b)).} \label{fig:net3_performance} \end{figure}
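Continuing the toy sketch above (same hypothetical matrix $C$, metric $f$, and greedy routine), the comparison with random placements can be mimicked as follows; as in Fig.~\ref{fig:net3Random}, it is the sign of $\Delta f$ that matters, not its absolute scale.

\begin{verbatim}
# Continue the toy setup: compare the greedy placement against
# ten random placements of the same size r = 14.
r = 14
S_greedy = greedy_placement(r)
f_greedy = f(S_greedy)

for trial in range(10):
    S_rand = list(rng.choice(n_candidates, size=r, replace=False))
    delta_f = f(S_rand) - f_greedy
    print(f"trial {trial}: delta_f = {delta_f:.3f}")  # typically >= 0
\end{verbatim}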
\section{Conclusions, Paper Limitations, and Future Directions} The paper presents a new computational method that results in sensor placements of WQ sensing devices in water networks. The method exclusively focuses on the WDN observability with regard to the WQ dynamics. After thoroughly testing three networks, we summarize the findings. First, the impact of choosing the L-W scheme time-step $\Delta t$ (or the number of segments $s_L$) and the length of a single hydraulic simulation $k_f$ on the sensor placement strategy is minor and can be neglected. Second, our proposed method can be applied to practical networks to determine the sensor placement strategy because, in practice, historical data for demand patterns are available, thereby furnishing the sensor placement with the most common demand patterns. Hence, there is a possibility that the optimal sensor placement in terms of occupation time would be relatively \textit{time-invariant}. Third, the algorithm verifies the supermodular nature of the advocated set function optimization, as corroborated via different test cases on three different networks. Fourth, even if demand patterns change significantly, the algorithm can still be used to obtain a sensor placement that optimizes the sensor occupation time. The paper \textit{does not} advocate \textit{only} looking at state estimation metrics for water quality sensor placement. As mentioned in Section~\ref{sec:literature}, a plethora of social and engineering objectives are typically considered in the literature to solve the WQSP. To that end, it is imperative that the proposed approach in this paper be studied in light of the other metrics and objectives discussed in the literature (such as minimizing the expected population affected by, and the amount of, contaminated water from an intrusion event). Consequently, an investigation of balancing engineering and state estimation objectives with more social-driven ones is needed. The objective of this paper is hence to provide a tool for the system operator that quantifies network observability vis-\`a-vis water quality sensor placements. The water system operator can then balance the objective of obtaining network-wide observability with these other metrics. Future work will focus on this limitation of the present work, in addition to considering multi-species dynamics that are nonlinear in the process model, which necessitate alternate approaches to quantify observability of nonlinear dynamic networks. This will also allow examining the potential reaction between contaminants and the chlorine residuals that the sensors are monitoring. \section*{Data Availability Statement} Some or all data, models, or code used during the study were provided by a third party. Specifically, we provide a GitHub link that includes all the models, the data, and the results from the case study~\cite{wang_2020}.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Observations of the cosmic microwave background (CMB) have confirmed a now `standard' cosmological model \cite{Komatsu:2010fb}. A key aspect of this model is that primordial fluctuations are a realization of a Gaussian random field. This implies that CMB fluctuations are completely characterized by their two-point correlation function $C(\theta)$ in real space, or equivalently, the power spectrum $C_{\ell}$ in harmonic space. All higher-order $N$-point correlation functions with even $N$ can be written in terms of the two-point function, and all $N$-point correlation functions with odd $N$ are zero. But while the simplest single-field slow-roll (SFSR) inflationary models assumed in the standard cosmological model predict departures from Gaussianity to be undetectably small \cite{localmodel}, several beyond-SFSR models predict departures from Gaussianity to be larger \cite{larger}, and possibly detectable with current or forthcoming CMB experiments. While the range of predictions for non-Gaussianity is large, the local model for non-Gaussianity \cite{Luo:1993xx}---that which appears in arguably the simplest beyond-SFSR models---has become the canonical model for most non-Gaussianity searches. The non-Gaussianity is parametrized in these models by a non-Gaussian amplitude ${f_{\rm{nl}}}$ to be defined more precisely below. Most efforts to measure ${f_{\rm{nl}}}$ have relied on an estimator constructed from the CMB bispectrum, the three-point correlation function in harmonic space. However, the local model also predicts a non-zero trispectrum (the harmonic-space four-point function) \cite{Kunz:2001ym,Hu:2001fa,Okamoto:2002ik,Kogo:2006kh,Regan:2010cn}, and efforts have recently been mounted to determine ${f_{\rm{nl}}}$ from the trispectrum \cite{Smidt:2010ra}. It has been suggested, moreover, that a comparison of the values of ${f_{\rm{nl}}}$ obtained from the bispectrum and trispectrum can be used as a consistency test for the local model \cite{Byrnes:2006vq, Kogo:2006kh,Smidt:2010ra}. However, it can be shown that the bispectrum estimator for ${f_{\rm{nl}}}$ saturates the Cramer-Rao bound, and it has been argued that this implies that no new information on the value of ${f_{\rm{nl}}}$, beyond that obtained from the bispectrum, can be obtained from the trispectrum \cite{Babich:2005en,Creminelli:2006gc}. Ref.~\cite{Creminelli:2006gc} further outlines the nature of the correlation between the bispectrum and trispectrum ${f_{\rm{nl}}}$ estimators implied by this conclusion. Here we show that the trispectrum {\it does} provide additional information on ${f_{\rm{nl}}}$; i.e., it is {\it not} redundant with that from the bispectrum. We show that there is indeed a correlation between the bispectrum and trispectrum ${f_{\rm{nl}}}$ estimators, elaborating the arguments of Ref.~\cite{Creminelli:2006gc}. However, we show with analytic estimates and numerical calculations that this correlation becomes weak in the high-statistics limit. We explain, with a simple example, how additional information on ${f_{\rm{nl}}}$ can be provided by the trispectrum given that the bispectrum estimator for ${f_{\rm{nl}}}$ saturates the Cramer-Rao bound. Put simply, the Cramer-Rao inequality bounds the variance with which a distribution can be measured, but there may be additional information in a distribution, about a theory or its parameters, beyond the distribution variance. 
The discussion of the Cramer-Rao bound and the examples we work out in Section~\ref{sec:cramerrao} may be of interest to a much broader audience of readers than just those interested in CMB non-Gaussianity. The outline of this paper is as follows: We begin in Section \ref{sec:cramerrao} with our discussion of the Cramer-Rao bound. The aim of the rest of the paper is to illustrate explicitly the nature of the correlation between the bispectrum estimator for ${f_{\rm{nl}}}$ and the trispectrum estimator for ${f_{\rm{nl}}}^2$ and to show that the correlation becomes small in the high-statistics limit. In Section \ref{sec:definitions} we introduce our conventions for the bispectrum and trispectrum. In Section \ref{sec:estimators} we derive the minimum-variance estimators for ${f_{\rm{nl}}}$ from the bispectrum and trispectrum and evaluate the noises in each. We also write down approximations for the estimators and noises valid for the local model. In Section \ref{sec:crosscorrelation} we explain the nature of the correlation between the bispectrum and trispectrum estimators for ${f_{\rm{nl}}}$. We then show that this correlation becomes weak (scaling with $(\ln N_{\mathrm{pix}})^{-1}$) as the number $N_{\mathrm{pix}}$ of pixels becomes large. We conclude in Section \ref{sec:conclusion}. Appendix A details the correspondence between continuum and discrete Fourier conventions for power spectra, bispectra, and trispectra, and Appendix B describes the numerical evaluation of the correlation. \section{The Cramer-Rao Bound} \label{sec:cramerrao} In the Sections below we will demonstrate that the estimators for ${f_{\rm{nl}}}$ and ${f_{\rm{nl}}}^2$ become statistically independent with sufficiently good statistics. However, the bispectrum estimator for ${f_{\rm{nl}}}$ saturates the Cramer-Rao bound, and it has been argued that this saturation implies that no further information about ${f_{\rm{nl}}}$, beyond that obtained from the bispectrum, can be obtained from the trispectrum \cite{Babich:2005en,Creminelli:2006gc}. Here we explain that the Cramer-Rao inequality bounds only the variance with which ${f_{\rm{nl}}}$ can be measured; additional information, beyond the variance, can be obtained from measurement of ${f_{\rm{nl}}}^2$ from the trispectrum. To illustrate, consider, following Ref.~\cite{Babich:2005en}, the analogous problem of determining ${f_{\rm{nl}}}$ and ${f_{\rm{nl}}}^2$ from a one-dimensional version of the local model. Suppose we have a random variable $X$ written in terms of a Gaussian random variable $x$ of zero mean ($\VEV{x}=0$) and unit variance ($\VEV{x^2}=1$) as $X=x+\epsilon(x^2-1)$. Here, $\epsilon$ parametrizes the departure from the null hypothesis $\epsilon=0$. The PDF for $X$, for a given $\epsilon$, is \begin{equation} P(X|\epsilon) = \frac{1}{\sqrt{2\pi}} \left[ \frac{e^{-x_+^2/2}}{1+2 \epsilon x_+}+ \frac{e^{-x_-^2/2}}{1+2 \epsilon x_-} \right], \label{eqn:onedPDF} \end{equation} where \begin{equation} x_{\pm} = \frac{1}{2\epsilon} \left[ \pm \sqrt{1+4 \epsilon(X+\epsilon)}-1 \right]. \label{eqn:xpm} \end{equation} The logarithm of the PDF can then be Taylor expanded about $\epsilon=0$ as \begin{equation} \ln P(X|\epsilon) = -\frac{X^2}{2} + \epsilon I_1(X) - \frac{\epsilon^2}{2} I_2(X) + {\cal O}(\epsilon^3), \label{eqn:expansion} \end{equation} where $I_1(X) \equiv X^3-3X$, and $I_2(X) = 5X^4+5-14 X^2$.
It will be useful below to note that the expectation values of these quantities in the weakly non-Gaussian limit are $\VEV{I_1}=6\epsilon + {\cal O}(\epsilon^3)$ and $\VEV{I_2} = 6 + 272\,\epsilon^2 +{\cal O}(\epsilon^4)$. Now suppose we have a realization consisting of $N$ data points $X_i$, each drawn independently from the distribution in Eq.~(\ref{eqn:onedPDF}), and let's arrange these data points into a vector $\mathbf{X}$. The PDF for this realization, for a given $\epsilon$, is \begin{eqnarray} \ln P(\mathbf{X}|\epsilon) &=& \sum_i\left[ - \frac{X_i^2}{2} + \epsilon I_1(X_i) \right. \nonumber \\ & & \left. - \frac{\epsilon^2}{2} I_2(X_i) + {\cal O}(\epsilon^3) \right]. \label{eqn:multiP} \end{eqnarray} The Cramer-Rao inequality states that the smallest variance $\mathrm{Var({\widehat \epsilon})} \equiv \VEV{{\widehat \epsilon}^2}-\VEV{{\widehat \epsilon}}^2$ of an estimator ${\widehat \epsilon}$ is \begin{equation} \mathrm{Var({\widehat \epsilon})} \geq \frac{1}{F}, \label{eqn:CRbound} \end{equation} where \begin{eqnarray} F &=& \int \left[ \frac{\partial \ln P(\mathbf{X}|\epsilon)}{\partial \epsilon} \right]^2 \,P(\mathbf{X}|\epsilon) d\mathbf{X} \nonumber \\ &\equiv& \VEV{ \left[ \frac{\partial \ln P(\mathbf{X}|\epsilon)}{\partial \epsilon} \right]^2}, \label{eqn:Fisher} \end{eqnarray} is the Fisher information. Here, the angle brackets denote an expectation value with respect to the null-hypothesis ($\epsilon=0$) PDF. Applying Eq.~(\ref{eqn:Fisher}) to Eq.~(\ref{eqn:multiP}), we find \begin{equation} F = \sum_i \VEV{[I_1(X_i)]^2} = 6N, \label{eqn:Fone} \end{equation} from which we infer \begin{equation} \mathrm{Var({\widehat \epsilon})} \geq \frac{1}{6\,N}. \label{eqn:CRforf} \end{equation} This model predicts a skewness $\VEV{I_1} = \VEV{X^3-3X} = 6\epsilon$, and so we can construct an estimator for $\epsilon$ from the measured skewness as follows: \begin{equation} {\widehat \epsilon}_{s} = \frac{1}{6N} \sum_i (X_i^3-3 X_i). \label{eqn:skewestimator} \end{equation} The variance of this estimator is $\mathrm{Var({\widehat \epsilon}_{s})} = (6N)^{-1}$, and so this estimator saturates the Cramer-Rao bound. In retrospect, this saturation should come as no surprise. According to Eqs.~(\ref{eqn:multiP}) and (\ref{eqn:Fisher}), the Fisher information---and thus the minimum variance with which $\epsilon$ can be measured---is determined entirely by the term in $\ln P(\mathbf{X}|\epsilon)$ linear in $\epsilon$, which, in this case, is precisely the skewness. Thus, the terms in $\ln P(\mathbf{X}|\epsilon)$ that are higher order in $\epsilon$ contribute nothing to the Fisher information. And since the term linear in $\epsilon$ multiplies the skewness, ${\widehat \epsilon}_s$ saturates the Cramer-Rao bound. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{pdfexample.eps} \caption{Here we plot two probability distribution functions that share the same skewness but with two different values for the kurtosis.} \label{fig:samplePDFs} \end{figure}
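As a quick numerical check of Eqs.~(\ref{eqn:skewestimator}) and (\ref{eqn:CRforf}), the following Monte Carlo sketch (Python; the sample sizes are arbitrary illustrative choices) draws realizations of the one-dimensional local model and verifies that ${\widehat \epsilon}_s$ is unbiased with variance $\simeq (6N)^{-1}$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
eps, N, trials = 0.01, 5_000, 2_000

est = []
for _ in range(trials):
    x = rng.standard_normal(N)
    X = x + eps * (x**2 - 1)          # one-dimensional local model
    est.append(np.sum(X**3 - 3 * X) / (6 * N))
est = np.array(est)

print(f"mean = {est.mean():.4f} (true eps = {eps})")
print(f"var  = {est.var():.2e} (Cramer-Rao bound 1/(6N) = {1/(6*N):.2e})")
\end{verbatim}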
But this does {\it not} mean that there is no information about $\epsilon$ from these higher-order terms. Consider, for example, a more general PDF, \begin{eqnarray} \ln P_\alpha(X|\epsilon,\epsilon_1^2) &=& -\frac{X^2}{2} + \epsilon I_1(X) \nonumber \\ & & - \frac{\epsilon_1^2}{2} I_2(X) + {\cal O}(\epsilon^3), \label{eqn:alphaexpansion} \end{eqnarray} parametrized by $\epsilon_1^2$, in addition to the parameter $\epsilon$. This PDF differs from the PDF in Eq.~(\ref{eqn:expansion}) in the coefficient of $I_2(X)$. In the weakly non-Gaussian limit, the skewness of this PDF is $\VEV{I_1(X)} = 6\epsilon$, and its ``kurtosis'' is $\VEV{I_2(X)} = 6+846\,\epsilon^2 -574\,\epsilon_1^2+ 18(\epsilon^2-\epsilon_1^2)$.\footnote{In this paper we use the term ``kurtosis'' to denote the expectation value of $I_2(X)$. This is qualitatively similar to, but slightly different from, the usual kurtosis, which is conventionally defined as the expectation value of $X^4-6X^2+3$.} If we fix $\epsilon$, we then have a family of PDFs, parametrized by $\epsilon_1$, that all have the same skewness but with different values of the kurtosis. Fig.~\ref{fig:samplePDFs} shows two PDFs that have the same skewness but different kurtoses. These are clearly two very different distributions; qualitatively, the large-$X$ tails are suppressed as $\epsilon_1$ is increased. The estimator in Eq.~(\ref{eqn:skewestimator}) once again gives us the optimal estimator for $\epsilon$ in this new PDF, but we can now also measure from the data the kurtosis, the expectation value of $I_2(X)$, which provides an estimator for $846\,\epsilon^2 -574\,\epsilon_1^2+ 18(\epsilon^2-\epsilon_1^2)$. This can then be used in combination with the skewness estimator for $\epsilon$ to obtain an estimator for $\epsilon_1^2$. According to the Cramer-Rao inequality, the smallest variance with which $\epsilon_1^2$ can be obtained is \begin{eqnarray} {\mathrm{Var}}(\epsilon_1^2) &=& \Biggl\{ \int \, \left[\frac{\partial \ln P(\mathbf{X}|\epsilon,\epsilon_1^2)}{\partial(\epsilon_1^2)} \right]^2 \nonumber \\ & & \times P(\mathbf{X}|\epsilon,\epsilon_1^2)\, d\mathbf{X} \Biggr\}^{-1} = \frac{1}{278\,N}. \label{eqn:fnl2} \end{eqnarray} Note that we cannot apply the Cramer-Rao bound to the parameter $\epsilon_1$, rather than $\epsilon_1^2$, as $\partial P(\mathbf{X}|\epsilon,\epsilon_1^2)/\partial \epsilon_1$ is zero under the null hypothesis $\epsilon_1=0$, thus violating one of the conditions for the Cramer-Rao inequality to apply. Since $\epsilon_1^2$, not $\epsilon_1$, is determined by the data, the distribution function for $\epsilon_1^2$ (not $\epsilon_1$) will approach a Gaussian distribution in the large-$N$ limit. The covariance between $\epsilon$ and $\epsilon_1^2$ is zero, as the former is odd in $X$ and the latter even. Still, this does not necessarily imply that the two are statistically independent, as there is still a covariance between $\epsilon^2$ and $\epsilon_1^2$. However, this becomes small as $N$ becomes large. The correlation coefficient in this example is $r \equiv \mathrm{Cov}(\epsilon^2,\epsilon_1^2)/\sqrt{ {\mathrm{Var}}(\epsilon^2) {\mathrm{Var}}(\epsilon_1^2)} \simeq 6\,N^{-1/2}$. Thus, for large $N$, $\epsilon$ and $\epsilon_1^2$ are two statistically independent quantities that can be obtained from the data and then compared with the local-model prediction that $\epsilon_1^2=\epsilon^2$. In brief, the skewness and kurtosis are two different quantities that can be obtained from a measured distribution. In the limit of large $N$, no measurement of the skewness, no matter how precise, can tell us anything about the kurtosis, and {\it vice versa}. In this example, a one-sigma excursion in $\epsilon$ from a measurement with $N$ data points is ${\mathrm{Var}}^{1/2}(\epsilon) = (6N)^{-1/2}$, and this is smaller than ${\mathrm{Var}}^{1/4}(\epsilon_1^2) = (278\,N)^{-1/4}$, the square root of the one-sigma excursion in $\epsilon_1^2$, for any $N\gtrsim$~few.
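The $r \simeq 6\,N^{-1/2}$ scaling can be illustrated with a short Monte Carlo (Python; for simplicity we use the moment statistics built from $I_1$ and $I_2$ under the null hypothesis rather than the exact maximum-likelihood estimators, so the agreement is approximate).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, trials = 1_000, 4_000

eps_hat, kurt_hat = [], []
for _ in range(trials):
    X = rng.standard_normal(N)        # null hypothesis, eps = 0
    I1 = X**3 - 3 * X
    I2 = 5 * X**4 + 5 - 14 * X**2
    eps_hat.append(I1.mean() / 6)     # skewness estimator of eps
    kurt_hat.append(I2.mean() - 6)    # kurtosis-based statistic
eps_hat = np.array(eps_hat)

r_mc = np.corrcoef(eps_hat**2, kurt_hat)[0, 1]
print(f"Monte Carlo r = {r_mc:.3f}, expected ~ {6 / np.sqrt(N):.3f}")
\end{verbatim}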
Thus, the skewness will provide better sensitivity if we are simply trying to detect a departure from the null hypothesis $\epsilon=0$; measurement of $\epsilon_1^2$ will not add much in this case. Still, if $\epsilon$ is measured with high statistical significance from the skewness, then measurement of $\epsilon_1^2$ can, with sufficient statistics, provide a statistically independent determination of $\epsilon^2$ and/or an independent test of the theory. Now consider another PDF, \begin{eqnarray} \ln P_{\mathrm{small}}(X|\epsilon) &=& -\frac{X^2}{2} + 10^{-2} \epsilon I_1(X) \nonumber \\ & & - \frac{\epsilon^2}{2} I_2(X) + {\cal O}(\epsilon^3), \label{eqn:epsexpansion} \end{eqnarray} that differs from the local-model PDF in the suppression we have inserted for the term linear in $\epsilon$, which thus suppresses the skewness. Application of the Cramer-Rao inequality in this case tells us that the smallest value of $\epsilon$ that can be distinguished from the null hypothesis ($\epsilon=0$) is $10^2/\sqrt{6N}$, and we know from the discussion above that this variance is obtained via measurement of the skewness. However, $\epsilon^2$, the coefficient of the second term in the expansion---the one obtained from measurement of the kurtosis---can be obtained with the variance given above. Thus, in this case, estimation of $\epsilon^2$ via measurement of the kurtosis provides a more sensitive probe of a departure from the null hypothesis $\epsilon=0$ than does estimation of $\epsilon$ from measurement of the skewness, as long as $N\lesssim 10^7$. Note that the Cramer-Rao bound is not violated in this case, as measurement of $\epsilon^2$, which does not discriminate between positive and negative values of $\epsilon$, does not provide any further information on ${\mathrm{Var}}(\epsilon)$. The {\it apparent} violation of the Cramer-Rao bound arises in this case because one of the conditions for the validity of the Cramer-Rao bound---that $\partial \ln P/\partial \epsilon$ be non-zero at $\epsilon=0$ (under the null hypothesis)---is becoming invalid as the numerical coefficient of $\epsilon$ in $\ln P$ is made smaller. Had we chosen that coefficient to be zero, rather than $10^{-2}$, then the Cramer-Rao inequality would have given a nonsensical bound for ${\mathrm{Var}}(\epsilon)$. \subsection{Summary} Suppose we have a theory that predicts new effects parametrized by a quantity $\epsilon$, with $\epsilon=0$ representing the null hypothesis. A general PDF for the data $\mathbf{X}$ given $\epsilon$ (or likelihood for $\epsilon$ for given data $\mathbf{X}$) can be expanded in $\epsilon$ as $\ln P(X|\epsilon) = \ln P_0(X) + \epsilon g(X) + \epsilon^2 h(X)+\cdots$, where $P_0(X)$ is the PDF under the null hypothesis $\epsilon=0$ and $g(X)$ and $h(X)$ are functions that describe the theory. Estimation of $\epsilon$ can be obtained through measurement of the mean value of $g(X)$, and an independent estimation of $\epsilon^2$ can, with sufficiently good statistics, be obtained from measurement of the mean value of $h(X)$. If $\VEV{[g(X)]^2}^2 \gtrsim \VEV{[h(X)]^2}$, where the expectation value is with respect to $P_0$, then measurement of the mean value of $g(X)$ will provide a more sensitive avenue for detection of a value of $\epsilon$ that departs from the null hypothesis than measurement of the mean value of $h(X)$.
If $\VEV{[g(X)]^2}^2 \lesssim \VEV{[h(X)]^2}$, then measurement of the mean value of $h(X)$ will provide a more sensitive test for detection of a value of $\epsilon$ that departs from the null hypothesis. If the two are comparable, then both tests will have comparable sensitivity. In the case of a statistically-significant detection, there may be, given sufficient statistics, independent information on the values of $\epsilon$ and $\epsilon^2$ from measurement of both moments. Care must be taken in interpreting the results of a measurement of $\epsilon^2$ from $h(X)$: the distribution of the $h(X)$ estimator for $\epsilon^2$ is Gaussian in $\epsilon^2$, not $\epsilon$. \subsection{Local-model bispectrum and trispectrum} Similar arguments apply, {\it mutatis mutandis}, to measurement of the bispectrum and trispectrum, generalizations of the skewness and kurtosis: the estimator for ${f_{\rm{nl}}}$ obtained from the bispectrum is statistically independent (for sufficiently large $N_{\mathrm{pix}}$) of the estimator for ${f_{\rm{nl}}}^2$ obtained from the trispectrum. If the variance to ${f_{\rm{nl}}}$ obtained from the bispectrum is comparable to the square root of the variance to ${f_{\rm{nl}}}^2$ obtained from the trispectrum \cite{Hu:2001fa,Kogo:2006kh}, both will have roughly comparable sensitivities toward detection of a departure from the null hypothesis ${f_{\rm{nl}}}=0$. If there is a statistically significant detection, both can provide, with sufficiently good statistics, independent information on ${f_{\rm{nl}}}$ and ${f_{\rm{nl}}}^2$, even if the bispectrum estimator for ${f_{\rm{nl}}}$ saturates the Cramer-Rao bound. We stop short of verifying these claims with the full likelihood for the local model. However, the arguments given explicitly for the one-dimensional analog above also apply to the skewness and kurtosis in the local model, the three- and four-point functions at zero lag, respectively. While the skewness and kurtosis are not optimal estimators for ${f_{\rm{nl}}}$ or ${f_{\rm{nl}}}^2$, they are statistically independent quantities that are derived from the bispectrum and trispectrum, respectively. \subsection{Another example} Here we provide another example in which statistically independent estimators can be obtained for $\epsilon$ and $\epsilon^2$, where $\epsilon$ is a parameter that quantifies a departure from a null hypothesis. Suppose we want to test a theory in which the decay product from a polarized particle is predicted to have an angular distribution $P(\theta) \propto P_0(\cos\theta) + \epsilon P_1(\cos\theta) +\epsilon^2 P_2(\cos\theta)$, where the $P_n$ are Legendre polynomials, and $\epsilon$ parametrizes the departure from the null hypothesis. In this case, measurement of the dipole, the mean value of $P_1(\cos\theta)$, provides an estimator for $\epsilon$, and measurement of the quadrupole, the mean value of $P_2(\cos\theta)$, provides a statistically-independent (with sufficiently high statistics) estimator for $\epsilon^2$. Thus, measurement of both the dipole and quadrupole can be used to test the data, even though the Cramer-Rao inequality tells us that ${\mathrm{Var}}(\epsilon)$ is bounded by the value obtained from the dipole.
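A short Monte Carlo sketch of this example (Python; the rejection sampler and the value of $\epsilon$ are illustrative only) estimates $\epsilon$ from the dipole and $\epsilon^2$ from the quadrupole, using $\langle P_1\rangle = \epsilon/3$ and $\langle P_2\rangle = \epsilon^2/5$ for this distribution.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
eps, N = 0.3, 200_000

def sample_mu(n):
    # Rejection-sample mu = cos(theta) from
    # p(mu) proportional to 1 + eps*P1(mu) + eps^2*P2(mu).
    out = np.empty(0)
    pmax = 1 + abs(eps) + eps**2      # simple upper bound on p(mu)
    while out.size < n:
        mu = rng.uniform(-1, 1, n)
        p = 1 + eps * mu + eps**2 * 0.5 * (3 * mu**2 - 1)
        out = np.concatenate([out, mu[rng.uniform(0, pmax, n) < p]])
    return out[:n]

mu = sample_mu(N)
eps_dipole = 3 * mu.mean()                       # <P1> = eps/3
eps2_quad = 5 * (0.5 * (3 * mu**2 - 1)).mean()   # <P2> = eps^2/5
print(eps_dipole, eps2_quad, eps**2)
\end{verbatim}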
\section{Definitions and Conventions} \label{sec:definitions} We have argued above that the bispectrum estimator for ${f_{\rm{nl}}}$ and the trispectrum estimator for ${f_{\rm{nl}}}^2$ may provide statistically independent information. The aim of the rest of the paper will be to evaluate explicitly the correlation between the bispectrum estimator for ${f_{\rm{nl}}}$ and the trispectrum estimator for ${f_{\rm{nl}}}^2$. We will find that it is nonzero, but that it becomes small in the large-$l_{\mathrm{max}}$ limit. We assume a flat sky to avoid the complications (e.g., spherical harmonics, Clebsch-Gordan coefficients, Wigner 3$j$ and 6$j$ symbols, etc.) associated with a spherical sky, and we further assume the Sachs-Wolfe limit. We denote the fractional temperature perturbation at position $\vec\theta$ on a flat sky by $T(\vec\theta)$, and refer to it hereafter simply as the temperature. The temperature in the local model is written, \begin{equation} T(\vec \theta) = t(\vec\theta) +{f_{\rm{nl}}} [t(\vec\theta)]^2, \end{equation} in terms of a Gaussian random field $t(\vec \theta)$. Note that our ${f_{\rm{nl}}}$ is three times the definition, in terms of the gravitational potential, used in most of the literature. We use this alternative definition to simplify the equations, but the difference should be noted if comparing our quantitative results with others. The field $t(\vec\theta)$ has a power spectrum $C_l$ given by \begin{equation} \VEV{t_{\vec l_1} t_{\vec l_2}} = \Omega \delta_{\vec l_1+\vec l_2,0} C_l, \label{eqn:powerspectrum} \end{equation} where $\Omega=4\pi f_{\mathrm{sky}}$ is the survey area (in steradian), $t_{\vec l}$ is the Fourier transform of $t(\vec\theta)$, and $\delta_{\vec l_1+\vec l_2,0}$ is a Kronecker delta that sets $\vec l_1 = -\vec l_2$. In the limit ${f_{\rm{nl}}} T \ll 1$ (current constraints are ${f_{\rm{nl}}} T \lesssim 10^{-3}$), $C_l$ is also the power spectrum for $T(\vec\theta)$. The bispectrum $B(l_1,l_2,l_3)$ is defined by \begin{equation} \VEV{T_{\vec l_1} T_{\vec l_2} T_{\vec l_3}} = \Omega \delta_{\vec l_1 +\vec l_2 +\vec l_3,0} B(l_1,l_2,l_3). \label{eqn:bispectrum} \end{equation} The Kronecker delta ensures that the bispectrum is defined only for $\vec l_1 +\vec l_2+\vec l_3=0$; i.e., only for triangles in Fourier space. Statistical isotropy then dictates that the bispectrum depends only on the magnitudes $l_1$, $l_2$, $l_3$ of the three sides of this Fourier triangle. The bispectrum for the local model is, \begin{equation} B(l_1,l_2,l_3) = 2 {f_{\rm{nl}}} [ C_{l_1} C_{l_2} + C_{l_1} C_{l_3} + C_{l_2} C_{l_3}]. \label{eqn:lmbispectrum} \end{equation} Likewise, the trispectrum is defined by \begin{equation} \VEV{T_{\vec l_1} T_{\vec l_2} T_{\vec l_3} T_{\vec l_4}} = \Omega \delta_{\vec l_1 +\vec l_2 +\vec l_3+ \vec l_4,0} {\cal T}(\vec l_1,\vec l_2,\vec l_3,\vec l_4), \end{equation} and for the local model, \begin{eqnarray} {\cal T}(\vec l_1,\vec l_2,\vec l_3,\vec l_4) &=& {f_{\rm{nl}}}^2 \left[ P_{l_3l_4}^{l_1 l_2}(|\vec l_1+\vec l_2|) \right. \nonumber \\ &+ & \left. P_{l_2l_4}^{l_1 l_3}(|\vec l_1+\vec l_3|) + P_{l_2l_3}^{l_1 l_4}(|\vec l_1+\vec l_4|) \right], \nonumber\\ \label{eqn:localtri} \end{eqnarray} where \begin{eqnarray} P_{l_3l_4}^{l_1 l_2}(|\vec l_1+\vec l_2|) &=& 4 C_{|\vec l_1+\vec l_2|} \left[ C_{l_1} C_{l_3} + C_{l_1} C_{l_4} \right. \nonumber \\ & & \left. + C_{l_2} C_{l_3} + C_{l_2} C_{l_4}\right]. \label{eqn:Pdefn} \end{eqnarray} Again, the trispectrum is nonvanishing only for $\vec l_1 +\vec l_2 +\vec l_3+ \vec l_4=0$, that is, only for quadrilaterals in Fourier space. \section{Minimum-variance non-Gaussianity Estimators} \label{sec:estimators} We now review how to measure ${f_{\rm{nl}}}$ from the bispectrum and the trispectrum.
To keep our arguments clear (and since the current goal is simply detection of a departure from Gaussianity, rather than precise evaluation of ${f_{\rm{nl}}}$), we assume the null hypothesis ${f_{\rm{nl}}}=0$ in the evaluation of noises and construction of estimators. The generalization to nonzero ${f_{\rm{nl}}}$ is straightforward \cite{Creminelli:2006gc}. \subsection{The bispectrum} From Eqs.~(\ref{eqn:bispectrum}) and (\ref{eqn:lmbispectrum}), each triangle $\vec l_1 +\vec l_2 +\vec l_3 =0$ gives an estimator, \begin{equation} ({\widehat{\hnl^b}})_{123} = \frac{ T_{\vec l_1} T_{\vec l_2} T_{\vec l_3} }{\Omega B(l_1,l_2,l_3)/{f_{\rm{nl}}}}, \label{eqn:onetriangle} \end{equation} with variance [using Eq.~(\ref{eqn:powerspectrum})],\footnote{Here we ignore the negligible contributions from triangles (and, for the trispectrum below, quadrilaterals) in which two sides have the same length. We do, however, include these configurations in the numerical analysis described in Appendix \ref{sec:appendixb} and verify that this assumption is warranted.} \begin{equation} \frac{\Omega^3 C_{l_1} C_{l_2} C_{l_3}}{\left[ \Omega B(l_1,l_2,l_3)/{f_{\rm{nl}}} \right]^2}. \label{eqn:singlevariance} \end{equation} The minimum-variance estimator is constructed by adding all of these estimators with inverse-variance weighting. It is \begin{equation} {\widehat{\hnl^b}} = \sigma_b^{2} \sum \frac{ T_{\vec l_1} T_{\vec l_2} T_{\vec l_3} B(l_1,l_2,l_3)/{f_{\rm{nl}}}}{ \Omega^2 C_{l_1}C_{l_2}C_{l_3}}, \label{eqn:biestimator} \end{equation} and it has inverse variance, \begin{equation} \sigma_b^{-2} = \sum \frac{ \left[ B(l_1,l_2,l_3)/{f_{\rm{nl}}} \right]^2}{\Omega C_{l_1}C_{l_2}C_{l_3}}. \label{eqn:binoise} \end{equation} The sums in Eqs.~(\ref{eqn:biestimator}) and (\ref{eqn:binoise}) are taken over all {\it distinct} triangles with $\vec l_1 + \vec l_2 +\vec l_3=0$. We may then take $\vec L\equiv \vec l_3$ to be the shortest side of the triangle---i.e., $l_1,l_2 > L$---and re-write the estimator as, \begin{eqnarray} {\widehat{\hnl^b}} &=& \frac{1}{2}\sigma_b^{2} \sum_{\vec L} \frac{1}{C_L} \nonumber \\ & \times & \sum_{\vec l_1+\vec l_2=-\vec L,\,l_1,l_2>L} \frac{ T_{\vec l_1} T_{\vec l_2} T_{\vec L} B(l_1,l_2,L)/{f_{\rm{nl}}}}{ \Omega^2 C_{l_1}C_{l_2}}, \nonumber \\ \label{eqn:biestimatorrewrite} \end{eqnarray} and the inverse-variance as \begin{equation} \sigma_b^{-2} = \frac{1}{2}\sum_{\vec L} \frac{1}{C_L} \sum_{ \vec l_1+\vec l_2 =-\vec L,\, l_1,l_2>L} \frac{ \left[ B(l_1,l_2,L)/{f_{\rm{nl}}} \right]^2}{\Omega C_{l_1}C_{l_2}}. \label{eqn:binoise2} \end{equation} The factor of $1/2$ is included to account for double counting of identical triangles, those with $\vec l_1 \leftrightarrow \vec l_2$. \subsubsection{Approximation to the Bispectrum Estimator} Now consider the variance $\sigma_b^2$ with which ${f_{\rm{nl}}}$ can be measured from the bispectrum. Take $C_l = A/l^2$ for the power spectrum, where $A\simeq 6\times10^{-10}$ is the power-spectrum normalization. The bispectrum in Eq.~(\ref{eqn:lmbispectrum}) is maximized for squeezed triangles, those with $L \ll l_1,l_2$, and thus with $l_1 \simeq l_2$. In this limit, the bispectrum can be approximated as $B(l_1,l_2,L) \simeq 4 A^2 {f_{\rm{nl}}} L^{-2}l_1^{-2}$. Then, from Eq.~(\ref{eqn:binoise2}) the inverse variance (and thus the signal-to-noise) is dominated by squeezed triangles, and it is furthermore dominated by those triangles with the modes $\vec L$ of the {\it smallest} magnitudes $L$.
\begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{triangle.eps} \caption{Three triangles that all share a shortest side $\vec L$.} \label{fig:triangle} \end{figure} More precisely, let us evaluate the contribution $(\sigma_b^{-2})_{\vec L}$ to the inverse variance obtained from all triangles that share the same shortest side $\vec L$, as shown in Fig.~\ref{fig:triangle}. Since this contribution is dominated by modes with $\vec l_1 \simeq -\vec l_2$, the inverse-variance from these triangles is, \begin{eqnarray} (\sigma_b^{-2})_{\vec L} &\simeq& \frac{1}{2\Omega} \frac{L^2}{A} \sum_{\vec l_1} \frac{ (4C_L C_{l_1})^2}{C_{l_1}^2} =\frac{8 A}{\Omega L^2} \sum_{\vec l} 1 \nonumber \\ & \simeq& \frac{8 A}{L^2} \frac{1}{2\pi} \int_L^{l_{\mathrm{max}}} l \, dl \simeq \frac{2 A}{ \pi L^2} l_{\mathrm{max}}^2, \label{eqn:Dl1estimate} \end{eqnarray} where we have used $\sum_{\vec l} = \Omega\int d^2l/(2\pi)^2$ in the last line. The full estimator then sums over all $\vec L$ as in Eq.~(\ref{eqn:biestimatorrewrite}). The full inverse-variance is then \begin{eqnarray} \sigma_b^{-2} &=& \sum_{\vec L} (\sigma_b^{-2})_{\vec L} = \Omega \int\frac{d^2L}{(2\pi)^2} (\sigma_b^{-2})_{\vec L} \nonumber \\ &\simeq& \frac{A \Omega}{\pi^2} l_{\mathrm{max}}^2 \ln \frac{L_{\mathrm{max}}}{L_{\mathrm{min}}} \nonumber\\ &\simeq& \frac{4 A f_{\mathrm{sky}} l_{\mathrm{max}}^2}{\pi} \ln \frac{L_{\mathrm{max}}}{L_{\mathrm{min}}}, \label{eqn:fullbivariance} \end{eqnarray} in agreement with Ref.~\cite{Babich:2004yc}. To summarize: (1) the signal-to-noise is greatly dominated by triangles with one side much shorter than the other two. (2) The signal-to-noise is dominated primarily by those with the smallest short side. (3) The contribution to the full signal-to-noise is equal per logarithmic interval of $L$, the magnitude of the smallest mode in the triangle. (4) Even if there is a huge number of triangles that enter the estimator, the error in the estimator is still dominated by the cosmic variance associated with the values of $T_{\vec L}$ for the $\vec L$ modes of the smallest $L$. Since the variance is dominated by squeezed triangles, we can approximate the estimator, Eq.~(\ref{eqn:biestimatorrewrite}), as \begin{equation} {{\widehat \hnl}}^b = \frac{2\sigma_b^2}{A\Omega^2 } \sum_{\vec L} T_{\vec L} X_{\vec L}, \label{eqn:approxbiestimator} \end{equation} where \begin{equation} X_{\vec L} \equiv \sum_{\vec l} T_{\vec l} T_{-\vec L - \vec l} l^2. \label{eqn:Xdefn} \end{equation} \subsection{The trispectrum} Now consider the trispectrum. Each distinct quadrilateral $\vec l_1 + \vec l_2+\vec l_3 +\vec l_4=0$ gives an estimator for the trispectrum with some variance. Adding the individual estimators with inverse-variance weighting gives the minimum-variance estimator,\footnote{Strictly speaking, one must subtract the disconnected (Gaussian) part of the four-point function. We omit this term to keep our expression compact, but it is included in the analytic and numerical calculations of the variances and covariances discussed below.} \begin{equation} {\widehat{(\hnl^2)^t}} = \sigma_t^{2} \sum \frac{ T_{\vec l_1} T_{\vec l_2} T_{\vec l_3} T_{\vec l_4} {\cal T}(\vec l_1,\vec l_2,\vec l_3,\vec l_4)/{f_{\rm{nl}}}^2}{\Omega^3 C_{l_1}C_{l_2}C_{l_3}C_{l_4}}, \label{eqn:triestimator} \end{equation} and the inverse variance, \begin{equation} \sigma_t^{-2} = \sum \frac{ \left[{\cal T}(\vec l_1,\vec l_2,\vec l_3,\vec l_4)/{f_{\rm{nl}}}^2 \right]^2}{\Omega^2 C_{l_1} C_{l_2} C_{l_3} C_{l_4}}.
\label{eqn:trivariance} \end{equation} The sums here are over all distinct quadrilaterals $\vec l_1 +\vec l_2 +\vec l_3 + \vec l_4=0$, and we again neglect quadrilaterals where two or more sides are the same. Each quadrilateral will have a smallest diagonal, which we call $\vec L$. The quadrilateral is then described by two triangles that each share their smallest side $\vec L$; the two sides of the first triangle will be $\vec l_1$ and $\vec l_2$ and the two sides of the second triangle will be $\vec l_3$ and $\vec l_4$. We can then re-write the sums in Eqs.~(\ref{eqn:triestimator}) and (\ref{eqn:trivariance}) as \begin{equation} \sum_{\vec L} \,\, \sum_{\vec l_1+\vec l_2=\vec L} \, \, \sum_{\vec l_3+\vec l_4= -\vec L}. \label{eqn:sums} \end{equation} The sum here is only over combinations of $\{\vec l_1,\vec l_2,\vec l_3,\vec l_4\}$ where the lengths of the two other diagonals, $|\vec l_1+\vec l_4|=|\vec l_2+\vec l_3|$ and $|\vec l_2+\vec l_4|=|\vec l_1+\vec l_3|$, are both $>L$, so that $L$ is the shortest diagonal [cf. Eq.~(\ref{eqn:localtri})]. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{quadrilateral.eps} \caption{An example of an elongated quadrilateral with a shortest diagonal $\vec L$. Note that it is equivalent to two elongated triangles that share the same shortest side $\vec L$.} \label{fig:quadrilateral} \end{figure} Let's now consider the local-model trispectrum given in Eqs.~(\ref{eqn:localtri}) and (\ref{eqn:Pdefn}). The three terms in Eq.~(\ref{eqn:localtri}) sum over the three diagonals of the quadrilateral. Eq.~(\ref{eqn:Pdefn}) then shows that each of these terms is the product of the power spectrum $C_L$ evaluated for the diagonal (e.g., $\vec L = \vec l_1+\vec l_2 = -\vec l_3 -\vec l_4$) times a sum of products of power spectra evaluated for each of the quadrilateral sides. The trispectrum is thus maximized for highly elongated quadrilaterals, those with $l_i \gg L$, with one short diagonal, as shown in Fig.~\ref{fig:quadrilateral}. The trispectrum for these elongated quadrilaterals may be approximated as ${\cal T}(\vec l_1, \vec l_2,\vec l_3,\vec l_4) \simeq 16 {f_{\rm{nl}}}^2 C_L C_{l_1} C_{l_3}$. Now consider the contribution $(\sigma_t^{-2})_{\vec L}$ to the inverse variance from all quadrilaterals that share the same shortest diagonal $\vec L$. Using Eq.~(\ref{eqn:trivariance}) and approximating the trispectrum by the squeezed limit, this is \begin{eqnarray} (\sigma_t^{-2})_{\vec L} &\simeq& \frac{1}{8} \sum_{\vec l_1} \sum_{\vec l_3} \frac{ \left( 16 C_L C_{l_1} C_{l_3} \right)^2}{ \Omega^2 (C_{l_1} C_{l_3})^2 } \nonumber \\ & = & \frac{32\,A^2}{\Omega^2 L^4} \left( \sum_{\vec l} 1 \right)^2 = \frac{2}{\pi^2} \frac{A^2}{L^4} l_{\mathrm{max}}^4. \label{eqn:Lconttotri} \end{eqnarray} The factor $1/8$ in the first line accounts for the $\vec l_1 \leftrightarrow \vec l_2$ and $\vec l_3 \leftrightarrow \vec l_4$ symmetries and the symmetry under interchange of the $(\vec l_1,\vec l_2)$ and $(\vec l_3, \vec l_4)$ triangles. Again, the full variance is obtained by summing over $\vec L$ modes. Thus, \begin{equation} \sigma_t^{-2} \simeq \frac{2 f_{\mathrm{sky}}}{\pi^2} \frac{A^2}{L_{\mathrm{min}}^2} l_{\mathrm{max}}^4. \end{equation} Note that we obtain the $l_{\mathrm{max}}^{-4}$ scaling of the variance noted in Ref.~\cite{Kogo:2006kh}. Recall that $\sigma_t^2$ is the variance of the estimator for ${f_{\rm{nl}}}^2$ (rather than ${f_{\rm{nl}}}$).
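For orientation, plugging representative numbers into Eq.~(\ref{eqn:fullbivariance}) and the expression above gives the following rough detection thresholds (Python; the values of $l_{\mathrm{max}}$, $L_{\mathrm{min}}$, and $L_{\mathrm{max}}$ are illustrative choices, and recall that our ${f_{\rm{nl}}}$ convention is three times the usual one).

\begin{verbatim}
import numpy as np

A, f_sky = 6e-10, 1.0                 # illustrative values
l_max, L_min, L_max = 1000, 2, 100

inv_var_b = 4 * A * f_sky * l_max**2 / np.pi * np.log(L_max / L_min)
inv_var_t = (2 * f_sky / np.pi**2) * A**2 / L_min**2 * l_max**4

sigma_b = inv_var_b**-0.5             # smallest detectable fnl
sigma_t = inv_var_t**-0.5             # smallest detectable fnl^2
print(f"bispectrum:  fnl ~ {sigma_b:.0f}")
print(f"trispectrum: fnl ~ {np.sqrt(sigma_t):.0f}")
\end{verbatim}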
Thus, the ratio of the smallest ${f_{\rm{nl}}}$ detectable via the trispectrum to the smallest detectable via the bispectrum is $\sqrt{\sigma_t/\sigma_b^2} \simeq 1.7\, f_{\mathrm{sky}}^{1/4} \left[ L_{\mathrm{min}} \ln (L_{\mathrm{max}}/L_{\mathrm{min}}) \right]^{1/2}$. For reasonable values of $L_{\mathrm{min}}$ and $L_{\mathrm{max}}$, the smallest ${f_{\rm{nl}}}$ detectable with the bispectrum is smaller, by a factor of order a few, than that detectable with the trispectrum \cite{Hu:2001fa,Okamoto:2002ik,Kogo:2006kh}. We can now derive an approximation for ${\widehat{(\hnl^2)^t}}$ noting that the variance, and thus the signal-to-noise, is dominated by elongated quadrilaterals. From Eq.~(\ref{eqn:triestimator}), and using the squeezed limit for the trispectrum, we find, \begin{equation} {\widehat{(\hnl^2)^t}} = \frac{2}{3} \sigma_t^{2} \sum_{\vec L} \frac{1}{L^2} X_{\vec L}^2, \label{eqn:approxtriestimator} \end{equation} where $X_{\vec L}$ is the quantity given in Eq.~(\ref{eqn:Xdefn}). Comparing with the estimator, Eq.~(\ref{eqn:approxbiestimator}), we see that {\it this estimator is constructed from precisely the same sums of triangles as the bispectrum estimator}. Strictly speaking, the bispectrum estimator for ${f_{\rm{nl}}}$ involves a sum over a huge number of triangles; the number of such triangles scales as $N_{\mathrm{pix}}^2/6$ with the number of pixels in the map. Likewise, the trispectrum estimator for ${f_{\rm{nl}}}^2$ involves a sum over all quadrilaterals, and the number of these scales as $N_{\mathrm{pix}}^3/24$. Thus, one naively expects the correlation between the estimators to be extremely weak, given the huge number of bispectrum and trispectrum configurations. Eqs.~(\ref{eqn:approxbiestimator}) and (\ref{eqn:approxtriestimator}) show, however, that the quadrilateral configurations that dominate the trispectrum estimator for ${f_{\rm{nl}}}^2$ are very closely related to the triangle configurations that dominate the bispectrum estimator for ${f_{\rm{nl}}}$. \section{Correlation between bispectrum and trispectrum estimators for ${f_{\rm{nl}}}$} \label{sec:crosscorrelation} Since the bispectrum and trispectrum estimators for ${f_{\rm{nl}}}$ are both constructed from the same CMB map, it is expected that there should be some correlation between the two estimators. Eqs.~(\ref{eqn:approxbiestimator}) and (\ref{eqn:approxtriestimator}) help clarify the nature of the correlation. Clearly, if we use for the bispectrum estimator only triangles that share a single shortest side $\vec L$ and for the trispectrum estimator only quadrilaterals with the same $\vec L$ as the shortest diagonal, then the two estimators provide the same quantity, modulo the difference between the magnitude $|T_{\vec L}|^2$ (from the bispectrum estimator) and its expectation value $A/L^2$ (from the trispectrum estimator). However, we have not only triangles/quadrilaterals from a single $\vec L$ shortest side/diagonal, but those constructed from many $\vec L$'s. The correlation between the bispectrum and trispectrum estimators should thus decrease as the number of $\vec L$ modes increases in the same way that the means $\VEV{x}$ and $\VEV{x^2}$ measured with a large number $N$ of data points $x_i$ will become uncorrelated as $N$ becomes large. Of course since ${\widehat{\hnl^b}}$ is linear in $T_{\vec L}$, the covariance between ${\widehat{\hnl^b}}$ and ${\widehat{(\hnl^2)^t}}$ will be zero. However, the correlation between $({\widehat{\hnl^b}})^2$ and ${\widehat{(\hnl^2)^t}}$ will be nonzero.
We thus now estimate the magnitude of the correlation coefficient, which we define as \begin{equation} r\equiv \frac{\VEV{\Delta\left(({\widehat{\hnl^b}})^2\right) \Delta\left({\widehat{(\hnl^2)^t}}\right)}} { {\VEV{\left[\Delta\left(({\widehat{\hnl^b}})^2\right) \right]^2}^{1/2} \VEV{\left[\Delta\left({\widehat{(\hnl^2)^t}}\right) \right]^2}^{1/2}} }, \label{eqn:rdefn} \end{equation} where $\Delta(Q) \equiv Q -\VEV{Q}$. To simplify the equations, we can drop the prefactors in Eqs.~(\ref{eqn:approxbiestimator}) and (\ref{eqn:approxtriestimator}) and deal with quantities, \begin{equation} F \equiv \sum_{\vec L} T_{\vec L} X_{\vec L}, \qquad G \equiv \sum_{\vec L} \frac{A}{L^2} X_{\vec L}^2. \label{eqn:FGdefn} \end{equation} The desired correlation coefficient is then \begin{equation} r = \frac{ \VEV{\Delta(F^2) \Delta G}}{ \VEV{ \left[\Delta(F^2) \right]^2}^{1/2} \VEV{(\Delta G)^2}^{1/2}}. \label{eqn:rFG} \end{equation} We begin by noting that $X_{\vec L}$ is a random variable with zero mean. In the large-$l_{\mathrm{max}}$ limit, it will be well approximated by a Gaussian random variable, in which case $\VEV{X_{\vec L}^4} =3 \VEV{X_{\vec L}^2}^2$. Some other useful relations include, \begin{equation} \VEV{F^2} = \sum_{\vec L_1,\vec L_2} \VEV{T_{\vec L_1} T_{\vec L_2} X_{\vec L_1} X_{\vec L_2}} = \Omega \sum_{\vec L} \frac{A}{L^2} \VEV{X_{\vec L}^2}, \end{equation} \begin{equation} \VEV{G} = \sum_{\vec L} \frac{A}{L^2} \VEV{X_{\vec L}^2} =\VEV{F^2}/\Omega, \end{equation} \begin{equation} \VEV{G^2} = \sum_{\vec L_1,\vec L_2} \frac{A^2}{L_1^2 L_2^2} \VEV{ X_{\vec L_1}^2 X_{\vec L_2}^2} = \VEV{G}^2 + 2 \sum_{\vec L} \frac{A^2}{L^4} \VEV{ X_{\vec L}^2}^2, \end{equation} \begin{eqnarray} \VEV{F^2G} &=& \sum_{\vec L_1}\sum_{\vec L_2}\sum_{\vec L_3} \frac{A}{L_1^2} \VEV{ T_{\vec L_2} T_{\vec L_3} } \VEV{X_{\vec L_1}^2 X_{\vec L_2} X_{\vec L_3}} \nonumber \\ &=&\Omega \sum_{\vec L_1,\vec L_2} \frac{A}{L_1^2} \frac{A}{L_2^2} \VEV{ X_{\vec L_1}^2 X_{\vec L_2}^2} = \Omega \VEV{G^2}. \end{eqnarray} Also, since $F$ is a sum over (approximately) Gaussian random variables, it is also well approximated by a Gaussian random variable, and so $\VEV{F^4} \simeq 3 \VEV{F^2}^2$. From these relations, it follows that \begin{equation} \VEV{ \Delta(F^2) \Delta G} = \VEV{F^2G}- \VEV{F^2}\VEV{G} = \Omega\left[\VEV{G^2}-\VEV{G}^2 \right], \end{equation} and thus that \begin{eqnarray} r &=&\frac{\Omega \VEV{(\Delta G)^2}^{1/2}}{\sqrt{2} \VEV{F^2}} = \frac { \left( \sum_{\vec L} L^{-4} \right)^{1/2}}{ \sum_{\vec L} L^{-2}} \nonumber \\ &=& \left[ 2\sqrt{\pi f_{\mathrm{sky}}} L_{\mathrm{min}} \ln\left(L_{\mathrm{max}}/L_{\mathrm{min}} \right) \right]^{-1}. \label{eqn:cccoeff} \end{eqnarray} Thus, if $L_{\mathrm{max}}$ is small, then the correlation will be large. However, the correlation coefficient decreases as $[\ln(L_{\mathrm{max}})]^{-1}$, and it will become negligible in the limit that $L_{\mathrm{max}}$ is large. Strictly speaking, the $X_{\vec L}$ are not entirely statistically independent, as we have assumed here, since many are constructed from the same measurements. They are also not perfectly Gaussian, as we have assumed. However, as we discuss in Appendix \ref{sec:appendixb}, we have checked with a full numerical calculation of the correlation coefficient that the basic conclusions---and particularly the scaling of the correlation coefficient $r$ with $L_{\mathrm{max}}$---are sound.
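The scaling can also be illustrated with a simplified Monte Carlo (Python) in which, as in the analytic estimate, the $X_{\vec L}$ are treated as independent unit-variance Gaussians and the $T_{\vec L}$ as independent Gaussians with variance $\Omega A/L^2$; the mode geometry, trial counts, and normalizations are illustrative only and do not reproduce the full numerics of Appendix B.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
Omega = A = 1.0

def corr_F2_G(L_max, L_min=1, trials=4000):
    # Half-plane of 2-D integer modes (one member per +/-L pair).
    modes = [(i, j) for j in range(L_max + 1)
             for i in range(-L_max, L_max + 1) if (j > 0 or i > 0)]
    L = np.array([np.hypot(i, j) for i, j in modes])
    L = L[(L >= L_min) & (L <= L_max)]
    F2, G = [], []
    for _ in range(trials):
        T = rng.normal(0.0, np.sqrt(Omega * A / L**2))
        X = rng.standard_normal(L.size)   # proxy for X_L
        F2.append(np.dot(T, X) ** 2)
        G.append(np.sum(A / L**2 * X**2))
    r_mc = np.corrcoef(F2, G)[0, 1]
    r_pred = np.sqrt(np.sum(L**-4.0)) / np.sum(L**-2.0)
    return r_mc, r_pred

for L_max in (4, 16, 64):
    print(L_max, corr_F2_G(L_max))        # both decrease as L_max grows
\end{verbatim}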
\section{Conclusions} \label{sec:conclusion} A large body of recent work has focused on tests of the local model for non-Gaussianity that can be performed with measurement of the CMB trispectrum and bispectrum. Here we have clarified how the bispectrum and trispectrum may provide statistically independent information on the local-model non-Gaussianity parameter ${f_{\rm{nl}}}$, even if the bispectrum estimator for ${f_{\rm{nl}}}$ saturates the Cramer-Rao bound. The basic point is that the Cramer-Rao inequality puts a lower limit on the variance with which a given parameter can be measured. If the likelihood function is precisely Gaussian, then the likelihood is described entirely by the variance. However, if the likelihood function is not precisely Gaussian, then there is more information in the likelihood beyond the variance (see, e.g., Section VI in Ref.~\cite{Jungman:1995bz}). In the current problem, this is manifest in that a statistically-independent measurement of ${f_{\rm{nl}}}^2$ can be obtained from the trispectrum without contributing to the variance of ${f_{\rm{nl}}}$. We then built on an observation of Ref.~\cite{Creminelli:2006gc} to illustrate the nature of the correlation between the bispectrum estimator for ${f_{\rm{nl}}}$ and the trispectrum estimator of ${f_{\rm{nl}}}^2$. This analysis demonstrates that the two estimators do indeed become statistically independent in the large-$l_{\mathrm{max}}$ limit. Throughout, we have assumed the null hypothesis ${f_{\rm{nl}}}=0$ to estimate the variances with which ${f_{\rm{nl}}}$ can be measured from the bispectrum and with which ${f_{\rm{nl}}}^2$ can be measured from the trispectrum. This is suitable if one is simply searching the data for departures from the null hypothesis. However, as emphasized by Ref.~\cite{Creminelli:2006gc}, the minimum-variance estimators constructed under the null hypothesis are no longer optimal if there is a strong signal. If so, then forecasts of signal-to-noise made with the null hypothesis are no longer valid in the limit of large signal-to-noise, and this calls into question claims \cite{Kogo:2006kh} that the trispectrum will provide a better probe of the local model in the large-S/N limit. In this limit, a new bispectrum estimator can be constructed to saturate the Cramer-Rao bound \cite{Creminelli:2006gc}, and an analogous optimal trispectrum estimator can in principle be found. Still, the observation that the bispectrum and trispectrum estimators in the local model are constructed from the same sums of triangles suggests that the precisions with which ${f_{\rm{nl}}}$ can be measured, in the high-S/N limit, from the bispectrum and trispectrum will be roughly comparable. Although we assumed the null hypothesis to argue that the bispectrum and trispectrum estimators for ${f_{\rm{nl}}}$ are independent, the same arguments should also apply in the high-S/N limit. For example, if the bispectrum estimator finds ${f_{\rm{nl}}}$ to be different from zero, with best-fit value $\bar {f_{\rm{nl}}}$, then the likelihood can be re-parametrized in terms of a quantity $\epsilon={f_{\rm{nl}}}-\bar{f_{\rm{nl}}}$ that quantifies the departure from the new null hypothesis ${f_{\rm{nl}}}=\bar{f_{\rm{nl}}}$. Measurement of $\epsilon$ with the trispectrum can then be used to provide a statistically independent consistency check of the model.
Or, in simpler terms, the skewness and kurtosis are still two statistically independent quantities that can be obtained from a measured distribution, even if the skewness (or kurtosis) of that distribution is nonzero. Throughout, we have made approximations and simplifications to make the basic conceptual points clear, and we have restricted our attention simply to the local model, which we have here defined to be $\Phi=\phi+{f_{\rm{nl}}}(\phi^2-\VEV{\phi^2})$. However, inflationary models predict a wider range of trispectra \cite{others}. Likewise, analysis of real data will introduce a number of ingredients that we have excised from our simplified analysis. Still, we hope that the points we have made here may assist in the interpretation and understanding of experimental results and perhaps elucidate statistical tests of other, more general, non-Gaussian models. \begin{acknowledgments} MK thanks the support of the Miller Institute for Basic Research in Science and the hospitality of the Department of Physics at the University of California, Berkeley where part of this work was completed. MK was supported at Caltech by DoE DE-FG03-92-ER40701, NASA NNX10AD04G, and the Gordon and Betty Moore Foundation. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In recent years, silicon spin qubits hosted in gate-defined quantum dots (QDs) have achieved major milestones, making this platform a compelling option for large-scale quantum computing~\cite{gonzalez2021scaling}. These include the demonstration of high-fidelity one- and two-qubit gates on the same device~\cite{xue2022quantum, noiri2022fast, mills2022two}, high-fidelity readout using radiofrequency (rf) single-electron transistors (SETs)~\cite{connors2020rapid}, the demonstration of simple instances of quantum error correction~\cite{takeda2022quantum} and the scale-up to 6-qubit devices in a linear arrangement~\cite{philips2022universal}. In addition, chips combining quantum and classical electronics have been shown to operate at deep cryogenic temperatures, demonstrating a potential route for integrated addressing, control and measurement of qubits~\cite{guevel202019.2A, Ruffino2021a}. \par Silicon spin qubits typically rely on nearest neighbour exchange to implement two-qubit interactions~\cite{veldhorst2015two, zajac2017resonantly, huang2019fidelity}. Such a short-range qubit coupling applied across the qubit processor leads to high gate densities that hinder integration with local control electronics and gate fan-out~\cite{veldhorst2017silicon, boter2022spiderweb}, and introduce nonlinear responses due to cross-talk~\cite{undseth2022nonlinear}. Furthermore, introducing readout sensors within the qubit plane impacts the level of connectivity that can be achieved. Scaling up beyond one-dimensional qubit arrays and integrating cryogenic electronics requires structures with enhanced functionality that can increase the separation between qubits, or between qubits and sensors. One approach to scaling is to use dispersive charge sensors, such as the rf single-electron box (SEB)~\cite{lafarge1991direct, house2016high, urdampilleta2019gate, cirianotejel2021spin}. The SEB offers similar levels of sensitivity to conventional charge sensors~\cite{oakes2022fast, niegemann2022parity} but only requires one charge reservoir, as opposed to two for the SET, facilitating the design of qubit arrays with higher connectivity. Another approach is to space out qubits by using elongated quantum dots (EQD) to mediate exchange interactions between them~\cite{martins2017negative, malinowski2018spin,wang2022jellybean}. Such an approach, requiring tunnel coupling between each of the remote QDs and the EQD, has been demonstrated in GaAs heterostructures to mediate fast, coherent exchange interaction between single spins separated by half a micron~\cite{malinowski2019fast}. A further advantage of the EQD is that it could itself act as a local charge reservoir to facilitate initialization~\cite{cai2019silicon}. \par In this Article, we combine aspects of these two concepts to demonstrate an SEB with an elongated charge island that enables charge sensing of multiple remote QDs, which, due to the increased separation, show minimal cross-talk. The structure is fabricated using a three-layer $n^{+}$-doped polycrystalline silicon gate metal-oxide-semiconductor (MOS) process that enables the formation of the elongated SEB as well as few-electron QDs. The extended distribution and quantisation of the charge within the EQD, consistent with semi-classical modelling, allows it to sense the charge on QDs separated by over 0.5~$\mu$m. Finally, we show tunnel coupling between the remote QDs and the EQD, which fulfills one of the requirements for coherent mediated exchange.
\section{Experimental Methods} \label{sec:experimental_methods} \begin{figure} \includegraphics[]{Figures/Figure 1/Figure1_figure.pdf} \caption{ \label{fig:figure_1} \textbf{Formation of an elongated single electron box.} \textbf{(a)} Device schematic (gray dotted rectangle) with simplified RF circuit diagram (signal filtering omitted). A lumped-element resonator (orange dotted rectangle) is galvanically attached to the ohmic contact below the accumulation gate R and monitored via changes in the demodulated baseband-frequency reflectometry signal, $V_{\mathrm{IF}}$. \textbf{(b)} Changes in $V_{\mathrm{IF}}$ reflect the accumulation of a 2DEG with increasing reservoir gate voltage $V_{\mathrm{R}}$. All other gates are held at zero bias. \textbf{(c)} The elongated QD is operated as a single electron box. Here, gates at zero bias are drawn in grayscale, while biased gates are drawn in colour. Orange blobs are cartoons indicating locations of QDs of interest. An elongated, multi-electron quantum dot forms under gate T and is tunnel coupled to a charge reservoir accumulated under gate R. Driving the resonator at its natural frequency drives cyclic electron tunnelling between the reservoir R and the elongated quantum dot under gate T. The (T, B-RT) stability diagram obtained at $V_{\mathrm{R}} = 1.5$ V shows dot-to-reservoir transitions that become increasingly regular with increasing $V_{\mathrm{T}}$. The signal strength depends on $V_{\mathrm{B-RT}}$, since the barrier voltage modulates the EQD-reservoir tunnel rate. } \end{figure} Our device consists of two double quantum dots (DQDs) separated by an EQD, nominally 340~nm long and 50~nm wide. The measured device is fabricated with three 30~nm thick in-situ $n^{+}$ phosphorus-doped polycrystalline silicon gate layers formed with a wafer-level electron-beam patterning process. The Si substrate is separated from the first gate layer with an 8~nm thick thermally grown SiO$_{2}$, patterned on a high-resistivity ($>3$~k$\Omega$) p-type Si wafer to minimise the density of oxide defects. Gate layers are electrically isolated from one another with a 5~nm thick blocking high-temperature deposited SiO$_{2}$~\cite{stuyck2021uniform}. A schematic of the measured device is shown in Fig.~\ref{fig:figure_1}~\textbf{(a)}. We employ one layer of gates (closest to the silicon substrate) to provide confinement for the three possible current paths connecting ohmic contacts, around the active region of the device. A second layer of gates is used to form barriers between the EQD, the QDs and the reservoirs. As seen in other MOS QD arrays~\cite{veldhorst2014an}, QDs can also be formed under these `barrier' gates in the second layer, depending on applied gate voltages. A third gate layer is used as plungers to control the occupation of the EQD, the QDs, and the extension of two-dimensional electron gases (2DEGs) from under accumulation gates, denoted as reservoir (R), source (S), and drain (D), overlapping with corresponding ohmics, towards the active region of the device. \par The device is cooled down in an Oxford Instruments Triton dilution refrigerator equipped with QDevil DACs, thermalizing filters and high-bandwidth sample holders~\cite{qdevil}. At base temperature (25~mK) we confirm the functionality of the device with gate electrode leakage tests, followed by pinch-off and saturation voltage measurements (see Appendix~\ref{sec:cryogenic_device_characterization} for the preliminary device characterization protocol).
\par We detect charge transitions between the EQD and the reservoir using rf reflectometry~\cite{vigneau2022probing}, via a lumped-element resonator attached to the ohmic contact of the accumulation gate R, as illustrated in the inset of Fig.~\ref{fig:figure_1}~\textbf{(c)}. Further details of the rf reflectometry setup and data acquisition are presented in Fig.~\ref{fig:supplementary_figure_reflectometry}~\textbf{(a)}. The rf voltage $V_{\mathrm{rf}}$ drives single-electron AC tunneling currents between the reservoir and the EQD when not in Coulomb blockade. Cyclic tunneling manifests as a change in the complex impedance of the device, modifying the resonant frequency and matching impedance of the lumped-element resonator~\cite{gonzalez2015probing}. Fig.~\ref{fig:supplementary_figure_reflectometry}~\textbf{(b)} shows the vector network analyzer response of the resonator with gate R biased off/on. We apply a signal with a frequency close to that of the resonator; the reflected signal, which carries information about the complex impedance of the SEB, is amplified and mixed down to produce the DC signal $V_{\mathrm{IF}}$. By monitoring shifts in the observed charge transitions, we operate the EQD as an SEB sensor which can simultaneously sense QDs formed near either of its ends. \section{Results} \subsection{Single-electron box tune-up} \begin{figure*} \includegraphics[]{Figures/Figure 2/Figure2_figure.pdf} \caption{ \label{fig:figure_2} \textbf{Charge sensing of QDs under P2 and P3.} Operating point (top schematic) and discontinuities in the SEB peak locations (bottom dataset) reveal electron loading voltages for \textbf{(a)} P2 and \textbf{(b)} P3 (white numbers). \textbf{(c)} Upper panel shows the addition voltages extracted from \textbf{(a)}-\textbf{(b)}. Error bars, obtained from the $V_{\mathrm{P2}}$ and $V_{\mathrm{P3}}$ resolution, are smaller than the marker size. \textbf{(c)} Lower panel shows the sensor peak shift, $\delta V_{T}$, with respect to the peak linewidth, $\gamma_{T}$, at P3 QD charging events with B-T3 on (isolated from the drain, as in panel \textbf{(b)}) and B-T3 off (connected to the reservoir formed with gate D). } \end{figure*} In order to operate the EQD as an SEB, we extend a 2DEG close to the active region of the device from a nearby ohmic contact by applying a positive voltage to gate R. We bias the EQD plunger gate, T, above the pinch-off voltage and tune the tunnel rate between the reservoir and the EQD by adjusting the voltage on the barrier gate B-RT. To tune the SEB, we first record $V_{\mathrm{IF}}$ as a function of $V_{\mathrm{R}}$ (see Fig.~\ref{fig:figure_1}~\textbf{(b)}). As $V_{\mathrm{R}}$ is increased, $V_{\mathrm{IF}}$ changes as the 2DEG is formed, modifying the circuit impedance. For $V_{\mathrm{R}} \gtrsim 1$~V, $V_{\mathrm{IF}}$ is nearly constant, indicating that the 2DEG is fully accumulated. In this region, changes in the resonator response due to voltage sweeps on the other gates can be ascribed to AC charge transport between the QDs and the 2DEG in the reservoir. \par Having fixed $V_{\mathrm{R}} = 1.5$~V, we then map out the charge stability diagram between gates T and B-RT (Fig.~\ref{fig:figure_1}~\textbf{(c)}), which shows dot-to-reservoir transitions (DRTs) indicating the presence of discretized charge states. For $V_{\mathrm{T}} \lesssim 0.55$~V, the data suggest a complex system comprising at least two coupled QDs, while for $V_{\mathrm{T}} \gtrsim 0.55$~V, the stability diagram increasingly resembles that of a single QD.
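\par The homodyne mixing-down that produces $V_{\mathrm{IF}}$ (Sec.~\ref{sec:experimental_methods}) can be sketched numerically as follows; the sampling rate, carrier frequency, amplitude, and phase are illustrative placeholders rather than the experimental values:
\begin{verbatim}
import numpy as np

fs, f_rf = 1e9, 100e6            # sample rate and carrier (illustrative)
t = np.arange(0, 20e-6, 1 / fs)  # 20 us record, integer number of periods

# Hypothetical reflected signal: amplitude/phase carry the SEB impedance
a, phi = 0.8, 0.3
v_refl = a * np.cos(2 * np.pi * f_rf * t + phi)

# Mix down with the local oscillator and low-pass by averaging
i_comp = 2 * np.mean(v_refl * np.cos(2 * np.pi * f_rf * t))
q_comp = 2 * np.mean(v_refl * np.sin(2 * np.pi * f_rf * t))
v_if = np.hypot(i_comp, q_comp)  # demodulated baseband amplitude
print(f"V_IF ~ {v_if:.3f} (expected {a})")
\end{verbatim}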
Selecting $V_{\mathrm{B-RT}} = 0.29$--$0.31$~V maximizes the signal $V_{\mathrm{IF}}$ due to optimal tunnel rates between the reservoir and the EQD. In the following, we use $V_{\mathrm{T}} = 0.69$--$0.72$~V, which we show to be sufficient for the EQD to extend over the length of the gate T. \subsection{Charge sensing of quantum dots} We next use the EQD as an SEB to individually sense electrons in QDs under P2 and P3, and also as a local electron reservoir for these dots (see Fig.~\ref{fig:figure_2}~\textbf{(a)}-\textbf{(b)}). To this end, starting from the SEB operating point of $V_{\mathrm{R}} = 1.5$~V, $V_{\mathrm{B-RT}} = 0.29$~V, and $V_{\mathrm{T}} = 0.70$--$0.72$~V, we further set $V_{\mathrm{B-2T}} = 0.250$~V, and $V_{\mathrm{B-T3}} = 0.225$~V. We illustrate this operating point with device schematics in Fig.~\ref{fig:figure_2}~\textbf{(a)} and \textbf{(b)}. Positive barrier gate voltages increase the tunnel rates from P2 to T and from T to P3. A simulation of electron densities qualitatively illustrates how the barrier gates reshape and pull the QDs towards them. This effect is further discussed in Sec.~\ref{sec:simulated_quantum_mechanical_electron_densities}. Barrier gate voltages are chosen to reside below their observed first electron loading voltages, based on the (B-2T,T) and (T,B-T3) stability diagrams (see Figs.~\ref{fig:supplementary_figure_barrier_sensing}~\textbf{(a)}-\textbf{(b)}). \par We detect the loading of an electron to either the P2 or P3 QD as a discontinuity in the SEB DRT, caused by the mutual capacitance between the EQD and the QDs. We assign the first detected discontinuity to the $0\to 1$ charge transition. We find the first electrons to load at $V_{\mathrm{P2}}(0 \to 1) = 0.400$~V and $V_{\mathrm{P3}}(0 \to 1) = 0.609$~V, respectively. Subsequent electrons load in steps of tens of millivolts. At an occupancy of one electron, we find typical sensor peak voltage signal-to-noise ratios (SNR) of $\mathrm{SNR}_{\mathrm{P2}} = 10.7$ and $\mathrm{SNR}_{\mathrm{P3}} =14.6$, using an integration time of $1$~ms (see Appendix~\ref{sec:radiofrequency_reflectometry} for details). \par In order to understand whether the sensed QDs P2 and P3 are in the few-electron regime~\cite{lim2011spin}, we plot the extracted addition voltages in Fig.~\ref{fig:figure_2}~\textbf{(c)}. These addition voltages carry information about the electron-number-dependent confinement energies, as $V_{g}(n_{d} \to n_{d}+1) - V_{g}(n_{d}-1 \to n_{d}) = \alpha_{dg}^{-1} \big[ E_{C\,d}(n_{d}) + \Delta(n_{d}) \big]$, where $n_{d}$ is the electron number at QD $d$; $\alpha_{dg}$ is the lever arm from QD $d$ to gate $g$; and $E_{C\,d}(n_{d}) + \Delta (n_{d})$ is the sum of the corresponding on-site charging energy and the confinement energy. The addition voltages are irregular in general and, in particular, we observe an increase in the addition voltage for both P2 and P3 when loading from the presumed $4 \to 5$ electron state. This is consistent with filling the lowest two $\pm z$ valley-orbit states, such that the next electron occupies a higher-energy orbital state. \par Using an estimated T addition voltage of $|e|^{-1} \alpha_{T\,T}^{-1} \, E_{C\,T} = 4.4 \pm 0.2$~mV (see Fig.~\ref{fig:supplementary_figure_Tgate_charging_energies}), loading the first electron under P2 and P3 induces a charge of $\mathrm{d}q = 0.075\,e \pm 0.01\,e$ for P2 and $\mathrm{d}q = 0.032\,e \pm 0.01\,e$ for P3, respectively, onto the SEB.
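\par As an illustration of these relations, the following Python sketch converts loading voltages into addition voltages and energies, and a sensor peak shift into an induced charge; only the $0 \to 1$ loading voltage of P3 and the $4.4$~mV T addition voltage are taken from the text, all other inputs are hypothetical:
\begin{verbatim}
import numpy as np

# Loading voltages V_P3(n-1 -> n) in volts; only the first value
# (0 -> 1 at 0.609 V) is quoted in the text, the rest are made up
v_load = np.array([0.609, 0.655, 0.690, 0.730, 0.762, 0.815])

# Addition voltages V(n -> n+1) - V(n-1 -> n), cf. the relation above
v_add = np.diff(v_load)

# Converting to energies E_C + Delta requires the lever arm alpha_dg;
# alpha = 0.10 eV/V is a placeholder value for illustration
alpha = 0.10
for n, dv in enumerate(v_add, start=1):
    print(f"n={n}->{n+1}: V_add = {1e3*dv:.0f} mV, "
          f"E_add = {1e3*alpha*dv:.1f} meV")

# Induced charge on the SEB from a sensor peak shift dV_T, relative to
# the 4.4 mV T addition voltage; dV_T = 0.33 mV is hypothetical and
# chosen to reproduce the quoted dq ~ 0.075 e
dq = 0.33e-3 / 4.4e-3
print(f"dq ~ {dq:.3f} e")
\end{verbatim}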
We also show in Fig.~\ref{fig:figure_2}~\textbf{(c)} the shifts in $V_{\mathrm{T}}$ induced by P3 electron loading, $\delta V_{\mathrm{T}}$, relative to the fitted linewidth of the SEB DRT, $\gamma_{\mathrm{T}}$. This ratio $\delta V_{\mathrm{T}}/\gamma_{\mathrm{T}}$ is a proxy for charge sensitivity, and indicates whether the sensor is in the small or large signal regime~\cite{keith2019single}. When loading from the EQD, with $V_{\mathrm{B-T3}} = 0.225$~V, the shifts become larger than the linewidth of the sensor peak, i.e.\ $\delta V_{\mathrm{T}} \geq \gamma_{\mathrm{T}}$, by the fifth electron. We retain some sensitivity to the QDs even when the barrier gates to the EQD are off at zero bias. In this case, we resort to loading electrons under P3 from a reservoir formed via D. Here, we set $V_{\mathrm{B-T3}} = 0$~V, $V_{\mathrm{B-34}} = 0.275$~V, $V_{\mathrm{P4}}$ and $V_{\mathrm{B-4D}}$ to $0.9$~V, and $V_{\mathrm{D}}$ to $1.5$~V. We note that the first electron under P3 at this operating point is found at $V_{\mathrm{P3}} = 0.387$~V. We find that at this operating point the sensitivity is lower and increases more slowly. \subsection{Charge sensing of coupled quantum dots} \begin{figure*} \includegraphics[]{Figures/Figure 3/Figure3_figure} \caption{ \label{fig:figure_3} \textbf{Elongated single-electron box as a distributed sensor.} \textbf{(a)}-\textbf{(c)} SEB charge-sensed stability diagrams of \textbf{(a)} a DQD controlled with gates P3 and B-34, \textbf{(b)} a DQD controlled with gates P3 and P4, and \textbf{(c)} a TQD controlled with gates P2, T, and P3. Gate biasing and QDs are sketched with device schematics above the colour maps. \textbf{(a)} To define a DQD under P3 and B-34, we extend a 2DEG from the reservoir formed under gate D. We bias B-4D in saturation, and P4 near its pinch-off. \textbf{(b)} To define a DQD under P3 and P4, we instead bias B-34 and B-4D as barriers. \textbf{(c)} To define a TQD between P2, T, and P3, we bias B-2T, B-T3, and B-34 as barriers. We bias $V_{\mathrm{T}} = 0.7093$ V to obtain a signal near the first P2 and P3 QD electrons. The estimated P2, T, and P3 QD charge occupations are indicated as $(n_{\mathrm{P2}}, n_{\mathrm{T}}, n_{\mathrm{P3}})$. \textbf{(d)} Grayscale colormap shows the voltage cross-derivative of the ground-state energy of an electrostatic Hamiltonian, obtained using the experimentally estimated lever arms and charging energies. Orange and red dotted lines correspond to the fitted lines from panel \textbf{(c)}.} \end{figure*} Having established the basic operation of the EQD as an SEB charge sensor for nearby QDs, we next demonstrate its ability to sense different configurations of nearby coupled QDs. We then go on to assess the sensitivity of this distributed charge sensor with increasing distance. First, we form a DQD under P3 and B-34 by extending the reservoir 2DEG formed with gate D, setting $V_{\mathrm{B-4D}} = V_{\mathrm{P4}} = 0.9$~V, well above their threshold voltages, while operating P3 and B-34 close to their expected first electron voltages. We re-tune $V_{\mathrm{T}} = 0.7084$~V, retaining $V_{\mathrm{P3}}$ and $V_{\mathrm{B-34}}$ at the center of their selected voltage ranges. The resulting SEB-sensed (P3,B-34) stability diagram is shown in Fig.~\ref{fig:figure_3}~\textbf{(a)}. We observe a honeycomb pattern typical of a tunnel-coupled DQD, retaining sensitivity to the charge transitions of both QDs, even though the center-to-center distance from the furthest dot to the EQD is $305$~nm.
We measure local addition voltages of approximately $114 \pm 1$~mV and $43 \pm 1$~mV for P3 and B-34, respectively. \par Second, we form a DQD under P3 and P4 (see Fig.~\ref{fig:figure_3}~\textbf{(b)}). Continuing from the previous operating point, we adjust the barrier voltages $V_{\mathrm{B-4D}} = V_{\mathrm{B-34}} = 0.275$~V, while retaining $V_{\mathrm{B-T3}} = 0$~V, to create confinement, and retune $V_{\mathrm{T}} = 0.7068$~V. Here, the DQD honeycomb pattern has average addition voltages of approximately $77 \pm 5$ and $63 \pm 5$~mV for P3 and P4, respectively. The observation of latching~\cite{yang2014charge}, i.e.\ distortion of the P3 charge transitions, suggests that the P3-P4 or P4-D tunnel rates are of the order of the ramp frequency $f_{\mathrm{ramp}}$ (see Appendix~\ref{sec:radiofrequency_reflectometry} for details on data acquisition). The center-to-center distance from P4 to the EQD is nominally $355$~nm, showing that the charge-sensing range of this extended SEB exceeds those typically demonstrated by more conventional SEBs or SETs~\cite{philips2022universal}. \par Finally, we form a triple quantum dot between P2, T, and P3, by drawing in electrons under P3 from the reservoir D, and under P2 from the EQD. We control the tunnel rates to the electron reservoirs with $V_{\mathrm{B-2T}} = 0.25$~V, $V_{\mathrm{B-T3}} = 0$~V, and $V_{\mathrm{B-34}} = 0.275$~V. We bias the SEB to $V_{\mathrm{T}} = 0.7093$~V to maximise sensitivity when $V_{\mathrm{P2}}$ and $V_{\mathrm{P3}}$ are set close to their expected first electron voltages. Fig.~\ref{fig:figure_3}~\textbf{(c)} shows the resulting (P2,P3) charge stability diagram of the triple QD. We label the estimated charge configuration for the P2, T, and P3 system as $(n_{\mathrm{P2}}, n_{\mathrm{T}}, n_{\mathrm{P3}})$. The estimates are based on a stability diagram simulation shown in Fig.~\ref{fig:figure_3}~\textbf{(d)}, which utilizes experimentally estimated lever arms and charging energies, discussed further in Sec.~\ref{sec:simulated_quantum_mechanical_electron_densities} and Appendix~\ref{sec:lever_arm_estimation}. The operating point is close to a so-called hextuple point, characterized by the hourglass shape formed between the $(0,n_{\mathrm{T}}+1,0)$ and $(1,n_{\mathrm{T}},1)$ charge states~\cite{fedele2020Thesis}. \par To confirm our understanding of the locations of the QDs in the triple QD configuration above, we extract the various lever arm ratios from the slope of the SEB peak and the quasi-vertical and quasi-horizontal charge sensing shifts, obtained by line fits to the SEB peak positions (see Appendix~\ref{sec:lever_arm_estimation}). We observe close to zero P2-P3 cross-talk, as expected for remote QDs, with the estimate $\alpha_{\mathrm{P3},\mathrm{P2}}/\alpha_{\mathrm{P3},\mathrm{P3}} = (8 \pm 6) \times 10^{-3}$, obtained from the P3 charge transitions as a function of $V_{\mathrm{P3}}$. We get $\alpha_{\mathrm{P2},\mathrm{P3}}/\alpha_{\mathrm{P2},\mathrm{P2}} = 0 \pm [0, 3.33 \times 10^{-3}]$, limited by the lower data resolution along the $V_{\mathrm{P2}}$ axis. The average of the fitted EQD DRT slopes, marked with dashed dark red lines, is $\alpha_{\mathrm{T},\mathrm{P3}}/\alpha_{\mathrm{T},\mathrm{P2}} = 0.65 \pm 0.11$. A ratio equal to $1$ would indicate an EQD wavefunction which is symmetric with respect to the locations of gates P2 and P3. Intuitively, the positively biased barrier B-2T ($V_{\mathrm{B-T3}} = 0$ V) pulls the EQD electron wavefunction towards P2, which could explain the lever arm asymmetry.
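\par A minimal sketch of this slope-based extraction, assuming the constant-interaction picture; the transition positions below are synthetic stand-ins for the fitted SEB peak locations:
\begin{verbatim}
import numpy as np

def lever_arm_ratio(v_p2, v_p3_transition):
    """Fit a straight line to the locations of a dot-d charge transition
    in the (V_P2, V_P3) plane; the magnitude of the slope equals
    alpha_{d,P2} / alpha_{d,P3} in the constant-interaction picture."""
    slope, _ = np.polyfit(v_p2, v_p3_transition, 1)
    return -slope

# Hypothetical, nearly vertical P3 transition: tiny dependence on V_P2
v_p2 = np.linspace(0.30, 0.50, 21)
v_p3 = 0.609 - 8e-3 * (v_p2 - 0.30) + np.random.normal(0, 1e-4, v_p2.size)
print(f"alpha_P3,P2 / alpha_P3,P3 ~ {lever_arm_ratio(v_p2, v_p3):.3e}")
\end{verbatim}
A nearly vertical transition line thus fits to a near-zero slope magnitude, which is the signature of negligible cross-talk between the remote plunger and the sensed dot.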
\par Overall, the data from Fig.~\ref{fig:figure_3}~\textbf{(c)} demonstrate the simultaneous readout of QDs that are separated by approximately $510$~nm, operating the elongated SEB as a distributed charge sensor. The fact that a single EQD charge transition is capacitively shifted by the addition of charges to either P2 or P3 demonstrates that the EQD extends approximately over the length of gate T. We did not assess the P2-T and T-P3 tunnel couplings at this operating point; however, in Appendix~\ref{sec:barrier_quantum_dot_sensing} we demonstrate that by utilizing dots under B-2T and B-T3 rather than P2 and P3, tunnel coupling to the EQD can be achieved. Our results demonstrate extended EQD wavefunctions and tunnel coupling to QDs in the periphery, both necessary requirements for utilizing the EQD states for mediated exchange~\cite{srinivasa2015tunable}. \par \subsection{Simulated quantum-mechanical electron densities} \label{sec:simulated_quantum_mechanical_electron_densities} \begin{figure} \includegraphics[]{Figures/Figure 4/Figure4_figure} \caption{ \label{fig:figure_4} \textbf{Estimating the EQD length and the lever arm matrix.} \textbf{(a)} Simulated QMEDs of the T-P3 DQD with B-T3 biased with a positive voltage (top panel; see Fig.~\ref{fig:figure_2} \textbf{(b)}) and at zero bias (bottom panel; Fig.~\ref{fig:figure_2} \textbf{(c)}) are shown as grayscale colormaps overlaid with layer 2 (green) and 3 (blue) gate locations (dotted rectangles). Red contours correspond to $(1 - m \sigma) \rho_{\mathrm{max}}$ for $m = 1,2,3$. Gate side view (top) highlights the locations of gates T and B-T3. \textbf{(b)} EQD length as a function of electron number $n_{\mathrm{T}}$, integrated from the QMED. Red datasets are obtained by biasing only the gate T, and correspond to $(1 - m\sigma) \rho_{\mathrm{max}}$ for $m=1,2$ in increasing lightness. Dotted lines are fits to the power law $a n_{\mathrm{T}}^{-1/2} + b$. Blue datasets are obtained by biasing $V_{\mathrm{B-2T}} = 0.275$ V, $V_{\mathrm{B-RT}} = 0.3$ V, and varying $V_{\mathrm{T}}$; likewise, $m=1,2$ are shown in increasing lightness. The cross markers correspond to the operating point of Fig.~\ref{fig:figure_3}~\textbf{(c)}. \textbf{(c)} Experimentally estimated lever arm matrix components. We use data from Figs.~\ref{fig:figure_2}-\ref{fig:figure_3}, together with an independent estimate for $\alpha_{T,T}$, to infer the lever arm matrix. \textbf{(d)} Simulated lever arm matrix. Simulations use gate biases corresponding to the experimental operating points, with each QMED corresponding to a QD simulated separately with up to nearest-neighbour gate biases. \textbf{(e)} Relative errors between the experimentally estimated and simulated lever arm matrix components. } \end{figure} \par To support the interpretation of a delocalized charge state under the EQD, and to benchmark our quantitative understanding of the QD systems under study, we employ a self-consistent Schr{\"o}dinger-Poisson solver (SPS) from a three-dimensional nanostructure simulation software~\cite{birner2011modeling, nextnanoWebManual} to evaluate so-called quantum-mechanical electron densities (QMEDs), denoted with $\rho(\textbf{r})$. We treat the QMEDs as probability densities for the QDs in order to estimate the shapes of many-electron charge states (see Appendix~\ref{sec:self_consistent_schroedinger_poisson_simulation} for details of the simulation methods).
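\par The post-processing of a QMED into the quantities used below can be sketched as follows; the density map is a synthetic Gaussian stand-in for an SPS output, and a Gaussian $1\sigma$ cut is used as an illustrative stand-in for the $(1 - m \sigma) \rho_{\mathrm{max}}$ contour convention of Fig.~\ref{fig:figure_4}:
\begin{verbatim}
import numpy as np

# Synthetic density rho(x, y) (electrons per nm^2) standing in for an
# SPS output: a Gaussian blob elongated along the EQD axis x
x = np.linspace(-250.0, 250.0, 501)   # nm
y = np.linspace(-50.0, 50.0, 101)     # nm
X, Y = np.meshgrid(x, y, indexing="ij")
rho = 1e-3 * np.exp(-((X / 150.0) ** 2) - ((Y / 20.0) ** 2))

# Electron number: integrate the density over the plane
dx, dy = x[1] - x[0], y[1] - y[0]
n_t = rho.sum() * dx * dy
print(f"n_T ~ {n_t:.1f} electrons")

# Dot length: x-extent of the region above a contour level, here an
# exp(-1/2) * rho_max ("1 sigma") cut
level = np.exp(-0.5) * rho.max()
cols = (rho >= level).any(axis=1)   # x columns with density above level
length = x[cols][-1] - x[cols][0]
print(f"EQD length ~ {length:.0f} nm at the 1-sigma contour")
\end{verbatim}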
Figure~\ref{fig:figure_4}~\textbf{(a)} shows $(x,y)$ plane views of the simulated QMEDs of the T-P3 system studied in Fig.~\ref{fig:figure_2}~\textbf{(b)}-\textbf{(c)}. The two QMEDs are obtained by setting the QD plunger gates (T or P3) and the nearest-neighbour barrier gates to the non-zero biases at which the experimental data were taken. In the simulations, the barriers modify the shapes of the QDs, pulling the QDs controlled with plunger gates towards the biased barriers, and extending the shape of the EQD. As we discuss below, the QD shape and location have an impact on, e.g., the lever arms, which are also experimentally measurable. \par The EQD length, obtained from the simulated $1\sigma$ and $2 \sigma$ QMED contours, is studied for a range of electron numbers, determined by integrating the simulated electron densities over a series of $V_{\mathrm{T}}$ voltages. The results are shown in Fig.~\ref{fig:figure_4}~\textbf{(b)}. In a simulation where only the gate T is biased, the EQD length increases monotonically. The EQD length can be fitted to the power law $x_{\mathrm{EQD}} = a n_{\mathrm{T}}^{-1/2} + b$, where $n_{\mathrm{T}}$ is the simulated electron number, $a < 0$, and we find $b = 347$~nm and $b = 339$~nm for the $1 \sigma$ and $2 \sigma$ contours, respectively. \par When B-2T and B-RT are also positively biased with constant voltages, the electron density under B-RT only, $n_{\mathrm{B-RT}} \approx 18.8$, is subtracted from the electron numbers. Here, the EQD length is a more complicated function of the electron number: the more gradual increase at low occupancy is due to how the B-RT gate pulls electrons, and the sharper increase at $n_{\mathrm{T}} \approx 6$ is caused by the EQD density merging with the density under B-2T. As the electron number increases further, the EQD length (defined by $1\sigma$ or $2\sigma$) gradually decreases due to an increasing concentration of charge in the centre of the QD. The cross markers show the simulated datapoints with $V_{\mathrm{T}} = 0.7093$~V (corresponding to the setpoint from Fig.~\ref{fig:figure_3}~\textbf{(c)}). The estimated length at this datapoint is $x = 320 \pm 2$~nm at $1 \sigma$, and $x = 354 \pm 2$~nm at $2 \sigma$. We use four measured datasets to estimate the lever arm components of the (P2,T,P3) system and compare them with simulated values in Fig.~\ref{fig:figure_4}~\textbf{(c)}-\textbf{(e)}. Details of the lever arm extraction, as well as all estimated and simulated lever arm components, are found in Appendices~\ref{sec:lever_arm_estimation},~\ref{sec:self_consistent_schroedinger_poisson_simulation}, and~\ref{sec:simulated_capacitance_matrices}. Simulated lever arms are systematically larger than the experimentally extracted values, albeit typically agreeing within an order of magnitude. We find the largest relative errors for $\alpha_{\mathrm{P2,P3}}$ and $\alpha_{\mathrm{P2,T}}$ ($14.2$ and $5.3$, respectively), while the remaining off-diagonal lever arms have the smallest errors, from $0.074$ to $0.67$. \par We simulate the TQD charge stability diagram from Fig.~\ref{fig:figure_3}~\textbf{(c)} using the estimated lever arm components from Fig.~\ref{fig:figure_4}~\textbf{(c)} (upper matrix), and the resulting estimated capacitances (see Appendix~\ref{sec:stability_map_simulation}). The resulting voltage cross-derivative of the ground-state energy of the Hamiltonian, $\mathrm{d} ( \mathrm{d} E_{g} / \mathrm{d} V_{\mathrm{P3}}) / \mathrm{d} V_{\mathrm{P2}}$, is shown in Fig.~\ref{fig:figure_3}~\textbf{(d)}.
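\par For concreteness, a minimal constant-interaction version of this simulation is sketched below; the charging energies, mutual couplings, and lever arms are placeholder values, not the fitted parameters of Appendix~\ref{sec:stability_map_simulation}:
\begin{verbatim}
import numpy as np
from itertools import product

# Placeholder on-site / mutual charging energies (eV) and lever arms
E_C = {"P2": 0.02, "T": 0.004, "P3": 0.02}
E_m = {("P2", "T"): 0.002, ("T", "P3"): 0.002, ("P2", "P3"): 0.0002}
alpha = {("P2", "P2"): 0.10, ("P2", "P3"): 0.001, ("T", "P2"): 0.02,
         ("P3", "P3"): 0.10, ("P3", "P2"): 0.001, ("T", "P3"): 0.03}

def ground_energy(v_p2, v_p3, n_max=3):
    """Minimum electrostatic energy over configurations (n_P2, n_T, n_P3)."""
    best = np.inf
    for n in product(range(n_max), repeat=3):
        n_d = dict(zip(("P2", "T", "P3"), n))
        e = 0.0
        for d in n_d:
            # gate-induced potential on dot d (only P2/P3 swept here)
            phi = alpha[(d, "P2")] * v_p2 + alpha[(d, "P3")] * v_p3
            e += 0.5 * E_C[d] * n_d[d] ** 2 - n_d[d] * phi
        for (a, b), em in E_m.items():
            e += em * n_d[a] * n_d[b]
        best = min(best, e)
    return best

v2 = np.linspace(0.0, 0.6, 61)
v3 = np.linspace(0.0, 0.6, 61)
E = np.array([[ground_energy(a, b) for b in v3] for a in v2])
cross = np.gradient(np.gradient(E, v3, axis=1), v2, axis=0)
\end{verbatim}
The ground-state configuration changes at the honeycomb boundaries of the stability diagram, so the cross-derivative is non-zero only along those boundaries, which is what the grayscale map of Fig.~\ref{fig:figure_3}~\textbf{(d)} highlights.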
See Appendix~\ref{sec:stability_map_simulation} for details of the simulation, and for the parameters used. The simulation displays qualitative agreement with the data, and confirms the charge configurations $(n_{\mathrm{P2}}, n_{\mathrm{T}}, n_{\mathrm{P3}})$. The measured sensor slope in the $(n_{\mathrm{P3}}, n_{\mathrm{P2}}) = (1,1)$ region is $a_{\mathrm{T}} = -0.703 \pm 0.008$, while the choice of lever arm matrix in the simulation leads to $a_{\mathrm{T}} = -0.739$. The experimental and simulated (P2,T) charge-induced voltage shifts along $V_{\mathrm{P2}}$, $\Delta V_{\mathrm{P2}} = 13 \pm 1$~mV, agree within the experimental resolution of $\pm 1$~mV. \section{Outlook} We have used the EQD as an rf-SEB charge sensor capable of sensing QDs up to $355$~nm away from the EQD center, suggesting that the same SEB charge state may be sensitive to charges in QDs separated by over $700$~nm. Our results are well supported by quantum-mechanical electron density simulations. The enhanced functionality provided by the EQD may be extended in future QD-based architectures to sensors defined with more complex gate shapes, such as a right angle or a cross. A single sensor could then sense multiple QDs placed around its periphery, enabling novel unit cells requiring fewer individual gate structures for readout. Combined with the demonstration of few-electron QDs, our results show the potential of this multi-gate polysilicon platform to produce scalable QD unit cells. \par Another potential application of this type of elongated QD is as a mid-range spin qubit coupler, as previously demonstrated for QDs in GaAs/AlGaAs heterostructures~\cite{malinowski2019fast}. We have demonstrated here two basic requirements for this application: the quantization of charge in the EQD and the tunnel coupling to QDs at the periphery. We envision that extended QDs could become an important resource to increase the range of qubit-qubit interactions in silicon, complementing other approaches such as spin shuttling~\cite{yoneda2021coherent, noiri2022shuttling, seidler2022conveyor}, capacitive coupling with floating gates~\cite{gilbert2020single, duan2020remote} and microwave photonic links~\cite{borjans2020resonant, harveycollard2022coherent}. Additionally, we have shown that the EQD can be used as a local electron reservoir, which can be utilized in schemes mitigating charge leakage errors~\cite{cai2019silicon}. \section{Acknowledgements} This research was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement no.\ 951852 (QLSI), by the UK's Engineering and Physical Sciences Research Council (EPSRC) via QUES2T (EP/N015118/1), and by the Hub in Quantum Computing and Simulation (EP/T001062/1). AC acknowledges funding from the Danish Independent Research Fund. M.F.G.-Z.\ is a UKRI Future Leaders Fellow (MR/V023284/1).
\section{Introduction and Related Work}\label{sec:introduction} After multiple works in the spectral domain \cite{vasquez2019melnet,engel2019gansynth}, deep generative models in the waveform domain have recently shown the ability to produce high-fidelity results with different methods: autoregressive \cite{oord2016wavenet,mehri2017samplernn}, flow-based \cite{prenger2018waveglow}, energy-based \cite{gritsenko2020spectral} or based on Generative Adversarial Networks (GANs) \cite{donahue2019adversarial}. In the task of generating drum sounds in the waveform domain, GAN-based approaches have been explored in \cite{donahue2019adversarial} and \cite{nistal2020drumgan}. However, these models can generate only low-resolution 16~kHz drum sounds, which is often unacceptable for music producers. Interactive sound design is often a major motivation behind these works: in \cite{aouameur2019neural}, the authors use Variational Autoencoders (VAEs) to generate spectrograms of drums and apply a principal component analysis to the latent space of the VAE in order to explore the drum timbre space. One of the disadvantages of this model is that the reconstructions of the sounds by the VAE tend to be blurry. In \cite{bazin2021spectrogram}, the authors use a VQ-VAE2 \cite{razavi2019generating} to perform inpainting on instrument sound spectrograms. Score-based generative models \cite{vincent2011connection,ho2020denoising, song2019generative,song2021scorebased} propose a different approach to generative modeling, which consists in estimating the gradient of the noise-corrupted data log-densities (the score function): by iteratively denoising sampled noise, these approaches have obtained promising results, mainly on image data. Moreover, the authors of the Denoising Diffusion Implicit Model (DDIM) \cite{song2021ddim} use non-Markovian diffusion processes to accelerate the sampling of diffusion models. To this day, only two score-based generative models in the waveform domain have been published \cite{kong2021diffwave, chen2020wavegrad}, and they are mostly focused on the task of neural vocoding with conditioning on a mel-spectrogram. In \cite{kong2021diffwave}, the authors achieved the task of generating audio with an unconditioned model trained on the speech command dataset \cite{warden2018speech}. The inference procedure of \cite{kong2021diffwave} does not provide a flexible sampling scheme because the model is trained on a fixed discrete noise schedule, whereas \cite{chen2020wavegrad} is trained on a continuous scalar indicating the noise level. In the image domain, \cite{song2021scorebased} generalizes the works of \cite{sohldickstein2015deep,ho2020denoising,song2019generative} by framing the noise corruption procedure as a stochastic differential equation. Score-based generative models offer the following advantages over GAN-based approaches: \begin{itemize} \item Training time is reduced and training is more stable since there is only one network to train. \item Class-conditional generation can be achieved by training a classifier a posteriori, which lets us train a model only once. \item Data can be mapped to a latent space without the need to train an additional encoder, in contrast to GANs \cite{encoder2021tov}, which makes the interpolation between two given input data readily available with only one model.
\end{itemize} These properties remove the need to search for directions in the latent space as in \cite{harkonen2020ganspace} or to directly hardcode conditional features into the architecture as in \cite{mirza2014conditional}. This easily controllable latent space permits sound design applications. One downside of score-based models compared to GANs is their higher inference time when generating new samples. In this work, we extend the approach of \cite{song2021scorebased} and propose CRASH (Controllable Raw Audio Synthesis with High-resolution), a score-based generative model adapted to the waveform domain. On a drum sound dataset, the numerous capabilities offered by this architecture allow for musically relevant sound design applications. Our contributions are the following: \begin{itemize} \item A score-based model for unconditional generation that can achieve high-fidelity 44.1~kHz drum sounds directly in the waveform domain, \item The use of a noise-conditioned U-Net to estimate the score function, \item A novel \emph{class-mixing} sampling scheme to generate "hybrid" sounds, \item A reparameterization of the SDE showing that the deterministic DDIM sampling is another discretization of \cite{song2021scorebased}, which leads to faster-than-real-time sampling, \item Experimental and practical insights about the choice of the stochastic differential equation used to corrupt the data. \end{itemize} \section{Background} \label{sec:background} \subsection{Score Based Modelling through Stochastic Differential Equations} \begin{figure}[h] \centering \includegraphics[scale=0.2]{figs/forward_backward_sde.png} \caption{Illustration of the noising and denoising processes of a kick sound with a VP schedule} \label{fig:sde} \end{figure} \subsubsection{Forward Process} Let $p_\text{data}$ be a data distribution. Diffusion models consist in progressively adding noise to the data, transforming its distribution into a known distribution from which we can sample, as shown in Fig.~\ref{fig:sde}. In \cite{song2021scorebased}, the authors formalize this noising process as the following \textbf{forward} Stochastic Differential Equation (SDE): \begin{equation} \label{eq:sde1} \dd{\mathbf{x}} = f(t) \mathbf{x} \dd{t} + g(t) \dd{\mathbf{w}} \end{equation} where $f: [0, T] \to \mathbb{R}^-$ and $g: [0, T] \to \mathbb{R}^+$ are continuous functions, and $\mathbf{w}$ is a standard Wiener process. Such an approach can be understood as a continuous-time generalization of Denoising Diffusion Probabilistic Models (DDPMs) \cite{sohldickstein2015deep,ho2020denoising} and denoising Score Matching with Langevin Dynamics (SMLD) \cite{song2019generative}. For $\mathbf{x}(0) \sim p_\text{data}$, the transition kernel of Eq.~\ref{eq:sde1} is given by a normal distribution: \begin{equation} \label{eq:transition-kernel} p_t(\mathbf{x}(t) \mid \mathbf{x}(0)) = \mathcal{N}(\mathbf{x}(t); m(t)\mathbf{x}(0), \sigma^2(t)\mathbf{I}), \end{equation} where $m(t)$ and $\sigma(t)$ follow the system: \begin{equation} \left\{ \begin{array}{ll} \dv{m}{t} = f(t) m(t) \\ \dv{\sigma^2(t)}{t} = 2f(t) \sigma^2(t) + g^2(t) \end{array} \right. \label{eq:system_m_sigma} \end{equation} with initial conditions $m(0) = 1$ and $\sigma^2(0) = 0$. The solutions for $m(t)$ and $\sigma(t)$ are: \begin{equation} \left\{ \begin{array}{ll} m(t) = e^{\int_{0}^{t} f(s) \dd{s}} \\ \sigma^2(t) = e^{\int_{0}^{t} 2f(s) \dd{s}} \int_{0}^{t} g^2(u) e^{\int_{0}^{u} -2f(s) \dd{s}} \dd{u}.
\end{array} \right. \label{solution_m_sigma} \end{equation} In \cite{song2021scorebased}, the authors define three types of SDEs, which are presented in Tab.~\ref{sde_fg}. \begin{table}[h!] \begin{center} \begin{tabular}{ | c | c | c | } \hline & $f(t)$ & $g(t)$ \\ \hline VP & $-\frac{1}{2}\beta(t)$ & $\sqrt{\beta(t)}$ \\ \hline VE & $0$ & $\sqrt{\dv{[\sigma^2(t)]}{t}}$ \\ \hline sub-VP & $-\frac{1}{2}\beta(t)$ & $\sqrt{\beta(t)(1 - e^{-2\int_{0}^{t} \beta(s) \dd{s}})}$ \\ \hline \end{tabular} \end{center} \caption{Functions used in the VP, VE and sub-VP SDEs} \label{sde_fg} \end{table} For the Variance Preserving (VP) and sub-Variance Preserving (sub-VP) schedules, $m(T)\approx 0$ and $\sigma(T) \approx 1$, which means that the original data distribution is transformed into a distribution close to a standard normal distribution, i.e. $p_{T} \approx \mathcal{N}(\mathbf{0}, \mathbf{I})$. For the Variance Exploding (VE) schedule, $\sigma^2(T) \gg m(T) \approx 1$, which means that the original data is not perceptible at $t=T$ and that $p_{T} \approx \mathcal{N}(\mathbf{0}, \sigma^2(T)\mathbf{I})$. \subsubsection{Generation with the Reverse Process} In order to sample from the data distribution, we can sample $\mathbf{x}_T \sim p_T$ and apply the associated reverse-time SDE \cite{anderson1982reverse} given by: \begin{equation} \dd{\mathbf{x}} = [f(t)\mathbf{x} - g^2(t) \nabla_{\mathbf{x}} \log p_t(\mathbf{x})]\dd{t}+ g(t)\dd{\mathbf{\Tilde{w}}} \label{eq:reverse_sde} \end{equation} where $\mathbf{\Tilde{w}}$ is a standard Wiener process running backwards from $T$ to $0$ and $\dd{t}$ is an infinitesimal negative timestep. This means that, knowing $\nabla_{\mathbf{x}} \log p_t(\mathbf{x})$, we can use a discretization of Eq.~\ref{eq:reverse_sde} to sample $\mathbf{x}(0)$ from $p_0=p_\text{data}$. In practice, the score function $s(\mathbf{x}(t), \sigma(t)) = \nabla_{\mathbf{x}} \log p_{t}(\mathbf{x})$ is intractable, and it is approximated by a neural network $s_\theta(\mathbf{x}(t), \sigma(t))$ parameterized by $\theta$. In order to train the network, \cite{vincent2011connection} shows that for any $t$, minimizing \begin{equation} \mathbb{E}_{p_{t}(\mathbf{x})} {\left\| s_\theta(\mathbf{x}, \sigma(t)) - \nabla_{\mathbf{x}} \log p_{t}(\mathbf{x}) \right\|}_2^2 \end{equation} is equivalent to minimizing \begin{equation} \mathbb{E} {\left\| s_\theta(\mathbf{x}, \sigma(t)) - \nabla_{\mathbf{x}} \log p_{t}(\mathbf{x}(t) \mid \mathbf{x}(0)) \right\|}_2^2 \label{loss} \end{equation} where the expectation is over $\mathbf{x}(0) \sim p_\text{data}$, $\mathbf{x}(t) \sim p_{t}(\mathbf{x}(t) \mid \mathbf{x}(0))$, and the latter distribution is given by Eq.~\ref{eq:transition-kernel}. Now, in order to train the network for all $t \in [0, T]$, we consider the following mixture of the losses of Eq.~\ref{loss} over all noise levels: \begin{equation} L(\theta)=\mathbb{E} \lambda(t) {\left\| s_\theta(\mathbf{x}(t), \sigma(t)) - \nabla_{\mathbf{x}(t)} \log p_{t}(\mathbf{x}(t) \mid \mathbf{x}(0)) \right\|}_2^2 \end{equation} where we sample $t\sim\mathcal{U}([0,T])$, $\mathbf{x}(0) \sim p_\text{data}$, $\mathbf{x}(t) \sim p_{t}(\mathbf{x}(t) \mid \mathbf{x}(0))$, and where $\lambda(t)$ is a weighting function.
In \cite{ho2020denoising, song2019generative, song2021scorebased}, $\lambda(t)$ is empirically set such that $\lambda(t)^{-1} \propto {\mathbb{E}{\left\| \nabla_{\mathbf{x}(t)} \log p_{t}(\mathbf{x}(t) \mid \mathbf{x}(0)) \right\|}^2_{2}} \propto \sigma^{2}(t)^{-1}$, while in \cite{durkan2021maximum} the authors show that the maximum likelihood estimator is obtained with $\lambda(t)=g^2(t)$ in $L(\theta)$. The training procedure is described in Alg.~\ref{alg:training_alg}, where we reparameterize our neural network as $\mathbf{\epsilon}_\theta(\mathbf{x}(t), \sigma(t)) := - \sigma(t)s_\theta(\mathbf{x}(t), \sigma(t))$ in order to estimate $\mathbf{\epsilon}$. \begin{algorithm} \caption{Training procedure} \label{alg:training_alg} \begin{algorithmic} \WHILE{Training} \STATE Sample $t \sim \mathcal{U}([0, T]), \mathbf{x}(0) \sim p_{\text{data}}, \mathbf{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ \STATE Compute $\mathbf{x}(t) = m(t)\mathbf{x}(0) + \sigma(t)\mathbf{\epsilon}$ \STATE Gradient descent on $\nabla_{\theta} {\left\| \frac{\sqrt{\lambda(t)}}{\sigma(t)} [\mathbf{\epsilon}_\theta(\mathbf{x}(t), \sigma(t)) - \mathbf{\epsilon}] \right\|}_2^2$ \ENDWHILE \end{algorithmic} \end{algorithm} Once the network is trained, an $N$-step discretization of the \textbf{reverse-time} SDE is used to unconditionally generate samples. This process is described in Alg.~\ref{alg:sde_sampling}; it is non-deterministic, since starting from the same sample $\mathbf{x}(T)$ yields different sounds. \begin{algorithm} \caption{Sampling via SDE} \label{alg:sde_sampling} \begin{algorithmic} \STATE Choose $N$, sample $\mathbf{x}_N \sim \mathcal{N}(\mathbf{0}, \sigma^2(T)\mathbf{I})$ \FOR{$i = N-1, ..., 0$} \STATE $t_i = T\frac{i}{N}, f_i = f(t_i), g_i = g(t_i), \sigma_i = \sigma(t_i)$ \STATE $\mathbf{x}_i = (1 - \frac{f_{i+1}}{N}) \mathbf{x}_{i+1} - \frac{g^2_{i+1}}{N\sigma_{i+1}} \mathbf{\epsilon}_\theta (\mathbf{x}_{i+1}, \sigma_{i+1})$ \IF{$i>0$} \STATE Sample $\mathbf{z}_{i+1} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ \STATE $\mathbf{x}_i = \mathbf{x}_i + \frac{g_{i+1}}{\sqrt{N}} \mathbf{z}_{i+1}$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Deterministic Sampling via the Score-based Ordinary Differential Equation} As mentioned in \cite{song2021scorebased}, for any SDE there exists a corresponding deterministic process which satisfies an ordinary differential equation (ODE): \begin{equation} \dd{\mathbf{x}} = [f(t)\mathbf{x} - \frac{1}{2} g^2(t) \nabla_{\mathbf{x}} \log p_t(\mathbf{x})]\dd{t} \label{eq:ode} \end{equation} This defines a flow $\phi^t$ such that the marginal distributions $\phi^t_*(p_\text{data})$ are identical to the ones obtained by applying the SDE of Eq.~\ref{eq:sde1}. This mapping is interesting because it provides a latent representation for each $\mathbf{x} \sim p_\text{data}$. The procedure for sampling via the $N$-step discretization of the ODE is described in Alg.~\ref{alg:ode_sampling}. Moreover, we also experimented with sampling using the \texttt{scipy.integrate.solve\_ivp} solver with the RK45 method.
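For illustration, a minimal Python sketch of sampling through the ODE of Eq.~\ref{eq:ode} with \texttt{solve\_ivp} is given below. The score network \texttt{eps\_theta} and the schedule functions \texttt{f}, \texttt{g} and \texttt{sigma} are placeholders for a trained model and a chosen SDE, i.e.\ assumptions rather than our exact implementation:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

T, L, ETA = 1.0, 21000, 1e-4  # horizon, signal length, early stop

def ode_rhs(t, x):
    # dx/dt = f(t) x - (1/2) g(t)^2 * score,
    # with score = -eps_theta(x, sigma(t)) / sigma(t).
    return f(t) * x + 0.5 * g(t) ** 2 * eps_theta(x, sigma(t)) / sigma(t)

x_T = sigma(T) * np.random.randn(L)      # sample from the prior p_T
sol = solve_ivp(ode_rhs, (T, ETA), x_T,  # integrate backwards in time
                method="RK45", rtol=1e-5, atol=1e-5)
x_0 = sol.y[:, -1]                       # generated waveform
\end{verbatim}
Stopping the integration at a small $t = \eta$ instead of exactly $0$ avoids dividing by $\sigma(0) = 0$.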
\begin{minipage}{0.49\textwidth} \begin{algorithm}[H] \caption{Sampling via ODE} \label{alg:ode_sampling} \begin{algorithmic} \STATE Choose $N$, sample $\mathbf{x}_N \sim \mathcal{N}(\mathbf{0}, \sigma^2(T)\mathbf{I})$ \FOR{$i = N-1, ..., 0$} \STATE $t_i = T\frac{i}{N}, f_i = f(t_i), g_i = g(t_i), \sigma_i = \sigma(t_i)$ \STATE $\mathbf{x}_i = (1 - \frac{f_{i+1}}{N}) \mathbf{x}_{i+1} - \frac{g^2_{i+1}}{2N\sigma_{i+1}} \mathbf{\mathbf{\epsilon}}_\theta (\mathbf{x}_{i+1}, \sigma_{i+1})$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \subsection{Inpainting} Suppose we do not like the attack of a kick (or any other part of a sound); inpainting allows us to regenerate the desired part. To do so, we apply a reverse-time SDE or ODE discretization to an isotropic Gaussian and fix the part that we want to keep (with the associated noise corruption) after each denoising timestep. As presented in Section~\ref{sec:experiments}, we obtain very diverse and coherent results. \begin{minipage}{0.49\textwidth} \begin{algorithm}[H] \caption{Inpainting via ODE or SDE} \label{alg:inpainting} \begin{algorithmic} \STATE Choose $N$, $U$ an inpainting mask, $\mathbf{x}_\text{fixed}$ a fixed sound, sample $\mathbf{x}_N \sim \mathcal{N}(\mathbf{0}, \sigma^2(T)\mathbf{I})$ \FOR{$i = N-1, ..., 0$} \STATE $t_i = T\frac{i}{N}, f_i = f(t_i), g_i = g(t_i), \sigma_i = \sigma(t_i), m_i=m(t_i)$ \IF{ODE Sampling} \STATE $\mathbf{x}_i = (1 - \frac{f_{i+1}}{N}) \mathbf{x}_{i+1} - \frac{g^2_{i+1}}{2N\sigma_{i+1}} \mathbf{\mathbf{\epsilon}}_\theta (\mathbf{x}_{i+1}, \sigma_{i+1})$ \ENDIF \IF{SDE Sampling} \STATE $\mathbf{x}_i = (1 - \frac{f_{i+1}}{N}) \mathbf{x}_{i+1} - \frac{g^2_{i+1}}{N\sigma_{i+1}} \mathbf{\mathbf{\epsilon}}_\theta (\mathbf{x}_{i+1}, \sigma_{i+1})$ \IF{$i>0$} \STATE Sample $\mathbf{z}_{i+1} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ \STATE $\mathbf{x}_i = \mathbf{x}_i + \frac{g_{i+1}}{\sqrt{N}} \mathbf{z}_{i+1}$ \ENDIF \ENDIF \STATE Sample $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ \STATE $\mathbf{x}_i \odot U = m_i (\mathbf{x}_\text{fixed} \odot U) + \sigma_i (\mathbf{z} \odot U)$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \subsection{Interpolations} The flexibility of SDEs and ODEs allows us to compute interpolations between sounds. In fact, there exist infinitely many latent spaces, indexed by $t \in [0, T]$. We present here two types of interpolations: ODE interpolation in the latent space of isotropic Gaussians and SDE interpolation in any $t$-indexed latent space. \begin{figure}[h!] \includegraphics[scale=0.25]{figs/interpolation_schema.png} \caption{Interpolation of two sounds via Forward and Backward ODE} \label{fig:interpolation_schema} \end{figure} \subsubsection{ODE interpolation in the latent space of isotropic Gaussians} Let $\mathbf{\epsilon}_1$ and $\mathbf{\epsilon}_2$ be two samples from a standard normal distribution on $\mathbb{R}^L$, where $L$ is the data dimension, and let $0\leq \lambda \leq 1$. We consider the spherical interpolation $\mathbf{\epsilon}_\lambda = \lambda \mathbf{\epsilon}_1 + \sqrt{1-\lambda^2} \mathbf{\epsilon}_2$ and then apply the ODE sampling to it. We choose a spherical interpolation in order to preserve a variance close to 1 for $\mathbf{\epsilon}_\lambda$.
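As a minimal sketch, this spherical combination can be written as follows; the latent codes \texttt{eps1} and \texttt{eps2} would be obtained either by sampling or by the forward ODE, and the dimension is illustrative:
\begin{verbatim}
import numpy as np

def spherical_mix(eps1, eps2, lam):
    # lam * eps1 + sqrt(1 - lam^2) * eps2 keeps the variance
    # close to 1 when eps1, eps2 ~ N(0, I).
    return lam * eps1 + np.sqrt(1.0 - lam ** 2) * eps2

L = 21000
eps1, eps2 = np.random.randn(L), np.random.randn(L)
latents = [spherical_mix(eps1, eps2, lam)
           for lam in np.linspace(0.0, 1.0, 9)]
# Each latent is then decoded with the backward ODE sampler.
\end{verbatim}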
Moreover, if we want to interpolate between two sounds $\mathbf{x}_1$ and $\mathbf{x}_2$, we can apply the forward ODE in order to obtain the corresponding latent codes $\mathbf{\epsilon}_1$ and $\mathbf{\epsilon}_2$, apply the desired spherical interpolation, and then apply the ODE sampling. \subsubsection{ODE interpolation in a t-indexed latent space} In \cite{kong2021diffwave}, the authors perform a linear interpolation between two sounds in an intermediate $t$-indexed latent space before applying a Denoising Diffusion Probabilistic Model (DDPM, the discrete equivalent of a VP SDE). We adapt the method to the continuous framework with the SDE and ODE. Here again, the interpolation can be done between two $t$-indexed latent codes or between sounds corrupted using the transition kernel of Eq.~\ref{eq:transition-kernel}. \subsection{Class-Conditional sampling with a classifier} For any class $y$, we can train a noise-conditioned classifier on corrupted data $\mathbf{x}(t)$. As a consequence, the output of the classifier gives us $p_t(y \mid \mathbf{x}(t))$ for each class $y$. We can use automatic differentiation to differentiate this quantity, and by Bayes' formula, since $p(y)$ is constant for each class $y$, we have: \begin{equation} \label{bayes_formula} \nabla_{\mathbf{x}} \log p_t(\mathbf{x} \mid y) = \nabla_{\mathbf{x}} \log p_t(\mathbf{x}) + \nabla_{\mathbf{x}} \log p_t(y \mid \mathbf{x}) \end{equation} As a consequence, we can generate samples of one class by solving the following reverse-time SDE: \begin{equation} \label{class_cond_eq} \dd{\mathbf{\mathbf{x}}} = [f(t)\mathbf{x} - g^2(t)\nabla_{\mathbf{x}} \log p_t(\mathbf{x} \mid y)]\dd{t}+ g(t) \dd{\mathbf{\Tilde{w}}} \end{equation} This approach is flexible since it only requires training a noise-conditioned classifier: there is no need to design and train a class-conditional score-based model as done in \cite{kong2021diffwave}. \section{Reparameterizing the SDE and ODE and the Link between the ODE and DDIM} As shown in Sect.~\ref{sec:reparam}, any SDE can be reparameterized using the associated perturbation kernel, yielding the following forward SDE for $\frac{\mathbf{x}}{m}$: \begin{equation} \dd{\left(\frac{\mathbf{x}}{m}\right)} = \sqrt{\dv{t}(\frac{\sigma^2}{m^2})} \dd{\mathbf{w}}. \label{eq:sde_snr} \end{equation} According to Eq.~\ref{eq:reverse_sde}, the associated reverse-time SDE is (see Sect.~\ref{sec:reparam} for details): \begin{equation} \dd{\left(\frac{\mathbf{x}}{m}\right)}= 2 \dv{t}(\frac{\sigma}{m})\mathbf{\epsilon}(\mathbf{x}, \sigma) \dd{t}+ \sqrt{\dv{t}(\frac{\sigma^2}{m^2})} \dd{\mathbf{\Tilde{w}}} \end{equation} where $\mathbf{\epsilon}(\mathbf{x}, \sigma) := - \sigma(t)\nabla_{\mathbf{x}} \log p_t(\mathbf{x})$. In the same way, if we compute the associated deterministic ODE, we obtain: \begin{equation} \dd{\left(\frac{\mathbf{x}}{m}\right)}=\dd{\left(\frac{\sigma}{m}\right)}\mathbf{\epsilon}(\mathbf{x}, \sigma). \label{eq:ode_reparam} \end{equation} Moreover, this ODE is a refactored version of Eq.~\ref{eq:ode} divided by $m$. This means that Eq.~\ref{eq:ode} and Eq.~\ref{eq:ode_reparam} encode the same latent representation.
Now, by integrating Eq.~\ref{eq:ode_reparam} between $t_i$ and $t_{i+1}$ (and writing $\mathbf{x}_{i} := \mathbf{x}(t_i)$, $m_i := m(t_i)$, $\sigma_i := \sigma(t_i)$), we obtain: \begin{equation} \label{eq:ddim_int} \begin{aligned} \frac{\mathbf{x}_{i+1}}{m_{i+1}} - \frac{\mathbf{x}_{i}}{m_{i}} = \int_{t_i}^{t_{i+1}} \dv{t}\left(\frac{\sigma(t)}{m(t)}\right) \mathbf{\epsilon}(\mathbf{x(t)}, \sigma(t)) \dd{t}\\ \approx \int_{t_i}^{t_{i+1}} \dv{t}\left(\frac{\sigma(t)}{m(t)}\right) \mathbf{\epsilon}(\mathbf{x}_{i+1}, \sigma_{i+1}) \dd{t}\\ = (\frac{\sigma_{i+1}}{m_{i+1}} - \frac{\sigma_{i}}{m_{i}})\mathbf{\epsilon}(\mathbf{x}_{i+1}, \sigma_{i+1}) \end{aligned} \end{equation} This discretization is exactly the deterministic one from Denoising Diffusion Implicit Models (DDIM) \cite{song2021ddim}. Empirically, Alg.~\ref{alg:ddim_sampling} yields high-quality samples with only 20 or 30 steps, which permits faster-than-real-time sampling. This comes from the fact that the only approximation error is due to $\mathbf{\epsilon}(\mathbf{x}(t), \sigma(t)) \approx \mathbf{\epsilon}(\mathbf{x}_{i+1}, \sigma_{i+1})$ between $t_i$ and $t_{i+1}$. \begin{algorithm}[H] \caption{DDIM sampling} \label{alg:ddim_sampling} \begin{algorithmic} \STATE Choose $N$, sample $\mathbf{x}_N \sim \mathcal{N}(\mathbf{0}, \sigma^2(T)\mathbf{I})$ \FOR{$i = N-1, ..., 0$} \STATE $t_i = T\frac{i}{N}, m_i = m(t_i), \sigma_i = \sigma(t_i)$ \STATE $\mathbf{x}_i = \frac{m_i}{m_{i+1}} \mathbf{x}_{i+1} + (\sigma_i - \sigma_{i+1}\frac{m_i}{m_{i+1}}) \mathbf{\epsilon}_\theta (\mathbf{x}_{i+1}, \sigma_{i+1})$ \ENDFOR \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{Reparameterized SDE Sampling} \label{alg:ddim_sde_sampling} \begin{algorithmic} \STATE Choose $N$, sample $\mathbf{x}_N \sim \mathcal{N}(\mathbf{0}, \sigma^2(T)\mathbf{I})$ \FOR{$i = N-1, ..., 0$} \STATE $t_i = T\frac{i}{N}, m_i = m(t_i), \sigma_i = \sigma(t_i)$ \STATE $\mathbf{x}_i = \frac{m_i}{m_{i+1}} \mathbf{x}_{i+1} + 2(\sigma_i - \sigma_{i+1}\frac{m_i}{m_{i+1}}) \mathbf{\epsilon}_\theta (\mathbf{x}_{i+1}, \sigma_{i+1})$ \IF{$i>0$} \STATE Sample $\mathbf{z}_{i+1} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ \STATE $\mathbf{x}_i = \mathbf{x}_i + \sqrt{(\frac{\sigma_{i+1} m_i}{m_{i+1}})^2 - \sigma_{i}^2} \mathbf{z}_{i+1}$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \section{A Discussion about Choosing the Right SDE: a Generalization of the sub-VP SDE} In this section, $T=1$. \label{sec:discussion} \subsection{The Signal-to-Noise Ratio (SNR)} As Eq.~\ref{eq:sde_snr} shows, any SDE for $\mathbf{x}$ is equivalent to a Variance Exploding SDE for $\frac{\mathbf{x}}{m}$. We define $\text{SNR}$, the \emph{signal-to-noise ratio associated with the SDE}, by $\text{SNR}(t) := \frac{m(t)^2}{\sigma(t)^2}$, as it completely determines the signal-to-noise ratio of $x(t)$ via $\text{SNR}(\mathbf{x}(t)) = \text{SNR}(t) \mathbb{E}[\mathbf{x}(0)^2]$. The quantities defined by the variations of $\text{SNR}(t)$ are more interpretable than the functions $f$ and $g$ when working with the SDE and ODE reparameterizations of Eq.~\ref{eq:sde_snr} and Eq.~\ref{eq:ode_reparam}. We test different functions in the experiments. Moreover, once the SNR is defined, we still need to provide a function $m(t)$ (or equivalently $\sigma(t)$). \subsection{About the relation between $m(t)$ and $\sigma(t)$} The VP SDE is the continuous version of the Denoising Diffusion Probabilistic Model (DDPM) used in \cite{ho2020denoising,kong2021diffwave,chen2020wavegrad}.
One of the main features of this model is that the mean coefficient $m(t)$ of the perturbation kernel is linked to the standard deviation $\sigma(t)$ (or noise level) by the relation $m(t)=\sqrt{1-\sigma^2(t)}$. Moreover, although they do not state it in these terms, the authors of \cite{song2021scorebased} introduce the sub-VP SDE, which is characterized by the relation $m(t) = \sqrt{1 - \sigma(t)}$. They obtained their best results with this schedule. This relation leads to an $\text{SNR}$ that decays faster and that has a higher limit value at $t=1$. Since $\beta$ is fixed, it also leads to a slowly increasing $\sigma$ function near $t=0$ (see Fig.~\ref{fig:sigma}). In this work, we explore the four relations between $m$ and $\sigma$ described in Tab.~\ref{sde_fg_gen} in order to study the influence of the decay of the ${\text{SNR}}$. We also give the functions $f(t)$ and $g(t)$ for each of these 4 relations. For the rest of the paper we adopt the convention $f(t):=-\frac{1}{2}\beta(t)$ in order to compare the VP and sub-VP schedules with ours. \begin{table}[h!] \begin{center} \scalebox{0.7}{% \begin{tabular}{ | c | c | c | } \hline $m$-$\sigma$ relation & $f(t)$ & $g(t)$ \\ \hline $m=\sqrt{1-\sigma^2}$ (VP) & $-\frac{1}{2}\beta(t)$ & $\sqrt{\beta(t)}$ \\ \hline $m=\sqrt{1-\sigma}$ (sub-VP) & $-\frac{1}{2}\beta(t)$ & $\sqrt{\beta(t)(1 - e^{-2\int_{0}^{t} \beta(s) \dd{s}})}$ \\ \hline $m=1-\sigma$ (sub-VP 1-1)& $-\frac{1}{2}\beta(t)$ & $\sqrt{\beta(t)(1 - e^{-\frac{1}{2}\int_{0}^{t} \beta(s) \dd{s}})}$ \\ \hline $m=(1-\sigma)^2$ (sub-VP 1-2)& $-\frac{1}{2}\beta(t)$ & $\sqrt{\beta(t)(1 - \frac{3}{2}e^{-\frac{1}{2}\int_{0}^{t} \beta(s) \dd{s}}+\frac{1}{2}e^{-\int_{0}^{t} \beta(s) \dd{s}})}$ \\ \hline \end{tabular}} \end{center} \caption{Functions used in the VP, sub-VP and generalized sub-VP SDEs. We give the general formulas in section \ref{sec:sub-vp_gen}.} \label{sde_fg_gen} \end{table} \subsection{Choosing the right functions for the SDE} Choosing a particular relation between $m$ and $\sigma$ imposes a relation between $g$ and $\beta$. The remaining free parameter is the function $\beta$, needed to fully define the SDE. In \cite{song2021scorebased}, the authors use a linear schedule for $\beta(t)$ because it is the continuous generalization of DDPMs. As presented in Fig.~\ref{fig:sigma}, this choice leads to a $\sigma(t)$ function that rapidly grows to its maximum. In \cite{nichol2021improved}, the authors mention this fast-growing $\sigma$ function as a potential shortcoming and propose a smoother function (the green one in Fig.~\ref{fig:sigma}). \begin{figure}[h!] \centering \includegraphics[scale=0.32]{figs/sigma_t.png} \caption{Different choices for the $\sigma(t)$ function} \label{fig:sigma} \end{figure} Our approach differs from \cite{song2021scorebased} in that the definition of our SDE is motivated by choosing a relatively smooth increasing function $\sigma(t)$ such that $\sigma(0) = 0$ and $\sigma(1) = 1 - \epsilon$ (where $\epsilon$ is a small constant), together with an $m$-$\sigma$ relation, from which all other quantities can be computed as shown in Tab.~\ref{sde_fg_gen}. While the two approaches are equivalent, we believe that these quantities are more interpretable. In the regime of a small number of discretization steps, a slowly increasing function may induce fewer approximation errors. For our experiments we propose $\sigma(t)=\frac{1}{2}[1-\cos((1-s)\pi t)]$ with $s=0.006$, which is the red plot in Fig.~\ref{fig:sigma}.
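As an illustration, the proposed cosine schedule and the four $m$-$\sigma$ relations of Tab.~\ref{sde_fg_gen} can be written compactly as follows (a sketch; the numerical guard at $t=0$ is an implementation detail of ours):
\begin{verbatim}
import numpy as np

S = 0.006                      # offset so that sigma(1) = 1 - epsilon

def sigma_cos(t):
    # Proposed smooth schedule: sigma(0) = 0, sigma(1) close to 1.
    return 0.5 * (1.0 - np.cos((1.0 - S) * np.pi * t))

M_OF_SIGMA = {                 # m as a function of sigma
    "VP":         lambda s: np.sqrt(1.0 - s ** 2),
    "sub-VP":     lambda s: np.sqrt(1.0 - s),
    "sub-VP 1-1": lambda s: 1.0 - s,
    "sub-VP 1-2": lambda s: (1.0 - s) ** 2,
}

def snr(t, relation="sub-VP"):
    # Signal-to-noise ratio associated with the SDE: m(t)^2 / sigma(t)^2.
    s = sigma_cos(t)
    return M_OF_SIGMA[relation](s) ** 2 / np.maximum(s ** 2, 1e-12)
\end{verbatim}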
We also sample $t$ in the interval $[\eta, 1]$ during training, where $\eta$ is chosen such that $\sigma(\eta) = 10^{-4}$, since a noise level of $10^{-4}$ is imperceptible. \section{Class-mixing sampling with a classifier} Drum classes are not perfectly distinct. For instance, the dataset contains drum sounds that are percussive enough to be perceived as kicks but also sufficiently bright to be perceived as snares, and some kicks are combined with a hi-hat sound. We observe that our classifier (at the noise level $\sigma=0$) sometimes outputs mixed class probabilities such as $[0.3, 0.3, 0.4]$, which aligns well with our perception when hearing the sound. We extend class-conditional sampling to a mixture of classes: for a given noisy sound $\mathbf{x}(t)$, the vector $\nabla_{\mathbf{x}(t)} \log p_t(y_i \mid \mathbf{x}(t))$ points in the direction of the class $y_i$ in the noisy $t$-indexed latent space. Now, assuming that we have $N$ classes $(y_i)_{i=1, …, N}$, let $(\lambda_i)_{i=1, …, N}$ be positive real numbers such that $\sum_{i=1}^N \lambda_i = 1$; we define a mixture of classes, denoted $\{(y_i, \lambda_i)\}$, and the associated vector: \begin{equation} \nabla_{\mathbf{x}} \log p_t(\{(y_i, \lambda_i)\} \mid \mathbf{x}) := \sum_{i=1}^N \lambda_i \nabla_{\mathbf{x}} \log p_t(y_i \mid \mathbf{x}) \end{equation} In practice, we substitute this term for the last term of Eq.~\ref{bayes_formula} and use Eq.~\ref{class_cond_eq} to sample class-mixed audio. This yields interesting results with a broad palette of sounds. \section{Architecture} \label{sec:architecture} \subsection{Conditioned U-Net} \label{subsec:unet} Our model architecture is a conditioned U-Net \cite{MeseguerBrocal2019ConditionedUNetIA}, originally proposed for source separation. It takes two inputs: the noise level $\sigma(t)$ and the noisy audio $\mathbf{x}(t)$. The noise level is encoded by Random Fourier Features \cite{tancik2020fourier} followed by a Multi-Layer Perceptron. The noisy audio goes through FiLM-conditioned \cite{perez2017film} Downsampling Blocks. Then, the signal goes through Upsampling Blocks that receive skip connections from the DBlocks of the same levels. The output of the network is the estimated noise $\mathbf{\epsilon}_{\text{estimated}}$. This bears similarities with the architecture of \cite{kong2021diffwave}, which has a similar succession of blocks with dilated convolutions but no downsampling or upsampling layers, which makes it computationally slow. The architecture of \cite{chen2020wavegrad} has a U-Net-like shape \cite{ronneberger2015unet}, but heavily depends on the spectrogram conditioning and relies on a different noise-conditioning scheme. The $\sigma$-conditioned U-Net architecture seems to retain the advantages of both approaches and is particularly suited for unconditional generation (see Fig.~\ref{fig:architecture}). The details of the architecture are presented in Sect.~\ref{sec:details}. \begin{figure}[h!] \includegraphics[scale=0.28]{figs/architecture.png} \caption{Architecture of the Conditioned U-Net} \label{fig:architecture} \end{figure} \subsection{Noise-conditioned classifier} Our noise-conditioned classifier closely mimics the architecture of our Conditioned U-Net presented in Sect.~\ref{subsec:unet}. The classifier is composed of a succession of FiLM-conditioned DBlocks followed by a projection layer and a softmax. Parameters for this architecture are presented in Sect.~\ref{sec:ncclassif}.
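As a sketch of this noise-level conditioning path (Random Fourier Features followed by an MLP), one possible PyTorch implementation is shown below; the feature and embedding dimensions are illustrative assumptions, not the exact values of our implementation:
\begin{verbatim}
import math
import torch
import torch.nn as nn

class NoiseEmbedding(nn.Module):
    # Encode sigma(t) with Random Fourier Features + an MLP.
    def __init__(self, n_features=32, dim=128):
        super().__init__()
        # Fixed random frequencies, not trained.
        self.register_buffer("B", torch.randn(n_features))
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_features, dim), nn.SiLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, sigma):        # sigma: tensor of shape (batch,)
        proj = 2.0 * math.pi * sigma[:, None] * self.B[None, :]
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.mlp(feats)       # (batch, dim) conditioning vector
\end{verbatim}
The resulting vector is what the FiLM-conditioned blocks would consume to produce per-channel scales and offsets.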
\section{Experiments and results} \label{sec:experiments} \subsection{Dataset} For this work, we use an internal, non-publicly available dataset of drum sounds which has also been used in \cite{nistal2020drumgan}. It is composed of approximately 300,000 one-shot kick, snare and cymbal sounds in equal proportions. The samples have a sample rate of 44.1~kHz and are recorded in mono. We restricted and padded the audio to 21,000 time-steps because most sounds last less than 0.5~s. We used 90\% of the dataset to train our model. \subsection{Models and process} We evaluate the influence of $\sigma(t)$ and of the four $m$-$\sigma$ schedules. The network is trained with a learning rate of $2\cdot10^{-4}$ and the Adam optimizer. In parallel, smoothed weights with an exponential moving average (EMA) rate of 0.999 are computed and saved at each step. For each model, the network is trained for about 120 epochs and the weights are saved every 8 epochs. We generated drum sounds with the regular weights and with the EMA weights, and we observed the same phenomenon as in \cite{song2020improved}: with the regular weights, the quality of the sounds does not necessarily increase with training time, whereas the EMA weights provide better and more homogeneous Fréchet Audio Distance \cite{kilgour2019frechet} (FAD) values during training\footnote{We use the original implementation \cite{kilgour2019frechet} available at \url{https://github.com/google-research/google-research/tree/master/frechet_audio_distance} }. After generating 2700 sounds for each checkpoint of each model, we choose the best checkpoints and generate 27,000 drum sounds for each. It takes 12 hours to generate 27,000 drum sounds on an Nvidia RTX-3090 GPU with an ODE or SDE schedule of 400 steps and batches of 180 sounds per generation (the maximum memory capacity). In comparison, it takes around 5 hours with the scipy solver and only 1.5 hours with a 50-step DDIM discretization, which is faster than real time (27,000 drum sounds represent 3.5 hours of audio). \subsection{Quantitative Results} We report in Tab.~\ref{FADs} the FAD (lower is better) between the 27,000 generated drum sounds and the test set for unconditional generation with the SDE and ODE (with a discretization of 400 steps), a DDIM discretization with 50 steps, and the \texttt{scipy.integrate.solve\_ivp} solver with \texttt{rtol = atol = $10^{-5}$, method='RK45'} parameters. The cos schedule refers to the function $\sigma(t)=\frac{1}{2}[1-\cos((1-s)\pi t)]$ (the red one in Fig.~\ref{fig:sigma}) and the exp schedule corresponds to the function $\sigma(t)=\sqrt{1 - e^{-0.1 t - 9.95 t^2}}$ used in \cite{song2021scorebased} (the blue one in Fig.~\ref{fig:sigma}). Because adding Gaussian noise with a factor $10^{-4}$ is almost imperceptible, we also computed the FAD between the generated samples and a noisy version of the test set in which the sounds are corrupted with a $10^{-4}$ level of Gaussian noise. The DDIM sampling and the scipy solver generate samples that are slightly noisy at an imperceptible level, which is why they perform better on the noisy test set. Moreover, note that the FAD between the test set and the noisy test set is 0.72, which shows that the FAD is a very sensitive metric.
As a consequence, by looking at Tab.~\ref{FADs}, we can only conclude that the "cos schedule" is better adapted to audio data than the "exp schedule" from \cite{song2021scorebased} for equally spaced discretization steps, because it places more steps near $\sigma=0$, which is crucial in order to obtain noise-free sounds. We cannot really say whether some sub-VP schedules are better than others, and we think that such a comparison should be done on image data, where the metrics are more meaningful than in the audio domain. Finally, all models generate kicks, snares and cymbals in equal proportions, but the generated samples are somewhat less diverse than the original dataset. \begin{table*}[] \begin{tabular}{@{}lcccccccc@{}} \cmidrule(l){2-9} & \multicolumn{4}{c|}{Test Set} & \multicolumn{4}{c|}{Noisy Test Set} \\ \cmidrule(l){2-9} & SDE 400 & ODE 400 & Scipy & DDIM 50 & SDE 400 & ODE 400 & Scipy & DDIM 50 \\ \midrule VP exp schedule (as in \cite{song2021scorebased}) & 4.11 & 3.96 & 4.54 & 5.11 & 3.36 & 3.45 & 3.04 & 2.87 \\ \midrule VP cos schedule & 1.29 & 1.10 & 2.82 & 1.56 & 1.76 & 2.06 & 1.84 & 1.75 \\ \midrule sub-VP cos schedule & 1.34 & $\mathbf{0.98}$ & 3.08 & 3.36 & 1.71 & 1.56 & 1.81 & $\mathbf{1.49}$ \\ \midrule sub-VP 1-1 cos schedule & 1.41 & 1.23 & 2.92 & 2.93 & 1.67 & 2.45 & 2.11 & 1.53 \\ \midrule sub-VP 1-2 cos schedule & 1.69 & 1.51 & 1.66 & 5.24 & 2.22 & 2.85 & 1.43 & 3.96 \\ \bottomrule \end{tabular} \caption{FAD comparison (lower is better)} \label{FADs} \end{table*} \subsection{Interactive sound design} Audio samples for all experiments described in this section can be heard on the accompanying website: \url{https://crash-diffusion.github.io/crash/}. \subsubsection{Interpolations} The relative lack of diversity of the unconditional generation is not dramatic, since the model can still perform interactive sound design by modifying existing samples from the dataset. To do so, we apply the forward ODE to an existing sound and obtain its corresponding noise in the latent space of isotropic Gaussians. As presented in Fig.~\ref{fig:interpolation_schema}, we can perform spherical combinations of the latent codes and apply the backward ODE to obtain interpolations. Moreover, the reconstructed sounds (at the left and right of the diagram) are accurate. \begin{figure}[h!] \includegraphics[scale=0.2]{figs/interpolations.png} \caption{Interpolations between two kicks (at the left and right)} \label{fig:interpolations_kick} \end{figure} \subsubsection{Obtaining Variations of a Sound by Noising and Denoising it via SDE} Let us take a sound $\mathbf{x}(0)$. We can noise it to a desired noise level $\sigma(t)$ via $\mathbf{x}(t) = m(t) \mathbf{x}(0) + \sigma(t) \epsilon$ and then denoise it with an SDE discretization from $t$ to $0$. We then obtain variations of the original sound. \subsubsection{Inpainting} We can also perform inpainting on a sound in order to regenerate any desired part. We illustrate this method in Fig.~\ref{fig:inpainting_schema}, where we regenerate 6 endings of a snare sound. \begin{figure}[h!] \includegraphics[scale=0.35]{figs/inpainting_schema.png} \caption{Six Inpaintings on the end of a snare sound} \label{fig:inpainting_schema} \end{figure} This provides an innovative way to generate a variety of plausible sounds starting with the same attack. \subsubsection{Class-Conditioning and Class-Mixing with a Classifier} We trained a noise-conditioned classifier on the 3 classes (kick, snare, cymbal) and used it for class-conditional and class-mixing generation.
Once again, by using the latent representation of a sound, we can regenerate it (via the ODE) with control over its "kickiness", "snariness" or "cymbaliness". \begin{figure}[h!] \includegraphics[scale=0.22]{figs/cymbal_to_kick.png} \caption{Transformation of a cymbal into a kick via class-conditioning ODE} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.19]{figs/snarifying_a_kick.png} \caption{Modifying a kick to make it sound more "snary" via class-mixing} \label{fig:snarification} \end{figure} \section{Conclusion} We presented CRASH, a score-based generative model for raw audio, based on the latest developments in modeling diffusion processes via SDEs. We proposed novel SDEs well-suited to high-resolution drum sound generation, together with an efficient architecture for estimating the score function. We showcased how the many controllable sampling schemes offer new perspectives for interactive sound design. In particular, our proposed \emph{class-mixing} strategy allows the controllable creation of convincing "hybrid" sounds that would be hard to obtain with conventional means. We hope that these new methods will contribute to enriching the workflow of music producers. \bibliographystyle{unsrtnat}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:Introduction} Active safety measures such as Advanced Driver Assistance Systems (ADAS) are an elementary component of road safety, especially with high market penetration~\cite{Lundgren2006}. In order to further increase road safety, the EU regulation 2019/2144~\cite{RegulationEU2019/21442019} obliges all OEMs to install an emergency brake assist (EBA) as well as a lane departure warning system. Hence, as vehicles need to be equipped with the necessary sensors anyway, the main goal is cost efficiency: utilizing as few sensors as possible while offering as many functions as possible. As a consequence, the comfort function adaptive cruise control (ACC) is becoming a default function also in the lower segments. Adaptive cruise control and other SAE-L1 advanced driver assistance systems have been studied extensively, see also~\cite{Eskandarian2003}. For ACC, radar-only as well as radar+camera solutions are often used as sensor configurations. Despite the use of budget sensors, the expectations, especially regarding comfort functions, are high. Ambiguities of the radar sensors due to their low angular resolution are predominant and cannot be resolved completely by the camera. As a consequence, target association becomes increasingly hard, which leads to so-called ghost objects and/or lateral position ambiguities~\cite{Folster2005}. As a result of these ambiguities, false reactions of the system can occur, such as the driver take-over scenarios from a field-operation test investigated in~\cite{Weinberger2001}. The cause is often not only a missed detection of the ACC-relevant vehicle but also a false interpretation of the scenario, i.e., the detected vehicle has been falsely assessed as not relevant due to, for example, a false lane association. Target discrimination focuses on the correct association of vehicles to their corresponding lanes~\cite{Zhang2012,Song2019}. We propose to use the Boids flocking algorithm~\cite{Reynolds1987,Reynolds1999}, introduced by Reynolds to model the motion of bird flocks or fish schools, to model the interaction of traffic participants with each other as well as with the environment. A flocking algorithm has been used in~\cite{Hayashi2016} to derive a control algorithm for multiple non-holonomic cars. Each car was modeled as an individual boid which behaves according to the movement rules of the typical boid model, namely cohesion, alignment, and separation. The proposed controller achieved collision-free paths for all boids; however, the use case was limited to a straight highway, and the assumption that all cars driving in the same direction will converge to the same speed does not hold in reality. In this paper, an individual flock of boids is created to follow each detected vehicle, i.e.~for $N_v$ vehicles, $N_f$ flocks with $N_b$ boids each are generated. The corresponding detected vehicle acts as a leader for the flock to follow. The aforementioned movement rules are applied to each boid of each flock. In addition, a further rule is introduced in order to repel neighboring flocks from each other in the lateral direction, thereby ensuring that the flocks maintain a lateral distance as long as they are within a given longitudinal distance of each other.
Since the boids of a flock are affected not only by the detected lead vehicle but also by the other boids of their own flock as well as by the neighboring flocks, the effect of an object's position uncertainty on target discrimination can be mitigated by using the average lateral distance between flocks. The remainder of this paper is organized as follows: Sec.~\ref{sec:obj_plaus_boids} introduces the Boids flocking algorithm, including the proposed extensions that facilitate object plausibilization. Additionally, the complexity of the proposed flocking algorithm is analyzed with respect to the flock size. Simulation results are presented in Sec.~\ref{sec:numerical_results}. Finally, Sec.~\ref{sec:conclusion} draws the conclusion. \section{Object Plausibilization with the Boids Flocking Algorithm} \label{sec:obj_plaus_boids} \subsection{Boids Flocking Algorithm} \label{sec:boid-flock-algor} Reynolds~\cite{Reynolds1987} introduced three main rules describing the movement of boids as an interaction between the individuals of one flock. The movement of each boid is influenced by \begin{itemize} \item \textbf{Separation}: The tendency of a boid to maintain a certain distance from the other boids within the visible range. \item \textbf{Cohesion}: The tendency of a boid to move to the average position of the boids within the visible range. \item \textbf{Alignment}: The tendency of a boid to align itself with the boids within the visible range with respect to orientation and velocity. \end{itemize} These three basic rules, which are applied to each boid with respect to the flock members within a certain range, are depicted in Figure~\ref{fig:boid_movement_rules_flock}. \begin{figure}[htbp] \centering \subfloat[{\textit Separation}]{ \includestandalone[width=0.31\linewidth,keepaspectratio]{pic/boid_separation} } \subfloat[{\textit Cohesion}]{ \includestandalone[width=0.31\linewidth, keepaspectratio]{pic/boid_cohesion} } \subfloat[{\textit Alignment}]{ \includestandalone[width=0.31\linewidth, keepaspectratio]{pic/boid_alignment} } \caption{Three main rules for boid movement within a flock. The $i$-th boid, for which the rules are calculated, is given in red color. } \label{fig:boid_movement_rules_flock} \end{figure} We define the boids of the $j$-th flock as \begin{align} \label{eq:boid} \setBoidsFlock{j} = \left \lbrace \boid{1,j},\boid{2,j},\ldots \boid{\Nb,j} \right\rbrace \end{align} with $j=1,\ldots,\Nf$, where $\Nf$ is the number of flocks. Each boid $\boid{i,j}$ contains the longitudinal and lateral position as well as the velocity. For the sake of readability, we omit the flock index $j$, as boids do not contain flock-specific information, i.e. $\boid{i} = [\boidP{i}\, \boidV{i}]^{\mathrm{T}}$, with $\boidP{i} = [p_{x,i}\, p_{y,i}]^{\mathrm{T}}$, $\boidV{i} = [v_{x,i}\, v_{y,i}]^{\mathrm{T}}$. As can be seen in Fig.~\ref{fig:boid_movement_rules_flock}, the visible range of a boid (its field-of-view) is modeled as an ellipse, as opposed to the typical circular section (cf.~\cite{Reynolds1999}), in order to account for the fact that vehicles typically drive within the driving lanes. As a consequence, any boid that deviates too far laterally from the swarm is no longer considered by the swarm. The deviating boid can nevertheless perceive the swarm and is influenced by it in its movement.
Therefore, the set of boids which are within the field-of-view of the $i$-th boid is given by \begin{align} \label{eq:set_boids_fov} \setBoidsFoV{i} = \left\lbrace \boid{j} \left| (\boidP{j} - \boidP{i})^{\mathrm{T}} \mathbf{M}^{-1} (\boidP{j} - \boidP{i}) \leq 1 \right. \right\rbrace \end{align} with \begin{align*} \mathbf{M}= \!\left[\!\begin{array}{cc} \cos \varphi_i & - \sin \varphi_i \\ \sin \varphi_i & \cos \varphi_i \end{array}\!\right]\! \left[\!\begin{array}{cc} a^2 & 0 \\ 0 & b^2 \end{array}\!\right] \! \left[ \!\begin{array}{cc} \cos \varphi_i & -\sin \varphi_i \\ \sin \varphi_i & \cos \varphi_i \end{array}\!\right]^{\mathrm{T}}, \end{align*} where the parameters of the ellipse are: \begin{itemize} \item $\varphi_i$, the orientation angle of the boid, given by $\arctan(v_{y,i} / v_{x,i})$; note that the coordinate system according to ISO 8855~\cite{ISO8855_2011} is used, which means that the x-coordinate points in the longitudinal and the y-coordinate in the lateral direction; \item $a$, the length of the first principal axis of the ellipse; \item $b$, the length of the second principal axis of the ellipse. \end{itemize} We denote the cardinality of this subset by $\Nbfov{i} = |\setBoidsFoV{i}|$; note that the $i$-th boid is not included in the set $\setBoidsFoV{i}$, therefore $|\setBoidsFoV{i}| < |\setBoids|$. In the following, the main steering rules are described. \subsection{Movement and interaction rules} \label{sec:movem-inter-rules} \textbf{Separation}: Each boid has a tendency to keep a certain distance from the other boids in the flock, thus avoiding collisions. This behavior ensures that the flock is spread in both the longitudinal and lateral directions, effectively enlarging the field-of-view of the swarm. As described earlier, we assume that vehicles travel mainly within the driving lanes; the separation of the flock should take this into account, in the sense that the boids have a larger separation in the longitudinal than in the lateral direction. However, this is not directly considered in the separation rule but in its weighting factor (see also \eqref{eq:BoidVelUpdate}). Several variations of the separation rule exist; we follow the implementation of~\cite{Hartman2006}: \begin{align} \label{eq:sep_rule} \Rsep = - \sum_{j=1}^{\Nbfov{i}} \boidP{i} - \boidP{j}. \end{align} That is, the position of the $i$-th boid is subtracted from the position of each boid visible to it, and the differences are summed. \textbf{Cohesion}: Each boid is attracted towards the perceived center of the flock. This attraction counteracts the separation rule and prevents the boids from spreading throughout the space. Otherwise, the boids would quickly lose the interaction with each other due to their restricted field-of-view. The cohesion force is calculated by averaging the positions of the $\Nbfov{i}$ visible boids and subtracting the position of the $i$-th boid from the result: \begin{align} \label{eq:coh_rule} \Rcoh = \frac{1}{\Nbfov{i}} \sum_{j=1}^{\Nbfov{i}}\boidP{j} - \boidP{i}. \end{align} In addition to the attraction of each boid towards the perceived center of mass of the visible swarm, further rules have been introduced in~\cite{Reynolds1999}, with the rule \textbf{Leader Following} of particular interest, as it describes the tendency of a boid to move closer to a designated \textit{leader} without actually overtaking it. In our proposed approach, each detected vehicle is a natural designated leader, which is followed by the boids of the corresponding flock.
Hence, the attracting force of the leader is given by \begin{align} \label{eq:coh_leader} \Rcohl = \boidP{l} - \boidP{i}. \end{align} \textbf{Alignment}: Since every boid of a flock is supposed to follow the same designated leader, it stands to reason that eventually every member of the flock should have the same velocity. The alignment rule is calculated similarly to the cohesion rule: the average of the perceived velocities is calculated first, and the velocity of the $i$-th boid is subtracted from it: \begin{equation} \label{eq:ali_rule} \Rali = \frac{1}{\Nbfov{i}} \sum_{j=1}^{\Nbfov{i}}\boidV{j} - \boidV{i}. \end{equation} \subsection{Flock repulsion rule} \label{sec:flock-repulsion-rule} An interaction between flocks, described by the \textbf{flock repulsion} behavior, is introduced in this paper. The idea is that the repulsive force of a neighboring flock supports a clear separation of the boids, and thus of the flocks, so that target discrimination is facilitated even if the object positions are imprecise. The flock repulsion is denoted by $\Rrep$ and is calculated in two steps. First, the perceived center of the neighboring flock from the point of view of the $i$-th boid is calculated by \begin{align} \label{eq:center-flock-rep} \rrep{i}(:,k) = \frac{1}{\Nbfov{k}} \sum_{j=1}^{\Nbfov{k}} \boidP{j}. \end{align} Note that multiple flocks, for example on the left and right lanes, can be perceived by one boid. It follows that the perceived centers of mass of the neighboring flocks $\rrep{i}$ are represented as a matrix of size $[2 \times \Nfx{i}]$, with $\Nfx{i}$ the number of flocks visible to the $i$-th boid. The repulsive force can then be calculated with an exponential function whose value decreases exponentially with increasing distance between the flocks: \begin{align} \label{eq:flock_repulsion} \Rrep = \pm \exp\left(\grep - |\boidP{i} - \rrep{i}|\right), \end{align} where the sign $\pm$ depends on whether the flock is on the left or right side, respectively. The value of the factor $\grep$ was chosen such that the repulsive force is close to zero when the flocks are approximately one lane width apart. In contrast to the other rules, the flock repulsion rule has a strong effect on the position of the boids when the distance is small. The rule is depicted exemplarily in Fig.~\ref{fig:flock-repulsion}. \begin{figure}[htbp] \centering \includestandalone[width=\linewidth,keepaspectratio] {pic/boid_repulsion} \caption{Exemplary illustration of the flock repulsion rule for two flocks shown in grey and black color. The red boid, which belongs to the grey flock, uses the average position of the observed black flock.} \label{fig:flock-repulsion} \end{figure} Although the rule is formulated generally in both the longitudinal and lateral directions, the separation of flocks is only carried out in the lateral direction, which is accounted for in the weighting factor $\wrep$ explained in the following.
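To make the interplay of the rules concrete, a compact Python sketch of a single boid update is given below. The weights follow Table~\ref{tab:para_usecase}; the neighbor positions and velocities are assumed to have been preselected with the elliptical field-of-view test of Eq.~\eqref{eq:set_boids_fov}, and the combined velocity and position update is formalized in the next subsection:
\begin{verbatim}
import numpy as np

W_SEP, W_COH = np.array([0.15, 0.07]), np.array([0.4, 0.4])
W_LEAD, W_ALI = np.array([0.4, 0.2]), np.array([0.3, 0.3])
G_REP = 1.5

def in_fov(p_i, phi_i, p_j, a, b):
    # Elliptical field-of-view test for boid i observing position p_j.
    c, s = np.cos(phi_i), np.sin(phi_i)
    R = np.array([[c, -s], [s, c]])
    M = R @ np.diag([a ** 2, b ** 2]) @ R.T
    d = p_j - p_i
    return d @ np.linalg.solve(M, d) <= 1.0

def boid_update(p_i, v_i, nb_p, nb_v, p_lead, flock_centers):
    # nb_p, nb_v: positions/velocities of visible flock mates (n x 2);
    # flock_centers: perceived centers of visible neighboring flocks.
    r_sep = -np.sum(p_i - nb_p, axis=0)           # separation
    r_coh = nb_p.mean(axis=0) - p_i               # cohesion
    r_lead = p_lead - p_i                         # leader following
    r_ali = nb_v.mean(axis=0) - v_i               # alignment
    r_rep = np.zeros(2)                           # flock repulsion
    for c in flock_centers:                       # lateral component only
        sign = np.sign(p_i[1] - c[1])
        r_rep[1] += sign * np.exp(G_REP - abs(p_i[1] - c[1]))
    v_new = (v_i + W_COH * r_coh + W_LEAD * r_lead + W_ALI * r_ali
             + W_SEP * r_sep + r_rep)
    return v_new, p_i + v_new                     # velocity, candidate position
\end{verbatim}
The candidate position returned here is subsequently subjected to the Dubins-path reachability check described below.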
\subsection{Position Update} \label{sec:position-update} The five behavioral rules presented above are combined into an updated velocity vector $\boidVupdate{i}$ of the $i$-th boid and added to the velocity vector $\boidV{i}$ of the previous cycle: \begin{align} \label{eq:BoidVelUpdate} \boidVupdate{i} = & \boidV{i} + \wcoh \Rcoh + \wcohl \Rcohl \nonumber \\ & + \wali \Rali + \wsep \Rsep + \wrep \Rrep^{\mathrm{T}}, \end{align} where the weighting factor $\wrep$ for the repulsive behavior is given by \begin{align} \label{eq:weight-rep-force} \wrep = \left[ \begin{array}{cccc} w_{x,1} & w_{x,2} & \ldots & w_{x,\Nfx{i}} \\ w_{y,1} & w_{y,2} & \ldots & w_{y,\Nfx{i}} \end{array} \right] \end{align} with $w_{x,:} := 0$ and $w_{y,:} := \mathrm{sgn}(p_{y,i} - \rrep{i}(y,:))$. The remaining weighting factors are optimized heuristically, and the chosen values can be found in Table~\ref{tab:para_usecase}. Given the updated velocity vector, the new position of the $i$-th boid can be calculated straightforwardly by \begin{align} \label{eq:boid-pos-update} \boidPupdate{i} = \boidP{i} + \boidVupdate{i}. \end{align} \subsection{Life-cycle of Boids} \label{sec:spawning-boids} Unlike other publications using the Boids flocking algorithm, this paper assumes that boids have a rather short lifetime: a boid is spawned and will eventually cease to exist within a few hundred update cycles, where each cycle is assumed to have a duration of about $80$~ms. As soon as a lead vehicle is consistently tracked, boids are spawned by the lead vehicle every $100$~ms until $\Nb$ boids per flock exist. The position of the lead vehicle as well as its lateral and longitudinal velocities from the previous cycle are provided to the new boid and serve as initial values. \subsection{Reachability analysis with Dubins paths} \label{sec:reach-analys-dubins} Simulating the movement of the boids of each flock according to~\eqref{eq:BoidVelUpdate} and~\eqref{eq:boid-pos-update}, it can be seen that the resulting path of each boid is only $G^0$-continuous and that boids may ``jump'' sideways. In order to constrain the movement of the boids, keeping in mind that boids shall follow vehicles with non-holonomic constraints, the reachability of the updated position of a boid is checked by a path generated using a Dubins path~\cite{Dubins1957}. Dubins paths consist only of straight segments (`S') and curve segments with a restricted radius, i.e. left (`L') and right (`R') curves. The minimum radius $\rmin$ is given by the longitudinal velocity $v$ and a fixed maximum lateral acceleration $\alatmax$: \begin{align*} \rmin = \frac{v^2}{\alatmax}. \end{align*} The maximum lateral acceleration for each boid is chosen to be $9~\siAms$, which results in a radius of about $60$~m at a velocity of $23$~m/s. In order to calculate a Dubins path from the current position to the updated target position of a boid, the start and target poses need to be determined, where the orientation angles of the start pose ($\varphi_i$) and target pose ($\varphi_i^\prime$) of a boid are calculated by \begin{align} \label{eq:boidOrientation} \varphi_i = \atantwo\left(\frac{\boidV{y,i}}{\boidV{x,i}}\right),\; \varphi_i^\prime = \atantwo\left(\frac{\boidVupdate{y,i}}{\boidVupdate{x,i}}\right) \end{align} which yields $\boidPose{i} = [p_{x,i}, p_{y,i}, \varphi_i]^{\mathrm{T}}$. With the given start and target poses as well as the minimum radius, the resulting Dubins path is evaluated.
Exemplary evaluations are depicted in Figure~\ref{fig:dubins_path_eval}, where valid paths are shown in green and invalid paths in red. \begin{figure}[htbp] \centering \subfloat[Examples for directly reachable paths.]{ \includestandalone[width=0.45\linewidth, keepaspectratio]{pic/dubins_path}} \quad \subfloat[Examples for paths only reachable via detours.]{ \includestandalone[width=0.45\linewidth, keepaspectratio]{pic/dubins_path_nok}} \caption{Exemplary illustration of resulting Dubins paths.} \label{fig:dubins_path_eval} \end{figure} It can be seen that as soon as detours are required to reach the target pose, the pose is discarded due to its position and/or orientation. Instead, in an iterative process, the target pose is changed in both position and orientation (and thus in lateral and longitudinal velocity) within a small radius until the target pose can be reached without detours or until a maximum number of iterations is reached; this constraint helps to reduce false reactions of driver assistance systems. \section{Numerical Results} \label{sec:numerical_results} A three-lane highway scenario with three target vehicles, one driving in each lane, is considered in the following; it is shown in Figure~\ref{fig:usecase}. The course includes gentle curves as well as straight sections. The ego vehicle and the preceding vehicle in the same lane (ID:2) are driving with the same velocity of $v_{\mathrm{ego}} = v_2 = 25$~m/s at a distance of about $30$~m, which results in a time gap of $T_\mathrm{G} = d_2 / v_2 = 1.2$~s, a typical headway of an ACC system. A slightly slower vehicle (ID:3) is driving in the right lane, while a faster vehicle (ID:1) is approaching the ego vehicle from behind, eventually overtaking the ego vehicle and the other two vehicles. \begin{figure}[htbp] \centering \includestandalone[width=\linewidth, keepaspectratio]{pic/sim_setup} \caption{Use case description} \label{fig:usecase} \end{figure} The parameters of the simulation setup are summarized in Table~\ref{tab:para_usecase}. A standard sensor setup for advanced driver assistance systems is chosen for the ego vehicle, comprising a long-range radar as well as a monocular camera. The detections of each sensor are subsequently fused, providing qualified tracked objects. These tracked objects serve as potential leaders for creating the individual flocks of boids. As the chosen velocities of the target vehicles (ID:2) and (ID:3) differ by only $2$~m/s, the radar reflections of the two vehicles are ambiguous and false associations are likely. As a consequence, the lateral position of a tracked object may be shifted in the lateral direction or, worse, the reflections of two vehicles may be merged into one tracked object.
\begin{figure}[tp] \centering \subfloat[$N_f=3$, $N_b=3$]{ \includegraphics[width=0.479\linewidth, keepaspectratio]{pic/3veh_N3_ego_left_vp33.pdf} \includegraphics[width=0.479\linewidth, keepaspectratio]{pic/3veh_N3_ego_right_vp33.pdf} \label{fig:violon_a}} \\ \subfloat[$N_f=3$, $N_b=7$]{ \includegraphics[width=0.479\linewidth, keepaspectratio]{pic/3veh_N7_ego_left_vp33.pdf} \includegraphics[width=0.479\linewidth, keepaspectratio]{pic/3veh_N7_ego_right_vp33.pdf} \label{fig:violin_b}} \\ \subfloat[$N_f=3$, $N_b=14$]{ \includegraphics[width=0.479\linewidth, keepaspectratio]{pic/3veh_N14_ego_left_vp33.pdf} \includegraphics[width=0.479\linewidth, keepaspectratio]{pic/3veh_N14_ego_right_vp33.pdf} \label{fig:violin_c}} \caption{\it Split-violin plots of the distribution of the lateral distance between vehicles on the ego and left lanes (Ego-Left) as well as vehicles on the ego and right lanes (Ego-Right). Lateral distance of boids in blue (left violin), lateral distance of tracked objects (i.e. inputs) in red (right violin).} \label{fig:violin_sim_plots} \end{figure} The intention of the boids is not to determine a better estimate of the ground-truth position than the tracked objects, but to mitigate the shortcomings of the sensors by providing additional information about the relative position of the vehicles. Hence, the relative distance between the vehicles is taken as a measure of target discrimination. The distance is taken from the two pairs of target vehicles, whereby `Ego-Left' refers to the target combination (ID:1 and ID:2) and `Ego-Right' to the target combination (ID:2 and ID:3). For the visualization of the numerical results, violin plots~\cite{Hintze1998} are chosen, combining a boxplot with the probability distribution, which allows for a better comparison of the two setups. The numerical results are shown in Figure~\ref{fig:violin_sim_plots} for increasing swarm sizes. Each subplot compares the distribution of the lateral separation determined from the tracked objects (in red) and from the boids (in blue). The median of the distribution is given by the black horizontal line in the center of the notch, whereas the mean value is denoted by the black star. Correspondingly, the first and third quartiles are represented by the borders of the dark-colored area, while the light-colored region ranges from the first to the 99th percentile. As expected from the setup of the scenario, the lateral separation of the pair `Ego-Right' is worse than that of the pair `Ego-Left' due to the smaller relative velocity and, therefore, the increased difficulty of target discrimination in the environment model. The smallest swarm size, with $\Nb=3$ (cf. Fig.~\ref{fig:violon_a}), shows a performance that is inferior to that of the tracked objects. This is evident from the smaller median of the distribution (2.4~m compared to 3~m) as well as the stronger outliers for the Ego-Right pair. With increasing swarm size, e.g. $\Nb=7$ (cf. Fig.~\ref{fig:violin_b}), the median of the lateral separation for the boids increases to slightly above 3~m for the Ego-Right pair, approaching the true lateral separation of 3.7~m. The medians for the Ego-Left pair for both the boids and the tracked objects are comparatively close together, which was expected since the target vehicles (ID:1 to 3) drive parallel to each other only for a short time due to the higher relative velocity. However, the outliers of the tracked objects below a lateral separation of 2~m could be mitigated.
Interestingly, the results do not keep improving as the swarm size is increased further, see Fig.~\ref{fig:violin_c} for $\Nb=14$. The \textit{separation rule} forces the boids to spread out along the longitudinal axis, which leads to an increased distance between the first and the last boid and thus, due to the limited field of view of each boid, to a decreased influence on the average swarm position. \begin{table}[t] \centering \begin{tabular}{|ll|} \hline Velocity ego vehicle ID:0 & $v_\mathrm{ego} = 25$~m/s \\ \hline Velocity black car ID:1 & $v_1 = 33$~m/s \\ \hline Velocity black car ID:2 & $v_2 = 25$~m/s \\\hline Velocity truck ID:3 & $v_3 = 23$~m/s\\ \hline Lateral separation ID:1-ID:2 & $d_{12} = 3.2$~m \\ \hline Lateral separation ID:2-ID:3 & $d_{23} = 3.7$~m \\ \hline \hline $\wsep$ & $[0.15, 0.07]^{\mathrm{T}}$ \\ \hline $\wcoh$ & $[0.4, 0.4]^{\mathrm{T}}$ \\ \hline $\wcohl$ &$[0.4, 0.2]^{\mathrm{T}}$ \\ \hline $\wali$ & $[0.3, 0.3]^{\mathrm{T}}$ \\ \hline $\grep$ & 1.5 \\ \hline \end{tabular} \caption{\it Parameters of the considered use case and of the flocking algorithm.} \label{tab:para_usecase} \end{table} \section{Conclusions} \label{sec:conclusion} The Boids flocking algorithm has been evaluated for target discrimination in driver assistance systems. By creating an individual flock for each detected vehicle, together with the presented movement rules for the boids, the simulation results illustrate that the separation of individual vehicles is consistently improved, even though the tracked objects were at times less than $0.5$~m apart. The average lateral position of a swarm can be used either for lane association or for improving lane change detection in low-cost driver assistance systems. So far, only moving traffic participants have been used to create new flocks of boids. In the future, static infrastructure, such as guard rails and lane markings, shall also be used to create flocks of boids. In combination with the flock repulsion rule, it will be investigated whether the target separation of vehicles driving in parallel can be improved even further. \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Even at the minute scales of distance and duration examined with increasingly discriminating instruments, spacetime still appears to be smooth and structureless. However, a variety of models of quantum gravity posit that spacetime is, on Planck scales, subject to quantum fluctuations. Hence, if probed at a small enough scale, spacetime will appear complicated -- something akin in complexity to a turbulent froth that Wheeler~\cite{whe63} dubbed ``quantum foam,'' also known as ``spacetime foam.'' The detection of spacetime foam is important for constraining models of quantum gravity. If a foamy structure is found, it would require at least a probabilistic rather than deterministic nature of spacetime itself, as the paths taken by different photons emitted by a distant source would not be identical to one another. In this commentary paper, we discuss the use of astronomical observations of distant sources to test models of quantum gravity. We concentrate particularly on a recent paper by \cite{tam}, published in Astronomy \& Astrophysics in September 2011. Some of the points discussed below were already addressed in our own paper \cite{CNFP}, which was published nine months earlier. The present paper is organized as follows. In \S 2 we discuss the nature of quantum fluctuations and the proper distance measure to use. This has important implications for the predicted size of the seeing disk, and hence the constraints one can put on spacetime foam models given a non-detection, as we then discuss in \S 3. In \S 4 we discuss practicalities of carrying out these tests. These include the need to characterize the point-spread function (PSF) of a given telescope in terms that can be compared to the profile observed in a distant, unresolved source, such as a quasar or supernova. Finally, in \S 5 we close with a summary. \section{The Nature of Quantum Fluctuations and the Predicted Seeing Disk} To quantify the problem, let us recall that, if spacetime undergoes quantum fluctuations, the intrinsic distance to an object will vary, thus producing an intrinsic limitation to the accuracy with which one can measure a macroscopic distance. If we denote the fluctuation of a distance $l$ by $\delta l$, we expect $\delta l \gtrsim N l^{1 - \alpha} l_P^{\alpha}$ (see \cite{ng03b}), where $N$ is a numerical factor $\sim 1$ and $l_P = \sqrt{\hbar G/c^3}$ is the Planck length, the characteristic length scale in quantum gravity. The fluctuation in this expression, $\delta l$, must be defined with reference to the macroscopic distance, $l$ (rather than locally). The parameter $\alpha \lesssim 1$ specifies the different spacetime foam models. Distance fluctuations $\pm \delta l$ imply phase fluctuations $\pm \Delta \phi = \pm 2 \pi \delta l / \lambda$ (see \cite{lie03,rag03,ng03a}). One practical method of searching for these fluctuations is to look for ``halos'' in images of distant, unresolved sources, which can be produced by fluctuations in the direction of the local wave-vector, $\pm \delta \psi \equiv \pm \Delta \phi /(2 \pi) = \pm \delta l / \lambda$. The point is that, due to quantum foam-induced fluctuations in the phase velocity of an incoming light wave from a distant point source, the wave front itself develops a small-scale ``cloud of uncertainty'' equivalent to a ``foamy'' structure, because parts of the wave front lag while other parts advance. This results in the wave vector, upon detection, acquiring a jitter in direction with an angular spread of the order of $\delta \psi$.
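To give a feel for the magnitudes involved, the following sketch evaluates $\delta\psi = N\, l^{1-\alpha} l_P^{\alpha}/\lambda$ for a source at a Hubble-scale distance. The numbers are purely illustrative; the proper cosmological distance measure is introduced below.
\begin{verbatim}
import numpy as np

l_P = 1.62e-35   # Planck length [m]
lam = 4.0e-7     # observed wavelength, ~400 nm [m]
l   = 1.3e26     # source distance, of order the Hubble distance [m]
N   = 1.0

for alpha in (0.6, 2.0/3.0, 1.0):
    dpsi = N * l**(1.0 - alpha) * l_P**alpha / lam
    print("alpha = %.3f : delta psi ~ %.2e rad" % (alpha, dpsi))
\end{verbatim}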
In effect, spacetime foam creates a ``seeing disk'' whose angular diameter is \begin{equation} \delta \psi = N \big(\frac {l}{\lambda}\big)^{1 - \alpha} \big(\frac {l_P}{\lambda}\big)^{\alpha}. \label{eq0} \end{equation} We note that the magnitude of $\delta \psi$ as given in the above equation is consistent with our assumption of isotropic fluctuations, which implies comparable sizes of the wave-vector fluctuations perpendicular to and along the line of sight (see \cite{chr06}). For a telescope or interferometer with baseline length $D_{tel}$, this means that the dispersion ($\sim \delta \psi$, normal to the wave front) will be recorded as a spread in the angular size of a distant point source, causing a reduction in the Strehl ratio and/or the fringe visibility when $\delta \psi \sim \lambda / D_{tel}$ for a diffraction-limited telescope. The fundamental uncertainties caused by spacetime foam are spatial, not angular, even though they result in a ``seeing disk.'' Strictly speaking, the models specify the uncertainty $\pm \delta l$ in the distance between a source and observer along the line of sight. This is because $\delta l$ is defined by the uncertainty in the distance measured by light travel times. Of course, there is also a corresponding uncertainty in the transit time for light from source to observer, $\delta t \sim \delta l/c$. Furthermore, since the globally averaged wavefront is effectively spherical, globally averaged photon trajectories will deviate from the direct line of sight by an angle less than or equal to $\delta l/l$. As a direct consequence, the expected blurring of distant images is {\it not} the result of a random walk of small-angle photon scatterings along the line of sight, since the uncertainties in the derived directions of the local wave vectors must result in the same spatial uncertainty, $\delta l$ (no matter how many wave crests pass the observer's location). For example, in the ``thin screen approximation,'' the accumulated transverse path of multiply scattered photons would be approximated as $(\delta \psi)l \gg \delta l$. This would lead to expected time lags, $\delta \psi (l/c) \gg \delta l/c$, in conflict with the basic premises of spacetime foam models. The above background, given in greater detail in our recent paper \cite{CNFP}, illustrates why, when measuring the length $l$ for sources at cosmological distances, the appropriate distance measure to use is the line-of-sight comoving distance (see \cite{hog00}) given by \begin{equation} D_C(z) = D_H I_E(z) \label{eq1} \end{equation} where \begin{equation} I_E (z)= \int_0^{z} {\frac{dz'}{E(z')}}, \end{equation} and \begin{equation} E(z) = \sqrt{\Omega_M (1+z)^3 + \Omega_k (1 + z)^2 + \Omega_{\Lambda}}, \label{eq2} \end{equation} with $D_H = c/H_0$ being the Hubble distance, and $\Omega_M, \Omega_k$ and $\Omega_{\Lambda}$ being the (fractional) density parameters associated with matter, curvature and the cosmological constant, respectively. Consistent with the latest WMAP + CMB data, we will use $\Omega_M = 0.25, \Omega_{\Lambda} = 0.75$ and $\Omega_k = 0$, and for the Hubble distance we will use $D_H = 1.3 \times 10^{26}$ meters.
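For these parameters, the comoving distance can be evaluated numerically as in the sketch below (SciPy's quad handles the integral; $z=6.3$ matches the highest-redshift source considered later in the text).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Omega_M, Omega_k, Omega_L = 0.25, 0.0, 0.75
D_H = 1.3e26  # Hubble distance [m]

def E(z):
    return np.sqrt(Omega_M*(1 + z)**3 + Omega_k*(1 + z)**2 + Omega_L)

def I_E(z):
    return quad(lambda zp: 1.0/E(zp), 0.0, z)[0]

z = 6.3
print("I_E(%.1f) = %.3f,  D_C = %.3e m" % (z, I_E(z), D_H*I_E(z)))
\end{verbatim}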
\section{Predicting the Halo Size} In terms of the comoving distance, for the various models of spacetime foam (parametrized by $\alpha$), the equivalent halo size is given by \begin{equation} \delta \psi = {\frac{N (1 - \alpha) l_P^{\alpha} D_H^{1 - \alpha} I(z,\alpha)}{\lambda_o}}, \label{eq3} \end{equation} with \begin{equation} I(z,\alpha) = \int_0^z {\frac {dz' (1+z')}{E(z')}} I_E(z')^{-\alpha}, \label{eq4} \end{equation} where the factor $(1 + z')$ in the integral corrects the observed wavelength $\lambda_o$ back to the wavelength $\lambda (z')$ at redshift $z'$. That is, $\lambda (z') = \lambda_o/(1 + z')$. \begin{figure} \centering \includegraphics[width=9cm]{fig1.ps} \caption{\label{fig:det} The detectability of various models of foamy spacetime for existing and planned telescopes. We show the diagonal tracks for halo size $\delta \psi$ for an unresolved, $z=6.3$ source, using the comoving distance [eq. (5), dashed lines], naive application of the luminosity distance [i.e., not redoing the integral $I(z,\alpha)$ as per equation (9), dotted lines], and correct application of the luminosity distance (dash-dot lines). Tracks are shown for $\alpha=0.6, 2/3$, and $N=1.8$. See \S\S 2,3 for discussion. It appears to us that \cite{tam} used the phase uncertainty $\Delta \phi = 2 \pi \delta \psi$ as a measure of halo size, which would exaggerate the expected halo size by nearly an order of magnitude. This displacement would make it appear that quantum foam may be easily tested by HST imaging, which it is not.} \end{figure} We have used these results to produce Figure 1. The diagonal lines in Figure 1 show predictions for the size of the seeing disk for different models of spacetime foam, for a source at redshift $z=6.3$, which corresponds to the highest-redshift quasar examined by \cite{tam}. We note that $\delta \psi$ in Figure 1 is a factor of $2 \pi$ smaller than the phase, $\Delta \phi$, which was used incorrectly by \cite{tam} to calculate the expected halo size. In the case of a non-detection of angular broadening, the region above the diagonal line for a given $\alpha$ may be excluded. The discussion above illustrates the importance of a correct understanding of the seeing disk caused by spacetime foam as a spatial, rather than angular, effect, thus requiring the use of the comoving distance. Figure 1 also shows how the prediction (for $\delta \psi$, not $\Delta \phi$) changes if one were to incorrectly model the seeing disk as being the result of angular fluctuations, and hence use the luminosity distance, \begin{equation} D_L(z)=(1+z)D_C(z) = (1+z)D_H I_E(z), \end{equation} rather than the comoving distance. This is the assumption made by \cite{tam} as well as \cite{ste07}. As an illustration of the cosmological effects, we use equation (7) to calculate the equivalent halo size that one would predict {\it if one incorrectly} used the luminosity distance. To do this we make use of the last part of equation (7) as our $l'$, in which case \begin{equation} dl'= dD_L(z') = dz' D_H\int_0^{z'} {\frac{dz''}{E(z'')}} + {\frac{(1+z')D_H dz'}{E(z')}}. \end{equation} The result is that in calculating $\delta\psi$ one cannot simply use the luminosity distance in equation (5) and multiply it by $(1+z)$. Instead one must replace $I(z,\alpha)$ in equation (6) with the following: \begin{equation} I_2(z,\alpha)=\int_0^z dz'(1+z')\big[{{(1+z')}\over{E(z')}}+I_E(z')\big]\big[(1+z')I_E(z')\big]^{-\alpha}. \end{equation} Unfortunately, this naive substitution is precisely what \cite{tam} do (their equations (2) and (3)).
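The two integrals can be evaluated side by side, as in the sketch below, which implements equations (6) and (9) directly (the lower integration limit is offset slightly from zero to avoid the integrable endpoint singularity of $I_E(z')^{-\alpha}$); the printed values indicate how strongly the choice of distance measure affects the predicted halo size for a $z = 6.3$ source.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Omega_M, Omega_L = 0.25, 0.75
D_H, l_P = 1.3e26, 1.62e-35          # [m]
N, alpha, lam_o, z = 1.8, 2.0/3.0, 4.0e-7, 6.3
eps = 1e-6

def E(u):
    return np.sqrt(Omega_M*(1 + u)**3 + Omega_L)

def I_E(u):
    return quad(lambda v: 1.0/E(v), 0.0, u)[0]

def I_comoving(z, a):                 # the integral of equation (6)
    return quad(lambda u: (1 + u)/E(u)*I_E(u)**(-a), eps, z)[0]

def I_luminosity(z, a):               # the integral of equation (9)
    return quad(lambda u: (1 + u)*((1 + u)/E(u) + I_E(u))
                * ((1 + u)*I_E(u))**(-a), eps, z)[0]

pref = N*(1 - alpha)*l_P**alpha*D_H**(1 - alpha)/lam_o
print("delta psi (comoving)  : %.3e rad" % (pref*I_comoving(z, alpha)))
print("delta psi (luminosity): %.3e rad" % (pref*I_luminosity(z, alpha)))
\end{verbatim}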
We can use the above formalism to estimate how this affects their quoted constraints. In Figure 1 we have overplotted tracks for the application of the luminosity distance, both with and without redoing the integral $I(z,\alpha)$. As can be seen in Figure 1, the use of the incorrect distance measure causes a rather large miscalculation of the expected halo size, exaggerating it by a factor of about 20 at a given wavelength. Furthermore, because the parallel set of diagonal lines in Figure 1 represents trajectories for $\delta\psi$ versus $\lambda$ that are specified by $\alpha$, correcting the halo size downward also reduces the limiting value of $\alpha$ that can be determined from observations. \cite{tam} claim that current data exclude models with $\alpha < 0.68$ ($a_0 \sim 1$, including the holographic model, which has $\alpha=2/3$) (the ``red zone'' in their Figure 5). However, by using the correct comoving distance, we find that their limit for excluding quantum foam models should be reduced by $\Delta \alpha = 0.021$, much more consistent with the limit of $\alpha \sim 0.65$ previously established by \cite{CNFP}. \section{Observing Practicalities: Strehl Ratio and PSF} In conventional imaging the best way to characterize the halo is in terms of the observed Strehl ratio. This is defined as the ratio of the observed peak intensity from a point source to the theoretical maximum peak intensity of a perfect telescope working at its diffraction limit. As can be seen by reference to Fig. 1, quasars are expected to be barely resolved in {\it HST} observations, and the Strehl ratio gives a concrete way to quantify how unresolved they are. This comparison must be done with reference to known stars in one's image, because the PSF of the {\it HST} varies significantly with position on the focal plane (and hence with each individual camera). In \cite{CNFP} we approximated the Strehl ratio as \begin{equation} S_\mathrm{Obs} = S_M \exp \left[-(\sigma_I^2 + \sigma_{\psi}^2)\right] \label{eq5} \end{equation} where $S_M \leq 1$ represents a degradation of the observed Strehl ratio due to masking effects, $\sigma_I$ represents uncorrelated wavefront errors induced by the instrumentation (i.e., telescope plus instruments) and $\sigma_{\psi}$ represents uncorrelated wavefront ``errors'' induced by spacetime foam. Both of these dispersions are expressed in units of the telescope's diffraction limit, $\lambda / D_{tel}$. A similar treatment is adopted in \cite{tam}, along with a superficially similar procedure, although it should be noted that they do not publish a list of the comparison stars used to compute $\sigma_I$ (as we did explicitly in \cite{CNFP}). This omission makes it difficult to reproduce their results. If we follow this prescription, we can then define the spacetime foam degraded Strehl ratio $S_{\psi}$ as $S_{\psi} = \exp (-\sigma_{\psi}^2)$, where $\sigma_{\psi}$ is $\delta \psi$ divided by $\lambda / D_{tel}$. Provided the comoving distance is used, as argued in \S 2, we then obtain for $\sigma_{\psi}$ \begin{equation} \sigma_{\psi} = {\frac{N (1 - \alpha) l_P^{\alpha} D_H^{1 - \alpha} I(z, \alpha) D_{tel}}{\lambda_o^2}}. \label{sigma} \end{equation} This approximation, of course, breaks down when $\sigma_{\psi} \sim 1$, i.e., when the wave front angular dispersion is comparable to the telescope's angular resolution.
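A self-contained sketch of this estimate for an {\it HST}-sized aperture follows; the parameter values ($N=1.8$, $\alpha=2/3$, $D_{tel}=2.4$~m) mirror the cases discussed in the text, and the result is only meaningful while $\sigma_{\psi} \lesssim 1$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Omega_M, Omega_L = 0.25, 0.75
D_H, l_P, D_tel = 1.3e26, 1.62e-35, 2.4   # [m]
N, alpha, z, eps = 1.8, 2.0/3.0, 6.3, 1e-6

def E(u):
    return np.sqrt(Omega_M*(1 + u)**3 + Omega_L)

def I_E(u):
    return quad(lambda v: 1.0/E(v), 0.0, u)[0]

def I(z, a):                              # equation (6) again
    return quad(lambda u: (1 + u)/E(u)*I_E(u)**(-a), eps, z)[0]

for lam_o in (4.0e-7, 8.0e-7):            # 400 nm and 8000 Angstrom
    sig = N*(1 - alpha)*l_P**alpha*D_H**(1 - alpha)*I(z, alpha)*D_tel/lam_o**2
    print("lambda = %.0e m: sigma_psi = %.3g, S_psi = %.3f"
          % (lam_o, sig, np.exp(-sig**2)))
\end{verbatim}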
A fully parametrized version of the resultant Strehl ratio is then \begin{equation} S_{\psi} = \exp \left({\frac{- N^2 (1 - \alpha)^2 l_P^{2\alpha} D_H^{2(1 - \alpha)} I^2 (z,\alpha) D_{tel}^2}{ \lambda_o^4}} \right). \label{eq:SR} \end{equation} \begin{figure} \centering \includegraphics[width=9cm]{strehl_phi0.667_1.80.eps} \includegraphics[width=9cm]{Strehl.eps} \caption{Expected Strehl ratio as a function of redshift (top) and wavelength (bottom). In both plots, we assume $\alpha=2/3$ and $N=1.8$. In the top figure we examine the change in Strehl ratio expected for a point source with varying redshift, $z$, for an observed wavelength of 400 nm. In the bottom plot, we specifically show the case of a $z=6.3$ quasar, as examined by \cite{tam}, for an observed wavelength of 400 nm. As in Figure 1, the dashed line refers to the use of the comoving distance, the dotted line to the naive use of the luminosity distance, and the dash-dot line to the treatment in equation (9).} \end{figure} However, just as with the expected halo size, the use of the luminosity distance drastically affects this expression. We cannot simply replace $D_C$ in equation (12) by $D_L$; instead, $I(z,\alpha)$ must also be replaced with $I_2(z,\alpha)$ (equation (9)). The naive substitution made in \cite{tam} overestimates the magnitude of the exponential argument, causing a corresponding spurious reduction in the Strehl ratio, consistent with the discussion following equation (9). In Figure 2, we show the result of this error. As can be seen, even in the case of a source at $z=6.3$ -- the highest-redshift source considered by \cite{tam} -- the effects of spacetime foam simply are not detectable in {\it HST} observations. From Figure 2, as was pointed out in \cite{CNFP}, it is not surprising that the effects of spacetime foam are likely not detectable in HST images cross-referenced with high-redshift SDSS quasars, because the only Hubble images from the SDSS sample are in the near-IR band. At a wavelength of 8000 \AA, typical of the observations analysed in \cite{tam}, which employed the ACS with the F775W and F850LP filters, the expected Strehl ratio is $S_{\psi}> 0.98$ for both the comoving distance and a naive application of the luminosity distance -- i.e., just using eq. (7) and not including the modified integral $I_2$ (equation (9)). Even if both of these factors are included, we still expect $S_\psi > 0.95$, at most just a 5\% reduction in the measured Strehl ratio with respect to that of the instrument. By comparison, in our paper (\cite{CNFP}, Table III), we used comparison stars for the HUDF quasars to measure the {\em instrumental} Strehl ratios, finding values between $S_I=0.27$ (F435W) and 0.64 (F850LP), with the low Strehl ratios in the blue being a result of the undersampling of the PSF by the ACS. While we are unable to comment exactly on the success of the method of \cite{tam}, because they did not specify which stars were used or provide adequate information on the mechanics of deriving the phase (in their formulation), we can say that, based on our extensive experience with {\it HST} data, we find their claim to have achieved the maximum possible constraint on $\alpha$ at this wavelength highly unrealistic.
Indeed, as our work showed (\cite{CNFP}, Table IV), even for the much deeper observations of the UDF quasars (which extended to shorter wavelengths, but were at typical redshifts $z=4$, translating to a comoving distance about 15\% lower than at $z=6.3$), the corrected Strehl ratio, $S_M / S_I$, that was achieved ranged from 1.04 down to 0.40, depending on the band, with two of the four being at Strehl ratios of $\sim 0.90$. The lower Strehl ratios were no doubt caused by a combination of factors, including not only the imperfections in the PSF of the {\it HST} and distortions across the chip and light path of individual images, but also factors intrinsic to the QSO, such as the host galaxy. This is why in our paper, even though theoretically the observations of the HUDF quasar could probe to $\alpha \sim 0.66$, in practice the constraint that could be set was only $\alpha=0.65$ (see Figure 5 in \cite{CNFP}). On the basis of our experience, we believe it is likely that a similar statement can be made for the observations examined by \cite{tam}. It is worth mentioning that with current telescopes a second method of measuring possible effects of spacetime foam is becoming available, namely interferometry, {\it e.g.,} with the VLTI. As can be seen in Figure 1, the VLTI would have a significant advantage in resolution over any optical-IR telescope, simply because its longest baseline is a factor of $\sim 20$ longer than the aperture of the largest telescope currently in use or under construction. Moreover, it would not suffer from some of the problems we have noted in the {\it HST} observations, namely undersampling. In addition, since interferometers are very effective spatial filters, the effect of the quasar's host galaxy would also be minimized. We therefore believe that the best way to probe the $\alpha\sim 0.7$ regime is with interferometers. We should point out that time lags from distant pulsed sources have also been posited as a possible test of quantum foam models. But, as explained in \cite{CNFP}, {\it the new Fermi Gamma-ray Space Telescope results (\cite{Abdo}) only exclude models with $\alpha < 0.3$.} \section{Summary} We have reviewed the theoretical basis for expecting halos due to spacetime foam, as well as the correct distance measure to employ. We have shown explicitly that, while we agree with the basic result of \cite{tam} that current observations with {\it HST} show no evidence for quantum gravity, as shown in our previously published paper \cite{CNFP}, we cannot agree with the resulting constraint they placed on models of quantum gravity. Because their calculations overstated the size of quantum foam-induced halos of distant quasars by a factor of 20, their limit for $\alpha$ is also overstated, by a minimum of $\Delta\alpha=0.021$. Based on our experience with {\it HST} data, we also believe -- but cannot verify (because of the lack of documentation in \cite{tam}) -- that even this resulting level ($\alpha=0.66$) cannot be reached, because of details specific to each observation, including the variation of the PSF of {\it HST} with position on the chip, the undersampling of the PSF by every instrument on {\it HST}, and the host galaxy of the quasar. \bigskip {\noindent This work was supported in part by the US Department of Energy under contract DE-FG02-06ER41418.}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The study of strongly correlated electrons continues to receive a lot of interest due to applications in condensed matter physics. Some of the well known models are the Hubbard and $t$-$J$ models \cite{korbook} and generalizations such as the EKS model \cite{EKS}. Another example of a model describing strongly correlated electrons is the supersymmetric $U$ model. This model was first introduced in \cite{ww} and was shown to be integrable via the Quantum Inverse Scattering Method (QISM) \cite{kul} by demonstrating that the model could be obtained from an $R$-matrix which is invariant with respect to the Lie superalgebra $gl(2|1)$. The Bethe ansatz equations for the model were obtained in \cite{periodic,meaba,marcio,frahm}. Subsequently, an anisotropic generalization was presented in \cite{bar}, which was also shown to be integrable through use of an $R$-matrix derived from a representation of the quantum superalgebra $U_q(gl(2|1))$ \cite{me}. The anisotropic supersymmetric $U$ model describes a system of correlated electrons and generalizes the Hubbard model in the sense that, as well as the presence of the Hubbard on-site (Coulomb) interaction, there are additional correlated hopping and pair hopping terms. The model acts on the unrestricted $4^k$-dimensional electronic Hilbert space $\otimes ^k_{n=1} C^4$, where $k$ is the lattice length. This means that double occupancy of sites is allowed, which distinguishes this model from the anisotropic $t$-$J$ model \cite{angobc}, which shares the same supersymmetry algebra $U_q(gl(2|1))$. The model contains one free parameter $U$ (the Hubbard interaction parameter), which arises from the one-parameter family of inequivalent typical four-dimensional irreducible representations of $U_q[gl(2|1)]$, and another which arises from the deformation parameter $q$. Bethe ansatz solutions for the anisotropic model with periodic boundary conditions have been studied \cite{bar,mass,meqaba}; however, for this case there is no quantum superalgebra symmetry. In \cite{martin,GPPR,karow,kz,Angi} some quantum algebra invariant integrable closed chains derived from an $R$-matrix associated with the Hecke algebra were introduced and investigated. It was subsequently shown \cite{Links} that a general prescription exists for the construction of integrable systems with periodic boundary conditions and quantum algebra invariance, which could then be applied to higher spin models such as the spin-1 XXZ Heisenberg chain \cite{Linkss}. In the present article we further develop this method by considering the graded case, in order to derive the Hamiltonian of the anisotropic supersymmetric $U$ model with quantum supersymmetry on the closed chain. We will adopt a nested algebraic Bethe ansatz to solve the model; this will be presented in detail in Section 3, where the energy of the Hamiltonian will also be given. \section{Quantum algebra invariant Hamiltonian for the supersymmetric $U$ model} The following notation will be adopted. Electrons on a lattice are described by canonical Fermi operators $c_{i,\sigma}$ and $c_{i,\sigma}^{\dag}$ satisfying the anti-commutation relations $\{c^{\dag}_{i,\sigma},c_{j,\tau}\}=\delta_{ij} \delta_{\sigma\tau}$, where $i,j=1,2,..,k$ and $\sigma, \tau=\uparrow,\downarrow$. The operator $c_{i,\sigma}$ ($c_{i,\sigma}^{\dag}$) annihilates (creates) an electron of spin $\sigma$ at site $i$, which implies that the Fock vacuum $|0>$ satisfies $c_{i,\sigma}|0>=0$.
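As a concrete check of these conventions, the sketch below builds the operators $c_j$ for a small number of modes via a Jordan--Wigner construction and verifies the canonical anticommutation relations numerically; the construction is standard and not specific to this model.
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
Z  = np.diag([1.0, -1.0])                 # Jordan-Wigner string factor
a  = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator

def c(j, nmodes):
    # c_j = Z x ... x Z x a x 1 x ... x 1  (string on the first j modes)
    out = np.array([[1.0]])
    for f in [Z]*j + [a] + [I2]*(nmodes - j - 1):
        out = np.kron(out, f)
    return out

nmodes = 4  # e.g. two sites x two spin projections
cs = [c(j, nmodes) for j in range(nmodes)]
Id = np.eye(2**nmodes)
for i in range(nmodes):
    for j in range(nmodes):
        # {c_i, c_j^dag} = delta_ij  and  {c_i, c_j} = 0
        assert np.allclose(cs[i] @ cs[j].T + cs[j].T @ cs[i], (i == j)*Id)
        assert np.allclose(cs[i] @ cs[j] + cs[j] @ cs[i], 0.0*Id)
print("canonical anticommutation relations verified for", nmodes, "modes")
\end{verbatim}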
At a given lattice site $i$ there are four possible electronic states: $$|0>, \mbox{\hspace{.5cm}}|\uparrow>_i=c^{\dag}_{i,\uparrow}|0>, \mbox{\hspace{.5cm}}|\downarrow>_i=c^{\dag}_{i,\downarrow}|0>, \mbox{\hspace{.5cm}}|\uparrow\downarrow>_i=c^{\dag}_{i,\downarrow}c^{\dag}_{ i,\uparrow}|0>.$$ By $n_{i,\sigma}=c^{\dag}_{i,\sigma}c_{i,\sigma}$ we denote the number operator for electrons with spin $\sigma$ on site $i$, and we write $n_i=n_{i,\uparrow}+n_{i,\downarrow}$. The local Hamiltonian for this model is \cite{bar} \begin{eqnarray} H_{i(i+1)}& =& -\sum_{\sigma} (c_{i\sigma}^{\dag}c_{i+1\sigma}+h.c.) \mbox{exp}\left[-\frac 12(\zeta-\sigma \gamma)n_{i,-\sigma}-\frac 12(\zeta+\sigma \gamma) n_{i+1,- \sigma}\right]\nonumber \\ &&+\left[ Un_{i\uparrow}n_{i\downarrow}+Un_{i+1\uparrow}n_{i+1\downarrow} + U(c^{\dag}_{i\uparrow}c^{\dag}_{i\downarrow}c_{i+1 \downarrow}c_{i+1 \uparrow} +h.c.)\right], \label{ham1} \end{eqnarray} where $i$ labels the sites and $$ U = \epsilon\left[2 e^{-\zeta} \left(\mbox{cosh}\, \zeta - \mbox{cosh}\,\gamma\right)\right]^{\frac 12}, \mbox{\hspace{1cm}} \epsilon=\pm 1.$$ This Hamiltonian may be obtained from the $R$-matrix of a one-parameter family of four-dimensional representations of $U_q[gl(2|1)]$, which is afforded by the module $W$ with highest weight $(0,0|\alpha)$. The details of this construction may be found in \cite{me}. The Hamiltonian (\ref{ham1}) may be modified to ensure quantum superalgebra invariance by adapting the general construction presented in \cite{Links}. We can write $$H=\sum_{i=1}^{k-1}H_{i(i+1)} +H_{1k},$$ where the boundary term is given by $$H_{1k}=G H_{k,1} G^{-1}$$ with $$G= R^-_{21} R^-_{31} ...R^-_{k1},~~~k\mbox{ the lattice length}.$$ Above, $R^-$ is the constant $R$-matrix obtained as the zero spectral parameter limit of the Yang-Baxter equation solution associated with the model \cite{me}. These operators act in the quantum space, and the closed boundary conditions of the model may be expressed through the relations $$GH_{i,i+1}=H_{i+1,i+2}G,~~~i=1,2,...,k-2,~~~ GH_{1k}=H_{12}G.$$ The quantum supersymmetry of the Hamiltonian is a result of the intertwining properties of the matrices $R$. A small numerical consistency check of (\ref{ham1}) is sketched below.
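The following sketch assembles (\ref{ham1}) for a single bond of two sites and merely checks that the resulting $16\times16$ matrix is Hermitian. It repeats the Jordan--Wigner construction above to stay self-contained, adopts the reading $U=\epsilon[2e^{-\zeta}(\cosh\zeta-\cosh\gamma)]^{1/2}$ of the coupling (our reconstruction of the garbled bracketing in the source), and uses arbitrary illustrative values for $\zeta$ and $\gamma$.
\begin{verbatim}
import numpy as np

I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])

def c(j, nmodes=4):  # modes ordered (1,up),(1,dn),(2,up),(2,dn)
    out = np.array([[1.0]])
    for f in [Z]*j + [a] + [I2]*(nmodes - j - 1):
        out = np.kron(out, f)
    return out

cu1, cd1, cu2, cd2 = (c(j) for j in range(4))
n = lambda op: op.T @ op                    # number operator of a mode
zeta, gamma, epsilon = 0.7, 0.3, 1.0        # illustrative values
U = epsilon*np.sqrt(2.0*np.exp(-zeta)*(np.cosh(zeta) - np.cosh(gamma)))

H = np.zeros((16, 16))
for s, (c1, c2, d1, d2) in ((+1, (cu1, cu2, cd1, cd2)),
                            (-1, (cd1, cd2, cu1, cu2))):
    # d1, d2 are the opposite-spin modes entering the exponential factor,
    # which is diagonal in the occupation-number basis used here.
    expo = -0.5*(zeta - s*gamma)*np.diag(n(d1)) \
           - 0.5*(zeta + s*gamma)*np.diag(n(d2))
    E = np.diag(np.exp(expo))
    hop = c1.T @ c2
    H += -(hop + hop.T) @ E
pair = cu1.T @ cd1.T @ cd2 @ cu2            # pair hopping term
H += U*(n(cu1) @ n(cd1) + n(cu2) @ n(cd2)) + U*(pair + pair.T)
print("H hermitian:", np.allclose(H, H.T))
\end{verbatim}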
\section{Nested Algebraic Bethe ansatz} We present the nested algebraic Bethe ansatz for the above Hamiltonian by extending the methods developed in \cite{Links,Linkss} for quantum group invariant closed higher spin chains to the graded case. We begin with the $R$-matrices satisfying the Yang-Baxter equation, constructed directly from a solution for the twisted representation as given in \cite{meqaba}. The Yang-Baxter equation may be written as the operator equation \begin{eqnarray} {}_{vv}R^{\alpha_1 \alpha_2}_{\beta_1 \beta_2}(x/y)~~ {}_{vw}R^{\beta_1 a}_{\gamma_1 b}(x)~~ {}_{vw}R^{\beta_2 b}_{\gamma_2 c}(y) = {}_{vw}R^{\alpha_2 a}_{\beta_2 b}(y)~~{}_{vw}R^{\alpha_1 b}_{\beta_1 c}(x)~~ {}_{vv}R^{\beta_1 \beta_2}_{\gamma_1 \gamma_2 }(x/y) ,\label{vvham} \end{eqnarray} acting on the spaces $V\otimes V\otimes W$, where $V$ is the vector module and $W$ is the four-dimensional module associated with the one-parameter family of minimal typical representations. Greek indices are used to label the matrix spaces, that is, the first two spaces, and Roman indices label the quantum space, which is the third space. The quantum space represents the Hilbert space over a site on the one-dimensional lattice. The ${}_{vv}R$-matrix acts in the matrix space, and it is between the two matrix spaces that the graded tensor product acts. The ${}_{vv}R$-matrix acts on $V\otimes V$ and has the form \cite{for,vecrep} \begin{eqnarray} {}_{vv}R^{\beta_1 \beta_2}_{\alpha_1 \alpha_2}(x)= \left(\begin{array}{ccccccccc} A & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0&E&0&C&0&0&0&0&0\\0&0&E&0&0&0&C&0&0\\ 0&xC&0&E&0&0&0&0&0\\0&0&0&0&A&0&0&0&0\\ 0&0&0&0&0&E&0&C&0\\0&0&xC&0&0&0&E&0&0\\ 0&0&0&0&0&xC&0&E&0\\0&0&0&0&0&0&0&0&1 \end{array}\right)\nonumber \end{eqnarray} where $A$, $E$ and $C$ depend on the spectral parameter as follows: $A(x)= \frac {1-xq^2}{x-q^2}$, $E(x)= \frac {(1-x)q}{x-q^2}$ and $C(x)=\frac{1-q^2}{x-q^2}$. The ${}_{vv}R$-matrix satisfies the Yang-Baxter equation $$R_{12}(x/y) R_{13}(x) R_{23}(y) = R_{23}(y) R_{13}(x) R_{12}(x/y).\label{ybe}$$ By construction, these $R$-matrices also satisfy the generalized Cherednik reflection property \cite{22} \begin{eqnarray} R_{\alpha'\beta'}^{\alpha\beta}(x)R^{-1}{}_{\gamma\delta}^{\alpha'\beta'}(1/y) =R_{\alpha'\beta'}^{\alpha\beta}(y)R^{-1}{}^{\alpha'\beta'}_{\gamma \delta}(1/x),\label{refl} \end{eqnarray} and crossing unitarity \cite{23} \begin{eqnarray} R^{st_1}{}_{\alpha' \beta'}^{\alpha\beta}(x\zeta) K^{\alpha'}_{\alpha''} (R^{-1}){}^{st_1}{}_{\gamma'\delta}^{\alpha''\beta'}(x) K^{-1} {}^{\gamma'}_{\gamma}=\delta^{\alpha}_{\gamma} \delta^{\beta}_{\delta},\label{unit} \end{eqnarray} where $st_1$ denotes matrix supertransposition in the first space and $K$ is the crossing matrix given below. It will be necessary to rewrite the ${}_{vv}R$-matrix in terms of the constant matrices ${}_{vv}R^+$ and ${}_{vv}R^-$, that is, $${}_{vv} R(x) =\left(\frac {-1}{x-q^2}\right) \left(x\,{}_{vv}R^+ - {}_{vv}R^- \right),$$ where ${}_{vv}R^+$ (${}_{vv}R^-$) corresponds to the leading term in the limit $x\rightarrow\infty$ ($x\rightarrow 0$).
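Since each entry of $(x-q^2)\,{}_{vv}R(x)$ is linear in $x$, the constant matrices $R^{\pm}$ can be extracted from two sample points and the decomposition checked at a third; the sketch below does exactly this for a generic, illustrative value of $q$.
\begin{verbatim}
import numpy as np

q = 1.3  # generic illustrative deformation parameter

def Rvv(x):
    A = (1 - x*q**2)/(x - q**2)
    E = (1 - x)*q/(x - q**2)
    C = (1 - q**2)/(x - q**2)
    R = np.zeros((9, 9))
    entries = {(0, 0): A, (4, 4): A, (8, 8): 1.0,
               (1, 1): E, (2, 2): E, (3, 3): E,
               (5, 5): E, (6, 6): E, (7, 7): E,
               (1, 3): C, (2, 6): C, (5, 7): C,
               (3, 1): x*C, (6, 2): x*C, (7, 5): x*C}
    for k, v in entries.items():
        R[k] = v
    return R

x1, x2, x3 = 2.0, 5.0, 7.0
M1, M2 = -(x1 - q**2)*Rvv(x1), -(x2 - q**2)*Rvv(x2)  # = x R+ - R-
Rp = (M2 - M1)/(x2 - x1)
Rm = x1*Rp - M1
assert np.allclose(Rvv(x3), -(x3*Rp - Rm)/(x3 - q**2))
print("R(x) = -(x R+ - R-)/(x - q^2) verified at x =", x3)
\end{verbatim}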
The ${}_{vw}R$-matrix was constructed in \cite{meqaba} on the space $V\otimes W$ and has the form \begin{eqnarray} {}_{vw}R^{\beta j}_{\alpha i}(x)= \left(\begin{array}{cccccccccccc} J&0&0&0&0&0&0&0& 0 & 0 & 0 & 0\\ 0&J&0&0&0&0&0&0& 0 & 0 & 0 & 0\\ 0&0&Y&0&0&Q'&0&0& -S' & 0 & 0 & 0\\ 0&0&0&L&0&0&0&0& 0 & -T' & 0 & 0\\ 0&0&0&0&J&0&0&0& 0 & 0 & 0 & 0\\ 0&0&Q&0&0&Y&0&0& -P' & 0 & 0 & 0\\ 0&0&0&0&0&0&J&0& 0 & 0 & 0 & 0\\ 0&0&0&0&0&0&0&L& 0 & 0 & -T' & 0\\ 0&0&S&0&0&P&0&0& M & 0 & 0 & 0\\ 0&0&0&-T&0&0&0&0& 0 & -N & 0 & 0\\ 0&0&0&0&0&0&0&-T& 0 & 0 & -N & 0\\ 0&0&0&0&0&0&0&0& 0 & 0 & 0 & 1 \end{array}\right)\nonumber \end{eqnarray} where the dependence of these elements on the spectral parameter is given by \begin{eqnarray} J(x)&=& \frac {(x-q^{-\alpha-2})}{(xq^{-\alpha-2}-1 )},~~~Y(x)=J(x)(D+B)+\frac 1{[\alpha+2]},\nonumber \\ L(x)&=& \frac 1{[\alpha+2]} ([\alpha+1] J(x)+1),~~~M(x)=F^2 DJ(x) +\frac {[\alpha]}{[\alpha+2]},~~~ N(x)= \frac 1{[\alpha+2]} (J(x) +[\alpha+1]), \nonumber \\ Q(x)&=&(qB-Dq^{-1})J(x) -\frac {q^{-1}}{[\alpha+2]},~~~Q'(x)=(q^{-1} B-qD)J(x) -\frac q{[\alpha+2]},\nonumber \\ S(x)&=& \frac{\sqrt{[\alpha]} }{[\alpha+2]} q^{-(\alpha+3)/2} -q^{(\alpha+1)/2}F D J(x), ~~~ S'(x)= -\frac{\sqrt{[\alpha]} }{[\alpha+2]}q^{(\alpha+3)/2} +q^{-(\alpha+1)/2}F D J(x),\nonumber \\ T(x)&=& \frac{\sqrt{[\alpha+1]}}{[\alpha+2]}(q^{(\alpha+2)/2}J(x) -q^{-(\alpha+2)/2} ), ~~~ T'(x)= \frac{\sqrt{[\alpha+1]}}{[\alpha+2]}(q^{(\alpha+2)/2} -q^{-(\alpha+2)/2}J(x) ),\nonumber \\ P(x)&=& q^{(\alpha+3)/2} F D J(x) - \frac {\sqrt{[\alpha]} } {[\alpha+2]}q^{-(\alpha+1)/2} , ~~~P'(x) = -q^{-(\alpha+3)/2} F DJ(x) + \frac {\sqrt{[\alpha]} } {[\alpha+2]}q^{(\alpha+1)/2} ,\nonumber \end{eqnarray} with constants \begin{eqnarray} D&=& \frac {[\alpha]}{[\alpha+2](q+q^{-1})}, ~~~F=\frac{(q+q^{-1})}{\sqrt{[\alpha]} }, ~~~ B= \frac 1{q+q^{-1}}, ~~ \mbox{ and}~~~ [\alpha]=\frac {q^{\alpha} - q^{-\alpha}}{q-q^{-1} }.\nonumber \end{eqnarray} In addition to satisfying the Yang-Baxter relation (\ref{vvham}), the ${}_{vw}R$-matrix also satisfies the generalized Cherednik reflection property (\ref{refl}) and crossing unitarity (\ref{unit}). We now introduce an auxiliary doubled monodromy matrix \begin{eqnarray} {}_{vw}U(x)^{\beta \{j\}}_{\alpha \{i\} } &=& {}_{vw}R_{+}{}^{\beta_2 j_1}_{\alpha' j'_1}~ {}_{vw}R_{+}{}^{\beta_3 j_2}_{\beta_2 j'_2} \dots {}_{vw}R_{+}{}^{\beta j_k}_{\beta_k j'_k}~ {}_{vw}R_{\alpha_2 i_1}^{\alpha' j'_1}(1/x)~ {}_{vw}R_{\alpha_3 i_2}^{\alpha_2 j_2'}(1/x)\dots {}_{vw}R_{\alpha i_k}^{\alpha_k j'_k}(1/x),\label{mono} \end{eqnarray} acting on $V \otimes W^{\otimes k}$. Above, ${}_{vw}R_+$ denotes the leading term of the matrix ${}_{vw}R^{-1}(x)$ in the limit $x\rightarrow \infty$. We represent the doubled monodromy matrix in the following way: \begin{eqnarray} {}_{vw}U^\gamma_\alpha(x)&=& \left( \begin{array}{ccc} {}_{vw}U_1^1(x) & {}_{vw}U_2^1(x) & {}_{vw}U_3^1(x)\\ {}_{vw}U_1^2(x) & {}_{vw}U_2^2(x) & {}_{vw}U_3^2(x) \\ {}_{vw}U_1^3(x) & {}_{vw}U_2^3(x) & {}_{vw}U_3^3(x) \end{array}\right).\label{umatrix} \end{eqnarray} It may be shown that this monodromy matrix satisfies the following Yang-Baxter relation: \begin{eqnarray} {}_{vv}R^{\alpha_1 \beta_1}_{\alpha_2 \beta_2}(y/x) ~~{}_{vw}U^{\beta_2 a}_{\gamma_2 b}(x) ~~{}_{vv}R_+{}^{\gamma_2 \alpha_2}_{\gamma_1 \delta_2} ~~ {}_{vw}U^{\delta_2 b}_{\delta_1 c}(y) = {}_{vw}U^{\alpha_1 a}_{\alpha_2 b}(y) ~~{}_{vv}R_+{}^{\alpha_2 \beta_1}_{\delta_2 \beta_2} ~~{}_{vw}U^{\beta_2b}_{\gamma_2 c}(x) ~~{}_{vv}R^{\gamma_2 \delta_2}_{\gamma_1 \delta_1 }(y/x) . \label{newybe} \end{eqnarray} (The lengthy diagrammatic depictions of the $R$-matrices, of the monodromy matrix (\ref{mono}) and of the relation (\ref{newybe}), drawn in terms of crossing vector and quantum-space lines, are omitted here.)
The occurrence of the constant matrices ${}_{vv}R_+$ will greatly simplify the calculations of the algebraic Bethe ansatz. In order to perform the nested algebraic Bethe ansatz (NABA) we define an auxiliary transfer matrix as the (super) Markov trace of the monodromy matrix, that is, \begin{equation} {}_{vw}\tau^{ \{ j \} }_{ \{i\} }(y)=\sum_\alpha {}_vK^\alpha_\alpha ~~{}_{vw}U^{\alpha \{ j \} }_{\alpha \{ i \} }(y), \end{equation}
\put(23.00,15.00){\circle{2.}} \put(24.00,15.00){\circle{2.}} \put(25.00,15.00){\circle{2.}} \put(26.00,15.00){\circle{2.}} \put(27.00,15.00){\circle{2.}} \put(28.00,15.00){\circle{2.}} \put(29.00,15.00){\circle{2.}} \put(30.00,15.00){\circle{2.}} \put(31.00,15.00){\circle{2.}} \put(32.00,15.00){\circle{2.}} \put(33.00,15.00){\circle{2.}} \put(34.00,15.00){\circle{2.}} \put(35.00,15.00){\circle{2.}} \put(36.00,15.00){\circle{2.}} \put(37.00,15.00){\circle{2.}} \put(15.00,35.00){\circle{2.}} \put(16.00,35.00){\circle{2.}} \put(17.00,35.00){\circle{2.}} \put(18.00,35.00){\circle{2.}} \put(22.00,35.00){\circle{2.}} \put(23.00,35.00){\circle{2.}} \put(24.00,35.00){\circle{2.}} \put(25.00,35.00){\circle{2.}} \put(26.00,35.00){\circle{2.}} \put(28.00,35.){\vector(1,0){1.}} \put(32.00,35.00){\circle{2.}} \put(33.00,35.00){\circle{2.}} \put(34.00,35.00){\circle{2.}} \put(35.00,35.00){\circle{2.}} \put(36.00,35.00){\circle{2.}} \put(37.00,35.00){\circle{2.}} \put(39.00,35.){\vector(1,0){1.}} \put(5.00,25.){\circle*{2.5}} \put(5.00,25.00){\circle{2.}} \put(6.00,26.00){\circle{2.}} \put(7.00,27.00){\circle{2.}} \put(8.00,28.00){\circle{2.}} \put(9.00,29.00){\circle{2.}} \put(10.00,30.00){\circle{2.}} \put(11.00,31.00){\circle{2.}} \put(12.00,32.00){\circle{2.}} \put(13.00,33.00){\circle{2.}} \put(14.00,34.00){\circle{2.}} \put(15.00,35.00){\circle{2.}} \put(5.00,25.00){\circle{2.}} \put(6.00,24.00){\circle{2.}} \put(7.00,23.00){\circle{2.}} \put(8.00,22.00){\circle{2.}} \put(9.00,21.00){\circle{2.}} \put(10.00,20.00){\circle{2.}} \put(11.00,19.00){\circle{2.}} \put(12.00,18.00){\circle{2.}} \put(13.00,17.00){\circle{2.}} \put(14.00,16.00){\circle{2.}} \put(15.00,15.00){\circle{2.}} \put(85.00,10.00){\circle{2.}} \put(84.00,11.00){\circle{2.}} \put(83.00,12.00){\circle{2.}} \put(82.00,13.00){\circle{2.}} \put(81.00,14.00){\circle{2.}} \put(80.00,15.00){\circle{2.}} \put(85.00,40.00){\circle{2.}} \put(84.00,39.00){\circle{2.}} \put(83.00,38.00){\circle{2.}} \put(82.00,37.00){\circle{2.}} \put(81.00,36.00){\circle{2.}} \put(80.00,35.00){\circle{2.}} \put(85.00,40.00){\circle{2.}} \put(86.00,39.00){\circle{2.}} \put(87.00,38.00){\circle{2.}} \put(88.00,37.00){\circle{2.}} \put(89.00,36.00){\circle{2.}} \put(90.00,35.00){\circle{2.}} \put(91.00,34.00){\circle{2.}} \put(92.00,33.00){\circle{2.}} \put(93.00,32.00){\circle{2.}} \put(94.00,31.00){\circle{2.}} \put(95.00,30.00){\circle{2.}} \put(96.00,29.00){\circle{2.}} \put(97.00,28.00){\circle{2.}} \put(98.00,27.00){\circle{2.}} \put(99.00,26.00){\circle{2.}} \put(100.00,25.00){\circle{2.}} \put(85.00,10.00){\circle{2.}} \put(86.00,11.00){\circle{2.}} \put(87.00,12.00){\circle{2.}} \put(88.00,13.00){\circle{2.}} \put(89.00,14.00){\circle{2.}} \put(90.00,15.00){\circle{2.}} \put(91.00,16.00){\circle{2.}} \put(92.00,17.00){\circle{2.}} \put(93.00,18.00){\circle{2.}} \put(94.00,19.00){\circle{2.}} \put(95.00,20.00){\circle{2.}} \put(96.00,21.00){\circle{2.}} \put(97.00,22.00){\circle{2.}} \put(98.00,23.00){\circle{2.}} \put(99.00,24.00){\circle{2.}} \put(100.00,25.00){\circle{2.}} \end{picture} \ea\ , \end{equation} where \begin{eqnarray} {}_vK&=& \left( \begin{array}{clcr} 1& 0 & 0\\ 0 & q^2 & 0 \\ 0 & 0 & -q^2 \end{array}\right).\label{kmatrix} \end{eqnarray} Therefore the ${}_{vw}\tau(y)$ form a one-parameter family of commuting operators and it may be shown that they commute with the transfer matrix, ${}_{ww}\tau(y)$ of the Hamiltonian (\ref{ham1}). This means that they have a common set of eigenvectors. 
We find that
\begin{eqnarray}
{}_{vw}\tau (y)=~~{}_{vw}U_1^1(y)+q^2~~{}_{vw}U_2^2(y)-q^2~~{}_{vw}U_3^3(y).\label{vwt}
\end{eqnarray}
We take the lowest weight state as a reference state (pseudo-vacuum) in $W$, which we denote as $|0>_i$. Then $|0>=\otimes_{i=1}^k |0>_i$ and we find the action of the doubled monodromy matrix on this reference state to be given by
\begin{eqnarray}
{}_{vw}U(x)_k|0>= \left(\begin{array}{ccc} I(x)^k & 0 & 0 \\ 0 & I(x)^k & 0 \\ {}_{vw}U^3_1(x) & {}_{vw}U^3_2(x) & 1 \end{array}\right)|0>, \label{vwaction}
\end{eqnarray}
where
$$I(x)= \frac {(1-x q^{\alpha})}{(1-x q^{-\alpha-2} )}.$$
We construct a set of eigenstates of the transfer matrix using the technique of the NABA. Due to the choice of reference state, the creation operators are ${}_{vw}U^3_1(x)$ and ${}_{vw}U^3_2(x)$. Thus we use the following ansatz for the eigenstates of ${}_{vw}\tau(y)$:
\begin{eqnarray}
\Psi={}_{vw}U^3_{a_1}(x_1)~~{}_{vw}U^3_{a_2}(x_2)\ldots{}_{vw}U^3_{a_r}(x_r)~~ \Psi^{(1)}_{ \{a\} }|0> ,\label{ansatz}
\end{eqnarray}
where the indices $a_i$ take the values 1 or 2. We seek a solution of the eigenvalue equation
\begin{eqnarray}
{}_{vw}\tau(y)\Psi = {}_{vw}\Lambda(y) \Psi.\label{eig}
\end{eqnarray}
The action of the transfer matrix on these states is determined by the monodromy matrix and the relations (\ref{newybe}). The relations necessary for the construction of the NABA are
\begin{eqnarray}
{}_{vw}U^3_3(y)~~{}_{vw}U^3_\alpha(x)&=&-\frac {1}{q~E(y/x)}~~{}_{vw}U^3_\alpha(x)~~{}_{vw}U^3_3(y) - \frac{y~C( y/x)}{x~q~E( y/x)}~~{}_{vw}U^3_\alpha(y)~~{}_{vw}U^3_3(x)\nonumber \\
&& -\left(\frac{q-q^{-1}}{q}\right)\sum_\beta {}_{vw}U^3_\beta(y) ~~{}_{vw}U^\beta_\alpha(x),\\
{}_{vw}U^\gamma_\beta(y)~~ {}_{vw}U^3_\alpha(x)&=&\frac {r_+{}^{\gamma\alpha' }_{\delta'\gamma'}~~ r^{\beta'\gamma'}_{\beta \alpha} (x/y)} {q~E(x/y)} ~~{}_{vw}U^3_{\alpha'} (x) ~~{}_{vw}U^{\delta'}_{\beta'}(y) - \frac{ x~~r_+{}^{\gamma\alpha'}_{\delta'\beta}~~ C(x/y) }{y~ q~ E(x/y)} ~~{}_{vw}U^3_{\alpha'} (y) ~~{}_{vw}U^{\delta'}_{\alpha}(x), \label{vwrels}
\end{eqnarray}
with the indices taking the values 1 and 2. It can be seen that the $R$-matrix $r(y)$ appearing here also fulfills a Yang-Baxter equation and can be identified with the $R$-matrix of the quantum spin-$\frac 12$ Heisenberg (XXZ) model. The action of the diagonal elements of the monodromy matrix (\ref{mono}) on the ansatz (\ref{ansatz}) is given by
\begin{eqnarray}
{}_{vw}U^3_3(y)\Psi &=&\frac{(-1)^r}{q^r}\prod^r_{i=1}\frac 1{E(y/x_i)}\Psi + \mbox{ u.t.}\nonumber
\end{eqnarray}
\begin{eqnarray}
[{}_{vw}U^1_1(y)+q^2~~{}_{vw}U^2_2(y)]\Psi &=& \frac{I(y)^k}{q^r}\prod_{j=1}^r \frac 1{E(x_j/y)} \prod_{l=1}^r {}_{vw}U^3_{b_l}(x_l)~q~{}_{vw}\tau^{(1)}(y)^{b_1\ldots b_r}\Psi^{(1)}|0> +\mbox{ u.t.}\nonumber
\end{eqnarray}
where
\begin{eqnarray}
{}_{vw}\tau^{(1)}(y) = q^{-1}~~{}_{vw}U^{(1)}{}_1^1(y)+q~~{}_{vw}U^{(1)}{}_2^2(y).\label{vwt1}
\end{eqnarray}
In order that the eigenvalue problem (\ref{eig}) be satisfied, it is necessary to solve a new eigenvalue problem for the nesting as follows:
$${}_{vw}\tau^{(1)}(y)\Psi^{(1)}=\Lambda^{(1)}(y,\{y_j\})\Psi^{(1)},$$
where
\begin{eqnarray}
\Psi^{(1)}={}_{vw}U^{(1)}{}^2_1(y_1)~~{}_{vw}U^{(1)}{}^2_1(y_2) \ldots{}_{vw}U^{(1)}{}^2_1(y_m)|0>^{(1)}.\label{nansatz}
\end{eqnarray}
The second level reference state is given by $|0>^{(1)}=\otimes_{i=1}^r |2>_i$. We represent the nested monodromy matrix as
\begin{eqnarray}
{}_{vw}U^{(1)}_m(y) =\left(\begin{array}{cc} {}_{vw}U^{(1)}{}_1^1(y) & {}_{vw}U^{(1)}{}_2^1(y)\\ {}_{vw}U^{(1)}{}^2_1(y) & {}_{vw}U^{(1)}{}_2^2(y) \end{array}\right).
\end{eqnarray}
The action of the nested monodromy matrix ${}_{vw}U^{(1)}_m(y)$ on the second level reference state is
\begin{eqnarray}
{}_{vw}U^{(1)}{}_1^1(y)|0>^{(1)}&=&q^r\prod^r_{j=1}E(x_j/y)|0>^{(1)}, \nonumber\\
{}_{vw}U^{(1)}{}_2^2(y)|0>^{(1)}&=&\prod^r_{j=1} A(x_j/y)|0>^{(1)}.\nonumber
\end{eqnarray}
The action of ${}_{vw}\tau^{(1)}(y)$ on the ansatz (\ref{nansatz}) is computed similarly to the first-level case from the relations (\ref{newybe}). We obtain
\begin{eqnarray}
{}_{vw}U^{(1)}{}_1^1(y)\Psi^{(1)}&=&q^{r-m}\prod^m_{i=1} \frac{A(y_i/y)}{E(y_i/y)}\prod_{l=1}^r E(x_l/y) \Psi^{(1)}+ \mbox{ u.t. }\nonumber \\
{}_{vw}U^{(1)}{}_2^2(y)\Psi^{(1)}&=&q^m\prod^m_{i=1} \frac{A(y/y_i)}{E(y/y_i)}\prod_{l=1}^r A(x_l/y) \Psi^{(1)}+ \mbox{ u.t. }\nonumber
\end{eqnarray}
The eigenvalues of the auxiliary transfer matrix ${}_{vw}\tau(y)$ are found to be
\begin{eqnarray}
{}_{vw}\Lambda(y)&=&q^{-m} I(y)^k \prod^m_{i=1} \frac{ A(y_i/y)}{E(y_i/y)}+q^{2+m-r} I(y)^k \prod_{i=1}^r \frac{ A(x_i/y)}{E(x_i/y)} \prod^m_{j=1}\frac{A(y/y_j)}{E(y/y_j)} -(-1)^r q^{2-r} \prod^r_{i=1} \frac 1{E(y/x_i)},\label{eigs}
\end{eqnarray}
provided the ``unwanted terms'' cancel. The cancellation of these terms leads to the Bethe ansatz equations, which are obtained by eliminating the poles of the eigenvalues (\ref{eigs}):
\begin{eqnarray}
\prod_{i\neq n}^m \frac {qy_i-q^{-1}y_n}{q^{-1}y_i-qy_n} &=& q^{2(1+m)-r}\prod_{j=1}^r \frac{q^{-1}y_n -qx_j}{y_n -x_j} ~~~~n=1,\ldots,m,\nonumber \\
q^mI(x_l)^k& =&\prod^m_{j=1} \frac{y_j-x_l}{q^{-1}y_j -x_lq} ~~~~l=1,\ldots,r. \label{BAE5}
\end{eqnarray}
Associated with these solutions, the energies of the Hamiltonian are given by
$$ E= \sum_j \frac{-(q^{\alpha+1} - q^{-\alpha-1})^2}{(q^{\alpha/2}x_j^{-1/2} - q^{-\alpha/2} x_j^{1/2}) ( q^{-\alpha/2-1} x_j^{-1/2} - q^{\alpha/2+1} x_j^{1/2})} + k \left( {q^{\alpha+1} + q^{-\alpha-1} }\right).$$
This expression reduces to the usual periodic case \cite{periodic} in the rational limit $q\rightarrow 1$.
\section{Conclusion}
In this work, we have constructed a quantum algebra invariant supersymmetric $U$ model on a closed lattice and derived the Bethe ansatz equations. Notice the presence of additional $q$ factors in the Bethe ansatz equations (\ref{BAE5}) in comparison with the corresponding equations for the usual periodic boundary conditions \cite{meqaba}. In fact, this feature has also appeared in other models \cite{GPPR,karow,Angi,lima} and seems to be a peculiarity of quantum-group-invariant closed spin chains. In the limit $q\rightarrow 1$, the usual Bethe ansatz equations for the periodic chain \cite{meaba} are recovered.

An appealing direction for further study of the present closed supersymmetric $U$ model with $U_q[gl(2|1)]$ invariance would be to investigate its thermodynamic properties. In particular, the partition function in the finite-size scaling limit can be used to derive the operator content \cite{GPPR} of the related statistical models.
\section*{Acknowledgements}
Katrina Hibberd, Itzhak Roditi and Angela Foerster are financially supported by CNPq (Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico). Itzhak Roditi also wishes to thank PRONEX/FINEP/MCT. Jon Links is supported by the Australian Research Council.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
Fuzzing is one of the most prolific approaches to discovering bugs in software. Nowadays, it is even becoming an integral part of the software development process. For example, OSS-Fuzz~\cite{oss-fuzz} is a popular platform that can fuzz open-source software continuously. In addition to OSS-Fuzz, which targets primarily user-space applications, there is also a continuous fuzz testing platform called syzbot~\cite{syzbot} dedicated to fuzz testing Linux kernels. The platform has been operational since 2017. Equipped with the state-of-the-art kernel fuzzer Syzkaller~\cite{syzkaller}, it has already reported more than 4,000 bugs to date. Despite the huge success, such a continuous fuzzing platform leads to a major challenge --- the rate of bug discovery is much higher than the rate of bug fixes. Taking the syzbot platform as an example, at the time of writing (Jun 2021), we find that it takes on average 51 days to fix a bug (over 3,396 fixed bugs), whereas it takes less than 0.4 days for syzbot to report a new bug.

Another critical challenge is the patch propagation to downstream kernels~\cite{E-Fiber}, \emph{e.g.,}\xspace various PC distributions such as Ubuntu, and the smartphone OS, Android. Even if the syzbot bugs are patched in Linux, there is often a lack of knowledge on which patched bugs are security-critical. This can have a significant impact on patch propagation delays~\cite{E-Fiber}. One prominent example is CVE-2019-2215~\cite{cve-2019-2215}, a use-after-free bug that can root most Android devices (\emph{e.g.,}\xspace Samsung Galaxy S9). It was initially reported by syzbot and fixed in Linux upstream in 52 days~\cite{fix_cve-2019-2215}. Unfortunately, it took over a year for the patch to propagate to downstream Android kernels due to the lack of knowledge on its security impact. In fact, it was only after bad actors were caught exploiting this vulnerability in the wild that Google started to realize its severity and obtained a CVE number~\cite{cve-2019-2215}.

The previous two challenges illustrate a critical deficiency in today's continuous fuzzing platforms --- the lack of automated bug triage, or bug impact analysis. The goal of this paper is to bridge this gap in the context of Linux kernel bugs.

Performing automated bug impact analyses is challenging and an active area of research, especially in the kernel space. Recently, Wu~\emph{et al.}\xspace proposed a solution called \emph{sid} for inferring the security impact of a patch statically~\cite{DBLP:conf/ndss/WuHML20}, \emph{e.g.,}\xspace use-after-free or out-of-bound memory access. However, due to the nature of the analysis being completely static, it has to make tradeoffs between soundness and completeness. Furthermore, without an actual reproducer, it falls short in determining whether the bug is actually triggerable and exploitable in reality. On the other end of the spectrum, there has been recent progress on automated exploit generation against kernel bugs, fully demonstrating the exploitability of a bug by turning a reproducer into an actual exploit. Fuze~\cite{DBLP:conf/uss/WuCXXGZ18} and KOOBE~\cite{DBLP:conf/uss/ChenZLQ20} are two representative studies that target use-after-free (UAF) and out-of-bound (OOB) memory access bugs, respectively. However, fuzzer-generated bug reports often do not reflect the real security impact. For example, as we find in our experiments, a WARNING error can in fact lead to UAF or OOB.
Furthermore, while UAF and OOB are arguably the most impactful bug types, responsible for most real-world exploits, not all such bugs are created equal. For example, a write primitive is always more dangerous than a read, as it can potentially corrupt critical pieces of data (\emph{e.g.,}\xspace a function pointer), leading to arbitrary code execution and privilege escalation. Similarly, a function pointer dereference primitive is also more dangerous than a read primitive. We define the high-risk and low-risk impacts in \S\ref{sec:definition}.

As evidence that such a distinction between high-risk and low-risk bugs already exists, we find, from syzbot's historical data, that UAF/OOB write bugs are often fixed much sooner than UAF/OOB read bugs --- 37 days vs. 63 days for UAF and 29 days vs. 89 days for OOB in terms of average attempt-to-fix delay. In addition, we also look at the patch propagation delays from upstream to downstream using Ubuntu Bionic as an example. Patches for WARNING errors have an average delay of 83 days vs. 59 days in the case of patches for OOB write bugs.

In light of the above, the goal of this paper is to \emph{check whether any of the seemingly low-risk bugs (\emph{e.g.,}\xspace the ones with read primitives or simply an assertion error) can be turned into high-risk ones (\emph{e.g.,}\xspace with a write primitive or function pointer dereference)}. To this end, we design a system called SyzScope{} that takes a reproducer and the corresponding bug report as input and produces a report on any high-risk impacts that are associated with the bug (and optionally new PoCs that exhibit such high-risk impacts). SyzScope{} has two distinct modes of operation with respect to the two challenges aforementioned (details are in \S\ref{sec:design}). First, to evaluate the security impact of open bugs and facilitate prioritized bug fixing, we perform a combination of static analysis and symbolic execution to look beyond the initial bug impact reported by the reproducer. Second, to evaluate the security impact of fixed bugs and facilitate timely patch propagation, we additionally allow a fuzzing component, as we can use the patch to confirm whether new impacts belong to the same bug.

Surprisingly, after analyzing thousands of seemingly low-risk bugs published on syzbot, SyzScope{} found that 183{} bugs have high-risk impacts. In the process, we have identified the limitations of the current bug reports and believe SyzScope{} is a valuable complement that recognizes the hidden impacts of a bug. In summary, this paper makes the following contributions:
\squishlist
\item We propose SyzScope{}, a system that can automatically evaluate the impact of a given seemingly low-risk bug and uncover its true nature. The system can be easily integrated into the pipeline of syzbot.
\item To achieve both accuracy and scalability, we package fuzzing, static analysis, and symbolic execution together to achieve the end goals. To facilitate reproduction and future research, we open source the system~\cite{SyzScope_github_repo}.
\item Our tool successfully converts 183{} seemingly low-risk bugs on syzbot to high-risk bugs, including 42{} with control flow hijacking primitives and 173{} with write primitives.
\squishend
\section{Background and Overview}
\subsection{Syzbot and Bug Reporting}
\label{sec:syzbot_bug_reporting}
As mentioned earlier, syzbot is a platform that continuously fuzzes the Linux mainline kernel branches.
The kernel version advances on a daily basis so that syzbot always fuzzes the latest version. All discovered bugs are not only sent to kernel developers and maintainers but also published on an open dashboard in real time~\cite{syzbot}. For every bug, the dashboard also includes valuable information such as bug reports (\emph{e.g.,}\xspace call trace, perceived bug impact), valid reproducers, the specific kernel source version where the bug was found, kernel configuration files, and patches (if available). This is a rich data source that is also suitable for automated analysis.
\begin{table}[t] \resizebox{\columnwidth}{!}{ \centering \begin{tabular}{|c|c|} \hline Bug type & Bug Impact \\ \hline \multirow{3}{*}{Sanitizer: KASAN} & use-after-free (UAF) \\ & out-of-bounds (OOB) \\ & double-free \\ \hline Sanitizer: KCSAN & data race \\ \hline Sanitizer: KMSAN & uninitialized use \\ \hline Sanitizer: UBSAN & variety* \\ \hline Kernel: WARNING / INFO / BUG & Assertions on any unexpected behaviors \\\hline Kernel: GPF & corrupted pointer dereference \\ \hline \multicolumn{2}{l}{* UBSAN (Undefined Behavior Sanitizer) can detect a variety of impacts} \\ \end{tabular}} \caption{Main impacts of bugs on syzbot} \label{tab:Main impacts of bugs on syzbot} \end{table}
\vspace{0.02in} \noindent\textbf{Bug detectors.} Syzbot uses the state-of-the-art kernel fuzzer Syzkaller~\cite{syzkaller}, which relies on handcrafted ``templates'' or specifications that encode various knowledge about syscalls, including their relationships (\emph{e.g.,}\xspace \texttt{open()} and \texttt{close()}) and the domain of syscall arguments. During fuzzing, test cases are generated according to the templates and mutated over time. Most importantly, there are two general mechanisms to catch bugs at runtime. First, syzbot leverages various sanitizers that instrument the kernel code to catch memory corruption bugs, including the Kernel Address Sanitizer (KASAN)~\cite{KASAN}, the Kernel Concurrency Sanitizer (KCSAN)~\cite{KCSAN}, and the Kernel Memory Sanitizer (KMSAN)~\cite{KMSAN}, each capable of catching a certain class of bugs (categorized by their impacts, \emph{e.g.,}\xspace use-after-free). The Undefined Behavior Sanitizer (UBSAN)~\cite{UBSAN} is a special sanitizer that was recently enabled and can detect a variety of bug impacts~\cite{UBSAN_enable}. Second, syzbot relies on the kernel itself with its built-in assertions such as BUG and WARNING, representing uncategorized errors and unexpected behaviors, as well as on exception handling, \emph{e.g.,}\xspace general protection faults (GPF) due to accessing invalid memory pages. Whenever a bug is discovered, it must be caught by one of these mechanisms. The details are listed in Table~\ref{tab:Main impacts of bugs on syzbot}. In fact, the title of a bug report is automatically generated according to the detection mechanism and the perceived bug impact. For example, the bug title ``KASAN: use-after-free Read in hci\_dev\_do\_open'' indicates that it is caught by KASAN and the perceived bug impact is use-after-free read.
\vspace{0.02in} \noindent\textbf{Limitations of the current bug detectors.} One principle currently embraced by fuzzers is that the execution of a buggy input is stopped as soon as any bug impact is discovered. This is because the goal of a fuzzer is to discover new bugs so that they can be fixed. When the first error is caught, it is pointless to let the program continue executing because it is already in a corrupted state.
Any subsequent buggy behaviors are further and further away from the root cause of a bug, and therefore do not really contribute to understanding and fixing the bug. However, this principle does not help with realizing the maximum impact of a bug. In fact, it is the opposite of what we need, because a bug can often lead to multiple impacts, some of which may not be immediately uncovered and may even require additional syscall invocations to manifest.
\subsection{High-Risk vs. Low-Risk Bug Impacts}
\label{sec:definition}
Based on the recent literature~\cite{DBLP:conf/uss/WuCXXGZ18,DBLP:conf/uss/ChenZLQ20,DBLP:conf/ndss/WuHML20, 236346} and the recent high-profile bugs that are exploited in practice~\cite{cve-2018-9568,cve-2019-2025,cve-2019-2215}, we define high-risk bug impacts to be the following:

1. Any UAF and heap OOB bugs\footnote{Subsequently, we refer to heap OOB bugs as OOB bugs for brevity} that lead to a \emph{function pointer dereference primitive}, which is effectively a control flow hijacking primitive that can likely lead to an end-to-end exploitation (\emph{i.e.,}\xspace arbitrary code execution in the kernel context)~\cite{236346}. Such primitives can arise, for example, when a function pointer located in a freed object or out-of-bounds memory is incorrectly dereferenced.

2. Any UAF and OOB bugs that lead to a \emph{write primitive}, including overwriting a freed object or an object out-of-bounds, and in general any writes to unintended locations and/or with unintended values (\emph{e.g.,}\xspace a write by dereferencing an unsafe data pointer). Write primitives, as opposed to reads, have the opportunity to corrupt control data (\emph{e.g.,}\xspace function pointers) and can be effectively turned into control flow hijacking as well. In addition, write primitives can be used for data-only attacks that achieve privilege escalation without explicitly altering the control flow (\emph{e.g.,}\xspace by modifying the uid of a process)~\cite{PT-Rand}.

3. Any invalid free bugs. This includes freeing a memory area that should not be freed or an already freed object (the latter corresponds to double-free bugs). Invalid frees can be turned into a UAF of multiple different candidate objects chosen by an adversary~\cite{slake}. With the freedom to choose among various candidate objects, the likelihood of finding a function pointer dereference or write primitive is high.

In contrast, a low-risk impact is defined to be anything other than the above. This includes any UAF or OOB bugs that lead to only read primitives (without write or function pointer dereference primitives), as well as any other impacts such as WARNING, INFO, BUG, and GPF, as defined in Table~\ref{tab:Main impacts of bugs on syzbot}. Finally, we will give a more detailed breakdown of the high-risk impacts by their primitives in \S\ref{sec:validation}.
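To make the distinction operational, the snippet below is a minimal Python sketch (ours, not part of syzbot or of SyzScope{}'s actual implementation) that buckets report titles according to the high-/low-risk split above:
\begin{lstlisting}[language=Python]
import re

# Keywords whose presence in a report title already signals a
# high-risk impact per the definitions above.
HIGH_RISK_KEYWORDS = ("write", "double-free", "invalid-free")

def classify(title):
    # Bucket a syzbot report title. Caveat: titles reflect only the
    # *first* impact observed by the detectors, which is exactly the
    # blind spot SyzScope addresses.
    t = title.lower()
    if any(k in t for k in HIGH_RISK_KEYWORDS):
        return "high-risk"
    if re.search(r"use-after-free|out-of-bounds", t):
        return "low-risk (read primitive)"
    return "low-risk (WARNING/INFO/BUG/GPF)"

print(classify("KASAN: slab-out-of-bounds Read in tcf_exts_destroy"))
# -> low-risk (read primitive)
\end{lstlisting}
Note that a function pointer dereference never appears in a title, even though it is high-risk by definition 1; this is one reason the title alone understates a bug's severity.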
\begin{figure}[t] \centering \begin{minipage}[t]{0.9\linewidth} \begin{lstlisting}[language=C] static void tcindex_free_perfect_hash(struct tcindex_data *cp) { for (int i = 0; i < <@\textcolor{red}{cp->hash}@>; i++) <@\textcolor{red}{tcf\_exts\_destroy}@>(<@\textcolor{mygreen}{\&cp->perfect[i].exts}@>); kfree(cp->perfect); } void tcf_exts_destroy(struct tcf_exts *<@\textcolor{mygreen}{exts}@>) { if (<@\textcolor{mygreen}{exts->actions}@>) <@\textcolor{red}{tcf\_action\_destroy}@>(<@\textcolor{mygreen}{exts->actions}@>); } int tcf_action_destroy(struct tc_action *<@\textcolor{mygreen}{actions[]}@>) { struct tc_action *a; for (i = 0; i < TCA_ACT_MAX_PRIO && <@\textcolor{mygreen}{actions[i]}@>; i++) { <@\textcolor{mygreen}{a = actions[i];}@> <@\textcolor{mygreen}{actions[i] = NULL};@> // AAW ret = <@\textcolor{red}{\_\_tcf\_idr\_release}@>(<@\textcolor{mygreen}{a}@>); } } int __tcf_idr_release(struct tc_action *<@\textcolor{mygreen}{p}@>) { if (<@\textcolor{red}{\_\_tcf\_action\_put}@>(<@\textcolor{mygreen}{p}@>, ...)) ... } static int __tcf_action_put(struct tc_action *<@\textcolor{mygreen}{p}@>, ...) { struct tcf_idrinfo *<@\textcolor{mygreen}{idrinfo}@> = <@\textcolor{mygreen}{p->idrinfo}@>; if (refcount_dec_and_mutex_lock(&<@\textcolor{mygreen}{p->tcfa\_refcnt}@>, &<@\textcolor{mygreen}{idrinfo->lock}@>)) { ... <@\textcolor{red}{tcf\_action\_cleanup}@>(<@\textcolor{mygreen}{p}@>); } } static void tcf_action_cleanup(struct tc_action *<@\textcolor{mygreen}{p}@>) { if (<@\textcolor{mygreen}{p->ops->cleanup}@>) <@\textcolor{red}{p->ops->cleanup(p);}@> // FPD } \end{lstlisting} \end{minipage} \begin{tablenotes} \small \item AAW = Arbitrary address write \item FPD = Function pointer dereference \end{tablenotes} \caption{A slab-out-of-bounds Read bug on syzbot} \label{fig:A example case of a slab-out-of-bounds Read bug} \end{figure}
\subsection{Motivating Example}
To illustrate why that is the case, we use a real bug from syzbot~\cite{case_study_3} as an example to demonstrate how it is possible to turn a low-risk \emph{slab-out-of-bounds read} bug into a control flow hijack exploit. As shown in Figure~\ref{fig:A example case of a slab-out-of-bounds Read bug}, the loop bound \texttt{cp->hash} (line 2) can be made larger than the size of the array \texttt{cp->perfect} (which resides on the heap), creating a potential OOB situation (the vulnerable object is marked in green). More precisely, at line 8, an OOB read access occurs via \texttt{exts->actions}, leading to a slab-out-of-bounds read. Syzkaller stops at line 8 and generates a bug report with the title of ``KASAN: slab-out-of-bounds Read in tcf\_exts\_destroy'' on syzbot.

Interestingly, even if we allow the execution to continue forward during fuzzing, it will almost always end up with an exception at line 14, because it is highly likely that \texttt{actions[i]} attempts to retrieve an element from an invalid address, \emph{i.e.,}\xspace it is equivalent to \texttt{*(actions+i)}. Note that \texttt{actions} itself can point to any random location because it was read out-of-bounds (see lines 8 and 9). As a result, one may come to the conclusion that this is indeed a low-risk bug. However, since the entire \texttt{exts} structure is out-of-bounds, it is actually possible for an attacker to control the data at an appropriate offset by spraying a number of objects nearby~\cite{DBLP:conf/uss/ChenZLQ20}.
Specifically, if the correct data is sprayed, \texttt{actions[i]} will retrieve an element from a valid address and prevent the kernel from crashing at line 14. After that, we are able to observe an arbitrary address write opportunity at line 16. This is because the pointer array \texttt{actions} comes from the OOB memory \texttt{exts}, which means \texttt{actions[i]} can potentially point to any arbitrary memory address. Even further down, we can see a control flow hijack opportunity arise at line 36 through a function pointer dereference, where the dereferenced function pointer \texttt{p->ops->cleanup} is controlled by an attacker as well (\texttt{p} is basically \texttt{actions[i]}). By now, we can conclude that this bug is actually very much high-risk and needs to be patched as soon as possible. Interestingly, no one seemed to have realized the potential impact of the vulnerability. As a result, the bug was silently fixed without any CVE being assigned, and the fix took almost 4 months (much longer than the average time-to-fix). The fact that there is no CVE assigned would also delay downstream kernels from applying the patch.
\begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{pics/workflow.pdf} \caption{Workflow} \label{fig:Workflow} \end{figure*}
\subsection{Goals and Non-Goals}
\label{sec:scope}
\noindent\textbf{Goals.} SyzScope{} aims to reveal high-risk impacts of a seemingly low-risk bug by analyzing subsequent behaviors after the first reported impact, as well as alternative paths where high-risk impacts may be located. As alluded to earlier, we believe there are two benefits provided by SyzScope{}. First, it will facilitate prioritized bug fixing. This is especially important given the number of kernel bugs that are discovered on a daily basis due to continuous fuzzing and the transparency offered by the syzbot platform. Second, it will speed up the patch propagation in Linux-derived ecosystems. Even when a patch is already available in Linux mainline, it can take months and even years to propagate to all its downstream kernels, \emph{e.g.,}\xspace Android~\cite{DBLP:conf/ndss/WuHML20,E-Fiber}.
\vspace{0.015in} \noindent\textbf{Non-goals.} SyzScope{} does NOT aim to produce an end-to-end exploit automatically, which would require the automation of a number of additional steps such as reliable heap feng shui and the bypass of various defenses including KASLR, SMEP, and SMAP~\cite{236346}. Therefore, we do not claim that the high-risk impacts we defined are 100\% exploitable. Instead, we \emph{aim to uncover as many such high-risk impacts as possible, within a reasonable time and resource budget}. The more high-risk impacts, the more likely an exploit can be generated. In fact, our work is complementary to the research on automated exploit generation. For example, KOOBE~\cite{DBLP:conf/uss/ChenZLQ20} is designed to take an OOB write bug and turn it into a control flow hijack primitive. SyzScope{} can turn a seemingly non-OOB bug or an OOB read into an OOB write such that KOOBE can take it further and prove its actual exploitability. Finally, SyzScope{} does NOT aim to evaluate bugs whose impacts are outside of the types listed in Table~\ref{tab:Main impacts of bugs on syzbot}.
\section{Design}
\label{sec:design}
In this section, we describe the design of SyzScope{}. The intuition driving the design is that uncovering more impacts of a bug is fundamentally a search problem.
Interestingly, even though a fuzzer is designed to essentially search through an input space as well, its goal is to optimize for maximum code coverage and/or number of bugs; it is never to maximize the impacts of a given bug. Therefore, SyzScope{} is designed to perform a much more targeted search starting from a PoC that already uncovers some impact of a bug.
\vspace{0.05in} \noindent\textbf{High-level workflow.} The workflow is depicted in Figure~\ref{fig:Workflow}, which contains three main components: \emph{Vulnerable Contexts Exploration}, \emph{Hidden Impacts Estimation}, and \emph{Validation and Evaluation}. They correspond to the three main techniques that we leverage and integrate together: fuzzing, static analysis, and dynamic symbolic execution. At a high level, given a PoC and its bug report that demonstrates some low-risk impact, we first start with a ``fuzzy'' approach by performing a targeted fuzzing campaign to explore other potential vulnerable contexts, \emph{i.e.,}\xspace additional impacts. This will allow us to cast a wider net early on. Second, given these additional PoCs and impacts (some may be high-risk already while others may still be low-risk), we then leverage static analysis to locate any potential high-risk impacts in alternative execution paths that we have not yet been able to reach during fuzzing. Finally, symbolic execution, guided by the static analysis, confirms whether these high-risk impacts are reachable in practice.
\subsection{Vulnerable Contexts Exploration}
\label{sec:design_fuzzing}
\begin{figure}[t] \centering \begin{minipage}[t]{1\linewidth} \begin{lstlisting}[language=C] void __rxrpc_put_peer(struct rxrpc_peer *peer) { struct rxrpc_net *rxnet = peer-><@\textcolor{mygreen}{local->rxnet}@>; ... } void rxrpc_send_keepalive(struct rxrpc_peer *peer){ ... whdr.epoch=htonl(peer-><@\textcolor{mygreen}{local->rxnet->epoch}@>); ... peer-><@\textcolor{mygreen}{local->socket->ops->sendmsg()}@> ... } \end{lstlisting} \end{minipage} \caption{Impacts of the same bug located in different syscalls} \label{fig:Buggy context may have different hidden impacts.} \end{figure}
As can be seen in Figure~\ref{fig:Workflow}, the \emph{vulnerable context exploration} component takes a seemingly low-risk bug (including a PoC and a corresponding bug report), and produces one or more new PoCs that exhibit additional impacts (either low-risk or high-risk). The problem with the original PoC is that it traverses only a single path in the kernel and can therefore be very limited in terms of impact coverage. By exploring more contexts (\emph{i.e.,}\xspace paths) associated with the bug, we are more likely to uncover additional impacts. In general, there are two logical possibilities for where additional impacts are located. First, they may be hidden directly behind the reported impact, \emph{i.e.,}\xspace in the same invocation of a syscall, as shown in Figure~\ref{fig:A example case of a slab-out-of-bounds Read bug}. Second, they may be triggered in a completely different syscall not present in the original PoC, and may even require additional syscalls to set up the state beforehand and the removal of existing syscalls to undo some state.

Figure~\ref{fig:Buggy context may have different hidden impacts.} illustrates a real case from syzbot (simplified) to show two different impacts of the same bug. In the original PoC, \texttt{peer->local} is freed accidentally before entering the function \texttt{\_\_rxrpc\_put\_peer()}, leading to a UAF read.
However, the PoC fails to discover the additional function pointer dereference, which is located in \texttt{rxrpc\_send\_keepalive()}. In addition, \texttt{rxrpc\_send\_keepalive()} is not a function reachable by \texttt{\_\_rxrpc\_put\_peer()}. Therefore, a good test case would need to insert an additional syscall (with the right arguments) after the syscall that triggered the free.

To this end, we leverage Syzkaller to mutate the original PoC and explore any missed additional impacts. As mentioned in \S\ref{sec:syzbot_bug_reporting}, the goal of a fuzzer is never to uncover a bug's maximum impact. However, Syzkaller is well suited to searching for new contexts that may be locked away behind a specific sequence of syscalls~\cite{syzkaller_intro}. More specifically, we define \textit{the original context} as the execution path exercised by the original PoC. \textit{Any new context} by definition must be associated with a different execution path that must not share the same initial primitive (UAF/OOB read or write). During vulnerable context exploration, we also log all the impacts along each path and attribute any high-risk impacts to the corresponding context.

Nevertheless, there are three challenges in making the search effective. First, as illustrated earlier in Figure~\ref{fig:A example case of a slab-out-of-bounds Read bug}, an earlier impact can block the execution from exploring deeper parts of the code and uncovering additional impacts. Second, even though a coverage-guided fuzzing strategy approximately aims to explore more code blocks and paths, it does not directly recognize the importance of impacts. In fact, it is possible that a new impact is triggered in an already explored path; \emph{e.g.,}\xspace an OOB access can happen in an array element access only when an argument (representing the array index) of a syscall becomes large enough. Third, we need to ensure that the mutated PoC, should it uncover any new impacts, is still exercising the same bug. Indeed, if we allow Syzkaller to mutate any part of the PoC, \emph{e.g.,}\xspace by removing a critical syscall or adding arbitrary syscalls, then it is entirely possible that the mutated test case will trigger a different bug altogether. To overcome these challenges, we make two important changes to Syzkaller and make an important observation.
\vspace{0.015in} \noindent\textbf{Impact-aware fuzzing.} To address the first challenge, we attempt to carry on the fuzzing session even when impacts are detected (by sanitizers or the kernel itself). Specifically, only a few impacts, such as a \emph{general protection fault} or some \emph{BUG} impacts, are irrecoverable errors (\emph{e.g.,}\xspace a NULL pointer dereference or a divide by zero); the others can simply be ignored safely. For example, when an OOB read impact occurs, we simply allow the kernel to read any data from out-of-bounds memory without panicking and continue executing. This way, it is possible that KASAN eventually catches another OOB write.

To address the second challenge, we note again that Syzkaller is an entirely coverage-guided fuzzer, \emph{i.e.,}\xspace its feedback metric is only the coverage. However, it tends to focus its attention and energy quickly on newly covered code, leaving covered but potentially buggy code space behind. Therefore, we introduce impact feedback. The idea is that Syzkaller will now consider a test case as a seed not only when it discovers any new coverage but also when it discovers any new impact.
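To make this feedback change concrete, the following is a minimal Python sketch of the seed-selection logic (Syzkaller's actual implementation is in Go, and all names here are ours for illustration):
\begin{lstlisting}[language=Python]
import heapq

class Seed:
    def __init__(self, prog, priority):
        self.prog = prog          # serialized syscall program
        self.priority = priority  # 0 = impact-inducing, mutated first
    def __lt__(self, other):
        return self.priority < other.priority

corpus = []           # priority queue of seeds awaiting mutation
seen_cov = set()      # basic blocks covered so far
seen_impacts = set()  # (impact_type, crash_site) pairs observed so far

def triage(prog, cov, impacts):
    # Keep a test case as a seed if it contributes new coverage OR
    # exhibits a previously unseen impact; impact-inducing seeds are
    # assigned the highest mutation priority.
    new_imp = impacts - seen_impacts
    if new_imp:
        seen_impacts.update(new_imp)
        heapq.heappush(corpus, Seed(prog, priority=0))
    elif cov - seen_cov:
        heapq.heappush(corpus, Seed(prog, priority=1))
    seen_cov.update(cov)
\end{lstlisting}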
As it is rare to discover test cases that uncover new impacts, we always assign a higher priority to them for mutation.
\vspace{0.015in} \noindent\textbf{Restricted-fuzzing space.} By default, Syzkaller considers all syscalls in the operating system as candidates to insert into a test case (during mutation). However, this is clearly undesirable for uncovering impacts of a specific bug. For instance, a bug in the \texttt{kvm} module should not allow the TCP/IP modules' syscalls, which will likely drive the test cases to cover code outside of \texttt{kvm}. Therefore, to address the third challenge, we restrict the syscalls to those Syzkaller templates that subsume the syscalls in the PoC. In addition, we preserve all the syscalls in the original PoC and allow only the insertion of syscalls. We also allow the mutation of arguments of existing syscalls. This strategy aims at preserving the root cause of a bug while still allowing new impacts to be discovered. If the above restricted-fuzzing strategy cannot uncover any new impacts or coverage (currently after a threshold of 5 minutes), we activate another, slightly more aggressive fuzzing strategy, where we relax the restrictions to allow syscalls against the entire module, \emph{e.g.,}\xspace all syscalls related to the networking module or the kvm module (one module can sometimes correspond to multiple templates). In addition, we also allow the removal of syscalls in the original PoC (though the latter is set to a very low probability).
\vspace{0.015in} \noindent\textbf{Side effects of fuzzing.} Unfortunately, this still results in test cases that trigger different bugs. In fact, as we later find out, close to half of the new impacts are associated with a completely unrelated bug, while the other half are indeed new impacts belonging to the original bug. Therefore, we choose to perform fuzzing only when we have a strategy to confirm whether a new impact belongs to the original bug. Specifically, where patches are already available, \emph{i.e.,}\xspace the second use case of SyzScope{} as discussed in \S\ref{sec:scope}, we develop a heuristic that can do so accurately. The details are deferred to \S\ref{sec:implementation}.
\subsection{Hidden Impacts Estimation}
\label{sec:design_static}
From the previous step, we have likely found more PoCs that uncovered additional impacts. However, it is possible that they still exhibit only low-risk impacts. After all, it is challenging for a fuzzer to explore deeper parts of the kernel code, especially when there are complex conditions that may prevent it from making progress. Therefore, we can invoke the hidden impact estimation component to determine whether further high-risk impacts beyond the low-risk ones are available. Specifically, given an existing low-risk impact, we leverage static analysis to conduct a type-specific search. At the moment, we support the analysis of two types of low-risk impacts, \emph{i.e.,}\xspace UAF read and OOB read. Again, we choose them because UAF and OOB are two of the most dangerous bug impact types. In order to discover high-risk impacts such as write or function pointer dereference impacts, we formulate the problem as a static taint analysis problem, as will be articulated later.

This component requires two inputs: a PoC that triggers one or more UAF/OOB read impacts, and the corresponding bug report. It then extracts two pieces of critical information from the report: (1) the vulnerable object and (2) the vulnerability point(s).
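As a rough illustration, the following Python sketch extracts this information from a (heavily simplified) KASAN report; real reports vary in format, and the regular expressions and report text here are ours:
\begin{lstlisting}[language=Python]
import re

# A heavily simplified KASAN report; real reports also contain a full
# call trace, allocation/free stacks, and memory-state dumps.
REPORT = """\
BUG: KASAN: slab-out-of-bounds in tcf_exts_destroy+0x2d/0xa0
Read of size 8 at addr ffff8880a8f24a80 by task syz-executor/6123
"""

def extract(report):
    impact = re.search(r"KASAN: ([\w-]+) in (\w+)", report)
    access = re.search(
        r"(Read|Write) of size (\d+) at addr ([0-9a-f]+)", report)
    return {
        "impact":        impact.group(1),  # e.g. slab-out-of-bounds
        "vuln_function": impact.group(2),  # first non-KASAN frame
        "access_type":   access.group(1),
        "access_size":   int(access.group(2)),
        "access_addr":   int(access.group(3), 16),
    }

print(extract(REPORT))
\end{lstlisting}
In reality, the size of the vulnerable object and the offset of the access are also recovered from the report (see \S\ref{sec:impl_static}); the sketch above only hints at the first step.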
A \emph{vulnerable object} is defined to be the freed object (in the UAF case) or the object that was intended to be accessed when the out-of-bounds access occurred (in the OOB case). A \emph{vulnerability point} is the statement where the use of a freed object (in the UAF case) or the OOB memory access (in the OOB case) occurs.

We first observe that the UAF/OOB read impacts are due to the read of some data that can potentially be controlled by an adversary, \emph{i.e.,}\xspace either a freed and refilled object or something sprayed next to the vulnerable object. This means that any subsequent use of the read memory can be potentially dangerous. Specifically, we consider dereferences of function pointers that are derived from such memory to be high-risk impacts, \emph{i.e.,}\xspace UAF/OOB function pointer dereferences (as mentioned in \S\ref{sec:scope}). Similarly, we consider any dereferences of data pointers that are derived from such memory to be high-risk impacts if they lead to memory writes (to an attacker-controlled address), \emph{e.g.,}\xspace \texttt{*data\_ptr = 0}. The second observation is that UAF/OOB write impacts are basically writes to the same memory planted by an adversary. For example, if a planted object's function pointer is overwritten due to a UAF/OOB write, it can potentially lead to arbitrary code execution, as line 32 in the example presented in Figure~\ref{fig:A example case of a slab-out-of-bounds Read bug} shows.

To summarize, the above high-risk impacts can be formulated as a static taint analysis problem. For pointer dereferences, the taint source will be the \emph{vulnerable object}. The sink would be the dereference of a pointer (either a function pointer or a data pointer that leads to a write). Whenever any source flows to a sink, we consider it a high-risk impact. It is slightly different for UAF/OOB writes. The taint source is the same. However, we do not need any sinks. Instead, whenever any value is written to the tainted memory, we consider it a high-risk impact. In the example presented earlier in Figure~\ref{fig:A example case of a slab-out-of-bounds Read bug}, line 16 is such a write to the OOB memory. In addition to reporting the additional impacts, we also record the branches that definitely need to be taken and the ones that definitely need to be avoided in order to reach them. This is later used to guide the symbolic execution. We defer the implementation details of the static taint analysis to \S\ref{sec:implementation}.
\subsection{Validation and Evaluation}
\label{sec:validation}
As shown in Figure~\ref{fig:Workflow}, this module takes a PoC with potentially new high-risk impacts and aims to achieve two goals: (1) validating the feasibility of reaching these high-risk impacts, and (2) evaluating the more fine-grained impact in terms of the enabled primitives, \emph{e.g.,}\xspace arbitrary value write vs. constrained value write.
\vspace{0.015in} \noindent\textbf{Validate the feasibility of high-risk impacts.} In this step, we symbolize the same vulnerable object, \emph{i.e.,}\xspace the taint source in the static analysis, dynamically at the point when the vulnerable object is first read, \emph{e.g.,}\xspace an OOB read or UAF read. The entire vulnerable object is symbolized under the assumption that an attacker can spray desired payloads onto the heap. However, a well-known limitation of symbolic execution is its scalability due to path explosion.
Therefore, we additionally leverage static analysis to guide the symbolic execution in two ways: (1) When an additional impact is found to be far away from the initial impact (40 basic blocks is our current threshold; see the details in \S\ref{sec:comp_by_comp}), we take the path guidance from the static analysis results in terms of which branches must and must not be taken. For example, an additional impact may be located only in a specific true branch, and therefore the false branch does not need to be explored. This avoids unnecessary exploration during symbolic execution and therefore speeds up the process. Currently, we choose to apply the guidance only when the additional high-risk impact is at least 40 basic blocks away intraprocedurally. This is because if the impacts are nearby, symbolic execution will have no problem locating them anyway; furthermore, we are not able to rule out potential false negatives from the static analysis, and it is therefore best to let symbolic execution explore all possible paths. (2) Based on the farthest potential high-risk impact, we limit the scope of the analysis up to that point. Otherwise, the symbolic execution may continue executing until the end of a syscall, taking significantly more time.
\vspace{0.015in} \noindent\textbf{Evaluate the fine-grained impacts.} As mentioned earlier in \S\ref{sec:definition}, the high-risk impacts can include three types of primitives: \textit{write}, \textit{func-ptr-deref}, and \textit{invalid free}. Below we discuss how each is detected and triaged.

(1) \emph{Overwriting symbolic memory}. The impact of such a write requires a more careful analysis. For example, when such an OOB write occurs, it is necessary to understand the offset of the write, the length of the write, and the data that can be written~\cite{DBLP:conf/uss/ChenZLQ20}. Then we can evaluate whether the write is flexible enough to overwrite a function pointer in a heap-sprayed object nearby. Since such an analysis is already done in \cite{DBLP:conf/uss/ChenZLQ20}, we simply refer to such writes as \emph{UAF write} or \emph{OOB write} and leave them to existing work for further triage.

(2) \emph{Write with symbolic data or write to a symbolic address}. When the write target or the data to be written is symbolic, we follow the classic characterization of a write primitive as ``write-what-where''~\cite{write-what-where}. For the ``what'' dimension, it will be either an ``arbitrary value'' or ``constrained value'' write, depending on the absence or presence of any symbolic constraints on the value. For the ``where'' dimension, it will be either an ``arbitrary address'' or ``constrained address'' write, again depending on the symbolic constraints of the address. More concretely, consider a write instruction \texttt{mov qword ptr [rax], rdi}. It attempts to store the data in \texttt{rdi} to an address stored in \texttt{rax}. If \texttt{rdi} is symbolic, it may be considered an arbitrary value or constrained value write. Similarly, if \texttt{rax} is symbolic, it may be considered an arbitrary address or constrained address write. Note that \texttt{KASAN} cannot detect such primitives because it is designed to catch the initial read of freed/OOB memory; detecting the subsequent propagation of such data via writes may require taint tracking.

(3) \textit{Dereferencing a symbolic function pointer} is detectable by monitoring the symbolic status of the function pointer.
As with the write primitives, \texttt{KASAN} can only detect the initial problematic read, and not the subsequent use (\emph{i.e.,}\xspace function pointer dereference) of the freed/OOB memory.

(4) \textit{Passing a pointer (either itself symbolic or pointing to symbolic memory) to a free function}. To detect invalid frees (including double frees), we examine the pointer argument of heap free functions such as \texttt{kfree()}. If the pointer itself is symbolic, it indicates that an attacker can control the memory that will be freed, corresponding to the invalid free case. If the pointer itself is not symbolic but the memory the pointer points to is, it is considered a double free (we group these two cases together as invalid frees).
\section{Implementation}
\label{sec:implementation}
In this section, we describe the details of each component in SyzScope{}. In total, the system has over 10K lines of code and is fully automated.
\subsection{Vulnerable Context Exploration}
\label{sec:impl_fuzzing}
There are several things worth describing in more detail in this component.
\vspace{0.015in} \noindent\textbf{Impact-aware fuzzing.} Syzkaller has two major components: (1) the fuzzer itself that runs on a target OS where test cases are executed (\emph{i.e.,}\xspace syz-fuzzer), and (2) a manager that runs outside of the target OS, overseeing the fuzzing process (\emph{i.e.,}\xspace syz-manager). The fuzzer is designed to mutate test cases locally and only send test cases that contribute to more coverage back to the manager. However, different from traditional coverage-guided fuzzing, our impact-aware fuzzing mutates a PoC that already leads to an impact, which essentially causes a crash of the entire target OS. This often leads to a burst of crashes, which means the fuzzer will lose the state of the mutation because of the crash. By default, the manager will simply log the test case, but no further mutation will be performed, because it is likely going to trigger the same bug over and over again. In our implementation, we enable the manager to remember the impact-inducing PoC and send it to the fuzzer for mutation (currently 500 times), preempting any other mutation of regular seeds. If a new impact is discovered, the corresponding PoC will then be treated similarly.
\vspace{0.015in} \noindent\textbf{Multiple impacts in a single PoC.} As mentioned earlier in \S\ref{sec:syzbot_bug_reporting}, Syzkaller by default allows only the first impact to be reported while ignoring all the rest. This means that if a PoC happens to trigger multiple bug impacts, \emph{e.g.,}\xspace one UAF read and another UAF write, the later impacts are hidden. This contradicts our goal of recovering more bug impacts. Therefore, we turn on a \texttt{KASAN} boot option called \texttt{kasan\_multi\_shot} when booting the kernel, which will present all impacts instead of only the first. However, there are some impacts that are not possible to ignore, \emph{e.g.,}\xspace a NULL pointer dereference, which would cause an irrecoverable crash. We can bypass some other assertion-related kernel panics by disabling \texttt{panic\_on\_warn} in the kernel boot options or some options in the kernel config like \texttt{CONFIG\_BUG\_ON\_DATA\_CORRUPTION}. Note that these debugging options are turned on specifically for fuzzing or debugging (as they are useful in catching errors). In practice, they are off by default in production settings.
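For concreteness, the following is a hypothetical sketch of how such a test VM could be booted with the options just discussed (\texttt{kasan\_multi\_shot} and \texttt{panic\_on\_warn} are the kernel boot parameters named above; the paths and remaining QEMU flags are illustrative, not SyzScope{}'s exact configuration):
\begin{lstlisting}[language=Python]
import subprocess

# Illustrative only: boot a fuzzing VM so that KASAN reports every
# impact (kasan_multi_shot) and WARNING assertions do not panic the
# kernel (panic_on_warn=0). Paths and resource sizes are placeholders.
CMDLINE = "root=/dev/sda console=ttyS0 kasan_multi_shot panic_on_warn=0"

def boot_vm(kernel="./arch/x86/boot/bzImage", disk="./rootfs.img"):
    return subprocess.Popen([
        "qemu-system-x86_64", "-m", "2G", "-smp", "2",
        "-kernel", kernel,
        "-drive", "file=%s,format=raw" % disk,
        "-append", CMDLINE,
        "-nographic",
    ])
\end{lstlisting}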
\label{sec:Confirming the impact belonging to the same bug}
\vspace{0.015in} \noindent\textbf{Confirming that the impact belongs to the same bug.} As described earlier in \S\ref{sec:design_fuzzing}, when a bug already has a patch available (we assume that these patches are correct), we use a heuristic based on the patch to test whether a new PoC and its new impacts (generated from fuzzing) are still a result of the same bug. The idea is as follows. If a new PoC can still trigger the same impact after the patch (as well as all prior commits) is applied, clearly the new PoC is triggering a different bug. However, if it no longer works, we cannot be sure whether it is the patch itself that breaks the PoC or one of the earlier commits. To deal with this ambiguity, instead of applying the patch and its prior commits up to that point, we attempt to apply the patch commit itself directly. If it can be successfully applied, \emph{i.e.,}\xspace git does not reject it and the kernel compiles/boots normally without breakage, then if the new PoC no longer works, we say that the PoC is indeed exploiting the same bug. Otherwise, if it cannot be successfully applied, we instead apply all the commits up to the patch (but not including the patch itself) and retest the PoC. If it can reproduce the impacts, then we know that these intermediate commits do not interfere with the bug triggering. Therefore, we know that the new PoC is still triggering the same bug, because earlier we have verified that it does not work against a patched kernel (with all prior commits applied as well). The one last remaining corner case is that the PoC cannot reproduce the impacts when we apply all the commits up to the patch (but not including the patch itself). In such cases, we act conservatively and simply give up on the new PoC because it remains ambiguous.
\subsection{Hidden Impacts Estimation}
\label{sec:impl_static}
Our static analysis engine is built on top of \texttt{DR. CHECKER}~\cite{DBLP:conf/uss/MachirySCSKV17}, which is flow-sensitive, context-sensitive, field-sensitive, and inter-procedural. We made some changes to adapt it to our scenario, as described below.
\vspace{0.015in} \noindent\textbf{Interfacing with the fuzzer result.} Recall that our static analysis engine takes a vulnerable object and vulnerability point(s) as input. Since DR. CHECKER is based on LLVM IR, the input would also be given at the LLVM IR level. However, the fuzzing result (\texttt{KASAN} report) includes such information at the binary level, thus requiring a mapping from the binary to the IR. Specifically, the report includes (1) the call trace that contains the vulnerability points that trigger the UAF/OOB bug (binary instruction addresses), and (2) the size of the vulnerable object and the offset at which the vulnerable object is accessed (in either UAF or OOB). With such information, we first try to locate the vulnerable function that triggered the impact in LLVM IR. Usually, the vulnerable function is the first function on the call trace besides KASAN-related ones. However, if the vulnerable function is inlined, we then resort to its caller (potentially recursively if multiple layers of inlining occur). To simplify the handling of inlined functions, we choose to compile the kernel using Clang without any inlining to generate the kernel bitcode. After locating the vulnerable function in LLVM IR, we leverage the Clang-generated debug information to map each IR instruction back to a corresponding source line.
Specifically, we look for a \textit{load} IR instruction that maps to the same source line number as what is reported for the vulnerable function in the \texttt{KASAN} report. Once we successfully locate the \textit{load} instruction that triggers the bug, we can retrieve its base pointer (the vulnerable object) and taint the object.
\vspace{0.015in} \noindent\textbf{Trace recorder.} In order to generate the branch guidance for symbolic execution (described in \S\ref{sec:design_static}), we record the detailed taint trace from an initial vulnerability point to the high-impact one, including the calling context and instruction of each taint propagation. This way, if the taint propagation occurs in a specific call sequence and a specific branch, we can precisely drive the dynamic symbolic execution accordingly.
\subsection{Validation and Evaluation} \label{sec:impl_validation}
There are two possible symbolic execution engines we can potentially use: S2E~\cite{s2e} and angr~\cite{angr}. S2E is a great candidate, as it is designed to support dynamic (in-vivo) symbolic execution, such that most of the memory locations are populated with runtime data. This limits the symbolized memory to a much smaller scope, \emph{i.e.,}\xspace the memory that is potentially controlled by an adversary. Unfortunately, S2E supports only a single CPU core in each QEMU instance~\cite{s2e_single_core}. Therefore, it has a major drawback in its ability to reproduce race condition bugs. In our preliminary experiments, we found that over half of the race condition bugs simply cannot be reproduced reliably within a reasonable time frame, \emph{i.e.,}\xspace an hour. Therefore, we decided to first reproduce bugs in a vanilla QEMU. Once the bug is successfully reproduced (\emph{i.e.,}\xspace the \texttt{KASAN} report is triggered), the breakpoint we set in the \texttt{KASAN} report function immediately freezes the memory of QEMU, and provides the corresponding CPU registers and the memory address of the vulnerable object (either a freed object or an object that is out-of-bounds) to angr. Whenever angr needs to access a memory location, we look up its actual value in the snapshot on demand.
\section{Evaluation} \label{sec:evaluation}
\subsection{Dataset and Setup} \label{sec:setup}
We evaluate SyzScope{} against the majority of low-risk bugs reported on syzbot. At the time of writing, there are 3,861 low-risk bug reports in total according to our definition. We exclude the ones detected by KCSAN, KMSAN, and UBSAN. This is because these sanitizers are either not mature enough yet and their bug reports lack critical information for us to continue the analysis, \emph{e.g.,}\xspace the vulnerable object (KMSAN and UBSAN), or their bugs have no valid reproducer (none of the KCSAN bugs has one). After this step, there are 3,267 remaining bug reports. Next, we filter out the bug reports that do not target the upstream Linux kernel (which is our main focus) and those that do not contain any reproducer (either a Syzkaller program or a C program). Then, we divide the dataset into ``fixed'' and ``open'' sections. For the fixed cases, since each bug report comes with a corresponding patch, we can deduplicate bug reports based on shared patches, as bug reports may look different even when their root causes are the same (unfortunately we are unable to do so for the bugs in the open section). In our tests, we pick only one bug report from each group that shares the same patch.
Specifically, we use the one with the highest-risk impact. For example, if a low-risk impact bug report (\emph{e.g.,}\xspace UAF read) happens to share the same root cause (\emph{i.e.,}\xspace the same patch) with a high-risk one (\emph{e.g.,}\xspace UAF write), we eliminate the entire group of reports because the corresponding bug should be recognized as high-risk already. If a WARNING bug report happens to share the same patch with a UAF read report, we pick the UAF one as input to SyzScope{}, because it already contains critical information such as the vulnerable object, allowing us to continue the analysis in the pipeline. Otherwise, we simply pick a bug report at random among the available ones (WARNING, INFO, BUG, or GPF), as there is no major difference. Finally, we obtain 1,170{} bug reports (after deduplication) and their corresponding reproducers. In our current configuration, to be conservative, we do not attempt the vulnerable context exploration step on bug reports in the open section, since we cannot verify that the new impacts still belong to the same bug (as discussed in \S\ref{sec:design_fuzzing}). All experiments are conducted on Ubuntu 18.04 with 1 TB of memory and two 20-core Intel(R) Xeon(R) Gold 6248 CPUs @ 2.50GHz. For each bug report and its reproducer, we allocate a single CPU core for at most 3 hours of kernel fuzzing, 1 hour of static analysis (it usually finishes within half an hour), and 4 hours of symbolic execution.
\begin{table*}[t] \centering \resizebox{\textwidth}{!}{ \begin{tabular}{@{}llllllrrrrrrr@{}} \toprule & \multirow{2}{*}{Initial Bug Impact} & \multirow{2}{4em}{Raw bug reports} & \multirow{2}{4em}{Valid bugs*} & \multirow{2}{4.5em}{High-risk bugs*} & \multirow{2}{4em}{High-risk impacts} & \multicolumn{7}{c}{High-risk impact breakdown by primitive type} \\ \cmidrule{7-13} & & & & & & UOW & AAW & CAW & AVW & CVW & FPD & IF\\ \midrule \multirow{3}{*}{Fixed} & GPF and BUG & 687 & 215 & 17 & 323 & 71 & 124 & 62 & 29 & 20 & 8 & 9\\ & WARNING and INFO & 918 & 293 & 15 & 379 & 85 & 166 & 66 & 20 & 30 & 9 & 3 \\ & UAF and OOB Read & 680 & 202 & 99 & 2866 & 319 & 1490 & 446 & 271 & 153 & 104 & 83\\\midrule \multirow{3}{*}{Open} & GPF and BUG & 225 & 83 & 4 & 6 & 4 & 0 & 0 & 0 & 0 & 0 & 2\\ & WARNING and INFO & 541 & 292 & 10 & 501 & 97 & 213 & 91 & 47 & 22 & 18 & 13\\ & UAF and OOB Read & 216 & 85 & 38 & 768 & 151 & 381 & 113 & 43 & 22 & 40 & 18\\\midrule & Sum & 3267 & 1170 & 183 & 4843 & 727 & 2374 & 778 & 410 & 247 & 179 & 128\\ \bottomrule \multicolumn{13}{l}{* Valid bugs and high-risk bugs in the fixed section are all unique ones (after deduplicating based on the bug reports) } \\ \end{tabular}} \begin{tablenotes} \small \item UOW= UAF/OOB write, AAW= Arbitrary address write, CAW= Constrained address write, AVW= Arbitrary value write \item CVW= Constrained value write, FPD= Function pointer dereference, IF= Invalid Free \end{tablenotes} \caption{Overall Results} \label{tab:Overall Results} \end{table*}
\subsection{Overall Results} \label{sec:overall_results}
In total, out of the 1,170{} low-risk bugs analyzed by SyzScope{}, we report that 183{} of them turn out to contain at least one high-risk impact. This is more than 15\% of all the bugs. Furthermore, out of the 183{} bugs, 173{} have at least one write primitive (\emph{e.g.,}\xspace UAF/OOB write, arbitrary address write), and 42{} have at least one func-ptr-deref primitive. We break the results down, first by open vs. fixed bugs, and then by the initial bug impact type.
The results are shown in Table~\ref{tab:Overall Results}. In addition, we categorize the high-risk impacts into 7 primitives, as defined in \S\ref{sec:validation} and listed in the table. First, comparing the results for open and fixed bugs, we can see that, in general, a higher percentage of low-risk bugs turn into high-risk ones in the fixed section than in the open section (18.4\% vs. 11.3\%). In addition, the average number of primitives for each bug in the fixed section is 27.2, versus 24.5 in the open section. This is because the open bugs did not go through the vulnerable context exploration phase, due to the concern that newly discovered contexts may belong to different bugs (as mentioned in \S\ref{sec:design_fuzzing}). As further evidence demonstrating the utility of fuzzing, we find that bugs in the fixed section have 1.33 contexts on average, compared to only 1 context per bug in the open section. Since each context has about 22 primitives on average, it is clear that fuzzing allows more primitives to be uncovered. Nevertheless, even without the help of fuzzing, SyzScope{} still managed to turn 52 open bugs from low-risk into high-risk. In addition, it is worth noting that bugs whose initial impacts are UAF/OOB read have a much higher chance of turning into high-risk cases. In the fixed section, 99 out of 202 UAF/OOB read cases can be turned into high-risk. Furthermore, the total number of high-risk impacts for just these 99 cases is 2866 (on average 29 per case). In the open section, 38 out of 85 UAF/OOB read cases (a slightly smaller fraction) can be turned into high-risk. GPF, BUG, WARNING, and INFO reports usually represent a diverse set of root causes that are masked by the kernel itself, or simply exceptions. In reality, a subset of these cases are really memory corruption bugs. Indeed, we are able to discover 46 high-risk cases out of 883 in these categories across the fixed and open sections. In general, GPF and BUG cases typically represent more serious bugs in the kernel (compared to WARNING and INFO), and some of these impacts can lead to exceptions that are difficult to bypass (as mentioned previously, \emph{e.g.,}\xspace NULL pointer dereference). As a result, we find that the number of new impacts discovered under them is very small in the open section, as we did not perform any fuzzing there. With respect to the impact primitives, we can see that the arbitrary address write has by far the highest number. This demonstrates the necessity of applying symbolic execution, as such impacts can only be uncovered by such an analysis. If one can write to an arbitrary address, it is typically a strong primitive (even if the write value is not arbitrary) that is highly exploitable, especially when combined with a read primitive~\cite{androidbinder}. The second most common types of primitives are constrained address write and arbitrary value write (778 and 410 cases in total), followed by UAF and OOB write (727 in total). Writing arbitrary values can be dangerous, but its exploitability depends highly on where we can write to. UAF and OOB writes are generally serious, as an attacker can often choose among multiple different objects (e.g., ones containing a function pointer) to overwrite~\cite{DBLP:conf/uss/ChenZLQ20,DBLP:conf/uss/WuCXXGZ18}. The func-ptr-deref primitives are relatively rare (179 cases in total) but highly exploitable~\cite{236346}.
We sampled a few cases for further analysis and managed to write three working PoCs that can successfully hijack the control flow of the kernel. One of them will be described in the case studies later. Finally, we find 128 invalid free primitives, which can be turned into adjustable UAF bugs, as mentioned in \S\ref{sec:definition}.
\subsection{Component by Component Analysis} \label{sec:comp_by_comp}
\begin{table}[t] \centering \setlength\tabcolsep{1pt} \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}lcccl@{}} \hline \multirow{2}{*}{Initial bug impact} & \multirow{2}{2.3cm}{High-risk bugs by fuzzing} & \multirow{2}{3cm}{Primitives found by fuzzing} & \multirow{2}{2.4cm}{Extra high-risk bugs by S\&S} & \multirow{2}{2.4cm}{Extra primitives found by S\&S}\\ & & & & \\ \hline GPF and BUG & 13 & 37 & 4 & 285 \\ WARNING and INFO & 11 & 33 & 4 & 346 \\ UAF and OOB Read & 42 & 128 & 57 & 2738 \\ \hline \end{tabular}} \begin{tablenotes} \small \item S\&S = Static analysis and symbolic execution \end{tablenotes} \caption{Result breakdown by components on fixed bugs} \label{tab:Results in detail} \end{table}
Now that we have shown the end-to-end results, we break them down by component to understand the contribution of each. Here we focus on fixed bugs only, because only for them were all the components of SyzScope{} applied. Specifically, we show the intermediate results obtained from fuzzing alone, and then the additional results from performing the static analysis and symbolic execution (S\&S) on top of fuzzing. As the main result shown in Table~\ref{tab:Results in detail}, we can see that 66 bugs are turned from low-risk to high-risk by fuzzing alone, and an extra 65 bugs are turned after the static analysis and symbolic execution (S\&S).
\vspace{0.015in} \noindent\textbf{Vulnerable Context Exploration.} Even though fuzzing is generally effective, due to the lack of the systematic path exploration performed by symbolic execution, the number of primitives found by fuzzing alone is limited (as shown in Table~\ref{tab:Results in detail}). This is because every context (\emph{i.e.,}\xspace path) provided by fuzzing is concrete, and thus only a limited number of primitives may be covered. Furthermore, as mentioned in \S\ref{sec:validation}, fuzzing relies on KASAN, which by design is unable to recognize the more indirect types of primitives such as arbitrary address write or control flow hijacking. This is the reason why we still continue with S\&S even if the bug is determined to be high-risk by fuzzing. For example, we may find a UAF write primitive through fuzzing, but through S\&S we may find an even more serious control flow hijacking primitive. Nevertheless, new contexts can be beneficial for S\&S, as they can be considered ``seeds'' fed into the S\&S to explore many more paths and uncover more primitives. As we mentioned in \S\ref{sec:overall_results}, there are on average 1.33 contexts per fixed bug from fuzzing. On a different note, we verified the heuristic we proposed in \S\ref{sec:impl_fuzzing} to confirm that the new impacts found through fuzzing indeed belong to the same bugs. To do so, we manually analyzed 10 sampled new impacts corresponding to 10 different bugs found by SyzScope{}, and were able to confirm that each of them is indeed caused by the same bug as the original impact.
\vspace{0.015in} \noindent\textbf{Hidden Impact Estimation.} Here, we evaluate the effectiveness of using static analysis to guide the symbolic execution.
In particular, we sampled 53 bugs and evaluated them with and without the guidance from static analysis. We observed that 16 bugs, with guidance, experienced a 12x-190x speedup compared to no guidance, which allowed the symbolic execution to finish in minutes as opposed to hours. In addition, we found 2 bugs whose high-risk impacts can be found only with guided symbolic execution; the unguided version simply finds nothing in four hours. However, we note that there is a fundamental tradeoff between running time and false negatives. This is because our static analysis is not sound and can potentially miss important primitives. Therefore, as mentioned in \S\ref{sec:validation}, we set a threshold of 40 basic blocks, and only primitives beyond the distance of 40 will trigger the guidance. We also evaluated the choice of the threshold by varying it among 30, 40, and 50. With a threshold of 30, we observe seven more false negative cases compared to the threshold of 40, due to the fact that the static analysis missed those primitives located at a distance between 30 and 40. With a threshold of 50, we find that two primitives at a distance between 40 and 50 take more than four hours to finish (as they become unguided). Note that this threshold is considered suitable for our experiment setup only. With more resources to expend on symbolic execution, the threshold can be further increased.
\vspace{0.015in} \noindent\textbf{Exploration and Validation.} As we mentioned earlier, on top of the 66 bugs that were turned from low-risk to high-risk by fuzzing alone, SyzScope{} turned an additional 65 bugs (via S\&S) from low-risk into high-risk, \emph{i.e.,}\xspace from zero primitives to at least one. The result shows that the different components are complementary to each other. Note that even if fuzzing has already found a primitive, we still apply the static analysis and symbolic execution to find even more primitives, to obtain a more complete picture of the bug impact. In fact, the majority of the high-risk primitives are attributed to S\&S, as shown in Table~\ref{tab:Results in detail}. We also considered the scenario where we omit the fuzzing phase altogether and apply S\&S directly to the original context provided by the reproducer. Interestingly, we find that it is still capable of turning all but two bugs into high-risk (albeit with only a subset of primitives discovered). This shows that many bugs have at least one primitive reachable from the original context.
\subsection{False Positives \& False Negatives} \label{sec:FPFN}
\textbf{False Positives.} By design, SyzScope{} confirms all the high-risk impacts dynamically through either fuzzing or dynamic symbolic execution, and therefore should not incur any false positives. Nevertheless, in practice, we do make implementation-level simplifications for scalability considerations that can potentially lead to false positives. For example, during the dynamic symbolic execution, we skip a list of common kernel functions (51 in total) such as \texttt{printk()} and \texttt{kasan\_slab\_free()} for performance reasons, which can potentially lead to unwanted side effects. Fortunately, we have not observed any false positives because of this.
\vspace{0.015in} \noindent\textbf{False Negatives.} It is expected that SyzScope{} will incur false negatives, as the solution is opportunistic in nature. To understand the extent of the problem, we manually inspected 83 bugs and noticed that the false negatives can come from three different sources.
\textit{(1) Failing to find more vulnerable contexts.} We observed 4 bugs with an incomplete set of primitives because SyzScope{} fails to find the appropriate vulnerable contexts; race conditions are the major reason in such cases. \textit{(2) Imprecise static analysis.} We observed 2 bugs that have an incomplete set of primitives because of imprecise results from the static analysis. \textit{(3) Timeout of symbolic execution.} We observed 3 bugs that have an incomplete set of primitives due to the early termination of symbolic execution (currently a 4-hour timeout); we discovered these cases by increasing the timeout to 16 hours. In summary, all of the above false negatives can potentially be reduced when given more computational resources, \emph{e.g.,}\xspace more fuzzing and symbolic execution time (without having to rely on the precision of static analysis). We also note that these 9 bugs were all turned into high-risk anyway, because SyzScope{} found other primitives for them.
\subsection{Case Studies}
\begin{figure}[t] \centering \begin{minipage}[t]{0.9\linewidth}
\begin{lstlisting}[language=C]
static int bfs_create(...){
    ...
    ino = <@\textcolor{red}{find\_first\_zero\_bit}@>(<@\textcolor{mygreen}{info->si\_imap}@>, info->si_lasti + 1);
    if (ino > info->si_lasti) {
        ...
        return -ENOSPC;
    }
    <@\textcolor{red}{set\_bit}@>(ino, <@\textcolor{mygreen}{info->si\_imap}@>);
    ...
}
unsigned long find_first_zero_bit(const unsigned long *<@\textcolor{mygreen}{addr}@>,
                                  unsigned long size)
{
    unsigned long idx;
    for (idx = 0; ...) {
        if (<@\textcolor{mygreen}{addr[idx]}@> != ~0UL)
            return min(idx * BITS_PER_LONG + ffz(<@\textcolor{mygreen}{addr[idx]}@>), size);
    }
    return size;
}
\end{lstlisting}
\end{minipage} \caption{Code snippet of an arbitrary address write.} \label{fig:Code snippet of a arbitrary address write} \end{figure}
\begin{figure}[t] \centering \begin{minipage}[t]{0.9\linewidth}
\begin{lstlisting}[language=C]
static void tcp_check_sack_reordering(struct sock *sk, ...)
{
    struct tcp_sock *<@\textcolor{mygreen}{tp}@> = tcp_sk(sk);
    ...
    fack = <@\textcolor{red}{tcp\_highest\_sack\_seq}@>(<@\textcolor{mygreen}{tp}@>);
    if (!before(low_seq, fack))
        return;
    <@\textcolor{mygreen}{tp->reordering}@> = min_t(... , ...);
    <@\textcolor{mygreen}{tp->reord\_seen++;}@>
    ...
}

static inline u32 tcp_highest_sack_seq(struct tcp_sock *<@\textcolor{mygreen}{tp}@>)
{
    if (!<@\textcolor{mygreen}{tp->sacked\_out}@>)
        return tp->snd_una;
    if (<@\textcolor{mygreen}{tp->highest\_sack}@> == NULL)
        return tp->snd_nxt;
    return TCP_SKB_CB(tp->highest_sack)->seq;
}
\end{lstlisting}
\end{minipage} \caption{Code snippet of a constrained value write.} \label{fig:Code snippet of a constrained value write} \end{figure}
In this section, we provide a few case studies to highlight the process through which SyzScope{} successfully converted low-risk bugs into high-risk ones. \textbf{OOB read $\Longrightarrow$ Arbitrary address write.} Figure \ref{fig:Code snippet of a arbitrary address write} shows a real case on syzbot titled ``KASAN: slab-out-of-bounds Read in find\_first\_zero\_bit''~\cite{case_study_1}. Starting at line 3, \texttt{info->si\_imap} was already out-of-bounds (we omit the earlier code); it is passed as an argument to \texttt{find\_first\_zero\_bit()}. Then, there is an OOB read impact caught by \texttt{KASAN} at line 16. However, if line 17 is executed next, which does occur during fuzzing, it will lead to a write impact in \texttt{set\_bit()} (invoked at line 8).
Since we allow the kernel to continue executing despite a bug impact being caught by \texttt{KASAN} (see \S\ref{sec:impl_fuzzing}), the write impact is reported during the fuzzing process. Specifically, \texttt{set\_bit()} takes \texttt{info->si\_imap} as a memory address and sets the bit at offset \texttt{ino}. During the symbolic execution (the scope of which is guided by static analysis), we can quickly confirm that the write impact is feasible, and determine that it is an arbitrary address write primitive, because the only constraint on \texttt{info->si\_imap} is \texttt{addr[idx] != ~0UL} at line 16. \textbf{UAF read $\Longrightarrow$ UAF write.} Figure \ref{fig:Code snippet of a constrained value write} shows a real use-after-free read bug~\cite{case_study_2} on syzbot: \texttt{tp} is a freed object, and KASAN catches the UAF read at line 15. There are two UAF write impacts at lines 8 and 9 afterwards. Due to the non-zero values of \texttt{tp->sacked\_out} at line 15 and \texttt{tp->highest\_sack} at line 17, \texttt{tcp\_highest\_sack\_seq()} consistently returns at line 19, which makes the condition at line 6 always true. This renders the two write impacts unreachable in the concrete execution. However, since we know the entire \texttt{tp} object was freed, we can in principle control the values in the object by heap spraying. Therefore, symbolic execution can determine the right values of \texttt{tp->sacked\_out} and \texttt{tp->highest\_sack} in order to reach the write impacts. Note that according to \S\ref{sec:validation}, these two write impacts are considered ``UAF write primitives'' because they write to symbolic memory (\emph{i.e.,}\xspace the freed object \texttt{tp}), as opposed to the above case study, where we write to a symbolic address. In order to exploit the bug further, we would need to spray an object that has a function pointer or data pointer which will be overwritten (due to lines 8 and 9).
\begin{figure}[t] \centering \begin{minipage}[t]{0.9\linewidth}
\begin{lstlisting}[language=C]
bool refcount_dec_and_mutex_lock(refcount_t *<@\textcolor{mygreen}{r}@>, struct mutex *<@\textcolor{mygreen}{lock}@>)
{
    if (refcount_dec_not_one(<@\textcolor{mygreen}{r}@>))
        return false;
    ...
    return true;
}

bool refcount_dec_not_one(refcount_t *<@\textcolor{mygreen}{r}@>)
{
    unsigned int new, <@\textcolor{mygreen}{val}@> = atomic_read(&<@\textcolor{mygreen}{r->refs}@>);
    do {
        if (unlikely(<@\textcolor{mygreen}{val}@> == UINT_MAX))
            return true;
        if (<@\textcolor{mygreen}{val}@> == 1)
            return false;
        <@\textcolor{mygreen}{new}@> = <@\textcolor{mygreen}{val}@> - 1;
        if (<@\textcolor{mygreen}{new}@> > <@\textcolor{mygreen}{val}@>) {
            ...
            return true;
        }
    } while (...);
    return true;
}
\end{lstlisting}
\end{minipage} \caption{Code snippet of refcount\_dec\_and\_mutex\_lock and refcount\_dec\_not\_one} \label{fig:Code snippet of refcount_dec_and_mutex_lock} \end{figure}
\label{case study:exploit case study} \textbf{UAF/OOB read $\Longrightarrow$ func-ptr-deref}. Figure \ref{fig:A example case of a slab-out-of-bounds Read bug} illustrates a real case~\cite{case_study_3} on syzbot initially marked as an OOB read; it was already described as the motivating example. To exploit the bug, we need to reach line 36 to dereference a function pointer from the out-of-bounds object or its derived objects, and then we can hijack the control flow. Again, because the fuzzing process is blocked at line 14 by an exception, symbolic execution is required to provide a legitimate value of \texttt{actions}.
In addition, there is another condition at line 28 regarding the return value of \texttt{refcount\_dec\_and\_mutex\_lock()}, which has to be true. Figure \ref{fig:Code snippet of refcount_dec_and_mutex_lock} illustrates the internals of the function. Here, the parameters \texttt{r} and \texttt{lock} still point into out-of-bounds memory, and therefore take symbolic values. To determine the right values for them, we need to follow into \texttt{refcount\_dec\_not\_one()} and make sure that it returns false, so that \texttt{refcount\_dec\_and\_mutex\_lock()} ultimately returns true at line 6; we determined that \texttt{val} should be 1 for this to happen. As we can see, this example is more convoluted, involving more conditions and more path exploration by symbolic execution. As a result, the symbolic execution is guided according to the direction at each condition, and eventually finishes the exploration in only 18 seconds. Without guidance from static analysis, symbolic execution is unable to find this function pointer dereference in 2 hours. We managed to write an actual exploit that performs the heap spraying and shows that we can hijack the control flow, \emph{i.e.,}\xspace execute code at an arbitrary address.
\subsection{Disclosure}
In this section, we validate the utility of SyzScope{} in terms of helping with timely patch propagation to downstream kernels (for fixed bugs), as well as enabling the prioritization of bug fixes upstream (for open bugs).
\vspace{0.015in} \noindent\textbf{Downstream kernels.} We initially attempted to report to downstream kernels (\emph{e.g.,}\xspace Ubuntu) and have them apply patches from upstream after we determined that the patches were missing. This is unfortunately time-consuming, as we have to manually analyze the source to determine which downstream kernels are actually affected by a bug. Eventually, we followed the suggestion of an Ubuntu kernel developer~\cite{discuss_with_ubuntu} and resorted to the CVE Numbering Authority. Interestingly, we observe that the majority of the syzbot bugs do not have any CVEs assigned to them. Thus, if we are able to successfully request CVEs for the high-risk bugs, we can potentially benefit all the distributions that follow the CVE database. We reported all 32 high-risk bugs after Linux kernel v4.19. At the time of writing, 8 of them have already been assigned CVEs, and we have not heard back regarding the remaining 24. As an example demonstrating the effectiveness of this strategy, the publication of CVE-2021-33034 has led to immediate actions by Ubuntu~\cite{Ubuntu_cve_2021-33034}, Fedora~\cite{fedora_cve_2021-33034}, RedHat~\cite{Red_Hat_cve_2021-33034}, and Debian~\cite{Debian_cve_2021-33034}.
\vspace{0.015in} \noindent\textbf{Upstream kernels.} Since CVE assignment is predicated on the availability of patches, we had to report our findings directly to upstream kernel developers. This process is trickier than we anticipated. In total, we have reported 6 bugs in the open section out of the total 34 high-risk bugs determined by SyzScope{}. We reported only a subset because it turns out that most open bugs already have a pending fix (which needs time to be confirmed effective). At the time of writing, we have seen two patches submitted because of our reporting (with email replies to the same thread), demonstrating the positive influence on getting important security bugs fixed.
During the process of reporting, while we have received appreciation for our project and reports, we have also learned that there is a serious lack of resources for fixing syzbot-generated bugs, which is likely the reason why we have not received responses for the other 4 bugs. Nevertheless, we did receive some positive feedback on our research~\cite{SyzScope_debate_positive}.
\section{Discussion} \label{sec:discussion}
\vspace{0.018in} \noindent\textbf{Evaluating bugs that are already high-impact.} In this project, we take only low-risk bugs as input to SyzScope{}. However, the results show that the number of high-risk impacts associated with a single bug can be extremely high. Specifically, among the 183{} high-risk bug cases, we find 4,843 high-risk impacts. Moreover, not all high-risk impacts are equal, \emph{e.g.,}\xspace func-ptr-deref being the most dangerous. Therefore, even if a bug already exhibits some high-risk impact, \emph{e.g.,}\xspace OOB write, we can still feed it to SyzScope{} to uncover even more high-risk impacts. This can be beneficial if the goal is to further determine the exploitability of a bug.
\vspace{0.018in} \noindent\textbf{Supporting more types of bug impacts.} SyzScope{} currently focuses on modeling OOB and UAF, starting from the hidden impact estimation component, \emph{i.e.,}\xspace identifying the vulnerable objects and symbolizing the memory that the adversary can control. However, there are still other bug types, such as uninitialized uses of memory, which we can support in the future.
\vspace{0.018in} \noindent\textbf{Interfacing with other exploitability testing systems.} SyzScope{} is complementary to projects that aim to automatically or semi-automatically evaluate the exploitability of kernel bugs. Fuze~\cite{DBLP:conf/uss/WuCXXGZ18} and KOOBE~\cite{DBLP:conf/uss/ChenZLQ20} are two representative projects that target UAF and OOB bugs, respectively. For example, KOOBE~\cite{DBLP:conf/uss/ChenZLQ20} can work only with bugs that already exhibit an OOB write impact, and ignores OOB read bugs by design. According to our results, SyzScope{} is able to turn 32 OOB read bugs into OOB write ones, which would allow KOOBE to almost double the number of cases that it has evaluated against.
\vspace{0.018in} \noindent\textbf{Pending patches for Syzbot open bugs.} Syzbot recently added a new feature called \textit{Patch testing requests}. This feature allows developers and maintainers to upload patches for bug testing; if a patch manages to eliminate the bug, syzbot releases the patch on the dashboard, and the patch can then be merged into the upstream. This feature speeds up the patching process by automating the testing of patches. We note that it is possible to use those pending patches in the context exploration process when it comes to eliminating unrelated bugs. However, we chose not to do this, because such patches are not yet officially accepted into Linux and can potentially lead to misleading results.
\section{Related Work}
\noindent\textbf{Kernel fuzzing.} There are several prominent general-purpose kernel fuzzers that have been developed to discover security bugs~\cite{syzkaller, TriforceLinuxSyscallFuzzer, trinity}. More recently, many projects have focused on improving various aspects of kernel fuzzing~\cite{DBLP:conf/uss/PailoorAJ18, DBLP:conf/ndss/KimJKJSL20, DBLP:conf/sp/JeongKSLS19, DBLP:conf/sp/XuKZK20}. For example, MoonShine~\cite{DBLP:conf/uss/PailoorAJ18} aims to provide highly distilled seeds to bootstrap the fuzzing process.
HFL~\cite{DBLP:conf/ndss/KimJKJSL20} proposed a hybrid fuzzing solution to address a few weaknesses encountered in coverage-guided fuzzing, \emph{e.g.,}\xspace identifying explicit dependencies among syscalls. Razzer~\cite{DBLP:conf/sp/JeongKSLS19} and KRACE~\cite{DBLP:conf/sp/XuKZK20} improve the fuzzing logic specifically against race condition bugs. Unfortunately, these general-purpose or custom kernel fuzzers all target uncovering more bugs quickly instead of understanding the security impacts of reported bugs (which are often manually investigated afterwards). Finally, in addition to syzbot, continuous fuzzing against Linux kernels has become a common practice in the industry~\cite{DBLP:conf/sigsoft/ShiWFWSJSJS19}.
\vspace{0.018in} \noindent\textbf{Security impact of Linux kernel bugs and crash deduplication.} A recent talk at the Linux Security Summit by Dmitry Vyukov has shown that some seemingly low-risk bugs from syzbot turn out to be of high risk~\cite{Dmitry_Vyukov}. However, no systematic and automated solution has been proposed to understand this phenomenon on a large scale. A closely related work~\cite{DBLP:conf/ndss/WuHML20} aims to infer the security impact of a bug by analyzing its patch (as opposed to a bug reproducer and a sanitizer-generated bug report). Unfortunately, due to the limited information provided by a patch, they were able to identify only 227 patches that fix security bugs (with only 243 security impacts identified) after scanning 54,000 commits. This is in contrast with the 4,843 security impacts that we discover from only 183{} low-risk bugs. We believe the incomplete result is partly due to the lack of runtime information, which forces the analysis to be constrained at a local scale. Furthermore, there is no differentiation between high-risk and low-risk impacts, which is a key goal of SyzScope{}. Finally, there have been many studies on determining the security impact of bugs based on text analysis (\emph{e.g.,}\xspace of bug report descriptions) using data mining and machine learning~\cite{mining_bug_databases,6798341,8424985,graduate_thesis_reports,vulnerability_identification}.
\vspace{0.018in} \noindent\textbf{Exploitability testing.} Recent work has attempted to turn different types of Linux kernel security bugs into actual exploits in an automated or semi-automated fashion. Fuze~\cite{DBLP:conf/uss/WuCXXGZ18} and KOOBE~\cite{DBLP:conf/uss/ChenZLQ20} are two representative projects that target use-after-free (UAF) and out-of-bounds (OOB) bugs, respectively. They take in a proof-of-concept (PoC) that can trigger a bug (often only crashing the kernel) as input, and then conduct various analyses to convert the PoC into an exploit. In addition, there is related work on exploiting other types of security bugs, such as uninitialized uses of stack variables~\cite{stack_uninit}, which we do not support in our current implementation because the corresponding \texttt{KMSAN} sanitizer does not provide sufficient details in its bug reports. Finally, there are also works that aim to assist the process of generating an exploit~\cite{slake,236346}. All of these are complementary to SyzScope{}: our system can be considered a frontend that interfaces with these systems as backends. In addition to kernel exploits, there is also related work on analyzing the exploitability of userspace heap bugs~\cite{heaphopper, auto-heap-exploit}.
\section{Conclusion}
In this paper, we perform a systematic investigation of fuzzer-exposed bugs on the syzbot platform. In order to conduct such an analysis, we develop an automated pipeline that combines fuzzing, static analysis, and symbolic execution to reveal high-risk security impacts of seemingly low-risk bugs. The system can be easily integrated into the syzbot platform to continuously evaluate newly discovered bugs. After analyzing over one thousand such bugs, we demonstrate that 183{} of them can be turned into high-risk bugs. The results have important implications for patch prioritization and propagation moving forward. To facilitate reproduction and future research, we open-source SyzScope{} at \url{https://github.com/seclab-ucr/SyzScope}.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{INTRODUCTION}\label{1} Over the past decade, the number of Lyman break galaxies \citep[LBGs; for a review, see][]{giavalisco02} identified at $z\sim3-6$ has grown rapidly from deep, wide-field optical imaging surveys \citep[e.g.,][]{steidel99,bouwens06,yoshida06}. Follow-up spectroscopy on large telescopes has shown that this method (called the Lyman break technique or the ``drop-out'' method) is efficient at identifying high-$z$ star-forming galaxies. Furthermore, these studies have measured the cosmic star-formation history (SFH) at $z>3$, which is key for understanding galaxy evolution. They indicate that the star-formation rate (SFR) density is 10 or more times higher in the past than at $z\sim0$.\\ \indent Extending the Lyman break technique to $z<3$ requires deep, wide-field UV imaging from space, which is difficult. In addition, [\ion{O}{2}] (the bluest optical nebular emission line) is redshifted into the near-infrared (NIR) for $z\gtrsim1.5$, where high background and lower sensitivity limit surveys to small samples \citep[e.g.,][]{malkan96,moorwood00,werf00,erb03}. The combination of these observational limitations has made it difficult to probe $z\approx1.5-2.5$.\\ \indent One solution to the problem is the `BX' method developed by \cite{adelberger04}. This technique identifies blue galaxies that are detected in $U$, but show a moderately red $U-G$ color when the Lyman continuum break begins to enter the $U$-band at $z\sim2$.\\ \indent Other methods have used NIR imaging to identify galaxies at $z=1-3$ via the Balmer/4000\AA\ break. For example, selection of objects with $J-K>2.3$ (Vega) has yielded ``distant red galaxies'' at $z\sim2-3$ \citep{dokkum04}, and the `{\it BzK}' method has found passive and star-forming (dusty and less dusty) galaxies at $z\approx1.5-2.5$ \citep{daddi04,hayashi07}. The completeness of these methods is not as well understood as that of UV-selected techniques, since only limited spectra have been obtained.\\ \indent In this paper, the Lyman break technique is extended down to $z\sim1.8$ with wide-field, deep {\it NUV}~imaging of the Subaru Deep Field (SDF) with the Galaxy Evolution Explorer \citep[\galex;][]{martin05}. This survey has the advantage of sampling a large contiguous area, which allows for large-scale structure studies (to be discussed in future work), an accurate measurement of a large portion of the luminosity function, and a determination of whether the SFH peaks at $z\sim2$.\\ \indent In \S~\ref{2}, the photometric and spectroscopic observations are described. Section~\ref{3} presents the color selection criteria used to produce a photometric sample of {\it NUV}-dropouts, which are objects undetected or very faint in the {\it NUV}, but present in the optical. The removal of foreground stars and low-$z$ galaxy contaminants, and the sample completeness, are discussed in \S~\ref{4}. In \S~\ref{5}, the observed UV luminosity function (LF) is constructed from $\sim$7100 {\it NUV}-dropouts in the SDF, and the comoving star-formation rate (SFR) density at $z=1.8-2.8$ is determined. Comparisons of these results with previous surveys are described in \S~\ref{6}, and a discussion is provided in \S~\ref{7}. The appendix includes a description of objects with unusual spectral properties. A flat cosmology with [$\Omega_{\Lambda}$, $\Omega_M$, $h_{70}$] = [0.7, 0.3, 1.0] is adopted for consistency with recent LBG studies. All magnitudes are reported on the AB system \citep{oke74}.
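For reference, the comoving volume implied by this cosmology over the redshift window of the LBG sample can be computed with standard tools. The following minimal Python sketch (assuming the \texttt{astropy} package and the SDF survey area given in \S~\ref{2.1}) illustrates the calculation:

\begin{lstlisting}[language=Python]
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Adopted cosmology: [Omega_L, Omega_M, h_70] = [0.7, 0.3, 1.0]
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

# SDF area (see Section 2.1) and the z = 1.8-2.8 window of the sample
area = 857.5 * u.arcmin**2
sky_fraction = (area / (4.0 * np.pi * u.sr)).decompose()

# Comoving volume between the two redshift shells, scaled to the FOV
v_shell = cosmo.comoving_volume(2.8) - cosmo.comoving_volume(1.8)
v_survey = (v_shell * sky_fraction).to(u.Mpc**3)
print("Survey comoving volume: {:.2e}".format(v_survey))
\end{lstlisting}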
\section{OBSERVATIONS}\label{2} This section describes the deep {\it NUV}~data obtained (\S~\ref{2.1}), followed by the spectroscopic observations (\S~\ref{2.2} and \ref{2.4}) from Keck, Subaru, and MMT (Multiple Mirror Telescope). An objective method for obtaining redshifts, cross-correlating the spectra with templates, is presented (\S~\ref{2.3}) and confirms that most {\it NUV}-dropouts are at $z\sim2$. These spectra are later used in \S~\ref{3.2} to define the final empirical selection criteria for $z\sim2$ LBGs. A summary of the success rate for finding $z\sim2$ galaxies as {\it NUV}-dropouts is included. \subsection{\galex/{\it NUV}~Imaging of the SDF}\label{2.1} The SDF \citep{kashik04}, centered at $\alpha$(J2000) = 13$^{\rm h}$24$^{\rm m}$38\fs9, $\delta$(J2000) = +27\arcdeg29\arcmin25\farcs9, is a deep wide-field (857.5 arcmin$^2$) extragalactic survey with optical data obtained from Suprime-Cam \citep{miyazaki02}, the prime-focus camera mounted on the Subaru Telescope \citep{iye04}. It was imaged with \galex~in the {\it NUV}~($1750-2750$\AA) between 2005 March 10 and 2007 May 29 (GI1-065) with a total integration time of 138176 seconds. A total of 37802 objects are detected in the full {\it NUV}~image down to a depth of $\approx$27.0 mag (3$\sigma$, 7.5\arcsec~diameter aperture). The \galex-SDF photometric catalog will be presented in future work. For now, objects undetected or faint ($NUV>25.5$) in the {\it NUV}~are discussed.\\ \indent The {\it NUV}~image did not require mosaicking to cover the SDF, since the \galex~field-of-view (FOV) is larger than the SDF and the center of the SDF is located at (+3.87\arcmin, +3.72\arcsec) from the center of the {\it NUV}~image. The {\it NUV}~spatial resolution (FWHM) is 5.25\arcsec, and was found to vary by no more than 6\% across the region of interest \citep{morrissey07}. \subsection{Follow-up Spectroscopy}\label{2.2} \subsubsection{Keck/LRIS}\label{2.2.1} When objects for Keck spectroscopy were selected, the {\it NUV}~observations had accumulated 79598 seconds. Although the selection criteria and photometric catalog are revised later in this paper, a brief description of the original selection is provided here, since it is the basis for the Keck sample. An {\it initial} {\it NUV}-dropout catalog (hereafter ver. 1) of sources with $NUV-B>1.5$ and $B-V<0.5$ was obtained. No aperture correction was applied to the 7.5\arcsec~aperture {\it NUV}~flux, and the 2\arcsec~aperture was used for optical photometry. These choices differ from the final selection discussed in \S~\ref{3.2}. The {\it NUV}~3$\sigma$ limiting magnitude for the ver. 1 catalog is 27.0 within a 3.39\arcsec~radius aperture. Postage stamps (see Figure~\ref{postage}) were examined for follow-up targets to ensure that they are indeed {\it NUV}-dropouts. \begin{figure} \epsscale{1.0} \plotone{f1.eps} \caption{Postage stamps for some {\it NUV}-dropouts targeted with LRIS. From left to right are the {\it NUV}, $B$, and $V$ images. Each image is 24\arcsec~on a side and reveals that the optical sources do not have a {\it NUV}~counterpart. Photometric and spectroscopic information is provided in Table~\ref{table1}.} \label{postage} \end{figure} The Keck Low Resolution Imaging and Spectrograph \citep[LRIS;][]{oke95} was used to target candidate LBGs in multi-slit mode on 2007 January 23$-$25. The total integration times were either 3400, 3600, or 4833 seconds, and 36 {\it NUV}-dropouts were targeted within 3 slitmasks.
A dichroic beam splitter was used with the 600 lines mm$^{-1}$ grism blazed at 4000\AA\ and the 400 lines mm$^{-1}$ grating blazed at 8500\AA, yielding blue (red) spectral coverage of $3500-5300$\AA\ ($6600-9000$\AA), although the coverage varied with location along the dispersion axis. The slits were 4\arcsec~to 8\arcsec~in length and 1\arcsec~in width, yielding spectral resolutions of $\approx$0.9\AA\ at 4300\AA\ and $\approx$1.2\AA\ at 8000\AA.\\ \indent Standard methods for reducing optical spectra were followed in PyRAF, where an IRAF script, developed by K. Adelberger to reduce LRIS data, was used. When reducing the blue spectra, dome flat-fields were not used due to the known LRIS ghosting problem. Other LRIS users have avoided flat-fielding their blue spectra, since the CCD response is mostly flat (D. Stern, priv. comm.).\\ \indent HgNe arc-lamps were used for wavelength calibration of the blue side, while OH sky-lines were used for the red side. The typical wavelength RMS was less than 0.1\AA. For flux calibration, long-slit spectra of BD+26 2606 \citep{oke83} were obtained following the last observation of each night.\\ \indent In the first mask, three of five alignment stars had coordinates that were randomly off by as much as 1\arcsec~from the true coordinates. These stars were taken from the USNO catalog, whereas the better alignment stars were from the 2MASS catalog, with offsets of a few tenths of an arcsecond. This hindered accurate alignment, and resulted in a lower success rate of detection: the first mask had 7 of 12 {\it NUV}-dropouts that were {\it not} identified, while the other two masks had 2/10 and 3/14. \subsubsection{MMT/Hectospec}\label{2.2.2} Spectra of {\it NUV}-dropouts from the final photometric catalog were obtained with the multifiber optical spectrograph Hectospec \citep{fabricant05} on the 6.5m MMT on 2008 March 13 and April 10, 11, and 14. Compared to Keck/LRIS, MMT/Hectospec has a smaller collecting area and lower throughput in the blue, so fewer detections were anticipated. Therefore, observations were restricted to bright ($V_{\texttt{auto}} = 22.0-23.0$) sources, which used 21 of 943 fibers from four configurations. Each source was observed in four, six, or seven 20-minute exposures using the 270 lines mm$^{-1}$ grating. This yielded a spectral coverage of $4000-9000$\AA\ with 6\AA\ resolution. The spectra were wavelength calibrated and sky-subtracted using the standard Hectospec reduction pipeline \citep{fabricant05}. A more detailed discussion of these observations is deferred to a forthcoming paper (Ly et al. 2008, in prep.). \subsection{Spectroscopic Identification of Sources}\label{2.3} The IRAF task \texttt{xcsao} from the \textsc{rvsao} package \citep[ver. 2.5.0]{kurtz98} was used to cross-correlate the spectra with six UV spectral templates of LBGs. For cases with Ly$\alpha$~in emission, the composite of 811 LBGs from \cite{shapley03} and the two top quartile bins (in Ly$\alpha$~equivalent width) of \cite{steidel03} were used. For sources lacking Ly$\alpha$~emission (i.e., pure absorption-line systems), the spectra of MS 1512-cB58 (hereafter `cB58') from \cite{pettini00}, and the two lowest quartile bins of \cite{steidel03} were used.\\ \indent When no blue features were present, the red end of the spectrum was examined. An object could still be at $z>1.5$, but at a low enough redshift for Ly$\alpha$~to be shortward of the spectral window. In this case, rest-frame NUV features, such as \ion{Fe}{2} and \ion{Mg}{2}, are available.
\citet{savaglio04} provided a composite rest-frame NUV spectrum of 13 star-forming galaxies at $1.3<z<2$. For objects below $z\approx1.5$, optical features are available to determine the redshift. The composite SDSS spectra ($3500-7000$\AA\ coverage) from \citet{yip04} and those provided with \textsc{rvsao} ($3000-7000$\AA) are used for the low-$z$ cases. Note that in computing redshifts, several different initial guesses were made to determine the global peak of the cross-correlation. In most cases, the solutions converged to the same redshift even when the initial guesses were very different. The exceptions are classified as `ambiguous'.\\ \indent Where spectra had poor S/N, a redshift was obtained for the source, but the reliability of the identification (as given by \texttt{xcsao}'s $R$-value) was low ($R=2-3$). An objective test, performed to determine which $R$-values are reliable, was to remove the Ly$\alpha$~emission from those spectra and templates, and then re-run \texttt{xcsao} to see what $R$-values are obtained based on the absorption line features in the spectra. Among 10 cases (from LRIS spectroscopy), 6 were reconfirmed at a similar redshift ($\Delta z = 2.4\times10^{-4}-1.5\times10^{-3}$)\footnote[11]{This is lower, but still consistent with differences between emission and absorption redshifts of 650 km s$^{-1}$ for LBGs \citep{shapley03}.} with $R$-values of 2.30$-$7.07. This test indicates that a threshold of $R=2.5$ is reasonable for defining whether the redshift of a source (lacking emission lines) is determined. This cut is further supported by \cite{kurtz98}, who found that the success of determining redshifts at $R=2.5-3$ is $\sim$90\%. However, to obtain more reliable redshifts, a stricter $R=3.0$ threshold is adopted. If an $R=2.5$ threshold is adopted, then seven sources with $R=2.5-3$ (ID 86765, 92942, 96927, 153628, 169090, 190498, and 190947) are re-classified as `identified'. These redshifts are marginally significant: a few to several absorption features coincide with the expected UV lines for the best-fit redshifts of $\sim$2, but a few additional absorption lines are not evident in the low S/N data. Statistics presented below are provided for both adopted $R$-value cuts.\\ \indent While some sources are classified as ambiguous, it is likely that some of them are at high-$z$. For example, 185177 (classified as ambiguous) could be a LBG, since it shows a weak emission line at $\sim$4500\AA\ ($z\sim2.7$ if Ly$\alpha$) and a few absorption lines. This source, the statistics ($\sim$50\% successful identification for $R=2.0-2.5$) from \cite{kurtz98}, and NUV-78625 (with $R=2.3$ but identified `by eye' to be a $z\approx1.6$ AGN) suggest that, although a cut is placed at $R=2.5$ or $R=3.0$, some solutions with $R=2.0-3.0$ may be correctly identified. An $R=3.0$ ($R=2.5$) typically corresponds to a peak of $\sim$0.25 ($\sim$0.2) in the cross-correlation spectra, which is typically 3$\sigma$ ($2-3\sigma$) above the RMS in the cross-correlation (see Figure~\ref{xcplot}). \begin{figure} \plotone{f2.eps} \caption{Cross-correlation spectra for targets that yielded $R=2.5-3.2$ without an emission line. From top to bottom are {\it NUV}-dropout ID 182284, 186254, 96927, and 92942. The top two have $R$-values of $\gtrsim3.1$ and are identified as LBGs, while the latter two have $R=2.6-3.0$ and are classified as ambiguous.
The peak near the center of each panel represents the strongest peak in the cross-correlation.} \label{xcplot} \end{figure} \subsubsection{LRIS Results}\label{2.3.1} 12 (14 with $R\geq2.5$) LBGs are found at $1.7\lesssim z\lesssim2.7$ out of 36 attempts. Among those, 10 show Ly$\alpha$~in emission, while 2 (4 with $R\geq2.5$) are identified purely by UV absorption lines. Their spectra are shown in Figures~\ref{spec1} and \ref{spec2}, and Table~\ref{table1} summarizes their photometric and spectroscopic properties. Contamination was found from 3 stars and 5 (7 with $R\geq2.5$) low-$z$ galaxies (shown in Figure~\ref{spec3}), corresponding to a 60\% success rate (58\% if $R\geq2.5$ is adopted). Four sources showed a single emission line, which is believed to be [\ion{O}{2}] at $z\sim1-1.5$; one source showed [\ion{O}{2}], H$\beta$, and [\ion{O}{3}] at $z\sim0.7$; and two sources with absorption lines have $R\sim2.5$ results with $z\sim0.1$ and $\sim0.5$ (these would be ``ambiguous'' with the $R\geq3.0$ criterion). The success of identifying $z\sim2$ LBGs improves with the different color selection criteria that remove most interlopers (see \S~\ref{3.2}).\\ \indent Of the remaining 16 spectra (12 with the $R\geq2.5$ cut), 8 (4 with the $R\geq2.5$ cut) were detected, but the S/N of the spectra was too low, and the other 8 were undetected. These objects were unsuccessful due to the short integration time of about one hour and their faintness (average $V$ magnitude of 24.2).\\ \indent It is worth noting that the fraction of LRIS spectra with Ly$\alpha$\ emission is high (83\%). In comparison, \cite{shapley03} reported that 68\% of their $z\sim3$ spectroscopic sample contained Ly$\alpha$\ in emission. If the fraction of LBGs with Ly$\alpha$\ emission does not increase from $z\sim3$ to $z\sim2$, it would imply that 5 $z\sim2$ galaxies would not show Ly$\alpha$\ in emission. Considering the difficulties with detecting Ly$\alpha$\ in absorption with relatively short integration times, the above 83\% is not surprising, and suggests that most of the $z>1.5$ ambiguous LRIS redshifts listed in Table~\ref{table1} are correct. \subsubsection{Hectospec Results}\label{2.3.2} Among the 21 spectra, 7 objects (2 of which are AGNs) are identified ($R>3.0$; 9 if $R\geq2.5$) at $z>1.5$, 2 objects are stars, 1 (2 with $R\geq2.5$) is a $z<1.5$ interloper, and 11 are ambiguous (8 if $R\geq2.5$ is adopted). These MMT spectra are shown in Figures~\ref{spec4}$-$\ref{spec6}, and their properties are listed in Table~\ref{table1}.\\ \indent The spectrum of an $R_{\rm C}\sim22$, $z\sim1.6$ LBG shows the \ion{Fe}{2} and \ion{Mg}{2} absorption lines, which indicates that MMT is sensitive enough to detect luminous LBGs. In fact, since the surface density of bright LBGs is low, slitmask instruments are not ideal for the bright end. However, the entire SDF can be observed with Hectospec, so all $\sim$150 $V_{\texttt{auto}}<23.0$ objects can be simultaneously observed. \begin{figure*} \epsscale{1.0} \plotone{f3_color.eps} \caption{LRIS spectra of confirmed {\it NUV}-dropouts from the ver. 1 catalog with known redshifts. Most of the LBGs with Ly$\alpha$~emission are shown here, with the remaining ones in Figure~\ref{spec2}. Overlaid on these spectra is the composite template (shown in grey) with the highest $R$-value (see Table~\ref{table1}) from the cross-correlation.
Note that these overlaid templates are intended to show the locations of spectral features, and are not meant to compare the flux and/or spectral index differences between the spectra and the templates. The ID numbers, redshifts, and $R$-values are shown in the upper left-hand corner of each panel. [{\it See the electronic edition of the Journal for a color version of this figure}.]} \label{spec1} \end{figure*} \begin{figure*} \epsscale{1.0} \plotone{f4_color.eps} \caption{Same as Figure~\ref{spec1}, but some spectra do not have Ly$\alpha$~emission. The strong line seen in the spectrum of 96927 at $\sim$5570\AA\ is a sky subtraction artifact, and cosmic rays are seen in the spectra of 94093 (at 3780\AA), 186254 (at 3325\AA), and 92942 (at 3990\AA). These features are removed in the cross-correlation process. [{\it See the electronic edition of the Journal for a color version of this figure}.]} \label{spec2} \end{figure*} \begin{figure*} \epsscale{0.57}\plotone{f5a_color.eps}\epsscale{0.5478}\plotone{f5b_color.eps} \caption{Same as Figures~\ref{spec1} and \ref{spec2}, but this shows the $z<1.5$ interlopers and galactic stars. [{\it See the electronic edition of the Journal for a color version of this figure}.] } \label{spec3} \end{figure*} \begin{figure*} \epsscale{1.0} \plotone{f6_color.eps} \caption{Same as Figures~\ref{spec1} and \ref{spec2}, but these are Hectospec observations of LBGs in the final photometric catalog. The cross-correlation template and the typical sky spectrum are shown above and below the spectrum of each source, respectively. [{\it See the electronic edition of the Journal for a color version of this figure}.]} \label{spec4} \end{figure*} \begin{figure*} \epsscale{1.0} \plotone{f7_color.eps} \caption{Same as Figure~\ref{spec4}. [{\it See the electronic edition of the Journal for a color version of this figure}.] } \label{spec5} \end{figure*} \begin{figure*} \epsscale{1.0} \plotone{f8_color.eps}\vspace{-1.25in} \caption{Same as Figures~\ref{spec4} and \ref{spec5}, but this shows the low-$z$ interlopers and galactic stars. [{\it See the electronic edition of the Journal for a color version of this figure}.] } \label{spec6} \end{figure*} \subsection{Additional Spectra with Subaru/MOIRCS}\label{2.4} The {\it BzK} technique, which identifies galaxies with a wide range of properties (old and young, dusty and unreddened), could include objects that would also be classified as {\it NUV}-dropouts. As a check, spectroscopically identified star-forming {\it BzK}'s were cross-matched with the \galex-SDF photometric catalog. Spectra of BzKs were obtained on 2007 May 3$-$4 with Subaru using the Multi-Object Infrared Camera and Spectrograph \citep[MOIRCS;][]{ichikawa06}. 44 sources were targeted, and 15 were identified by the presence of H$\alpha$ and [\ion{N}{2}] or [\ion{O}{2}], [\ion{O}{3}], and H$\beta$ emission. One of the 15 was not in the $B$-band catalog. Among the remaining 14 objects, 7 are also classified as {\it NUV}-dropouts and were not previously identified (i.e., they were not LRIS or Hectospec targets). These include 5 galaxies at $z>1.5$ and 2 at $z=1-1.5$. Their properties are included in Table~\ref{table1}. Among the 7 BzKs that did {\it not} meet the {\it NUV}-dropout criteria, 2 are below $z=1.5$ and the other 5 are at high-$z$.
For two of the high-$z$ BzKs, one was below the $NUV-B=1.75$ cut because it is faint ($V>25.3$), and thus not considered a {\it NUV}-dropout, while the other missed the $B-V=0.5$ selection by having $B-V = 0.53$.\footnote[12]{If the selection criteria were modified to include this object, no low-$z$ interlopers or stars would have contaminated the sample. However, $B-V\leq0.5$ is still adopted for simplicity.} The other three sources have low-$z$ neighboring sources that are detected in the {\it NUV}, which biases the {\it NUV}~photometry towards brighter values. This confusion is due to the poor resolution of \galex, which is discussed further in \S~\ref{4.3}. The details of these observations and their results are deferred to \cite{hayashi08}. \subsection{Summary of Observations}\label{2.5} In order to probe $1.5<z<3$ with the Lyman break technique, deep ($>$100 ks) \galex/{\it NUV}~imaging was obtained. Spectroscopic observations from Keck and MMT independently confirm that most {\it NUV}-dropouts (with their UV continuum detected spectroscopically) are at $1.5 < z < 2.7$.\\ \indent A summary of the number of LBGs, stars, and low-$z$ interlopers identified spectroscopically is provided in Table~\ref{table2}. Among the spectra targeting {\it NUV}-dropouts (i.e., excluding MOIRCS spectra), 53\% (30/57) were identified, and among those, 63\% are at $z>1.5$. Including the seven objects with $R=2.5-3.0$, the percentages are 65\% and 62\%, respectively. These statistics are improved with the final selection criteria discussed in \S~\ref{3.2}. \section{Photometric Selection of {\it NUV}-dropouts}\label{3} This section describes the {\it NUV}~and optical photometric catalogs (\S~\ref{3.1}) and the methods for merging the two catalogs. Then in \S~\ref{3.2}, $\sim$8000 {\it NUV}-dropouts are empirically identified, with the spectroscopic sample used to refine the selection criteria. \subsection{Revised {\it NUV}~Photometric Catalogs}\label{3.1} Prior to any measurements, an offset ($\Delta\alpha=-0.39$\arcsec, $\Delta\delta=-0.18$\arcsec) was applied to the {\it NUV}~image coordinates to improve the astrometry for alignment with the Suprime-Cam data. The scatter in the astrometric corrections was found to be $\sigma_{\Delta\alpha}$=0.39\arcsec~and $\sigma_{\Delta\delta}$=0.33\arcsec. This only results in a 0.01 mag correction for {\it NUV}~measurements, and is therefore neglected.\\ \indent The coordinates of $\sim$100000 SDF $B$-band sources with $B_{\texttt{auto}}<27.5$ were used to measure {\it NUV}~fluxes within a 3.39\arcsec~(2.26 pixel) radius aperture with the \textsc{iraf/daophot} task \texttt{phot}. For objects with {\it NUV}~photometry below the $3\sigma$ background limit, the $3\sigma$ value is used. This limit is determined from the mode in an annulus with inner and outer radii of 22.5\arcsec~and 37.5\arcsec~(i.e., an area of 1200 pixels), respectively. For sources detected in the {\it NUV}, a point-source aperture correction of a factor of $\approx$1.83 is applied to obtain the ``total'' {\it NUV}~flux. This correction was determined from the point spread function (PSF) of 21 isolated sources distributed across the image.
The {\it NUV}~catalog is then merged with the $B$-band catalog from SExtractor \citep[SE; ][]{bertin96} that contains $BV$$R_{\rm C}$$i\arcmin z\arcmin$ photometry.\\ \indent Throughout this paper, ``total'' magnitudes from the Suprime-Cam images are given by SE \texttt{mag\_auto}, since the corrections between the $B$-band Kron and the 5\arcsec~diameter magnitudes were no greater than 0.03 mag for isolated (5\arcsec~radius), point-like (SE \texttt{class\_star} $\geq$ 0.8) targets.\\ \indent The merged catalog was also corrected for Galactic extinction based on the \cite{cardelli89} extinction law. For the SDF, the adopted values are $A$({\it NUV}) = 0.137, $A(B)$ = 0.067, $A(V)$ = 0.052, $A$($R_{\rm C}$) = 0.043, $A(i\arcmin)$ = 0.033, and $A(z\arcmin)$ = 0.025. Since the Galactic extinction for the SDF is low, the variation in $A$({\it NUV}) is no more than 0.02 mag, so all {\it NUV}~magnitudes are corrected by the same value. \subsection{Broad-band Color Selection}\label{3.2} Using the sample of spectroscopically confirmed $z>1.5$ LBGs, low-$z$ interlopers, and stars, the color selection is optimized to minimize the number of interlopers while maximizing the number of confirmed LBGs. In Figure~\ref{select}, known LBGs are identified in the $NUV-B$ versus $B-V$ diagram, where the $NUV-B$ color is given by the ``total'' magnitudes and the $B-V$ color is measured within a 2\arcsec~aperture. The latter was chosen because of the higher S/N compared to larger apertures. The final empirical selection criteria for the LBG sample are: \begin{eqnarray} NUV-B \geq 1.75,&\\ B-V \leq 0.50,&~{\rm and}\\ NUV-B \geq 2.4(B-V) + 1.15, \end{eqnarray} which yielded 7964 {\it NUV}-dropouts with $21.90\leq V\leq25.30$. Among the Hectospec and LRIS spectra, these selection criteria included all spectroscopic LBGs and excluded 4/5 stars and 4/6 (4/9 with $R>2.5$) interlopers. Therefore, the fraction of {\it NUV}-dropouts that are confirmed to be LBGs with the new selection criteria is 86\% (the $R=2.5$ cut implies 79\%). Note that while the $B$-band catalog was used (since the $B$ filter is closer in wavelength to the {\it NUV}), the final magnitude selection was made in $V$, so that the sample probes the same rest-frame wavelength ($\approx$1700\AA) as the $R$-band does for $z\sim3$ LBGs.\\ \indent To summarize, a {\it NUV}-optical catalog was created, and it was combined with spectroscopic redshifts to select 7964 {\it NUV}-dropouts with $NUV-B\geq1.75$, $B-V\leq0.50$, $NUV-B\geq2.4(B-V)+1.15$, and $21.90\leq V\leq25.30$. The spectroscopic sample indicates that 14\% of {\it NUV}-dropouts are definite $z\leq1.5$ interlopers. \begin{figure} \epsscale{1.0} \plotone{f9_color.eps} \caption{$NUV-B$ and $B-V$ colors for $22.0<V_{\texttt{auto}}<25.3$ sources. A total of $\sim$33,000 sources are represented here, but only one-third are plotted for clarity. Sources undetected (at the 3$\sigma$ level) in the {\it NUV}~are shown as grey unfilled triangles, while the detected sources are indicated as dark grey unfilled squares. Filled (unfilled) circles correspond to sources that have been confirmed as LBGs with (without) emission lines. Low-$z$ interlopers are shown as filled squares, while stars are shown as unfilled stars. Skeletal stars represent Gunn-Stryker stars. [{\it See the electronic edition of the Journal for a color version of this figure}.]} \label{select} \end{figure} \section{CONTAMINATION AND COMPLETENESS ESTIMATES}\label{4} Prior to constructing a normalized luminosity function, contaminating sources that are not LBGs must be removed statistically.
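Before turning to the individual corrections, it may help to recap the \S~\ref{3.2} selection as a single boolean cut; the minimal sketch below (hypothetical array names, NumPy assumed) is what is meant whenever the ``{\it NUV}-dropout criteria'' are applied in the remainder of this section.
\begin{verbatim}
import numpy as np

def is_nuv_dropout(nuv, b_tot, b_minus_v, v_tot):
    """Sec. 3.2 cuts: nuv and b_tot are 'total' NUV and B magnitudes,
    b_minus_v is the 2-arcsec aperture B-V color, and v_tot is the
    total V magnitude.  Returns a boolean mask of selected sources."""
    nuv_b = nuv - b_tot
    return ((nuv_b >= 1.75) &
            (b_minus_v <= 0.50) &
            (nuv_b >= 2.4 * b_minus_v + 1.15) &
            (v_tot >= 21.90) & (v_tot <= 25.30))
\end{verbatim}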
Section \ref{4.1} discusses how foreground stars are identified and removed, which was found to be a 4$-$11\% correction. Section \ref{4.2} describes the method for estimating low-$z$ contamination, which yielded a correction of $34\%\pm17\%$. These reductions are applied to the number of {\it NUV}-dropouts to obtain the surface density of $z\sim2$ LBGs. Monte Carlo (MC) realizations of the data, used to estimate the completeness and the effective volume of the survey, are described in \S~\ref{4.3}. The latter reveals that the survey samples $z\approx1.8-2.8$. \subsection{Removal of Foreground Stars}\label{4.1} The \cite{GS} stellar track passes above the {\it NUV}-dropout selection box (as shown in Figure~\ref{select}). This poses a problem, as objects that are undetected in the {\it NUV}~can be faint foreground stars. A simple cut to eliminate bright objects is not sufficient, because faint halo stars exist in the SDF (as shown later). To reduce stellar contamination, additional photometric information from the SExtractor $BVR_{\rm C}i$\arcmin$z$\arcmin~catalogs is used. The approach of creating a ``clean'' sample of point-like sources, as performed by \cite{richmond05}, is followed. Richmond used the \texttt{class\_star} parameter and the difference ($\delta$) between the 2\arcsec~and 3\arcsec~aperture magnitudes in each optical image. For each filter, a `1' is assigned when the \texttt{class\_star} value is $0.90-1.00$, and another `1' when $0.10<\delta<0.18$; a `0' is assigned otherwise. The highest possible score is thus 10 [(1+1)$\times$5], which 2623 objects with $V_{\texttt{auto}}=21.9-26.0$ satisfied; these are referred to as ``perfect'' point-like or ``rank 10'' objects. These rank 10 objects will be used to define the stellar locus, since contamination from galaxies is less of a problem for the most point-like sample. Once the locus has been defined, objects with lower ranks that fall close to it will also be classified as stars.\\ \begin{figure*}[!htc] \epsscale{1.1} \plottwo{f10a_color.eps}{f10b_color.eps} \caption{Two color-color diagrams for rank 10 point-like objects. Grey (small) and black (large) squares represent sources brighter than $V_{\texttt{auto}}=26.0$ and 23.0, respectively. The Gunn-Stryker stars are shown as stars, and the SDF stellar locus of \cite{richmond05} is shown as filled squares. The solid lines define the stellar locus for calculating $\Delta$ (see \S~\ref{4.1}). The five sources that have been spectroscopically identified as stars are shown as filled green circles. [{\it See the electronic edition of the Journal for a color version of this figure}.]} \label{fig3} \end{figure*} \indent Unfortunately, distant galaxies can also appear point-like, and must be distinguished from stars. This is done by comparing their broad-band optical colors to the stellar locus. Figure~\ref{fig3} shows the $B-V$, $V-R_{\rm C}$, and $R_{\rm C}-z\arcmin$ colors used in \cite{richmond05} for the ``clean'' sample. The stellar locus is defined by the solid black lines using brighter ($V\leq23.0$) sources. Figure~\ref{fig3} shows differences between the colors of the stellar locus defined for point-like SDF stars and those of Gunn-Stryker stars. \cite{richmond05} states that this is due to metallicity, as the SDF and Gunn-Stryker stars are selected from the halo and the disk of the Galaxy, respectively.\\ \indent For each object in the clean sample, the $V-R_{\rm C}$ color is used to predict the $B-V$ and $R_{\rm C}-z\arcmin$ colors along the stellar locus (denoted by `S.L.'
in the subscript of the colors below). These values are then compared to the observed colors to determine the magnitude deviation from the stellar locus, $\Delta = -[(B-V)_{\rm obs} - (B-V)_{\rm S.L.}] + [(R_{\rm C}-z\arcmin)_{\rm obs} - (R_{\rm C}-z\arcmin)_{\rm S.L.}]$. An object with $\Delta\approx0$ mag is then classified as a star. This method is similar to that of \cite{richmond05}, where an object is considered a star if it is located within the stellar locus ``tube'' in multi-color space. This approach provides an estimate of stellar contamination at faint magnitudes, which is difficult to obtain spectroscopically \citep{steidel03}. A histogram showing the distribution of $\Delta$ in Fig.~\ref{fig4}a reveals two peaks: at $\Delta\approx0$ and 0.8 mag. The comparison of $\Delta$ versus the $V$-band magnitude is shown in Fig.~\ref{fig4}b, and a source is identified as a star if it falls within the selection criteria shown by the solid lines in this figure. A total of 1431 stars with $V\leq26.0$ are identified, while the remaining 1192 sources are classified as galaxies. The surface density as a function of magnitude for the identified stars agrees with predictions made by \cite{robin03} and with other surface density measurements near the Galactic pole. When the {\it NUV}-dropout selection criteria are applied\footnote[13]{The $B-V$ and $NUV-B$ color cuts limit the stellar sample to spectral types between A0 and G8.}, these numbers are reduced to 336 stars (i.e., a 4\% contamination for the {\it NUV}-dropout sample) and 230 galaxies with $21.9\leq V_{\texttt{auto}}\leq25.3$.\\ \indent Sources that are ranked $7-9$ are also considered, and were classified as stars or galaxies using the above approach. Of those that met the {\it NUV}-dropout criteria, 535 and 252 have the colors of stars and galaxies, respectively. Thus, the photometric sample of {\it NUV}-dropouts contains 7093 objects after statistically removing 871 stars (11\% of the {\it NUV}-dropout sample) that are ranked 7$-$10. The reasons for only considering objects with a rank of 7 or greater are that (1) the stellar contamination does not significantly increase by including rank 6 or rank 5 objects (i.e., another 128 rank 6 stars or 1.5\%, and 143 rank 5 stars or 1.8\%), and (2) comparison of the surface density of rank 7$-$10 stars with expectations from models showed evidence for possible contamination from galaxies at the faint end ($V>24.0$; A. Robin, priv. comm.), and this problem would worsen if rank 5 and 6 objects were included. As will become apparent later in this paper, stellar contamination is small and is not expected to significantly alter any discussion of differences seen in the luminosity function. A hard upper limit, obtained by considering all objects of rank 1 and above as stars, would imply an additional (rank 1 to 6) stellar contamination of 14.5\%.\\ \begin{figure*}[!htc] \epsscale{1.1} \plottwo{f11a.eps}{f11b.eps} \caption{Photometric properties of rank 10 point-like objects. A histogram of $\Delta$ is shown in ({\it a}) while ({\it b}) plots $\Delta$ versus $V$-band Kron magnitude. The grey histogram and squares are for {\it all} point-like sources, while those that satisfy the {\it NUV}-dropout selection criteria are represented in black. The selection of foreground stars is given by the solid lines in ({\it b}).
The horizontal solid lines represent a minimum $\Delta$ at the bright end, while the two solid curves are the $\pm$3$\sigma$ criteria for $\Delta$, as given by $-2.5\log{[1\mp(f_{3\sigma B}^2+f_{3\sigma V}^2+f_{3\sigma R_{\rm C}}^2 +f_{3\sigma z\arcmin}^2)^{0.5}/f_{V}]}$. Here $f_{X}$ is the flux density in the $X$ filter.} \label{fig4} \end{figure*} \indent Among the 5 sources spectroscopically determined to be stars, three (71239, 66611, and 149720) are classified as stars by the $\Delta$ method, while the other two (86900 and 178741) fall outside the $\Delta$ selection criteria. Among the known LBGs, 8 are of rank 8$-$10, and 3 (166380, 78625, and 133660) are classified as {\it not} being stars. Since the spectroscopic sample of rank 10 objects is small, additional spectra will be required to further optimize the $\Delta$ technique. However, the spectroscopic sample (presented in this paper) indicates that $3-7\%$ of {\it NUV}-dropouts are stars, which is consistent with the $4-11\%$ derived with the $\Delta$ method. \subsection{Contamination from $z<1.5$ Interlopers}\label{4.2} One of the biggest concerns in any survey targeting a particular redshift range is contamination from other redshifts. The spectroscopic sample of {\it NUV}-dropouts shows that 5\% are definite $z<1.5$ galaxies. This number increases to an upper value of 51\% if the ambiguous {\it NUV}-dropouts (that meet the color selection criteria) are all assumed to be low-$z$ interlopers. However, it is unlikely that all unidentified {\it NUV}-dropouts are low-$z$, since LBGs without Ly$\alpha$~emission in their spectra\footnote[14]{Either because they do not possess Ly$\alpha$~in emission or because they are at too low a redshift for Ly$\alpha$\ to be observed.} are likely missed. A secondary, independent approach for estimating low-$z$ contamination, which is adopted later in this paper, is to use a sample of $z<1.5$ emission-line galaxies identified with narrow-band (NB) filters. Since a detailed description of this sample is provided in \cite{ly07}, only a summary is given below:\\ A total of 5260 NB emitters are identified from their excess fluxes in the NB704, NB711, NB816, or NB921 filter, due to H$\alpha$, [\ion{O}{3}], or [\ion{O}{2}]~emission lines, in 12 redshift windows (some overlapping) at $0.07\lesssim z\lesssim1.47$. These galaxies have emission-line equivalent widths and fluxes as small as 20\AA\ (observed) and a few $\times 10^{-18}$ erg s$^{-1}$ cm$^{-2}$, and are as faint as $V=25.5-26.0$. Cross-matching was performed with the {\it NUV}-dropout sample, which identified 487 NB emitters as {\it NUV}-dropouts. The redshift and $V$-band magnitude distributions are shown in Figure~\ref{NBemitters}. Note that most of the contaminating sources are at $1.0<z<1.5$, consistent with the spectroscopic sample. \begin{figure}[!htc] \epsscale{1.0} \plotone{f12.eps} \caption{Redshift (top) and $V$-band magnitude (bottom) distributions of the 487 NB emitters that meet the {\it NUV}-dropout criteria. Note that the redshift bins are made larger to clearly show the histogram.} \label{NBemitters} \end{figure} Since this sample covers only a fraction of the $0.07\lesssim z \lesssim1.5$ redshift range, the above results must be interpolated between the NB redshifts. It is assumed that emission-line galaxies exist at all redshifts, and possess properties and number densities similar to those of the NB emitters.
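The interpolation and volume integration just described can be sketched as follows. The NB window redshifts and number densities below are illustrative placeholders rather than the measured values, and the survey area is taken to be $\sim$875 arcmin$^2$ for definiteness; only the bookkeeping is meant to be faithful.
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

# N/dV at each NB redshift window (placeholder values, Mpc^-3):
z_nb = np.array([0.07, 0.25, 0.40, 0.62, 0.84, 1.19, 1.47])
n_nb = np.array([3.0e-4, 2.5e-4, 2.2e-4, 1.8e-4, 1.5e-4,
                 1.2e-4, 1.0e-4])

# Linearly interpolate n(z) between NB windows, then integrate over
# the survey's comoving volume from z = 0.08 to 1.5:
z = np.linspace(0.08, 1.5, 500)
n_z = np.interp(z, z_nb, n_nb)                        # Mpc^-3
dV_dz = cosmo.differential_comoving_volume(z).value   # Mpc^3/sr
area_sr = 875.0 * (np.pi / (180.0 * 60.0))**2         # arcmin^2 -> sr
n_interlopers = np.trapz(n_z * dV_dz * area_sr, z)
\end{verbatim}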
One caveat of this approach is that blue galaxies that do {\it not} possess nebular emission lines may meet the {\it NUV}-dropout selection.\footnote[15]{Red galaxies are excluded by the $B-V<0.5$ criterion.} The statistics of such objects are not well known, since spectroscopic surveys are biased toward emission-line galaxies, owing to their ease of identification. Therefore, these contamination estimates are treated as lower limits. A further discussion of this approach is provided in \S~\ref{7}.\\ \indent Using the redshift distribution shown in Figure~\ref{NBemitters}, the number of objects per comoving volume ($N/\Delta V$) is computed at each NB redshift window. For redshifts not covered by the NB filters, a linear interpolation is assumed. Integrating over the volume from $z=0.08$ to $z=1.5$ yields a total of $2490\pm1260$ interlopers, which corresponds to a contamination fraction of $f_{\rm contam}$ = $0.34\pm0.17$. The error on $f_{\rm contam}$~comes from Poissonian statistics in each redshift bin, which are added in quadrature during the interpolation step for other redshifts. The contamination is also determined as a function of magnitude (hereafter the ``mag.-dep.'' correction), since the redshift distribution will differ between the bright and faint ends. The $f_{\rm contam}$~($V$-band magnitude range) values are $0.39\pm0.20$ (22.9$-$23.3), $0.40\pm0.21$ (23.3$-$23.7), $0.37\pm0.21$ (23.7$-$24.1), $0.31\pm0.16$ (24.1$-$24.5), $0.27\pm0.14$ (24.5$-$24.9), and $0.39\pm0.19$ (24.9$-$25.3). \subsection{Modelling Completeness and Effective Volume}\label{4.3} In order to obtain an accurate LF for {\it NUV}-dropouts, the completeness of the sample must be quantified. This is accomplished with MC simulations to calculate $P(m,z)$, the probability that a galaxy of apparent $V$-band magnitude $m$ at redshift $z$ will be detected in the image and will meet the {\it NUV}-dropout color selection criteria. The effective comoving volume per solid area is then given by \begin{equation} \frac{V_{\rm eff}(m)}{\Omega} = \int dz P(m,z) \frac{dV(z)}{dz}\frac{1}{\Omega}, \end{equation} where $dV/dz/\Omega$ is the differential comoving volume per $dz$ per solid area at redshift $z$. Dividing the number of {\it NUV}-dropouts in each apparent magnitude bin by $V_{\rm eff}$ yields the LF. This approach accounts for color selection biases, limitations (e.g., the depth and spatial resolution) of the images \citep{steidel99}, and the choice of apertures for ``total'' magnitudes.\\ \indent In order to determine $P(m,z)$, a spectral synthesis model was first constructed from \textsc{galaxev} \citep{bc03} by assuming a constant SFR with a Salpeter initial mass function (IMF), solar metallicity, an age of 1 Gyr, and a redshift between $z=1.0$ and $z=3.8$ in $\Delta z=0.1$ increments. The model was reddened by assuming an extinction law following \cite{calzetti00} with $E(B-V)=0.0-0.4$ (in 0.1 increments) and modified to account for IGM absorption following \cite{madau95}. The latter was chosen over other IGM models \citep[e.g.,][]{bershady99} for consistency with previous LBG studies. This model is nearly identical to that of \cite{steidel99}.\\ \indent Figure~\ref{lbgmodel} shows the redshift evolution of the $NUV-B$ and $B-V$ colors for this model. These models were scaled to apparent magnitudes of $V=22.0-25.5$ in increments of 0.25.
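The size of the model grid used below follows directly from these parameter choices. As a bookkeeping sketch (the colours themselves would come from the \textsc{galaxev} + Calzetti + IGM machinery described above, which is not reproduced here):
\begin{verbatim}
import numpy as np

z_grid   = np.arange(1.0, 3.85, 0.1)    # 29 redshifts
ebv_grid = np.arange(0.0, 0.45, 0.1)    # 5 E(B-V) values
v_grid   = np.arange(22.0, 25.51, 0.25) # 15 apparent V magnitudes

models = [(z, ebv, v) for z in z_grid
                      for ebv in ebv_grid
                      for v in v_grid]
assert len(models) == 29 * 15 * 5 == 2175
\end{verbatim}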
These 2175 ($29\times15\times5$) artificial galaxies are randomly distributed across the {\it NUV}, $B$, and $V$ images with the appropriate spatial resolution (the sources are assumed to be point-like) and noise contribution, using the IRAF tasks \texttt{mkobject} (for the optical images) and \texttt{addstar} (using the empirical {\it NUV}~PSF). Because of the poor spatial resolution of \galex, each iteration of 435 sources (for a given $E[B-V]$ value) was divided into three sub-iterations to avoid source confusion among the mock galaxies. The artificial galaxies were then detected in the same manner as real sources. This process was repeated 100 times. Note that 21\% of artificial sources did not meet the {\it NUV}-dropout criteria (see e.g., Figure~\ref{MCsim}), as they were confused with one or more nearby sources detected in the {\it NUV}. This serves as an estimate for incompleteness due to confusion, and is accounted for in the final LF. These results are consistent with the MOIRCS spectra, which find that $14-29$\% of BzKs at $z\geq1.5$ were missed by the {\it NUV}-dropout selection criteria because nearby objects affected the {\it NUV}~flux. This simulation also revealed that among all mock galaxies at $z\leq1.5$, 30\% were photometrically scattered into the selection criteria of {\it NUV}-dropouts, which is consistent with the 34\% low-$z$ contamination fraction predicted in \S~\ref{4.2}.\\ \indent Figure~\ref{MCsim} shows $P(m,z)$ for $E(B-V)=0.1$, 0.2, and $0.0-0.4$. The last of these is a weighted average, where the $E(B-V)$ distribution from \cite{steidel99} is used to weight each completeness distribution. This corresponds to an average $E(B-V)\sim0.15$. The adopted comoving volume uses the weighted-average results. Table~\ref{table3} provides the effective comoving volume per arcmin$^2$, the average redshift, and the FWHM and standard deviation of the redshift distribution for subsets of apparent magnitude. \begin{figure} \epsscale{1.0} \plotone{f13.eps} \caption{Modelled $NUV-B$ and $B-V$ colors for {\it NUV}-dropouts. The solid, dotted, short-dashed, long-dashed, and dot short-dashed lines correspond to the spectral synthesis model described in \S~\ref{4.3} with $E(B-V)=0.0$, 0.1, 0.2, 0.3, and 0.4, respectively. The thick solid black lines represent the selection criteria of \S~\ref{3.2}. } \label{lbgmodel} \end{figure} \begin{figure*} \epsscale{0.35} \plotone{f14a_color.eps}\plotone{f14b_color.eps}\plotone{f14c_color.eps}\\ \caption{Monte Carlo completeness estimates as a function of redshift for different apparent magnitudes. From left to right are the results for $E(B-V)=0.1$, 0.2, and $0.0-0.4$ \citep[a weighted average assuming the $E(B-V)$ distribution of][]{steidel99}. [{\it See the electronic edition of the Journal for a color version of this figure}.]} \label{MCsim} \end{figure*} \subsection{Summary of Survey Completeness and Contamination}\label{4.4} Using optical photometry, 871 foreground stars (i.e., an 11\% correction) were identified and excluded to yield 7093 candidate LBGs. Then $z<1.5$ star-forming galaxies, identified with NB filters, were cross-matched with the {\it NUV}-dropout sample to determine the contamination fraction of galaxies at $z<1.5$. Redshifts missed by the NB filters were accounted for by interpolating the number density between NB redshifts, and this yielded $2490\pm1260$ interlopers, or a contamination fraction of $0.34\pm0.17$.\\ \indent To determine the survey completeness, $V_{\rm eff}$ was simulated.
This consisted of generating spectral synthesis models of star-forming galaxies, and then adding artificial sources with modelled broad-band colors to the images. Objects were then detected and selected as {\it NUV}-dropouts in the same manner as the final photometric catalog. These MC simulations predict that the survey selects galaxies at $z\sim2.28\pm0.33$ (FWHM of $z=1.8-2.8$), and has a maximum comoving volume of $2.8\times10^3~h_{70}^{-3}$ Mpc$^3$ arcmin$^{-2}$. \section{RESULTS}\label{5} This section provides the key measurements for this survey: a $z\sim2$ rest-frame UV luminosity function for LBGs (\S~\ref{5.1}); by integrating this luminosity function, the luminosity and SFR densities are then determined (\S~\ref{5.2}). \subsection{The 1700\AA\ UV Luminosity Function}\label{5.1} To construct a luminosity function, a conversion from apparent to absolute magnitude is needed. The distance modulus is $m_{1700}-M_{1700} \approx 45.0$, where it is assumed that all the sources are at $z\approx2.28$; the K-correction term has been neglected, since it is no more than 0.08 mag. The luminosity function is given by \begin{equation} \Phi(M_{1700}) = \frac{1}{\Delta m}\frac {N_{\rm raw}(1-f_{\rm contam})}{V_{\rm eff}(M_{1700})}, \end{equation} where $N_{\rm raw}$ is the raw number of {\it NUV}-dropouts within a magnitude bin ($\Delta m = 0.2$), $V_{\rm eff}(M_{1700})$ is the effective comoving volume described in \S~\ref{4.3}, and $f_{\rm contam}$~is the fraction of {\it NUV}-dropouts that are at $z<1.5$ (see \S~\ref{4.2}). The photometric LF is shown in Figure~\ref{Vlf}. For the mag.-dep. $f_{\rm contam}$~case, the adopted correction factor for $V\leq22.9$ is $f_{\rm contam}$ = 0.34 (the average over all magnitudes).\\ \indent Converting the \cite{schechter76} formula into absolute magnitudes, the LF is fitted with the form: \begin{equation} \Phi(M_{1700})dM_{1700} = \frac{2}{5}\ln{(10)}\phi^{\star} x^{\alpha+1}\exp{[-x]}dM_{1700}, \end{equation} where $x \equiv 10^{-0.4(M_{1700}-M_{1700}^{\star})}$. In order to obtain the best fit, an MC simulation was performed to consider the full range of scatter in the LF. Each datapoint was perturbed randomly $5\times10^5$ times following a Gaussian distribution with $1\sigma$ given by the uncertainties in $\Phi$. Each iteration is then fitted to obtain the Schechter parameters. For the mag.-dep. $f_{\rm contam}$~case, this yielded $M_{1700}^{\star}=-20.50\pm0.79$, $\log{\phi^{\star}}=-2.25\pm0.46$, and $\alpha=-1.05\pm1.11$ as the best fit with 1$\sigma$ {\it correlated} errors. Since these Schechter parameters are based on lower limits to the low-$z$ contamination (see \S~\ref{4.2}), they imply an upper limit on $\phi^{\star}$. This luminosity function is plotted in Figure~\ref{Vlf} as the solid black line, and the confidence contours are shown in Figure~\ref{contours}. With the faint-end slope fixed to $\alpha=-1.60$ \citep{steidel99} and $-1.84$ \citep{reddy08}, the MC simulations yielded ($M_{1700}^{\star}$, $\log{\phi^{\star}}$) of ($-20.95\pm0.29$, $-2.50\pm0.17$) and ($-21.30\pm0.35$, $-2.75\pm0.21$), respectively. \begin{figure} \epsscale{1.0} \plotone{f15_color.eps} \caption{The {\it observed} $V$-band luminosity function for {\it NUV}-dropouts. The LF of this work is shown by the thick black solid curve with unfilled squares. Grey points are those excluded from the MC fit. \cite{steidel99} measurements are shown as filled squares with a solid thin curve ($z\sim3$) and open circles with a short-dashed thin curve ($z\sim4$).
\cite{reddy08} BX results are shown as filled circles with a long-dashed line, and \cite{ST06a} is represented by unfilled triangles and a dotted line. Corrections to a common cosmology were made for the \cite{steidel99} measurements, and the SFR conversion follows \cite{kennicutt98}. [{\it See the electronic edition of the Journal for a color version of this figure}.] } \label{Vlf} \end{figure} \begin{figure} \epsscale{1.0} \plotone{f16.eps} \caption{Confidence contours representing the best-fitting Schechter parameters for the LF. (Top) The mag.-dep. correction where the faint-end slope is a free parameter. The vertical axes show $\alpha$ while the horizontal axes show $\log{(\phi^{\star})}$ (left) and $M^{\star}$ (right). (Bottom) $M^{\star}$ vs. $\log{(\phi^{\star})}$ for $\alpha=-1.6$ (left) and $\alpha = -1.84$ (right). The inner and outer contours represent 68\% and 95\% confidence levels.} \label{contours} \end{figure} \subsection{The Luminosity and Star-Formation Rate Densities}\label{5.2} The LF is integrated down to $M_{1700}=-20.11$---the magnitude at which incompleteness becomes a problem---to obtain a comoving {\it observed} specific luminosity density (LD) of $\log{\mathcal{L}_{\rm lim}}=26.28\pm0.69$ erg s$^{-1}$ Hz$^{-1}$ Mpc$^{-3}$ at 1700\AA. The conversion between the SFR and the specific luminosity for 1500$-$2800\AA\ is SFR$_{\rm UV}$(M$_{\sun}$ yr$^{-1}$) = $1.4\times10^{-28}L_{\nu}$(erg s$^{-1}$ Hz$^{-1}$), where a Salpeter IMF with masses from $0.1-100M_{\sun}$ is assumed \citep{kennicutt98}. Therefore, the extinction-corrected (adopting $E[B-V]=0.15$ and a Calzetti law) and completeness-corrected SFR density of $z\sim2$ LBGs is $\log{\dot\rho_{star}}=-0.99\pm0.69$ $M_{\sun}$ yr$^{-1}$ Mpc$^{-3}$. Using the \cite{madau98} conversion would decrease the SFR by $\sim$10\%. Integrating to $L=0.1L^{\star}_{z=3}$, where $L^{\star}_{z=3}$ is $L^{\star}$ at $z\sim3$ \cite[$M^{\star}_{z=3}=-21.07$,][]{steidel99}, yields $\log{\mathcal{L}}=26.52\pm0.68$ erg s$^{-1}$ Hz$^{-1}$ Mpc$^{-3}$, or an extinction-corrected SFR density of $\log{\dot\rho_{star}}=-0.75\pm0.68$~$M_{\sun}$ yr$^{-1}$ Mpc$^{-3}$.\footnote[16]{The above numbers are upper limits if the low-$z$ contamination fraction is higher than the estimates described in \S~\ref{4.2}.} \subsection{Summary of Results}\label{5.3} A UV luminosity function was constructed and yielded a best Schechter fit of $M_{1700}^{\star}=-20.50\pm0.79$, $\log{\phi^{\star}}=-2.25\pm0.46$, and $\alpha=-1.05\pm1.11$ for $z\sim2$ LBGs. The UV specific luminosity density, above the survey limit, is $\log{\mathcal{L}_{\rm lim}}=26.28\pm0.68$ erg s$^{-1}$ Hz$^{-1}$ Mpc$^{-3}$. Correcting for dust extinction, this corresponds to a SFR density of $\log{\dot\rho_{star}}=-0.99\pm0.68$ $M_{\sun}$ yr$^{-1}$ Mpc$^{-3}$. \section{COMPARISONS WITH OTHER STUDIES}\label{6} Comparisons of the UV specific luminosity densities, LFs, and Schechter parameters can be made with previous studies. First, a comparison is made between the $z\sim2$ LBG LF and the $z\sim2$ BX and $z\sim3$ LBG LFs. Then a discussion of the redshift evolution in the UV luminosity density and LF (parameterized in the Schechter form) is given in \S~\ref{6.2}.\\ \indent The results are summarized in Figures~\ref{Vlf}, \ref{schechter}, and \ref{lumdens} and Table~\ref{table4}. For completeness, three different UV specific luminosity densities are reported by integrating the LF down to: (1) $0.1L^{\star}_{z=3}$; (2) $L_{\rm lim}$, the limiting depth of the survey; and (3) $L=0$.
The last of these is the least certain, as it requires extrapolating the LF to the faint end, where in most studies it is not well determined. \subsection{UV-selected Studies at $z\sim2-3$}\label{6.1} In Figure~\ref{Vlf}, the $z\sim2$ LBG LF at the bright end is similar to those of LBGs from \cite{steidel99} and BX galaxies from \cite{ST06a} and \cite{reddy08}; however, the faint end is systematically higher. This is illustrated in Figure~\ref{LFcomp}, where the ratios between the binned $z\sim2$ UV LF and the fitted Schechter forms of \cite{steidel99} and \cite{reddy08} are shown. When excluding the four brightest and two faintest bins, the {\it NUV}-dropout LF is a factor of $1.7\pm0.1$ higher than those of the $z\sim3$ LBGs of \cite{steidel99} and the $z\sim2$ BX galaxies of \cite{reddy08} and \cite{ST06a}. The hard upper limit for stellar contamination (see \S~\ref{4.1}) would reduce this discrepancy to a factor of $1.4\pm0.1$. There appears to be a trend that the ratio to the \cite{reddy08} LF increases towards brighter magnitudes. This is caused by the differences in the shapes of the two LFs, particularly the faint-end slope. The increase in the ratio is less noticeable when compared to \cite{steidel99}, which has a shallower faint-end slope. Since the LFs of \cite{ST06a} and \cite{reddy08} are similar, any comparison of results between the {\it NUV}-dropout and the BX selections will be made directly against \cite{reddy08}.\\ \indent All 11 points deviate from a ratio of unity by $1-3\sigma$. It has been assumed in this comparison that the amount of dust extinction does not evolve from $z\sim3$ to $z\sim2$. Evidence supporting this assumption is as follows: in order for the {\it intrinsic} LBG LFs at $z\sim2$ and 3 to be consistent, the population of LBGs at $z\sim2$ would have to be relatively {\it less} reddened by $\Delta E(B-V)=0.06$ (i.e., $E[B-V] = 0.09$ assuming a Calzetti extinction law). However, the stellar synthesis models, described previously, indicate that $E(B-V)$ = 0.1 star-forming galaxies are expected to have observed $B-V\sim0.1$, and only 15\% of {\it NUV}-dropouts have $B-V \leq 0.1$. This result implies that dust evolution is unlikely to be the cause of the discrepancy seen in the LFs.\\ \begin{figure} \epsscale{1.0} \plotone{f17.eps} \caption{Comparisons of the LBG LF with other LFs. The ratios of the $z\sim2$ LBG LF to the Schechter fits of the \cite{steidel99} and \cite{reddy08} LFs are shown in the top and bottom panels, respectively. On average, the $z\sim2$ LBG LF is a factor of $1.7\pm0.1$ higher than these studies.} \label{LFcomp} \end{figure} \indent To compare the luminosity densities, the binned LF is summed. This is superior to integrating the Schechter form of the LF, as (1) no assumptions are made about the behavior between individual LF values or at the faint end, and (2) the results do not suffer from the problem that Schechter parameters are affected by small fluctuations at the bright and faint ends. The logarithms of the binned luminosity densities for $-22.91<M_{1700}<-20.11$ are $26.27\pm0.16$ (this work), $26.02\pm0.04$ \citep{steidel99}, and $26.08\pm0.07$ ergs s$^{-1}$ Hz$^{-1}$ Mpc$^{-3}$ \citep{reddy08}, which implies that the $z\sim2$ LBG UV luminosity density is $0.25\pm0.16$ dex higher than the other two studies at the 85\% confidence level.\\ \indent Since the low-$z$ contamination fraction is the largest contributor to the errors, more follow-up spectroscopy will reduce the uncertainties on the LF.
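For reference, the Monte Carlo fitting procedure of \S~\ref{5.1} can be sketched as follows, with the $5\times10^5$ realizations reduced to 500 for illustration; the binned $(M, \Phi, \sigma)$ arrays are assumed inputs, and \texttt{scipy} is used for each individual fit.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def schechter_mag(M, M_star, log_phi_star, alpha):
    """Schechter LF in absolute-magnitude form (Sec. 5.1)."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return (0.4 * np.log(10.0) * 10.0 ** log_phi_star
            * x ** (alpha + 1.0) * np.exp(-x))

def mc_schechter_fit(M, phi, sigma, n_iter=500, seed=1):
    """Perturb each binned LF point by its Gaussian error and refit;
    returns the array of best-fit (M*, log phi*, alpha) values whose
    spread gives the correlated parameter errors."""
    rng = np.random.default_rng(seed)
    fits = []
    for _ in range(n_iter):
        phi_pert = phi + rng.normal(0.0, sigma)
        try:
            p, _ = curve_fit(schechter_mag, M, phi_pert,
                             p0=(-20.5, -2.3, -1.1), maxfev=5000)
            fits.append(p)
        except RuntimeError:
            continue   # skip realizations that fail to converge
    return np.array(fits)
\end{verbatim}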
Such spectroscopy will either confirm or refute, with greater statistical significance, the finding that the luminosity density and LF of $z\sim2$ LBGs are higher than those of $z\sim3$ LBGs and $z\sim2$ BXs. \subsection{Evolution in the UV Luminosity Function and Density}\label{6.2} The Schechter LF parameters, listed in Table~\ref{table4}, are plotted as a function of redshift in Figure~\ref{schechter}. There appears to be a systematic trend that $M^{\star}$ is less negative (i.e., a fainter $L^{\star}$) by $\approx$1 mag at higher redshifts for surveys with $\alpha\leq-1.35$. No systematic evolution is seen for $\phi^{\star}$, given the measurement uncertainties. Limited information is available on the faint-end slope, so no analysis of its redshift evolution is provided. It is often difficult to compare Schechter parameters, since they are correlated, and without confidence contours for the fits of each study, the apparent evolution could be insignificant. A more robust measurement is the product $\phi^{\star}\times L^{\star}$, which is related to the luminosity density.\\ \indent The observed LDs, integrated to $0.1L^*_{z=3}$, show a slight increase of $\approx0.5$ dex from $z\sim6$ to $z\sim3$. However, the two other luminosity densities appear to be flat, given the scatter in the measurements of $\approx0.5-1.0$ dex. A comparison between $z\sim2$ and $z\sim5$ studies reveals a factor of $3-6$ higher luminosity density at $z\sim2$. The extinction-corrected results for $L_{\rm lim}=0$ and $L_{\rm lim}=0.1L^*_{z=3}$ show a factor of 10 increase from the $z\sim6$ measurement of \cite{bouwens07} to $z\sim2$. However, \cite{bouwens07} assumed a lower dust extinction correction; if an average $E(B-V)=0.15$ with a Calzetti law is adopted, the rise in the extinction-corrected luminosity density is $\approx3$. \begin{figure*} \epsscale{0.9} \plotone{f18.eps} \caption{Compiled Schechter parameters of LBG and BX studies versus redshift. The top and bottom panels show the normalization ($\phi^{\star}$) and the ``knee'' of the UV LF ($M^{\star}$), respectively. Measurements are grouped according to $\alpha$: $\leq-1.70$, between $-1.70$ and $-1.35$, and $>-1.35$. This {\it NUV}-dropout work is shown as a black filled square ($\alpha=-1.05$). The color and symbol conventions for studies in Figure~\ref{Vlf} are identical for this figure. In the legend, \cite{ST06a} is abbreviated as ``ST(2006)''. Some points are not shown here but have luminosity density measurements presented in Figure~\ref{lumdens}.} \label{schechter} \end{figure*} \begin{figure*} \epsscale{1.1} \plottwo{f19a.eps}{f19b.eps} \caption{The observed ({\it left}) and extinction-corrected ({\it right}) UV specific luminosity densities as a function of redshift. The luminosity function is integrated to three different limits: $L=0$ (top panel), $L=L_{\rm lim}$ (the survey's limit; middle panel), and $L=0.1L^*_{z=3}$ (bottom panel). The color and point-type schemes are the same as in Figure~\ref{schechter}. The SFR densities are shown on the right axes following the \cite{kennicutt98} conversion. For the $z\sim2$ LBG luminosity density integrated to $L=L_{\rm lim}$, only one value is shown, since all the fits with different $\alpha$ are almost identical.} \label{lumdens} \end{figure*} \section{DISCUSSION}\label{7} In this section, the discrepancy between the UV LF of this study and the two BX studies, shown in \S~\ref{6.1}, is examined. Three possible explanations are considered:\\ \noindent{\bf 1.
Underestimating low-$z$ contamination.} To estimate contamination, a large sample of $z\lesssim1.5$ NB emitters was cross-matched with the {\it NUV}-dropout sample. This method indicated that $34\%\pm17\%$ of {\it NUV}-dropouts are at $z<1.5$. However, it is possible that star-forming galaxies at $z=1-1.5$ could be missed by the NB technique, but still be identified as {\it NUV}-dropouts. This would imply that the contamination rate was underestimated. To shift the {\it NUV}-dropout LF into agreement with \cite{reddy08} and \cite{ST06a} would require that the contamination fraction be more than 60\%. However, the spectroscopic sample has yielded a large number of genuine LBGs and a similar low-$z$ contamination (at least 21\% and at most 38\%). If the large (60\%) contamination rate is adopted, it would imply that only 15 of 40 spectra (LRIS and Hectospec) are at $z>1.5$, which is argued against at the 93\% confidence level (98\% with the $R=2.5$ threshold), since 24 LBGs (1.6 times as many) have been identified. Furthermore, the LRIS and Hectospec observations independently yielded similarly low contamination fractions, and the MC simulation (that involved adding artificial LBGs to the images) independently suggested 30\% contamination from $z\leq 1.5$.\\ \noindent{\bf 2. Underestimating the comoving effective volume.} The second possibility is that $V_{\rm eff}$ was underestimated, as the spectral synthesis model may not completely represent the galaxies in this sample and may miss $z\sim1-1.5$ galaxies. However, comparing a top-hat $P(m,z)$ over $z=1.7-2.7$ with one over $z=1.4-2.7$ ($z=1.0-2.7$) shows that number densities would decrease by only $\approx20$\% (37\%). Note that the latter value is consistent with $f_{\rm contam}$.\\ \noindent{\bf 3. Differences between LBG and BX galaxy selection.} This study uses the Lyman break technique, while other studies used the `BX' method to identify $z\sim2$ galaxies. Because of differences in photometric selection, it is possible that the galaxy population identified by one method does not match the other; instead, only a fraction of BX galaxies are also LBGs and vice versa. This argument is supported by the higher surface density of LBGs compared to BXs over 2.5 mag, as shown in Figure~\ref{LBGBX}a. However, their redshift distributions, as shown in Figure~\ref{LBGBX}b, are very similar.\\ \indent This scenario would imply that there is an increase in the LF and number density of LBGs from $z\sim3$ to $z\sim2$, indicating that the comoving SFR density peaks at $z\sim2$, since there is a decline towards $z\sim0$ from UV studies \cite[see][and references therein]{hopkins04}. However, it might be possible that the selection ({\it NUV}$-B-V$) of $z\sim2$ LBGs could include more galaxies than the $U_nGR$ color selection used to find $z\sim3$ LBGs. Although no reason exists to believe that the $z\sim3$ LBG selection is more incomplete than that at $z\sim2$ (nor is there any evidence for such systematic incompleteness for $z>4$ LBGs), it is difficult to rule out this possibility with certainty. But if so, then the SFR density might not evolve. In addition, the conclusion that $z\sim2$ is the peak of star formation is based on UV selection techniques, which are less sensitive to dusty ($E[B-V]>0.4$) star-forming galaxies. However, spectroscopic surveys have revealed that the sub-mm galaxy population peaks at $z\approx2.2$ \citep{chapman05}, which further supports the above statement that $z\sim2$ is the epoch of peak star formation.
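The confidence level quoted in point 1 can be checked in spirit with a simple one-sided binomial test. The exact statistical procedure used is not spelled out above, so the sketch below, which assumes each identified spectrum is independently at $z>1.5$ with probability 0.4 under 60\% contamination, need not reproduce the quoted 93\% exactly.
\begin{verbatim}
from scipy.stats import binom

# Under 60% contamination, an identified spectrum is at z > 1.5
# with probability ~0.4.  Chance of still finding >= 24 high-z
# galaxies among the 40 identified LRIS+Hectospec spectra:
p_tail = binom.sf(23, 40, 0.4)   # P(X >= 24), X ~ Bin(40, 0.4)
print(1.0 - p_tail)  # confidence with which 60% is disfavoured
\end{verbatim}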
\begin{figure*} \plottwo{f20a.eps}{f20b.eps} \caption{Surface densities and redshift distributions for $z\sim2$ BXs and LBGs. In ({\it a}), the surface densities of LBGs and BXs are shown as circles and triangles, respectively. Both studies have stellar and low-$z$ contamination corrections applied. This figure reveals that the LBG surface density is systematically higher than that of the BXs. The redshift distributions are shown in ({\it b}). The shaded (unshaded) histogram corresponds to BXs (LBGs). For the BXs, the redshift distribution is obtained from the \cite{reddy08} spectroscopic sample, while that of the LBGs is determined from the MC simulations described in \S~\ref{4.3} for all magnitudes. The similarity in the redshifts surveyed by both studies and the higher surface density of LBGs indicate that the BX technique misses a fraction of LBGs.} \label{LBGBX} \end{figure*} \section{CONCLUSIONS}\label{8} By combining deep \galex/{\it NUV}\ and optical Suprime-Cam imaging of the Subaru Deep Field, a large sample of LBGs at $z\sim2$ has been identified as {\it NUV}-dropouts. This extends the popular Lyman break technique into the redshift desert, which was previously difficult due to the lack of deep and wide-field UV imaging from space. The key results of this paper are: \begin{enumerate} \setlength{\itemsep}{1pt}\setlength{\parskip}{0pt}\setlength{\parsep}{0pt} \item Follow-up spectroscopy was obtained, and 63\% of identified galaxies are at $z=1.6-2.7$. This confirms that most {\it NUV}-dropouts are LBGs. In addition, MMT/Hectospec will complement Keck/LRIS by efficiently completing a spectroscopic survey of the bright end of the LF. \item Selecting objects with $NUV-B\geq1.75$, $B-V\leq0.5$, and $NUV-B\geq2.4(B-V)+1.15$ yielded 7964 {\it NUV}-dropouts with $V=21.9-25.3$. The spectroscopic sample implied that 50$-$86\% of {\it NUV}-dropouts are LBGs. \item Using broad-band optical colors and stellar classification, 871 foreground stars have been identified and removed from the photometric sample. This corresponds to a $4-11\%$ correction to the {\it NUV}-dropout surface density, which is consistent with the $3-7\%$ from the limited spectra of stars presented in this paper. \item In addition, low-$z$ contamination was determined using a photometric sample of NB emitters at $z\lesssim1.47$. This novel technique indicated that the contamination fraction is (at least) on average $34\%\pm17\%$, which is consistent with the spectroscopic samples and predictions from MC simulations of the survey. \item After removing the foreground stars and low-$z$ interlopers, MC simulations were performed to estimate the effective comoving volume of the survey. The UV luminosity function was constructed and fitted with a Schechter profile with $M_{1700}^{\star}=-20.50\pm0.79$, $\log{\phi^{\star}}=-2.25\pm0.46$, and $\alpha=-1.05\pm1.11$. \item A compilation of LF and SFR measurements for UV-selected galaxies was made, and there appears to be an increase in the luminosity density: a factor of 3$-$6 ($3-10$) increase from $z\sim5$ ($z\sim6$) to $z\sim2$. \item Comparisons of the {\it NUV}-dropouts with LBGs at $z\sim3$ \citep{steidel99} and BXs at $z\sim2$ \citep{ST06a,reddy08} reveal that the LF is $1.7\pm0.1$ ($1.4\pm0.1$ if the hard upper limit of stellar contamination is adopted) times higher than in these studies. The summed luminosity density for $z\sim2$ LBGs is 1.8 times higher at 85\% confidence (i.e., $0.25\pm0.16$ dex). \item Three explanations were considered for the discrepancy with the $z\sim2$ BX studies.
The possibility of underestimating low-$z$ contamination is unlikely, since optical spectroscopy argues against a high (60\%) contamination fraction at the 93\% confidence level. Second, even extending the redshift range to increase the comoving volume is not sufficient to resolve the discrepancy. The final possibility, which cannot be ruled out, is that a direct comparison between BX-selected galaxies and LBGs is not valid, since the selection criteria differ. It is likely that the BX method misses some LBGs. This argument is supported by the similar redshift distributions of BXs and LBGs, combined with the consistently higher surface density of LBGs over 2.5 mag. \item If the latter holds with a future reduction of the low-$z$ contamination uncertainties via spectroscopy, then the SFR density at $z\sim2$ is higher than the $z\gtrsim3$ and $z\lesssim1.5$ measurements obtained via UV selection. Combined with sub-mm results \citep{chapman05}, this indicates that $z\sim2$ is the epoch where galaxy star formation peaks. \end{enumerate}\vspace{-0.75cm} \acknowledgements The Keck Observatory was made possible by the generous financial support of the W.M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. We gratefully acknowledge NASA's support for construction, operation, and science analysis for the \galex~mission. This research was supported by NASA grant NNG-06GDD01G. We thank the Hectospec instrument and queue-mode scientists and the MMT operators for their excellent assistance with the observations. Public Access MMT time is available through an agreement with the National Science Foundation. C.L. thanks A. Shapley, M. Pettini, and S. Savaglio for providing their composite spectra, and S. C. Odewahn for providing K. Adelberger's LRIS reduction code. {\it Facilities:} \facility{Keck:I (LRIS)}, \facility{\galex}, \facility{MMT (Hectospec)}, \facility{Subaru (MOIRCS, Suprime-Cam)} \begin{appendix} \section{Individual Sources of Special Interest}\label{appendix} In most cases, the confirmed LBGs showed no unique spatial or spectral properties. However, three cases are worth mentioning in more detail.\\ \textbf{1. SDFJ132431.8+274214.3 (179350)}. Upon careful examination of the 2-D spectra, it appears that the Ly$\alpha$~emission from this source is offset by $\approx$1.1\arcsec~(9 kpc at 107\arcdeg~east of north) from the continuum emission, as shown in Figure~\ref{extlya}a. The extended emission appears in the individual exposures of $15-30$ minutes. The deep ($3\sigma=28.45$) $B$-band image (Figure~\ref{extlya}b) reveals that there are no sources in this direction and at this distance, assuming that the continuum emission in the spectrum corresponds to the bright source in the $B$-band image. The two sources located below the bright object in Figure~\ref{extlya}b are too faint for their continuum emission to be detected with LRIS. Also, the absorption features seen in the 1-D spectra (see Figure~\ref{spec1}a) are at nearly the same redshift as Ly$\alpha$. This indicates that the Ly$\alpha$~emission is associated with the targeted source, rather than with a secondary nearby companion.
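The quoted conversion of the $\approx$1.1\arcsec~offset into a physical scale can be reproduced with a standard cosmology calculator. The sketch below assumes the source lies near the survey's mean redshift (its exact redshift is given in Table~\ref{table1} and is not repeated here).
\begin{verbatim}
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z = 2.3   # assumed; close to the survey mean of z ~ 2.28
scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
print(1.1 * u.arcsec * scale)   # ~9 kpc, as quoted above
\end{verbatim}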
Extended Ly$\alpha$~emission galaxies are rare \cite[e.g.,][who have the largest sample of 41 objects]{saito06}, and the extreme cases are extended on larger ($\sim100$ kpc) scales, such as LAB1 and LAB2 of \cite{steidel00}. In addition, extended Ly$\alpha$~emission has been seen in some cases that show evidence for energetic galactic winds \citep{hesse03}. Either this source is a fortuitous discovery from a dozen spectra, or perhaps a fraction of {\it NUV}-dropouts have extended Ly$\alpha$~emission. The physical significance of this source is not discussed here, given the limited information available. \begin{figure}[!htc] \epsscale{0.75} \plotone{f21_color.eps} \caption{Optical images for 179350. (a) The 2-D spectrum, with wavelength increasing to the right, shows Ly$\alpha$~emission offset by $\approx$1\arcsec~from the center of the continuum. The vertical white line corresponds to 2\arcsec. (b) The Suprime-Cam $B$-band image centered on the targeted source shows that there are no sources in the direction of the extended emission. The two white vertical lines correspond to the slit; (b) is rotated to have the same orientation as (a), and the vertical scales are the same. [{\it See the electronic edition of the Journal for a color version of this figure}.]} \label{extlya} \end{figure} \textbf{2. SDFJ132452.9+272128.5 (62056)}. The 1- and 2-D spectra for this source reveal an asymmetric emission line, as shown in Figure~\ref{bluelya}a, but with a weak ``bump'' about 10\AA\ blueward of the peak of the Ly$\alpha$~emission. The $B$-band image (see Figure~\ref{62056}) shows two nearby sources, where one is displaced $\approx$2\arcsec~nearly in the direction of the slit orientation while the other source is displaced in the direction perpendicular to the slit orientation. It is possible that the blue excess originates from the latter source, due to a slight misalignment of the slit such that it falls between the two sources (i.e., they are physically near each other). To confirm this hypothesis, spectroscopy with a 90\arcdeg~rotation of the slit would show two sources with Ly$\alpha$~emission $\approx$800 km s$^{-1}$ apart. \begin{figure}[!btc] \vspace{9cm} \special{psfile=f22a.eps hoffset=-15 voffset=-68 hscale=42.0 vscale=42.0} \special{psfile=f22b.eps hoffset=225 voffset=-45 hscale=42.0 vscale=42.0} {{}} \caption{(a) One- and (b) two-dimensional spectra for 62056 (top) and 72012 (bottom) centered on the Ly$\alpha$~emission. These objects appear to show weak emission blueward of Ly$\alpha$. See \S~\ref{appendix} for a discussion. [{\it See the electronic edition of the Journal for a color version of this figure}.]} \label{bluelya} \end{figure} \begin{figure}[!htc] \epsscale{0.5} \plotone{f23.eps} \caption{The $B$-band image cropped to 20\arcsec~on a side and centered on 62056. The white box with thick lines is the LRIS slit intended to target the bright object. However, a 1.5\arcsec~offset of the slit in the north-west direction (as shown by the thin white box) may explain the blue excess seen in the 1-D and 2-D spectra (Figure~\ref{bluelya}) by including both objects.} \label{62056} \end{figure} \textbf{3. SDFJ132450.3+272316.24 (72012)}. This object is not listed in Table~\ref{table1}, as it was serendipitously discovered. The slit was originally targeting a narrow-band (NB) emitter. The LRIS-R spectrum showed an emission line at 7040\AA, while the blue side showed a strong emission line that appears asymmetric at $\approx$4450\AA.
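If the 4450\AA\ line is Ly$\alpha$, the implied redshift and the expected \ion{C}{3}] position follow from simple arithmetic; the sketch below checks the numbers invoked in the discussion that follows. The rest wavelengths are standard, but the line centre is quoted only roughly as 4450\AA, so the mismatch comes out at $\approx$50\AA\ here versus the $\approx$40\AA\ quoted below.
\begin{verbatim}
LYA, CIII = 1215.67, 1908.73        # rest wavelengths (Angstroms)

z_lya = 4450.0 / LYA - 1.0            # ~2.66 if the 4450 A line is Lya
print(z_lya)
print((1.0 + z_lya) * CIII)           # ~6990 A expected for C III]
print(7040.0 - (1.0 + z_lya) * CIII)  # vs. 7040 A observed: ~50 A off
\end{verbatim}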
One possibility is that the 4450\AA\ feature is Ly$\alpha$, so that the 7040\AA\ emission line is the redshifted \ion{C}{3}]~$\lambda$1909; however, at $z=2.6634$, \ion{C}{3}] is expected at $\approx$6994\AA. This $\approx$40\AA\ difference is not caused by poor wavelength calibration, as night-sky and arc-lamp lines are located where they are expected in both the blue and red spectra. In Figure~\ref{72012}, the $B$-band image reveals two sources, one of which is moderately brighter in the NB704 image, as expected for an NB704 emitter. These two sources were too close for SExtractor to deblend, but the coordinates given above have been corrected. Because the NB704 emitter is a foreground source, the measured {\it NUV}~flux for the other source is affected, resulting in a weak detected source in the {\it NUV}. Thus, this source is missed by the selection criteria of the ver. 1 catalog and those described in \S~\ref{3.2}. It is excluded from the spectroscopic sample discussed in \S~\ref{2}. This source is of further interest because it also shows a blue excess bump (shown in Figure~\ref{bluelya}), much like 62056, but weaker. This blue bump does not correspond to a different emission line at the same redshift as the 7040\AA\ emission line. Since the bump is only 10\AA\ from the strong Ly$\alpha$~emission, it is likely associated with the source producing Ly$\alpha$. Both 62056 and 72012 were observed on the second mask. These blue bumps are not due to a misalignment of single exposures when stacking the images together, as other equally bright sources in the mask with emission lines do not show a secondary blue peak. Other studies have also seen dual-peaked Ly$\alpha$~emission profiles \citep[e.g.,][]{tapken04,tapken07,cooke08,verhamme08}. In addition, high-resolution spectra of 9 LBGs have revealed 3 cases with double-peaked Ly$\alpha$~profiles \citep{shapley06}, which indicates that such objects may not be rare. \begin{figure}[!htc] \epsscale{0.75} \plotone{f24.eps} \caption{Postage stamp images (10\arcsec~on a side) for 72012. From left to right are the {\it NUV}, $B$, $R_{\rm C}$, and NB704 images. North is up and east is to the left. The source on the right shows a weak excess in NB704 relative to the broad-band images.} \label{72012} \end{figure} \end{appendix}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A Latin square of order $n$ is an $n\times n$ array filled with $n$ different symbols, where no symbol appears in the same row or column more than once. Latin squares arise in different branches of mathematics such as algebra (where Latin squares are exactly the multiplication tables of quasigroups) and experimental design (where they give rise to designs called Latin square designs). They also occur in recreational mathematics---for example completed Sudoku puzzles are Latin squares. In this paper we will look for \emph{transversals} in Latin squares---a transversal in a Latin square of order $n$ is a set of $n$ entries such that no two entries are in the same row, in the same column, or have the same symbol. One reason transversals in Latin squares are interesting is that a Latin square has an orthogonal mate if, and only if, it has a decomposition into disjoint transversals. See \cite{WanlessSurvey} for a survey about transversals in Latin squares. It is easy to see that not every Latin square has a transversal (for example the unique $2\times 2$ Latin square has no transversal); however, perhaps every Latin square contains a large \emph{partial transversal} (a partial transversal of size $m$ is a set of $m$ entries such that no two entries are in the same row, in the same column, or have the same symbol)? There are several closely related, old, and difficult conjectures which guarantee large partial transversals in Latin squares. The first of these is a conjecture of Ryser that every Latin square of odd order contains a transversal \cite{Ryser}. Brualdi conjectured that every Latin square contains a partial transversal of size $n-1$ (see \cite{Brualdi}). Stein independently made the stronger conjecture that every $n\times n$ array filled with $n$ symbols, each appearing exactly $n$ times, contains a partial transversal of size $n-1$ \cite{Stein}. Because of the similarity of the above two conjectures, the following is often referred to as ``the Brualdi-Stein Conjecture''. \begin{conjecture}[Brualdi and Stein, \cite{Brualdi, Stein}]\label{BrualdiStein} Every Latin square contains a partial transversal of size $n-1$. \end{conjecture} There have been many partial results about this conjecture. It is known that every Latin square has a partial transversal of size $n-o(n)$---Woolbright \cite{Woolbright} and independently Brouwer, de Vries, and Wieringa \cite{Brower} proved that every Latin square contains a partial transversal of size $n-\sqrt n$. This has been improved by Hatami and Shor \cite{HatamiSchor} to $n-O(\log^2 n)$. A remarkable result of H\"aggkvist and Johansson shows that if we consider $(1-\epsilon)n\times n$ Latin rectangles rather than Latin squares, then it is possible to decompose all the entries into disjoint transversals (for $m\leq n$, an $m\times n$ Latin rectangle is an $m\times n$ array of $n$ symbols where no symbol appears in the same row or column more than once; a transversal in a Latin rectangle is a set of $m$ entries no two of which are in the same row, in the same column, or have the same symbol). \begin{theorem}[H\"aggkvist and Johansson, \cite{HaggkvistJohansson}]\label{HaggvistTheorem} For every $\epsilon$, there is an $m_0=m_0(\epsilon)$ such that the following holds. For every $n\geq (1+\epsilon)m\geq m_0$, every $m\times n$ Latin rectangle can be decomposed into disjoint transversals. \end{theorem} This theorem is proved by a probabilistic argument, using a ``random greedy process'' to construct the transversals.
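To make the definitions above concrete, the size of the largest partial transversal of a small Latin square can be found by brute force. The sketch below (exponential in $n$, for illustration only) uses the fact that any partial transversal extends to a set of entries in distinct rows and columns, i.e.\ to a permutation.
\begin{verbatim}
from itertools import permutations

def max_partial_transversal(L):
    """Largest partial transversal of a Latin square L (a list of
    rows).  Each permutation p picks entry (i, p[i]) in row i; these
    entries lie in distinct rows and columns, so taking one entry
    per distinct symbol gives a partial transversal."""
    n = len(L)
    return max(len({L[i][p[i]] for i in range(n)})
               for p in permutations(range(n)))

# The unique 2x2 Latin square has no (full) transversal:
print(max_partial_transversal([[0, 1], [1, 0]]))   # -> 1
\end{verbatim}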
Theorem~\ref{HaggvistTheorem} gives yet another proof that every sufficiently large $n\times n$ Latin square has a partial transversal of size $n-o(n)$---indeed, if we remove $\epsilon n$ rows of a Latin square we obtain a Latin rectangle to which Theorem~\ref{HaggvistTheorem} can be applied. In this paper we will look at a strengthening of Conjecture~\ref{BrualdiStein}. The strengthening we'll look at is a conjecture due to Aharoni and Berger which takes place in a more general setting than Latin squares---namely coloured bipartite graphs. To see how the two settings are related, notice that there is a one-to-one correspondence between $n\times n$ Latin squares and proper edge-colourings of $K_{n,n}$ with $n$ colours---indeed, to a Latin square $S$ we associate the colouring of $K_{n,n}$ with vertex set $\{x_1, \dots, x_n, y_1, \dots, y_n\}$ where the edge between $x_i$ and $y_j$ receives colour $S_{i,j}$. It is easy to see that in this setting transversals in $S$ correspond to perfect rainbow matchings in $K_{n,n}$ (a matching is \emph{rainbow} if all its edges have different colours). Thus Conjecture~\ref{BrualdiStein} is equivalent to the statement that ``in any proper $n$-colouring of $K_{n,n}$, there is a rainbow matching of size $n-1$''. One could ask whether a large rainbow matching exists in more general bipartite graphs. Aharoni and Berger posed the following conjecture, which generalises Conjecture~\ref{BrualdiStein}. \begin{conjecture}[Aharoni and Berger, \cite{AharoniBerger}]\label{ConjectureAharoni} Let $G$ be a bipartite graph consisting of $n$ matchings, each with at least $n+1$ edges. Then $G$ contains a rainbow matching with $n$ edges. \end{conjecture} In the above conjecture we think of the $n$ matchings forming $G$ as having different colours, and so ``rainbow matching'' means a matching containing one edge from each matching in $G$. It is worth noting that the above conjecture does not require the matchings in $G$ to be disjoint, i.e., it is about bipartite multigraphs rather than simple graphs. This conjecture was first posed in a different form in~\cite{AharoniBerger} as a conjecture about matchings in tripartite hypergraphs (Conjecture 2.4 in \cite{AharoniBerger}). It was first stated as a conjecture about rainbow matchings in~\cite{AharoniCharbitHoward}. The above conjecture has attracted a lot of attention recently, and there are many partial results. Just as with Conjecture~\ref{BrualdiStein}, one natural way of attacking Conjecture~\ref{ConjectureAharoni} is to prove approximate versions of it. As observed by Barat, Gy\'arf\'as, and S\'ark\"ozy \cite{Barat}, the arguments that Woolbright and Brouwer, de Vries, and Wieringa used to find partial transversals of size $n-\sqrt n$ in Latin squares actually generalise to bipartite graphs to give the following. \begin{theorem}[Woolbright, \cite{Woolbright}; Brouwer, de Vries, and Wieringa, \cite{Brower}; Barat, Gy\'arf\'as, and S\'ark\"ozy, \cite{Barat}]\label{Woolbright} Let $G$ be a bipartite graph consisting of $n$ matchings, each with at least $n$ edges. Then $G$ contains a rainbow matching with $n-\sqrt{n}$ edges. \end{theorem} Barat, Gy\'arf\'as, and S\'ark\"ozy actually proved something a bit more precise in \cite{Barat}---for every $k$, they gave an upper bound on the number of matchings of size $n$ needed to find a rainbow matching of size $n-k$. Another approximate version of Conjecture~\ref{ConjectureAharoni} comes from Theorem~\ref{HaggvistTheorem}.
It is easy to see that Theorem~\ref{HaggvistTheorem} is equivalent to the following: ``let $G$ be a bipartite graph consisting of $n$ edge-disjoint perfect matchings, each with at least $n+o(n)$ edges. Then $G$ can be decomposed into disjoint rainbow matchings of size $n$'' (to see that this is equivalent to Theorem~\ref{HaggvistTheorem}, associate an $m$-edge-coloured bipartite graph with any $m\times n$ Latin rectangle by placing a colour $k$ edge between $i$ and $j$ whenever cell $(k,i)$ of the rectangle contains symbol $j$). The main result of this paper is an approximate version of Conjecture~\ref{ConjectureAharoni} in the case when the matchings in $G$ are disjoint, but not necessarily perfect.
\begin{theorem}\label{MainTheorem}
For all $\epsilon>0$, there exists an $N=N(\epsilon)=10^{20}\epsilon^{-16\epsilon^{-1}}$ such that the following holds. Let $G$ be a bipartite graph consisting of $n\geq N$ edge-disjoint matchings, each with at least $(1+\epsilon)n$ edges. Then $G$ contains a rainbow matching with $n$ edges.
\end{theorem}
Unlike the proof of Theorem~\ref{HaggvistTheorem}, which can be used to give a randomised process to find a rainbow matching, the proof of Theorem~\ref{MainTheorem} is algorithmic, i.e.\ the matching guaranteed by Theorem~\ref{MainTheorem} can be found in polynomial time. Another very natural approach to Conjecture~\ref{ConjectureAharoni} is to prove it when the matchings have size much larger than $n+1$. When the matchings have size at least $2n$, the result becomes trivial.
\begin{lemma}\label{GreedyLemma}
Let $G$ be a bipartite graph consisting of $n$ matchings, each with at least $2n$ edges. Then $G$ contains a rainbow matching with $n$ edges.
\end{lemma}
This lemma is proved by greedily choosing disjoint edges of different colours. We can always choose $n$ edges this way: at each step the at most $n-1$ previously chosen edges cover at most $2(n-1)$ vertices, and since the $2n$ edges of the next colour class are pairwise disjoint, at least one of them avoids all covered vertices. There have been several improvements to the $2n$ bound in Lemma~\ref{GreedyLemma}. Aharoni, Charbit, and Howard \cite{AharoniCharbitHoward} proved that matchings of size $7n/4$ are sufficient to guarantee a rainbow matching of size $n$. Kotlar and Ziv \cite{KotlarZiv} improved this to $5n/3$. Clemens and Ehrenm\"uller \cite{DennisJulia} further improved the constant to $3n/2+o(n)$, which is currently the best known bound.
\begin{theorem}[Clemens and Ehrenm\"uller, \cite{DennisJulia}]
Let $G$ be a bipartite graph consisting of $n$ matchings, each with at least $3n/2+o(n)$ edges. Then $G$ contains a rainbow matching with $n$ edges.
\end{theorem}
Though we won't improve this theorem, we give an alternative proof which gives the weaker bound of roughly $\phi n$, where $\phi\approx 1.618$ is the Golden Ratio.
\begin{theorem}\label{GoldenRatioTheorem}
Let $G$ be a bipartite graph consisting of $n$ matchings, each with at least $\phi n + 20n/\log n$ edges. Then $G$ contains a rainbow matching with $n$ edges.
\end{theorem}
We'll spend the rest of this section informally discussing the methods which we use to prove Theorems~\ref{MainTheorem} and~\ref{GoldenRatioTheorem}. The basic idea is to introduce an auxiliary coloured directed graph and then apply some simple lemmas about directed graphs. The results and concepts about coloured graphs which we use are perhaps of independent interest. These results are all gathered in Section~\ref{SectionConnectedness}. The key idea in the proof of Theorem~\ref{MainTheorem} seems to be a new notion of connectivity of coloured graphs.
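Before stating that notion, we record the greedy argument of Lemma~\ref{GreedyLemma} in algorithmic form. The following minimal Python sketch is an illustration only; it assumes the two sides of $G$ carry distinguishable vertex labels (say pairs \texttt{('x', i)} and \texttt{('y', j)}) and that the matchings are given as lists of edge pairs.
\begin{verbatim}
def greedy_rainbow_matching(matchings):
    # Pick from each colour class an edge avoiding all used vertices.
    # This always succeeds when each of the n matchings has >= 2n edges:
    # previously chosen edges cover at most 2(n-1) vertices, blocking at
    # most 2(n-1) of the >= 2n pairwise disjoint edges of the next class.
    used = set()
    rainbow = []
    for colour, matching in enumerate(matchings):
        for (x, y) in matching:
            if x not in used and y not in used:
                rainbow.append((colour, x, y))
                used.update({x, y})
                break
        else:
            return None  # impossible when every matching has >= 2n edges
    return rainbow
\end{verbatim}
With this recorded, we turn to the notion of connectivity just mentioned.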
\begin{definition}\label{DefinitionConnectivity}
An edge-coloured graph $G$ is said to be rainbow $k$-edge-connected if for any set of at most $k-1$ colours $S$ and any pair of vertices $u$ and $v$, there is a rainbow $u$ to $v$ path whose edges have no colours from $S$.
\end{definition}
The above definition differs from usual notions of connectivity, since generally the avoided set $S$ is a set of \emph{edges} rather than colours. As we shall see, in some ways Definition~\ref{DefinitionConnectivity} is perhaps \emph{too strong}. In particular the natural analogue of Menger's Theorem for rainbow $k$-edge-connected graphs fails (see Section~\ref{SectionConclusion}). Nevertheless, rainbow $k$-edge-connected graphs turn out to be very useful for studying rainbow matchings in bipartite graphs. It would be interesting to know whether any statements about edge-connectivity have natural generalizations to rainbow $k$-edge-connected graphs.
The structure of this paper is as follows. In Section~\ref{SectionConnectedness} we introduce variations on Definition~\ref{DefinitionConnectivity} and prove a number of lemmas about coloured and directed graphs. In Section~\ref{SectionMainTheorem} we prove Theorem~\ref{MainTheorem}. In Section~\ref{SectionGoldenRatio} we prove Theorem~\ref{GoldenRatioTheorem}. In Section~\ref{SectionConclusion} we make some concluding remarks about the techniques used in this paper. For all standard notation we follow~\cite{BollobasModernGraphTheory}.
\section{Paths in directed and coloured graphs}\label{SectionConnectedness}
In this section we prove results about paths in various types of directed graphs. All graphs in this section have no multiple edges, although we allow the same edge to appear twice in opposite directions. In directed graphs, ``path'' will always mean a sequence of vertices $x_1, \dots, x_k$ such that $x_ix_{i+1}$ is a directed edge for $i=1, \dots, k-1$. We will use additive notation for concatenating paths---for two paths $P=p_1 \dots p_i$ and $Q=q_1 \dots q_j$, $P + Q$ denotes the path with vertex sequence $p_1\dots p_i q_1\dots q_j$. Recall that $N^+(v)$ denotes the out-neighbourhood of a vertex $v$, i.e.\ the set of vertices $x$ for which $vx$ is an edge. We will sometimes have two graphs $G$ and $H$ on the same vertex set. In this case $N^+_G(v)$ and $N^+_H(v)$ denote the out-neighbourhoods of $v$ in $G$ and $H$ respectively. Similarly $d_G(u,v)$ and $d_H(u,v)$ denote the lengths of the shortest paths from $u$ to $v$ in $G$ and $H$ respectively.
We will look at coloured graphs. An edge-colouring of a graph is an arbitrary assignment of colours to the edges of a graph. A total colouring is an arbitrary assignment of colours to both the vertices and edges of a graph. For any coloured graph we denote by $c(v)$ and $c(uv)$ the colour assigned to a vertex or edge respectively. An edge-colouring is out-proper if for any vertex $v$, the outgoing edges at $v$ all have different colours. Similarly an edge-colouring is in-proper if for any vertex $v$, the ingoing edges at $v$ all have different colours. We say that an edge colouring is proper if it is both in- and out-proper (notice that by this definition it is possible to have two edges with the same colour at a vertex $v$---as long as one of the edges is oriented away from $v$ and one is oriented towards $v$). A total colouring is proper if the underlying edge colouring and vertex colouring are proper and the colour of any vertex is different from the colour of any edge containing it.
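On very small examples Definition~\ref{DefinitionConnectivity} can be tested by exhaustive search. The following Python sketch is an illustration only; it assumes an undirected graph stored as an adjacency dictionary, with the edge colouring recorded for both orientations of each edge, and it runs in exponential time.
\begin{verbatim}
from itertools import combinations

def has_rainbow_path(adj, colour, u, v, banned):
    # Depth-first search over paths whose edge colours are pairwise
    # distinct and avoid the banned colour set.
    def dfs(x, visited, used):
        if x == v:
            return True
        for y in adj[x]:
            c = colour[(x, y)]
            if y not in visited and c not in used and c not in banned:
                if dfs(y, visited | {y}, used | {c}):
                    return True
        return False
    return dfs(u, {u}, set())

def rainbow_k_edge_connected(adj, colour, k):
    colours = set(colour.values())
    vs = list(adj)
    return all(has_rainbow_path(adj, colour, u, v, set(S))
               for r in range(k)                  # sets S with |S| <= k-1
               for S in combinations(colours, r)
               for u in vs for v in vs if u != v)
\end{verbatim}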
A totally coloured graph is \emph{rainbow} if all its vertices and edges have different colours. For two vertices $u$ and $v$ in a coloured graph, $\dRainbow{u,v}$ denotes the length of the shortest rainbow path from $u$ to $v$. We say that a graph has \emph{rainbow vertex set} if all its vertices have different colours.
This section will mostly be about finding highly connected subsets in directed graphs. The following is the notion of connectivity that we will use.
\begin{definition}\label{DefinitionUncolouredkdCon}
Let $A$ be a set of vertices in a digraph $D$. We say that $A$ is $(k,d)$-connected in $D$ if, for any set of vertices $S\subseteq V(D)$ with $|S|\leq k-1$ and any vertices $x,y \in A\setminus S$, there is an $x$ to $y$ path of length $\leq d$ in $D$ avoiding $S$.
\end{definition}
Notice that a directed graph $D$ is strongly $k$-connected if, and only if, $V(D)$ is $(k, \infty)$-connected in $D$. Also notice that it is possible for a subset $A\subseteq V(D)$ to be highly connected without the induced subgraph $D[A]$ being highly connected---indeed if $D$ is a bipartite graph with classes $X$ and $Y$ where all edges between $X$ and $Y$ are present in both directions, then $X$ is a $(|Y|, 2)$-connected subset of $D$, although the induced subgraph on $X$ has no edges. We will also need a generalization of this notion of connectivity to coloured graphs.
\begin{definition}\label{DefinitionColouredkdCon}
Let $A$ be a set of vertices in a coloured digraph $D$. We say that $A$ is $(k,d)$-connected in $D$ if, for any set of at most $k-1$ colours $S$ and any vertices $x,y \in A$, there is a rainbow $x$ to $y$ path of length $\leq d$ in $D$ internally avoiding colours in $S$.
\end{definition}
Notice that in the above definition, we did not specify whether the colouring was an edge colouring, vertex colouring, or total colouring. The definition makes sense in all three cases. For edge colourings, a path $P$ ``internally avoiding colours in $S$'' means that no edge of $P$ has a colour in $S$. For vertex colourings, it means that no vertex of $P$ has a colour in $S$ (except possibly the vertices $x$ and $y$). For total colourings, it means that no edge or vertex of $P$ has a colour in $S$ (except possibly the vertices $x$ and $y$). Comparing the above definition to ``rainbow $k$-edge-connectedness'' defined in the introduction, we see that an edge-coloured graph is rainbow $k$-edge-connected exactly when its vertex set is $(k, \infty)$-connected.
We'll need the following lemma which could be seen as a weak analogue of Menger's Theorem. It will allow us to find rainbow paths through prescribed vertices in a highly connected set.
\begin{lemma}\label{MengerLemma}
Let $D$ be a totally coloured digraph and $A$ a $(3kd,d)$-connected subset of $D$. Let $S$ be a set of colours with $|S|\leq k$ and $a_1, \dots, a_k$ be vertices in $A$ such that no $a_i$ has a colour from $S$ and $a_1, \dots, a_k$ all have different colours. Then there is a rainbow path $P$ from $a_1$ to $a_k$ of length at most $kd$ which passes through each of $a_1, \dots, a_k$ and avoids $S$.
\end{lemma}
\begin{proof}
Using the definition of $(3kd,d)$-connected, there is a rainbow path $P_1$ from $a_1$ to $a_2$ of length $\leq d$ avoiding colours in $S$. Similarly, for $2\leq i\leq k-1$, there is a rainbow path $P_i$ from $a_i$ to $a_{i+1}$ of length $\leq d$ internally avoiding colours in $S$ and colours in $P_1, \dots, P_{i-1}$.
Joining the paths $P_1, \dots, P_{k-1}$ gives the required path.
\end{proof}
To every coloured directed graph we associate an uncoloured directed graph where two vertices are joined whenever they have many short paths between them.
\begin{definition}\label{DefinitionDm}
Let $D$ be a totally coloured digraph and $m\in \mathbb{N}$. We denote by $D_m$ the uncoloured directed graph with $V(D_m)=V(D)$ and $xy$ an edge of $D_m$ whenever there are $m$ internally vertex disjoint paths $P_1, \dots, P_m$, each of length $2$ and going from $x$ to $y$, such that $P_1\cup \dots\cup P_m$ is rainbow.
\end{definition}
It turns out that for properly coloured directed graphs $D$, the uncoloured graph $D_m$ has almost the same minimum degree as $D$. The following lemma will allow us to study short rainbow paths in coloured graphs by first proving a result about short paths in uncoloured graphs.
\begin{lemma}\label{DCHighDegree}
For all $\epsilon>0$ and $m\in \mathbb{N}$, there is an $N=N(\epsilon,m)=(5m+4)/\epsilon^2$ such that the following holds. Let $D$ be a properly totally coloured directed graph on at least $N$ vertices with rainbow vertex set. Then we have $$\delta^+(D_m)\geq \delta^+(D) -\epsilon|D|.$$
\end{lemma}
\begin{proof}
Let $v$ be an arbitrary vertex in $D_m$. It is sufficient to show that $|N^+_{D_m}(v)|\geq \delta^+(D) -\epsilon|D|.$ For $w\in V(D)$, let $r_v(w)$ denote the number of rainbow paths of length $2$ from $v$ to $w$. Let $W=\{w: r_v(w)\geq 5m\}$. We show that $W$ is contained in $N^+_{D_m}(v)$.
\begin{claim}\label{WinDC}
If $w \in W$, then we have $vw\in E(D_m)$.
\end{claim}
\begin{proof}
From the definition of $W$, we have $5m$ distinct rainbow paths $P_1, \dots, P_{5m}$ from $v$ to $w$ of length $2$. Consider an auxiliary graph $G$ with $V(G)=\{P_1, \dots, P_{5m}\}$ and $P_i P_j\in E(G)$ whenever $P_i\cup P_j$ is rainbow. We claim that every vertex of $G$ has at most four non-neighbours. Indeed if for $i\neq j$ we have $P_i=vxw$ and $P_j=vyw$, then, using the fact that the colouring on $D$ is proper and the vertex set is rainbow, it is easy to see that the only way $P_i\cup P_j$ could not be rainbow is if one of the following holds:
\begin{align*}
c(vx)=c(yw) \hspace{1cm} &c(vx)=c(y) \\
c(vy)=c(xw) \hspace{1cm} &c(vy)=c(x).
\end{align*}
Thus if $P_i=vxw$ had five non-neighbours $vy_1w, \dots, vy_5w$ in $G$, then by the Pigeonhole Principle for two distinct $j$ and $k$ we would have one of $c(y_jw)=c(y_kw)$, $c(y_j)=c(y_k)$, or $c(vy_j)=c(vy_k)$. But none of these can occur for distinct paths $vy_jw$ and $vy_kw$ since the colouring on $D$ is proper and the vertex set is rainbow. Therefore every vertex of $G$ has at most four non-neighbours, i.e.\ $\delta(G)\geq 5m-5$ holds. Now by Tur\'an's Theorem, $G$ has a clique of size at least $|V(G)|/5=m$. The union of the paths in this clique is rainbow, showing that $vw\in E(D_m)$.
\end{proof}
Now we show that $W$ is large.
\begin{claim}\label{WLarge}
$|W|\geq \delta^+(D)-\epsilon|D|$
\end{claim}
\begin{proof}
For any $u\in N^+_D(v)$ we let $$N'(u)= N^+_D(u)\setminus\{x\in N^+_D(u):ux \text{ or } x \text{ has the same colour as } v \text{ or } vu\}.$$ Since $D$ is properly coloured, and all the vertices in $D$ have different colours, we have that $|\{x \in N^+(u):ux \text{ or } x \text{ has the same colour as } v \text{ or } vu\}|\leq 4$. This implies that $|N'(u)|\geq \delta^+(D)-4$. Notice that for a vertex $x$, we have $x \in N'(u)$ if, and only if, the path $vux$ is rainbow. Indeed $vu$ has a different colour from $v$ and $u$ since the colouring is proper. Similarly $ux$ has a different colour from $u$ and $x$.
Finally $ux$ and $x$ have different colours from $v$ and $vu$ by the definition of $N'(u)$. Therefore there are $\sum_{u\in N^+_D(v)} |N'(u)|$ rainbow paths of length $2$ starting at $v$, i.e.\ we have $\sum_{x\in V(D)} r_v(x)=\sum_{u\in N^+_D(v)} |N'(u)|$. For any $x\in D$, we certainly have $r_v(x)\leq |N^+(v)|$. If $x \not\in W$ then we have $r_v(x)< 5m$. Combining these we obtain $$(|D|-|W|)5m+|W||N^+_D(v)|\geq \sum_{x\in V(D)} r_v(x)=\sum_{u\in N^+(v)} |N'(u)|\geq |N^+_D(v)|(\delta^+(D)-4).$$ The last inequality follows from $|N'(u)|\geq \delta^+(D)-4$ for all $u \in N^+_D(v)$. Rearranging we obtain $$|W|\geq \frac{|N^+_D(v)|(\delta^+(D)-4)-5m|D|}{|N^+_D(v)|-5m}\geq\delta^+(D)-\frac{5m|D|}{|N^+_D(v)|}-4\geq \delta^+(D)-(5m+4)\frac{|D|}{\delta^+(D)}.$$ If $(5m+4)/\delta^+(D)\leq\epsilon$, then this implies the claim. Otherwise we have $\delta^+(D)< (5m+4)/\epsilon$ which, since $|D|\geq N=(5m+4)/\epsilon^2$, implies that $\delta^+(D)\leq \epsilon|D|$, which also implies the claim.
\end{proof}
Claim~\ref{WinDC} shows that $W\subseteq N^+_{D_m}(v)$, and so Claim~\ref{WLarge} implies that $|N^+_{D_m}(v)|\geq \delta^+(D)-\epsilon|D|$. Since $v$ was arbitrary, this implies the lemma.
\end{proof}
The following lemma shows that every directed graph with high minimum degree contains a large, highly connected subset.
\begin{lemma}\label{LargeConnectedSetLemma}
For all $\epsilon>0$ and $k \in \mathbb{N}$, there is a $d=d(\epsilon)=40\epsilon^{-2}$ and an $N=N(\epsilon,k)=32k\epsilon^{-2}$ such that the following holds. Let $D$ be a directed graph of order at least $N$. Then there is a $(k,d)$-connected subset $A\subseteq V(D)$ satisfying $$|A|\geq \delta^+(D)-\epsilon|D|.$$
\end{lemma}
\begin{proof}
We start with the following claim.
\begin{claim}\label{ClaimLargeConnectedSet}
There is a set $\tilde A\subseteq V(D)$ satisfying the following
\begin{itemize}
\item For all $B\subseteq \tilde A$ with $|B|> \epsilon |D|/4$ there is a vertex $v \in \tilde A\setminus B$ such that $|N^+(v)\cap B|\geq \epsilon^2 |D|/16$.
\item $\delta^+(D[\tilde A])\geq \delta^+(D)-\epsilon |D|/4$.
\end{itemize}
\end{claim}
\begin{proof}
Let $A_0= V(D)$. We define $A_1, A_2, \dots, A_M$ recursively as follows.
\begin{itemize}
\item If $A_i$ contains a set $B_i$ such that $|B_i|> \epsilon|D|/4$ and for all $v \in A_i\setminus B_i$ we have $|N^+(v)\cap B_i|< \epsilon^2 |D|/16$, then we let $A_{i+1}=A_i\setminus B_i$.
\item Otherwise we stop with $M=i$.
\end{itemize}
We will show that $\tilde A=A_M$ satisfies the conditions of the claim. Notice that by the construction of $A_M$, it certainly satisfies the first condition. Thus we just need to show that $\delta^+(D[A_M])\geq \delta^+(D)-\epsilon |D|/4$. From the definition of $A_{i+1}$ we have that $\delta^+(D[A_{i+1}])\geq \delta^+(D[A_{i}])-\epsilon^2|D|/16$, which implies $\delta^+(D[A_{M}])\geq \delta^+(D)-M\epsilon^2|D|/16$. Therefore it is sufficient to show that we stop with $M\leq 4\epsilon^{-1}$. This follows from the fact that the sets $B_0, \dots, B_{M-1}$ are all disjoint subsets of $V(D)$ with $|B_i|> \epsilon|D|/4$.
\end{proof}
Let $\tilde A$ be the set given by the above claim. Let $A=\{v\in \tilde A: |N^-(v)\cap \tilde A|\geq \frac{\epsilon}{2} |D|\}$. We claim that $A$ satisfies the conditions of the lemma.
To show that $|A|\geq \delta^+(D)-\epsilon|D|$, notice that we have $$\frac{\epsilon}{2}|D|(|\tilde A|-|A|)+|A||\tilde A|\geq \sum_{v\in \tilde A} |N^-(v)\cap \tilde A|= \sum_{v\in \tilde A} |N^+(v)\cap \tilde A|\geq |\tilde A|(\delta^+(D)-\epsilon|D|/4).$$ The first inequality comes from bounding $|N^-(v)\cap \tilde A|$ by $\frac{\epsilon}2|D|$ for $v\not\in A$ and by $|\tilde A|$ for $v\in A$. The second inequality follows from the second property of $\tilde A$ in Claim~\ref{ClaimLargeConnectedSet}. Rearranging we obtain $$|A|\geq \frac{|\tilde A|}{|\tilde A|-\epsilon|D|/2}(\delta^+(D)-3\epsilon|D|/4)\geq \delta^+(D)-\epsilon|D|.$$ Now, we show that $A$ is $(k,d)$-connected in $D$. As in Definition~\ref{DefinitionUncolouredkdCon}, let $S$ be a subset of $V(D)$ with $|S|\leq k-1$ and let $x,y$ be two vertices in $A\setminus S$. We will find a path of length $\leq d$ from $x$ to $y$ in $\tilde A\setminus S$. Notice that since $|D|\geq 32k\epsilon^{-2}$, we have $|S|\leq \epsilon^{2} |D|/32$. Let $N^t(x)=\{u\in \tilde A\setminus S: d_{D[\tilde A\setminus S]}(x,u)\leq t\}$, where $N^0(x)=\{x\}$. We claim that for all $x\in \tilde A$ and $t\geq 0$ we have $$|N^{t+1}(x)|\geq \min(|\tilde A|-\epsilon|D|/4, |N^t(x)|+\epsilon^2|D|/32).$$ Indeed if $|N^{t}(x)|< |\tilde A|-\epsilon|D|/4$ holds for some $t$ and $x$, then letting $B=\tilde A\setminus N^{t}(x)$ we can apply the first property of $\tilde A$ from Claim~\ref{ClaimLargeConnectedSet} in order to find a vertex $u\in N^t(x)$ such that $|N^+(u)\cap (\tilde A\setminus N^t(x))|\geq \epsilon^2 |D|/16$. Using $|S|\leq \epsilon^{2} |D|/32$ we get $|(N^+(u)\setminus S)\cap (\tilde A\setminus N^t(x))|\geq |N^+(u)\cap (\tilde A\setminus N^t(x))|-|S|\geq \epsilon^2 |D|/32$. Since $(N^+(u)\cap \tilde A\setminus S)\cup N^t(x)\subseteq N^{t+1}(x)$, we obtain $|N^{t+1}(x)|\geq |N^t(x)|+\epsilon^2|D|/32$. Thus we obtain that $|N^{t}(x)|\geq \min(|\tilde A|-\epsilon|D|/4, t\epsilon^2|D|/32)$. Since $(d-1)\epsilon^2/32>1$, we have that $|N^{d-1}(x)|\geq |\tilde A|-\epsilon|D|/4$. Recall that from the definition of $A$, we also have $|N^-(y)\cap \tilde A|\geq \epsilon |D|/2$. Together these imply that $N^-(y)\cap N^{d-1}(x)\neq \emptyset$ and hence there is an $x$--$y$ path of length $\leq d$ in $\tilde A\setminus S$.
\end{proof}
The following is a generalization of the previous lemma to coloured graphs. This is the main intermediate lemma we need in the proof of Theorem~\ref{MainTheorem}.
\begin{lemma}\label{LargeRainbowConnectedSetLemma}
For all $\epsilon>0$ and $k \in \mathbb{N}$, there is a $d=d(\epsilon)=1280\epsilon^{-2}$ and an $N=N(\epsilon,k)=1800k\epsilon^{-4}$ such that the following holds. Let $D$ be a properly totally coloured directed graph on at least $N$ vertices with rainbow vertex set. Then there is a $(k,d)$-connected subset $A\subseteq V(D)$ satisfying $$|A|\geq \delta^+(D)-\epsilon|D|.$$
\end{lemma}
\begin{proof}
Set $m=9d+3k$, and consider the directed graph $D_m$ as in Definition~\ref{DefinitionDm}. Using $|D|\geq 1800k\epsilon^{-4}$, we can apply Lemma~\ref{DCHighDegree} with the constant $\epsilon/4$ to obtain $\delta^+(D_m)\geq \delta^+(D) -\epsilon|D|/4.$ Apply Lemma~\ref{LargeConnectedSetLemma} to $D_m$ with the constants $\epsilon/4$ and $k$. This gives us a $(k, d/2)$-connected set $A$ in $D_m$ with $|A|\geq \delta^+(D_m)-\epsilon|D_m|/4\geq \delta^+(D)-\epsilon|D|/2$. We claim that $A$ is $(k, d)$-connected in $D$.
As in Definition~\ref{DefinitionColouredkdCon}, let $S$ be a set of at most $k-1$ colours and $x,y \in A$. Let $S_V$ be the vertices of $D$ with colours from $S$. Since all vertices in $D$ have different colours, we have $|S_V|\leq k-1$. Since $A$ is $(k, d/2)$-connected in $D_m$, there is an $x$--$y$ path $P$ in $(D_m\setminus S_V)+x+y$ of length $\leq d/2$. Using the property of $D_m$, for each edge $uv\in P$, there are at least $m$ choices for a triple of three distinct colours $(c_1, c_2, c_3)$ and a vertex $y(uv)$ such that there is a path $uy(uv)v$ with $c(uy(uv))=c_1$, $c(y(uv))=c_2$, and $c(y(uv)v)=c_3$. Since $m\geq 9d+3k\geq 6|E(P)|+3|V(P)|+3|S|$, we can choose such a triple for every edge $uv\in P$ so that the triples assigned to distinct edges of $P$ are disjoint, and also distinct from the colours in $S$ and the colours of vertices of $P$. Let the vertex sequence of $P$ be $u,x_1, x_2, \dots, x_{p}, v$. The following sequence of vertices is a rainbow path from $u$ to $v$ of length $2|P|\leq d$ internally avoiding colours in $S$: $$P'=u, y(ux_1), x_1, y(x_1x_2), x_2, y(x_2x_3), x_3,\dots, x_{p-1},y(x_{p-1}x_p), x_p, y(x_pv), v.$$ To show that $P'$ is a rainbow path we must show that all its vertices and edges have different colours. The vertices all have different colours since the vertices in $D$ all had different colours. The edges of $P'$ all have different colours from each other and the vertices of $P'$ by our choice of the vertices $y(x_ix_{i+1})$ and the triples of colours associated with them.
\end{proof}
We'll need the following simple lemma which says that for any vertex $v$ there is a set of vertices $N^{t_0}$ close to $v$ with few edges going outside $N^{t_0}$.
\begin{lemma}\label{CloseSubgraphsLowExpansion}
Suppose we have $\epsilon>0$ and $D$ a totally coloured directed graph. Let $v$ be a vertex in $D$ and for $t\in \mathbb{N}$, let $N^t(v)=\{x:\dRainbow{v,x}\leq t\}$. There is a $t_0\leq \epsilon^{-1}$ such that we have $$|N^{t_{0}+1}(v)|\leq |N^{t_{0}}(v)|+ \epsilon |D|.$$
\end{lemma}
\begin{proof}
Notice that if $|N^{t+1}(v)|> |N^{t}(v)|+ \epsilon |D|$ held for all $t\leq \epsilon^{-1}$, then we would have $|N^{t}(v)|>\epsilon t|D|$ for all $t\leq \epsilon^{-1}$. When $t=\epsilon^{-1}$ this gives $|N^{\epsilon^{-1}}(v)|>|D|$, which is a contradiction.
\end{proof}
A corollary of the above lemma is that for any vertex $v$ in a properly coloured directed graph, there is a subgraph of $D$ close to $v$ which has reasonably large minimum out-degree.
\begin{lemma}\label{CloseHighDegreeSubgraph}
Suppose we have $\epsilon>0$ and $D$ a properly totally coloured directed graph on at least $2\epsilon^{-2}$ vertices, with rainbow vertex set. Let $v$ be a vertex in $D$ and let $\delta^+= \min_{x : \dRainbow{v,x}\leq \epsilon^{-1}} d^+(x)$. Then there is a set $N$ such that $\dRainbow{v,N}\leq \epsilon^{-1}$ and we have $$\delta^+(D[N])\geq \delta^+ - 2\epsilon |D|.$$
\end{lemma}
\begin{proof}
Apply Lemma~\ref{CloseSubgraphsLowExpansion} to $D$ in order to obtain a number $t_0\leq \epsilon^{-1}$ such that $|N^{t_{0}+1}(v)|\leq |N^{t_{0}}(v)|+ \epsilon |D|$. We claim that the set $N=N^{t_{0}}(v)$ satisfies the conditions of the lemma. Suppose, for the sake of contradiction, that there is a vertex $x\in N^{t_0}(v)$ with $|N^+(x)\cap N^{t_0}(v)|< \delta^+ - 2\epsilon |D|$. Since $\delta^+\leq |N^+(x)|$, we have $|N^+(x)\setminus N^{t_0}(v)|> 2\epsilon |D|$. Let $P$ be a length $\leq t_0$ rainbow path from $v$ to $x$.
Notice that since the colouring on $D$ is proper and all vertices in $D$ have different colours, the path $P+y$ is rainbow for all except at most $2|P|$ of the vertices $y \in N^+(x)$. Therefore we have $|N^+(x)\setminus N^{t_0+1}(v)|\leq 2|P|\leq 2\epsilon^{-1}$. Combined with $|D|\geq 2\epsilon^{-2}$, this implies
\begin{align*}
|N^{t_0+1}(v)|&\geq |N^{t_0}(v)|+|N^+(x)\setminus N^{t_0}(v)|-|N^+(x)\setminus N^{t_0+1}(v)|\\
&>|N^{t_0}(v)|+2\epsilon |D|-2\epsilon^{-1}\\
&\geq |N^{t_0}(v)|+\epsilon|D|.
\end{align*}
This contradicts the choice of $t_0$ in Lemma~\ref{CloseSubgraphsLowExpansion}.
\end{proof}
\section{Proof of Theorem~\ref{MainTheorem}}\label{SectionMainTheorem}
The goal of this section is to prove an approximate version of Conjecture~\ref{ConjectureAharoni} in the case when all the matchings in $G$ are disjoint. The proof will involve considering auxiliary directed graphs to which Lemmas~\ref{LargeRainbowConnectedSetLemma} and~\ref{CloseHighDegreeSubgraph} will be applied.
We begin this section by proving a series of lemmas (Lemmas~\ref{SwitchingLemma} --~\ref{IncrementLemma}) about bipartite graphs consisting of a union of edge-disjoint matchings. The set-up for these lemmas will always be the same, and so we state it in the next paragraph to avoid rewriting it in the statement of every lemma.
We will always have a bipartite graph called ``$G$'' with bipartition classes $X$ and $Y$ consisting of $n+1$ edge-disjoint matchings $M_1, \dots, M_{n+1}$, each with at least $(1+\epsilon_0)n$ edges for some fixed $\epsilon_0>0$. These matchings will be referred to as colours, and the colour of an edge $e$ means the matching $e$ belongs to. There will always be a rainbow matching called $M$ of size $n$ in $G$. We set $X_0=X\setminus V(M)$ and $Y_0=Y\setminus V(M)$. The colour missing from $M$ will be denoted by $c^*$.
Notice that for any edge $e$, there is a special colour (the colour $c_e$ of the edge $e$) as well as a special vertex in $X$ (i.e.\ $e\cap X$) and in $Y$ (i.e.\ $e\cap Y$). In what follows we will often want to refer to the edge $e$, the colour $c_e$, and the vertices $e\cap X$ and $e\cap Y$ interchangeably. To this end we make a number of useful definitions:
\begin{itemize}
\item For an edge $e$, we let $(e)_C$ be the colour of $e$, $(e)_X=e\cap X$, and $(e)_Y=e\cap Y$.
\item For a vertex $x \in X$, we let $(x)_M$ be the edge of $M$ passing through $x$ (if it exists), $(x)_C$ the colour of $(x)_M$, and $(x)_Y$ the vertex $(x)_M\cap Y$. If there is no edge of $M$ passing through $x$, then $(x)_M$, $(x)_C$, and $(x)_Y$ are left undefined.
\item For a vertex $y \in Y$, we let $(y)_M$ be the edge of $M$ passing through $y$ (if it exists), $(y)_C$ the colour of $(y)_M$, and $(y)_X$ the vertex $(y)_M\cap X$. If there is no edge of $M$ passing through $y$, then $(y)_M$, $(y)_C$, and $(y)_X$ are left undefined.
\item For a colour $c$, we let $(c)_M$ be the colour $c$ edge of $M$ (if it exists), $(c)_X=(c)_M\cap X$, and $(c)_Y=(c)_M\cap Y$. For the colour $c^*$, we leave $(c^*)_M$, $(c^*)_X$, and $(c^*)_Y$ undefined.
\end{itemize}
For a set $S$ of colours, edges of $M$, or vertices, we let $(S)_M=\{(s)_M:s\in S\}$, $(S)_X=\{(s)_X:s\in S\}$, $(S)_Y=\{(s)_Y:s\in S\}$, and $(S)_C=\{(s)_C:s\in S\}$. Here $S$ is allowed to contain colours/edges/vertices for which $(*)_M$/$(*)_X$/$(*)_Y$/$(*)_C$ are undefined---in this case $(S)_M$ is just the set of $(s)_M$ for $s\in S$ where $(s)_M$ is defined (and similarly for $(S)_X$/$(S)_Y$/$(S)_C$). It is useful to observe that from the above definitions we get identities such as $(((S)_X)_C)_M=S$ for a set $S$ of edges of $M$.
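This bookkeeping is conveniently mirrored by a few dictionaries. The following minimal Python sketch is an illustration only, assuming $M$ is stored as a set of \texttt{(colour, x, y)} triples with distinct colours, $x$'s, and $y$'s.
\begin{verbatim}
def index_matching(M):
    # M: a set of (colour, x, y) triples forming a rainbow matching.
    by_colour = {c: (c, x, y) for (c, x, y) in M}   # c -> (c)_M
    by_x      = {x: (c, x, y) for (c, x, y) in M}   # x -> (x)_M
    by_y      = {y: (c, x, y) for (c, x, y) in M}   # y -> (y)_M
    return by_colour, by_x, by_y

def C_of(edge): return edge[0]   # (e)_C
def X_of(edge): return edge[1]   # (e)_X
def Y_of(edge): return edge[2]   # (e)_Y

M = {(0, 'x1', 'y1'), (1, 'x2', 'y2')}
by_colour, by_x, by_y = index_matching(M)
# An identity from the text: going x -> edge -> colour -> edge of M
# is a round trip.
assert by_colour[C_of(by_x['x1'])] == by_x['x1']
\end{verbatim}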
We will now introduce two important and slightly complicated definitions. Both Definitions~\ref{DefinitionSwitching} and~\ref{DefinitionFree} take place in the setting of a bipartite graph $G$ with bipartition $X\cup Y$ consisting of $n+1$ edge-disjoint matchings, and a rainbow matching $M$ of size $n$ missing colour $c^*$. The first definition is that of a \emph{switching}---informally this should be thought of as a sequence of edges of $G\setminus M$ which might be exchanged with a sequence of edges of $M$ in order to produce a new rainbow matching of size $n$. See Figure~\ref{SwitchingFigure} for an illustration of a switching.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{Switching.png}
\caption{An $X'$-switching of length $4$. The solid lines represent edges of $M$ and the dashed lines represent edges not in $M$. \label{SwitchingFigure}}
\end{figure}
\begin{definition}\label{DefinitionSwitching}
Let $X'\subseteq X$. We call a sequence of edges, $\sigma=(e_0, m_1, e_1, m_2, e_2, \dots, e_{\ell-1}, m_{\ell})$, an $X'$-switching if the following hold.
\begin{enumerate}[(i)]
\item For all $i$, $m_i$ is an edge of $M$ and $e_i$ is not an edge of $M$.
\item For all $i$, $m_i$ and $e_i$ have the same colour, $c_i$.
\item For all $i$, $e_{i-1}\cap m_i=(m_i)_Y$.
\item For all $i\neq j$, we have $e_{i}\cap e_j=e_{i-1}\cap m_{j}=\emptyset$ and also $c_i\neq c_j$.
\item For all $i$, $(e_i)_X\in X'$.
\end{enumerate}
\end{definition}
If $\sigma$ is a switching defined as above, then we say that $\sigma$ is a length $\ell$ switching from $c_0$ to $c_{\ell}$. Let $e(\sigma)=\{e_0, \dots, e_{\ell-1}\}$ and $m(\sigma)=\{m_1, \dots, m_{\ell}\}$. For a switching $\sigma$ we define $(\sigma)_X=(e(\sigma))_X\cup (m(\sigma))_X$.
The next definition is that of a \emph{free} subset of $X$---informally a subset $X'\subset X$ is free if there are matchings $M'$ which ``look like'' $M$, but avoid any prescribed small subset of $X'$.
\begin{definition}\label{DefinitionFree}
Let $X', T\subseteq X$, $k\in\mathbb{N}$, and $c$ be a colour. We say that $X'$ is $(k,T,c)$-free if $T\cap X'=\emptyset$, $c \not\in (X'\cup T)_C$, and the following holds: Let $A$ be any set of at most $k$ edges in $M\setminus ((T)_M\cup (c)_M)$, and $B\subseteq X'$ any set of at most $k$ vertices such that $(A)_X\cap B=\emptyset$. Then there is a rainbow matching $M'$ of size $n$ satisfying the following:
\begin{itemize}
\item $M'$ agrees with $M$ on $A$.
\item $M'\cap B=\emptyset$.
\item $M'$ misses the colour $c$.
\end{itemize}
\end{definition}
It is worth noticing that $X_0$ is $(n,\emptyset,c^*)$-free (always taking the matching $M'$ to be $M$ in the definition). Intuitively free sets should be thought of as sets which ``behave like $X_0$'' for the purposes of finding a matching larger than $M$. The following lemma is crucial---it combines the preceding two definitions together and says that if we have an $X'$-switching $\sigma$ for a free set $X'$, then there is a new rainbow matching of size $n$ which avoids $(m(\sigma))_X$.
\begin{lemma}\label{SwitchingLemma}
Suppose that $X'$ is $(2k,T,c)$-free and $\sigma=(e_0, m_1, e_1, \dots, e_{\ell-1}, m_{\ell})$ is an $X'$-switching from $c$ to $(m_{\ell})_C$ of length $\ell\leq k$. Let $A$ be any set of at most $k$ edges in $M-(c)_M$ and let $B$ be any subset of $X'$ of order at most $k$.
Suppose that the following disjointness conditions hold:
\begin{align*}
\hspace{2cm} &(\sigma)_X\cap T=\emptyset &(\sigma)_X\cap (A)_X=\emptyset \hspace{1cm} &(\sigma)_X\cap B=\emptyset \hspace{2cm}\\
\hspace{2cm} & &T\cap(A)_X =\emptyset \hspace{1cm} &(A)_X\cap B=\emptyset.\hspace{2cm}
\end{align*}
Then there is a rainbow matching $\tilde M$ of size $n$ in $G$ which misses colour $(m_{\ell})_C$, agrees with $M$ on $A$, and has $\tilde M\cap (m(\sigma))_X=\tilde M\cap B=\emptyset$.
\end{lemma}
\begin{proof}
We let $A'=m(\sigma)\cup A$ and $B'=(e(\sigma))_X\cup B$. Notice that we have $|A'|, |B'|\leq 2k$. Also from the definition of ``switching'', we have that for any $i$ and $j$, the edges $e_i$ and $m_j$ never intersect in $X$, which together with $(A)_X\cap (\sigma)_X=\emptyset$, $B\cap (\sigma)_X=\emptyset$, and $(A)_X\cap B=\emptyset$ implies that $(A')_X\cap B'=\emptyset$. Also, $(\sigma)_X\cap T=\emptyset$ and $(A)_X\cap T=\emptyset$ imply that $A'\cap (T)_M=\emptyset$, and hence since $\sigma$ is a switching starting at $c$ we have $A'\subseteq M\setminus ((T)_M\cup (c)_M)$. Finally, $\sigma$ being an $X'$-switching and $B\subseteq X'$ imply that $B'\subseteq X'$. Therefore we can invoke the definition of $X'$ being $(2k,T,c)$-free in order to obtain a rainbow matching $M'$ of size $n$ avoiding $B'$, agreeing with $M$ on $A'$, and missing colour $c=(e_{0})_C$. We let $$\tilde M=(M'\setminus m(\sigma))\cup e(\sigma) = M' +e_0- m_1+ e_1- m_2+ e_2 - \dots+ e_{\ell-1}- m_{\ell}.$$ We claim that $\tilde M$ is a matching which satisfies all the conditions of the lemma.
Recall that $B'\supseteq(e(\sigma))_X$, $A'\supseteq m(\sigma)$, and $(A')_X\cap B'=\emptyset$. Since $M'$ agreed with $M$ on $A'$ and was disjoint from $B'$, we get $m(\sigma)\subseteq M'$ and $e(\sigma)\cap (M'\setminus m(\sigma))=\emptyset$. This implies that $\tilde M$ is a set of $n$ edges and also that $(\tilde M)_X=\big((M')_X\setminus (m(\sigma))_X\big)\cup (e(\sigma))_X$ is a set of $n$ vertices. Finally notice that since $(e_i)_Y=(m_{i+1})_Y$ we have $(\tilde M)_Y=(M')_Y$. Thus $\tilde M$ is a set of $n$ edges with $n$ vertices in each of $X$ and $Y$, i.e.\ a matching. The matching $\tilde M$ is clearly rainbow, missing the colour $(m_{\ell})_C$, since $m_i$ and $e_i$ always have the same colour. To see that $\tilde M$ agrees with $M$ on edges in $A$, notice that $M'$ agreed with $M$ on these edges since we had $A\subseteq A'$. Since $(\sigma)_X\cap (A)_X=\emptyset$ implies that $\sigma$ contains no edges of $A$, we obtain that $\tilde M$ agrees with $M$ on $A$. To see that $\tilde M\cap (m(\sigma))_X=\emptyset$, recall that $(\tilde M)_X=\big((M')_X\setminus (m(\sigma))_X\big)\cup (e(\sigma))_X$ and $(m(\sigma))_X\cap (e(\sigma))_X=\emptyset$. Finally, $\tilde M\cap B=\emptyset$ follows from $M'\cap B=\emptyset$, $(\tilde M)_X=\big((M')_X\setminus (m(\sigma))_X\big)\cup (e(\sigma))_X$, and $B\cap (\sigma)_X=\emptyset$.
\end{proof}
We study $X'$-switchings by looking at an auxiliary directed graph. For any $X'\subseteq X$, we will define a directed, totally labelled graph $D_{X'}$. We call $D_{X'}$ a ``labelled'' graph rather than a ``coloured'' graph just to avoid confusion with the coloured graph $G$. Of course the concepts of ``coloured'' and ``labelled'' graphs are equivalent, and we will freely apply results from Section~\ref{SectionConnectedness} to labelled graphs. The vertices and edges of $D_{X'}$ will be labelled by elements of the set $X\cup \{*\}$.
\begin{definition}
Let $X'$ be a subset of $X$.
The directed graph $D_{X'}$ is defined as follows:
\begin{itemize}
\item The vertex set of $D_{X'}$ is the set of colours of edges in $G$. For any colour $v\in V(D_{X'})$ present in $M$, $v$ is labelled by ``$(v)_X$''. The colour $c^*$ is labelled by ``$*$''.
\item For two colours $u$ and $v\in V(D_{X'})$, there is a directed edge from $u$ to $v$ in $D_{X'}$ whenever there is an $x\in X'$ such that there is a colour $u$ edge from $x$ to the vertex $(v)_Y$ in $G$. In this case $uv$ is labelled by ``$x$''.
\end{itemize}
\end{definition}
Notice that in the second part of this definition the labelling is well-defined since there cannot be colour $u$ edges from two distinct vertices $x$ and $x'$ to $(v)_Y$ (since the colour $u$ edges form a matching in $G$). Recall that a total labelling is proper if outgoing edges at a vertex always have different labels, ingoing edges at a vertex always have different labels, adjacent vertices have different labels, and an edge always has different labels from its endpoints. Using the fact that the matchings in $G$ are disjoint we can show that $D_{X'}$ is always properly labelled.
\begin{lemma}\label{ProperColouring}
For any $X'\subseteq X$ the total labelling on $D_{X'}$ is always proper. In addition $D_{X'}$ has rainbow vertex set.
\end{lemma}
\begin{proof}
Suppose that $uv$ and $u'v'$ are two distinct edges of $D_{X'}$ with the same label $x\in X'$. By definition of $D_{X'}$ they correspond to two edges $x(v)_Y$ and $x(v')_Y$ of $G$ having colours $u$ and $u'$ respectively. This implies that $u$ and $u'$ are different, since otherwise we would have two edges of the same colour leaving $x$ in $G$ (which cannot happen since colour classes in $G$ are matchings). We also get that $v$ and $v'$ are distinct, since otherwise we would have edges of colours both $u$ and $u'$ between $x$ and $(v)_Y$ in $G$ (contradicting the matchings forming $G$ being disjoint).
Let $uv$ be an edge of $D_{X'}$ labelled by $x$ and $x(v)_Y$ the corresponding colour $u$ edge of $G$. Then $u$ cannot be labelled by ``$x$'' (since that would imply that the colour $u$ edge at $x$ would end at $(u)_Y$ rather than $(v)_Y$), and $v$ cannot be labelled by ``$x$'' (since then there would be edges from $x$ to $(v)_Y$ in $G$ of both colours $u$ and $v$).
The fact that $D_{X'}$ has rainbow vertex set holds because $M$ being a matching implies that the labels $(c)_X$ are distinct for distinct colours $c$.
\end{proof}
Recall that a path in a totally labelled graph is defined to be rainbow whenever all its vertices and edges have different labels. The reason we defined the directed graph $D_{X'}$ is that rainbow paths in $D_{X'}$ correspond exactly to $X'$-switchings in $G$. Let $P=v_0, \dots, v_{\ell}$ be a path in $D_{X'}$ for some $X'$. For each $i=0, \dots, \ell-1$ let $e_i$ be the colour $v_i$ edge of $G$ corresponding to the edge $v_{i} v_{i+1}$ in $D_{X'}$. We define $\sigma_P$ to be the sequence of edges $(e_0,$ $(v_{1})_M, e_1,$ $(v_{2})_M,$ $e_2, \dots, (v_{\ell-1})_M, e_{\ell-1},$ $(v_{\ell})_M)$. Notice that $(e(\sigma_P))_X$ is the set of labels of edges in $P$, and $(m(\sigma_P))_X$ is the set of labels of vertices in $P-v_0$. The following lemma shows that if $P$ is rainbow then $\sigma_P$ is a switching.
\begin{lemma}\label{PathSwitching}
Let $P=v_0, \dots, v_{\ell}$ be a rainbow path in $D_{X'}$ for some $X'\subseteq X$. Then $\sigma_P$ is an $X'$-switching from $v_0$ to $v_\ell$ of length $\ell$.
\end{lemma}
\begin{proof}
As in the definition of $\sigma_P$, let $e_i$ be the colour $v_i$ edge of $G$ corresponding to the edge $v_{i} v_{i+1}$ in $D_{X'}$. We need to check all the parts of the definition of ``$X'$-switching''. For part (i), notice that $(v_{1})_M, \dots, (v_{\ell})_M$ are edges of $M$ by definition of $(.)_M$, whereas $e_i$ cannot be the colour $v_i$ matching edge $(v_i)_M$ since $(e_i)_Y=(v_{i+1})_M\cap Y$, which is distinct from $(v_i)_M\cap Y$. Parts (ii), (iii), and (v) follow immediately from the definition of $e_i$ and the graph $D_{X'}$. Part (iv) follows from the fact that $P$ is a rainbow path. Indeed to see that for $i\neq j$ we have $e_i\cap e_j=\emptyset$, notice that $e_i\cap e_j\cap X=\emptyset$ since $v_i v_{i+1}$ and $v_j v_{j+1}$ have different labels in $D_{X'}$, and that $e_i\cap e_j\cap Y=\emptyset$ since $(e_i)_Y\in (v_{i+1})_M$, $(e_j)_Y\in (v_{j+1})_M$, and $(v_{i+1})_M\cap(v_{j+1})_M=\emptyset$. Similarly for $i\neq j$, $e_{i-1}\cap (v_j)_M\cap X=\emptyset$ since $v_{i-1}v_{i}$ and $v_j$ have different labels in $D_{X'}$, and $e_{i-1}\cap (v_j)_M\cap Y=\emptyset$ since $(e_{i-1})_Y\in (v_{i})_M$ and $(v_{i})_M\neq (v_{j})_M$. Finally, $c_i\neq c_j$ since $v_0, \dots, v_{\ell}$ are distinct.
\end{proof}
Although it will not be used in our proof, it is worth noticing that the converse of Lemma~\ref{PathSwitching} holds, i.e.\ to every $X'$-switching $\sigma$ there corresponds a unique rainbow path $P$ in $D_{X'}$ such that $\sigma=\sigma_P$.
So far all our lemmas were true regardless of whether the rainbow matching $M$ was maximum or not. Subsequent lemmas will assume that $M$ is maximum. The following lemma shows that for a free set $X'$, vertices in $D_{X'}$ have large out-degree.
\begin{lemma}\label{ShortPathHighDegree}
Suppose there is no rainbow matching in $G$ of size $n+1$. Let $X'$, $T$, $k$ and $c$ be such that $X'$ is $(2k,T,c)$-free. Let $D=D_{X'}\setminus (T)_C$, $v$ a vertex of $D$, and $P$ a rainbow path in $D$ from $c$ to $v$ of length at most $k$. Then we have $$|N^+_D(v)|\geq (1+\epsilon_0)n+|X'|-|X|-2|P|-|T|.$$
\end{lemma}
\begin{proof}
Notice that since $P$ is contained in $D_{X'}\setminus (T)_C$ and since $X'$ being $(2k,T,c)$-free implies $X'\cap T=\emptyset$, we can conclude that $(\sigma_P)_X\cap T=\emptyset$. Therefore, Lemma~\ref{SwitchingLemma} applied with $A=\emptyset$ implies that for any $B\subseteq X'$ with $|B|\leq k$ and $B\cap (\sigma_P)_{X}=\emptyset$, there is a rainbow matching $M'$ of size $n$ which is disjoint from $B$ and misses colour $v$. Since there are no rainbow matchings of size $n+1$ in $G$, this means that there are no colour $v$ edges from $X'\setminus (\sigma_P)_{X}$ to $Y_0$ (indeed if such an edge $xy$ existed, then we could apply Lemma~\ref{SwitchingLemma} with $B=\{x\}$ in order to obtain a rainbow matching $M'$ missing colour $v$ and vertex $x$, which could be extended to a rainbow matching of size $n+1$ by adding the edge $xy$). We claim that there are at least $(1+\epsilon_0)n+|X'|-|X|-2|P|$ colour $v$ edges from $X'\setminus (\sigma_P)_X$. Indeed out of the at least $(1+\epsilon_0)n$ colour $v$ edges in $G$ at most $|X|-|X'|$ of them can avoid $X'$, and at most $2|P|$ of them can pass through $(\sigma_P)_X$, leaving at least $(1+\epsilon_0)n-(|X|-|X'|)-2|P|$ colour $v$ edges to pass through $X'\setminus (\sigma_P)_X$. Since none of these edges can touch $Y_0$, each of them must give rise to an out-neighbour of $v$ in $D_{X'}$.
This shows that $|N^+_{D_{X'}}(v)|\geq (1+\epsilon_0)n+|X'|-|X|-2|P|$, which implies the result.
\end{proof}
The following lemma is the essence of the proof of Theorem~\ref{MainTheorem}. It roughly says that given a free set $X_1$ containing $X_0$, there is another free set $X_2$ containing $X_0$ such that $X_2$ is much bigger than $X_1$, but has a worse parameter $k$. The proof of this lemma combines everything in this section with Lemmas~\ref{MengerLemma},~\ref{LargeRainbowConnectedSetLemma} and~\ref{CloseHighDegreeSubgraph} from Section~\ref{SectionConnectedness}.
\begin{lemma}\label{IncrementLemma}
Let $k_1$ be an integer such that $n\geq 10^{20} \epsilon_0^{-8}k_1$ and $k_1\geq 20\epsilon_0^{-1}$. Set $k_2=10^{-6}\epsilon_0^{2}k_1$. Suppose there is no rainbow matching in $G$ of size $n+1$.
\begin{itemize}
\item Suppose that we have $X_1, T_1 \subseteq X$ and a colour $c_1$ such that $X_1$ is $(k_1, T_1, c_1)$-free and we also have $X_0\subseteq X_1\cup T_1$ and $|T_1|\leq k_1-30\epsilon_0^{-1}$.
\item Then there are $X_2, T_2\subseteq X$ and a colour $c_2$ such that $X_2$ is $(k_2, T_2, c_2)$-free and we also have $X_0\subseteq X_2\cup T_2$, $|T_2|\leq |T_1|+ 30\epsilon_0^{-1}$ and $$|X_2|> |X_1|+\frac{\epsilon_0}2n.$$
\end{itemize}
\end{lemma}
\begin{proof}
Set $d=10^5\epsilon_0^{-2}$. Let $D=D_{X_1}\setminus (T_1)_C$. Recall that Lemma~\ref{ProperColouring} implies that $D$ is properly labelled with rainbow vertex set. Lemma~\ref{ShortPathHighDegree}, together with $n\geq 10^{20} \epsilon_0^{-8}k_1$, $k_1\geq 20\epsilon_0^{-1}$, and $|T_1|\leq k_1$, implies that all vertices $v$ in $D$ within rainbow distance $10\epsilon_0^{-1}$ of $c_1$ satisfy $d^+(v)\geq (1+\epsilon_0)n+|X_1|-|X|-20\epsilon_0^{-1}-|T_1|\geq(1+0.9\epsilon_0)n+|X_1|-|X|$. Lemma~\ref{CloseHighDegreeSubgraph} applied with $\epsilon=0.1\epsilon_0$ implies that there is a subgraph $D'$ in $D$ satisfying $\delta^+(D')\geq (1+0.7\epsilon_0)n+|X_1|-|X|$ and $\dRainbow{c_1, v}\leq 10\epsilon_0^{-1}$ for all $v\in D'$. Therefore, using $n\geq 10^{20} \epsilon_0^{-8}k_1$, we can apply Lemma~\ref{LargeRainbowConnectedSetLemma} to $D'$ with $\epsilon=0.1\epsilon_0$ and $k=9k_2d$ in order to find a set $W$ with $|W|\geq(1+0.6\epsilon_0)n+|X_1|-|X|$ which is $(9k_2 d, d)$-connected in $D'$.
Since $W\subseteq D'$, there is a rainbow path, $Q$, of length $\leq 10\epsilon_0^{-1}$ from $c_1$ to some $q\in W$. Let $c_2$ be any vertex in $W$ with $(c_2)_X\not \in (\sigma_Q)_X$. Let $T_2=T_1\cup (\sigma_Q)_X\cup (c_1)_X$. Let $X_2=((W)_X\cup X_0)\setminus (T_2\cup (c_2)_X)$. We claim that $X_2$, $T_2$, and $c_2$ satisfy the conclusion of the lemma.
First we show that $X_2$ is $(k_2, T_2, c_2)$-free. The facts that $T_2\cap X_2=\emptyset$ and $c_2\not\in X_2\cup T_2$ follow from the construction of $X_2$, $T_2$, and $c_2$. Let $A$ be any set of $k_2$ edges of $M\setminus ((T_2)_M\cup (c_2)_M)$, and $B\subseteq X_2$ any set of $k_2$ vertices such that $(A)_X\cap B=\emptyset$. Let $B_{X_0}=B\cap X_0$ and $B_{W}=B\cap (W)_X=B\setminus B_{X_0}$. By Lemma~\ref{MengerLemma}, applied with $k=3k_2$, $d=d$, $A=W$, with the vertices $a_1, \dots, a_k$ of that lemma taken to be $q$, the colours in $(B_W)_C$, and $c_2$ (in this order), and with $S=(A)_X\cup (\sigma_Q)_X\cup B_{X_0}$, there is a rainbow path $P$ in $D'$ of length $\leq 3k_2 d$ from $q$ to $c_2$ which is disjoint from $V(Q-q)$ and $(A)_C$, passes through every colour of $(B_W)_C$, and whose edges and vertices don't have labels in $\big((A)_X\cup (\sigma_Q)_X\cup B_{X_0}\big)\setminus (q)_X$. Notice that this means that $Q+P$ is a rainbow path from $c_1$ to $c_2$.
We apply Lemma~\ref{SwitchingLemma} with $X'=X_1$, $T=T_1$, $c=c_1$, $\sigma=\sigma_{Q+P}$, $A=A$, $B=B_{X_0}$. For this application notice that $\sigma_{Q+P}$ is an $X_1$-switching of length $\leq k_1/2$, which holds because of Lemma~\ref{PathSwitching} and because $2|Q|+2|P|\leq 20\epsilon_0^{-1}+2k_2d\leq k_1/2$. We also need to check the various disjointness conditions---$(A)_X\cap T_1=(A)_X\cap (\sigma_{Q+P})_X=(A)_X\cap B_{X_0}=\emptyset$ (which hold because $(A)_X$ was disjoint from $T_2$, $P$, and $B$), $(\sigma_{Q+P})_X\cap T_1=\emptyset$ (which holds since vertices and edges in $D$ have no labels from $T_1$), and $(\sigma_{Q+P})_X\cap B_{X_0}=\emptyset$ (which holds since $B$ was disjoint from $T_2$ and $P$ had no labels from $B_{X_0}$). Therefore Lemma~\ref{SwitchingLemma} produces a rainbow matching $M'$ of size $n$ which agrees with $M$ on $A$, avoids $(m(\sigma_{Q+P}))_X\cup B_{X_0}$, and misses colour $c_2$. Since $P$ passes through every colour in $(B_W)_C$, we have $B_W\subseteq (m(\sigma_{Q+P}))_X$ and so $M'$ avoids all of $B$. Since $A$ and $B$ were arbitrary, we have shown that $X_2$ is $(k_2,T_2,c_2)$-free.
The containment $X_0\subseteq X_2\cup T_2$ holds because $X_0\subseteq X_1\cup T_1\subseteq X_2\cup T_2$. Notice that $|T_2|\leq |T_1|+ 30\epsilon_0^{-1}$ follows from $|Q|\leq 10\epsilon_0^{-1}$. Finally, $|X_2|> |X_1|+\epsilon_0n/2$ holds because, since $(W)_X$ is disjoint from $X_0$, we have $$|X_2|\geq |X_0|+|W|\geq |X_0|+(1+0.6\epsilon_0)n+|X_1|-|X|=|X_1|+ 0.6\epsilon_0n.$$
\end{proof}
We are finally ready to prove Theorem~\ref{MainTheorem}. The proof consists of starting with $X_0$ and applying Lemma~\ref{IncrementLemma} repeatedly, at each step finding a free set $X_i$ which is $\epsilon_0 n/2$ bigger than $X_{i-1}$. This clearly cannot be performed more than $2\epsilon_0 ^{-1}$ times (since otherwise it would contradict $|X_i|\leq |X|=|X_0|+n$), and hence the ``there is no rainbow matching in $G$ of size $n+1$'' clause of Lemma~\ref{IncrementLemma} could not be true.
\begin{proof}[Proof of Theorem~\ref{MainTheorem}]
Let $G$ be a bipartite graph which is the union of $n_0\ge N_0$ edge-disjoint matchings, each of size at least $(1+\epsilon_0)n_0$, where we write $\epsilon_0$ and $N_0$ for the $\epsilon$ and $N(\epsilon)$ of the theorem. Let $M$ be the largest rainbow matching in $G$ and $c^*$ the colour of any matching not used in $M$. Let $n$ be the number of edges of $M$. Since $M$ is maximum, Lemma~\ref{GreedyLemma} tells us that $n\geq N_0/2$. Let $X_0=X\setminus V(M)$ and $Y_0=Y\setminus V(M)$. Suppose for the sake of contradiction that $n<n_0$.
Let $T_0=\emptyset$, $k_0=(10^{6}\epsilon_0^{-2})^{2\epsilon_0^{-1}}$, and $c_0=c^*$. Notice that since $X_0$ is $(n, T_0, c_0)$-free and $n\geq N_0/2\geq k_0$, we get that $X_0$ is $(k_0, T_0, c_0)$-free. For $i=1, \dots, 2\epsilon_0^{-1}$, we set $k_{i}=10^{-6}\epsilon_0^{2}k_{i-1}$. For $i=0, \dots, 2\epsilon_0^{-1}-1$ we repeatedly apply Lemma~\ref{IncrementLemma} to $X_i$, $k_i$, $T_i$, $c_i$ in order to obtain sets $X_{i+1}$, $T_{i+1}\subseteq X$ and a colour $c_{i+1}$ such that $X_{i+1}$ is $(k_{i+1}, T_{i+1}, c_{i+1})$-free, $X_0\subseteq X_{i+1}\cup T_{i+1}$, $|T_{i+1}|\leq |T_i|+30\epsilon_0^{-1}$, and $|X_{i+1}|>|X_i|+\epsilon_0n/2$. To see that we can repeatedly apply Lemma~\ref{IncrementLemma} this way, we only need to observe that there are no rainbow matchings of size $n+1$ in $G$, and that for $i\leq 2\epsilon_0^{-1}-1$ we always have $n\geq 10^{20} \epsilon_0^{-8}k_i$, $k_i\geq 20\epsilon_0^{-1}$, and $|T_i|\leq 30\epsilon_0^{-1}i\leq k_i-30\epsilon_0^{-1}$.
But now we obtain that $|X_{2\epsilon_0^{-1}}|> |X_0|+n=|X|$, which is a contradiction since $X_i$ is a subset of $X$.
\end{proof}
\section{Golden Ratio Theorem}\label{SectionGoldenRatio}
In this section we prove Theorem~\ref{GoldenRatioTheorem}. The proof uses Theorem~\ref{Woolbright} as well as Lemma~\ref{CloseSubgraphsLowExpansion}.
\begin{proof}[Proof of Theorem~\ref{GoldenRatioTheorem}.]
The proof is by induction on $n$. The case ``$n=1$'' is trivial since here $G$ is simply a matching. Suppose that the theorem holds for all $G$ which are unions of $<n$ matchings.
Let $G$ be a graph which is the union of $n$ matchings, each of size at least $\phi n+ 20n/ \log n$. Suppose that $G$ has no rainbow matching of size $n$. Let $M$ be a maximum rainbow matching in $G$. By induction we can suppose that $|M|= n-1$. Let $c^*$ be the missing colour in $M$. Let $X_0=X\setminus V(M)$ and $Y_0=Y\setminus V(M)$. Notice that for any colour $c$ there are at least $(\phi-1) n+20n/\log n$ colour $c$ edges from $X_0$ to $Y$, and at least $(\phi-1) n+20n/\log n$ colour $c$ edges from $Y_0$ to $X$ (since the colour $c$ edges are pairwise disjoint and at most $n-1$ of them meet $V(M)\cap X$, respectively $V(M)\cap Y$). If $n< 10^6$, then this would give more than $n$ colour $c^*$ edges from $X_0$ to $Y$, one of which would end in $Y_0$ and could be added to $M$ to produce a larger matching. Therefore, we have that $n\geq 10^6$.
We define an edge-labelled directed graph $D$ whose vertices are the colours in $G$, and whose edges are labelled by vertices from $X_0\cup Y_0$. We make $cd$ an edge in $D$ with label $v\in X_0\cup Y_0$ whenever there is a colour $c$ edge from $v$ to the colour $d$ edge of $M$. Notice that $D$ is out-proper---indeed if edges $ux$ and $uy\in E(D)$ had the same label $v\in X_0\cup Y_0$, then they would correspond to two colour $u$ edges touching $v$ in $G$ (which cannot happen since the colour classes of $G$ are matchings). Recall that $\dRainbow{x,y}$ denotes the length of the shortest rainbow $x$ to $y$ path in $D$. We'll need the following two claims.
\begin{claim}\label{GoldRatioFewXYedges}
For every $c\in V(D)$, there are at most $\dRainbow{c^*,c}$ colour $c$ edges between $X_0$ and $Y_0$.
\end{claim}
\begin{proof}
Let $P=c^*p_1 \dots p_k c$ be a rainbow path of length $\dRainbow{c^*,c}$ from $c^*$ to $c$ in $D$. For each $i$, let $m_i$ be the colour $p_i$ edge of $M$, and let $e_i$ be the colour $p_i$ edge from the label of $p_ip_{i+1}$ to $m_{i+1}$ (interpreting $p_{k+1}=c$ and $m_{k+1}=m_c$, where $m_{c}$ is the colour $c$ edge of $M$). Similarly, let $e_{c^*}$ be the colour $c^*$ edge from the label of $c^*p_1$ to $m_{1}$. If there are more than $\dRainbow{c^*,c}$ colour $c$ edges between $X_0$ and $Y_0$, then there has to be at least one such edge, $e_{c}$, which is disjoint from $e_{c^*}, e_1, \dots, e_{k}$. Let $$M'=M+e_{c^*}-m_1+e_1-m_2+e_2-\dots-m_{k}+e_{k}-m_{c}+e_{c}.$$ The graph $M'$ is clearly a rainbow graph with $n$ edges. We claim that it is a matching. Distinct edges $e_i$ and $e_j$ satisfy $e_i\cap e_j=\emptyset$ since $P$ is a rainbow path. The edge $e_i$ intersects $V(M)$ only in one of the vertices of $m_{i+1}$, which is not present in $M'$. This means that $M'$ is a rainbow matching of size $n$, contradicting our assumption that $M$ was maximum.
\end{proof}
\begin{claim}\label{CloseSubgraphGoldenRatio}
There is a set $A\subseteq V(D)$ containing $c^*$ such that for all $v\in A$ we have $|N^+(v)\setminus A|\leq n/\log n$ and $\dRainbow{c^*,v} \leq \log n$.
\end{claim}
\begin{proof}
This follows by applying Lemma~\ref{CloseSubgraphsLowExpansion} to $D$ with $\epsilon=(\log n)^{-1}$.
\end{proof}
Let $A$ be the set of colours given by the above claim.
Let $M'$ be the submatching of $M$ consisting of the edges with colours not in $A$. Since $c^* \in A$, we have $|M'|+|A|=n$. Let $A_X$ be the subset of $X$ spanned by edges of $M$ with colours from $A$, and $A_Y$ be the subset of $Y$ spanned by edges of $M$ with colours from $A$. Claim~\ref{GoldRatioFewXYedges} shows that for any $a\in A$ there are at most $\log n$ colour $a$ edges between $X_0$ and $Y_0$. Therefore there are at least $(\phi-1) n+20n/\log n-\log n$ colour $a$ edges from $X_0$ to $Y\cap V(M)$. Using the property of $A$ from Claim~\ref{CloseSubgraphGoldenRatio} we obtain that there are at least $(\phi-1) n+19n/\log n-\log n$ colour $a$ edges from $X_0$ to $A_Y$. Similarly, for any $a\in A$ we obtain at least $(\phi-1) n+19n/\log n-\log n$ colour $a$ edges from $Y_0$ to $A_X$.
By applying Theorem~\ref{Woolbright} to the subgraph of $G$ consisting of the edges with colours in $A$ between $X_0$ and $A_Y$, we can find a subset $A_0\subseteq A$ and a rainbow matching $M_0$ between $X_0$ and $A_Y$ using exactly the colours in $A_0$ such that we have
\begin{align*}
|A_0|&\geq (\phi-1)n+19n/\log n-\log n -\sqrt{(\phi-1)n+19n/\log n-\log n}\\
&\geq (\phi-1)n-6\sqrt{n}.
\end{align*}
Let $A_1=A\setminus A_0$. We have $|A_1|\leq n-|A_0|\leq (2-\phi)n+ 6\sqrt{n}$. Recall that for each $a\in A_1$ there is a colour $a$ matching between $Y_0$ and $A_X$ of size at least $(\phi-1) n+19n/\log n-\log n$. Notice that the following holds
\begin{align*}
(\phi-1) n+\frac{19n}{\log n} -\log n&\geq \phi ((2-\phi)n+ 6\sqrt{n})+\frac{20((2-\phi)n+ 6\sqrt{n})}{ \log((2-\phi)n+ 6\sqrt{n})}\\
&\geq \phi|A_1|+\frac{20|A_1|}{\log |A_1|}.
\end{align*}
The first inequality follows from $\phi^2-\phi-1=0$ as well as some simple bounds on $\sqrt n$ and $\log n$ for $n\geq 10^6$. The second inequality holds since $x/\log x$ is increasing. By induction there is a rainbow matching $M_1$ between $Y_0$ and $A_X$ using exactly the colours in $A_1$. Now $M'\cup M_0\cup M_1$ is a rainbow matching in $G$ of size $n$.
\end{proof}
\section{Concluding remarks}\label{SectionConclusion}
Here we make some concluding remarks about the techniques used in this paper.
\subsection*{Analogues of Menger's Theorem for rainbow $k$-edge-connectedness}
One would like to have a version of Menger's Theorem for rainbow $k$-edge-connected graphs as defined in the introduction. In this section we explain why the most natural analogue fails to hold. Consider the following two properties of an edge-coloured directed graph $D$ and a pair of vertices $u,v\in D$.
\begin{enumerate}[(i)]
\item For any set of $k-1$ colours $S$, there is a rainbow $u$ to $v$ path $P$ avoiding colours in $S$.
\item There are $k$ edge-disjoint $u$ to $v$ paths $P_1, \dots, P_k$ such that $P_1\cup \dots\cup P_k$ is rainbow.
\end{enumerate}
The most natural analogue of Menger's Theorem for rainbow $k$-edge-connected graphs would say that for any graph we have (i) $\iff$ (ii). One reason this would be a natural analogue of Menger's Theorem is that there is a fractional analogue of the statement (i) $\iff$ (ii). We say that a rainbow path $P$ contains a colour $c$ if $P$ has a colour $c$ edge.
\begin{proposition}\label{FractionalMenger}
Let $D$ be an edge-coloured directed graph, $u$ and $v$ two vertices in $D$, and $k$ a real number. The following are equivalent.
\begin{enumerate}[(a)]
\item For any assignment of non-negative real numbers $y_c$ to every colour $c$, with $\sum_{c \text{ a colour}} y_c< k$, there is a rainbow $u$ to $v$ path $P$ with $\sum_{c \text{ contained in } P} y_c< 1$.
\item We can assign a non-negative real number $x_P$ to every rainbow $u$ to $v$ path $P$, such that for any colour $c$ we have $\sum_{P \text{ contains } c} x_P\leq 1$ and also $\sum_{P \text{ a rainbow } u \text{ to } v \text{ path}} x_P\geq k$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $k_a$ be the minimum of $\sum_{c \text{ a colour}} y_c$ over all choices of non-negative real numbers $y_c$ satisfying $\sum_{c \text{ contained in } P} y_c\geq 1$ for all rainbow $u$ to $v$ paths $P$. Similarly, we let $k_b$ be the maximum of $\sum_{P \text{ a rainbow } u \text{ to } v \text{ path}} x_P$ over all choices of non-negative real numbers $x_P$ satisfying $\sum_{P \text{ contains } c} x_P\leq 1$ for all colours $c$. It is easy to see that $k_a$ and $k_b$ are the optimal values of two linear programs which are dual to each other. Therefore, by the strong duality theorem (see~\cite{LinearProgrammingBook}) we have that $k_a=k_b$, which implies the proposition.
\end{proof}
The reason we say that Proposition~\ref{FractionalMenger} is an analogue of the statement ``(i) $\iff$ (ii)'' is that if the real numbers $y_c$ and $x_P$ were all in $\{0,1\}$ then (a) would be equivalent to (i) and (b) would be equivalent to (ii) (this is seen by letting $S=\{c: y_c=1\}$ and $\{P_1, \dots, P_k\}=\{P: x_P=1\}$).
Unfortunately (i) does not imply (ii) in a very strong sense. In fact even if (ii) was replaced by the weaker statement ``there are $k$ edge-disjoint rainbow $u$ to $v$ paths'', then (i) would still not imply (ii).
\begin{proposition}
For any $k$ there is a coloured directed graph $D_k$ with two vertices $u$ and $v$ such that the following hold.
\begin{enumerate}[(I)]
\item For any set of $k$ colours $S$, there is a rainbow $u$ to $v$ path $P$ avoiding colours in $S$.
\item Any pair $P_1$, $P_2$ of rainbow $u$ to $v$ paths has a common edge.
\end{enumerate}
\end{proposition}
\begin{proof}
We will construct a multigraph $D=D_k$ having the above properties. It is easy to modify the construction to obtain a simple graph. Fix $m>2k+1$. The vertex set of $D$ is $\{x_0, \dots, x_m\}$ with $u=x_0$ and $v=x_m$. For each $i=0, \dots, m-1$, $D$ has $k+1$ copies of the edge $x_ix_{i+1}$ appearing with colours $i$, $m+1$, $m+2$, $\dots$, $m+k$. In other words $D$ is the union of $k+1$ copies of the path $x_0 x_1 \dots x_m$, one of which is rainbow, and the rest monochromatic.
To see that $D$ satisfies (I), let $S$ be any set of $k$ colours, say with $s_1$ colours from $\{0,\dots,m-1\}$ and $s_2$ colours from $\{m+1,\dots,m+k\}$, so that $s_1+s_2\leq k$. Build a $u$ to $v$ path by using the colour $i$ copy of $x_ix_{i+1}$ whenever $i\notin S$; for the at most $s_1$ edges $x_ix_{i+1}$ with $i\in S$, use $s_1$ distinct colours from the $k-s_2\geq s_1$ colours of $\{m+1,\dots,m+k\}\setminus S$. The resulting path is rainbow and avoids $S$.
Now we check that $D$ satisfies (II). Indeed if $P_1$ and $P_2$ are $u$ to $v$ paths, then they must have vertex sequence $x_0 x_1 \dots x_m$. Since there are only $m+k$ colours in $D$, both $P_1$ and $P_2$ must have at least $m-k$ edges with colours from $\{0, \dots, m-1\}$. By the Pigeonhole Principle, since $2(m-k)>m$, there is some colour $i\in \{0, \dots, m-1\}$ such that both $P_1$ and $P_2$ have a colour $i$ edge. But the only colour $i$ edge in $D$ is $x_ix_{i+1}$, which must therefore be present in both $P_1$ and $P_2$.
\end{proof}
There is another, more subtle, reason why (i) does not imply (ii). Indeed if we had ``(i) $\implies$ (ii)'' then this would imply that every bipartite graph consisting of $n$ matchings of size $n$ contains a rainbow matching of size $n$. Indeed given a bipartite graph $G$ with bipartition $X \cup Y$ consisting of $n$ matchings of size $n$, construct an auxiliary graph $G'$ by adding two vertices $u$ and $v$ to $G$ with all edges from $u$ to $X$ and from $Y$ to $v$ present. These new edges all receive different colours which were not in $G$.
The reason we say that Proposition~\ref{FractionalMenger} is an analogue of the statement ``(i) $\iff$ (ii)'' is that if the real numbers $y_c$ and $x_P$ were all in $\{0,1\}$ then (a) would be equivalent to (i) and (b) would be equivalent to (ii) (this is seen by letting $S=\{c: y_c=1\}$ and $\{P_1, \dots, P_k\}=\{P: x_P=1\}$).

Unfortunately (i) does not imply (ii) in a very strong sense. In fact, even if (ii) were replaced by the weaker statement ``there are $k$ edge-disjoint rainbow $u$ to $v$ paths'', then (i) would still not imply (ii).
\begin{proposition}
For any $k$ there is a coloured directed graph $D_k$ with two vertices $u$ and $v$ such that the following hold.
\begin{enumerate}[(I)]
\item For any set of $k$ colours $S$, there is a rainbow $u$ to $v$ path $P$ avoiding colours in $S$.
\item Any pair $P_1$, $P_2$ of rainbow $u$ to $v$ paths have a common edge.
\end{enumerate}
\end{proposition}
\begin{proof}
We will construct a multigraph $D=D_k$ having the above properties. It is easy to modify the construction to obtain a simple graph. Fix $m>2k+1$. The vertex set of $D$ is $\{x_0, \dots, x_m\}$ with $u=x_0$ and $v=x_m$. For each $i=0, \dots, m-1$, $D$ has $k+1$ copies of the edge $x_ix_{i+1}$, appearing with colours $i$, $m+1$, $m+2$, $\dots$, $m+k$. In other words, $D$ is the union of $k+1$ copies of the path $x_0 x_1 \dots x_m$, one of which is rainbow and the rest monochromatic. To see that $D$ satisfies (I), note that for any set $S$ of $k$ colours, each of the at most $k$ edges $x_ix_{i+1}$ with $i\in S$ can be assigned a distinct monochromatic colour from $\{m+1, \dots, m+k\}\setminus S$, while every other edge $x_ix_{i+1}$ keeps colour $i$; this produces a rainbow $u$ to $v$ path avoiding $S$. Notice that $D$ also satisfies (II). Indeed, if $P_1$ and $P_2$ are $u$ to $v$ paths, then they must have vertex sequence $x_0 x_1 \dots x_m$. Since there are only $m+k$ colours in $D$, both $P_1$ and $P_2$ must have at least $m-k$ edges with colours from $\{0, \dots, m-1\}$. By the Pigeonhole Principle, since $2(m-k)>m$, there is some colour $i\in \{0, \dots, m-1\}$ such that both $P_1$ and $P_2$ have a colour $i$ edge. But the only colour $i$ edge in $D$ is $x_ix_{i+1}$, which must therefore be present in both $P_1$ and $P_2$.
\end{proof}
There is another, more subtle, reason why (i) does not imply (ii). Indeed, if we had ``(i) $\implies$ (ii)'', then every bipartite graph consisting of $n$ matchings of size $n$ would contain a rainbow matching of size $n$. Indeed, given a bipartite graph $G$ with bipartition $X \cup Y$ consisting of $n$ matchings of size $n$, construct an auxiliary graph $G'$ by adding two vertices $u$ and $v$ to $G$ with all edges from $u$ to $X$ and from $Y$ to $v$ present. These new edges all receive different colours which were not in $G$. It is easy to see that for any set $S$ of $n-1$ colours, there is a rainbow $u$ to $v$ path in $G'$, i.e.\ (i) holds for this graph with $k=n$. In addition, for a set of paths $P_1, \dots, P_t$ with $P_1\cup \dots\cup P_t$ rainbow, it is easy to see that $\{P_1\cap E(G), \dots, P_t\cap E(G)\}$ is a rainbow matching in $G$ of size $t$. Therefore, if ``(i) $\implies$ (ii)'' were true, then we would have a rainbow matching in $G$ of size $n$. However, as noted in the introduction, there exist Latin squares without transversals, and hence bipartite graphs consisting of $n$ matchings of size $n$ containing no rainbow matching of size $n$.

The above discussion has hopefully convinced the reader that the natural analogue of Menger's Theorem for rainbow $k$-connectedness is not true. Nevertheless, it would be interesting to see whether any statements about connectedness carry over to rainbow $k$-connected graphs.

\subsection*{Improving the bound in Theorem~\ref{MainTheorem}}
One natural open problem is to improve the dependency of $N_0$ on $\epsilon$ in Theorem~\ref{MainTheorem}. Throughout our proof we made no real attempt to do this. However, there is one interesting modification which one can make in order to significantly improve the bound on $N_0$, which we mention here. Notice that the directed graphs $D_{X'}$ in Section~\ref{MainTheorem} and the directed graph $D$ in Section~\ref{GoldenRatioTheorem} had one big difference in their definition---to define the graphs $D_{X'}$ we only considered edges starting in $X$, whereas to define the graph $D$, we considered edges starting from both $X_0$ and $Y_0$. It is possible to modify the proof of Theorem~\ref{MainTheorem} in order to deal with directed graphs closer to those we used in the proof of Theorem~\ref{GoldenRatioTheorem}. There are many nontrivial modifications which need to be made for this to work. However, the end result seems to be that the analogue of Lemma~\ref{IncrementLemma} only needs to be iterated $O(\log \epsilon_0^{-1})$ many times (rather than $O(\epsilon_0^{-1})$ times as in the proof of Theorem~\ref{MainTheorem}). This would lead to an improved bound of $N=O\left(\epsilon^{C\log\epsilon}\right)$ in Theorem~\ref{MainTheorem} for some constant $C$. In the grand scheme of things this is still a very small improvement to the bound in Theorem~\ref{MainTheorem}, and so we do not include any further details here. It is likely that completely new ideas would be needed for a major improvement of the bound in Theorem~\ref{MainTheorem}.

\subsection*{Acknowledgement}
The author would like to thank J\'anos Bar\'at for introducing him to this problem, as well as Ron Aharoni, Dennis Clemens, Julia Ehrenm\"uller, and Tibor Szab\'o for various discussions related to it. This research was supported by the Methods for Discrete Structures Berlin graduate school (GRK 1408).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction}
Patients suffering from osteoarthritis (OA) experience joint pain due to an increased porosity of articular cartilage or even a complete loss of tissue \cite{Arden2006}. This pain is especially severe in the frequently loaded knee joint. Since cartilage degeneration also affects the mechanical properties of the tissue, an analysis of the behavior under load can help to understand and assess diseased tissue \cite{Powers2003}. Contrast-agent-enhanced CT imaging in a weight-bearing standing position is an established method to visualize the articular cartilage in the knee joint \cite{Choi2013,Choi2014}. It is realized by using a flexible robotic C-arm system scanning on a horizontal trajectory around the patient's knees \cite{Maier2011}. The setup of such a scan can be seen in Fig.~\ref{fig-intromodel_carm}. One drawback of this setting is that subjects tend to have a higher range of motion when standing compared with the supine lying position, which leads to image artifacts like blurring and double edges in the resulting reconstructions. Motion-corrupted reconstructions lose their diagnostic value and are unsuitable for further processing. Since restraining the subjects would defeat the aim of imaging a natural stance, the movement during the scan has to be estimated and corrected.
\begin{figure}[tb] \centering \subcaptionbox {\label{fig-intromodel_carm}} {\includegraphics[height=10pc]{images/CArmScan.jpg}} \quad \subcaptionbox {\label{fig-intromodel_model}} {\includegraphics[height=10pc]{images/ModelMarkersSensors.png}} \caption{(a) Setup of a weight-bearing C-arm cone-beam CT scan of the knees \cite{Maier2020}, (b) biomechanical model with virtual reflective markers on the legs (pink spheres) and IMUs on thigh and shin (green boxes).} \label{fig-intromodel} \end{figure}
Previous approaches are either image-based or use an external signal or marker in order to correct for motion. Performing 2D/3D registration showed very good motion compensation capabilities, but requires prior bone segmentations and is computationally expensive \cite{Berger2016}. The same limitation holds for an approach based on a penalized image sharpness criterion \cite{Sisniega2017}. By leveraging the epipolar consistency of CT scans, the translation, but not the rotation, of the knees during a CT scan was estimated \cite{Bier2017}. Bier et al. \cite{Bier2018landmark} proposed to estimate motion by tracking anatomical landmarks in the projection images using a neural network. So far, their approach has not been applied to motion compensation and is only reliable if no other objects are present. An investigation into the practicality of using range cameras for motion compensated reconstruction showed promising results on simulated data \cite{Bier2018range}. An established and effective method for motion compensation in weight-bearing imaging of the knee is based on small metallic markers attached to the leg and tracked in the projection images to iteratively estimate the 3D motion \cite{Choi2013,Choi2014}. However, the process of placing the markers is tedious, and they produce metal artifacts in areas of interest in the resulting images. In C-arm CT, inertial measurement units (IMUs) containing an accelerometer and a gyroscope have until now been applied for navigation \cite{Jost2016} and calibration \cite{Lemammer2019} purposes.
Our recent work was the first to propose the use of IMUs for motion compensation in weight-bearing cone-beam CT (CBCT) of the knee \cite{Maier2020}. We evaluated the feasibility of using the measurements of one IMU placed on the shin of the subject for rigid motion compensation in a simulation study. However, since the actual movement during the scan is non-rigid, not all artifacts could be resolved with the rigid correction approach. For this reason, we now investigate non-rigid motion compensation based on 2D or 3D deformation using signals recorded by two IMUs placed on the shin and the thigh. Furthermore, a method to estimate the initial pose and velocity of the sensors is presented. These two parameters are needed for motion estimation and were assumed to be known in the aforementioned publication \cite{Maier2020}. Another drawback of our previous publication is that we only simulated optimal IMU signals and neglected possible measurement errors. In order to assess the applicability of our proposed methods in a real setting, and as a third contribution, we now analyze how sensor noise added to the optimal IMU signals influences the motion compensation capabilities. In this article, we present a simulation study similar to the one in our previous publication; therefore, some content of Section \ref{sec-simulation} is closely related to Maier et al. \cite{Maier2020}. Furthermore, the previously published rigid motion estimation approach is restated for better comprehensibility.

\section{Materials and methods} \label{sec-methods}
\begin{figure}[tb] \centerline{\includegraphics[width=25pc]{images/GraphTimesNoiseShaded_noiseVorne.pdf}} \caption{Processing pipeline presented in Section \ref{sec-methods}. Black font: processing steps, green font: respective output. The simulation study is presented in Section \ref{sec-simulation} (shaded in gray). Sections \ref{sec-noise} to \ref{sec-nonrigid} (shaded in green) describe the proposed data processing.} \label{fig:graph} \end{figure}
The whole processing pipeline of the presented simulation study is shown in Fig.~\ref{fig:graph}, where black font describes each processing step and green font the respective output. All steps shaded in gray relate to the simulation and are described in Section \ref{sec-simulation}, while all steps shaded in green refer to the proposed data processing presented in Sections \ref{sec-noise} to \ref{sec-nonrigid}. The simulation contains the following steps: The motion of standing subjects is recorded with an optical motion capture system and used to animate a biomechanical model to obtain the trajectories of hips, knees and ankles (\ref{sec-Biomecsimulation}). These positions are then used in two ways: First, the lower body of a numerical phantom is deformed to mimic the subject motion and a motion-corrupted C-arm CBCT scan is simulated (\ref{sec-XCATsimulation}). Second, the signals of IMUs placed on the model's leg are computed (\ref{sec-IMUsimulation}). In Section \ref{sec-noise}, measurement noise is added to the optimal sensor signals. These noisy signals are later used to analyze the influence of measurement errors on the motion correction with IMUs. Then, the proposed IMU-based approaches for motion compensated reconstruction of the motion-corrupted CT scan are described: From the IMU measurements, the position and orientation, i.e.\ the pose, of the IMUs over time are computed (\ref{sec-poseestimation}).
For this step, the initial sensor pose and velocity need to be known; they are estimated from the first two projection images (\ref{sec-initialization}). The computed poses are then used for three different motion correction approaches compared in this article. First, rigid motion matrices are computed from the IMU poses and used to adapt the projection matrices for reconstruction (\ref{sec-rigid}). Second, the projection images are non-rigidly deformed before 3D reconstruction (\ref{sec-nonrigid2D}). Third, the sensor poses are incorporated in the reconstruction algorithm for a 3D non-rigid deformation (\ref{sec-nonrigid3D}).

\subsection{Simulation} \label{sec-simulation}
\subsubsection{Data acquisition and biomechanical model} \label{sec-Biomecsimulation}
In order to create realistic simulations, real motion of standing persons is acquired. Seven healthy subjects are recorded in three settings of 20 seconds duration each: holding a squat at 30 degrees knee flexion, holding a squat at 60 degrees knee flexion, and actively performing squats. Seven reflective markers are attached to each subject's sacrum, to the right and left anterior superior iliac spine, to the right and left lateral epicondyle of the knee, and to the right and left malleolus lateralis. The marker positions are tracked with a 3D optical motion tracking system (Vicon, Oxford, UK) at a sampling rate of 120\,Hz. Subsequently, in the software OpenSim \cite{Delp2007}, the marker positions of the active squatting scan of each subject are used to scale a biomechanical model of the human lower body \cite{Hamner2010} to the subject's anthropometry. The model with the attached virtual markers (pink) is displayed in Fig.~\ref{fig-intromodel_model}. The scaled model is then animated twice per subject by computing the inverse kinematics from the marker positions of the 30 degrees and 60 degrees squatting scans \cite{Seth2018}. The inverse kinematics computation results in the generalized coordinates that best represent the measured motion. These generalized coordinates describe the complete model motion as the global position and orientation of the pelvis and the angles of all leg joints. Before further processing, jumps in the data that occur due to noise are removed, and the signals are filtered with a second order Butterworth filter with a cutoff frequency of 6\,Hz in order to remove system noise. Since the model is scaled to the subject's anthropometry, the generalized coordinates can be used to compute the trajectories of the hip, knee and ankle joints. These joint trajectories are the input to all further steps of the data simulation.

\subsubsection{XCAT deformation and CT projection generation} \label{sec-XCATsimulation}
We generate a virtual motion-corrupted CT scan using the moving 4D extended cardiac-torso (XCAT) phantom \cite{Segars2010}. The legs of the numerical phantom consist of the bones tibia, fibula, femur and patella, including bone marrow and surrounded by body soft tissue. All structures contained in the phantom have material-specific properties. Their shapes are defined by 3D control points spanning non-uniform rational B-splines. By changing the positions of these control points, the structures of the XCAT phantom can be non-rigidly deformed. In the default XCAT phantom the legs are extended. To simulate a standing C-arm CT scan, the phantom needs to take on the squatting pose of the recorded subjects, which varies over time.
For this purpose, the positions of the XCAT spline control points are changed based on the hip, knee, and ankle positions of the biomechanical model. The deformation process is described in detail by Choi et al. \cite{Choi2013}. Then, a horizontal circular CT scan of the knees is simulated. As in a real setting, 248\,projections are generated with an angular increment of 0.8\,degrees between projections, corresponding to a sampling rate of 31\,Hz. The virtual C-arm rotates on a trajectory with 1198\,mm source detector distance and 780\,mm source isocenter distance. The detector has a size of $620\times480$ pixels with an isotropic pixel resolution of 0.616\,mm. In a natural standing position, the knees are too far apart to both fit on the detector; therefore, the rotation center is placed in the center of the left leg of the deformed XCAT phantom. Then, forward projections are created as described in Maier et al. \cite{Maier2012}. Since the aim of this study is to analyze the motion compensation capability of IMUs, CBCT artifacts other than motion are not included in the simulation.

\subsubsection{Simulation of IMU measurements} \label{sec-IMUsimulation}
The trajectories of the hips, knees and ankles computed using the biomechanical model are used to simulate the measurements of IMUs placed on the leg of the model. IMUs are commonly used for motion analysis in sports and movement disorders \cite{Kautz2017}. They are low-cost, small, and lightweight devices that measure their acceleration and angular velocity on three perpendicular axes. Besides the motion signal, the accelerometer measures the earth's gravitational field distributed on its three axes depending on its orientation. We virtually place two such sensors on the shin, 14\,cm below the left knee joint, and on the thigh, 25\,cm below the hip joint, aligned with the respective body segment (Fig.~\ref{fig-intromodel_model}). In a future real application, sensors in these positions would be visible in the projections, as needed for initialization (\ref{sec-initialization}). At the same time, they are situated at a sufficient distance from the knee joint in the direction of the CBCT rotation axis such that their metal components do not cause artifacts in the region of interest. The simulated acceleration $\mathbf{a}(t)$ and angular velocity $\bm{\omega}(t)$ at time point $t$ are computed as described in \cite{Bogert1996,Desapio2017}:
\begin{equation} \mathbf{a}(t) = \mathbf{R}(t)^\top(\ddot{\mathbf{r}}_{Seg}(t) + \ddot{\mathbf{R}}(t) \mathbf{p}_{Sen}(t) - \mathbf{g})\,, \end{equation}
\begin{equation} [\bm{\omega}(t)]_{\times} = \mathbf{R}(t)^\top\dot{\mathbf{R}}(t) = \begin{pmatrix} 0 & -\omega_{z}(t) & \omega_{y}(t) \\ \omega_{z}(t) & 0 & -\omega_{x}(t) \\ -\omega_{y}(t) & \omega_{x}(t) & 0 \end{pmatrix}\,, \end{equation}
\begin{equation} \bm{\omega}(t) = (\omega_{x}(t), \omega_{y}(t), \omega_{z}(t))^\top\,. \end{equation}
All parameters required in these equations are obtained by performing forward kinematics of the biomechanical model. The 3$\times$3 rotation matrix $\mathbf{R}(t)$ describes the orientation of the sensor at time point $t$ in the global coordinate system; $\dot{\mathbf{R}}(t)$ and $\ddot{\mathbf{R}}(t)$ are its first and second order derivatives with respect to time. The position of the segment the sensor is mounted on in the global coordinate system at time point $t$ is described by $\mathbf{r}_{Seg}(t)$, with $\ddot{\mathbf{r}}_{Seg}(t)$ being its second order derivative. $\mathbf{p}_{Sen}(t)$ is the position of the sensor in the local coordinate system of the segment the sensor is mounted on. Parameter $\mathbf{g}=(0,-9.80665, 0)^\top$ is the global gravity vector.
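For illustration, the following sketch shows how these equations can be evaluated numerically with NumPy. It assumes that the orientations $\mathbf{R}(t)$ and segment positions $\mathbf{r}_{Seg}(t)$ have already been obtained from the forward kinematics, that the sensor offset $\mathbf{p}_{Sen}$ is constant in the segment frame, and that the time derivatives are approximated by finite differences (the function name is illustrative and not part of any toolkit):
\begin{verbatim}
import numpy as np

G_VEC = np.array([0.0, -9.80665, 0.0])  # global gravity vector g

def simulate_imu(R, r_seg, p_sen, dt):
    """Simulate accelerometer and gyroscope readings.

    R:     (T, 3, 3) global sensor orientation per sample
    r_seg: (T, 3)    global position of the instrumented segment
    p_sen: (3,)      sensor position in the local segment frame
    dt:    sampling interval in seconds
    """
    # first and second time derivatives via central differences
    R_dot  = np.gradient(R, dt, axis=0)
    R_ddot = np.gradient(R_dot, dt, axis=0)
    r_ddot = np.gradient(np.gradient(r_seg, dt, axis=0), dt, axis=0)

    acc, omega = [], []
    for t in range(R.shape[0]):
        # a(t) = R(t)^T (r''_Seg(t) + R''(t) p_Sen - g)
        acc.append(R[t].T @ (r_ddot[t] + R_ddot[t] @ p_sen - G_VEC))
        # [omega(t)]_x = R(t)^T R'(t); read off the three entries
        W = R[t].T @ R_dot[t]
        omega.append(np.array([W[2, 1], W[0, 2], W[1, 0]]))
    return np.array(acc), np.array(omega)
\end{verbatim}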
\subsection{IMU noise simulation} \label{sec-noise}
The IMU signal computation above assumes a perfect IMU that can measure without the influence of errors. However, in a real application errors can have a significant influence, preventing effective motion compensation. For example, Kok et al. \cite{Kok2017} showed that integrating the signals of a stationary IMU over ten seconds leads to errors in the orientation and position estimates of multiple degrees and meters, respectively. The most prominent error sources in IMUs leading to these high deviations are random measurement noise and an almost constant bias \cite{Woodman2007}. In this study, we focus on the analysis of the unpredictable sensor noise.

Commercially available consumer IMU devices have noise densities that are acceptable for the analysis of larger motion. For example, the commercially available sensor BMI160 (Bosch Sensortec GmbH, Reutlingen, Germany) has an output noise density of $180\,\mu g/\sqrt{\mathrm{Hz}}$ and $0.007\,^\circ/s/\sqrt{\mathrm{Hz}}$, and a root mean square (RMS) noise of $1.8$\,mg and $0.07\,^\circ$/s at $200$\,Hz \cite{Bosch2020}. However, our data shows that the signals produced by a standing swaying motion have amplitudes in the range of $0.3$\,mg and $0.02\,^\circ$/s, respectively. This means that when measuring with an off-the-shelf sensor, the signal would be completely masked by noise. For this reason, we investigate the noise level improvement necessary to use IMUs for the task of motion compensation in weight-bearing CT imaging.

We simulate white Gaussian noise signals of different RMS levels and add them onto the simulated acceleration $\mathbf{a}(t)$ and angular velocity $\bm{\omega}(t)$. Starting with the RMS values of the aforementioned Bosch sensor, the noise level is divided by powers of ten, down to a factor of $10^{5}$. The accelerometer and gyroscope noise levels are decreased independently. In the following, we will use the notation $f_a$ and $f_g$ for the exponents, i.e.\ the factors the RMS values are divided by are $10^{f_a}$ and $10^{f_g}$, respectively. The noisy IMU signals are then used to compute rigid motion matrices for motion compensation as explained in Sections \ref{sec-poseestimation} and \ref{sec-rigid}. Note that the noise influence is evaluated independently of the IMU-based motion compensation methods: all motion compensation methods presented in the following sections are first evaluated on the noise-free signals; afterwards, we perform rigid motion compensation with noisy IMU signals to investigate the influence of noise on the applicability of IMUs for motion compensation.
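A minimal sketch of this noise model is given below. For zero-mean white Gaussian noise, the RMS value equals the standard deviation, so the noise can be drawn directly with the scaled RMS values (the numerical constants follow the values quoted above):
\begin{verbatim}
import numpy as np

RMS_ACC  = 1.8e-3 * 9.80665  # 1.8 mg expressed in m/s^2
RMS_GYRO = 0.07              # deg/s

def add_imu_noise(acc, omega, f_a, f_g, rng=None):
    """Add white Gaussian noise with RMS scaled down by 10^f_a / 10^f_g."""
    rng = np.random.default_rng() if rng is None else rng
    noisy_acc   = acc   + rng.normal(0.0, RMS_ACC  / 10**f_a, acc.shape)
    noisy_omega = omega + rng.normal(0.0, RMS_GYRO / 10**f_g, omega.shape)
    return noisy_acc, noisy_omega
\end{verbatim}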
\subsection{Transformation algorithm} \label{sec-poseestimation}
The following descriptions are based on Maier et al. \cite{Maier2020} and are required for all IMU-based motion compensation approaches presented in this article. The IMU measures motion in its local coordinate system; however, motion in the global coordinate system of the CBCT scan is required for motion compensation.

The orientation and position of the IMU $\mathbf{S}(t)$ in the global coordinate system at each frame $t$ is described by the affine matrix
\begin{equation} \mathbf{S}(t) = \begin{pmatrix} \begin{array}{c|c} \mathbf{R}(t) & \mathbf{r}(t) \\ \hline \mathbf{0}^\top & 1 \end{array} \end{pmatrix}\,, \end{equation}
where $\mathbf{R}(t)$ is a 3$\times$3 rotation matrix, $\mathbf{r}(t)$ is a 3$\times$1 translation vector, and $\mathbf{0}$ is the 3$\times$1 zero-vector. The IMU pose can be updated for each subsequent frame using the affine global pose change matrix $\bm{\Updelta}_{g}(t)$:
\begin{equation} \label{eq:Siplus1} \mathbf{S}(t+1) = \bm{\Updelta}_{g}(t)\mathbf{S}(t)\,. \end{equation}
This global change $\bm{\Updelta}_{g}(t)$ can be computed by transforming the local change in the IMU coordinate system $\bm{\Updelta}_{l}(t)$ to the global coordinate system using the current IMU pose:
\begin{equation} \label{eq:deltaglobal} \bm{\Updelta}_{g}(t) = \mathbf{S}(t)\bm{\Updelta}_{l}(t)\mathbf{S}(t)^{-1}\,. \end{equation}
Thus, if the initial pose $\mathbf{S}(t=0)$ is known, the problem is reduced to estimating the local pose change $\bm{\Updelta}_{l}(t)$ in the IMU coordinate system, which is described in the following paragraphs.

The gyroscope measures the orientation change over time $\bm{\omega}(t)$ on the three axes of the IMU's local coordinate system, which can be directly used to rotate the IMU from frame to frame. The measured acceleration $\mathbf{a}(t)$, however, needs to be processed to obtain the position change over time. First, the gravity measured on the IMU's three axes is removed based on its global orientation. For this purpose, the angular velocity $\bm{\omega}(t)$ is converted into 3$\times$3 rotation matrices $\mathbf{G}(t)$ and used to update the global orientation of the sensor $\mathbf{R}(t)$. This orientation can then be used to obtain the gravity vector $\mathbf{g}(t)$ in the local coordinate system for each frame $t$:
\begin{align} \mathbf{R}(t+1) &= \mathbf{R}(t) \mathbf{G}(t)\,,\\ \mathbf{g}(t) &= \mathbf{R}(t)^\top \mathbf{g}\,. \end{align}
The sensor's local velocity $\mathbf{v}(t)$, i.e.\ its position change over time, is then computed as the integral of the gravity-free acceleration, considering the sensor's orientation changes:
\begin{equation} \label{eq:velocity} \mathbf{v}(t+1) = \mathbf{G}(t)^\top(\mathbf{a}(t) + \mathbf{g}(t) + \mathbf{v}(t))\,. \end{equation}
With these computations, the desired local pose change of the IMU $\bm{\Updelta}_{l}(t)$ for each frame $t$ is expressed as an affine matrix containing the local rotation change and position change:
\begin{equation} \label{eq:deltalocal} \bm{\Updelta}_{l}(t) = \begin{pmatrix} \begin{array}{c|c} \mathbf{G}(t) & \mathbf{v}(t) \\ \hline \mathbf{0}^\top & 1 \end{array} \end{pmatrix}\,. \end{equation}
Note that the initial pose $\mathbf{S}(t=0)$ and velocity $\mathbf{v}(t=0)$ need to be known or estimated in order to apply this transformation process.
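A direct translation of (\ref{eq:Siplus1})--(\ref{eq:deltalocal}) into code might look as follows. This is a sketch; it assumes that the frame-to-frame rotation increments $\mathbf{G}(t)$ have already been computed from the gyroscope readings and that $\mathbf{v}(t)$ denotes the position change per frame, following the discrete-time convention used above:
\begin{verbatim}
import numpy as np

GRAVITY = np.array([0.0, -9.80665, 0.0])

def integrate_poses(S0, v0, G, acc):
    """Dead reckoning of IMU poses following the updates above.

    S0:  (4, 4)    initial affine pose S(t=0)
    v0:  (3,)      initial local velocity v(t=0)
    G:   (T, 3, 3) frame-to-frame rotation increments (gyroscope)
    acc: (T, 3)    accelerometer readings a(t)
    """
    S = [S0]
    R = S0[:3, :3].copy()  # global orientation, updated with G(t)
    v = v0.copy()
    for t in range(len(G)):
        g_local = R.T @ GRAVITY          # gravity in the sensor frame
        delta_l = np.eye(4)              # local pose change Delta_l(t)
        delta_l[:3, :3] = G[t]
        delta_l[:3, 3]  = v
        # global change Delta_g(t) = S(t) Delta_l(t) S(t)^-1
        delta_g = S[-1] @ delta_l @ np.linalg.inv(S[-1])
        S.append(delta_g @ S[-1])        # S(t+1) = Delta_g(t) S(t)
        v = G[t].T @ (acc[t] + g_local + v)  # velocity update
        R = R @ G[t]                     # orientation update
    return S
\end{verbatim}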
\subsection{IMU pose and velocity initialization} \label{sec-initialization}
\begin{figure}[tb] \centering \subcaptionbox {\label{fig-initialization_pose}} {\includegraphics[height=10pc]{images/initializationScene.pdf}} \quad \subcaptionbox {\label{fig-initialization_velocity}} {\includegraphics[height=7pc]{images/initialVelocity2.pdf}} \caption{(a) Schematic (not to scale) visualization of the initialization concept. The green box represents the sensor with its coordinate system plotted inside. The X-rays (blue) pass through the metal components and hit the detector (gray). (b) Visualization of the velocity initialization approach. Computing the pose $\mathbf{S}'(t=n)$ with an incorrect initial velocity $\mathbf{v}(t=0)$ leads to a wrong translation $\mathbf{t}'$, which is used for the velocity initialization.} \label{fig-initialization} \end{figure}
In our previously published work, the initial pose and velocity of the IMU necessary for the pose estimation in (\ref{eq:Siplus1}) and (\ref{eq:velocity}) were assumed to be known, which is not the case in a real setting \cite{Maier2020}. Thies et al. \cite{Thies2019} proposed to estimate the initial pose as an average sensor pose computed from the complete set of projection images. However, using the average position over the multi-second scan, which includes subject motion, leads to inaccurate motion compensation results. For this reason, we present an initial pose estimation based only on the first projection image. By incorporating also the second projection image, the initial velocity can be estimated.

\subsubsection{Initial IMU pose} \label{sec-initialization1}
The pose of the IMU $\mathbf{S}(t)$ at frame $t$ contains the 3D position of the origin $\mathbf{r}(t)$ and the three perpendicular axes of measurement $\mathbf{u_x}(t)$, $\mathbf{u_y}(t)$ and $\mathbf{u_z}(t)$ in the rotation matrix $\mathbf{R}(t)$. This coordinate system can be computed from any four non-coplanar points within the IMU if their geometrical relation to the sensor's measurement coordinate system is known. For simplicity, in this simulation study we assume that these four points are the IMU's origin $\mathbf{r}$ and the points $\mathbf{x}$, $\mathbf{y}$ and $\mathbf{z}$ at the tips of the three axes' unit vectors $\mathbf{u_x}$, $\mathbf{u_y}$, and $\mathbf{u_z}$. We also assume that the sensor has small, highly attenuating metal components at these four points, making their projected 2D positions easy to track in an X-ray projection image. Since the C-arm system geometry is calibrated prior to performing CT acquisitions, the 3D position of each 2D detector pixel and the 3D source position are known for each projection. The searched points $\mathbf{r}(t)$, $\mathbf{x}(t)$, $\mathbf{y}(t)$, and $\mathbf{z}(t)$ are then positioned on the straight lines between the source and the respective projected points (Fig.~\ref{fig-initialization_pose}). In the considered case, the four points need to fulfill the following properties:
\begin{itemize}
\item The vectors $\mathbf{u_x}(t)$, $\mathbf{u_y}(t)$ and $\mathbf{u_z}(t)$ spanned by $\mathbf{r}(t)$, $\mathbf{x}(t)$, $\mathbf{y}(t)$ and $\mathbf{z}(t)$ must have unit length.
\item The Euclidean distance between any two of the points $\mathbf{x}(t)$, $\mathbf{y}(t)$ and $\mathbf{z}(t)$ must be $\sqrt{2}$.
\item The inner product of any two of the vectors $\mathbf{u_x}(t)$, $\mathbf{u_y}(t)$ and $\mathbf{u_z}(t)$ must be zero.
\item The right-handed cross product of two of the vectors $\mathbf{u_x}(t)$, $\mathbf{u_y}(t)$ and $\mathbf{u_z}(t)$ must result in the third vector.
\end{itemize}
Solving the resulting non-linear system of equations defined by these constraints for the first projection at time point $t=0$ yields the 3D positions of $\mathbf{r}(t=0)$, $\mathbf{x}(t=0)$, $\mathbf{y}(t=0)$, and $\mathbf{z}(t=0)$, and thereby the initial sensor pose $\mathbf{S}(t=0)$.
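Numerically, the system can be solved, for example, by parametrising each of the four points with its unknown depth along the known ray from the source through its tracked detector position and handing the constraints to a standard least-squares solver. The sketch below illustrates this one possible treatment (the helper is hypothetical; \texttt{src} denotes the source position and \texttt{dirs} the normalised ray directions obtained from the tracked 2D positions and the calibrated geometry):
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def estimate_initial_pose(src, dirs):
    """Recover r, x, y, z (and thus S(t=0)) from one projection.

    src:  (3,)   X-ray source position
    dirs: (4, 3) normalised ray directions towards the projected
                 2D positions of r, x, y and z on the detector
    """
    def residuals(lam):
        r, x, y, z = (src + lam[k] * dirs[k] for k in range(4))
        ux, uy, uz = x - r, y - r, z - r
        res = [np.linalg.norm(ux) - 1.0,            # unit axis lengths
               np.linalg.norm(uy) - 1.0,
               np.linalg.norm(uz) - 1.0,
               np.linalg.norm(x - y) - np.sqrt(2),  # tip distances
               np.linalg.norm(y - z) - np.sqrt(2),
               np.linalg.norm(x - z) - np.sqrt(2),
               ux @ uy, uy @ uz, ux @ uz]           # orthogonality
        return np.concatenate([res, np.cross(ux, uy) - uz])

    lam = least_squares(residuals, np.full(4, 1000.0)).x  # depths in mm
    pts = src + lam[:, None] * dirs
    S0 = np.eye(4)
    S0[:3, 3]  = pts[0]                  # origin r(t=0)
    S0[:3, :3] = (pts[1:] - pts[0]).T    # columns u_x, u_y, u_z
    return S0
\end{verbatim}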
\subsubsection{Initial IMU velocity} \label{sec-initialization2}
The initial IMU velocity $\mathbf{v}(t=0)$ is needed to compute the velocity update in (\ref{eq:velocity}). In the following paragraphs, we describe a process to estimate the initial velocity, which is illustrated in Fig.~\ref{fig-initialization_velocity}. The IMU acquires data at a higher sampling rate than the C-arm acquires projection images (120\,Hz and 31\,Hz, respectively). If the two systems are synchronized, the correspondence between the sampling time points $t$ of the IMU and the projection image acquisition points $i$ is known. The first projection image corresponds to the first IMU sample at time point $t=i=0$ and is used to estimate the initial pose $\mathbf{S}(t=0)$. The second projection image at $i=1$ corresponds to the IMU sampling point $t=n$ with $n>1$ and is used to estimate the pose $\mathbf{S}(t=n)$. Since each IMU pose can be computed from the previous one by applying the pose change between frames with (\ref{eq:Siplus1}) and (\ref{eq:deltaglobal}), the IMU pose at frame $t=n$ can also be expressed as
\begin{equation} \mathbf{S}(t=n) = \mathbf{S}(t=0)\bm{\Updelta}_{l}(t=0)\bm{\Updelta}_{l}(t=1)\ldots\bm{\Updelta}_{l}(t=n-1)\,, \end{equation}
which can be rearranged to
\begin{equation} \mathbf{S}(t=0)^{-1}\mathbf{S}(t=n) = \bm{\Updelta}_{l}(t=0)\bm{\Updelta}_{l}(t=1)\ldots\bm{\Updelta}_{l}(t=n-1)\,. \end{equation}
However, since $\mathbf{v}(t=0)$ is not known, $\bm{\Updelta}_{l}(t=0)$ and all subsequent local change matrices are also not known. Therefore, instead of the actual $\mathbf{v}(t=0)$, we use the zero-vector as initial velocity, introducing an error vector $\mathbf{e}$:
\begin{equation} \mathbf{v}'(t=0) = \mathbf{0} = \mathbf{v}(t=0) + \mathbf{e}\,. \end{equation}
This error is propagated and accumulated in the frame-by-frame velocity computation in (\ref{eq:velocity}), and for $t\geq 1$ the resulting error-prone velocity is
\begin{equation} \label{eq:velocityprime} \mathbf{v}'(t) = \mathbf{v}(t) + \mathbf{G}(t-1)^\top\mathbf{G}(t-2)^\top\ldots\mathbf{G}(0)^\top\mathbf{e}\,. \end{equation}
These error-prone velocities $\mathbf{v}'(t)$ lead to incorrect pose change matrices $\bm{\Updelta}_{l}'(t)$ and thereby to an incorrect computation of $\mathbf{S}'(t=n)$:
\begin{align} \bm{\Updelta}'_{l}(t) &= \begin{pmatrix} \begin{array}{c|c} \mathbf{G}(t) & \mathbf{v}'(t) \\ \hline \mathbf{0}^\top & 1 \end{array} \end{pmatrix}\,,\\ \mathbf{S}(t=0)^{-1}\mathbf{S}'(t=n) &= \bm{\Updelta}'_{l}(t=0)\bm{\Updelta}'_{l}(t=1)\ldots\bm{\Updelta}'_{l}(t=n-1)\,. \label{eq:Sprime} \end{align}
Inserting (\ref{eq:velocityprime}) and expanding (\ref{eq:Sprime}) shows that the incorrect initial velocity only has an effect on the translation of the resulting affine matrix:
\begin{equation} \mathbf{S}(t=0)^{-1}\mathbf{S}'(t=n) = \mathbf{S}(t=0)^{-1}\mathbf{S}(t=n)+ \begin{pmatrix} \begin{array}{c|c} \mathbf{0}_{3,3} & n\mathbf{e} \\ \hline \mathbf{0}^\top & 0 \end{array} \end{pmatrix}\,. \end{equation}
In this equation, $\mathbf{0}_{3,3}$ denotes a 3$\times$3 matrix filled with zeros. If the translation of $\mathbf{S}(t=0)^{-1}\mathbf{S}'(t=n)$ is denoted as $\mathbf{t}'$ and the translation of $\mathbf{S}(t=0)^{-1}\mathbf{S}(t=n)$ is denoted as $\mathbf{t}$, this leads to
\begin{equation} \mathbf{t}' = \mathbf{t} + n\mathbf{e}\,. \end{equation}
The correct initial velocity $\mathbf{v}(t=0)$ is computed as
\begin{equation} \mathbf{v}(t=0) = -\mathbf{e} = -\frac{1}{n}\cdot(\mathbf{t}' - \mathbf{t})\,. \end{equation}
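In code, the procedure amounts to re-running the pose integration with zero initial velocity and comparing translations; a sketch re-using the \texttt{integrate\_poses} function from above:
\begin{verbatim}
import numpy as np

def estimate_initial_velocity(S0, Sn, G, acc, n):
    """Estimate v(t=0) from the first two projection images.

    S0, Sn: affine IMU poses estimated from projections i=0 and i=1
            (the latter corresponds to IMU sample t=n)
    G, acc: gyroscope rotation increments and accelerometer samples
    """
    # integrate with the incorrect initial velocity v'(0) = 0
    S_err = integrate_poses(S0, np.zeros(3), G[:n], acc[:n])[-1]
    t_prime = (np.linalg.inv(S0) @ S_err)[:3, 3]  # translation t'
    t_true  = (np.linalg.inv(S0) @ Sn)[:3, 3]     # translation t
    return -(t_prime - t_true) / n                # v(0) = -e
\end{verbatim}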
\subsection{Rigid projection matrix correction} \label{sec-rigid}
Under the assumption that the legs move rigidly during the CT scan, it is sufficient to use the measurements of only one sensor, placed e.g.\ on the shin, for motion estimation. The pose change matrices estimated in (\ref{eq:deltaglobal}) and (\ref{eq:deltalocal}) can then be directly applied for motion correction. Note that the angular velocity and velocity are resampled to the CT acquisition frequency before the pose change computation, using the synchronized correspondences between C-arm and IMU. An affine motion matrix $\mathbf{M}(i)$ containing the rotation and translation is computed for each projection $i$. The motion matrix for the first projection $i=0$ is defined as the identity matrix $\mathbf{M}(i=0)=\mathbf{I}$, i.e.\ the pose at the first projection is used as the reference pose. Each subsequent matrix is then obtained using the global pose change matrix computed from the sensor measurements:
\begin{equation} \mathbf{M}(i+1) = \mathbf{M}(i) \bm{\Updelta}_{g}(i)\,. \end{equation}
In order to correct for the motion during the CBCT scan, we then modify the projection matrices $\mathbf{P}(i)$ of the calibrated CT scan with the motion matrices $\mathbf{M}(i)$, resulting in motion corrected projection matrices $\hat{\mathbf{P}}(i)$:
\begin{equation} \hat{\mathbf{P}}(i) = \mathbf{P}(i) \mathbf{M}(i)\,. \end{equation}
The corrected projection matrices are then used for the volume reconstruction as described in Section \ref{sec-evaluationMoCo}.
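A sketch of the accumulation of motion matrices and the correction of the projection matrices (with $\bm{\Updelta}_g$ assumed to be already resampled to the projection frequency):
\begin{verbatim}
import numpy as np

def correct_projection_matrices(P, delta_g):
    """Compute M(i) and the corrected matrices P_hat(i) = P(i) M(i).

    P:       (N, 3, 4)   calibrated projection matrices
    delta_g: (N-1, 4, 4) global pose changes, resampled to the
                         projection frequency
    """
    M = np.eye(4)                 # M(i=0) is the identity
    P_hat = [P[0]]
    for i in range(1, len(P)):
        M = M @ delta_g[i - 1]    # M(i+1) = M(i) Delta_g(i)
        P_hat.append(P[i] @ M)    # P_hat(i) = P(i) M(i)
    return np.stack(P_hat)
\end{verbatim}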
\subsection{Non-rigid motion correction} \label{sec-nonrigid}
Contrary to the assumption in Section \ref{sec-rigid}, the leg motion during the scan is non-rigid, since the subjects are not able to hold exactly the same squatting angle for the duration of the scan. As a consequence, the motion cannot entirely be described by a rigid transformation. To address this issue, we propose a non-rigid motion correction using both IMUs placed on the model. Using the formulas presented in Section \ref{sec-poseestimation}, we can compute the poses $^t\mathbf{S}(t)$ and $^f\mathbf{S}(t)$ of the IMUs on tibia and femur, respectively. Since the placement of the IMUs on the segments relative to the joints is known, the IMU poses can be used to describe the positions of the ankle, knee and hip joints, $\mathbf{a}(t)$, $\mathbf{k}(t)$ and $\mathbf{h}(t)$, at each time point $t$. These estimated joint positions are used to non-rigidly correct for motion during the scans. We propose two approaches that make use of Moving Least Squares (MLS) deformations in order to correct for motion \cite{Schaefer2006,Zhu2007}. The first approach applies a 2D deformation to each projection image, and the second approach performs a 3D dynamic reconstruction where the deformation is integrated into the volume reconstruction.

\subsubsection{Moving least squares deformation}
The idea of MLS deformation is that the deformation of a scene is defined by a set of $m$ control points. The original positions of the control points are denoted as $\mathbf{p}_j$, and their deformed positions as $\mathbf{q}_j$ with ${j=1,...,m}$. For each pixel $\bm{\nu}$ in the image or volume, the goal is to find its position in the deformed image or volume depending on these control points. For this purpose, the transformation $f(\bm{\nu})$ that minimizes the weighted distance between the known and estimated deformed positions should be found:
\begin{equation} \label{eq:mls} \sum_{j} \omega_j \lvert{f(\mathbf{p}_j)-\mathbf{q}_j}\rvert^2\,. \end{equation}
This optimization is performed for each pixel individually, since the weights $\omega_j$ depend on the distance of the pixel $\bm{\nu}$ to the control points $\mathbf{p}_j$:
\begin{equation} \omega_j = \frac{1}{\lvert{\mathbf{p}_j-\bm{\nu}}\rvert^2}\,. \end{equation}
The weighted centroids $\mathbf{p}_*$ and $\mathbf{q}_*$ and the shifted control points $\mathbf{\hat{p}}_j = \mathbf{p}_j - \mathbf{p}_*$ and $\mathbf{\hat{q}}_j = \mathbf{q}_j - \mathbf{q}_*$ are used in order to find the optimal solution of (\ref{eq:mls}) in both the 2D and the 3D case:
\begin{align} \mathbf{p}_* &= \frac{\sum_j\omega_j\mathbf{p}_j}{\sum_j\omega_j}\,, \\ \mathbf{q}_* &= \frac{\sum_j\omega_j\mathbf{q}_j}{\sum_j\omega_j}\,. \end{align}
According to Schaefer et al. \cite{Schaefer2006}, in the 2D image deformation case, the transformation minimizing (\ref{eq:mls}) is described by:
\begin{equation} \label{eq-2D} f(\bm{\nu}) = \lvert\bm{\nu}-\mathbf{p}_*\rvert \frac{\sum_j\mathbf{\hat{q}}_j\mathbf{A_j}}{\lvert\sum_j\mathbf{\hat{q}}_j\mathbf{A_j}\rvert} + \mathbf{q}_*\,, \end{equation}
where
\begin{equation} \mathbf{A}_j = \omega_j \begin{pmatrix} \mathbf{\hat{p}}_j\\ -\mathbf{\hat{p}}_j^\bot\\ \end{pmatrix} \begin{pmatrix} \bm{\nu} - \mathbf{p}_*\\ -(\bm{\nu} - \mathbf{p}_*)^\bot\\ \end{pmatrix}\,. \end{equation}
Finding the transformation that minimizes (\ref{eq:mls}) in the 3D case requires the computation of a singular value decomposition, as explained by Zhu et al. \cite{Zhu2007}:
\begin{equation} \sum_j \omega_j\hat{\mathbf{p}}_j\hat{\mathbf{q}}_j^T = \mathbf{U\Sigma{V}}^T\,. \end{equation}
The optimal transformation is then described by:
\begin{equation} \label{eq-3D} f(\bm{\nu}) = \mathbf{VU}^T(\bm{\nu}-\mathbf{p}_*) + \mathbf{q}_*\,. \end{equation}
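To make the 3D case concrete, the following sketch deforms a single point according to the weighted centroids and (\ref{eq-3D}):
\begin{verbatim}
import numpy as np

def mls_rigid_3d(v, p, q):
    """Moving-least-squares deformation of one 3D point.

    v: (3,)   point to deform
    p: (m, 3) original control point positions p_j
    q: (m, 3) deformed control point positions q_j
    """
    # weights w_j = 1 / |p_j - v|^2 (guarded against division by zero)
    w = 1.0 / np.maximum(np.sum((p - v) ** 2, axis=1), 1e-12)
    p_star = w @ p / w.sum()       # weighted centroids
    q_star = w @ q / w.sum()
    p_hat, q_hat = p - p_star, q - q_star
    # sum_j w_j p_hat_j q_hat_j^T = U Sigma V^T
    U, _, Vt = np.linalg.svd((w[:, None] * p_hat).T @ q_hat)
    return Vt.T @ U.T @ (v - p_star) + q_star  # f(v) = V U^T (v-p*) + q*
\end{verbatim}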
\subsubsection{2D projection deformation} \label{sec-nonrigid2D}
In our first proposed non-rigid approach, we deform the content of the 2D projection images in order to correct for motion. The initial pose of the subject is used as the reference pose, so the first projection image $i=0$ is left unaltered. Each following projection image acquired at time point $i$ is transformed by MLS deformation, using the estimated hip, knee and ankle joint positions $\mathbf{h}(i)$, $\mathbf{k}(i)$ and $\mathbf{a}(i)$ as control points as described in the following paragraph. To obtain the 2D points needed for a 2D projection image deformation, the 3D positions $\mathbf{h}(i)$, $\mathbf{k}(i)$ and $\mathbf{a}(i)$ are forward projected onto the detector using the system geometry. However, since the detector is too small to cover the whole leg of a subject, the projected positions of the hip and ankle would lie outside of the detector area. For this reason, 3D points situated closer to the knee, on the straight line between hip and knee and on the straight line between ankle and knee, are computed with $\alpha = 0.8$:
\begin{align} \label{eq-closertoknee1} \mathbf{h}'(i) &= (1-\alpha)\mathbf{h}(i) + \alpha{\mathbf{k}(i)}\,, \\ \label{eq-closertoknee2} \mathbf{a}'(i) &= (1-\alpha)\mathbf{a}(i) + \alpha{\mathbf{k}(i)}\,. \end{align}
Then, for each projection $i$, the initial 3D reference positions $\mathbf{a}'(i=0)$, $\mathbf{k}(i=0)$ and $\mathbf{h}'(i=0)$ are forward projected onto the detector, resulting in the 2D control points $\mathbf{p}_j(i)$ with ${j = 1,2,3}$. The 3D positions $\mathbf{h}'(i)$, $\mathbf{k}(i)$ and $\mathbf{a}'(i)$ at the time of projection acquisition $i$ are forward projected to obtain $\mathbf{q}_j(i)$ with ${j = 1,2,3}$. Each projection image is then deformed by computing the transformation $f(\bm{\nu})$ according to (\ref{eq-2D}) for each image pixel using these control points. Finally, the motion corrected 3D volume is reconstructed from the resulting deformed projection images as described in Section \ref{sec-evaluationMoCo}.

\subsubsection{3D dynamic reconstruction} \label{sec-nonrigid3D}
The second proposed non-rigid approach applies 3D deformations during volume reconstruction. In the typical back-projection process of CT reconstruction, the 3D position of each voxel of the output volume is forward projected onto the detector for each projection image $i$, and the value at the projected position is added to the 3D voxel value. For the proposed 3D dynamic reconstruction, this process is altered: before forward projecting the 3D voxel position onto the detector for readout, it is transformed using 3D MLS deformation. The readout value, however, is added at the original voxel position. For the MLS deformation during reconstruction, the estimated positions of the hip, knee and ankle joints $\mathbf{h}(i)$, $\mathbf{k}(i)$ and $\mathbf{a}(i)$ are used. The reference pose is again the first pose at $i=0$, and the 3D positions $\mathbf{h}(i=0)$, $\mathbf{k}(i=0)$ and $\mathbf{a}(i=0)$ are used as control points $\mathbf{p}_j$ with ${j = 1,2,3}$. The 3D positions $\mathbf{h}(i)$, $\mathbf{k}(i)$ and $\mathbf{a}(i)$ are used as $\mathbf{q}_j(i)$ with ${j = 1,2,3}$. Note that the 3D positions $\mathbf{p}_j$ are the same for each projection, contrary to the 2D approach, where they depend on a forward projection using the system geometry. Using these control points, the transformation $f(\bm{\nu})$ is computed during reconstruction for each voxel of the output volume and each projection according to (\ref{eq-3D}) and applied for deformation, resulting in a motion-compensated output volume.

\section{Evaluation}
\subsection{IMU-based motion compensation} \label{sec-evaluationMoCo}
All volumes are reconstructed by GPU-accelerated filtered back-projection in the software framework CONRAD \cite{Maier2013}. The filtered back-projection pipeline includes cosine weighting, Parker weighting, a truncation correction and Shepp-Logan ramp filtering. The reconstructed volumes have a size of $512^3$\,voxels with an isotropic spacing of 0.5\,mm. In the case of rigid motion compensation, the motion compensated projection matrices $\hat{\mathbf{P}}$ are used for reconstruction. In the case of 2D non-rigid motion compensation, the deformed projection images are reconstructed using the original projection matrices. In the case of 3D non-rigid motion compensation, the original projection matrices and projection images are used, but the back-projection process is adapted as described in Section \ref{sec-nonrigid3D}. For comparison, an uncorrected motion-corrupted volume is reconstructed. Furthermore, a motion-free reference is realized by simulating a CT scan where the initial pose of the model is kept constant throughout the scan. The IMU-based motion compensation approaches are compared with a marker-based gold standard approach \cite{Choi2014}. For this purpose, small highly attenuating metal markers placed on the knee joint are added to the projections and tracked for motion compensation as proposed by Choi et al. \cite{Choi2014}.
All volumes are scaled from 0 to 1 and registered to the motion-free reference reconstruction. The image quality is compared against the motion-free reference by the structural similarity index measure (SSIM) and the root mean squared error (RMSE). The SSIM index ranges from 0 (no similarity) to 1 (identical images) and considers differences in luminance, contrast and structure \cite{Wang2004}. The metrics are computed on the whole reconstructed leg, and on the lower leg and upper leg separately.
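A sketch of the metric computation, using the SSIM implementation of scikit-image (both volumes assumed to be scaled to $[0,1]$ and registered):
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(volume, reference):
    """SSIM and RMSE of a reconstruction vs. the motion-free reference."""
    ssim = structural_similarity(reference, volume, data_range=1.0)
    rmse = np.sqrt(np.mean((reference - volume) ** 2))
    return ssim, rmse
\end{verbatim}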
\subsection{Noise analysis}
The influence of noise on the motion correction is evaluated in two ways. First, the estimated motion is compared by decomposing the motion matrices resulting from the noise-free signal and from the different levels of noisy signals into three-axial translations and rotations. Each noisy result is then compared to the noise-free estimate. For this comparison, we compute the RMSE between each axis of the noise-free and the noisy translations and rotations, and then average over the three axes. We only evaluate one scan of one subject, but average over five independent repetitions of adding random white noise and computing the motion matrices and the RMSE. Second, volumes reconstructed from noisy-signal motion estimates are analyzed. Based on the RMSE results from the first part of the analysis, certain noise levels are chosen for rigid motion compensated reconstruction. Rigid motion matrices are computed from the noisy signals as described in Sections \ref{sec-poseestimation} and \ref{sec-rigid} and used for volume reconstruction as described above. For the image quality comparison, the SSIM and RMSE are again computed.

\section{Results}
\subsection{IMU-based motion compensation}
The proposed initialization method yields the correct initial pose and velocity for all scans, and all further computations are based on these estimates.
\begin{figure*}[tp] \centering \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_nomotion-152.png}} \quad \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_uncorrected-152.png}} \quad \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_marker-152.png}} \quad \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_RigidIMU-152.png}} \quad \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_nonrigidIMU2D-152.png}} \quad \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_nonrigidIMU3D-152.png}} \\ \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_nomotion-400.png}} \quad \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_uncorrected-400.png}} \quad \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_marker-400.png}} \quad \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_RigidIMU-400.png}} \quad \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_nonrigidIMU2D-400.png}} \quad \subcaptionbox*{}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_nonrigidIMU3D-400.png}} \\ \subcaptionbox{No motion}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_nomotion_reslice-167.png}} \quad \subcaptionbox{Uncorrected}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_uncorrected_reslice-167.png}} \quad \subcaptionbox{Marker-based}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_marker_reslice-167.png}} \quad \subcaptionbox{Rigid}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_RigidIMU_reslice-167.png}} \quad \subcaptionbox{2D non-rigid}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_nonrigidIMU2D_reslice-167.png}} \quad \subcaptionbox{3D non-rigid}{\includegraphics[width=0.14\textwidth]{images/result/sub8_reco_squat_nonrigidIMU3D_reslice-167.png}} \caption{Exemplary slices of a reconstructed volume. Rows: axial slice through the shin, axial slice through the thigh, sagittal slice. (a) Scan without motion, (b) uncorrected case, (c) marker-based reference method, (d) rigid IMU method, (e) non-rigid IMU 2D projection deformation, (f) non-rigid IMU 3D dynamic reconstruction. Motion artifacts can be reduced by all proposed methods similarly to the marker-based method.} \label{fig-result} \end{figure*}
Figure~\ref{fig-result} shows axial slices through the tibia and the femur, and a sagittal slice of one example reconstruction. All proposed methods are able to compensate for the motion as well as the marker-based reference approach, or even slightly better.
\begin{figure}[tb] \centering \subcaptionbox*{}{\includegraphics[width=6pc]{images/resultzoom/sub8_squat_nomotionRigid-400_zoom1.png}} \quad \subcaptionbox*{}{\includegraphics[width=6pc]{images/resultzoom/sub8_squat_nomotionNonrigid2D-400_zoom1.png}} \quad \subcaptionbox*{}{\includegraphics[width=6pc]{images/resultzoom/sub8_squat_nomotionNonrigid3D-400_zoom1.png}} \\ \subcaptionbox{Rigid}{\includegraphics[width=6pc]{images/resultzoom/sub8_squat_nomotionRigid-400_zoom2.png}} \quad \subcaptionbox{2D non-rigid}{\includegraphics[width=6pc]{images/resultzoom/sub8_squat_nomotionNonrigid2D-400_zoom2.png}} \quad \subcaptionbox{3D non-rigid}{\includegraphics[width=6pc]{images/resultzoom/sub8_squat_nomotionNonrigid3D-400_zoom2.png}} \caption{Details of an axial slice through the thigh. Rows: femoral bone, skin border. Overlay of the motion-free reference (red) and the result of the method (a) rigid IMU, (b) non-rigid 2D IMU, (c) non-rigid 3D IMU (green); overlapping pixels are shown in yellow.} \label{fig-resultzoom} \end{figure}
Differences between the methods can only be seen in a detailed overlay of the motion compensated reconstruction with the motion-free reconstruction. In Fig.~\ref{fig-resultzoom}, details of the axial slice through the thigh are depicted at the femoral bone and at the skin-air border. The motion-free reconstruction is shown in red, and the motion compensated reconstructions of the rigid, non-rigid 2D and non-rigid 3D IMU methods are shown in green in the three columns. All pixels that occur in both overlaid images are depicted in yellow. It is noticeable that the rigid correction method fails to estimate the exact thigh motion, leading to an observable shift visible as a red or green halo at the bone interface and at the skin border. This shift is reduced for the non-rigid 2D correction and almost imperceptible for the non-rigid 3D correction.
\input{resultTable_big}
This visual impression is confirmed by the SSIM and RMSE values in Table~\ref{tab-results}. All proposed methods achieve SSIM and RMSE values that are similar to or better than those of the reference marker-based method. Compared with the uncorrected case, this corresponds to an improvement of 24--35\% in the SSIM and 78--85\% in the RMSE values, respectively. Higher SSIM scores and lower RMSE values are achieved for the 30 degrees squat scans compared with the 60 degrees squat scans. When comparing the three proposed IMU methods, the results show a slight advantage of the non-rigid 3D approach over the other two IMU-based approaches.

\subsection{Noise analysis}
\input{resultTableNoiseSignals_final}
The decremental signal noise analysis shows that the RMS noise of current commercially available consumer IMUs would prevent a successful IMU-based motion compensation (Table~\ref{tab-noiseSignals}, top left). While the resulting rotation estimate shows an average RMSE of 1.45$^\circ$ with respect to the noise-free estimate, the value for the estimated translation is considerably larger (9461\,mm). Deviations of the translation and rotation above 1\,mm and 1$^\circ$, respectively, are expected to decrease the reconstruction quality considerably. For noisy acceleration and angular velocity, an average RMSE value below these thresholds was only achieved if the RMS noise value was decreased by a factor of $10^{4}$ or $10^{5}$. For this reason, the estimated motion matrices of these noise levels are used in the further analysis to perform a motion compensated reconstruction.
\begin{figure*}[tb] \centering \subcaptionbox*{}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU-152.png}} \quad \subcaptionbox*{}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise55-152.png}} \quad \subcaptionbox*{}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise54-152.png}} \quad \subcaptionbox*{}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise45-152.png}} \quad \subcaptionbox*{}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise44-152.png}} \\ \subcaptionbox*{}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU-400.png}} \quad \subcaptionbox*{}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise55-400.png}} \quad \subcaptionbox*{}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise54-400.png}} \quad \subcaptionbox*{}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise45-400.png}} \quad \subcaptionbox*{}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise44-400.png}} \\ \subcaptionbox{No noise}{\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_reslice-167.png}} \quad \subcaptionbox{$f_{a} = 5$, $f_{g} = 5$} {\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise55_reslice-167.png}} \quad \subcaptionbox{$f_{a} = 5$, $f_{g} = 4$} {\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise54_reslice-167.png}} \quad \subcaptionbox{$f_{a} = 4$, $f_{g} = 5$} {\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise45_reslice-167.png}} \quad \subcaptionbox{$f_{a} = 4$, $f_{g} = 4$} {\includegraphics[width=7pc]{images/resultnoise/sub8_reco_squat_RigidIMU_noise44_reslice-167.png}} \caption{Comparison of noise-free and noisy rigid IMU compensation. Rows: axial slice through the shin, axial slice through the thigh, sagittal slice. (a) Noise-free IMU signal; in (b)-(e), noise is added to the simulated acceleration and angular velocity. The RMS noise values are $1.8$\,mg and $0.07\,^\circ$/s divided by $10^{f_{a}}$ and $10^{f_{g}}$, respectively.} \label{fig-resultnoise} \end{figure*}
The resulting reconstructions are shown in Fig.~\ref{fig-resultnoise}. While the image quality for $f_a = 5$ is similar to the motion-free case, streaking artifacts are visible for $f_a = 4$, independent of whether $f_g = 4$ or $f_g = 5$.
\input{resultTable_Noise}
The quantitative analysis of the noisy results in Table~\ref{tab-noise} confirms this finding: if $f_a = 5$, the average SSIM and RMSE values are only slightly decreased and increased, respectively, compared with the noise-free estimation, but they deteriorate markedly when $f_a = 4$.

\section{Discussion and conclusion}
With the presented simulation study, we have shown the feasibility and the limitations of using IMUs for motion compensated CT reconstruction. While all proposed methods are capable of reducing motion artifacts in the noise-free case, our noise analysis shows that the approach is not yet fully applicable in real settings. The presented initialization approach based on the system geometry and the first two projection images works well under the optimal conditions of a simulation. In a real setting, it is unlikely that the IMU will contain clearly distinguishable metal components exactly at the origin and unit-vector tips of the IMU coordinate system, and such small components are unlikely to be resolved using current flat-panel detectors.
However, the presented approach can be applied with four arbitrary IMU points, assuming their relation to the origin and coordinate system is known. The IMU should then be positioned such that the projections of these points are well distinguishable in the two projection images required for initialization. The results of all proposed IMU-based motion compensation methods are qualitatively and quantitatively equivalent, or even improved, compared with the gold standard marker-based approach that estimates a rigid motion. For the marker-based approach, multiple tiny markers have to be placed individually and need to be attached directly to the skin in order to limit soft tissue artifacts. For effective marker tracking, it should be ensured that they do not overlap in the projections. The metal also produces artifacts in the knee region. An advantage of our proposed methods is that only one or two IMUs on the leg are needed. The only requirement is that the components used for initialization need to be visible in the first projection images. Since the shank and thigh are modeled as stiff segments, the sensors can be placed sufficiently far away from the knee joint so as not to cause metal artifacts that could hinder subsequent image analyses. It is noticeable that all methods performed slightly better on the scans where subjects were asked to hold a squat of 30 degrees compared with those at 60 degrees. This is likely because it is more challenging to hold the same pose in a deeper squat; the motion in these cases has a higher range, leading to increased errors. With the non-rigid 3D IMU approach, improved results are achieved compared with the rigid IMU approach, especially in the region of the thigh. Although this is only a small improvement, it may have a significant impact on further image analyses given the sub-millimeter range of the expected cartilage change under load \cite{Glaser2002}. However, the simple model of three moving joint positions and an affine deformation is considerably less complex than the XCAT spline deformation used during projection generation, suggesting that further improvements can be achieved by using a more realistic model. The non-rigid 2D IMU approach provides small improvements in the visual results compared with the rigid approach (Fig.~\ref{fig-resultzoom}), but the quantitative evaluation shows similar SSIM and RMSE values. Although the non-rigid motion estimate might be more accurate, the image deformation at the same time introduces small errors, since X-rays measured at a deformed detector position would also have been attenuated by other materials. It is notable that the noise has a larger effect on the processing of the accelerometer signal than on that of the gyroscope signal (Table~\ref{tab-noiseSignals}). On the one hand, the double integration performed on the acceleration leads to a quadratic error propagation. On the other hand, the noisy gyroscope signals used for gravity removal and velocity integration introduce additional errors that accumulate during the acceleration processing. In our study, we focus only on signal noise as one of the most severe IMU measurement errors. In the future, similar simulations might be performed in order to determine further necessary specifications. The noise level improvements that are required for a real application are in the range of $10^{5}$ for the accelerometer and $10^{4}$ for the gyroscope.
Although recently developed accelerometers and gyroscopes achieve such low noise levels, they are designed to measure signals in the mg-range and are far too delicate for the application at hand \cite{Darvishia2019,Masu2020,Yamane2019}. If developments continue to progress rapidly, and a robust sensor with a low noise level and a high measurement range becomes available, our method could be applied in a real setting.

\section*{Acknowledgments}
This work was supported by the Research Training Group 1773 Heterogeneous Image Systems funded by the German Research Foundation (DFG), and by the AIT4Surgery grant funded by the Federal Ministry of Education and Research (BMBF, grant number 03INT506BA). Bjoern Eskofier gratefully acknowledges the support of the German Research Foundation (DFG) within the framework of the Heisenberg professorship programme (grant number ES 434/8-1). The authors acknowledge funding support from NIH 5R01AR065248-03 and NIH Shared Instrument Grant No. S10 RR026714 supporting the zeego@StanfordLab.
\bibliographystyle{splncs04}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The tunneling of particles from a potential well into empty space is one of the fundamental problems in quantum mechanics. It has been used in the analysis of such phenomena as nuclear $\alpha$ decay \cite{1928-Gamow-ZP,1928-Gurney-Nature}, proton emission \cite{1999-Talou-PRC,2000-Talou-PRC}, fusion \cite{1998-Balantekin-RevModPhys}, fission \cite{1991-Bhandari-PRL}, photoassociation \cite{2000-Vatasescu-PRA}, photodissociation \cite{1984-Keller-PRA}, and the functioning of tunnel diodes \cite{1984-Ricco-PRB}. Many aspects of particle tunneling into open space have been studied in detail over the years. For example, the single-particle tunneling process and the tunneling of a multi-body Bose-Einstein condensate are now well understood \cite{1961-Winter-PR,2003-Razavy-Book,1998-Ueda-PRL,2001-Salasnich-PRA,2005-Carr-JPB,2006-Schlagheck-PRA,2007-Huhtamaki-PRA,2017-Zhao-PRA}. Between these two extreme situations lies the problem of tunneling of a few strongly interacting particles, which turns out to be a much more complicated issue. In this case, strong inter-body correlations play a role in the dynamics of the system, and thus the physics cannot be reduced to an approximate description at the one-body level \cite{2004-Gogolin-Book}. As a result, the problem of few-particle tunneling raises many questions that still have no satisfactory answers. In recent years, interest in the subject of quantum tunneling has increased thanks to the rapid development of experimental techniques in the field of ultracold atom physics. It is possible to engineer systems with nearly any desired properties, such as the shape of the external potential \cite{2005-Meyrath-PRA,2009-Henderson-NJP,2010-VanEs-JPB}, the effective dimensionality \cite{2001-Gorlitz-PRL,2001-Greiner-PRL,2001-Schreck-PRL,2004-Stoferle-PRL}, the initial state \cite{2011-Serwane-Science}, or the strength of interparticle interactions \cite{2008-Pethick-Book,2010-Chin-RevModPhys}. Recent important experimental achievements in this area include the experiments in Selim Jochim's group in Heidelberg, where the decay of tunneling few-fermion systems was investigated \cite{2012-Zurn-PRL,2013-Zurn-PRL}. The problem of a few particles tunneling from an open well has received significant attention in recent years, and multiple theoretical works on the subject have been published. In most of these works \cite{2006-DelCampo-PRA,2009-Lode-JPB,2011-Kim-JPB,2012-Maruyama-PRC,2012-Rontani-PRL,2012-Lode-PNAS,2013-Bugnion-PRA,2013-Hunn-PRA,2014-Lode-PRA,2013-Rontani-PRA,2014-Maksimov-PRA,2015-Gharashi-PRA,2015-Lundmark-PRA,2017-Ishmukhamedov-PRA,2019-Ishmukhamedov-PhysE,2020-Koscik-PRA}, the inter-particle interactions are assumed to be dominated by short-range forces, with only a few works \cite{2014-Krassovitskiy-JPB,2016-Fasshauer-PRA,2018-Oishi-JPG,2018-Oishi-Acta} focusing on long-range interactions. However, systems with longer-range interactions can show interesting properties. There is a variety of approaches to creating long-range-interacting systems, such as using molecules or atoms with strong dipolar interactions \cite{2009-Lahaye-RepProgPhys}. One possibility that has attracted significant interest in recent years is the creation of cold atoms in so-called Rydberg-dressed states, which can be achieved when the ground atomic state is off-resonantly coupled to a high-lying Rydberg state \cite{2010-Pupillo-PRL,2010-Johnson-PRA,2010-Henkel-PRL,2010-Honer-PRL,2012-Li-PRA,2017-Plodzien-PRA}.
Atoms in Rydberg-dressed states can exhibit strong interactions at large distances \cite{2016-Browaeys-JPB}, which at short distances saturate to a constant value \cite{2010-Johnson-PRA}. These interactions are highly controllable, since the parameters of the interaction can be tuned by changing the parameters of the coupling laser. At the same time, Rydberg-dressed atoms avoid problems associated with ultra-cold atoms in bare Rydberg states, such as short lifetimes, or interaction energies large enough to overwhelm typical trapping potentials \cite{2010-Johnson-PRA}. Rydberg-dressed systems have been successfully implemented in various setups, for both small and large systems \cite{2016-Jau-NatPhys,2016-Zeiher-NatPhys,2017-Zeiher-PRX,2019-Arias-PRL,2020-Borish-PRL}. They have many possible applications, including in one-dimensional geometries \cite{2017-Plodzien-PRA}. Recently, correlations in trapped two-atom systems with interactions of this kind were studied in \cite{2018-Koscik-SciRep,2019-Koscik-SciRep}. However, the correlations between two Rydberg-dressed particles tunneling from a potential trap have not yet been considered. In this paper we numerically analyze the dynamics of two particles (bosons or fermions) escaping from an effectively one-dimensional potential well into open space. The interaction potential is described by two freely tunable parameters: the approximate interaction range, and the effective interaction strength. We explore the dynamics of the particle tunneling for different interaction parameters and particle statistics. Similarly to our earlier studies of contact-interacting bosons \cite{2018-Dobrzyniecki-PRA,2019-Dobrzyniecki-PRA}, here we focus mainly on determining the dominant decay mechanism of the system: whether the particles tunnel sequentially (one by one), or as pairs. In this way we show how the tunneling dynamics of the system can be modified by tuning the interaction parameters. We additionally compare the dynamics of bosonic and fermionic particles, showing how quantum statistics affects the dynamical properties. It is worth mentioning that various aspects of pair tunneling in few-particle systems have been investigated previously \cite{2012-Maruyama-PRC,2013-Rontani-PRA,2015-Lundmark-PRA,2015-Gharashi-PRA,2018-Oishi-Acta,2018-Oishi-JPG,2019-Ishmukhamedov-PhysE}. However, in our work we give a comprehensive analysis of pair tunneling from different points of view, taking into account the interplay of quantum statistics, interaction strength, and the shape of the interaction potential. This work is organized as follows. In Sec.~\ref{sec:model} we describe the model system under study and the interaction potential. In Sec.~\ref{sec:initial-state} we examine the initial state of the system at $t = 0$, depending on the interaction parameters. In Sec.~\ref{sec:eigenstate-spectrum} we describe the spectrum of eigenstates of the two particles after opening the well. In Sec.~\ref{sec:short-time-dynamics} we describe the dynamics of the two-particle system, showing the basic nature of the tunneling dynamics, and the transition between distinct regimes that occurs at a specific value of the interaction strength. In Sec.~\ref{sec:long-time-dynamics} we focus on the long-time dynamics, analyzing the exponential nature of the decay. Section \ref{sec:conclusion} is the conclusion.
\section{The model} \label{sec:model} \begin{figure}[t] \includegraphics[width=\linewidth]{Fig1.pdf} \caption{The shape of the external potential at time $t < 0$ ($V_0(x)$, gray continuous line) and after the sudden change at $t = 0$ ($V(x)$, red and blue dotted lines), for two different values of the parameter $\lambda = 1.5, 2.5$. Energy and length are shown in units of $\hbar\omega$ and $\sqrt{\hbar/m\omega}$, respectively.} \label{fig:potential} \end{figure} We consider an effectively one-dimensional system of two identical spinless particles (bosons or fermions) of mass $m$, confined in an external potential $V(x)$ and interacting via the two-body interaction potential $U(r)$. The Hamiltonian of the system has the form \begin{equation} H = \sum\limits_{i=1}^2 \left[ -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x_i^2} + V(x_i) \right] + U(x_1-x_2), \end{equation} where $x_i$ represents the position of the $i$-th particle. We assume that at time $t < 0$ the particles are confined inside a harmonic well potential with frequency $\omega$, $V_0(x) = \frac{1}{2} m \omega^2 x^2$. Then at $t = 0$ the well is opened from one side, and the external potential for $t \ge 0$ is given by \begin{equation} \label{eq:potential_t_gt_0} V(x) = \begin{cases} \frac{1}{2}m\omega^2x^2 ,& x < \sqrt{2 \lambda}x_0, \\ \frac{1}{2}m\omega^2x^2 e^{-6(x/x_0-\sqrt{2 \lambda})^2} ,& x \ge \sqrt{2 \lambda}x_0, \end{cases} \end{equation} where $x_0 = \sqrt{\hbar/m\omega}$ is the natural oscillator length. This potential has the form of a well separated from open space by a finite barrier, and it is parametrized by the dimensionless parameter $\lambda$, approximately equal to the height of the barrier in units of $\hbar \omega$. The external potential $V(x)$ is shown in Fig.~\ref{fig:potential} and compared to the harmonic oscillator potential $V_0(x)$. The barrier height $\lambda$ is chosen so that it is higher than the energy of the system. This ensures that under-the-barrier tunneling is the only way to exit the well. For the bosonic system, we pick $\lambda=1.5$. In the case of fermions, we pick $\lambda=2.5$ since, due to the fermionic statistics, the energy of the initial state is different and a higher barrier is necessary. We assume the particles interact through a non-zero-range potential $U(r)$, in contrast to the contact interaction $g \delta(r)$ which is typically used to model interactions in ultracold systems. The form of the interaction potential $U(r)$ is based on the interaction between cold atoms in ``Rydberg-dressed'' states \cite{2010-Pupillo-PRL,2010-Johnson-PRA,2010-Henkel-PRL,2010-Honer-PRL,2012-Li-PRA,2016-Jau-NatPhys,2016-Zeiher-NatPhys,2017-Plodzien-PRA}. Experimentally, Rydberg dressing can be achieved by means of an off-resonant laser coupling between the atomic ground state and a highly-excited Rydberg state. As a result of this coupling, the ground state gains a small admixture of the Rydberg state. The effective interaction potential between such Rydberg-dressed atoms has a very characteristic form \cite{2010-Johnson-PRA,2010-Henkel-PRL,2010-Honer-PRL,2019-Koscik-SciRep}. At long interparticle distances, the interaction potential resembles the interaction between Rydberg atoms. We assume that in the studied case, the dominant contribution to this interaction comes from van der Waals forces, which depend on the interatomic distance $r$ as $r^{-6}$.
For short interparticle distances (below a certain critical range $R_\mathrm{c}$), the so-called Rydberg blockade effect suppresses a simultaneous excitation of two atoms, so the effective interaction saturates to a constant value as $r \to 0$ \cite{2010-Johnson-PRA}. The resulting effective interaction potential (under the assumption that the spatial size of the system in the perpendicular direction is much smaller than the interaction range) is modelled by the function \cite{2010-Honer-PRL,2010-Henkel-PRL,2012-Li-PRA,2016-Zeiher-NatPhys,2017-Plodzien-PRA} \begin{equation} \label{eq:rydberg_interaction} U(r) = U_0 \left[1+\left( \frac{r}{R_\mathrm{c}} \right)^6 \right]^{-1}, \end{equation} where $U_0$ (having units of energy) is the interaction amplitude at $r = 0$, and $R_\mathrm{c}$ (having units of length) can be treated as the effective range of the interaction. Both of these parameters can be independently regulated experimentally, being dependent on the detuning and the Rabi frequency of the coupling laser \cite{2010-Johnson-PRA,2012-Li-PRA,2016-Zeiher-NatPhys,2017-Plodzien-PRA}. The interaction potential \eqref{eq:rydberg_interaction} as a function of the interparticle distance is shown in Fig.~\ref{fig:rydberg_interaction_potential}. It is worth noting that in the limit $R_\mathrm{c} \to 0$, the interaction potential \eqref{eq:rydberg_interaction} is approximately equivalent to a contact interaction potential $g\delta(r)$ with $g = 2 R_\mathrm{c} U_0$ \cite{2018-Koscik-SciRep}. Based on this fact, we adopt a convention that will allow us to compare the strength of interactions for different values of the range $R_\mathrm{c}$. Namely, we make the substitution $U_0 \to g/(2 R_\mathrm{c})$, and rewrite the potential \eqref{eq:rydberg_interaction} as \begin{equation} \label{eq:rydberg_interaction_rescaled} U(r) = \frac{g}{2 R_\mathrm{c}} \left[1+\left( \frac{r}{R_\mathrm{c}} \right)^6 \right]^{-1}. \end{equation} In this approach, the interaction is parametrized not directly by the amplitude $U_0$, but rather by the effective interaction strength $g$ in the $R_\mathrm{c} \to 0$ limit. This convention has the benefit that it allows us to directly compare the bosonic system properties with those of a contact-interacting system, which have been previously analyzed e.g. in \cite{2018-Dobrzyniecki-PRA,2019-Dobrzyniecki-PRA} (although for a slightly different shape of the potential barrier). For convenience, in the following we express all magnitudes in natural units of the problem, i.e., energy is given in units of $\hbar\omega$, length in units of $\sqrt{\hbar/(m\omega)}$, interaction strength in units of $\sqrt{\hbar^3\omega/m}$, time in units of $1/\omega$, and momentum in units of $\sqrt{\hbar m \omega}$. As the system is initially confined in the harmonic oscillator trap $V_0(x)$, the initial two-body state of the system at $t = 0$ is taken to be the ground state of the interacting two-particle system confined in the potential $V_0(x)$. As there is no exact solution available for the interaction potential $U(r)$ (in contrast to the celebrated Busch \emph{et al.} solution for contact interactions \cite{1998-Busch-FoundPhys}), for given parameters $g$ and $R_\mathrm{c}$ we find the ground state numerically, by propagating a trial two-body wave function in imaginary time. The trial wave function is chosen as the ground state of two non-interacting particles (bosons or fermions) in a harmonic oscillator well.
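In the natural units defined above ($\hbar = m = \omega = 1$, so $x_0 = 1$), both potentials are one-liners; the following minimal sketch (ours; the function names are not from the original code) implements the piecewise trap potential \eqref{eq:potential_t_gt_0} and the rescaled interaction \eqref{eq:rydberg_interaction_rescaled}:
\begin{verbatim}
import numpy as np

def V_trap(x, lam):
    """Trap potential V(x) for t >= 0 in natural units (x0 = 1);
    lam approximately sets the barrier height in units of hbar*omega."""
    xb = np.sqrt(2.0 * lam)
    return np.where(x < xb,
                    0.5 * x**2,
                    0.5 * x**2 * np.exp(-6.0 * (x - xb)**2))

def U_int(r, g, Rc):
    """Rydberg-dressed interaction: plateau g/(2 Rc) for |r| << Rc,
    van der Waals tail ~ r^-6 for |r| >> Rc."""
    return (g / (2.0 * Rc)) / (1.0 + (r / Rc)**6)

# Quick checks: the barrier top is close to lam, and U(0) = g/(2 Rc).
x = np.linspace(-4.0, 8.0, 1000)
print(V_trap(x, 1.5).max())        # ~1.5 for the bosonic case
print(U_int(0.0, -2.0, 0.5))       # -2.0
\end{verbatim}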
The evolution of the system for $t > 0$ is calculated by integrating the time-dependent Schr\"{o}dinger equation numerically, using the fourth-order Runge-Kutta method with time step $\delta t = 0.005$. The calculations are done on a dense grid with spacing $\delta x = 0.125$, with the simulated region including a large extent of space where the external potential vanishes. To clarify, we represent the two-body wave function $\Psi(x_1,x_2;t)$ by the amplitudes $\psi_{ij}(t)$, obtained after the decomposition $\Psi(x_1,x_2;t) = \sum_{ij} \psi_{ij}(t) [\varphi_i(x_1)\varphi_j(x_2) \pm \varphi_j(x_1)\varphi_i(x_2)]$. Here $\varphi_i(x)$ is a single-particle function that is nonzero only on the $i$-th grid cell, \emph{i.e.}, $\varphi_i(x) = 1/\sqrt{\delta x}$ for $|x-x_i| \le \delta x / 2$. The extent of the simulated region is chosen as $x \in [-4,60]$, for a total of 512 grid points. To avoid reflections of the escaped particles off the boundary of the simulated region, we employ the complex absorbing potential technique \cite{1994-Riss-JPhysB,1996-Riss-JChemPhys,2004-Muga-PhysRep,2005-Shemer-PRA}. Specifically, in the region far from the trap (at $x > 30$) we add an imaginary potential term $-i\Gamma(x)$ to absorb particles. The form of the imaginary potential is chosen as the smoothly rising function $\Gamma(x) = 10^{-3} \times (x-30)^2$. We wish to emphasize that we have carefully checked that the final results presented in the following do not depend on the details of $\Gamma(x)$. Details about the effects of the complex absorbing potential are available in Appendix \ref{sec:cap-appendix} \footnote{The full Fortran simulation code is available from the authors upon request (Git commit hash: e4335a0ca754c1e73c64866eb2bf5f33cfbfe1d5)}. \begin{figure}[t] \includegraphics[width=\linewidth]{Fig2.pdf} \caption{The effective interaction potential $U(r)$ \eqref{eq:rydberg_interaction} as a function of interparticle distance $r$. The distance is expressed in terms of the effective range $R_\mathrm{c}$, and the potential energy is expressed in terms of the interaction amplitude $U_0$. At large distances the potential decays as $r^{-6}$, while at small distances ($|r| \lesssim R_\mathrm{c}$) it saturates to the constant value $U_0$.} \label{fig:rydberg_interaction_potential} \end{figure} \section{Initial state and energy} \label{sec:initial-state} As noted, the initial state of the system is chosen as the ground state of two particles confined in a harmonic oscillator potential. As the properties of the initial state are directly connected to the subsequent evolution dynamics, we will now examine these properties in detail as functions of the interaction parameters. \subsection{Two-boson initial state} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{Fig3.pdf} \caption{(a) Two-body density distribution $\rho_2(x_1,x_2)$ of the initial state for the two-boson system, for varying values of interaction strength $g$ and interaction range $R_\mathrm{c}$. Gray dashed lines demarcate the region $|x_1-x_2| \le R_\mathrm{c}$, in which the distance between the bosons is within the interaction range. (b) The corresponding one-body density distribution $\rho_1(x)$ of the initial two-boson state, for varying $g$ and $R_\mathrm{c}$. (c) Two-body density distribution $\rho_2(x_1,x_2)$ of the initial state for the two-fermion system. (d) One-body density distribution $\rho_1(x)$ of the initial two-fermion state.
Lengths and range $R_\mathrm{c}$ are shown in units of $\sqrt{\hbar/m\omega}$, interaction strength is shown in units of $\sqrt{\hbar^3\omega/m}$.} \label{fig:InitialStateDensity} \end{figure*} We first focus on the bosonic case. We will now directly examine the spatial distribution of two bosons in the initial state, to observe the relationship between interactions and particle correlations. In Fig.~\ref{fig:InitialStateDensity}a,b we show the single- and two-body density profiles [$\rho_1(x) = \int \mathrm{d}x_2\, |\Psi(x,x_2)|^2$ and $\rho_2(x_1,x_2) = |\Psi(x_1,x_2)|^2$] of the initial state for two bosons, for different interaction parameters. For clarity, the gray dashed lines in the $\rho_2$ plot indicate the boundaries of the two-body configuration space region for which $|x_1-x_2| \le R_\mathrm{c}$, \emph{i.e.}, where the distance between the bosons is less than $R_\mathrm{c}$. For the non-interacting case ($g = 0$), both bosons are in the harmonic oscillator ground state, and thus both the two-particle density profile $\rho_2$ and the one-particle density profile $\rho_1$ have Gaussian shapes. In this case the boson positions are entirely uncorrelated, and the two-body wave function is simply a product of two identical one-body wave functions. In the case of attractive interactions ($g = -4$), the boson positions become correlated. As can be seen from the profile $\rho_2$, the density becomes concentrated around the diagonal $x_1=x_2$, so that the bosons are more likely to be near each other. The attractive interactions also cause a narrowing of the one-body profile $\rho_1$, so the bosons are more likely to be found near the center of the well. However, for a larger interaction range $R_\mathrm{c}$, the non-interacting wave function is already almost completely contained within the region $|x_1-x_2| \le R_\mathrm{c}$, and within that region the interaction felt by the bosons is nearly constant. As a result, for higher $R_\mathrm{c}$ the attractive interactions do not significantly change the shape of the density profile. For repulsive interactions ($g = +2$ and $g = +12$), bosons are less likely to be found near each other. For a large enough interaction strength the two-body density in the region $|x_1-x_2| \le R_\mathrm{c}$ is nearly completely depleted, and the density profile $\rho_1$ splits into two maxima away from each other, indicating that the bosons are likely to be found on opposite sides of the well. For large interaction ranges $R_\mathrm{c}$ the effect of the repulsions on the density profile is weakened, so that a larger repulsive interaction strength is needed to empty the region $|x_1-x_2| \le R_\mathrm{c}$. This is because, as $R_\mathrm{c}$ increases, pushing the bosons away from each other towards the well edges requires a higher energy cost. Now let us analyze the initial energy of the two-boson system. In Fig.~\ref{fig:InitialEnergy}a we show the energy $E_\mathrm{INI}(g,R_\mathrm{c})$ of two bosons for different interaction strengths $g$ and interaction ranges $R_\mathrm{c}$. Also shown is the energy calculated in the contact interaction limit $R_\mathrm{c} \to 0$, \emph{i.e.}, for bosons interacting via the contact potential $g \delta(r)$. The energy is calculated for a system in the harmonic oscillator potential $V_0(x)$; after the external potential is changed to $V(x)$ at $t=0$, the energy of the system is almost unchanged (since the potential in the initial confinement region remains almost the same).
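On the numerical grid, these reduced densities follow by direct summation; a minimal sketch (ours), assuming $\Psi$ is stored as a $512\times512$ complex array on the grid of Sec.~\ref{sec:model}:
\begin{verbatim}
import numpy as np

dx = 0.125
# psi: complex array with psi[i, j] ~ Psi(x_i, x_j), normalized so
# that (np.abs(psi)**2).sum() * dx**2 = 1.
def reduced_densities(psi):
    """Return the two-body density rho_2 and the one-body density rho_1."""
    rho2 = np.abs(psi)**2              # rho_2(x1, x2) = |Psi(x1, x2)|^2
    rho1 = rho2.sum(axis=1) * dx       # rho_1(x) = int dx2 |Psi(x, x2)|^2
    return rho2, rho1
\end{verbatim}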
\begin{figure*}[t] \centering \includegraphics[width=\linewidth]{Fig4.pdf} \caption{(a) Initial state energy $E_\mathrm{INI}(g,R_\mathrm{c})$ for the two-boson system as a function of the interaction strength $g$, for different interaction ranges $R_\mathrm{c}$. (b) The energy $E_\mathrm{INI}(g,R_\mathrm{c})$ of two bosons as a function of $R_\mathrm{c}$, with $g$ constant. (c) Initial state energy $E_\mathrm{INI}(g,R_\mathrm{c})$ for the two-fermion system, as a function of the interaction strength $g$. Note that $E_\mathrm{INI}$ is a non-monotonic function of $R_\mathrm{c}$, as shown in the next subfigure. (d) The energy $E_\mathrm{INI}(g,R_\mathrm{c})$ of two fermions as a function of $R_\mathrm{c}$. Energy is given in units of $\hbar \omega$, range $R_\mathrm{c}$ in units of $\sqrt{\hbar/m\omega}$, interaction strength $g$ in units of $\sqrt{\hbar^3\omega/m}$.} \label{fig:InitialEnergy} \end{figure*} In the $R_\mathrm{c} \to 0$ limit, the energy is a monotonic function of $g$. As the interaction range $R_\mathrm{c}$ increases, the energy becomes overall less sensitive to changes in the interaction strength (the slope of $E_\mathrm{INI}(g,R_\mathrm{c})$ measured at $g = 0$ becomes smaller). Although in this work we focus only on interaction ranges $R_\mathrm{c} \le 1.5$ (on the order of a single natural length unit), it should be pointed out that in the $R_\mathrm{c} \to \infty$ limit the interaction $U(r)$ is expected to vanish completely for all finite $g$. This is because for $R_\mathrm{c}$ approaching infinity, the interaction is felt simply as an energy shift constant in space, with magnitude $g/(2R_\mathrm{c})$. When $R_\mathrm{c} \to \infty$, this energy shift goes to zero for all finite $g$. To better understand the effect of the interaction range, in Fig.~\ref{fig:InitialEnergy}b we examine the dependence of $E_\mathrm{INI}(g,R_\mathrm{c})$ on $R_\mathrm{c}$, with fixed $g$. For attractive interactions, the energy depends monotonically on $R_\mathrm{c}$ and gradually approaches the non-interacting value as $R_\mathrm{c}$ increases. This agrees with the previously observed properties of the density profile: for increasing $R_\mathrm{c}$, the density profile is less squeezed and smoothly approaches the non-interacting profile. On the other hand, for repulsive interactions, the dependence of $E_\mathrm{INI}(g,R_\mathrm{c})$ on $R_\mathrm{c}$ is not monotonic. For smaller $R_\mathrm{c}$, the energy increases with $R_\mathrm{c}$ up to a certain maximum value, then it begins decreasing, approaching the non-interacting value $E_\mathrm{INI} = 1$. This observation can likewise be explained by considering the density profile for repulsive systems. At first, increasing $R_\mathrm{c}$ causes the bosons to be pushed away from each other towards outer regions of the harmonic well, increasing the system energy. Beyond a certain interaction range, the interaction energy for a given $g$ is no longer sufficient to separate the bosons to a distance $\sim R_\mathrm{c}$, and thus for high $R_\mathrm{c}$ the density profile of the state is essentially identical to the non-interacting one. \subsection{Two-fermion initial state} Let us now proceed to the two-fermion case. Owing to the different particle statistics, this case differs visibly from the bosonic one already at the level of the initial state. In Fig.~\ref{fig:InitialStateDensity}c,d we show the two-body and one-body density profiles $\rho_2$ and $\rho_1$ for the initial two-fermion state, at different interaction ranges $R_\mathrm{c}$ and interaction strengths $g$.
As before, gray dashed lines in the $\rho_2$ plots indicate the region where $|x_1-x_2| \le R_\mathrm{c}$. In the non-interacting case ($g = 0$), the initial two-body state is the antisymmetrized product of the two lowest harmonic oscillator orbitals. As a result, the two-body density profile $\rho_2$ is entirely different from the bosonic case. The particle positions are anticorrelated, so that the fermions are more likely to be found on opposite sides of the well. The one-body density profile $\rho_1$ has a characteristic shape with two maxima located on opposite sides of the well center. The Pauli principle is manifested in the impossibility of finding the two fermions at exactly the same position (\emph{i.e.}, the density along $x_1=x_2$ vanishes). For attractive interactions ($g = -12$), the density is more concentrated within the $|x_1-x_2| \le R_\mathrm{c}$ region, \emph{i.e.}, the two fermions are more likely to be close to each other, although the $x_1 = x_2$ diagonal remains empty. Furthermore, for strong enough attractions, the two maxima in $\rho_1$ fuse into one maximum located in the center of the well. As $R_\mathrm{c}$ increases, the effect of attractions on the density profile becomes weaker, for the same reason as for bosons: for large $R_\mathrm{c}$ most of the non-interacting density profile is already contained within the $|x_1-x_2| \le R_\mathrm{c}$ region. In the case of repulsive interactions ($g = +2, g = +12$), another important difference compared to the boson case can be seen. Namely, for a small interaction range ($R_\mathrm{c} = 0.5$), the density profile is almost unaffected by the repulsions. This is because the non-interacting two-body wave function already vanishes in such close vicinity to the diagonal, and any further repulsions do not modify it significantly. Only for a larger interaction range ($R_\mathrm{c} = 1.5$) are the density profiles visibly affected by the repulsive interactions, with the fermions pushed further away from each other. It is worth pointing out that for large $R_\mathrm{c}$ and $g$, both in the case of bosons and fermions there occurs a complete separation between the particles, and certain properties of the system (such as the density profile) become insensitive to the particle statistics in this case. We now turn our attention to the initial energy $E_\mathrm{INI}(g,R_\mathrm{c})$. In Fig.~\ref{fig:InitialEnergy}c we show the two-fermion $E_\mathrm{INI}(g,R_\mathrm{c})$ for different interaction parameters $g$ and $R_\mathrm{c}$. The vanishing of the two-fermion wave function at $r = 0$ means that the energy is overall less affected by interactions than in the bosonic case. In the limit $R_\mathrm{c} \to 0$, it furthermore means that the interaction $U(r)$ is not felt at all, and the energy in this case is independent of interactions: $E_\mathrm{INI}(R_\mathrm{c} = 0) = 2$. As $R_\mathrm{c}$ increases above zero, the energy gradually becomes more sensitive to interactions (as can be seen from the increasing slope of $E_\mathrm{INI}$ near the $g = 0$ point). Note that this is directly opposite to the boson case, where increasing $R_\mathrm{c}$ causes the energy near $g = 0$ to become less sensitive to interactions. However, it should be noted that in the $R_\mathrm{c} \to \infty$ limit the interaction is no longer felt by the two-fermion system, for the same reason as with bosons.
Thus, for large enough $R_\mathrm{c}$ the trend reverses, at which point further increase of $R_\mathrm{c}$ causes the energy to approach the non-interacting value. For a clearer demonstration of how the two-fermion energy depends on the interaction range, in Fig.~\ref{fig:InitialEnergy}d we show the dependence of $E_\mathrm{INI}(g,R_\mathrm{c})$ on $R_\mathrm{c}$, with $g$ constant. The major difference from the bosonic case is that the energy approaches the same constant value in the two limits $R_\mathrm{c} \to 0$ and $R_\mathrm{c} \to \infty$. Thus, for intermediate values of $R_\mathrm{c}$ the energy has a non-monotonic dependence on $R_\mathrm{c}$, with a single minimum (maximum) for attractive (repulsive) interactions. \section{Eigenstates of two particles in open space} \label{sec:eigenstate-spectrum} After the well is opened at $t = 0$, the initial state begins to decay as the particles tunnel into open space. To gain a basic understanding of the tunneling process, it is helpful to examine the many-body Hamiltonian spectrum for a system of particles in the region outside the well. In this way we can understand what configurations are available for the escaping particles. For this purpose, we describe the particles in their final state (after tunneling) via a simplified Hamiltonian. We assume that the particles are far enough from the well that they feel no external potential, and thus can be described by a simplified Hamiltonian with $V(x) = 0$: \begin{equation} \label{eq:open-space-hamiltonian} H_\mathrm{out} = \sum\limits_{i=1}^2 \left[ -\frac{\hbar^2}{2m} \frac{\partial^2}{\partial x_i^2} \right] + U(x_1-x_2). \end{equation} To find the eigenstates and eigenenergies of the Hamiltonian \eqref{eq:open-space-hamiltonian}, it is convenient to perform a transformation to the coordinates of the center-of-mass frame: $X = (x_1+x_2)/2, r = x_1 - x_2$. In these new variables the Hamiltonian can be written as a sum of two independent single-particle Hamiltonians, $H_\mathrm{out} = H_\mathrm{X} + H_\mathrm{r}$: \begin{align} H_\mathrm{X} &= -\frac{1}{4}\frac{\partial^2}{\partial X^2}, \label{eq:hamiltonian_rel_rydberg_cm} \\ H_\mathrm{r} &= -\frac{\partial^2}{\partial r^2} + U(r). \label{eq:hamiltonian_rel_rydberg_r} \end{align} The total energy of the two particles in free space is correspondingly a sum of eigenenergies of the two Hamiltonians, $E = E_\mathrm{X} + E_\mathrm{r}$, and the wave function is given in terms of the product of their eigenfunctions, $\Psi(x_1,x_2) = \phi_\mathrm{X}(X) \phi_\mathrm{r}(r)$. Solutions for the center-of-mass motion Hamiltonian $H_\mathrm{X}$ are straightforward, representing free-particle wave functions. In the case of the relative-motion Hamiltonian $H_\mathrm{r}$, an exact solution is not available, and we obtain the eigenenergies and eigenfunctions by numerical diagonalization. In Fig.~\ref{fig:BosonR-RydbergRelativeHamiltonianEigenspectrum}a we show the spectrum of eigenenergies of $H_\mathrm{r}$ as a function of $g$, obtained by numerical diagonalization, for two different values of $R_\mathrm{c}$. Two groups of states are distinguishable. The first group (indicated in gray) consists of almost-free-particle states with positive energy $E_\mathrm{r}$, forming a dense band. Their relative wave functions $\phi_\mathrm{r}(r)$ have a density distributed throughout all space, and describe a configuration of two (nearly) free particles. These states are present for all values of $g$.
The second group (indicated in black) includes bound states with negative energy $E_\mathrm{r}$. They are much sparser than the scattering states and do not form a dense band. Their wave functions $\phi_\mathrm{r}(r)$, with density centered near $r = 0$, describe states of two bound particles travelling together. These states only appear for negative interaction strengths $g < 0$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Fig5.pdf} \caption{(a) The two-body spectrum of the relative-motion Hamiltonian $H_\mathrm{r}$ \eqref{eq:hamiltonian_rel_rydberg_r} for two particles in empty space, interacting by the potential $U(r)$, as a function of interaction strength $g$. Results are shown for two different interaction ranges: $R_\mathrm{c} = 0.5$, $R_\mathrm{c} = 1.5$. For all $g$ there exists a spectrum of scattering states with $E_\mathrm{r} > 0$ (gray) that describe the relative motion of two almost-free particles. For $g < 0$ there are also bound states available, with energy $E_\mathrm{r} < 0$ (black). Solid (dashed) black lines correspond to bound states which have wave functions $\phi_\mathrm{r}(r)$ symmetric (antisymmetric) about $r=0$. The general shape of the wave functions $\phi_\mathrm{r}(r)$ is shown schematically near the corresponding energies. (b) The threshold interaction strength $g_\mathrm{pair}$, below which there exists an antisymmetric bound state in the $H_\mathrm{r}$ eigenspectrum and thus pairing of two fermions is possible. The results shown are calculated numerically for the exact potential $U(r)$ (solid line), together with the result $g_\mathrm{pair} \approx -\pi^2/(2 R_\mathrm{c})$ for an approximate rectangular potential (dashed line). Close agreement is seen between the two values. Energies are expressed in units of $\hbar\omega$, interaction strength in units of $\sqrt{\hbar^3\omega/m}$, interaction range in units of $\sqrt{\hbar/m\omega}$.} \label{fig:BosonR-RydbergRelativeHamiltonianEigenspectrum} \end{figure} The wave functions $\phi_\mathrm{r}(r)$ have a well-defined symmetry in $r$, being even or odd functions of $r$: $\phi_\mathrm{r}(-r) = \pm \phi_\mathrm{r}(r)$. A single symmetric bound state appears immediately below $g = 0$ (indicated with a solid black line). For increasing attractive interactions $|g|$, additional bound states make their appearance, alternating between antisymmetric and symmetric wave functions $\phi_\mathrm{r}(r)$ (their energies are indicated by dashed and solid black lines, respectively). The spacing between values of $g$ at which subsequent bound states appear is dependent on $R_\mathrm{c}$. For decreasing $R_\mathrm{c}$, the spacing between the bound states increases, and in the limit $R_\mathrm{c} \rightarrow 0$ (where the potential becomes equivalent to the contact potential) only one symmetric bound state is present. The possibility for the particles to form pairs in the region outside the well depends on the availability of appropriate bound states. For bosons, where the relative wave function must be symmetric, the appropriate bound state becomes available as soon as the interaction strength falls below zero ($g < 0$), regardless of the value of $R_\mathrm{c}$. However, for fermions, the necessary bound state must have an antisymmetric wave function. Thus, pairing for fermions is only possible below a certain value $g_\mathrm{pair} < 0$, for which a second bound state (with odd symmetry) appears in the spectrum. This value $g_\mathrm{pair}$ is directly dependent on $R_\mathrm{c}$.
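The bound-state counting behind Fig.~\ref{fig:BosonR-RydbergRelativeHamiltonianEigenspectrum}b can be reproduced with standard numerics; a minimal sketch (ours, with illustrative grid parameters) diagonalizes $H_\mathrm{r}$ by finite differences and counts the states with $E_\mathrm{r} < 0$:
\begin{verbatim}
import numpy as np

def bound_state_energies(g, Rc, L=30.0, n=1200):
    """Diagonalize H_r = -d^2/dr^2 + U(r) on r in [-L, L] by finite
    differences and return the negative eigenvalues (bound states)."""
    r = np.linspace(-L, L, n)
    h = r[1] - r[0]
    U = (g / (2.0 * Rc)) / (1.0 + (r / Rc)**6)
    # Tridiagonal kinetic part; prefactor 1 (not 1/2) because the
    # relative motion carries the reduced mass m/2.
    H = (np.diag(np.full(n, 2.0 / h**2) + U)
         - np.diag(np.full(n - 1, 1.0 / h**2), 1)
         - np.diag(np.full(n - 1, 1.0 / h**2), -1))
    E = np.linalg.eigvalsh(H)
    return E[E < 0.0]

# g_pair is the (negative) g at which a second bound state appears;
# compare with the rectangular-well estimate |g_pair| ~ pi^2 / (2 Rc).
for Rc in (0.5, 1.0, 1.5):
    g = -1.1 * np.pi**2 / (2.0 * Rc)   # slightly past the estimated threshold
    print(Rc, g, len(bound_state_energies(g, Rc)))   # expect 2 bound states
\end{verbatim}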
It is worth noting that the approximate value of $g_\mathrm{pair}$ can be obtained analytically when the interaction potential $U(r)$ in \eqref{eq:hamiltonian_rel_rydberg_r} is replaced by a rectangular well potential, since in this case there exists an exact expression for the total number $n$ of bound states \cite{2003-Williams-Book}. For the particular parameters in this problem (mass $1/2$, well length $2R_\mathrm{c}$, well depth $|g|/(2R_\mathrm{c})$) the expression is $n = \lceil \sqrt{2 R_\mathrm{c} |g| } / \pi \rceil$, where $\lceil \cdot \rceil$ is the ceiling function, \emph{i.e.}, rounding up to the nearest integer. Therefore, the condition for the existence of a second bound state is $\sqrt{2 R_\mathrm{c} |g| } / \pi > 1$, giving the expression for $|g_\mathrm{pair}|$ as $\pi^2/(2 R_\mathrm{c})$. For the interaction ranges considered in this work, this estimate gives $g_\mathrm{pair} \approx -9.9$ for $R_\mathrm{c} = 0.5$ and $g_\mathrm{pair} \approx -3.3$ for $R_\mathrm{c} = 1.5$. In Fig.~\ref{fig:BosonR-RydbergRelativeHamiltonianEigenspectrum}b we compare this expression with the numerically obtained value of $g_\mathrm{pair}$ for the Rydberg potential (defined as the highest value of $g$ at which there are at least two states with negative energy). We obtain a close agreement between the two cases. Note that in the limit of contact interactions ($R_\mathrm{c} \rightarrow 0$) we have $g_\mathrm{pair} \rightarrow -\infty$, so that the pairing between fermions becomes impossible, as expected for a contact potential limited to the $s$-wave scattering level. The above results have direct significance for the tunneling dynamics. It can be surmised that the presence of pair tunneling depends on whether the particles are able to form pairs in the open-space region. The above analysis indicates that for bosons, pair tunneling will be present to some degree for any value of attractive interactions. For fermions, much greater interaction scales will be needed to analyze pair tunneling, since a strong attractive interaction $g < g_\mathrm{pair}$ is required for pair tunneling to even occur in the first place. However, while this eigenspectrum gives information about the availability of specific states, it does not directly specify which of the tunneling mechanisms will dominate in the dynamics. We therefore address this question by performing a numerically exact time evolution and analyzing the tunneling process in a time-dependent way. \section{Dynamics of the density distribution} \label{sec:short-time-dynamics} The dynamics at $t>0$ can be quite well understood when the evolution of the two-body density distribution $\rho_2(x_1,x_2;t) = |\Psi(x_1,x_2;t)|^2$ is analyzed. In a recent work \cite{2018-Dobrzyniecki-PRA}, we conducted an analysis along these lines for a two-boson system with contact interactions. It was shown that the dynamical properties depend significantly on the strength $g$ of interparticle interactions. As $g$ is tuned from repulsive to strongly attractive values, the dynamics undergoes a transition between two regimes: the first one is dominated by sequential tunneling, so that both bosons leave the well one after the other, while the second one is almost completely dominated by pair tunneling. Here we analyze how these results apply to systems with non-zero-range interactions by studying the evolution of the two-particle density profile $\rho_2(x_1,x_2;t)$.
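The density snapshots discussed below are generated by repeatedly applying the propagation step of Sec.~\ref{sec:model}; the following minimal sketch (ours, not the authors' Fortran code) shows one such fourth-order Runge-Kutta step, including the complex absorbing potential, with $V$ and the interaction matrix tabulated from the potentials sketched earlier:
\begin{verbatim}
import numpy as np

dt, dx = 0.005, 0.125                    # step sizes of Sec. II
x = -4.0 + dx * np.arange(512)           # grid covering x in [-4, 60)
Gamma = np.where(x > 30.0, 1e-3 * (x - 30.0)**2, 0.0)  # absorbing potential

def H_psi(psi, V, Ugrid):
    """Action of (H - i Gamma) on psi[i, j] ~ Psi(x_i, x_j); hbar = m = 1,
    kinetic term by second-order finite differences (hard-wall edges)."""
    lap = np.zeros_like(psi)
    lap[1:-1, :] += (psi[2:, :] - 2.0 * psi[1:-1, :] + psi[:-2, :]) / dx**2
    lap[:, 1:-1] += (psi[:, 2:] - 2.0 * psi[:, 1:-1] + psi[:, :-2]) / dx**2
    pot = (V - 1j * Gamma)[:, None] + (V - 1j * Gamma)[None, :] + Ugrid
    return -0.5 * lap + pot * psi

def rk4_step(psi, V, Ugrid):
    """One step of i d(psi)/dt = (H - i Gamma) psi, fourth-order Runge-Kutta."""
    f = lambda p: -1j * H_psi(p, V, Ugrid)
    k1 = f(psi)
    k2 = f(psi + 0.5 * dt * k1)
    k3 = f(psi + 0.5 * dt * k2)
    k4 = f(psi + dt * k3)
    return psi + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
\end{verbatim}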
To more easily distinguish the different tunneling processes in our analysis, we divide the configuration space into three regions $\mathbf{P}_i$: \begin{equation} \label{eq:regions} \begin{aligned} \mathbf{P}_2 &= \{(x_1,x_2) : x_1 \le x_\mathrm{B} \land x_2 \le x_\mathrm{B}\},\\ \mathbf{P}_1 &= \{(x_1,x_2) : (x_1 > x_\mathrm{B} \land x_2 \le x_\mathrm{B}) \\ \phantom{\mathbf{P}_1 }&\phantom{= \{(x_1,x_2) } \lor (x_1 \le x_\mathrm{B} \land x_2 > x_\mathrm{B})\},\\ \mathbf{P}_0 &= \{(x_1,x_2) : x_1 > x_\mathrm{B} \land x_2 > x_\mathrm{B}\}, \end{aligned} \end{equation} where $x_\mathrm{B} \approx \sqrt{2\lambda}$ is the position of the well boundary. The regions $\mathbf{P}_2,\mathbf{P}_1,\mathbf{P}_0$ encompass configurations with exactly two, one, or zero particles inside the well, respectively. \subsection{Two-boson dynamics} In Fig.~\ref{fig:BosonR-DensityEvolution} we show snapshots of the evolution of $\rho_2(x_1,x_2;t)$ at different times $t$ after opening the well, for two-boson systems with different interaction strengths $g$ and interaction ranges $R_\mathrm{c}$. For better visibility, the well boundary $x_\mathrm{B}\approx\sqrt{2\lambda}$ is indicated with dashed lines, dividing the configuration space into the different regions $\mathbf{P}_n$. At the beginning ($t = 0$), the entire two-body wave function is contained within the region $\mathbf{P}_2$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Fig6.pdf} \caption{Time evolution of the density distribution $\rho_2(x_1,x_2,t)$ in an initially trapped two-boson system, for different interaction strengths $g$ and two different interaction ranges $R_\mathrm{c}$. The dashed lines demarcate the well boundary $x_\mathrm{B}\approx \sqrt{3}$. For the non-interacting and repulsive systems ($g = 0$, $g = 2$) essentially the entire decay process takes place via sequential tunneling of the two bosons. In the strongly attractive system ($g = -2$) the system decays mostly via pair tunneling, with the participation of sequential tunneling depending on interaction range $R_\mathrm{c}$. Positions and interaction range are in units of $\sqrt{\hbar/m\omega}$, interaction strength in units of $\sqrt{\hbar^3\omega/m}$, time in units of $1/\omega$.} \label{fig:BosonR-DensityEvolution} \end{figure} For the non-interacting system $(g = 0)$, both bosons tunnel entirely independently. After a short time $t = 10$ a large amount of density is present in the region $\mathbf{P}_1$, indicating a high probability of exactly one boson being outside the well. Additionally, a non-negligible amount of density is present in the region $\mathbf{P}_0$, corresponding to the event of two bosons having tunneled out of the well. Throughout the entire evolution, the two-body density is completely uncorrelated, \emph{i.e.}, the two-body wave function is simply the product of two identical one-body wave functions. The bosons are likely to leave the well one after the other, but a coincidental simultaneous tunneling of two bosons is also possible. For the repulsive system $(g = +2)$, the sequential tunneling of bosons is enhanced. In this case, there is a visible anticorrelation in the boson positions, so that the density close to the $x_1 = x_2$ diagonal vanishes. The decay here occurs solely via sequential tunneling, so that the probability flows from $\mathbf{P}_2$ into the $\mathbf{P}_1$ region, and subsequently from the areas of increased density in $\mathbf{P}_1$ into $\mathbf{P}_0$ (corresponding to the escape of the second boson out of the well).
The tunneling of bound boson pairs is entirely absent. This is expected, since we have already noted in Sec.~\ref{sec:eigenstate-spectrum} that no bound pair states are available (in the outside-well region) for $g \ge 0$. Comparing the $R_\mathrm{c} = 0.5$ and $R_\mathrm{c} = 1.5$ cases, we see that the density dynamics remain qualitatively unchanged upon tuning of $R_\mathrm{c}$. The dynamics are significantly different for a strongly attractive system $(g = -2)$. Here, bound pair states are available for bosons in open space, and so pair tunneling is possible. For the $R_\mathrm{c} = 0.5$ case, we see that pair tunneling is essentially the only tunneling mechanism available. Therefore, the density flows directly from $\mathbf{P}_2$ into the $\mathbf{P}_0$ region and remains concentrated along the $x_1 = x_2$ diagonal, while it practically vanishes in the region $\mathbf{P}_1$. This demonstrates that the bosonic system with nonzero interaction range can undergo a transition into the pair tunneling regime, similarly to a $\delta$ interaction system. However, for the same $g = -2$ but a larger interaction range $R_\mathrm{c} = 1.5$, the density dynamics change. While the majority of the decay still takes place through pair tunneling, there is also a non-negligible participation from sequential tunneling, as seen by the flow of density into $\mathbf{P}_1$. This can be explained by considering the system energy. The suppression of sequential tunneling occurs when the total system energy $E_\mathrm{INI}$ falls below the one-particle energy threshold \cite{2018-Dobrzyniecki-PRA}. Since for larger $R_\mathrm{c}$ the energy of the attractive two-boson system becomes less sensitive to $g$ (as we have shown in Fig.~\ref{fig:InitialEnergy}a), the energy is farther away from crossing the threshold and the sequential tunneling is not as heavily suppressed. This also indicates that the interaction range parameter $R_\mathrm{c}$ can be treated as an additional knob to control the nature of tunneling, in addition to the interaction strength $g$. \subsection{Two-fermion dynamics} We now proceed to analyze the density dynamics for a system of two fermions, and compare the result with the bosonic case. In Fig.~\ref{fig:FermionR-DensityEvolution} we show the evolution of $\rho_2(x_1,x_2;t)$ for the two-fermion system at different interaction strengths $g$ and ranges $R_\mathrm{c}$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Fig7.pdf} \caption{Time evolution of the density distribution $\rho_2(x_1,x_2,t)$ in an initially trapped two-fermion system, for different interaction strengths $g$ and two different interaction ranges $R_\mathrm{c}$. The dashed lines demarcate the well boundary $x_\mathrm{B}\approx \sqrt{5}$. For the non-interacting and repulsive systems ($g = 0$, $g = +5$) essentially the entire decay process takes place via sequential tunneling of the two fermions. In the strongly attractive system ($g = -11$) the system decays mostly via pair tunneling, with the participation of sequential tunneling depending on interaction range $R_\mathrm{c}$. Positions and interaction range are in units of $\sqrt{\hbar/m\omega}$, interaction strength in units of $\sqrt{\hbar^3\omega/m}$, time in units of $1/\omega$. } \label{fig:FermionR-DensityEvolution} \end{figure} Already in the non-interacting case $(g = 0)$ the two-fermion dynamics differs significantly from the bosonic case. Now, the two-body wave function is no longer a product of two identical one-body wave functions.
As a result, nonzero interparticle correlations are present in the system (although they are trivial, caused solely by the particle statistics). The density at the $x_1 = x_2$ diagonal remains zero for all times, and simultaneous tunneling of two fermions is suppressed. The only tunneling mechanism in this non-interacting case is sequential tunneling, with density flowing from $\mathbf{P}_2$ to $\mathbf{P}_1$, and from there to $\mathbf{P}_0$. One characteristic feature is that, after a brief time, a series of stripes of zero density appears in the $\mathbf{P}_0$ region, parallel to the $x_1 = x_2$ diagonal. Their presence can be simply explained as a result of interference between the wave functions of two approximately free particles with different momenta. In this approximation, the two-body density in the $\mathbf{P}_0$ region takes the form $\rho_2(x_1,x_2) \approx |e^{i k_1 x_1} e^{i k_2 x_2} - e^{i k_2 x_1} e^{i k_1 x_2}|^2 = 2[1-\cos[(k_2-k_1)(x_1-x_2)]]$, reproducing the interference pattern. If the momenta are chosen as $k_1 = 1,k_2 = \sqrt{3}$ (to match the initial fermion energies $E=1/2, E=3/2$), this approximate form closely reproduces the observed spacing between the stripes. Now let us look at the fermion density dynamics in the case of repulsive interactions $(g = +5)$. For a relatively small interaction range $(R_\mathrm{c} = 0.5)$, since the density is already nearly zero close to the $x_1=x_2$ diagonal, the dynamics remain nearly unchanged from the non-interacting case. However, for a larger range $R_\mathrm{c} = 1.5$, the interactions are able to affect the dynamics significantly. In particular, there is a visible change in the shape of the interference minima within $\mathbf{P}_0$. We now turn to the case of the strongly attractive system $(g = -11)$. At this value of $g$ a fermionic pair mode is available, and the initial state can decay via pair tunneling. For the $R_\mathrm{c} = 0.5$ case, the pair tunneling is seen as an area of high density concentrated along the $x_1=x_2$ diagonal. However, sequential tunneling still plays a significant role, as indicated by the flow of density from $\mathbf{P}_2$ into $\mathbf{P}_1$. For $R_\mathrm{c} = 1.5$, however, sequential tunneling vanishes and fermions are only emitted as pairs. Thus, we see that for fermions, there exists a regime dominated by pair tunneling just like for bosons. Note also that the influence of $R_\mathrm{c}$ on the dynamics is quite the opposite of the bosonic case: increasing $R_\mathrm{c}$ causes a greater suppression of sequential tunneling. This effect is consistent with the total energy of the system. As we have seen in Fig.~\ref{fig:InitialEnergy}c, the energy $E_\mathrm{INI}$ becomes smaller upon increasing the interaction range to $R_\mathrm{c}=1.5$, thus it crosses the critical threshold of the one-particle energy and one-body tunneling is suppressed more heavily. \section{Long-time dynamics and the decay rate} \label{sec:long-time-dynamics} The short-time dynamics, expressed through the evolution of $\rho_2$, allow us to distinguish between specific tunneling mechanisms. However, a more in-depth understanding of the tunneling process can be gained by simulating the time evolution over longer timescales. In this section we will focus on the long-time dynamics of the system, and in particular on the exponential nature of the decay which becomes evident at such timescales.
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{Fig8.pdf} \caption{The time evolution of the probability ${\cal P}_2(t)$ over a long time scale (red, solid) for the two-particle system with various interaction strengths $g$, for the two-boson and two-fermion system with interaction range $R_\mathrm{c} = 1.5$. The blue dashed line shows an exponential fit to ${\cal P}_2(t)$. It can be seen that ${\cal P}_2(t)$ decays exponentially (apart from very long times). Time is given in units of $1/\omega$, interaction strength $g$ in units of $\sqrt{\hbar^3\omega/m}$, interaction range $R_\mathrm{c}$ in units of $\sqrt{\hbar/m\omega}$.} \label{fig:Rydberg-P2ExponentialDecay} \end{figure} It is known that decaying systems typically obey an exponential decay law \cite{1976-Davydov-Book}. That is, the survival probability, \emph{i.e.}, the probability that the system remains in the initial state, obeys an exponential decay law to a very good approximation (apart from very short and very long times \cite{1958-Khalfin-JETP,1997-Wilkinson-Nature,1972-Fonda-NuovoCim,2002-VanDijk-PRC,2006-Rothe-PRL,2006-Muga-PRA}). For the two-body trapped system, the survival probability is closely mimicked by the probability that both particles remain in the well region, given by $\mathcal{P}_2(t) = \int_{\mathbf{P}_2} |\Psi(x_1,x_2;t)|^2 \mathrm{d}x_1 \mathrm{d}x_2$. Therefore, its time evolution should be approximately given by \begin{equation} \label{eq:exponential_decay_of_p2} \mathcal{P}_2(t) \sim e^{-\gamma t}, \end{equation} with the decay rate $\gamma$ constant in time. To confirm this assumption, in Fig.~\ref{fig:Rydberg-P2ExponentialDecay} we show the long-time evolution of $\mathcal{P}_2(t)$ for various interaction strengths, for bosons and fermions with interaction range $R_\mathrm{c}=1.5$. We compare the results to a fitted exponential function \eqref{eq:exponential_decay_of_p2}. The obtained decay rate $\gamma$ depends substantially on the interaction parameters. It is seen that $\mathcal{P}_2(t)$ indeed decays exponentially throughout nearly the entire evolution, regardless of $g$, both for bosons and fermions. Any deviations from exponential decay only occur at very short times, or at long times where the trapped system is practically completely depleted and $\mathcal{P}_2(t)$ is negligible. The decay rate $\gamma$ can therefore be determined by measuring the evolution of $\mathcal{P}_2(t)$ in time, and then fitting an exponential function to the results. In this way, the decay process for any value of $g$ and $R_\mathrm{c}$ can be characterized by a single value $\gamma$. At this point we wish to emphasize that the results presented for $\mathcal{P}_2$, in contrast to other probabilities, are almost insensitive to the details of the absorbing potential method used (for details, see Appendix \ref{sec:cap-appendix}). \subsection{Two-boson decay rate} For a two-boson system, the obtained decay rate is shown in Fig.~\ref{fig:BosonR-DecayRate}a as a function of $g$, for different interaction ranges $R_\mathrm{c}$. We also include results in the contact interaction limit $R_\mathrm{c} \rightarrow 0$, \emph{i.e.}, for bosons interacting via the potential $g \delta(r)$. In the inset, we additionally show the susceptibility $\chi(g) = \gamma^{-1}(\partial \gamma/\partial g)$. Its peaks signal a large sensitivity of the decay rate to small changes of the interaction strength.
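In practice, extracting $\gamma$ amounts to integrating $|\Psi|^2$ over the region $\mathbf{P}_2$ at each saved time and fitting a line to $-\ln \mathcal{P}_2(t)$; a minimal sketch (ours, with an illustrative fit window):
\begin{verbatim}
import numpy as np

dx = 0.125
x = -4.0 + dx * np.arange(512)
x_B = np.sqrt(2.0 * 1.5)               # well boundary, lambda = 1.5 (bosons)
inside = x <= x_B                      # one-body "still in the well" mask

def P2(psi):
    """Probability that both particles remain in the well: the integral
    of |Psi|^2 over the region P_2 (both coordinates below x_B)."""
    rho2 = np.abs(psi)**2
    return rho2[np.ix_(inside, inside)].sum() * dx**2

def decay_rate(times, p2):
    """Fit P2(t) ~ exp(-gamma t), excluding the very short and very long
    times where deviations from exponential decay occur."""
    times, p2 = np.asarray(times), np.asarray(p2)
    window = (times > 0.05 * times.max()) & (p2 > 1e-3)
    gamma, _ = np.polyfit(times[window], -np.log(p2[window]), 1)
    return gamma
\end{verbatim}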
\begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{Fig9.pdf} \caption{(a) The decay rate $\gamma(g)$ as a function of $g$ and $R_\mathrm{c}$, for the two-boson system. (Inset) The susceptibility $\chi(g) = \gamma^{-1} (\partial \gamma / \partial g)$. (b) The ratio $J_0/J$, expressing the relative participation of pair tunneling in the overall tunneling dynamics of the two-boson system. Interaction strength $g$ is expressed in units of $\sqrt{\hbar^3\omega/m}$, interaction range in units of $\sqrt{\hbar/m\omega}$, decay rate in units of $\omega$, susceptibility in units of $\sqrt{m/\hbar^3\omega}$.} \label{fig:BosonR-DecayRate} \end{figure} It is seen that in the $R_\mathrm{c} \rightarrow 0$ limit, the decay rate $\gamma(g)$ displays a characteristic change in behavior approximately around the critical interaction strength $g_0 \approx -0.9$, so that the growth of $\gamma(g)$ is much faster above this point than below it, and a peak appears in $\chi(g)$. This change in behavior of $\gamma(g)$ is associated with the switch to the regime dominated by pair tunneling. Below the critical interaction strength $g_0$, sequential tunneling is suppressed, and the much slower pair tunneling is almost the only available decay mechanism \cite{2018-Dobrzyniecki-PRA}. As $R_\mathrm{c}$ increases from zero, the characteristic shape of $\gamma(g)$ and $\chi(g)$ is preserved (including the transition at some specific point $g_0$), but the sensitivity of the decay rate to the interactions is modified. Specifically, as $R_\mathrm{c}$ increases, the decay rate becomes less sensitive to a change in interaction strength. In the $R_\mathrm{c} \to \infty$ limit, the interaction is not felt at all and $\gamma(g)$ is independent of $g$. The critical value $g_0$ is dependent on $R_\mathrm{c}$, and it moves towards stronger attractive interactions as $R_\mathrm{c}$ increases. This effect can likewise be treated as a reflection of an analogous behavior of the total system energy. As explained previously, $g_0$ is approximately equal to the interaction strength for which $E_\mathrm{INI}(g_0)$ equals the energy of a single trapped particle, $E_\mathrm{INI}(g_0) = 0.5$. The energy $E_\mathrm{INI}(g,R_\mathrm{c})$ becomes less sensitive to $g$ as $R_\mathrm{c}$ increases, and so lowering $E_\mathrm{INI}$ below this energy threshold requires stronger attractive interactions. In the $R_\mathrm{c} \to \infty$ limit, the interactions are not felt at all and thus $g_0$ approaches minus infinity. To show that $g_0$ indeed corresponds to a transition between two different dynamical regimes, we calculate the relative participation of the two decay mechanisms (pair and sequential tunneling) by calculating the probability fluxes through the potential barrier (for details of this procedure, see \cite{2018-Dobrzyniecki-PRA}). This participation is expressed by the ratio $J_0/J$, where $J$ is the total probability flux going out of the region $\mathbf{P}_2$, and $J_0$ is the total flux going directly from region $\mathbf{P}_2$ into $\mathbf{P}_0$. Therefore, the ratio $J_0/J$ expresses the relative probability that the initial state will decay by the emission of a bound pair, as opposed to a single boson. In Fig.~\ref{fig:BosonR-DecayRate}b we show the relative participation $J_0/J$ as a function of $g$, for different $R_\mathrm{c}$. It can be seen that an abrupt transition between two regimes indeed occurs at $g_0$.
For $g > g_0$ the participation $J_0/J$ is near zero, indicating that nearly the entire decay takes place via sequential tunneling. For $g < g_0$, on the other hand, $J_0/J$ is close to one, indicating a near-total dominance of pair tunneling. It should be noted that the above analysis has significance for experimental practice, since it points to a method of achieving more complete experimental control over the properties of the tunneling system. Specifically, by tuning the parameter $R_\mathrm{c}$ one can adjust the value of the critical interaction strength $g_0$ at which the dominant tunneling mechanism changes. Conversely, experimentally finding the value of $g_0$ can help in determining the effective interaction range $R_\mathrm{c}$. \subsection{Two-fermion decay rate} Now let us compare the above results with the case of the two-fermion system. In Fig.~\ref{fig:FermionR-DecayRates}a, we show the determined values of the decay rate $\gamma(g)$ and its susceptibility $\chi(g)$ as a function of $g$, for different interaction ranges $R_\mathrm{c}$. It is seen that the decay rate behaves much the same as in the boson case, and in particular it is possible to identify an interaction strength $g_0$ at which the behavior of $\gamma(g)$ changes abruptly and a maximum appears in $\chi(g)$. The value of $g_0$, as in the bosonic case, can be approximately determined as the interaction strength for which the total energy of the system equals the one-particle energy, $E_\mathrm{INI}(g_0) = 0.5$. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{Fig10.pdf} \caption{(a) The decay rate $\gamma(g)$ as a function of $g$ and $R_\mathrm{c}$, for the two-fermion system. (Inset) The susceptibility $\chi(g) = \gamma^{-1} (\partial \gamma / \partial g)$. (b) The ratio $J_0/J$, expressing the relative participation of pair tunneling in the overall tunneling dynamics of the two-fermion system. Interaction strength $g$ is expressed in units of $\sqrt{\hbar^3\omega/m}$, interaction range in units of $\sqrt{\hbar/m\omega}$, decay rate in units of $\omega$, susceptibility in units of $\sqrt{m/\hbar^3\omega}$.} \label{fig:FermionR-DecayRates} \end{figure} Analogously to the two-boson system, the dependence of $\gamma$ on $R_\mathrm{c}$ mimics the previously observed behavior of the initial energy $E_\mathrm{INI}(g)$ for two fermions. Thus, in the contact interaction limit ($R_\mathrm{c} \to 0$), the decay rate becomes independent of $g$ as the interactions vanish for fermionic atoms. For increasing $R_\mathrm{c}$, the sensitivity of $\gamma$ to a change of the interaction strength $g$ grows, in contrast to the two-boson case. It should be noted, however, that this trend applies only to fairly small interaction ranges $R_\mathrm{c} \lesssim 1.0$. In the limit $R_\mathrm{c} \to \infty$ the decay rate approaches a constant, just as in the $R_\mathrm{c} \to 0$ case, for the same reason as in the bosonic case. Thus, for very high $R_\mathrm{c}$ (which are outside the scope of our work) the trend of increasing sensitivity is predicted to reverse. For example, it can be seen that at $R_\mathrm{c} = 1.5$ the slope of $\gamma(g)$ does not increase further, but is very close to that for $R_\mathrm{c} = 1.0$. In Fig.~\ref{fig:FermionR-DecayRates}b we show the pair tunneling participation $J_0/J$ as a function of $g$ for different interaction ranges $R_\mathrm{c}$.
Comparing Fig.~\ref{fig:FermionR-DecayRates}b with Fig.~\ref{fig:BosonR-DecayRate}b, we see that, similarly to bosons, the interaction strength $g_0$ corresponds to a rapid transition between regimes dominated by pair and sequential tunneling. For small $R_\mathrm{c}$, increasing the interaction range causes the value of $g_0$ to move towards weaker attractions, which is quite opposite to the behavior of bosonic systems. However, this trend, too, is expected to reverse for larger interaction ranges. In the limit $R_\mathrm{c} \rightarrow 0$ fermion pairing and pair tunneling vanish completely, and so in this limit the value of $g_0$ approaches minus infinity, just as in the bosonic case. It is also worth noting that the magnitude of $g_0$ for the two-fermion system is significantly larger than in the bosonic case. To illustrate this, one may note that, while for the bosonic case changing the interaction range from $R_\mathrm{c}=0.5$ to $R_\mathrm{c} = 1.5$ causes a relatively small shift of $g_0$, for the fermionic case the analogous shift of $g_0$ is an order of magnitude greater. This can be explained by the fact that fermions in general feel the interactions less strongly than bosons, and by the fact that, for fermions, pair tunneling does not appear at all until the interaction strength $g$ is below a value $g_\mathrm{pair} \sim -R_\mathrm{c}^{-1}$. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{Fig11.pdf} \caption{The values of two critical interaction strengths of the two-fermion system, as a function of $R_\mathrm{c}$: the value of $g_\mathrm{pair}$, below which fermions can form bound pairs, and the value $g_0$, below which the system transitions to a regime dominated by pair tunneling. Interaction strength is expressed in units of $\sqrt{\hbar^3\omega/m}$, interaction range in units of $\sqrt{\hbar/m\omega}$.} \label{fig:FermionR-CriticalInteractionStrength} \end{figure} Similarly to the bosonic case, by tuning $R_\mathrm{c}$ one can manipulate the value of $g_0$. Furthermore, as the value $g_\mathrm{pair}$ also depends on $R_\mathrm{c}$, it can be treated as a second tunable parameter. In Fig.~\ref{fig:FermionR-CriticalInteractionStrength} we show $g_\mathrm{pair}$ and $g_0$ as functions of $R_\mathrm{c}$. It is seen that they can be regulated fairly extensively (though not independently) by tuning the interaction range $R_\mathrm{c}$, opening new avenues for experimental control over the system properties. Conversely, one can attempt to measure $g_\mathrm{pair}$ and/or $g_0$ in order to determine the value of $R_\mathrm{c}$. It is also worth noting that the value of the transition interaction strength $g_0$ can be manipulated to some degree by changing the shape of the potential outside of the well \cite{2018-Dobrzyniecki-PRA}, which suggests an additional way to change $g_0$ and $g_\mathrm{pair}$ independently of each other. \section{Conclusion} \label{sec:conclusion} We have examined the dynamical properties of a system of two Rydberg-dressed bosons or fermions with finite-range interactions, tunneling from a leaky potential well into the open space. The nature of the tunneling dynamics is found to depend significantly on the interaction strength. For the system with repulsive interactions, only sequential tunneling of the two particles is available, independently of quantum statistics. For attractive interactions, the tunneling depends significantly on statistics. In the case of bosons, pair tunneling can be observed at any strength of the attractive interactions.
In the case of fermions, pair tunneling can occur only for sufficiently strong attractive interactions $g < g_\mathrm{pair}$, with $g_\mathrm{pair}$ dependent on the interaction range. The relative participation of pair tunneling in the overall tunneling process depends on the strength of the attractive interaction. We find that the dominant decay mechanism changes abruptly as the interaction strength crosses a critical value $g_0$. For weaker attractions $(g > g_0)$, the decay process occurs mainly by the sequential emission of two particles from the well. For stronger attractions $(g < g_0)$, sequential tunneling is suppressed, and the particles tunnel mainly as bound pairs. This transition occurs in a similar way for both bosonic and fermionic systems. However, the evolution of two-particle density correlations shows visible differences between the two cases. The interaction strengths required to reach the regime of dominant pair tunneling are found to be significantly different for bosons and fermions. For fermions, a much greater strength $|g|$ of attractive interactions is needed. This is due both to the vanishing of the fermion wave function at $x_1=x_2$, which weakens the influence of attractive interactions, and to the fact that antisymmetric bound pair states become available only for $g < g_\mathrm{pair} < 0$, with $|g_\mathrm{pair}|$ taking particularly large values at short interaction ranges. Changing the interaction range $R_\mathrm{c}$ affects the decay rate of the system and the participation of the different decay mechanisms in quite opposite ways for bosons and fermions. For bosons, increasing $R_\mathrm{c}$ (with the interaction strength fixed) diminishes the effect of the interactions, so that the decay rate approaches the value of the non-interacting system. Also, the pair tunneling induced by attractive interactions becomes less dominant. For fermions, the situation is different because in the limit of zero-range interactions ($R_\mathrm{c} \to 0$) the interaction vanishes completely. As a result, in this case increasing $R_\mathrm{c}$ instead enhances the effect of the interactions. However, in the limit of very large interaction ranges, the decay rate approaches the non-interacting value for both bosons and fermions. Compared to the results for a system of contact-interacting bosons \cite{2018-Dobrzyniecki-PRA}, the tunneling of atoms with long-range interactions remains qualitatively similar. However, the use of longer-range interactions makes it possible to extend the investigation to systems of identical fermions as well. The longer-range interactions also give an additional dimension of control over the system properties. Specifically, the critical values of the interaction strength, $g_0$ (below which pair tunneling becomes dominant) and $g_\mathrm{pair}$ (below which bound fermion pairs can appear), depend on the interaction range $R_\mathrm{c}$. This indicates that the interaction range $R_\mathrm{c}$ can be treated as an additional tunable parameter allowing more complete control over the system properties. In light of the recent experiments with few-body tunneling systems \cite{2012-Zurn-PRL,2013-Zurn-PRL} and Rydberg-dressed atoms \cite{2016-Jau-NatPhys,2016-Zeiher-NatPhys,2017-Zeiher-PRX,2019-Arias-PRL,2020-Borish-PRL}, the results presented in this paper have potential significance for future research in this direction.
Finally, it should be noted that in this work we have deliberately limited ourselves to interaction ranges $R_\mathrm{c} \le 1.5$, comparable to the spatial extent of the initial wave function. For larger ranges, the existing model may break down and the dynamical properties may become more complicated. In particular, the decay of the system may become significantly non-exponential in such cases, necessitating a new approach to the analysis. \section{Acknowledgments} This work was supported by the (Polish) National Science Center Grant No. 2016/22/E/ST2/00555.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \renewcommand{\theequation}{1.\arabic{equation}} \setcounter{equation}{0} Einstein's theory of general relativity (GR) was proposed over a century ago and has successfully passed all the observational tests carried out so far. In the weak field regime, GR was tested with various ground- and space-based precision experiments, including the three classical tests, namely the perihelion advance of Mercury, the deflection of light by the Sun, and the gravitational redshift \cite{will2014}. In the strong field regime, GR was also confronted with observations of binary pulsar systems \cite{pulsar1, pulsar2}, the extraordinary observation of the M87* black hole shadow by the Event Horizon Telescope Collaboration \cite{EHT}, and observations of gravitational waves generated by the merging of black holes and/or neutron stars by the LIGO experiment \cite{ligo}. These observations are all remarkably consistent with the predictions of GR. Despite all these successes, there are also various reasons to believe that GR may not be the complete theory of gravity. First, the accelerated expansion of the universe \cite{dark_energy1, dark_energy2, dark_energy3, dark_energy4} and the inconsistencies in galaxy rotation curves \cite{dark_energy4, dark_matter1,dark_matter2,dark_matter3,dark_matter4} are difficult to explain within the framework of GR without introducing dark energy and dark matter. Second, the standard inflationary paradigm in the early universe suffers from the trans-Planckian problem \cite{inflation1, inflation2}. Third, Einstein's GR does not employ any quantum principles, and the unification of GR and quantum mechanics remains an unsolved problem \cite{QG1,QG2}. In addition, GR inevitably leads to singularities both at the beginning of the universe \cite{singularity1, singularity2} and in the interiors of black hole spacetimes \cite{hawking}, at which all known laws of physics become invalid. All these issues indicate that classical GR might need to be modified. In particular, the spacetime singularities ought to be resolved once quantum gravitational effects are taken into account. Recently, in the context of LQG, a spherically symmetric spacetime, known as the LQG-corrected Schwarzschild spacetime or self-dual spacetime, was constructed \cite{LQG_BH} \footnote{{In the last couple of years, loop quantum black holes (LQBHs) have been extensively studied, see, for instance, \cite{AOS18a,AOS18b,BBy18,ABP19,ADL20}. For more details, we refer readers to the review articles, \cite{Perez17,Rovelli18,BMM18,Ashtekar20}.}}. In particular, it has been shown that this self-dual spacetime is regular and free of any spacetime curvature singularity. In the construction of the solution, the minimum area gap of full LQG is the fundamental ingredient for resolving the black hole spacetime singularity problem. Moreover, the deviation of the self-dual spacetime from the Schwarzschild one can be characterized by the minimal area and the Barbero-Immirzi parameter arising from LQG. As mentioned in \cite{Sahu:2015dea, Modesto:2009ve}, another important aspect of this solution is that it is self-dual in the sense of T-duality. One can verify that under the transformation \textcolor{black}{$r \to a_0/r$, with $a_0$ being related to the minimal area gap $A_{\rm min}$ of LQG via $a_0=A_{\rm min}/8\pi$}, the metric remains invariant, with a suitable re-parameterization of the other variables, hence satisfying T-duality.
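The invariance of the angular part of the metric under this transformation is elementary to check symbolically. A minimal sympy sketch (our own illustration), using the angular metric function $h(r)=r^2+a_0^2/r^2$ quoted in Sec.~II below; the invariance of the full metric additionally requires the re-parameterization of the remaining variables mentioned above, which we do not reproduce here:

\begin{verbatim}
import sympy as sp

r, a0 = sp.symbols('r a_0', positive=True)
h = r**2 + a0**2 / r**2

# Under the T-duality transformation r -> a0/r the angular
# metric function is exactly invariant:
h_dual = h.subs(r, a0 / r)
print(sp.simplify(h_dual - h))   # -> 0
\end{verbatim}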
An important question now is whether the LQG effects of the self-dual spacetime can leave any observational signatures in current and/or forthcoming experiments, so that LQG can be tested or constrained directly by observations. Such considerations have attracted a great deal of attention lately, and several phenomenological implications of the self-dual spacetime have already been investigated \cite{Alesci:2011wn, Chen:2011zzi, Dasgupta:2012nk, Hossenfelder:2012tc, Barrau:2014yka, Sahu:2015dea, Cruz:2015bcj, add1, add2}. In particular, the LQG effects on the shadow of a rotating black hole have been discussed in detail, and their observational implications for the latest Event Horizon Telescope (EHT) observation of the supermassive black hole M87* have also been explored \cite{Liu:2020ola}. In addition, with the calculation of the gravitational lensing in the self-dual spacetime, the polymeric function has been constrained by using geodetic very-long-baseline interferometry data on the solar gravitational deflection of radio waves \cite{Sahu:2015dea}, \textcolor{black}{which leads to a constraint on the polymeric parameter $\delta$ of LQG, $\delta < 0.1$}. In this paper, we study the effects of LQG on observations conducted in the Solar System. We calculate in detail the effects of the polymeric function in the self-dual spacetime on the deflection of light by the Sun, the gravitational time delay, the perihelion advance, and the geodetic precession of a spinning object. With these theoretical calculations, we derive observational constraints from several recent observational datasets, including the VLBI observation of quasars, the Cassini experiment, the MESSENGER mission, the LAGEOS satellites, observations of the S2 star at the Galactic center, Gravity Probe B, and lunar laser ranging data. Among these, we find that the tightest constraint comes from the measurement of the gravitational time delay by the Cassini mission. In addition, we also discuss the potential constraint that can be achieved in the near future by the joint European-Japanese BepiColombo project. As more observations and experiments are carried out, the constraints on LQG effects are expected to improve dramatically, leading to a deeper understanding of LQG. \textcolor{black}{Finally, we would like to mention that we only consider the static self-dual spacetime in this paper and ignore the effects of the angular momentum of the spacetime. For all the observational effects considered in this paper, the effects due to the rotation of the Sun or Earth are expected to be very small.} The plan of the rest of the paper is as follows. In Sec. II, we present a very brief introduction to the self-dual spacetime, while in Sec. III, we first consider the geodesic equations for both massless and massive objects in this self-dual spacetime. Using these equations, we then derive in detail the effects of the polymeric function $P$ on observations conducted in the Solar System, including the deflection angle of light by the Sun, the gravitational time delay, and the perihelion advance. The upper bounds on $P$ are obtained by comparing the theoretical predictions with observational data. Then, in Sec. IV, we study a spinning object in the self-dual spacetime and derive the geodetic precession of its spin vector, from which we obtain constraints on the polymeric function $P$ by using the Gravity Probe B and lunar laser ranging data.
A brief summary of our main results and some discussions are presented in Sec. V. \section{Equation of motion for test particles in the self-dual spacetime \label{secrot}} \renewcommand{\theequation}{2.\arabic{equation}} \setcounter{equation}{0} We start with a brief introduction of the effective self-dual spacetime, which arises from the quantization of a symmetry-reduced spacetime in LQG. The metric of the self-dual spacetime is given by \cite{LQG_BH} \begin{equation}\label{1} ds^2= - f(r)dt^2 + \frac{dr^2}{g(r)} + h(r)(d\theta^2+\sin^2\theta d\phi^2), \end{equation} where the metric functions $f(r)$, $g(r)$, and $h(r)$ are given by \begin{eqnarray} f(r)&=&\frac{(r-r_+)(r-r_-)(r+r_*)^2}{r^4+a_0^2},\nonumber\\ g(r)&=&\frac{(r-r_+)(r-r_-)r^4}{(r+r_*)^2(r^4+a_0^2)},\nonumber\\ h(r)&=&r^2+\frac{a_0^2}{r^2}. \end{eqnarray} Here $r_+=2 M/(1+P)^2$ and $r_{-} = 2 M P^2/(1+P)^2$ denote the locations of the two horizons, and $r_{*}\equiv \sqrt{r_+ r_-} = 2 MP/(1+P)^2$, with $M$ denoting the ADM mass of the solution and $P$ being the polymeric function \begin{eqnarray} P \equiv \frac{\sqrt{1+\epsilon^2}-1}{\sqrt{1+\epsilon^2}+1}, \end{eqnarray} where $\epsilon$ denotes the product of the Immirzi parameter $\gamma$ and the polymeric parameter $\delta$, i.e., $\epsilon=\gamma \delta \ll 1$. From considerations of black hole entropy \cite{KM04}, the Immirzi parameter is determined to be $\gamma \simeq 0.2375$. The parameter \begin{eqnarray} a_0 = \frac{A_{\rm min}}{8\pi}, \end{eqnarray} \textcolor{black}{where $A_{\rm min}$} is the minimum area gap of LQG. \textcolor{black}{Here we would like to emphasize that there exist many choices for the value of $\gamma$, arising from different considerations, see \cite{Achour:2014rja, Frodden:2012dq, Achour:2014eqa, Han:2014xna, Carlip:2014bfa, Taveras:2008yf} and references therein. For example, its value can even be complex \cite{Achour:2014rja, Frodden:2012dq, Achour:2014eqa, Han:2014xna, Carlip:2014bfa} or considered as a scalar field whose value would be fixed by the dynamics \cite{Taveras:2008yf}. In this paper, in order to derive the observational constraints on the polymeric parameter $\delta$ from the constraints on $P$, we adopt the commonly used value $\gamma = 0.2375$ from the black hole entropy calculation \cite{KM04}. Thus it is important to mention that the constraints on $\delta$ we obtain in the following sections depend on this choice of the value of $\gamma$.} By taking $a_0=0=P$, it is easy to see that the above solution reduces exactly to the Schwarzschild black hole. According to \cite{Modesto:2009ve,Sahu:2015dea}, it is natural to assume that the minimal area \textcolor{black}{gap} in LQG is $A_{\min} \simeq 4 \pi \gamma \sqrt{3}\, l_{\rm Pl}^2$ with $l_{\rm Pl}$ being the Planck length. \textcolor{black}{In this sense, $a_0$ is proportional to $l_{\rm Pl}^2$ and thus is expected to be negligible. On the other hand, in order to explore the effects of $a_0$ and $P$ in the Solar System, it is natural to expand (\ref{1}) in powers of $1/r$. It is clear that the leading corrections from the parameter $P$ enter at order $1/r$, while those from $a_0$ enter only at order $1/r^4$.} Thus, phenomenologically, the effects of $a_0$ are expected to be very small at the scale of the Solar System, so we can safely set $a_0=0$.
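This hierarchy of corrections can be checked symbolically. The following is a minimal sympy sketch (our own illustration, not part of the original derivation) expanding the metric function $f(r)$ in powers of $1/r$; the functions $g(r)$ and $h(r)$ behave analogously:

\begin{verbatim}
import sympy as sp

r, M, P, a0 = sp.symbols('r M P a_0', positive=True)
rp = 2*M/(1 + P)**2           # r_+
rm = 2*M*P**2/(1 + P)**2      # r_-
rs = 2*M*P/(1 + P)**2         # r_* = sqrt(r_+ r_-)

f = (r - rp)*(r - rm)*(r + rs)**2 / (r**4 + a0**2)

# Expand in powers of 1/r: substitute r = 1/u and expand around u = 0.
u = sp.symbols('u', positive=True)
series = sp.series(f.subs(r, 1/u), u, 0, 5).removeO()
print(sp.collect(sp.expand(series), u))
# The coefficient of u (i.e. of 1/r) involves M and P, while a_0
# first enters at order u**4 (i.e. 1/r^4), as stated in the text.
\end{verbatim}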
\section{Classical tests of the self-dual spacetime} \renewcommand{\theequation}{3.\arabic{equation}} \setcounter{equation}{0} Let us first consider the evolution of a massive particle in the self-dual spacetime. We start with the Lagrangian of the particle, \begin{eqnarray} \mathcal{L} = \frac{1}{2}g_{\mu \nu} \frac{d x^\mu} {d \lambda } \frac{d x^\nu}{d \lambda}, \end{eqnarray} where $\lambda$ denotes the affine parameter of the world line of the particle. For massless particles we have $\mathcal{L}=0$, and for massive ones we have $\mathcal{L} <0$. The geodesic motion of a particle is then governed by the Euler-Lagrange equation, \begin{eqnarray} \frac{d}{d\lambda} \left(\frac{\partial \mathcal{L}}{\partial \dot x^\mu}\right) - \frac{\partial \mathcal{L}}{\partial x^\mu}=0, \end{eqnarray} where a dot denotes the derivative with respect to the affine parameter $\lambda$. The generalized momentum $p_\mu$ of the particle can be obtained via \begin{eqnarray} p_{\mu} = \frac{\partial \mathcal{L}}{\partial \dot x^{\mu}} = g_{\mu\nu} \dot x^\nu, \end{eqnarray} which leads to four equations of motion for a particle with energy $\tilde E$ and angular momentum $\tilde l$, \begin{eqnarray} p_t &=& g_{tt} \dot t = - \tilde{E},\\ p_\phi &=& g_{\phi \phi} \dot \phi = \tilde{l}, \\ p_r &=& g_{rr} \dot r,\\ p_\theta &=& g_{\theta \theta} \dot \theta. \end{eqnarray} From these expressions we obtain \begin{eqnarray} \dot t = - \frac{ \tilde{E} }{ g_{tt} } = \frac{\tilde{E}}{f(r)},\\ \dot \phi = \frac{ \tilde{l}}{g_{\phi\phi}} = \frac{\tilde{l}}{h(r) \sin^2\theta}. \end{eqnarray} Note that one has $ g_{\mu \nu} \dot x^\mu \dot x^\nu = \varepsilon$ with $\varepsilon=-1$ for timelike geodesics and $\varepsilon= 0$ for null geodesics. Then, we find \begin{eqnarray} g_{rr} \dot r^2 + g_{\theta \theta} \dot \theta^2 &=& \varepsilon - g_{tt} \dot t^2 - g_{\phi\phi} \dot \phi^2\nonumber\\ &=& \varepsilon+\frac{\tilde{E}^2}{f(r)}- \frac{\tilde{l}^2}{h(r)}. \end{eqnarray} Since we are mainly interested in motion in the equatorial plane, we set $\theta=\pi/2$ and $\dot \theta=0$. Then the above expression can be simplified into the form \begin{eqnarray} \dot r ^2 = \tilde E^2 - V_{\rm eff} , \end{eqnarray} where $V_{\rm eff}$ is the effective potential of the particle, defined as \begin{eqnarray} V_{\rm eff} = \tilde E^2 - \left( \varepsilon+\frac{\tilde{E}^2}{f(r)}- \frac{\tilde{l}^2}{h(r)}\right) g(r).\label{Veff} \end{eqnarray} Then, by using $\dot \phi = \tilde l/h(r)$, one obtains \begin{eqnarray}\label{rphi} \left(\frac{dr}{d\phi}\right)^2 =\left[ \frac{h^2(r)}{\tilde l^2} \varepsilon + \frac{\tilde E^2 h^2(r)}{\tilde l^2 f(r)} - h(r)\right] g(r). \end{eqnarray} In the following, we shall apply this equation to the calculations of the light deflection angle, the gravitational time delay, and the perihelion advance in the self-dual spacetime. \subsection{Light deflection angle} Let us first investigate the light deflection angle in the self-dual spacetime. We start from Eq.~(\ref{rphi}), in which we have $\varepsilon=0$ for light. Introducing the impact parameter \begin{eqnarray} b\equiv \frac{\tilde l}{\tilde E}, \end{eqnarray} we find that Eq.~(\ref{rphi}) reduces to \begin{eqnarray}\label{phir} \frac{d\phi}{dr} = \pm \frac{1}{\sqrt{h(r)g(r)}} \left[\frac{h(r)}{b^2 f(r)} -1\right]^{-1/2}, \end{eqnarray} where $\pm$ correspond to increasing and decreasing $r$, respectively. Then the distance of closest approach $r_0$ is defined by $dr/d\phi|_{r=r_0}=0$, for which we have \begin{eqnarray} b^2 = \frac{h(r_0)}{f(r_0)}.
\end{eqnarray} The light trajectory is deflected by an angle, \begin{eqnarray} \Delta \phi &=& 2 \int_{r_0}^{+\infty} \frac{d\phi}{dr}dr - \pi, \end{eqnarray} with $d\phi/dr$ being given by (\ref{phir}). Considering the weak field approximation and then expanding the above integral in terms of the polymeric function $P$, one obtains the deflection angle of the light, \begin{eqnarray} \Delta\phi &\simeq &\frac{4 M}{r_0} \Big(1-2 P+\mathcal{O}(P^2)\Big)\nonumber\\ &=&\Delta \phi^{\rm GR} (1-2 P). \end{eqnarray} To obtain experimental constraints from light-deflection measurements by the Sun, let us express the deflection angle $\Delta \phi$ in terms of the value of $\Delta \phi^{\rm GR}$ for the Sun, \begin{eqnarray} \Delta \phi = 1.75'' (1-2 P). \end{eqnarray} The best available measurement of the solar gravitational deflection comes from the astrometric observations of quasars on the solar background performed with very-long-baseline interferometry (VLBI) \cite{VLBI_deflection}, which leads to the constraint on the polymeric function $P$, \begin{eqnarray} -2.5 \times 10^{-5} < P< 1.25 \times 10^{-4} \;\;\;\; (68\% \;\;{\rm C.L.}). ~~~~~ \end{eqnarray} Since $P>0$ by definition, one thus has \begin{eqnarray} 0< P < 1.25 \times 10^{-4} \;\;\;\; (68\% \;\;{\rm C.L.}). \end{eqnarray} For $\gamma = 0.2375$, the above constraint can be transformed into a constraint on the polymeric parameter $\delta$ as \begin{eqnarray} | \delta| < 0.0942 \;\; \;\;(68\% \;\;{\rm C.L.}). \end{eqnarray} \textcolor{black}{It is worth noting that the above constraint is consistent with that obtained in \cite{Sahu:2015dea} using the VLBI data in \cite{add_VLBI}. } \subsection{Gravitational Time Delay} We consider the time delay of a radar signal that is sent from the Earth (or a spacecraft), passes near the Sun, and reflects off another planet or spacecraft. The time delay can also be studied by using Eq.~(\ref{phir}), from which one obtains \begin{eqnarray} \frac{dt}{dr} &=& \frac{dt}{d\phi} \frac{d\phi}{dr} = \frac{d\phi}{dr} \frac{\dot t}{\dot \phi}\nonumber\\ &=& \pm \frac{1}{ b } \frac{1}{\sqrt{f(r)g(r)}} \left[\frac{1}{b^2} -\frac{f(r)}{h(r)}\right]^{-1/2}. \end{eqnarray} Then the time spent by a radar signal that travels from the point of closest approach $r_0$ to the point $r_A$ can be obtained by performing the integral \begin{eqnarray} t(r_A) = \frac{1}{ b } \int_{r_0}^{r_A} \frac{1}{\sqrt{f(r)g(r)}} \left[\frac{1}{b^2} -\frac{f(r)}{h(r)}\right]^{-1/2} dr. ~~~~~ \end{eqnarray} Again considering the weak field approximation, one finds \begin{eqnarray} t(r_A) &\simeq & \sqrt{r_A ^2- r_0^2} + M \sqrt{\frac{r_A -r_0}{r_A+r_0}} +2 M {\rm arccosh}\left(\frac{r_A}{r_0}\right) \nonumber\\ && - 4 M P \left( \sqrt{\frac{r_A -r_0}{r_A+r_0}} + {\rm arccosh}\left(\frac{r_A}{r_0}\right)\right). \end{eqnarray} The round-trip time delay of the radar signal can then be divided into two cases: inferior conjunction and superior conjunction. In the inferior conjunction case, the planet (or spacecraft, denoted by B), which reflects the radar signal, is located between the Earth (or spacecraft, denoted by A) and the Sun.
For this case, the time delay due to the self-dual spacetime can be obtained as \begin{eqnarray} \Delta t_I \simeq 4 M \ln\frac{r_A}{r_B} \times (1- 2 P) = \Delta t_I^{\rm GR} (1- 2P).\nonumber\\ \end{eqnarray} In the superior conjunction case, the planet that reflects the radar signal and the Earth are on opposite sides of the Sun, and the time delay for this case can be written as \begin{eqnarray} \Delta t_{S} &\simeq& 4 M+4 M \ln \frac{4 r_A r_B}{r_0^2} - 16 MP - 8 MP \ln \frac{4 r_A r_B}{r_0^2} \nonumber\\ &=& \Delta t_{S}^{\rm GR} - 16 MP - 8 MP \ln \frac{4 r_A r_B}{r_0^2}. \end{eqnarray} Here we use the experimental results of the Cassini satellite for the time delay to constrain the polymeric function in the self-dual spacetime \cite{cassini}. The Cassini experiment does not measure the time delay directly, but instead the relative change in the frequency in the superior conjunction case, \begin{eqnarray} \delta \nu = \frac{\nu(t) - \nu_0}{\nu_0} = \frac{d}{dt} \Delta t_{S}, \end{eqnarray} where $\nu_0$ is the frequency of the radio waves emitted from the Earth, and $\nu(t)$ is the frequency of the waves after being reflected back to the Earth. Hence, the relative shift in the frequency is given by \begin{eqnarray} \delta \nu \simeq - \frac{8 M (1- 2 P) }{r_0} \frac{dr_0(t)}{dt}. \end{eqnarray} The Cassini experiment measured the frequency shift for approximately 25 days, from 12 days before to 12 days after superior conjunction. During one day the distance of closest approach of the radio waves changes by about $1.5 R_{\odot}$, where $R_{\odot}$ denotes the radius of the Sun. Thus, the frequency shift induced by the polymeric function $P$ is \begin{eqnarray} \delta \nu_P \simeq \frac{256}{27} P \frac{M_{\odot}}{R_{\odot}} v_{E}, \end{eqnarray} in which $v_E = dr_0/dt$ is the velocity of the Earth. In the Cassini experiment, the accuracy of the relative shift in the frequency is $10^{-14}$ \cite{cassini}, from which one obtains the constraint \begin{eqnarray} \delta \nu_P < 10^{-14}, \end{eqnarray} which leads to \begin{eqnarray} 0<P< 5.5\times 10^{-6}. \end{eqnarray} This constraint is stronger than that obtained from the observations of the deflection angle. Similarly, if one takes $\gamma=0.2375$, the above constraint leads to the constraint on the polymeric parameter \begin{eqnarray} |\delta | < 0.0199. \end{eqnarray} \subsection{Perihelion Advance} Now let us turn to massive particles moving in the self-dual spacetime and study the perihelion advance of their orbits. We start from Eq.~(\ref{rphi}) with $\varepsilon=-1$ in terms of a new variable $x=1/r$, which yields \begin{eqnarray} \left(\frac{d x}{d\phi} \right)^2= x^4 \left[ - \frac{h^2(r)}{\tilde l^2} + \frac{\tilde E^2 h^2(r)}{\tilde l^2 f(r)} - h(r)\right] g(r).\nonumber\\ \end{eqnarray} Differentiating it with respect to $\phi$ and then expanding the equation by assuming that $P$ is a small parameter, one finds that the orbits of the massive particles are governed by the following differential equation, \begin{eqnarray} &&\frac{d^2x}{d\phi^2} + x - \frac{M}{\tilde l^2} \simeq 3 M x^2 \nonumber\\ &&~~~~~~ - 4 M \left(\frac{\tilde E^2}{\tilde l^2} + \frac{2 M}{\tilde l^2} x - 4 M x^3\right) P. \label{EE} \end{eqnarray} The right-hand side of the above equation can be treated as perturbations to Newtonian gravity.
By ignoring the perturbation terms, the unperturbed solution of the above equation is given by \begin{eqnarray} x_0 = \frac{M}{\tilde l^2} (1+ e \cos\phi), \end{eqnarray} which describes an elliptical orbit with eccentricity $e$. When the perturbations on the right-hand side of (\ref{EE}) are included, the elliptical orbit acquires a small correction, i.e., $x=x_0 + x_1$, where $x_1$ satisfies \begin{eqnarray} && \frac{d^2 x_1}{d\phi^2} + x_1 \simeq 3M x_0^2\nonumber\\ &&~~~ - 4 M \left(\frac{\tilde E^2}{\tilde l^2} + \frac{2 M}{\tilde l^2} x_0 - 4 M x_0^3\right) P. \end{eqnarray} Substituting the solution $x_0=M (1+e \cos\phi)/\tilde l^2$ into the above equation, one finds \begin{eqnarray} \frac{d^2 x_1}{d\phi^2} + x_1 = A_0 + A_1 \cos\phi + A_2 \cos^2 \phi + A_3 \cos^3 \phi ,\nonumber\\ \end{eqnarray} where \begin{eqnarray} A_0 &=& \frac{3 M^3}{\tilde l^4} - 4 M \left(\frac{\tilde E^2}{\tilde l^2} + \frac{2 M^2}{\tilde l^4} + \frac{4 M^4}{\tilde l^6}\right) P, \\ A_1 &=& \frac{3 M^3}{\tilde l^4} \left(2e - \frac{8}{3} e P + 12 e P \frac{M^2}{ \tilde l^2}\right), \\ A_2 &=& \frac{3 M^3}{\tilde l^4} \left( e^2 + 12 e^2 P \frac{M^2}{\tilde l^2} \right), \\ A_3 &=& \frac{3 M^3}{\tilde l^4} \times \frac{16}{3}e^3 \frac{M^2}{\tilde l^2}. \end{eqnarray} The solution for $x_1$ is then given by \begin{eqnarray} x_1 &=& A_0+\frac{A_2}{2} - \frac{A_2}{6} \cos (2 \phi) - \frac{A_3}{32} \cos (3 \phi) \nonumber\\ && + \left(\frac{1}{2} A_1 + \frac{3}{8} A_3\right) \phi \sin\phi. \end{eqnarray} In this solution, only the last term (in the second line) contributes to the perihelion advance; thus one can ignore the other terms in the solution and finally obtains \begin{eqnarray} x &\simeq & \frac{M}{\tilde l^2} (1+ e \cos \phi) + \left(\frac{1}{2} A_1 + \frac{3}{8} A_3\right) \phi \sin\phi \nonumber\\ &\simeq & \frac{M}{\tilde l^2} \left[1 + e \cos \left(\phi - \frac{\delta \phi_0}{2 \pi} \phi \right) \right] ,\label{orbit} \end{eqnarray} where \begin{eqnarray} \delta \phi_0 \simeq \frac{6 \pi M^2}{\tilde l^2} \left(1- \frac{4}{3}P\right), \label{delta_phi} \end{eqnarray} which is the angular shift of the perihelion per orbit. Now we would like to eliminate the angular momentum $\tilde{l}$ from (\ref{delta_phi}). Considering the orbit (\ref{orbit}), we find that the minimum radius $r_-$ and the maximum radius $r_+$ are attained at $(1-\delta\phi_0/2\pi)\phi=0$ and $(1-\delta\phi_0/2\pi)\phi=\pi$, respectively. Then, we find \begin{eqnarray} r_-= \frac{\tilde l^2}{M (1+e)}, \;\;\\ r_+=\frac{\tilde l^2}{M (1-e)}. \end{eqnarray} Thus, the semi-major axis $a_0$ of the ellipse (not to be confused with the LQG parameter $a_0$, which has been set to zero) is \begin{eqnarray} a_0=\frac{r_-+r_+}{2} = \frac{\tilde l^2}{M (1-e^2)}. \end{eqnarray} Using this expression, the perihelion advance per orbit can be expressed as \begin{eqnarray} \Delta \phi = \Delta \phi^{\rm GR} \left(1- \frac{4}{3}P\right), \end{eqnarray} where \begin{eqnarray} \Delta \phi^{\rm GR} = \frac{6\pi M}{a_0(1-e^2) }. \end{eqnarray} Note that by taking $P=0$, one recovers the classical result for the Schwarzschild spacetime.
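Before turning to observations, we note that the particular solution for $x_1$ given above can be verified symbolically by direct substitution into its differential equation. A minimal sympy sketch (our own illustrative check):

\begin{verbatim}
import sympy as sp

phi = sp.symbols('phi')
A0, A1, A2, A3 = sp.symbols('A_0 A_1 A_2 A_3')

x1 = (A0 + A2/2 - A2/6*sp.cos(2*phi) - A3/32*sp.cos(3*phi)
      + (A1/2 + sp.Rational(3, 8)*A3)*phi*sp.sin(phi))

lhs = sp.diff(x1, phi, 2) + x1
rhs = A0 + A1*sp.cos(phi) + A2*sp.cos(phi)**2 + A3*sp.cos(phi)**3

print(sp.simplify(sp.expand_trig(lhs - rhs)))   # -> 0
\end{verbatim}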
Let us now consider the observational constraints that can be imposed on the polymeric function $P$. We first consider the observation of the anomalous perihelion advance of Mercury. The currently most accurate determination comes from the MESSENGER mission \cite{message}, in which the contribution from the Schwarzschild-like precession is measured to be \begin{eqnarray} \Delta \phi = (42.9799 \pm 0.0009)'' /{\rm century}. \end{eqnarray} We use the observational error in the experimental data to compute upper bounds on the polymeric function $P$. For the motion of Mercury around the Sun, the observational error is $0.0009''/{\rm century}$. One expects that the contribution from LQG is less than the observational error. This procedure leads to a bound on the polymeric function $P$ of \begin{eqnarray} 0<P<1.57\times 10^{-5}. \end{eqnarray} From this bound, the polymeric parameter $\delta$ is constrained to be \begin{eqnarray} |\delta| < 0.033. \end{eqnarray} \textcolor{black}{In the above calculation, we have ignored the contributions to the perihelion advance from the Lense-Thirring effect due to the angular momentum of the Sun. This effect is proportional to the angular momentum of the Sun and is given by \cite{LT} \begin{eqnarray} \Delta \phi^{\rm LT} = - \frac{6 S_{\odot}\cos i}{a_0^3 (1-e^2)^{3/2}}, \end{eqnarray} where $S_{\odot}$ is the angular momentum of the Sun and $i$ is the inclination of the solar equator to Mercury's orbit plane. According to the analysis in \cite{message}, the contribution of $\Delta \phi^{\rm LT}$ per century is smaller than the uncertainty in the measurement of the perihelion advance per century; thus, for the purpose of constraining LQG effects in this work, we ignore the effects of $\Delta \phi^{\rm LT}$ and only consider the static case. } We then turn to the measured periapsis advance of the LAGEOS satellites around the Earth. Using 13 years of tracking data of the LAGEOS satellites, the precession of the periapsis of the LAGEOS II satellite was measured to be \cite{LAGEOS} \begin{eqnarray} \Delta \phi = \Delta \phi^{\rm GR} \Big[1+ (0.28 \pm 2.14)\times 10^{-3}\Big], \end{eqnarray} which corresponds to bounds on the polymeric function $P$ and parameter $\delta$ of \begin{equation} 0<P<0.0014, \end{equation} and \begin{equation} |\delta| < 0.32, \end{equation} respectively. On the other hand, the observations of stars orbiting the central black hole of the Milky Way galaxy provide a different environment for testing gravity in the strong-field regime. These stars have been observed for 27 years, and their orbital parameters can now be determined very accurately. Recently, the GRAVITY collaboration has detected the Schwarzschild precession of the S2 star to be \cite{S2} \begin{eqnarray} \Delta\phi =\Delta \phi^{\rm GR}(1.1 \pm 0.19), \end{eqnarray} where \begin{eqnarray} \Delta \phi^{\rm GR} = 12' \end{eqnarray} per orbital period from the prediction of GR. For the LQG corrections to the precession, this detection implies \begin{eqnarray} 0<P<0.0675,\;\;\; |\delta|<2.3. \end{eqnarray} \section{Geodesic precession of spinning objects in the self-dual spacetime} \renewcommand{\theequation}{4.\arabic{equation}} \setcounter{equation}{0} Now let us turn to the evolution of a spinning particle with four-velocity vector $u^\mu=dx^\mu/d\lambda$ and four-spin vector $s^\mu$ in the self-dual spacetime. The motion of this type of particle is governed by two equations, namely the geodesic equation \begin{eqnarray} \frac{du^\mu}{d\lambda} + \Gamma^\mu_{\nu \lambda} u^\nu u^\lambda=0, \end{eqnarray} and the parallel transport equation \begin{eqnarray} \frac{ds^\mu}{d\lambda} + \Gamma^\mu_{\nu \lambda} s^\nu u^\lambda=0,\label{spin} \end{eqnarray} where the four-velocity vector $u^\mu$ and four-spin vector $s^\nu$ satisfy the orthogonality condition \begin{eqnarray} u^\mu s_\mu=0.
\end{eqnarray} The spin vector $s^\mu$ also satisfies the normalization condition \begin{eqnarray} s^\mu s_\mu =1. \end{eqnarray} Since the self-dual spacetime considered here is spherically symmetric, we can work in the equatorial plane, i.e., with $\theta=\pi/2$, without loss of generality. To simplify the problem, we further assume that the test spinning particle moves in a circular orbit, i.e., $\dot r = 0=\dot \theta$. Then the four-velocity $u^\mu=\dot x^\mu$ can be expressed as follows in terms of the constants of motion $\tilde E$ and $\tilde l$, \begin{eqnarray} u^t = \dot t =\frac{\tilde E}{f(r)}, \label{ut}\\ u^\phi= \dot \phi = \frac{\tilde l}{h(r)}.\label{uphi} \end{eqnarray} One can define the angular velocity of the spinning particle as \begin{eqnarray} \Omega = \frac{u^\phi}{u^t} = \frac{\tilde l}{\tilde E} \frac{f(r)}{h(r)}. \end{eqnarray} Note that the radial and $\theta$ components of the four-velocity vanish since $\dot r=0=\dot \theta$. For a stable circular orbit in the equatorial plane, the effective potential $V_{\rm eff}(r)$ in (\ref{Veff}) must obey \begin{eqnarray} \tilde E^2-V_{\rm eff}=0,\;\;\;\; \frac{dV_{\rm eff}}{dr}=0. \end{eqnarray} Solving these two equations, one obtains \begin{eqnarray} \tilde E &=& \sqrt{\frac{f^2(r)h'(r)}{f(r)h'(r) - h(r)f'(r)}},\\ \tilde l &=& \sqrt{\frac{h^2(r)f'(r)}{f(r)h'(r) - h(r)f'(r)}},\\ \Omega &=& \sqrt{\frac{f'(r)}{h'(r)}}. \end{eqnarray} Plugging these results into (\ref{ut}) and (\ref{uphi}), we obtain $u^t$ and $u^\phi$ of the test spinning particle in equatorial circular orbits. Then, in the self-dual spacetime, the parallel transport equation (\ref{spin}) along a circular orbit of radius $r$ in the equatorial plane reads \begin{eqnarray} && \frac{ds^t}{d\lambda} + \frac{1}{2} \frac{f'(r)}{f(r)} u^t s^r =0, \label{st} \\ && \frac{ds^r}{d\lambda} +\frac{1}{2} g(r) f'(r) u^t s^t - \frac{1}{2} g(r) h'(r) u^\phi s^\phi=0, ~~~~~~ \label{sr}\\ &&\frac{ds^\theta}{d\lambda} =0, \\ &&\frac{ds^\phi}{d\lambda} +\frac{1}{2} \frac{h'(r)}{h(r)} u^\phi s^r=0.\label{sphi} \end{eqnarray} Differentiating (\ref{sr}) with respect to the affine parameter $\lambda$ and converting $\lambda \to t$ using the relation $d t = u^t d \lambda$, one arrives at a second-order ordinary differential equation for $s^r$, \begin{eqnarray} \frac{d^2s^r}{d t^2} + \frac{1}{4}\left[\frac{g(r)h'^2(r)}{h(r)} \Omega^2- \frac{g(r)f'^2(r)}{f(r)} \right] s^r=0,\nonumber\\ \label{ss} \end{eqnarray} which can be solved to yield \begin{eqnarray} s^r(t) = s^r(0) \cos (\omega_g t), \end{eqnarray} where \begin{eqnarray} \omega_g= \frac{1}{2} \sqrt{\frac{g(r)h'^2(r)}{h(r)} \Omega^2- \frac{g(r)f'^2(r)}{f(r)}} \label{omega_g}, \end{eqnarray} is the frequency of the oscillation pertaining to the spin four-vector $s^\mu$. Note that in deriving (\ref{ss}) we have used (\ref{st}) and (\ref{sphi}). Given this solution for the radial component $s^r$, one can immediately solve for $s^t$, $s^\theta$, and $s^\phi$, yielding \begin{eqnarray} s^t( t) &=& -\frac{1}{2}\frac{f'(r)}{f(r)} s^r(0) \sin (\omega_g t), \\ s^\theta(t) &=&0,\\ s^\phi(t) &=& -\frac{1}{2} \frac{h'(r)}{h(r)} \Omega s^r(0) \sin(\omega_g t). \end{eqnarray} Here we have imposed initial conditions such that the spin vector was initially directed along the radial direction, i.e., $s^t(0) =s^\phi(0)=s^\theta(0) =0$.
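The frequency (\ref{omega_g}) can also be recovered symbolically from the transport equations along the circular orbit. A minimal sympy sketch (our own illustration), treating the metric functions and their radial derivatives at the orbit radius as constants and writing $\Omega = u^\phi/u^t$:

\begin{verbatim}
import sympy as sp

# Metric functions and radial derivatives at the orbit radius
# (constants along the circular orbit), plus Omega = u^phi / u^t.
f, g, h, fp, hp, Om = sp.symbols('f g h fprime hprime Omega', positive=True)
sr, st, sphi = sp.symbols('s_r s_t s_phi')

# d/dt of the spin components from the transport equations,
# after converting lambda -> t via dt = u^t d(lambda):
dst   = -sp.Rational(1, 2)*(fp/f)*sr
dsr   = -sp.Rational(1, 2)*g*fp*st + sp.Rational(1, 2)*g*hp*Om*sphi
dsphi = -sp.Rational(1, 2)*(hp/h)*Om*sr

# Second derivative of s^r: differentiate dsr and substitute dst, dsphi.
d2sr = -sp.Rational(1, 2)*g*fp*dst + sp.Rational(1, 2)*g*hp*Om*dsphi

omega_g_sq = sp.simplify(-d2sr / sr)
print(omega_g_sq)  # -> g*hprime**2*Omega**2/(4*h) - g*fprime**2/(4*f)
\end{verbatim}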
By inspection of the expression (\ref{omega_g}), it is evident that the angular velocity $\omega_g$ of the rotation of the spin vector differs from the angular velocity of the massive spinning particle along the circular orbit. It is this difference that leads to a precession of the spin vector. To see this clearly, let us compare $\omega_g$ and $\Omega$ by expanding (\ref{omega_g}) in terms of $M$ and $P$ as \begin{eqnarray} \frac{\omega_g}{\Omega} &=& \frac{1}{2} \sqrt{\frac{g(r)h'^2(r)}{h(r)} - \frac{g(r)f'(r) h'(r)}{f(r)}} \nonumber\\ &\simeq& 1-\frac{3 M}{2r} + \frac{2 M}{r} P, \end{eqnarray} which clearly shows that $\omega_g < \Omega$. This implies that when the spinning particle completes one rotation along the circular orbit, the spin vector has not yet completed a full circle. This phenomenon is called {\em geodetic precession}. For one complete period of the circular orbit, the geodetic precession angle can be expressed as \begin{eqnarray} \Delta \Theta &=& 2 \pi \left(1- \frac{\omega_g}{\Omega} \right) \nonumber\\ &\simeq &\frac{ 3 \pi M}{r} \left(1- \frac{4}{3} P \right ), \end{eqnarray} where the second term in the bracket represents the corrections from the LQG effects in the self-dual spacetime. It is transparent that the geodetic precession angle $\Delta \Theta$ decreases with the polymeric function $P$. When $P=0$ the above geodetic precession angle $\Delta \Theta$ reduces to the result for the Schwarzschild spacetime. The geodetic precession can be tested using gyroscopes on near-Earth artificial satellites, as was done by Gravity Probe B \cite{GPB}. Given that Gravity Probe B was placed at an altitude of 642 km and had an orbital period of 97.65 min, the geodetic effect leads to a precession of the gyroscope spin axis of 6606.1 milliarcseconds (mas) per year, as predicted by GR. This precession was measured by Gravity Probe B to be \cite{GPB} \begin{eqnarray} \Delta \Theta = (6601.8 \pm 18.3 ) {\rm mas}/{\rm year}. \end{eqnarray} This measurement leads to a bound on the polymeric function $P$ of \begin{eqnarray} 0<P<2.6\times 10^{-3}, \end{eqnarray} which corresponds to a bound on the polymeric parameter $\delta$ of \begin{eqnarray} |\delta| < 0.43. \end{eqnarray} The Earth-Moon system in the field of the Sun can also be considered as a gyroscope, which makes it possible to detect the geodetic precession by measuring the lunar orbit with lunar laser ranging data. A recent measurement of the geodetic precession yields a relative deviation from GR of \cite{lunar} \begin{eqnarray} \frac{\Delta \Theta - \Delta \Theta^{\rm GR}}{ \Delta \Theta^{\rm GR}} = -0.0019 \pm 0.0064. \end{eqnarray} From this result one obtains a bound on the polymeric function $P$ of \begin{eqnarray} 0<P< 6.2\times 10^{-3}, \end{eqnarray} which corresponds to a bound on the polymeric parameter $\delta$ of \begin{eqnarray} |\delta| < 0.67. \end{eqnarray}
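The GR value quoted above can be cross-checked with a rough estimate from $\Delta\Theta \simeq 3\pi M/r$ per orbit, in geometric units ($G=c=1$). A minimal numerical sketch; the constants below are standard approximate values that we insert for illustration only:

\begin{verbatim}
import math

# Approximate constants (illustrative; not taken from the cited analyses).
GM_earth = 3.986004e14        # m^3 s^-2
c        = 2.99792458e8       # m / s
M_geo    = GM_earth / c**2    # Earth mass in geometric units, ~4.435e-3 m

r = (6371.0 + 642.0) * 1e3    # mean Earth radius + 642 km altitude, in m

dTheta_orbit = 3.0 * math.pi * M_geo / r      # radians per orbit (P = 0)
orbits_per_year = 365.25 * 24 * 60 / 97.65    # 97.65 min orbital period
rad_to_mas = 180.0 / math.pi * 3600.0 * 1000.0

print(dTheta_orbit * orbits_per_year * rad_to_mas)   # ~6.6e3 mas/yr
\end{verbatim}

This reproduces the quoted GR prediction of 6606.1 mas/yr to within a fraction of a percent; the small residual comes from our rounded orbital parameters.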
\section{Summary and Discussions} \renewcommand{\theequation}{5.\arabic{equation}} \setcounter{equation}{0} \begin{table*} \caption{Summary of estimates for upper bounds of the polymeric function $P$ and the parameter $\delta$ in the self-dual spacetime from several observations.} \label{table} \begin{ruledtabular} \begin{tabular} {cccc} Experiments/Observations & $P$ & $|\delta|$ & Datasets \\ \hline Light deflection & $1.25\times 10^{-4}$ & 0.0942 & VLBI observation of quasars \cite{VLBI_deflection} \\ Time delay & $5.5\times 10^{-6}$ & 0.0199 & Cassini experiment \cite{cassini} \\ Perihelion advance & $1.57\times 10^{-5}$ & 0.033 & MESSENGER mission \cite{message} \\ \; & $1.4 \times 10^{-3}$ & 0.32 & LAGEOS satellites \cite{LAGEOS} \\ \; & $0.0675$ & 2.3 & Observation of S2 star at Galactic center \cite{S2} \\ Geodetic precession & $2.6\times 10^{-3}$ & 0.43 & Gravity Probe B \cite{GPB} \\ \; & $6.2 \times 10^{-3}$ & 0.67 & Lunar laser ranging data \cite{lunar} \\ \end{tabular} \end{ruledtabular} \end{table*} LQG provides an elegant resolution of both the classical big bang and black hole singularities. Recently, a regular static spacetime, the {\em self-dual spacetime}, was derived from the mini-superspace approach, based on the polymerization quantization procedure in LQG \cite{LQG_BH}. In this paper, we study the observational constraints that can be imposed on the polymeric function $P$ arising from LQG. For this purpose, we calculate theoretically the effects of the polymeric function $P$ on several astronomical observations conducted in the Solar System, including the deflection angle of light by the Sun, the gravitational time delay, the perihelion advance, and the geodetic precession. Confronting the theoretical predictions with the observations, we derive upper bounds on the polymeric function in the self-dual spacetime. Our results are summarized in Table~\ref{table}. It is remarkable that the measurement of the gravitational time delay by the Cassini experiment provides by far the most sensitive tool to constrain the effects of LQG in the Solar System. This measurement gives the tightest constraints [cf. Table~\ref{table}] on the polymeric function $P$, $0<P<5.5\times 10^{-6}$, and on the polymeric parameter $\delta$, $|\delta| <0.0199$. Another important constraint comes from the observation of the perihelion advance of Mercury by the MESSENGER mission, which leads to an upper bound on $P$ of $1.57 \times 10^{-5}$. In the near future, the accuracy of the measurement of Mercury's perihelion advance will be significantly improved by the joint European-Japanese BepiColombo project, which was launched in October 2018. It is expected that this mission will improve the accuracy of the perihelion advance to $10^{-4} \;{\rm as/century}$ \cite{will2018, Bepi}, which is one order of magnitude better than the current accuracy of about $10^{-3}\; {\rm as/century}$ \cite{message}. With this mission, one can improve the constraint on the polymeric function $P$ to $0<P \lesssim 2 \times 10^{-6}$, which is much more restrictive than that obtained from the Cassini experiment. We have also calculated the effects of the polymeric function on the geodetic precession of a spinning object in the self-dual spacetime. Observational constraints on $P$ have also been derived from the Gravity Probe B data and the lunar laser ranging data. Although these constraints are not as tight as those obtained from the observations of the light deflection angle, the gravitational time delay, and the perihelion advance of Mercury, they do provide a different and interesting window to explore the features of the self-dual spacetime.
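The $|\delta|$ column of Table~\ref{table} follows from the $P$ column by inverting the definition of the polymeric function, with $\epsilon=\gamma\delta$ and $\gamma=0.2375$. A minimal numerical sketch (our own illustration) reproducing the conversions:

\begin{verbatim}
import math

gamma = 0.2375  # Barbero-Immirzi parameter adopted in the text

def delta_from_P(P):
    """Invert P = (sqrt(1+eps^2)-1)/(sqrt(1+eps^2)+1), eps = gamma*delta."""
    s = (1.0 + P) / (1.0 - P)          # equals sqrt(1 + eps^2)
    eps = math.sqrt(s**2 - 1.0)
    return eps / gamma

for P in [1.25e-4, 5.5e-6, 1.57e-5, 1.4e-3, 0.0675, 2.6e-3, 6.2e-3]:
    print(f"P = {P:.3g}  ->  |delta| < {delta_from_P(P):.3g}")
# Agrees with the |delta| column of Table 1 to within rounding.
\end{verbatim}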
\section*{Acknowledgements} T.Z. is supported in part by the National Natural Science Foundation of China under Grant No. 11675143, the Zhejiang Provincial Natural Science Foundation of China under Grants No. LR21A050001 and No. LY20A050002, and the Fundamental Research Funds for the Provincial Universities of Zhejiang in China under Grant No. RF-A2019015. A.W. is supported by the National Natural Science Foundation of China under Grants No. 11675145 and No. 11975203.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:intro} One of the most prominent features of the classical theory of holomorphic modular forms is the {\em weight}, i.e., the integer (or sometimes fraction) $k$ appearing in the transformation law \begin{equation}\label{weightk} f(z) \ \ = \ \ (cz+d)^{-k}\,f\(\f{az+b}{cz+d}\), \end{equation} where $f$ is a modular form and $\ttwo abcd$ is an element of its discrete automorphy group. Maass developed a parallel theory for non-holomorphic forms of various weights, including his well-known explicit operators which raise and lower the weight by 2. Gelfand-Graev (see \cite{Gelfand-Graev}) gave a reformulation of modular and Maass forms as functions on the group $SL(2,{\mathbb{R}})$, in which (\ref{weightk}) is interpreted representation-theoretically as an $SO(2)$-equivariance condition (see (\ref{SO2isotypic1})), and Maass's raising and lowering operators naturally correspond to Lie algebra derivatives (see (\ref{sl2updownaction}) and (\ref{sl2diffops3})). Those same Lie algebra derivatives are decisive tools in Bargmann's work \cite{Bargmann} describing the irreducible unitary representations of $SL(2,{\mathbb{R}})$ via their $K$-types. The goal of this paper is to describe a parallel, explicit theory of raising and lowering operators for automorphic forms on $SL(3,{\mathbb{R}})$. The papers \cite{howe,miyazaki} give a description of Lie algebra derivatives in principal series representations for $SL(3,{\mathbb{R}})$. In this paper we instead use a very explicit basis provided by Wigner functions \cite{bied} on the maximal compact subgroup $K=SO(3)$, from which one can then study precise topics in the analytic theory of automorphic forms (such as Whittaker functions, pursued in \cite{buttcane}). Indeed, our motivation is that the restriction of a $K$-isotypic automorphic form on $SL(3,{\mathbb{R}})$ to $K$ is itself a sum of Wigner functions, which justifies the importance of this basis. Like the methods of \cite{howe,miyazaki}, our method also allows the computation of the composition series of principal series representations of $SL(3,{\mathbb{R}})$, as well as the action of intertwining operators on $K$-types. Moreover, it applies (with straightforward modifications) to many other real reductive Lie groups, in particular those whose maximal compact subgroup is isogenous to a product of $SU(2)$'s and tori (such as $G_2$ and $SO(4,4)$, pursued in \cite{Zhang,GLMZ}), and indeed to any group satisfying a sufficiently explicit analog of the Clebsch-Gordan multiplication formula (\ref{CG1}). We will use the notation $G$ to denote $SL(r,{\mathbb{R}})$, where $r$ will equal 3 except in \secref{sec:SL2R} (where $r=2$), and $\frak g$ to denote the complex Lie algebra $\frak{sl}_r$ (consisting of all traceless $r\times r$ complex matrices). We write $N\subset G$ for the subgroup of unit upper triangular $r\times r$ matrices, which is a maximal unipotent subgroup of $G$, and $\frak n$ for its complexified Lie algebra (which consists of all strictly upper triangular complex $r\times r$ matrices). The subgroup $A\subset G$ consists of all nonsingular, diagonal $r\times r$ matrices with positive entries and determinant 1; it is the connected component of a maximal abelian subgroup of $G$. The complexified Lie algebra $\frak a$ of $A$ consists of all traceless diagonal $r\times r$ matrices with complex entries. Finally, $K=SO(r)$ is a maximal compact subgroup of $G$, and its complexified Lie algebra $\frak k$ consists of all antisymmetric complex $r\times r$ matrices.
The Iwasawa decomposition asserts that the map $(n,a,k)\mapsto nak$ from $N\times A \times K\rightarrow G$ is a diffeomorphism; at the level of Lie algebras, $\frak g={\frak{n}}\oplus{\frak{a}}\oplus{\frak{k}}$. Our main result gives the explicit action of the Lie algebra on the basis of Wigner functions in a principal series representation (see (\ref{basisforlisotypicprincseries})). The action of $\frak k$ is classical and described in (\ref{so3differentiation}); the following result describes the action of a basis of the five-dimensional complement $\frak p=\{X\in{\frak{g}}|X^t=X\}$ of $\frak k$ in $\frak g$: \begin{thm}\label{thm:intro} Let $Z_{-2}$, $Z_{-1}$, $Z_{0}$, $Z_1$, and $Z_2$ be as defined in (\ref{sl3liealg}), $\L^{(k)}_{j}(\l,\ell,m_1)$ be as defined in (\ref{Lambda}), and let $q_{k,j}(\ell,m)$ denote the Clebsch-Gordan coefficient $\langle 2\,k\,\ell\,m|(\ell+j)\,(k+m) \rangle$ (see (\ref{CG3}) for an explicit description in the cases of relevance). Set $c_{-2}=c_2=1$ and $c_0=\sqrt{\f 23}$. Let $V_{\lambda,\delta}$ be a principal series representation of $SL(3,{\mathbb{R}})$ (see (\ref{princsl3})) and let $v_{\ell,m_1,m_2}$ denote its elements defined in (\ref{basisforlisotypicprincseries}). Then \begin{multline}\label{introthm1} \pi(Z_n)v_{\ell,m_1,m_2} \ \ = \\ \sum_{\srel{-2\le j \le 2}{k\,\in\,\{-2,0,2\}}} c_k \, q_{k,j}(\ell,m_1)\, q_{n,j}(\ell,m_2) \, \Lambda^{(k)}_j(\lambda,\ell,m_1) \, v_{\ell+j,m_1+k,m_2+n}\,. \end{multline} \end{thm} \noindent The papers \cite{howe,miyazaki} give comparable results for differently-presented bases. Section~\ref{sec:SL2R} contains a review of Lie algebra derivatives, $K$-types, and composition series of principal representations for the group $SL(2,{\mathbb{R}})$. Section~\ref{sec:wignerso3} gives background on Wigner functions for $SO(3)$, and Section~\ref{sec:princseries} describes principal series representations of $SL(3,{\mathbb{R}})$ in terms of Wigner functions. Theorem~\ref{thm:intro} is proved in Section~\ref{sec:gKmodulestructure}, along with a description of the operators $U_j$ (\ref{Ujdef}), which are somewhat analogous to raising and lowering operators. As an application, in Section~\ref{sec:examples} we describe the composition series of some principal series representations relevant to automorphic forms, in terms of Wigner functions. Finally, Section~\ref{sec:diffopformulas} gives formulas for $\pi(Z_{\pm 2})$ and $\pi(Z_0)$ from Theorem~\ref{thm:intro} in terms of differential operators on the symmetric space $SL(3,{\mathbb{R}})/SO(3,{\mathbb{R}})$. We wish to thank Jeffrey Adams, Michael B. Green, Roger Howe, Anthony Knapp, Gregory Moore, Siddhartha Sahi, Peter Sarnak, Wilfried Schmid, Pierre Vanhove, Ramarathnam Venkatesan, David Vogan, Zhuohui Zhang, and Greg Zuckerman for their helpful discussions. \section{$SL(2,{\mathbb{R}})$ background}\label{sec:SL2R} This section contains a summary of the types of results we prove for $SL(3,{\mathbb{R}})$, but in the simpler and classical context of $SL(2,{\mathbb{R}})$. For any function $f(z)$ on the complex upper half plane ${\mathbb{H}}$, Gelfand-Graev (see \cite{Gelfand-Graev}) associated a function $\phi$ on $G=SL(2,{\mathbb{R}})$ by the formula \begin{equation}\label{gglifting} \phi\ttwo abcd \ \ = \ \ (ci+d)^{-k}\,f\(\f{ai+b}{ci+d}\). \end{equation} If $f$ satisfies (\ref{weightk}) for all $\ttwo abcd$ lying in a discrete subgroup $\G\subset G$, then $\phi(\gamma g)=\phi(g)$ for all $\g\in \G$. 
Finally, since $K=SO(2,{\mathbb{R}})$ fixes $z=i$ under its action by fractional linear transformations on ${\mathbb{H}}$, the weight condition (\ref{weightk}) for $f$ becomes the $K$-isotypic condition \begin{equation}\label{SO2isotypic1} \phi(gk) \ \ = \ \ \chi(k)\,\phi(g)\,, \end{equation} for $\phi$, where $\chi$ is a character of the group $K=SO(2,{\mathbb{R}})$.\footnote{More precisely, $\chi$ is the character $\chi_{-k}$ defined in (\ref{charactersofSO2}).} Thus Gelfand-Graev converted the study of modular forms to the study of functions on $\G\backslash G$ which transform on the right by a character of the maximal compact subgroup $K$. By a standard reduction, it suffices to study functions $\phi$ which lie in an irreducible automorphic representation, in particular a space that is invariant under the right translation action $[\rho(g)\phi](x)=\phi(xg)$. More precisely, one assumes the existence of an irreducible representation $(\pi,V)$ of $G$ and an embedding $j$ from $V$ into a space of functions on $\Gamma\backslash G$ which intertwines the two representations in the sense that \begin{equation}\label{jandpi} j(\pi(g)v) \ \ = \ \ \rho(g)j(v)\,. \end{equation} Frequently an $L^2$-condition is imposed on $j(v)(g)$, though that will not be necessary here. Under the assumption that $j(v)=\phi$, condition (\ref{SO2isotypic1}) can be elegantly restated as \begin{equation}\label{SO2isotypic2} \pi(k)v \ \ = \ \ \chi(k)v\,, \end{equation} that is, $v$ is an isotypic vector for the character $\chi$ of $K=SO(2,{\mathbb{R}})$. The representation theory of compact Lie groups was completely determined by Weyl, and in the present context of $K=SO(2,{\mathbb{R}})$ the irreducibles are simply the characters \begin{equation}\label{charactersofSO2} \chi_\ell(k_\theta) \ \ = \ \ e^{i\ell\theta}\, , \ \ \ k_\theta \ = \ \ttwo{\cos\theta}{-\sin\theta}{\sin\theta}{\cos\theta}\ \in \ SO(2,{\mathbb{R}})\,, \end{equation} as $\ell$ ranges over ${\mathbb{Z}}$. Writing \begin{equation}\label{Vell} V_\ell \ \ = \ \ \{v\in V\ | \ \pi(k)v\, =\, \chi_\ell(k)v\}\,, \end{equation} the direct sum $\oplus_{\ell\in{\mathbb{Z}}}V_\ell$ forms the Harish-Chandra module of {\em $K$-finite vectors} for $\pi$, i.e., those vectors whose $K$-translates span a finite-dimensional subspace. In general, the Harish-Chandra module can be defined in terms of the decomposition of $\pi$'s restriction to $K$ into irreducible representations. In this particular situation, $K$-finite vectors correspond to trigonometric polynomials, and $K$-isotypic vectors correspond to trigonometric monomials. The full representation $V$ is a completion of this space in terms of classical Fourier series. For example, its smooth vectors $V^\infty$ (those vectors for which the map $g\mapsto \pi(g)v$ is a smooth function on the manifold $G$) correspond to Fourier series whose coefficients decay faster than the reciprocal of any polynomial. By definition, the subspace $V^\infty$ is preserved by Lie algebra derivatives \begin{equation}\label{liederiv} \pi(X)v \ \ := \ \ \lim_{t\,\rightarrow \,0} \f{\pi(e^{tX})v-v}{t}\,, \ \ X \ \in \ \frak g_{\mathbb{R}}\,, \end{equation} where $\frak g_{\mathbb{R}}$ is the Lie algebra of $G$. This definition extends to the complexified Lie algebra $\frak g$ through the linearity rule $\pi(X_1+iX_2)=\pi(X_1)+i\pi(X_2)$. 
If $\phi=j(v)$ is a smooth automorphic function corresponding to $v\in V^\infty$, then $j(\pi(X)v)$ is equal to $(\rho(X)\phi)(g) := \left.\f{d}{dt}\right|_{t=0}\phi(ge^{tX})$; as before, $\rho(X)$ is initially defined only for $X\in\frak g_{\mathbb{R}}$ and is then extended to $X\in \frak g$ by linearity. These Lie algebra derivatives occur in various {\em ad hoc} guises in the classical theory of automorphic functions, including Maass's raising and lowering operators. Such derivatives satisfy various relations with each other, which can often be more clearly seen by doing computations in a suitably chosen model for the representation $(\pi,V)$. It is well-known that every representation of $SL(2,{\mathbb{R}})$ is a subspace of some principal series representation $(\pi_{\nu,\e},V_{\nu,\e})$, where \begin{equation}\label{Vnudef} \gathered V_{\nu,\epsilon} \ \ = \ \ \left\{ f:G\rightarrow{\mathbb{C}}\ \mid \ f\(\ttwo{a}{b}{0}{d}g\) = |\smallf{a}{d}|^{1/2+\nu} \operatorname{sgn}(a)^\e f(g) \right\}, \\ [\pi_{\nu,\e}(g)f](h) \ \ = \ \ f(hg)\,, \endgathered \end{equation} $\nu\in {\mathbb{C}}$, and $\epsilon \in {\mathbb{Z}}/2{\mathbb{Z}}$. We shall thus use subspaces of (\ref{Vnudef}) as convenient models for arbitrary representations. Recall the {\em Iwasawa decomposition} $G=NAK$, where $N=\{\ttwo 1x01|x\in{\mathbb{R}}\}$ and $A=\{\ttwo{a}{0}{0}{a\i}|a>0\}$:~each element $g\in G$ has a decomposition \begin{equation}\label{expliiwasawasl2a} g \ \ = \ \ \ttwo 1x01 \ttwo{a}{0}{0}{a\i} k_\theta\,, \end{equation} with $x$, $a$, and $\theta$ determined uniquely by $g$. It follows that any function $f$ in (\ref{Vnudef}) is completely determined by its restriction to $K$; since $k_\pi=\ttwo{-1}00{-1}$, this restriction must satisfy the parity condition \begin{equation}\label{sl2paritycondition} f(k_\theta) \ \ = \ \ (-1)^\e\, f(k_{\theta+\pi})\,. \end{equation} Conversely, any function on $K$ satisfying (\ref{sl2paritycondition}) extends to an element of $V_{\nu,\e}$, for any $\nu\in {\mathbb{C}}$. The $K$-isotypic subspace $V_\ell$ therefore vanishes unless $\ell\equiv \e\imod 2$. When $V$ is equal to a full, irreducible principal series representation $V_{\nu,\e}$, $V_{\ell}$ is one-dimensional for $\ell\equiv \e\imod 2$ and consists of all complex multiples of $v_\ell$, the element of $V_{\nu,\e}$ whose restriction to $K$ is $\chi_\ell$. In terms of the Lie algebra, membership in $V_{\ell}$ is characterized as \begin{equation}\label{pikgenonvl} v\,\in\,V_{\ell} \ \ \Longleftrightarrow \ \ \pi\!\(\lietwo{0}{-1}{1}{0}\)v \,=\, i \ell v\,, \end{equation} since $\lietwo{0}{-1}{1}{0}\in \frak g$ is the infinitesimal generator of $K$, i.e., $\exp(t\lietwo{0}{-1}{1}{0})=k_{t}$.\footnote{In order to avoid confusion of whether $\pi$ or $\rho$ is acting via group translation in $G$ or by derivation in $\frak g$, we will use the traditional matrix parenthesis notation $\ttwo\cdot\cdot\cdot\cdot$ to signify elements of $G$ and the bracket notation $\lietwo\cdot\cdot\cdot\cdot$ to signify elements of $\frak g$.} The following well-known result computes the action of a basis of $\frak g$ on the $v_\ell$ inside $V_{\nu,\e}$: \begin{lem}\label{lem:sl2Liealgderiv} With the above notation, \begin{equation}\label{sl2updownaction} \aligned \pi\(\lietwo{1}{i}{i}{-1}\) v_\ell \ \ & = \ \ (2\nu+1-\ell)\,v_{\ell-2}\,, \\ \pi\!\(\lietwo{0}{-1}{1}{0}\)v_\ell \ \ & =\ \ i \, \ell\, v_\ell\,,\\ \text{and} \ \ \ \pi\(\lietwo{1}{-i}{-i}{-1}\) v_\ell \ \ & = \ \ (2\nu+1+\ell)\,v_{\ell+2}\,. 
\endaligned \end{equation} \end{lem} \noindent Our main result, Theorem~\ref{thm:main}, is a generalization of \lemref{lem:sl2Liealgderiv} to $SL(3,{\mathbb{R}})$. Formulas (\ref{sl2updownaction}) are collectively equivalent to the three formulas \begin{equation}\label{sl2Liealgderivs} \aligned \pi\(\lietwo{0}{1}{0}{0}\)v_\ell \ \ & = \ \ \smallf{i(\ell-1-2\nu)}{4}\,v_{\ell-2} \ - \ \smallf{i \ell}{2} \,v_{\ell} \ + \ \smallf{i(\ell+1+2\nu)}{4}\,v_{\ell+2}\,, \\ \pi\(\lietwo{1}{0}{0}{-1}\)v_\ell \ \ & = \ \ -\,\smallf{(\ell-1-2\nu)}{2}\,v_{\ell-2} \ \ \ \ \ + \ \ \ \ \ \ \smallf{ (\ell+1+2\nu)}{2}\,v_{\ell+2}\,, \\ \text{and} \ \ \ \pi\(\lietwo0010\)v_\ell \ \ & = \ \ \smallf{i(\ell-1-2\nu)}{4}\,v_{\ell-2} \ + \ \smallf{i \ell}{2}\, v_{\ell} \ + \ \smallf{i(\ell+1+2\nu)}{4}\,v_{\ell+2} \\ \endaligned \end{equation} for the action of $\frak g=\frak{sl}_2({\mathbb{C}})$ under its usual basis. However, (\ref{sl2updownaction}) is simpler because its three Lie algebra elements diagonalize the adjoint (conjugation) action $Ad(g):X\mapsto gXg\i$ of $K$ on $\frak g$: \begin{equation}\label{sl2adjointdiag} \aligned Ad(k)\lietwo{1}{i}{i}{-1} \ \ & = \ \ \chi_{-2}(k)\,\lietwo{1}{i}{i}{-1} , \\ Ad(k)\lietwo{0}{-1}{1}{0} \ \ & = \ \ \lietwo{0}{-1}{1}{0} , \\ \text{and} \ \ \ Ad(k)\lietwo{1}{-i}{-i}{-1} \ \ & = \ \ \chi_{2}(k)\,\lietwo{1}{-i}{-i}{-1} . \\ \endaligned \end{equation} Indeed, the second of these formulas is obvious because $\lietwo{0}{-1}{1}{0}$ is the infinitesimal generator of the abelian group $K$, while the first and third formulas are equivalent under complex conjugation;~both can be seen either by direct calculation, or more simply by verifying the equivalent Lie bracket formulation \begin{equation}\label{equivlie} \big[ \lietwo0{-1}10,\lietwo1ii{-1} \big] \ \ = \ \ \lietwo0{-1}10\lietwo1ii{-1} \, - \, \lietwo1ii{-1}\lietwo0{-1}10 \ \ = \ \ -2\,i\, \lietwo1ii{-1}. \end{equation} Incidentally, it follows from (\ref{equivlie}) that $\pi\(\lietwo0{-1}10\)\pi\(\lietwo1ii{-1}\)v_\ell=-2i\pi\(\lietwo1ii{-1}\)v_\ell+\pi\(\lietwo1ii{-1}\)\pi\(\lietwo0{-1}10\)v_\ell$, which equals $i(\ell-2)\pi\(\lietwo1ii{-1}\)v_\ell$ by (\ref{pikgenonvl}). A second application of (\ref{pikgenonvl}) thus shows that $\pi\(\lietwo1ii{-1}\)v_\ell\in V_{\ell-2}$, and hence is a multiple of $v_{\ell-2}$; the first formula in (\ref{sl2updownaction}) is more precise in that it determines the exact multiple. Although Lemma~\ref{lem:sl2Liealgderiv} is well-known, we nevertheless include a proof in order to motivate some of our later calculations: \begin{proof}[Proof of \lemref{lem:sl2Liealgderiv}] If $X\in \frak g_{\mathbb{R}}=\frak{sl}_2({\mathbb{R}})$, then by definition \begin{equation}\label{lem:sl2Liealgderivpf1} (\pi(X)v_{\ell})(k_\theta) \ \ = \ \ \left.\f{d}{dt}\right|_{t=0} v_{\ell}(k_\theta e^{tX}) \ \ = \ \ \left.\f{d}{dt}\right|_{t=0} v_{\ell}( e^{t k_\theta Xk_{-\theta}}k_\theta)\,. 
\end{equation} Expand the Lie algebra element $k_\theta Xk_{-\theta}$ as a linear combination \begin{equation}\label{sl2Liealgderivpf2} k_\theta Xk_{-\theta} \ \ = \ \ c_E(X,\theta)\lietwo0100 \ + \ c_H(X,\theta)\lietwo{1}{0}{0}{-1}\ + \ c_Y(X,\theta)\lietwo{0}{-1}{1}{0}, \end{equation} so that (\ref{lem:sl2Liealgderivpf1}) is equal to \begin{multline}\label{sl2Liealgderivpf4} (\pi(X)v_{\ell})(k_\theta) \ \ = \ \ c_E(X,\theta) \left.\f{d}{dt}\right|_{t=0} v_{\ell}( \ttwo1t01 k_\theta) \ + \\ c_H(X,\theta) \left.\f{d}{dt}\right|_{t=0} v_{\ell}( \ttwo{e^t}{0}{0}{e^{-t}} k_\theta) \ + \ c_Y(X,\theta) \left.\f{d}{dt}\right|_{t=0} v_{\ell}(k_{t}k_\theta)\,. \end{multline} By definition (\ref{Vnudef}), $v_\ell(\ttwo1t01 k_\theta)$ is independent of $t$ while $v_\ell(\ttwo{e^t}{0}{0}{e^{-t}} k_\theta)=e^{(2\nu+1)t}v_\ell( k_\theta)$, so (\ref{sl2Liealgderivpf4}) equals \begin{equation}\label{sl2Liealgderivpf5} \aligned (\pi(X)v_{\ell})(k_\theta) \ \ & = \ \ c_H(X,\theta)(2\nu+1) v_\ell( k_\theta) \ + \ c_Y(X,\theta)\smallf{d}{d\theta} v_\ell( k_\theta) \\ & = \ \ c_H(X,\theta)(2\nu+1) v_\ell( k_\theta) \ + \ c_Y(X,\theta)i\ell v_\ell( k_\theta)\,. \\ \endaligned \end{equation} Formula (\ref{sl2Liealgderivpf5}) remains valid for any $X$ in the complexification $\frak g$, and can be used to calculate any of the identities in (\ref{sl2updownaction}) and (\ref{sl2Liealgderivs}). The second formula in (\ref{sl2updownaction}) was shown in (\ref{pikgenonvl}), while the first and third are equivalent. We thus consider $X=\lietwo{1}{-i}{-i}{-1}$ and calculate that $c_H(X,\theta)=e^{2i\theta}$ and $c_Y(X,\theta)=-ie^{2i\theta}$, so that formula (\ref{sl2Liealgderivpf5}) specializes to $e^{2i\theta}(2\nu+1)e^{i\ell \theta}-ie^{2i\theta}i\ell e^{i\ell\theta}= (2\nu+1+\ell)e^{i(\ell+2)\theta}$ as claimed in the third equation in (\ref{sl2updownaction}). \end{proof} The operators $\pi\(\lietwo{1}{-i}{-i}{-1}\)$ and $\pi\(\lietwo{1}{i}{i}{-1}\)$ in (\ref{sl2updownaction}) are the Maass raising and lowering operators, respectively; they are expressed in terms of differential operators in (\ref{sl2diffops3}). Note that the coefficient $2\nu+1\pm\ell$ can vanish when $\nu\in \f12{\mathbb{Z}}$. In such a case, the representation $V_{\nu,\e}$ is reducible, but otherwise it is not (because appropriate compositions of the raising and lowering operators map any isotypic vector to any other as one goes up and down the ladder of $K$-types). Indeed, the following well-known characterization of irreducible representations of $SL(2,{\mathbb{R}})$ and their $K$-type structure can be read off from (\ref{sl2updownaction}): \begin{thm}\label{thm:irrepsofSL2R}(Bargmann \cite{Bargmann}) 1) The representation $V_{\nu,\e}$ is irreducible if and only if $\nu\notin \f{\e+1}{2}+{\mathbb{Z}}$. 2) If $k\ge 2$ is an integer congruent to $\e \imod 2$, then $V_{-\f{k-1}{2},\e}$ contains the $(k-1)$-dimensional irreducible representation\footnote{There is a unique such representation up to equivalence. It is given by the action of $SL(2,{\mathbb{R}})$ on homogeneous polynomials of degree $k-2$ in 2 variables.} of $SL(2,{\mathbb{R}})$, spanned by the basis $\{v_{2-k},v_{4-k},v_{6-k},\ldots,v_{k-4},v_{k-2}\}$. 
3) If $k\ge 2$ is an integer congruent to $\e \imod 2$, then $V_{\f{k-1}{2},\e}$ contains the direct sum of two irreducible representations of $SL(2,{\mathbb{R}})$:~a holomorphic discrete series representation $D_k$ with basis $\{v_k,v_{k+2},v_{k+4},\ldots\}$, and its antiholomorphic conjugate $\overline{D_k}$ with basis $\{v_{-k},v_{-k-2},v_{-k-4},\ldots\}$. 4) $V_{\nu,\e}$ and $V_{-\nu,\e}$ are dual to each other, hence the quotient of $V_{-\frac{k-1}{2},\e}$ by its finite-dimensional subrepresentation in case 2) is the direct sum of $D_k$ and $\overline{D_k}$. Likewise, the quotient $V_{\f{k-1}{2},\e}/(D_k\oplus \overline{D_k})$ is the $(k-1)$-dimensional representation. \end{thm} To summarize, we have described the irreducible representations of $K=SO(2,{\mathbb{R}})$, seen how functions on the circle of a given parity decompose in terms of them, and calculated the action of raising and lowering operators on them. In the next few sections we shall extend some of these results to $G=SL(3,{\mathbb{R}})$ and $K=SO(3,{\mathbb{R}})$, which is much more complicated because $K$ is no longer an abelian group. \section{Irreducible representations of $SO(3,{\mathbb{R}})$ and Wigner functions}\label{sec:wignerso3} The group $K=SO(3,{\mathbb{R}})$ is isomorphic to $SU(2)/\{\pm I\}$, and so its irreducible representations are precisely those of $SU(2)$ which are trivial on $-I$. By Weyl's Unitarian Trick (which in this case is actually due to Hurwitz \cite{hurwitz}), the irreducible representations of $SU(2)$ are restrictions of the irreducible finite-dimensional representations of $SL(2,{\mathbb{C}})$. Those, in turn, are furnished by the action of $SL(2,{\mathbb{C}})$ on the $(n+1)$-dimensional vector space of degree $n$ homogeneous polynomials in two variables (on which $SL(2,{\mathbb{C}})$ acts by linear transformations of the variables), for $n\ge 0$. This action is trivial on $-I$ if and only if $n$ is even, in which case it factors through $SO(3,{\mathbb{R}})$. Thus the irreducible representations $V_\ell$ of $SO(3,{\mathbb{R}})$ are indexed by an integer $\ell\ge 0$ (commonly interpreted as angular momentum) and have dimension $2\ell+1$. A basis for $V_\ell$ can be given by the isotypic vectors for any embedded $SO(2,{\mathbb{R}})\subset SO(3,{\mathbb{R}})$, transforming by the characters $\chi_{-\ell},\chi_{1-\ell},\ldots,\chi_\ell$ from (\ref{charactersofSO2}). According to the Peter-Weyl theorem, $C^\infty(SO(3,{\mathbb{R}}))$ has an orthonormal decomposition into copies of the representations $V_{\ell}$ for $\ell=0,1,2,\ldots$, each occurring with multiplicity $\dim V_{\ell}=2\ell+1$. Wigner gave an explicit realization of $V_\ell$ whose matrix entries give a convenient, explicit basis for these $2\ell+1$ copies. It is most cleanly stated in terms of Euler angles for matrices $k\in SO(3,{\mathbb{R}})$, which always have at least one factorization of the form \begin{multline}\label{eulerangles} k(\alpha,\beta,\gamma) \ \ = \ \ \tthree{\cos \a}{-\sin \a}{0}{\sin \a}{\cos \a}{0}{0}{0}{1} \tthree{1}{0}{0}{0}{\cos\b}{-\sin\b}{0}{\sin\b}{\cos\b} \tthree{\cos \g}{-\sin \g}{0}{\sin \g}{\cos \g}{0}{0}{0}{1} , \\ \a,\g \,\in\,{\mathbb{R}}/(2\pi{\mathbb{Z}}), \ \ 0 \le \b \le \pi\,. 
\end{multline} Wigner functions simultaneously diagonalize the respective left and right actions of the $SO(2,{\mathbb{R}})$-subgroup parameterized by $\a$ and $\g$, and are indexed by integers $\ell$, $m_1$, and $m_2$ satisfying $-\ell\le m_1,m_2 \le \ell$; they are given by the formula\footnote{Some authors use slightly different notation, denoting $d^{\ell}_{m_1,m_2}(\cos\b)$ simply by $d^{\ell}_{m_1,m_2}(\beta)$. Others such as \cite{bied} flip the signs of $m_1$ and $m_2$. } \begin{multline}\label{WignerDformula} D^{\ell}_{m_1,m_2}(k(\a,\b,\g)) \ \ = \ \ (-1)^{\ell+m_2}\, e^{i (m_1 \a+m_2\g)}\sqrt{\smallf{(\ell+m_1)!(\ell-m_1)!}{(\ell+m_2)!(\ell-m_2)!}} \ \times \\ \sum_{r=\max(0,m_1+m_2)}^{\min(\ell+m_1,\ell+m_2)}(-1)^{r} \(\srel{\ell+m_2}{r}\)\(\srel{\ell-m_2}{\ell+m_1-r}\) \cos(\smallf{\b}2)^{2r-m_1-m_2} \sin(\smallf{\b}2)^{2\ell+m_1+m_2-2r} \end{multline} \cite[(3.65)]{bied}, which can easily be derived from the matrix coefficients of the $(2\ell+1)$-dimensional representation of $SL(2,{\mathbb{C}})$ mentioned above. They can be rewritten as \begin{equation}\label{WignerDdef} D^{\ell}_{m_1,m_2}(k(\a,\b,\g)) \ \ = \ \ e^{i m_1 \a}\,d^{\ell}_{m_1,m_2}(\cos \b)\,e^{i m_2 \g}\ , \ \ \ -\ell\,\le\,m_1,m_2\,\le\,\ell\,, \end{equation} where \begin{multline}\label{littleddef} d^{\ell}_{m_1,m_2}(x) \ \ = \\ (-1)^{\ell-m_1}2^{-\ell}\sqrt{\smallf{(\ell+m_2)!(1-x)^{m_1-m_2}}{(\ell-m_2)! (\ell+m_1)!(\ell-m_1)!(1+x)^{m_1+m_2}}} (\smallf{d}{dx})^{\ell-m_2}(1-x)^{\ell-m_1}(1+x)^{\ell+m_1} \end{multline} \cite[(3.72)]{bied}. Each $D^{\ell}_{m_1,m_2}$ is an isotypic vector for the embedded $SO(2,{\mathbb{R}})$ corresponding to the Euler angle $\g$. The full $(2\ell+1)\times (2\ell+1)$ matrix $(D^\ell_{m_1,m_2}(k))_{-\ell\le m_1,m_2\le \ell}$ furnishes an explicit representation $SO(3)\rightarrow GL(2\ell+1,{\mathbb{C}})$. The Wigner functions $D^{\ell}_{m_1,m_2}$ form an orthogonal basis for the smooth functions on $K=SO(3,{\mathbb{R}})$ under the inner product given by integration over $K$:~more precisely, \begin{equation}\label{innerproduct} \int_{K}D^{\ell}_{m_1,m_2}(k)\, \overline{D^{\ell'}_{m_1',m_2'}(k)}\,dk \ \ = \ \ \left\{ \begin{array}{ll} \f{1}{2\ell+1}, & \ell=\ell',m_1=m_1',\text{ and }m_2=m_2'; \\ 0, & \hbox{otherwise.} \end{array} \right. \end{equation} where $dk$ is the Haar measure which assigns $K$ volume 1 \cite[(3.137)]{bied}. The left transformation law $D^{\ell}_{m_1,m_2}(k(\a,0,0)\,k) =e^{im_1\a}D^{\ell}_{m_1,m_2}(k)$ is unchanged under right translation by $K$, so for any fixed $\ell$ and $m_1$ the span of $\{D^{\ell}_{m_1,m_2}|-\ell\le m_2 \le \ell\}$ is an irreducible representation of $K$ equivalent to $V_\ell$. The $2\ell+1$ copies stipulated by the Peter-Weyl theorem are then furnished by the possible choices of $-\ell\le m_1\le \ell$. In our applications, it will be important to express the product of two Wigner functions as an explicit linear combination of Wigner functions using the Clebsch-Gordan multiplication formula \begin{multline}\label{CG1} D^{\ell}_{m_1,m_2}\,D^{\ell'}_{m_1',m_2'} \ \ = \\ \sum_{\ell''} \langle \ell\,m_1\,\ell'\,m_1'\,|\,\ell''\,(m_1+m_1') \rangle \langle \ell\,m_2\,\ell'\,m_2'\,|\,\ell''\,(m_2+m_2') \rangle D^{\ell''}_{m_1+m_1',m_2+m_2'}\,, \end{multline} where $\langle \cdot \cdot \cdot \cdot | \cdot \cdot \rangle$ are Clebsch-Gordan coefficients (see \cite[(3.189)]{bied}). 
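As a quick check of these conventions, taking $\ell=1$ and $m_1=m_2=0$ in (\ref{littleddef}) gives
\begin{equation*}
d^{1}_{0,0}(x) \ \ = \ \ -\,\smallf{1}{2}\,\smallf{d}{dx}\,(1-x)(1+x) \ \ = \ \ x\,,
\end{equation*}
so that $D^{1}_{0,0}(k(\a,\b,\g))=\cos\b$, which is simply the $(3,3)$ entry of the product in (\ref{eulerangles}).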
Although the Clebsch-Gordan coefficients are somewhat messy to define in general, we shall only require them when $\ell'=2$, in which case the terms in the sum vanish unless $|\ell-2| \le \ell'' \le \ell+2$ (this condition is known as ``triangularity'' -- see \cite[(3.191)]{bied}). Write \begin{equation}\label{CG2} \aligned q_{k,j}(\ell,m) \ \ := & \ \ \langle 2 \, k \, \ell \, m | (\ell+j) \, (k+m)\rangle \, , \endaligned \end{equation} which by definition vanishes unless $|k|,|j|\le 2$, $|m|\le \ell$, $|k+m|\le \ell+j$, and $|\ell-2| \le \ell+j$ (corresponding to when the Wigner functions in (\ref{CG1}) are defined and the triangularity condition holds). The values for indices obeying these conditions are explicitly given as \begin{equation}\label{CG3} \aligned q_{k,-2}(\ell,m) \ \ & = \ \ \smallf{(-1)^k \sqrt{6(\ell-m)!(\ell+m)!}}{\sqrt{\ell(\ell-1)(2\ell-1)(2\ell+1)(2-k)!(2+k)!(\ell-k-m-2)!(\ell+k+m-2)!}} \\ q_{k,-1}(\ell,m) \ \ & = \ \ \smallf{(-1)^k (k+\ell k+2m)\sqrt{3(\ell-m)!(\ell+m)!}}{\sqrt{\ell(\ell-1)(\ell+1)(2\ell+1)(2-k)!(2+k)!(\ell-k-m-1)!(\ell+k+m-1)!}} \\ q_{k,0}(\ell,m) \ \ & = \ \ \smallf{(-1)^k (2\ell^2(k^2-1)+\ell(5k^2+6km-2)+3(k^2+3km+2m^2)) \sqrt{(\ell-m)!(\ell+m)!}}{\sqrt{\ell(\ell+1)(2\ell-1)(2\ell+3)(2-k)!(2+k)!(\ell-k-m)!(\ell+k+m)!}} \\ q_{k,1}(\ell,m) \ \ & = \ \ \smallf{(\ell k-2m) \sqrt{3(\ell-k-m+1)!(\ell+k+m+1)!}}{\sqrt{\ell(\ell+1)(\ell+2)(2\ell+1)(2-k)!(2+k)!(\ell-m)!(\ell+m)!}} \\ q_{k,2}(\ell,m) \ \ & = \ \ \smallf{\sqrt{6(\ell-k-m+2)!(\ell+k+m+2)!}}{\sqrt{(\ell+1)(\ell+2)(2\ell+1)(2\ell+3)(2-k)!(2+k)!(\ell-m)!(\ell+m)!}} \endaligned \end{equation} (see \cite[Table~27.9.4]{AS} or \cite[p.~637]{bied}). The Clebsch-Gordan coefficients satisfy the relation \begin{multline}\label{CG4} \sqrt{\tfrac{2}{3}} \left(\ell j+\smallf{j(j+1)}{2}-3\right) q_{0,j}(\ell,m) \ \ =\ \ \sqrt{(\ell-m)(\ell+1+m)}\, q_{-1,j}(\ell,m+1) \\ + \ \sqrt{(\ell+m)(\ell+1-m)}\, q_{1,j}(\ell,m-1)\,, \end{multline} which follows from \cite[34.3.8 and 34.3.14]{NIST} or direct computation from (\ref{CG3}). As a consequence of this and (\ref{CG1}), \begin{equation}\label{CG5} \aligned &\sqrt{(\ell-m_1)(\ell+1+m_1)} D^2_{-1,n} D^\ell_{m_1+1,m_2} +\sqrt{(\ell+m_1)(\ell+1-m_1)} D^2_{1,n} D^\ell_{m_1-1,m_2} \\ & \ \ \ \ \ \ \ \ = \ \ \sum_{-2 \le j \le 2} \Bigl( \sqrt{(\ell-m_1)(\ell+1+m_1)} \, q_{-1,j}(\ell,m_1+1) \\ & \ \ \ \ \ \ \ \ \ \ \ \qquad + \ \sqrt{(\ell+m_1)(\ell+1-m_1)} \,q_{1,j}(\ell,m_1-1) \Bigr) q_{n,j}(\ell,m_2) \, D^{\ell+j}_{m_1,m_2+n} \\ &\ \ \ \ \ \ \ \ = \ \ \sqrt{\smallf 23} \sum_{-2 \le j \le 2} \(j \ell +\smallf{j(j+1)}{2}-3\) q_{0,j}(\ell,m_1) \, q_{n,j}(\ell,m_2)\, D^{\ell+j}_{m_1,m_2+n} \endaligned \end{equation} for $n\in\{-2,-1,0,1,2\}$. \section{Principal series for $SL(3,{\mathbb{R}})$}\label{sec:princseries} In complete analogy to (\ref{Vnudef}), principal series for $G=SL(3,{\mathbb{R}})$ are defined as \begin{multline}\label{princsl3} V_{\lambda,\delta} \ \ = \ \ \left\{f:G\rightarrow{\mathbb{C}} \,|\right. \\ \left. f\(\tthree{a}{\star}{\star}{0}{b}{\star}{0}{0}{c}g\) \,=\, |a|^{1+\lambda_1}|b|^{\lambda_2}|c|^{-1+\lambda_3}\operatorname{sgn}(a)^{\delta_1}\operatorname{sgn}(b)^{\delta_2}\operatorname{sgn}(c)^{\delta_3}f(g) \right\} \end{multline} for $\lambda=(\l_1,\l_2,\l_3)\in{\mathbb{C}}^3$ satisfying $\l_1+\l_2+\l_3=0$ and $\d=(\d_1,\d_2,\d_3)\in({\mathbb{Z}}/2{\mathbb{Z}})^3$; $\pi_{\l,\d}$ again acts by right translation. 
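For example, when $\d=(0,0,0)$ the principal series $V_{\l,\d}$ contains a spherical (i.e., $K$-fixed) vector $f_0$, given in terms of the Iwasawa decomposition by
\begin{equation*}
f_0\(\tthree{a}{\star}{\star}{0}{b}{\star}{0}{0}{c}\,k\) \ \ = \ \ |a|^{1+\lambda_1}\,|b|^{\lambda_2}\,|c|^{-1+\lambda_3}\,, \ \ \ \ k\,\in\,K\,;
\end{equation*}
the exponents $1$, $0$, and $-1$ here constitute the usual $\rho$-shift, in analogy with the shift by $\f12$ in the exponent $\f12+\nu$ of (\ref{Vnudef}).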
The Iwasawa decomposition asserts that every element of $G$ is the product of an upper triangular matrix times an element of $K=SO(3,{\mathbb{R}})$. Thus all elements of $V_{\l,\d}$ are uniquely determined by their restrictions to $K$ and the transformation law in (\ref{princsl3}). Just as was the case for $SL(2,{\mathbb{R}})$ and $SO(2,{\mathbb{R}})$, not all functions on $SO(3,{\mathbb{R}})$ are restrictions of elements of $V_{\l,\d}$:~as before, the function must transform under any upper triangular matrix in $K$ according to the character defined in (\ref{princsl3}). \begin{lem}\label{lem:so3parity} Recall the Euler angles defined in (\ref{eulerangles}). A function $f:SO(3,{\mathbb{R}})\rightarrow{\mathbb{C}}$ is the restriction of an element of $V_{\l,\d}$ if and only if \begin{equation}\label{so3parityconditions} \aligned f(k(\a,\b,\g)) \ \ & = \ \ (-1)^{\d_1+\d_2}\,f(k(\a+\pi,\b,\g)) \\ \text{and} \ \ \ \ f(k(\a,\b,\g)) \ \ & = \ \ (-1)^{\d_2+\d_3}\,f(k(\pi-\a,\pi-\b,\pi+\g))\,. \endaligned \end{equation} \end{lem} \begin{proof} Consider the matrices \begin{equation}\label{order4group} m_{-1,-1,1} \ \ := \ \ \tthree{-1}000{-1}0001 \ \ \ \text{and} \ \ \ m_{1,-1,-1} \ \ := \ \ \tthree 1000{-1}000{-1} . \end{equation} Direct calculation shows that \begin{equation}\label{m11kabgs} \aligned m_{-1,-1,1}\,k(\a,\b,\g) \ \ & = \ \ k(\a+\pi ,\b,\g) \\ \text{and} \ \ \ m_{1,-1,-1}\,k(\a,\b,\g) \ \ & = \ \ k(\pi-\a ,\pi-\b,\pi+\g ) \endaligned \end{equation} for any $\a,\g\in{\mathbb{R}}/(2\pi {\mathbb{Z}})$ and $0\le \b\le \pi$. Thus (\ref{so3parityconditions}) must hold for functions $f\in V_{\l,\d}$. Conversely, since $m_{-1,-1,1}$ and $m_{1,-1,-1}$ generate all four upper triangular matrices in $K$, an extension of $f$ from $K=SO(3,{\mathbb{R}})$ to $G=SL(3,{\mathbb{R}})$ given by the transformation law in (\ref{princsl3}) is well-defined if it satisfies (\ref{so3parityconditions}). \end{proof} Like all functions on $K=SO(3,{\mathbb{R}})$, $D^{\ell}_{m_1,m_2}$ has a unique extension to $G=SL(3,{\mathbb{R}})$ satisfying the transformation law \begin{multline}\label{WignerDonG} D^{\ell}_{m_1,m_2}\( \tthree{a}{\star}{\star}{0}{b}{\star}{0}{0}{c}k(\a,\b,\g)\) \ \ = \\ a^{1+\lambda_1}b^{\lambda_2}c^{-1+\lambda_3} \, e^{i m_1 \a+i m_2 \g}\,d^{\ell}_{m_1,m_2}(\cos \b)\,, \ \ a,b,c\,>\,0\,. \end{multline} However, this extension may not be a well-defined element of the principal series $V_{\lambda,\delta}$ (\ref{princsl3}); rather, it is an element of the line bundle (\ref{linebundle}). Instead, certain linear combinations must be taken in order to account for the parities $(\d_1,\d_2,\d_3)$: \begin{lem}\label{sl3Kmultiplicities} The $V_\ell$-isotypic component of the principal series $V_{\l,\d}$ is spanned by \begin{equation}\label{basisforlisotypicprincseries} \{ v_{\ell,m_1,m_2} \ | \ \srel{m_1\,\equiv\, \d_1+\d_2\imod 2}{-\ell\,\le\,m_2\,\le\,\ell} \}\,, \ \ v_{\ell,m_1,m_2} \ := \ D^{\ell}_{m_1,m_2} + (-1)^{\d_1+\d_3+\ell}\, D^{\ell}_{-m_1,m_2}\,, \end{equation} where as always the subscripts $m_1$ and $m_2$ are integers satisfying the inequality $-\ell \le m_1,m_2\le \ell$. In particular, $V_\ell$ occurs in $V_{\l,\d}$ with multiplicity \begin{equation}\label{dldmult} m_{\ell,\d} \ \ = \ \ \left\{ \begin{array}{ll} \lfloor \smallf{\ell+1}{2} \rfloor, & \d_1+\d_2 \ \text{odd,} \\ 1+\lfloor \smallf{\ell}{2} \rfloor, & \d_1+\d_3+\ell\ \text{even}\,, \ \d_1+\d_2 \ \text{even,} \\ \lfloor \smallf{\ell}{2} \rfloor, & \d_1+\d_3+\ell\ \text{odd}\,,\ \d_1+\d_2 \ \text{even.} \\ \end{array} \right. 
\end{equation} \end{lem} \begin{proof} By \lemref{lem:so3parity} it suffices to determine which linear combinations of Wigner functions obey the two conditions in (\ref{so3parityconditions}). The transformation properties of (\ref{WignerDonG}) show that the first condition is equivalent to the congruence $m_1\equiv \d_1+\d_2\imod 2$. The expression (\ref{littleddef}) shows that $d^{\ell}_{-m_1,m_2}(x)=(-1)^{\ell+m_2}d^{\ell}_{m_1,m_2}(-x)$, from which one readily sees the compatibility of the second condition in (\ref{so3parityconditions}) with the sign $(-1)^{\d_1+\d_3+\ell}$ in (\ref{basisforlisotypicprincseries}). \end{proof} \noindent {\bf Examples:} \begin{itemize} \item If $\ell=0$ the basis in (\ref{basisforlisotypicprincseries}) is nonempty if and only if $\d_1\equiv \d_2\equiv \d_3\imod 2$, which is the well-known criterion for the existence of a spherical (i.e., $K$-fixed) vector in $V_{\l,\d}$. The possible cases here are thus $(\d_1,\d_2,\d_3)\equiv (0,0,0)$ or $(1,1,1)\imod 2$, which are actually equivalent because they are related by tensoring with the sign of the determinant character (since it is trivial on $SL(3,{\mathbb{R}})$). \item If $\ell=1$ and $\d_1+\d_2\equiv 0\imod 2$, then $m_1$ must vanish and hence $\d_1+\d_3\equiv 1\imod 2$ in order to have a nonempty basis. The possible cases of signs are $(\d_1,\d_2,\d_3)=(0,0,1)$ or $(1,1,0)$, which are again equivalent. \end{itemize} \section{The $({\frak g},K)$ module structure}\label{sec:gKmodulestructure} It is a consequence of the Casselman embedding theorem \cite{casselman} that any irreducible representation of $G=SL(3,{\mathbb{R}})$ is contained in some principal series representation $V_{\l,\d}$ (\ref{princsl3}). The Harish-Chandra module of $V_{\l,\d}$, its vector subspace of $K$-finite vectors, was seen in \lemref{sl3Kmultiplicities} to be isomorphic to $\oplus_{\ell\ge 0}V_{\ell}^{m_{\ell,\d}}$, where each copy of $V_{\ell}$ is explicitly indexed by certain integers $\ell, m_1 \ge 0$ described in (\ref{basisforlisotypicprincseries}), and the multiplicity $m_{\ell,\delta}$ is given in (\ref{dldmult}). An arbitrary subrepresentation $V$ of $V_{\l,\d}$ has a Harish-Chandra module isomorphic to $\oplus_{\ell\ge 0}V_\ell^{m(V,\ell)}$, where $0\le m(V,\ell)\le m_{\ell,\d}$. In \secref{sec:SL2R} we studied the Harish-Chandra modules of representations of $SL(2,{\mathbb{R}})$ and their Lie algebra actions; in this section we consider these for $G=SL(3,{\mathbb{R}})$. Let us first denote some elements of the complexified Lie algebra ${\frak g}={\frak{sl}}_3({\mathbb{C}})$ of $G$ as follows: \begin{equation}\label{sl3liealg} \gathered X_{1} \ \ = \ \ \liethree 010000000 \! , \ \ X_{2} \ \ = \ \ \liethree 000001000 \! , \ \ X_{3} \ \ = \ \ \liethree 001000000\!, \\ X_{-1} \ \ = \ \ \liethree 000100000 \! , \ \ \ X_{-2} \ \ = \ \ \liethree 000000010 \! , \ \ X_{-3} \ \ = \ \ \liethree 000000100\!, \\ H_1 \ \ = \ \ \liethree 1000{-1}0000 \! , \ \ H_2 \ \ = \ \ \liethree 00001000{-1}\!,\\ Y_1 \ \ = \ \ -X_1 + X_{-1} \ \ = \ \ \liethree 0{-1}0100000 \! , \ \ Y_2 \ \ = \ \ -X_2 + X_{-2} \ \ = \ \ \liethree 00000{-1}010\! , \\ Y_3 \ \ = \ \ -X_3 + X_{-3} \ \ = \ \ \liethree 00{-1}000100 \!, \\ Z_{-2} \ \ = \ \ \liethree 1i0i{-1}0000\! , \ \ Z_{-1} \ \ = \ \ \liethree 00i00{-1}i{-1}0\! , \ \ Z_{0} \ \ = \ \ \sqrt{\smallf 23}\liethree 10001000{-2}\! , \\ Z_{1} \ \ = \ \ \liethree 00i001i10\! , \ \ \text{and} \ \ Z_{2} \ \ = \ \ \liethree 1{-i}0{-i}{-1}0000\!. 
\endgathered \end{equation} The normalization factor of $\sqrt{\f 23}$ for $Z_0$ is included to simplify later formulas. The elements $\{X_1,X_2,X_3,X_{-1},X_{-2},X_{-3},H_1,H_2\}$ form a basis of $\frak{sl}_3({\mathbb{C}})$. The elements $\{Y_1,Y_2,Y_3\}$ form a basis of $\frak k=\frak{so}_3({\mathbb{C}})$, which extends to the basis \begin{equation}\label{convenientLiebasis} \{Y_1,Y_2,Y_3,Z_{-2},Z_{-1},Z_0,Z_1,Z_2\} \end{equation} of $\frak{sl}_3({\mathbb{C}})$, in which the last 5 elements form a basis of the orthogonal complement $\frak p$ of $\frak k$ under the Killing form. The elements $Z_j$ have been chosen so that \begin{equation}\label{Zjproperty} \tthree{\cos \theta}{-\sin\theta}{0}{\sin\theta}{\cos\theta}{0}001 Z_j \tthree{\cos \theta}{-\sin\theta}{0}{\sin\theta}{\cos\theta}{0}001\i \ \ = \ \ e^{ij\theta}\,Z_j\,; \end{equation} that is, they diagonalize the adjoint action of the common $SO(2,{\mathbb{R}})$-subgroup corresponding to the Euler angles $\alpha$ and $\gamma$. The rest of this section concerns explicit formulas for the action of the basis (\ref{convenientLiebasis}) as differential operators under right translation by $\pi$ as in (\ref{liederiv}). The formulas for differentiation by the first three elements, $Y_1,Y_2,Y_3$, are classical and are summarized as follows along with their action on Wigner functions \cite{bied,NIST}. In terms of the Euler angles (\ref{eulerangles}) on $K=SO(3,{\mathbb{R}})$, \begin{equation}\label{so3differentiation} \aligned \pi(Y_1) \ \ & = \ \ \f{\partial}{\partial \g} \,,\\ \pi(Y_2) \ \ & = \ \ \f{\sin \g}{\sin \b}\f{\partial}{\partial \a} \ + \ \cos(\g)\,\f{\partial}{\partial \b} \ - \ \f{\sin \g}{\tan \b}\f{\partial}{\partial \g}\,,\\ {\text{and}} \ \ \pi(Y_3) \ \ & = \ \ -\,\f{\cos \g}{\sin\b}\f{\partial}{\partial \a} \ + \ \sin(\g) \,\f{\partial}{\partial \b} \ + \ \f{\cos\g}{\tan \b} \,\f{\partial}{\partial \g}\,. \endaligned \end{equation} The action of the differential operators (\ref{so3differentiation}) on the basis of Wigner functions $D^{\ell}_{m_1,m_2}$ is given by \begin{equation}\label{so3diffonWignerD} \aligned \pi(Y_1)D^{\ell}_{m_1,m_2} \ \ & = \ \ i m_2 D^{\ell}_{m_1,m_2} \\ \pi(Y_2+iY_3)D^{\ell}_{m_1,m_2} \ \ & = \ \ \sqrt{\ell(\ell+1)-m_2(m_2+1)}\,D^{\ell}_{m_1,m_2+1} \\ \text{and} \ \ \ \pi(-Y_2+iY_3)D^{\ell}_{m_1,m_2} \ \ & = \ \ \sqrt{\ell(\ell+1)-m_2(m_2-1)}\,D^{\ell}_{m_1,m_2-1}\,, \endaligned \end{equation} very much in analogy to the raising and lowering actions in (\ref{sl2updownaction}). Here we recall that $\pi(Y_j)D^{\ell}_{m_1,m_2}(k):=\left. \f{d}{dt}\right|_{t=0} D^{\ell}_{m_1,m_2}(ke^{tY_j})$. In terms of differentiation by left translation $L(Y_j)D^{\ell}_{m_1,m_2}(k):=\left. \f{d}{dt}\right|_{t=0} D^{\ell}_{m_1,m_2}(e^{tY_j}k)$, \begin{equation}\label{leftderiv} \aligned L(Y_1) D^\ell_{m_1,m_2} \ \ = \ \ & i m_1 D^\ell_{m_1,m_2} \\ L(-Y_2+i Y_3) D^\ell_{m_1,m_2} \ \ = \ \ & \sqrt{\ell(\ell+1)-m_1(m_1+1)} D^\ell_{m_1+1,m_2} \\ \text{and} \ \ \ L(Y_2+i Y_3) D^\ell_{m_1,m_2} \ \ = \ \ & \sqrt{\ell(\ell+1)-m_1(m_1-1)} D^\ell_{m_1-1,m_2}\,. \endaligned \end{equation} This completely describes the Lie algebra action of $\frak k={\frak{so}_3}({\mathbb{C}})$ on the basis (\ref{basisforlisotypicprincseries}) of the Harish-Chandra module for $V_{\lambda,\delta}$. We now turn to the key calculation of the full Lie algebra action. These formulas will be insensitive to the value of the parity parameter $\delta$ in the definition of the principal series (\ref{princsl3}). 
For that reason, we will perform our calculations in the setting of the line bundle \begin{equation}\label{linebundle} {\mathcal L}_{\l} \ \ = \ \ \left\{f:G\rightarrow{\mathbb{C}} \,|\, f\(\tthree{a}{\star}{\star}{0}{b}{\star}{0}{0}{c}g\) \,=\, a^{1+\lambda_1}b^{\lambda_2}c^{-1+\lambda_3}f(g), \, a,b,c>0 \right\}, \end{equation} which contains $V_{\lambda,\delta}$ for any possible choice of $\delta$. Elements of ${\mathcal L}_{\l}$ can be identified with their restrictions to $SO(3)$, and so for the rest of this section we shall tacitly identify each Wigner function $D^{\ell}_{m_1,m_2}$ with its extension to $G$ in ${\mathcal L}_{\l}$ given in (\ref{WignerDonG}). The right translation action $\pi$ on $V_{\lambda,\delta}$ also extends to ${\mathcal L}_{\l}$, which enables us to study the Lie algebra differentiation directly on $D^{\ell}_{m_1,m_2}$; the action on the basis elements (\ref{basisforlisotypicprincseries}) of $V_{\lambda,\delta}$ will follow immediately from this. Though the passage to the line bundle $\mathcal L_\l$ is not completely necessary, it results in simpler formulas. Given $X\in \frak g$ and $k\in K$, write \begin{equation}\label{kXki} kXk^{-1} \ \ = \ \ X_{\frak{n}}(k) \ + \ X_{\frak{a}}(k) \ + \ X_{\frak{k}}(k)\,, \end{equation} with $X_{\frak{n}}\in \frak{n}={\mathbb{C}} X_1\oplus {\mathbb{C}} X_2\oplus {\mathbb{C}} X_3$, $X_{\frak{a}}\in \frak{a}={\mathbb{C}} H_1\oplus {\mathbb{C}} H_2$, and $X_{\frak{k}}\in \frak{k}={\mathbb{C}} Y_1\oplus {\mathbb{C}} Y_2\oplus {\mathbb{C}} Y_3$. Since $f(ke^{tX})=f(e^{tX_{\frak{n}}(k)+tX_{\frak{a}}(k)+tX_{\frak{k}}(k)}k)$, the derivative of this expression at $t=0$ is equal to \begin{equation}\label{liealgdiff5} [\pi(X)f](k) \ = \ \left. \f{d}{dt}\right|_{t=0}f(e^{tX_{\frak{n}}(k)}k) \ + \ \left. \f{d}{dt}\right|_{t=0}f(e^{tX_{\frak{a}}(k)}k) \ + \ \left. \f{d}{dt}\right|_{t=0}f(e^{tX_{\frak{k}}(k)}k)\,. \end{equation} Write $X_{\frak{k}}=b_1(k)Y_1+b_2(k)Y_2+b_3(k)Y_3$ and $X_{\frak{a}}=c_1(k)H_1+c_2(k)H_2$. Since $f\in {\mathcal L}_{\l}$ satisfies the transformation law (\ref{linebundle}), \begin{equation}\label{liealgdiff3a} \left. \f{d}{dt}\right|_{t=0}f(e^{tX_{\frak n}}g)\ \ \equiv \ \ 0 \,, \end{equation} while \begin{multline}\label{liealgdiff3b} \left. \f{d}{dt}\right|_{t=0}f(e^{tH_1}g) \ \ = \ \ (\l_1-\l_2+1)f(g) \\ \text{and} \ \ \left. \f{d}{dt}\right|_{t=0}f(e^{tH_2}g) \ \ = \ \ (\l_2-\l_3+1)f(g)\,. \end{multline} Combining this with (\ref{leftderiv}), we conclude \begin{multline}\label{liealgdiff4} [\pi(X)D^{\ell}_{m_1,m_2}](k) \\ = \ \ \( c_1(k)(\l_1-\l_2+1)+c_2(k)(\l_2-\l_3+1)+i m_1 b_1(k) \)D^{\ell}_{m_1,m_2}(k) \\ - \ (b_2(k)+ib_3(k))\smallf{\sqrt{\ell(\ell+1)-m_1(m_1+1)}}{2} D^{\ell}_{m_1+1,m_2}(k) \\ +\ (b_2(k)-ib_3(k))\smallf{\sqrt{\ell(\ell+1)-m_1(m_1-1)}}{2} D^{\ell}_{m_1-1,m_2}(k)\,, \end{multline} for any $\ell\ge 0$ and $-\ell\le m_1,m_2\le \ell$. Like all functions on $K$, each of the functions $c_1(k)$, $c_2(k)$, $b_1(k)$, $b_2(k)$, and $b_3(k)$ can be expanded as linear combinations of Wigner functions. Applying the Clebsch-Gordan multiplication rule for products of two Wigner functions then exhibits (\ref{liealgdiff4}) as an explicit linear combination of Wigner functions. 
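For example, take $X=Z_2$ from (\ref{sl3liealg}) and $k=e$:~since $Z_2=H_1-i(2X_1+Y_1)$, one has $X_{\frak{n}}(e)=-2iX_1$, $X_{\frak{a}}(e)=H_1$, and $X_{\frak{k}}(e)=-iY_1$, i.e., $c_1(e)=1$, $c_2(e)=0$, $b_1(e)=-i$, and $b_2(e)=b_3(e)=0$, so that (\ref{liealgdiff4}) collapses to
\begin{equation*}
[\pi(Z_2)D^{\ell}_{m_1,m_2}](e) \ \ = \ \ (\l_1-\l_2+1+m_1)\,D^{\ell}_{m_1,m_2}(e)\,.
\end{equation*}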
We shall now compute these for the basis elements $X=Z_n$, $n\in \{-2,-1,0,1,2\}$, which of course entails no loss of generality: \begin{equation}\label{JBpf2} \aligned c_1(Z_n) \ \ =& \ \ D^2_{-2,n}+\textstyle{\sqrt{\f 23}} \,D^2_{0,n} + D^2_{2,n} \\ c_2(Z_n) \ \ =& \ \ 2\,\textstyle{\sqrt{\f 23}}\, D^2_{0,n} \\ b_1(Z_n) \ \ =& \ \ i D^2_{-2,n}\,-\,i D^2_{2,n} \\ b_2(Z_n) \ \ =& \ \ -\!D^2_{-1,n}\,+\,D^2_{1,n} \\ b_3(Z_n) \ \ =& \ \ i D^2_{-1,n}\,+\,i D^2_{1,n}\,, \endaligned \end{equation} as can be checked via direct computation. We now state the action of the $Z_n$ on Wigner functions: \begin{thm} \label{thm:main} Let \begin{equation}\label{Lambda} \aligned \L^{(-2)}_j(\l,\ell,m_1) \ \ & = \ \ \l_1\,-\,\l_2\,+\,1\,-\,m_1 \\ \L^{(0)}_j(\l,\ell,m_1) \ \ & = \ \ \l_1\,+\,\l_2\,-\,2\l_3+j\ell+\f{j+j^2}{2} \\ \L^{(2)}_j(\l,\ell,m_1) \ \ & = \ \ \l_1\,-\,\l_2\,+\,1\,+\,m_1\,, \\ \endaligned \end{equation} $c_{-2}=c_2=1$, $c_0=\sqrt{\f 23}$, and recall the formulas for $q_{k,j}(\ell,m)=\langle 2k\ell m|(\ell+j)(k+m)\rangle$ given in (\ref{CG3}). For $n\in \{-2,-1,0,1,2\}$, \begin{equation}\label{liealgdiffZj} \pi(Z_n)D^{\ell}_{m_1,m_2} \ \ = \sum_{\srel{-2\le j \le 2}{k\,\in\,\{-2,0,2\}}} c_k \, q_{k,j}(\ell,m_1)\, q_{n,j}(\ell,m_2) \, \Lambda^{(k)}_j(\lambda,\ell,m_1) \, D^{\ell+j}_{m_1+k,m_2+n} \end{equation} as an identity of elements in the line bundle $\mathcal L_\l$ from (\ref{linebundle}). \end{thm} \begin{proof} Formulas (\ref{liealgdiff4}) and (\ref{JBpf2}) combine to show \begin{equation}\label{JB3} \aligned \pi(Z_n) D^\ell_{m_1,m_2} \ \ =& \ \ c_{-2} (\lambda_1-\lambda_2+1-m_1) D^2_{-2,n} D^\ell_{m_1,m_2} \\ &+c_{0} (\lambda_1+\lambda_2-2\lambda_3+3) D^2_{0,n} D^\ell_{m_1,m_2} \nonumber \\ &+c_{2} (\lambda_1-\lambda_2+1+m_1) D^2_{2,n} D^\ell_{m_1,m_2} \nonumber \\ &+\sqrt{\ell(\ell+1)-m_1(m_1+1)} D^2_{-1,n} D^\ell_{m_1+1,m_2} \nonumber \\ &+\sqrt{\ell(\ell+1)-m_1(m_1-1)} D^2_{1,n} D^\ell_{m_1-1,m_2}. \nonumber \endaligned \end{equation} The Theorem now follows from (\ref{CG1}) and (\ref{CG5}). \end{proof} Theorem~\ref{thm:intro} follows immediately from Theorem~\ref{thm:main}, the definition of $v_{\ell,m_1,m_2}$ given in (\ref{basisforlisotypicprincseries}), and the identity $q_{k,j}(\ell,m)=(-1)^j q_{-k,j}(\ell,-m)$. Formula (\ref{liealgdiffZj}) expresses Lie algebra derivatives of $D^{\ell}_{m_1,m_2}$ as linear combinations of $V_{\ell+j}$-isotypic vectors for $-2\le j\le 2$. We shall now explain how the operators $U_j$ defined by \begin{align}\label{Ujdef} U_j D^\ell_{m_1,m_2} \ \ = \ \ & \sum_{k\,\in\,\{-2,0,2\}} c_k \, q_{k,j}(\ell,m_1) \, \Lambda^{(k)}_j(\lambda,\ell,m_1)\, D^{\ell+j}_{m_1+k,m_2} \end{align} for $-2\le j \le 2$ can usually be written using Lie algebra differentiation under $\pi$. These map $V_\ell$-isotypic vectors to $V_{\ell+j}$-isotypic vectors, and thus separate out the contributions to (\ref{liealgdiffZj}) for fixed $j$ (aside from an essentially harmless shift of $m_2$). Since they do not require linear combinations, the operators $U_j$ are in a sense more analogous to Maass's raising and lowering operators (\ref{sl2updownaction}), and often more useful than the $\pi(Z_n)$. 
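For example, consider the spherical vector $D^{0}_{0,0}\equiv 1$:~triangularity forces $j=2$, and $q_{k,2}(0,0)=1$ for all $|k|\le 2$ by (\ref{CG3}), so (\ref{liealgdiffZj}) and (\ref{Ujdef}) specialize to
\begin{equation*}
U_2\, D^{0}_{0,0} \ \ = \ \ (\l_1-\l_2+1)\(D^{2}_{-2,0}+D^{2}_{2,0}\) \ + \ \sqrt{\smallf 23}\,(\l_1+\l_2-2\l_3+3)\,D^{2}_{0,0}\,,
\end{equation*}
while $U_j\, D^{0}_{0,0}=0$ for $j<2$; this matches a direct computation using (\ref{liealgdiff4}) and (\ref{JBpf2}).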
To write $U_j$ in terms of Lie algebra derivatives, let \begin{align} W^\ell_{\pm 2,m_2} \ \ & = \ \ \pi(\smallf{\mp Y_2+i Y_3}{\sqrt{\ell(\ell+1)-m_2(m_2\pm1)}}) \circ \pi( \smallf{\mp Y_2+i Y_3}{\sqrt{\ell(\ell+1)-(m_2\pm1)(m_2\pm2)}}) \circ \pi( Z_{\pm 2})\,, \\ W^\ell_{\pm 1,m_2} \ \ & = \ \ \pi\(\frac{\mp Y_2+i Y_3}{\sqrt{\ell(\ell+1)-m_2(m_2\pm1)}} \) \circ \pi(Z_{\pm 1}) \,, \ \ \text{and}\\ W^\ell_{0,m_2} \ \ & = \ \ \pi(Z_0)\,, \end{align} which are defined whenever the arguments of the square-roots are positive. Let $\Delta_K$ be the $SO(3)$ laplacian, which acts on Wigner functions by $\D_K D^\ell_{m_1,m_2}=\ell(\ell+1)D^\ell_{m_1,m_2}$ \cite[Section 3.8]{bied}. Then the (commutative) composition of operators \begin{align} P^\ell_j \ \ = \ \ \prod_{\substack{|k| \le 2\\ k\ne j \\ \ell+k \ge 0}} \frac{\Delta_K- (\ell+k)(\ell+k+1)}{(\ell+j)(\ell+j+1)-(\ell+k)(\ell+k+1)} \end{align} acts on Wigner functions by the formula \begin{equation}\label{Plproject} P^\ell_jD^{\ell'}_{m_1,m_2} \ \ = \ \ \left\{ \begin{array}{ll} 0, & \ell'\neq \ell+j \\ D^{\ell+j}_{m_1,m_2}, &\ell'=\ell+j \end{array} \right. \end{equation} for $\ell-2\le \ell' \le \ell+2$. After composing with this projection operator, it follows from (\ref{so3diffonWignerD}) and (\ref{liealgdiffZj}) that \begin{equation}\label{PWqU} P^\ell_{j}\circ W^{\ell+j}_{n,m_2} \ \ = \ \ q_{n,j}(\ell,m_2)\, U_j \end{equation} on the span of Wigner functions $D^{\ell}_{m_1,m_2}$ with $|m_1|\le\ell$. Furthermore, for any choice of $\ell,m_2,j$ for which $(\ell,j)\not\in \{(0,0),(0,1),(1,-1)\}$, there is some $-2\le n \le 2$ with $q_{n,j}(\ell,m_2) \ne 0$, so that $U_j$ coincides with the Lie algebra differentiation $q_{n,j}(\ell,m_2)^{-1} P^\ell_{j}\circ W^{\ell+j}_{n,m_2}$ on this span. \section{Examples of composition series}\label{sec:examples} In this section we present a selected assortment of examples of representations of $SL(3,{\mathbb{R}})$ that are related to automorphic forms, and explicitly describe their $K$-type structure in terms of Wigner functions. The treatment here is by no means exhaustive. However, the techniques we use are directly applicable to any representation of $SL(3,{\mathbb{R}})$. We also briefly explain how to compute the composition series in some examples, though for convenience we incorporate information from the {\sc atlas} software package \cite{atlas} (which computes the length of the composition series as well as the multiplicities of the $K$-types of each factor). A similar analysis was performed in \cite{howe} using a different model for principal series. It is also possible to use the theory of intertwining operators to obtain descriptions of invariant subspaces in terms of Wigner functions. Indeed, the simple intertwining operator corresponding to the Euler angle $\alpha$ in (\ref{eulerangles}) is diagonalized by Wigner functions, via what amounts to an $SL(2)$-calculation. The simple intertwining operator corresponding to the Euler angle $\beta$ does not act diagonally on this basis, but does act diagonally on a basis of Wigner-like functions defined instead using different Euler angles related by conjugation by a Weyl element. Using a permutation matrix in $K=SO(3,{\mathbb{R}})$ along with the fact that the Wigner $D$-matrix $(D^\ell_{m_1,m_2}(k))_{-\ell\le m_1,m_2\le \ell}$ is a representation of $K$, it is trivial to explicitly diagonalize this intertwining operator as well (but not both intertwining operators simultaneously). 
\subsection{Representations induced from the trivial representation of $SL(2,{\mathbb{R}})$}\label{sec:sub:inducedtriv} We shall now describe how \thmref{thm:main} recovers simpler, known formulas for some degenerate principal series representations (see \cite{howelee}). Let $\pi=\pi_s$ denote the subspace of $V_{(s-1/2,s+1/2,-2s),(0,0,0)}$ spanned by $D^{\ell}_{0,m_2}$, $|m_2|\le \ell$, with $\ell$ even (a parity condition forced by (\ref{basisforlisotypicprincseries})). Thus the underlying Harish-Chandra module of $\pi$ is \begin{equation}\label{inducedfromtriv1} HC(\pi) \ \ = \ \ \bigoplus_{\srel{\ell\, \ge \,0}{\ell \,\in\,2{\mathbb{Z}}}} V_{\ell} \end{equation} and each $V_{\ell}$ has basis $\{D^{\ell}_{0,m_2}|-\ell \le m_2 \le \ell\}$. It is not hard to see that \begin{equation}\label{indfromtrivial1} \pi \ \ = \ \ \text{Ind}_{P_{2,1}}^G\psi_s \end{equation} is induced from a quasicharacter $\psi_s$ of a maximal parabolic subgroup $P_{2,1}$ of $G=SL(3,{\mathbb{R}})$, from which it follows that it is an $SL(3,{\mathbb{R}})$-invariant subspace of $V_{(s-1/2,s+1/2,-2s),(0,0,0)}$ (equivalently, that (\ref{inducedfromtriv1}) is ${\frak{sl}}(3,{\mathbb{R}})$-invariant). The formulas in \secref{sec:gKmodulestructure} give much finer information. With $(\l_1,\l_2,\l_3)=(s-1/2,s+1/2,-2s)$ and $m_1=0$, (\ref{Lambda}) reads \begin{equation}\label{Lambdadegen} \aligned \L^{(-2)}_j(\l,\ell,m_1) \ \ & = \ \ 0 \\ \L^{(0)}_j(\l,\ell,m_1) \ \ & = \ \ 6s +j\ell+\smallf{j+j^2}{2} \\ \L^{(2)}_j(\l,\ell,m_1) \ \ & = \ \ 0 \,.\\ \endaligned \end{equation} Note also that $D^{\ell}_{0,m_2}$'s extension (\ref{WignerDonG}) is a well-defined element of $V_{\l,\d}$ for $\ell$ even. Thus, in this situation formula (\ref{liealgdiffZj}) from \thmref{thm:main} states \begin{equation}\label{liealgdiffZjdegen12} \aligned \pi(Z_n)D^{\ell}_{0,m_2} \ \ & = \ \ \sqrt{\smallf 23}\,\sum_{j\,\in\,\{-2,0,2\}} q_{0,j}(\ell,0)\,q_{n,j}(\ell,m_2)\,(6s+j\ell+\smallf{j+j^2}{2})\, D^{\ell+j}_{0,m_2+n}\,,\\ \endaligned \end{equation} since $q_{0,-1}(\ell,0)=q_{0,1}(\ell,0)=0$ by (\ref{CG3}). Furthermore, (\ref{Ujdef}) specializes to the formula \begin{equation}\label{liealgeUjdegen} U_j D^{\ell}_{0,m_2} \ \ = \ \ \sqrt{\smallf 23}\, q_{0,j}(\ell,0)\,(6s+j\ell+\smallf{j+j^2}{2})\,D^{\ell+j}_{0,m_2} \,, \end{equation} which vanishes if $j=\pm 1$. Here $U_{\pm 2}$ play the role of the raising and lowering operators in (\ref{sl2updownaction}), from which the reducibility of $\pi$ can be easily deduced as it was for the principal series in \thmref{thm:irrepsofSL2R}. \begin{figure}\label{fig:1} \caption{An illustration of (\ref{liealgeUjdegen}) for representations of $SL(3,{\mathbb{R}})$ induced from the trivial representation of $SL(2,{\mathbb{R}})$. Each copy of $V_\ell$ is spanned by $\{D^{\ell}_{0,m_2}|-\ell\le m_2\le \ell\}$. At certain $s\in \frac{1}{6}{\mathbb{Z}}$ the operators $U_{\pm 2}$ may be trivial, in which case the representation reduces (analogously to the reducibility of the $SL(2,{\mathbb{R}})$ principal series in Theorem~\ref{thm:irrepsofSL2R}).} \begin{center} \includegraphics[width=5.5in]{jack-pic-1.png} \end{center} \end{figure} \subsection{Cohomological representations} In the rest of this section we will consider the decomposition of the full principal series \begin{equation}\label{fullprinck} V_{(-\f{k-1}{2},\f{k-1}2,0),(k,0,k)}\ , \ \ \ \ k\,\in\,{\mathbb{Z}}_{\ge 2}\,. \end{equation} (The principal series $V_{(-\f{k-1}{2},\f{k-1}2,0),(0,k,k)}$ has a very similar analysis.) 
Recall from (\ref{dldmult}) that $V_\ell$ occurs with multiplicity $m_{\ell,\d}$, which for $k$ even is $1+\lfloor \f\ell2 \rfloor$ if $\ell$ is even and $ \lfloor \f\ell2 \rfloor$ if $\ell$ is odd; for $k$ odd $m_{\ell,\delta}=\lfloor \f{\ell+1}{2}\rfloor$. The elements $ v_{\ell,m_1,m_2} = D^\ell_{m_1,m_2} + (-1)^{\ell} D^{\ell}_{-m_1,m_2}$ from (\ref{basisforlisotypicprincseries}) are nonzero elements of (\ref{fullprinck}) when $m_1\neq 0$ and $m_1 \equiv k\imod 2$; $v_{\ell,0,m_2}\neq 0$ when $k$ and $\ell$ are even. \begin{thm}\label{thm:evenk} For even integers $k\ge 2$, the spherical representation $V_{(-\f{k-1}{2},\f{k-1}2,0),(0,0,0)}$ has 2 constituents, $V_A$ and $V_B$, which contain $V_\ell$ with the following multiplicities: \begin{itemize} \item $m_{\ell,A}=\max(0,1+\lfloor \f{\ell-k}{2}\rfloor)$, and \item $m_{\ell,B}=\left\{ \begin{array}{ll} \min(\lfloor \f\ell2 \rfloor,\f{k-2}{2}), & \ell \text{ odd} \\ \min(1+\lfloor \f\ell2 \rfloor,\f k2), & \ell \text{ even.} \end{array} \right.$ \end{itemize} The representation $V_B$ is the subrepresentation spanned by the $v_{\ell,m_1,m_2}$ having $0\le m_1<k$ (and satisfying the above parity constraints), whereas $V_A$ is the quotient of $V_{(-\f{k-1}{2},\f{k-1}2,0),(0,0,0)}$ by $V_B$. The dual representation $V_{-(-\f{k-1}{2},\f{k-1}2,0),(0,0,0)}$ contains $V_A$ as the subrepresentation spanned by the $v_{\ell,m_1,m_2}$ having $ m_1\ge k$ (and satisfying the above parity constraints), and $V_B$ as a quotient. The composition series for the various elements of the Weyl orbit of $(-\f{k-1}{2},\f{k-1}{2},0)$ have the form $\{0\} \subset V \subset V_{\lambda,(0,0,0)}$ with: \begin{itemize} \item $V\cong V_A$ and $V_{\lambda,(0,0,0)}/V\cong V_B$ if $\lambda=(\f{k-1}{2},-\f{k-1}{2},0)$, $(0,\f{k-1}{2},-\f{k-1}{2})$, or $(\f{k-1}{2},0,-\f{k-1}{2})$; and \item $V\cong V_B$ and $V_{\lambda,(0,0,0)}/V\cong V_A$ if $\lambda=(-\f{k-1}{2},\f{k-1}{2},0)$, $(0,-\f{k-1}{2},\f{k-1}{2})$, or $(-\f{k-1}{2},0,\f{k-1}{2})$. \end{itemize} \end{thm} \noindent Note that when $k=2$, $V_B$ coincides with the $s=0$ case of Section~\ref{sec:sub:inducedtriv}. \begin{figure}[t]\label{fig:2} \caption{A schematic illustration of Theorem~\ref{thm:evenk}.} \begin{center} \includegraphics[width=5.3in]{jack-pic-2.png} \end{center} \end{figure} \begin{proof} The fact that $V_{(-\f{k-1}{2},\f{k-1}2,0),(0,0,0)}$ has a composition series of length 2 for $k\in 2{\mathbb{Z}}_{\ge 1}$ is a standard consequence of the Beilinson-Bernstein theory. Both this and the assertions about the composition series for $\l=\pm(\f{k-1}{2},0,-\f{k-1}{2})$ can be directly verified using the {\sc atlas} software package \cite{atlas}. Let $W$ be the subspace of $V_{(-\f{k-1}{2},\f{k-1}2,0),(0,0,0)}$ spanned by its basis vectors $v_{\ell,m_1,m_2}$ with $m_1<k$. Since $\L_j^{(2)}((-\f{k-1}{2},\f{k-1}2,0),\ell,k-2)$ and $\L_j^{(-2)}((-\f{k-1}{2},\f{k-1}2,0),\ell,2-k)$ in (\ref{Lambda}) both vanish, formula (\ref{liealgdiffZj}) shows that $\pi(Z_n)$ preserves $W$; since the composition series has length 2, $W$ is consequently irreducible. Similarly, the subspace of $V_{-(-\f{k-1}{2},\f{k-1}2,0),(0,0,0)}$ spanned by its basis vectors $v_{\ell,m_1,m_2}$ with $m_1\ge k$ is also irreducible. The multiplicity formulas follow from this, as does the composition series assertion for $\l=(-\f{k-1}{2},\f{k-1}2,0)$. The remaining three composition series assertions follow by duality and the contragredient symmetry. 
\end{proof} \subsection{Constant coefficients cuspidal cohomology} We next turn to the representation (\ref{fullprinck}) for $k=3$, which is spanned by the $v_{\ell,m_1,m_2}$ with $\ell>0$, $m_1$ odd, and $-\ell\le m_1,m_2 \le \ell$. It has a composition series \begin{equation}\label{constantcoeffcohom1} \{0\} \ \ \subset \ \ V^{(1)}_{\text{odd}} \ \ \subset \ \ V^{(1)} \ \ \subset \ \ V_{(-1,1,0),(1,0,1)} \end{equation} of length 3, where $V^{(1)}$ is spanned by the $v_{\ell,1,m_2}$ with $\ell>0$ and $V^{(1)}_{\text{odd}}$ is spanned by the $v_{\ell,1,m_2}$ with $\ell>0$ and odd. As in Theorem~\ref{thm:evenk}, it follows from (\ref{Lambda})-(\ref{liealgdiffZj}) that $V^{(1)}_{\text{odd}}$ and $V^{(1)}/V^{(1)}_{\text{odd}}$ are irreducible. The quotient $V_{(-1,1,0),(1,0,1)}/V^{(1)}$ is the archimedean representation associated to constant coefficients automorphic cohomology (it is the symmetric square $\bigwedge\!D_2$ of the discrete series $D_2$ of $SL(2,{\mathbb{R}})$). By duality, $\bigwedge\!D_2$ is an irreducible subspace of $V_{(1,-1,0),(1,0,1)}$, which can be shown as above to be spanned by the $v_{\ell,m_1,m_2}$ with $\ell>0$ and $m_1\ge 3$ odd. \begin{figure}[t]\label{fig:3} \caption{An illustration of the cohomological representation $\bigwedge\!D_2$ as a quotient of $V_{(-1,1,0),(1,0,1)}$.} \begin{center} \includegraphics[width=5.3in]{jack-pic-3.png} \end{center} \end{figure} \subsection{Symmetric square of Ramanujan's cusp form $\Delta$} Our final example is (\ref{fullprinck}) for $k=23$. Its dual $V_{(11,-11,0),(1,0,1)}$ contains the irreducible representation $\bigwedge\!D_{12}$, which is the archimedean representation associated to the symmetric square of Ramanujan's famous weight 12 cusp form $\Delta$. This subspace of $V_{(11,-11,0),(1,0,1)}$ is spanned by the $v_{\ell,m_1,m_2}$ with $\ell\ge 23$ and $m_1\ge 23$ odd. The other composition factors in these principal series are more complicated to describe. \section{Formulas for action as differential operators on $G/K$} \label{sec:diffopformulas} With applications of the previous results to automorphic forms and string theory in mind, it may be useful to write the action of the differential operators (\ref{liealgdiffZj}) on functions on the symmetric space $SL(3,{\mathbb{R}})/SO(3)$. We first review this for $SL(2,{\mathbb{R}})$ (where the results are well-known and classical). In this section $\pi$ will be used to denote the right translation operator. \subsection{$SL(2,{\mathbb{R}})/SO(2)$} Consider the commonly-used coordinate parametrization \begin{multline}\label{sl2rcoordinates1} g \ \ = \ \ \ttwo1x01 \ttwo{y^{1/2}}{0}{0}{y^{-1/2}}\ttwo{\cos(\theta)}{-\sin(\theta)}{\sin(\theta)}{\cos\theta} \ \ \in \ \ SL(2,{\mathbb{R}})\,, \\ x\,\in\,{\mathbb{R}}\,,\ y\,>\,0\,, \ \ \theta\,\in\,{\mathbb{R}}/(2\pi{\mathbb{Z}})\,, \end{multline} of $G=SL(2,{\mathbb{R}})$. Let $f$ be an $SO(2)$-isotypic function on $G$, meaning it satisfies the right-transformation law \begin{equation}\label{sl2coordinates2} f\(g\ttwo{\cos(\theta)}{-\sin(\theta)}{\sin(\theta)}{\cos(\theta)}\) \ \ = \ \ e^{i\ell\theta}\,f(g)\,,\ \text{for all} \ \theta\,\in\,{\mathbb{R}}\,, \end{equation} or, equivalently, the differential equation \begin{equation}\label{sl2diffops1} \pi\!\(\lietwo{0}{-1}{1}{0}\)f \ \ = \ \ i\,\ell\, f \end{equation} for some $\ell\in{\mathbb{Z}}$. 
Then $f$ is determined by its restriction to upper triangular matrices $\ttwo1x01 \ttwo{y^{1/2}}{0}{0}{y^{-1/2}}$, and the action of the rest of the Lie algebra on $f$ is determined via linear combinations by the formulas \begin{equation}\label{sl2diffops2} \aligned \(\pi\!\(\lietwo{1}{0}{0}{-1}\)\!f\)\!\(\ttwo1x01 \ttwo{y^{1/2}}{0}{0}{y^{-1/2}}\) \ \ & = \ \ 2\,y\,\smallf{\partial}{\partial y}\,f\(\ttwo1x01 \ttwo{y^{1/2}}{0}{0}{y^{-1/2}}\) \\ \text{and} \ \ \ \ \(\pi\!\(\lietwo0100\) \! f\)\!\(\ttwo1x01 \ttwo{y^{1/2}}{0}{0}{y^{-1/2}}\) \ \ & = \ \ y\,\smallf{\partial}{\partial x}\,f\(\ttwo1x01 \ttwo{y^{1/2}}{0}{0}{y^{-1/2}}\) \endaligned \end{equation} via linear combinations. For example, the raising and lowering operators in (\ref{sl2updownaction}) correspond to the differential operators \begin{equation}\label{sl2diffops3} \( \pi\!\(\lietwo{1}{\mp i}{\mp i}{-1}\)\!f\)\!\(\ttwo1x01 \ttwo{y^{1/2}}{0}{0}{y^{-1/2}}\) \ = \ (\mp 2iy\smallf{\partial}{\partial x}+ 2y\smallf{\partial}{\partial y} \pm \ell )f\(\ttwo1x01 \ttwo{y^{1/2}}{0}{0}{y^{-1/2}}\) \end{equation} introduced by Maass. \subsection{$SL(3,{\mathbb{R}})/SO(3)$} We parameterize elements of $G=SL(3,{\mathbb{R}})$ as \begin{equation}\label{sl3coordinates1} g \ \ = \ \ \tthree{1}{x_1}{x_3}01{x_2}001\tthree{y_1^{2/3}y_2^{1/3}}000{y_1^{-1/3}y_2^{1/3}}000{y_1^{-1/3}y_2^{-2/3}} k(\a,\b,\g) \ \ \in \ \ G\,, \end{equation} where $x_1,x_2,x_3\in {\mathbb{R}}$, $y_1,y_2>0$, and $k(\a,\b,\g)\in K=SO(3,{\mathbb{R}})$ is defined in (\ref{eulerangles}). Assume that $f$ is an $SO(3)$-isotypic function on $G$ satisfying the transformation law \begin{equation}\label{sl3coordinates2} f\(g\tthree{\cos(\theta)}{-\sin(\theta)}0{\sin(\theta)}{\cos(\theta)}0001\) \ \ = \ \ e^{im_2\theta}\,f(g)\,,\ \text{for all} \ \theta\,\in\,{\mathbb{R}}\,, \end{equation} which is the case for $D^{\ell}_{m_1,m_2}$ (see (\ref{WignerDdef})) and hence for the image of a $K$-isotypic vector corresponding to $v_{\ell,m_1,m_2}$ in an irreducible representation of $G$. We refer back to (\ref{sl3liealg}) for notation of Lie algebra elements. \begin{lem}\label{lem:diffops} Let $g(x_1,x_2,x_3,y_1,y_2)$ denote the product of the first two matrices in (\ref{sl3coordinates1}). Then \begin{equation}\label{diffopssl3a} ( \pi(X)f ) (g(x_1,x_2,x_3,y_1,y_2)) \ \ = \ \ \mathcal{D}f(g(x_1,x_2,x_3,y_1,y_2)) \end{equation} for the following pairs of Lie algebra elements $X$ and differential operators $\mathcal{D}$: \begin{equation}\label{diffopssl3b} \begin{tabular}{|c|c|} \hline $X$ & $\mathcal{D}$ \\ \hline $Y_1$ & $im_2$ \\ $H_1$ & $2y_1\f{\partial}{\partial y_1}-y_2\f{\partial}{\partial y_2}$ \\ $H_2$ & $-y_1\f{\partial}{\partial y_1}+2y_2\f{\partial}{\partial y_2}$ \\ $X_1$ & $y_1\f{\partial}{\partial x_1}$ \\ $X_2$ & $y_2\f{\partial}{\partial x_2}+x_1y_2\f{\partial}{\partial x_3}$ \\ $X_3$ & $y_1y_2\f{\partial}{\partial x_3}$ \\ $Z_{-2}$ & $-m_2 + 2iy_1\f{\partial}{\partial x_1}+2y_1\f{\partial}{\partial y_1}-y_2\f{\partial}{\partial y_2}$ \\ $Z_0$ & $\sqrt{6}y_2\f{\partial}{\partial y_2}$ \\ $Z_{2}$ & $m_2 - 2iy_1\f{\partial}{\partial x_1}+2y_1\f{\partial}{\partial y_1}-y_2\f{\partial}{\partial y_2}$ \\ \hline \end{tabular} \end{equation} \end{lem} \noindent Expressions for $Y_2$ and $Y_3$ (and hence $Z_{\pm 1}$) can also be given, but they are more complicated in the absence of an assumption such as (\ref{sl3coordinates2}). 
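\noindent The last three rows of (\ref{diffopssl3b}) follow from the first six rows, since by (\ref{sl3liealg})
\begin{equation*}
Z_{\pm 2} \ \ = \ \ H_1 \ \mp \ i\,(2X_1+Y_1) \ \ \ \ \text{and} \ \ \ \ Z_{0} \ \ = \ \ \sqrt{\smallf 23}\,(H_1+2\,H_2)\,;
\end{equation*}
for example, $\pi(Z_2)=\pi(H_1)-2i\,\pi(X_1)-i\,\pi(Y_1)$ recovers the entry $m_2-2iy_1\f{\partial}{\partial x_1}+2y_1\f{\partial}{\partial y_1}-y_2\f{\partial}{\partial y_2}$ upon using $\pi(Y_1)f=im_2f$.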
\begin{bibsection} \begin{biblist} \bib{AS}{book}{ author={Abramowitz, Milton}, author={Stegun, Irene A.}, title={Handbook of mathematical functions with formulas, graphs, and mathematical tables}, series={National Bureau of Standards Applied Mathematics Series}, volume={55}, publisher={For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C.}, date={1964}, pages={xiv+1046}, review={\MR{0167642}}, note={\url{http://people.math.sfu.ca/~cbm/aands/toc.htm}} } \bib{atlas}{book}{title={Atlas of Lie Groups and Representations},note={\url{http://www.liegroups.org/}},} \bib{Bargmann}{article}{ author={Bargmann, V.}, title={Irreducible unitary representations of the Lorentz group}, journal={Ann. of Math. (2)}, volume={48}, date={1947}, pages={568--640}, issn={0003-486X}, review={\MR{0021942}}, } \bib{bied}{book}{ author={Biedenharn, L. C.}, author={Louck, J. D.}, title={Angular momentum in quantum physics}, series={Encyclopedia of Mathematics and its Applications}, volume={8}, note={Theory and application; With a foreword by Peter A. Carruthers}, publisher={Addison-Wesley Publishing Co., Reading, Mass.}, date={1981}, pages={xxxii+716}, isbn={0-201-13507-8}, review={\MR{635121}}, } \bib{buttcane}{article}{author={Buttcane, Jack},title={Higher weight on GL(3), II: The cusp forms},note={Preprint (2017). arXiv:1701.04380},} \bib{casselman}{article}{ author={Casselman, W.}, title={Jacquet modules for real reductive groups}, conference={ title={Proceedings of the International Congress of Mathematicians (Helsinki, 1978)}, }, book={ publisher={Acad. Sci. Fennica, Helsinki}, }, date={1980}, pages={557--563}, review={\MR{562655}}, } \bib{Gelfand-Graev}{book}{ author={Gel{\cprime}fand, I. M.}, author={Graev, M. I.}, author={Pyatetskii-Shapiro, I. I.}, title={Generalized functions. Vol. 6}, note={Representation theory and automorphic functions; Translated from the 1966 Russian original [ MR0220673] by K. A. Hirsch; Reprint of the 1969 English translation [ MR0233772]}, publisher={AMS Chelsea Publishing, Providence, RI}, date={2016}, pages={xvi+426}, isbn={978-1-4704-2664-4}, review={\MR{3468638}}, } \bib{GLMZ}{article}{author={Gustafsson, Henrik}, author={Liu, Baiying}, author={Miller, Stephen D.}, author={Zhang, Zhuohui}, title={in progress}} \bib{howe}{article}{ author={Howe, Roger}, title={$K$-type structure in the principal series of $\rm GL_3$. I}, conference={ title={Analysis on homogeneous spaces and representation theory of Lie groups, Okayama--Kyoto (1997)}, }, book={ series={Adv. Stud. Pure Math.}, volume={26}, publisher={Math. Soc. Japan, Tokyo}, }, date={2000}, pages={77--98}, review={\MR{1770718}}, } \bib{howelee}{article}{ author={Howe, Roger}, author={Lee, Soo Teck}, title={Degenerate principal series representations of ${\rm GL}_n(\bold C)$ and ${\rm GL}_n(\bold R)$}, journal={J. Funct. Anal.}, volume={166}, date={1999}, number={2}, pages={244--309}, issn={0022-1236}, review={\MR{1707754}}, doi={10.1006/jfan.1999.3427}, } \bib{hurwitz}{article}{author={Hurwitz, Adolf}, title={\"Uber die Erzeugung der Invarianten durch Integration}, journal={Nachrichten Ges. Wiss. G\"ottingen}, year={1897}, pages={71--90},} \bib{miyazaki}{article}{ author={Miyazaki, Tadashi}, title={The structures of standard $(\germ g,K)$-modules of ${\rm SL}(3,\bold R)$}, journal={Glas. Mat. Ser. 
III}, volume={43(63)}, date={2008}, number={2}, pages={337--362}, issn={0017-095X}, review={\MR{2460704}}, doi={10.3336/gm.43.2.08}, } \bib{NIST}{book}{title={NIST Digital Library of Mathematical Functions},note={\url{http://dlmf.nist.gov/}},} \bib{Zhang}{thesis}{author={Zhang, Zhuohui}, title={Rutgers University Ph.D. Dissertation}, year={2017}} \end{biblist} \end{bibsection} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Image captioning attracts considerable attention in both natural language processing and computer vision. The task aims to generate a description in natural language grounded on the input image. It is a very challenging yet interesting task. On the one hand, it has to identify the objects in the image, associate the objects, and express them in a fluent sentence, each of which is a difficult subtask. On the other hand, it combines two important fields in artificial intelligence, namely, natural language processing and computer vision. More importantly, it has a wide range of applications, including text-based image retrieval, helping visually impaired people see \cite{wu2017automatic}, human-robot interaction \cite{das2017visual}, etc. \begin{figure}[t] \centering \footnotesize \begin{tabular}{@{}c p{0.23\textwidth}@{}} \multirow{3}{*}{\includegraphics[width=0.25\textwidth]{ex.jpg}} & \textbf{Soft-Attention}: a open laptop computer sitting on top of a table \\ & \textbf{ATT-FCN}: a dog sitting on a desk with a laptop computer and mouse\\ & \textbf{simNet}: a open laptop computer and mouse sitting on a table with a dog nearby\\ \end{tabular} \caption{Examples of using different attention mechanisms. Soft-Attention \cite{xu2015show} is based on visual attention. The generated caption is \textbf{detailed} in that it knows the visual attributes well (e.g. \textit{open}). However, it omits many objects (e.g. \textit{mouse} and \textit{dog}). ATT-FCN \cite{you2016image} is based on semantic attention. The generated caption is more \textbf{comprehensive} in that it includes more objects. However, it is bad at associating details with the objects (e.g. missing \textit{open} and mislocating \textit{dog}). simNet is our proposal that effectively merges the two kinds of attention and generates a detailed and comprehensive caption.} \label{fig:ex-intro} \end{figure} Models based on the encoder-decoder framework have shown success in image captioning. Depending on the pivot representation, they can be roughly categorized into models based on visual information \cite{vinyals2015show,chen2015mind,mao2014deep,karpathy2014deep,karpathy2017deep}, and models based on conceptual information \cite{fang2015captions,you2016image,wu2016what}. The latter explicitly provides the visual words (e.g. \textit{dog}, \textit{sit}, \textit{red}) to the decoder instead of the image features, and is more effective in image captioning according to the evaluation on benchmark datasets. However, the models based on conceptual information have a major drawback: it is hard for the model to associate the details with the specific objects in the image, because the visual words are inherently unordered in semantics. Figure~\ref{fig:ex-intro} shows an example. For semantic attention, although \textit{open} is provided as a visual word, due to the insufficient use of visual information, the model gets confused about which objects \textit{open} should be associated with and thus discards \textit{open} in the caption. The model may even associate the details incorrectly, which is the case for the position of the dog. In contrast, models based on the visual information are often accurate in details but have difficulty in describing the image comprehensively and tend to only describe a subregion. \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth]{sketch-new} \caption{Illustration of the main idea. 
The visual information captured by the CNN and the conceptual information in the extracted topics are first condensed by attention mechanisms, respectively. The merging gate then adaptively adjusts the weight between the visual information and the conceptual information for generating the caption.} \label{fig:ex-running} \end{figure} In this work, we get the best of both worlds and integrate visual attention and semantic attention for generating captions that are both detailed and comprehensive. We propose a \textbf{Stepwise Image-Topic Merging Network} as the decoder to guide the information flow between the image and the extracted topics. At each time step, the decoder first extracts focal information from the image. Then, it decides which topics are most probable for the time step. Finally, it attends differently to the visual information and the conceptual information to generate the output word. Hence, the model can efficiently merge the two kinds of information, leading to outstanding results in image captioning. Overall, the main contributions of this work are: \begin{itemize} \item We propose a novel approach that can effectively merge the information in the image and the topics to generate cohesive captions that are both detailed and comprehensive. We refine and combine two previous competing attention mechanisms, namely visual attention and semantic attention, with an importance-based merging gate that effectively combines and balances the two kinds of information. \item The proposed approach outperforms the state-of-the-art methods substantially on two benchmark datasets, Flickr30k and COCO, in terms of SPICE, which correlates best with human judgments. Systematic analysis shows that the merging gate contributes the most to the overall improvement. \end{itemize} \begin{figure*}[t] \centering \subcaptionbox{The overall framework.\label{fig:overall}}{\includegraphics[height=6.5\baselineskip]{model-new}} \hfil \subcaptionbox{The data flow in the proposed simNet.\label{fig:tmn}}{\includegraphics[height=6.5\baselineskip]{tmn}} \caption{Illustration of the proposed approach. In the right plot, we use $\phi, \psi, \chi$ to denote input attention, output attention, and topic attention, respectively.} \label{fig:model} \end{figure*} \section{Related Work} A large number of systems have been proposed for image captioning. Neural models based on the encoder-decoder framework have been attracting increased attention in the last few years in several multi-discipline tasks, such as neural image/video captioning (NIC) and visual question answering (VQA) \cite{vinyals2015show,karpathy2014deep,venugopalan2015sequence,zhao2016partial,zhang2017visual}. State-of-the-art neural approaches \cite{anderson2018bottom,liu2018show,lu2018neural} incorporate the attention mechanism from machine translation \cite{bahdanau2014neural} to generate grounded image captions. Based on what they attend to, the models can be categorized into visual attention models and semantic attention models. Visual attention models pay attention to the image features generated by CNNs. CNNs are typically pre-trained on the image recognition task to extract general visual signals \cite{xu2015show,chen2017scacnn,lu2017knowing}. The visual attention is expected to find the most relevant image regions in generating the caption. Most recently, image features based on predicted bounding boxes are used \cite{anderson2018bottom,lu2018neural}.
The advantages are that the attention no longer needs to find the relevant generic regions by itself but instead finds relevant bounding boxes that are object-oriented and can serve as semantic guides. However, the drawback is that predicting bounding boxes is difficult, which requires large datasets \cite{krishna2017visual} and complex models \cite{ren2015faster,ren2017faster}. Semantic attention models pay attention to a predicted set of semantic concepts \cite{fang2015captions,you2016image,wu2016what}. The semantic concepts are the most frequent words in the captions, and the extractor can be trained using various methods but typically is only trained on the given image captioning dataset. This kind of approach can be seen as an extension of the earlier template-based slot-filling approaches \cite{farhadi2010every,kulkarni2013babytalk}. However, little work has studied how to combine the two kinds of attention models to take advantage of both of them. On the one hand, due to the limited number of visual features, it is hard to provide comprehensive information to the decoder. On the other hand, the extracted semantic concepts are unordered, making it hard for the decoder to portray the details of the objects correctly. This work focuses on combining the visual attention and the semantic attention efficiently to address their drawbacks and make use of their merits. The visual attention is designed to focus on the attributes and the relationships of the objects, while the semantic attention only includes words that are objects so that the extracted topics can be more accurate. The combination is controlled by the importance-based merging mechanism that decides at each time step which kind of information should be relied on. The goal is to generate image captions that are both detailed and comprehensive.\nocite{lin18deconvolution,shuming2018,jingjing2018Upaired,xu2018emnlp} \section{Approach} Our proposed model consists of an image encoder, a topic extractor, and a stepwise merging decoder. Figure \ref{fig:model} shows a sketch. We first briefly introduce the image encoder and the topic extractor. Then, we introduce the proposed stepwise image-topic merging decoder in detail. \subsection{Image Encoder} For an input image, the image encoder expresses the image as a series of visual feature vectors $\vec{V}=\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}, \vec{v}_i \in \mathbb{R}^g$. Each feature corresponds to a different perspective of the image. The visual features serve as descriptive guides of the objects in the image for the decoder. We use a ResNet152 \cite{he2016deep}, which is commonly used in image captioning, to generate the visual features. The output of the last convolutional layer is used as the visual information: \begin{align} \vec{V} = \vec{W}^{V,I} \text{CNN}(\vec{I}) \end{align} where $\vec{I}$ is the input image, and $\vec{W}^{V,I}$ shrinks the last dimension of the output.\footnote{For conciseness, all the bias terms of linear transformations in this paper are omitted.} \subsection{Topic Extractor} Typically, identifying an object requires a combination of visual features, and considering the limited capacity of the visual features, it is hard for the conventional decoder to describe the objects in the image comprehensively. An advance in image captioning is to provide the decoder with the semantic concepts in the image directly so that the decoder is equipped with an overall perspective of the image. The semantic concepts can be objects (e.g.
\textit{person}, \textit{car}), attributes (e.g. \textit{off}, \textit{electric}), and relationships (e.g. \textit{using}, \textit{sitting}). We only use the words that are objects in this work, the reason for which is explained later. We call such words \textbf{topics}. The topic extractor produces a list of candidate topic embeddings $\vec{T} = \{\vec{w}_1, \vec{w}_2, \ldots, \vec{w}_m\}, \vec{w}_i \in \mathbb{R}^e$ from the image, where $e$ is the dimension of the topic word embeddings. Following common practice \cite{fang2015captions,you2016image}, we adopt the weakly-supervised approach of Multiple Instance Learning \cite{zhang2006multiple} to build a topic extractor. Due to limited space, please refer to \citet{fang2015captions} for a detailed explanation. Different from existing work that uses all the most frequent words in the captions as valid semantic concepts or visual words, we only include the object words (nouns) in the topic word list. Existing work relies on attribute words and relationship words to provide visual information to the decoder. However, doing so not only complicates the extraction procedure but also contributes little to the generation. For an image containing many objects, the decoder is likely to combine the attributes with the objects arbitrarily, as such words are specific to certain objects but are provided to the decoder unordered. In contrast, our model has visual information as additional input, and we expect the decoder to refer to the image for such information instead of the extracted concepts. \subsection{Stepwise Image-Topic Merging Decoder} The essential component of the decoder is the proposed stepwise image-topic merging network. The decoder is based on an LSTM \cite{hochreiter1997long}. At each time step, it combines the textual caption, the attentive visual information, and the attentive conceptual information as the context for generating an output word. The goal is achieved by three modules: the visual attention, the topic attention, and the merging gate. \paragraph{Visual Attention as Output} The visual attention attends to salient parts of the image based on the state of the LSTM decoder. In existing work \cite{xu2015show}, only the previous hidden state $\vec{h}_{t-1} \in \mathbb{R}^d$ of the LSTM is used in the computation of the visual attention: \begin{align} \vec{Z}_t &= \text{tanh} ( \vec{W}^{Z,V} \vec{V} \oplus \vec{W}^{Z,h} \vec{h}_{t-1}) \\ \vec{\alpha}_t &= \text{softmax}(\vec{Z}_t \vec{w}^{\alpha,Z}) \end{align} where $\vec{W}^{Z,V} \in \mathbb{R}^{k \times g}, \vec{W}^{Z,h} \in \mathbb{R}^{k\times d}, \vec{w}^{\alpha,Z} \in \mathbb{R}^{k}$ are the learnable parameters. We denote the matrix-vector addition as $\oplus$, which is calculated by adding the vector to each column of the matrix. $\vec{\alpha}_t \in \mathbb{R}^{k}$ is the vector of attention weights over $\vec{V}$, and the attentive visual input $\vec{z}_t \in \mathbb{R}^{g}$ is calculated as \begin{align} \vec{z}_t = \vec{V} \vec{\alpha}_t \end{align} The visual input $\vec{z}_t$ and the embedding of the previous output word $\vec{y}_{t-1}$ are the input of the LSTM. \begin{align} \vec{h}_t = \text{LSTM}(\begin{bmatrix}\vec{z}_t\\\vec{y}_{t-1}\end{bmatrix}, \vec{h}_{t-1}) \end{align} However, there is a noticeable drawback that the previous output word $y_{t-1}$, which is a much stronger indicator than the previous hidden state $\vec{h}_{t-1}$, is not used in the attention. As $\vec{z}_t$ is used as the input, we call it \textbf{input attention}.
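To make the computation concrete, the following is a minimal PyTorch-style sketch of the input attention just described. The module and variable names, and the attention width \texttt{a}, are our own illustrative assumptions, not the authors' released code:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputAttention(nn.Module):
    def __init__(self, g, d, a):
        super().__init__()
        self.proj_v = nn.Linear(g, a, bias=False)  # role of W^{Z,V}
        self.proj_h = nn.Linear(d, a, bias=False)  # role of W^{Z,h}
        self.score = nn.Linear(a, 1, bias=False)   # role of w^{alpha,Z}

    def forward(self, V, h_prev):
        # V: (k, g) visual feature vectors; h_prev: (d,) previous state
        Z = torch.tanh(self.proj_v(V) + self.proj_h(h_prev))  # (k, a)
        alpha = F.softmax(self.score(Z).squeeze(-1), dim=0)   # (k,)
        z = alpha @ V  # attentive visual input z_t, shape (g,)
        return z, alpha
\end{verbatim}
The returned $\vec{z}_t$ is concatenated with the previous word embedding $\vec{y}_{t-1}$ to form the LSTM input. Note that this sketch, like the original formulation, conditions only on $\vec{h}_{t-1}$; the drawback just mentioned motivates the output attention below.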
To overcome that drawback, we add another attention that incorporates the current hidden state $\vec{h}_{t}$, which is based on the last generated word $y_{t-1}$: \begin{align} \widetilde{\vec{Z}}_t &= \text{tanh} ( \widetilde{\vec{W}}^{Z,V} \vec{V} \oplus \widetilde{\vec{W}}^{Z,h} \vec{h}_{t}) \\ \widetilde{\vec{\alpha}}_t &= \text{softmax}(\widetilde{\vec{Z}}_t \widetilde{\vec{w}}^{\alpha,Z}) \\ \widetilde{\vec{z}}_t & = \vec{V} \widetilde{\vec{\alpha}}_t \end{align} The procedure resembles the input attention, and we call it \textbf{output attention}. It is worth mentioning that the output attention is essentially the same as the spatial visual attention proposed by \citet{lu2017knowing}. However, they did not view it from the input-output perspective, nor did they combine it with the input attention. The attentive visual output is further transformed to $\vec{r}_t = \text{tanh}(\vec{W}^{s,z} \widetilde{\vec{z}}_t), \vec{W}^{s,z} \in \mathbb{R}^{e \times g}$, which is of the same dimension as the topic word embedding to simplify the following procedure. \paragraph{Topic Attention} In an image caption, different parts concern different topics. In the existing work \cite{you2016image}, the conceptual information is attended to based on the previous output word: \begin{align} \vec{\beta}_t = \text{softmax} (\vec{T}^\mathsf{T} \vec{U} \vec{y}_{t-1}) \end{align} where $\vec{U} \in \mathbb{R}^{e \times e}, \vec{\beta}_t \in \mathbb{R}^{m}$. The key issue is that this approach neglects the visual information. It should be beneficial to provide the attentive visual information when selecting topics. The hidden state of the LSTM contains both the information of previous words and the attentive input visual information. Therefore, the model attends to the topics based on the hidden state of the LSTM: \begin{align} \vec{Q}_t &= \text{tanh} (\vec{W}^{Q, T} \vec{T} \oplus \vec{W}^{Q,h} \vec{h}_t) \label{eq:score1}\\ \vec{\beta}_t &= \text{softmax} (\vec{Q}_t \vec{w}^{\beta,Q}) \label{eq:score2} \end{align} where $\vec{W}^{Q,T} \in \mathbb{R}^{m\times e}, \vec{W}^{Q,h} \in \mathbb{R}^{m\times d}, \vec{w}^{\beta, Q} \in \mathbb{R}^{m}$ are the parameters to be learned. $\vec{\beta}_t \in \mathbb{R}^m$ is the vector of topic weights, from which the attentive conceptual output $\vec{q}_t \in \mathbb{R}^{e}$ is calculated: \begin{align} \vec{q}_t = \vec{T} \vec{\beta}_t \end{align} The topic attention $\vec{q}_t$ and the hidden state $\vec{h}_t$ are combined as the contextual information $\vec{s}_t$: \begin{align} \vec{s}_t = \text{tanh}(\vec{W}^{s,q} \vec{q}_t + \vec{W}^{s,h} \vec{h}_t) \end{align} where $\vec{W}^{s,q} \in \mathbb{R}^{e \times e}, \vec{W}^{s,h} \in \mathbb{R}^{e \times d}$ are learnable parameters. \paragraph{Merging Gate} We have prepared both the visual information $\vec{r}_t$ and the contextual information $\vec{s}_t$. It is not reasonable to treat the two kinds of information equally when the decoder generates different types of words. For example, when generating descriptive words (e.g., \textit{behind}, \textit{red}), $\vec{r}_t$ should matter more than $\vec{s}_t$. However, when generating object words (e.g., \textit{people}, \textit{table}), $\vec{s}_t$ is more important.
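The topic attention just described admits an analogous sketch; it produces the contextual information $\vec{s}_t$ which, together with the visual information $\vec{r}_t$, is consumed by the merging mechanism introduced next. Again, this is a hedged PyTorch-style illustration under our own naming assumptions, not the authors' code:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicAttention(nn.Module):
    def __init__(self, e, d, a):
        super().__init__()
        self.proj_t = nn.Linear(e, a, bias=False)  # role of W^{Q,T}
        self.proj_h = nn.Linear(d, a, bias=False)  # role of W^{Q,h}
        self.score = nn.Linear(a, 1, bias=False)   # role of w^{beta,Q}
        self.ctx_q = nn.Linear(e, e, bias=False)   # role of W^{s,q}
        self.ctx_h = nn.Linear(d, e, bias=False)   # role of W^{s,h}

    def forward(self, T, h_t):
        # T: (m, e) topic embeddings; h_t: (d,) current hidden state
        Q = torch.tanh(self.proj_t(T) + self.proj_h(h_t))   # (m, a)
        beta = F.softmax(self.score(Q).squeeze(-1), dim=0)  # (m,)
        q = beta @ T                                        # (e,)
        s = torch.tanh(self.ctx_q(q) + self.ctx_h(h_t))     # s_t, (e,)
        return s, beta
\end{verbatim}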
We introduce a novel score-based merging mechanism to make the model adaptively learn to adjust the balance: \begin{align} \gamma_t &=\sigma(S(\vec{s}_t)-S(\vec{r}_t))\\ \vec{c}_t &= \gamma_t \vec{s}_t+ (1-\gamma_t) \vec{r}_t \end{align} where $\sigma$ is the sigmoid function, $\gamma_t \in [0, 1]$ indicates how important the topic attention is compared to the visual attention, and $S$ is the scoring function. The scoring function needs to evaluate the importance of the topic attention. Noticing that Eq. (\ref{eq:score1}) and Eq. (\ref{eq:score2}) have a similar purpose, we define $S$ similarly: \begin{align} S(\vec{s}_t) &= \text{tanh} (\vec{W}^{S,h} \vec{h}_t + \vec{W}^{S,s} \vec{s}_t ) \cdot \vec{w}^{S} \\ S(\vec{r}_t) &= \text{tanh} (\vec{W}^{S,h} \vec{h}_t + \vec{W}^{S,r} \vec{r}_t ) \cdot \vec{w}^{S} \end{align} where $\cdot$ denotes the dot product of vectors, $\vec{W}^{S,s} \in \mathbb{R}^{m \times e}, \vec{W}^{S,r} \in \mathbb{R}^{m \times e}$ are the parameters to be learned, and $\vec{W}^{S,h}, \vec{w}^{S}$ share the weights with $\vec{W}^{Q,h}, \vec{w}^{\beta, Q}$ from Eq. (\ref{eq:score1}) and Eq. (\ref{eq:score2}), respectively. Finally, the output word is generated by: \begin{align} y_t \sim \vec{p}_t = \text{softmax}(\vec{W}^{p, c} \vec{c}_t) \end{align} where each value of $\vec{p}_t \in \mathbb{R}^{|D|}$ is a probability indicating how likely the corresponding word in vocabulary $D$ is the current output word. The whole model is trained by maximizing the log likelihood, i.e., with the cross-entropy loss. In sum, our proposed approach encourages the model to take advantage of all the available information. The adaptive merging mechanism makes the model weigh the information carefully. \begin{table*}[t] \centering \footnotesize \begin{tabular}{@{}l c c c c c@{}} \toprule Flickr30k & SPICE & CIDEr & METEOR & ROUGE-L & BLEU-4 \\ \midrule HardAtt \cite{xu2015show} & - & - & 0.185 & - & 0.199 \\ SCA-CNN \cite{chen2017scacnn} & - & - & 0.195 & - & 0.223 \\ ATT-FCN \cite{you2016image} & - & - & 0.189 & - & 0.230 \\ SCN-LSTM \cite{gan2017semantic} & - & - & 0.210 & - & 0.257 \\ AdaAtt \cite{lu2017knowing} & 0.145 & 0.531 & 0.204 & 0.467 & 0.251 \\ NBT \cite{lu2018neural} & 0.156 & 0.575 & 0.217 & - & 0.271 \\ \midrule SR-PL \cite{liu2018show}\ssymbol{1}\ssymbol{2} & 0.158 & \bf 0.650 & 0.218 & \bf 0.499 &\bf 0.293 \\ \midrule simNet & \bf 0.160 & 0.585 & \bf 0.221 & 0.489 & 0.251 \\ \bottomrule \end{tabular} \caption{Performance on the Flickr30k Karpathy test split. The symbol \ssymbol{1} denotes directly optimizing CIDEr. The symbol \ssymbol{2} denotes using extra data for training, thus not directly comparable.
Nonetheless, our model surpasses all existing models in SPICE, which correlates best with human judgments.} \label{tab:res-f30k} \end{table*} \begin{table*}[t] \centering \footnotesize \begin{tabular}{@{}l c c c c c@{}} \toprule COCO & SPICE & CIDEr & METEOR & ROUGE-L & BLEU-4 \\ \midrule HardAtt \cite{xu2015show} &- & - & 0.230 & - & 0.250 \\ ATT-FCN \cite{you2016image} &- & - & 0.243 & - & 0.304 \\ SCA-CNN \cite{chen2017scacnn} & - & 0.952 & 0.250 & 0.531 & 0.311 \\ LSTM-A \cite{yao2017boosting} & 0.186 & 1.002 & 0.254 & 0.540 & 0.326 \\ SCN-LSTM \cite{gan2017semantic} & - & 1.012 & 0.257 & - & 0.330 \\ Skeleton \cite{wang2017skeleton} & - & 1.069 & 0.268 & 0.552 & 0.336 \\ AdaAtt \cite{lu2017knowing} & 0.195 & 1.085 & 0.266 & 0.549 & 0.332 \\ NBT \cite{lu2018neural} & 0.201 & 1.072 & 0.271 & - & 0.347 \\ \midrule DRL \cite{ren2017deep}\ssymbol{1} & - & 0.937 & 0.251 & 0.525 & 0.304 \\ TD-M-ATT \cite{chen2018temporal}\ssymbol{1} & - & 1.116 & 0.268 & 0.555 & 0.336 \\ SCST \cite{rennie2017self}\ssymbol{1} &- & 1.140 & 0.267 & 0.557 & 0.342 \\ SR-PL \cite{liu2018show}\ssymbol{1}\ssymbol{2} & 0.210 & 1.171 & 0.274 & \bf 0.570 & 0.358 \\ Up-Down \cite{anderson2018bottom}\ssymbol{1}\ssymbol{2} & 0.214 & \bf 1.201 & 0.277 & 0.569 & \bf 0.363 \\ \midrule simNet & \bf 0.220 & 1.135 & \bf 0.283 & 0.564 & 0.332 \\ \bottomrule \end{tabular} \caption{Performance on the COCO Karpathy test split. The symbols \ssymbol{1} and \ssymbol{2} are defined as in Table~\ref{tab:res-f30k}. Our model outperforms the current state-of-the-art Up-Down substantially in terms of SPICE.\label{tab:res-mscoco}} \end{table*} \section{Experiment} We describe the datasets and the metrics used for evaluation, followed by the training details and the evaluation of the proposed approach. \subsection{Datasets and Metrics} There are several datasets containing images and their captions. We report results on the popular Microsoft COCO \cite{chen2015microsoft} dataset and the Flickr30k \cite{young2014image} dataset. They contain 123,287 images and 31,000 images, respectively, and each image is annotated with 5 sentences. We report results using the widely-used publicly-available splits in the work of \citet{karpathy2014deep}. There are 5,000 images each in the validation and test sets for COCO, and 1,000 images each for Flickr30k. We report results using the COCO captioning evaluation toolkit \cite{chen2015microsoft}, which reports the widely-used automatic evaluation metrics SPICE, CIDEr, BLEU, METEOR, and ROUGE. SPICE \cite{anderson2016spice}, which is based on scene graph matching, and CIDEr \cite{vedantam2015cider}, which is based on n-gram matching, are specifically proposed for evaluating image captioning systems. They both incorporate the consensus of a set of references for an example. BLEU \cite{papineni2002bleu} and METEOR \cite{banerjee2005meteor} were originally proposed for machine translation evaluation. ROUGE \cite{lin2003automatic,lin2004rouge} is designed for automatic evaluation of extractive text summarization. Related studies conclude that SPICE correlates best with human judgments, by a remarkable margin over the other metrics, and excels at judging detailedness, where the other metrics, surprisingly, show negative correlations; CIDEr and METEOR follow with no particular precedence, then ROUGE-L and BLEU-4, in that order \cite{anderson2016spice,vedantam2015cider}.
\subsection{Settings} Following common practice, the CNN used is the ResNet152 model \cite{he2016deep} pre-trained on ImageNet.\footnote{We use the pre-trained model from \href{https://github.com/pytorch/vision}{\texttt{torchvision}}.} There are 2048 $7 \times 7$ feature maps, and we project them into 512 feature maps, i.e. $g$ is 512. The word embedding size $e$ is 256 and the hidden size $d$ of the LSTM is 512. We only keep caption words that occur at least 5 times in the training set, resulting in 10,132 words for COCO and 7,544 for Flickr30k. We use the topic extractor pre-trained by \citet{fang2015captions} for 1,000 concepts on COCO. We only use 568 manually-annotated object words as topics. For an image, only the top 5 topics are selected, which means $m$ is 5. The same topic extractor is used for Flickr30k, as COCO provides adequate generality. The caption words and the topic words share the same embeddings. In training, we first train the model without visual attention (freezing the CNN parameters) for 20 epochs with a batch size of 80. The learning rate for the LSTM is 0.0004. Then, we switch to jointly train the full model with a learning rate of 0.00001, which exponentially decays with the number of epochs so that it is halved every 50 epochs. We also use a momentum of 0.8 and a weight decay of 0.999. We use Adam \cite{kingma2014adam} for parameter optimization. For a fair comparison, we adopt early stopping based on CIDEr within a maximum of 50 epochs. \begin{table*}[t] \centering \scriptsize \setlength{\tabcolsep}{3pt} \begin{tabular}{@{}l c c c c c c c c c c c@{}} \toprule \multirow{2}{*}[-3pt]{Methods} & \multicolumn{7}{c}{SPICE} & \multirow{2}{*}[-3pt]{CIDEr} & \multirow{2}{*}[-3pt]{METEOR} & \multirow{2}{*}[-3pt]{ROUGE-L} & \multirow{2}{*}[-3pt]{BLEU-4} \\ \cmidrule(lr){2-8} & All & Objects & Attributes & Relations & Color & Count & Size & & & & \\ \midrule Baseline (Plain Encoder-Decoder Network) & 0.150 & 0.295 & 0.048 & 0.039 & 0.022 & 0.004 & 0.023 & 0.762 & 0.220 & 0.495 & 0.251 \\ Up-Down \cite{anderson2018bottom}\ssymbol{1}\ssymbol{2} & 0.214 & 0.391 & 0.100 & 0.065 & 0.114 & 0.184 & 0.032 & \bf 1.201 & 0.277 & \bf 0.569 & \bf 0.363 \\ \midrule Baseline + Input Att. & 0.164 & 0.316 & 0.060 & 0.044 & 0.030 & 0.038 & 0.024 & 0.840 & 0.233 & 0.512 & 0.273 \\ Baseline + Output Att. & 0.181 & 0.329 & 0.094 & 0.053 & 0.089 & 0.184 & 0.044 & 0.968 & 0.253 & 0.534 & 0.301 \\ Baseline + Input Att. + Output Att. & 0.187 & 0.338 & 0.101 & 0.055 & \bf 0.115 & 0.161 & \bf 0.048 & 1.038 & 0.259 & 0.542 & 0.311 \\ \midrule Baseline + Topic Att. & 0.184 & 0.348 & 0.074 & 0.051 & 0.047 & 0.064 & 0.037 & 0.915 & 0.250 & 0.517 & 0.260 \\ Baseline + Topic Att. + MGate & 0.189 & 0.355 & 0.080 & 0.051 & 0.055 & 0.090 & 0.033 & 0.959 & 0.256 & 0.527 & 0.281 \\ \midrule Baseline + Input Att. + Output Att. + Topic Att. & 0.206 & 0.381 & 0.091 & 0.060 & 0.075 & 0.094 & 0.045 & 1.068 & 0.273 & 0.556 & 0.320 \\ \midrule simNet (Full Model) & \bf 0.220 & \bf 0.394 & \bf 0.109 & \bf 0.070 & 0.088 & \bf 0.202 & 0.045 & 1.135 & \bf 0.283 & 0.564 & 0.332 \\ \bottomrule \end{tabular} \caption{Results of incremental analysis. For a better understanding of the differences, we further list the breakdown of SPICE F-scores. \textit{Objects} indicates comprehensiveness, and the others indicate detailedness.
Additionally, we report the performance of the current state-of-the-art Up-Down for further comparison, which uses extra dense-annotated data for pre-training and directly optimizes CIDEr.} \label{tab:res-ab} \end{table*} \subsection{Results} We compare our approach with various representative systems on Flickr30k and COCO, including the recently proposed NBT, which is the state-of-the-art on the two datasets in comparable settings. Table \ref{tab:res-f30k} shows the results on Flickr30k. As we can see, our model outperforms the comparable systems in terms of all of the metrics except BLEU-4. Moreover, our model surpasses the state-of-the-art by a comfortable margin in terms of SPICE, which is shown to correlate best with human judgments \cite{anderson2016spice}. Table \ref{tab:res-mscoco} shows the results on COCO. Among the directly comparable models, our model is arguably the best and outperforms the existing models except in terms of BLEU-4. Most encouragingly, our model is also competitive with Up-Down \cite{anderson2018bottom}, which uses the much larger Visual Genome dataset \cite{krishna2017visual}, with dense annotations, to train the object detector, and directly optimizes CIDEr. In particular, our model outperforms the state-of-the-art substantially in SPICE and METEOR. The breakdown of SPICE F-scores over various subcategories (see Table~\ref{tab:res-ab}) shows that our model leads in almost all subcategories. This demonstrates the effectiveness of our approach and indicates that our model is quite data-efficient. For the methods that directly optimize CIDEr, it is intuitive that CIDEr can improve significantly. The similar improvement of BLEU-4 is evidence that optimizing CIDEr leads to more n-gram matching. However, we note that the improvements of SPICE, METEOR, and ROUGE-L are far less significant, which suggests there may be a gaming situation where n-gram matching is wrongfully exploited by the model in reinforcement learning. As shown by \citet{liu2017improved}, it is most reasonable to jointly optimize all the metrics at the same time. We also evaluate the proposed model on the COCO evaluation server; due to limited space, the results are shown in Appendix \ref{sec:coco-server}. \begin{table}[t] \centering \footnotesize \begin{tabular}{@{}l c c c@{}} \toprule Method & Precision & Recall & F1 \\ \midrule Topics ($m$=5) & 49.95 & 38.91 & 42.48 \\ \midrule All words ($m$=5) & \bf84.01 & 17.99 & 29.49\\ All words ($m$=10) & 70.90 & 30.18 & 42.05 \\ All words ($m$=20) & 52.51 & \bf 44.53 & \bf 47.80 \\ \bottomrule \end{tabular} \caption{Performance of visual word extraction.} \label{tab:res-topic-extract} \end{table} \begin{table}[t] \centering \footnotesize \setlength{\tabcolsep}{3pt} \begin{tabular}{@{}l c c c c c@{}} \toprule Method & S & C & M & R & B \\ \midrule Topics ($m$=5) & \bf 0.220 & \bf 1.135 & \bf 0.283 & \bf 0.564 & \bf 0.332 \\ \midrule All words ($m$=5) & 0.197 & 1.047 & 0.264 & 0.550 & 0.314 \\ All words ($m$=10) & 0.201 & 1.076 & 0.256 & 0.528 & 0.293 \\ All words ($m$=20) & 0.209 & 1.117 & 0.276 & 0.561 & 0.329 \\ \bottomrule \end{tabular} \caption{Effect of using different visual words.} \label{tab:res-topic} \end{table} \section{Analysis} In this section, we analyze the contribution of each component in the proposed approach, and give examples to show the strengths and the potential improvements of the model. The analysis is conducted on the test set of COCO.
\paragraph{Topic Extraction} The motivation for using objects as topics is that they are easier to identify, so that the generation suffers less from erroneous predictions. This is supported by the F-scores of the identified topics on the test set, shown in Table \ref{tab:res-topic-extract}. Using top-5 object words is at least as good as using top-10 all words. However, using top-10 all words introduces more erroneous visual words to the generation. As shown in Table \ref{tab:res-topic}, when extracting all words, providing more words to the model indeed increases the captioning performance. However, even when top-20 all words are used, the performance is still far behind using only top-5 object words and seems to reach a performance ceiling. This shows that for semantic attention it is also important to limit the absolute number of incorrect visual words, not merely to control the precision or the recall. It is also interesting to check whether using other kinds of words can achieve the same effect. Unfortunately, in our experiments, only using verbs or adjectives as semantic concepts works poorly. \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{mgate-new} \caption{Average merging gate values according to word types. As we can see, object words (nouns) dominate the high value range, while attribute and relation words are assigned lower values, indicating the merging gate learns to efficiently combine the information.} \label{fig:bgate} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{sample-new} \caption{Examples of the generated captions. The left plot compares simNet with visual attention and topic attention. Visual attention is good at portraying the relations but is less specific in objects. Topic attention includes more objects but lacks details, such as material, color, and number. The proposed model achieves a very good balance. The right plot shows the error analysis of the proposed simNet.} \label{fig:ex-comp} \end{figure*} To examine the contributions of the sub-modules in our model, we conduct a series of experiments. The results are summarized in Table \ref{tab:res-ab}. To help with the understanding of the differences, we also report the breakdown of SPICE F-scores. \paragraph{Visual Attention} Our input attention achieves similar results to previous work \cite{xu2015show}, if not better. Using only the output attention is much more effective than using only the input attention, with substantial improvements in all metrics, showing the impact of the information gap caused by the delayed input in the attention. Combining the input attention and the output attention can further improve the results, especially in color and size descriptions. \paragraph{Topic Attention} As expected, compared with visual attention, the topic attention is better at identifying objects but worse at identifying attributes. We also apply the merging gate to the topic attention, but it now merges $\vec{q}_t$ and $\vec{h}_t$ instead of $\vec{s}_t$ and $\vec{r}_t$. With the merging gate, the model can balance the information in the caption text and the extracted topics, resulting in better overall scores. While it surpasses the conventional visual attention, it lags behind the output attention. \paragraph{Merging Gate} Combining the visual attention and the topic attention directly indeed results in a huge boost in performance, which confirms our motivation.
However, directly combining them also causes lower scores in attributes, color, count, and size, showing that the advantages are not fully exploited. The most dramatic improvements come from applying the merging gate to the combined attention, showing that the proposed balance mechanism can adaptively combine the two kinds of information and is essential to the overall performance. The average merging gate values summarized in Figure \ref{fig:bgate} suggest the same. \medskip We give some examples in the left plot of Figure~\ref{fig:ex-comp} to illustrate the differences between the models more intuitively. From the examples, it is clear that the proposed simNet generates the best captions in that more objects are described and many informative and detailed attributes are included, such as the quantity and the color. \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{vis-new} \caption{Visualization. Please view in color. Here, we give two running examples. The upper part of each example shows the attention weights of each of the 5 extracted topics. Deeper color indicates a larger value. The middle part shows the value of the merging gate that determines the importance of the topic attention. The lower part shows the visualization of visual attention. The attended region is covered with color. The blue shade indicates the output attention. The red shade indicates the input attention. } \label{fig:ex-vis} \end{figure*} \paragraph{Visualization} Figure \ref{fig:ex-vis} shows the visualization of the topic attention and the visual attention with running examples. As we can see, the topic attention is active when generating a phrase containing the related topic. For example, \textit{bathroom} is always most attended when generating \textit{a bathroom}. The merging gate learns to direct the information flow efficiently. When generating words such as \textit{on} and \textit{a}, it gives lower weight to the topic attention and prefers the visual attention. As to the visual attention, the output attention is much more focused than the input attention. As we hypothesized, the conventional input attention lacks the information of the last generated word and does not know what to look for exactly. For example, when generating \textit{bathroom}, the input attention does not know the previously generated word is \textit{a}, and it loses its focus, while the output attention is relatively more concentrated. Moreover, the merging gate learns to overcome erroneous topics, as shown in the second example. When generating \textit{chair}, the topic attention is focused on the wrong object, \textit{bed}, while the visual attention attends correctly to the chair, and especially the output attention attends to the armrest. The merging gate effectively remedies the misleading information from the topic attention and outputs a lower weight, resulting in the model correctly generating the word \textit{chair}. \paragraph{Error Analysis} We conduct error analysis using the proposed (full) model on the test set to provide insights on how the model may be improved. We find that 123 out of 1,000 generated captions are not satisfactory. There are mainly three types of errors, i.e. distance (32, 26\%), movement (22, 18\%), and object (60, 49\%), with 9 (7\%) other errors. Distance error takes place when there are many objects and the model cannot grasp the foreground-background relationship. Movement error means that the model fails to describe whether the objects are moving.
Those two kinds of errors are hard to eliminate, as they correspond to fundamental open problems in computer vision. Object error happens when an incorrect topic is extracted and the merging gate regards it as grounded in the image. In the given example, the incorrect topic is \textit{garden}. The tricky part is that such a topic seems correct according to the image features; otherwise, the proposed model would choose other topics. A more powerful topic extractor may alleviate the problem, but it is unlikely to be avoided completely. \section{Conclusions} We propose the stepwise image-topic merging network to sequentially and adaptively merge the visual and the conceptual information for improved image captioning. To our knowledge, we are the first to combine the visual and the semantic attention to achieve substantial improvements. We introduce the stepwise merging mechanism to efficiently guide the two kinds of information when generating the caption. The experimental results demonstrate the effectiveness of the proposed approach, which substantially outperforms the state-of-the-art image captioning methods in terms of SPICE on the COCO and Flickr30k datasets. Quantitative and qualitative analyses show that the generated captions are both detailed and comprehensive in comparison with the existing methods. \section*{Acknowledgments} This work was supported in part by the National Natural Science Foundation of China (No. 61673028). We thank all the anonymous reviewers for their constructive comments and suggestions. Xu Sun is the corresponding author of this paper.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} Zeckendorf's Theorem states that any positive integer can be uniquely written as the sum of non-consecutive Fibonacci numbers $\{F_n\}$, defined by $F_1 = 1, F_2 = 2$, and $F_{n + 1} = F_n + F_{n - 1}$ for all $n \geq 2$ \cite{Ze}. We call this sum a number's \textbf{Zeckendorf decomposition}, and interestingly this leads to an equivalent definition of the Fibonaccis: they are the only sequence such that every positive integer can be written uniquely as a sum of non-adjacent terms. This interplay between recurrence relations and notions of legal decompositions holds for other sequences and recurrence rules as well. Below we report on some of the previous work on properties of Generalized Zeckendorf decompositions for certain sequences, and then discuss our new generalizations to two-dimensional sequences. There is now an extensive literature on the subject (see for example \cite{Bes,Bow,Br,Ca,Day,Dem,FGNPT, Fr,GTNP,Ha, Ho, HW, Ke,Mw1,Mw2,Ste1,Ste2} and the references therein). Lekkerkerker \cite{Lek} proved that the average number of summands in the Zeckendorf decompositions of $m \in [F_n, F_{n + 1})$ is $\frac{n}{\varphi^2 + 1} + O(1) \approx .276n$ as $n \rightarrow \infty$. Later authors extended this to other sequences and higher moments (see the previous references, in particular \cite{Ben, Dem, Dor, DG, LM, LT, Mw2}), proving that given any rules for decompositions there is a unique sequence such that every number has a unique decomposition, and that the distribution of the number of summands converges to a Gaussian. To date, most of the sequences studied have been one-dimensional; many that appear to be higher dimensional (such as \cite{CFHMN2,CFHMNPX}) can be converted to one-dimensional sequences. Our goal is to investigate decompositions that are truly higher dimensional. We do so by creating a sequence arising from two-dimensional lattice paths on ordered pairs of positive integers. A legal decomposition in $d$ dimensions will be a finite collection of lattice points for which \begin{enumerate} \item{each point is used at most once}, and \item{if the point $(i_1, i_2, \dots, i_d)$ is included then all subsequent points $(i_1', i_2', \dots, i_d')$ have $i'_j \ < \ i_j$ for all $j \in \{1, 2, \dots, d\}$ (i.e., \emph{all} coordinates must decrease between any two points in the decomposition).} \end{enumerate} We call these sequences of points on the $d$-dimensional lattice \textbf{simple jump paths}. In Section \ref{sec:futureWork} we discuss generalizations in which we allow only some of the coordinates to decrease between two consecutive points in the path; this adds combinatorial difficulties. Note that the number we assign to each lattice point depends on how we order the points (unless we are in one dimension). For example, if $d=2$ we can order the points by going along diagonal lines, or $L$-shaped paths. Explicitly, the first approach gives the ordering \be (1,1), \ \ \ (2,1),\ (1,2), \ \ \ (3,1),\ (2,2),\ (1,3), \ \ \ \dots, \ee while the second yields \be (1,1), \ \ \ (2,1),\ (2,2),\ (1,2), \ \ \ (3,1),\ (3,2),\ (3,3),\ (2,3),\ (1,3), \ \ \ \dots. \ee For the purposes of this paper, however, it does not matter which convention we adopt as our results on the distribution of the number of summands of a legal decomposition depend only on the combinatorics of the problem, and not the values assigned to each tuple.
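For concreteness, here is a short Python sketch (our own illustration, not part of the original construction) that enumerates the two-dimensional lattice points in the first, diagonal ordering above:
\begin{verbatim}
from itertools import count

def diagonal_order_2d(num_points):
    """List lattice points (i, j), i, j >= 1, along the anti-diagonals
    i + j = 2, 3, 4, ..., as in the first ordering in the text."""
    pts = []
    for s in count(2):                 # diagonal i + j = s
        for i in range(s - 1, 0, -1):  # bottom right to top left
            pts.append((i, s - i))
            if len(pts) == num_points:
                return pts

print(diagonal_order_2d(6))
# [(1, 1), (2, 1), (1, 2), (3, 1), (2, 2), (1, 3)]
\end{verbatim}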
We call the labeling attached to any choice a \textbf{Simple Zeckendorf Sequence in $d$ dimensions}, and comment shortly on how this is done. If $d = 1$ then we denote the sequence as $\{y_a\}^{\infty}_{a = 1}$ and construct it as follows. \begin{enumerate} \item{Set $y_1 := 1$.} \item{Iterate through the natural numbers. If we have constructed the first $k$ terms of our sequence, the $(k+1)$\textsuperscript{th} term is the smallest integer which cannot be written as a sum of terms in the sequence, with each term used at most once.} \end{enumerate} Note that this sequence is just the powers of 2, \begin{eqnarray} \begin{array}{ccccccccccc} 1 & 2 & 4 & 8 & 16 & 32 & 64 & 128 & 256 & 512 & \dots, \end{array} \label{ZeckendorfDiagonalSequenceSimp1D} \end{eqnarray} and a legal decomposition of $n$ is just its binary representation. If $d = 2$, on the other hand, as remarked above we have choices. We describe the Simple Zeckendorf Diagonal Sequence $\{y_{a, b}\}_{a, b = 1}^{\infty}$; its construction is similar in nature to the $d = 1$ case and proceeds as follows. \begin{enumerate} \item{Set $y_{1,1} := 1$.} \item{Iterate through the natural numbers. For each such number, check if any path of numbers in our sequence with a strict leftward and downward movement between every two consecutive points sums to the number. If no such path exists, add the number to the sequence so that it is added to the shortest unfilled diagonal moving from the bottom right to the top left.} \item{If a new diagonal must begin to accommodate a new number, set the value $y_{k, 1}$ to be that number, where $k$ is minimized so that $y_{k, 1}$ has not yet been assigned.} \end{enumerate} In \eqref{ZeckendorfDiagonalSequenceSimp2D} we illustrate several diagonals' worth of entries when $d = 2$, where the elements are always added in increasing order. Note that, unlike for the Fibonacci sequence, we immediately see that we have lost the uniqueness of decompositions (for example, $25$ has two legal decompositions: $20+5$ and $24+1$). \begin{eqnarray} \begin{array}{cccccccccc}280 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\157 & 263 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\84 & 155 & 259 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\50 & 82 & 139 & 230 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\28 & 48 & 74 & 123 & 198 & \cdots & \cdots & \cdots & \cdots & \cdots \\14 & 24 & 40 & 66 & 107 & 184 & \cdots & \cdots & \cdots & \cdots \\7 & 12 & 20 & 33 & 59 & 100 & 171 & \cdots & \cdots & \cdots \\3 & 5 & 9 & 17 & 30 & 56 & 93 & 160 & \cdots & \cdots \\1 & 2 & 4 & 8 & 16 & 29 & 54 & 90 & 154 & \cdots \end{array} \label{ZeckendorfDiagonalSequenceSimp2D} \end{eqnarray} Of course, analogous procedures to the one which creates \eqref{ZeckendorfDiagonalSequenceSimp2D} exist for higher dimensions, but the intended illustration is most intuitive in two dimensions. For the same reason as in the $d = 2$ case, there are clearly multiple procedures to generate the higher-dimensional sequences, even if one fixes restrictions on how to choose the summands in as many as $d - 2$ dimensions. Numerical explorations (see Figure \ref{fig:gaussplots}) suggest that, similarly to other sequences mentioned earlier, the distribution of the number of summands converges to a Gaussian.
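Both greedy constructions are easy to simulate. The sketch below (again our own illustration) reproduces the $d = 1$ sequence \eqref{ZeckendorfDiagonalSequenceSimp1D} and checks the legality rule for a candidate two-dimensional path:
\begin{verbatim}
def simple_zeckendorf_1d(num_terms):
    """Greedy rule from the text: repeatedly append the smallest
    positive integer not expressible as a sum of distinct terms."""
    seq, sums, n = [], {0}, 1
    while len(seq) < num_terms:
        if n not in sums:
            seq.append(n)
            sums |= {r + n for r in sums}
        n += 1
    return seq

def is_simple_jump_path_2d(points):
    """Legality in d = 2: both coordinates strictly decrease."""
    return all(a1 > b1 and a2 > b2
               for (a1, a2), (b1, b2) in zip(points, points[1:]))

print(simple_zeckendorf_1d(8))   # [1, 2, 4, 8, 16, 32, 64, 128]
print(is_simple_jump_path_2d([(4, 3), (2, 2), (1, 1)]))  # True
\end{verbatim}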
\begin{figure}[h] \begin{center} \scalebox{.7}{\includegraphics[width=9cm,height=5.5cm,angle=0]{simplejumppatha10b10.eps}} \ \ \scalebox{.7}{\includegraphics[width=9cm,height=5.5cm,angle=0]{simplejumppatha40b40.eps}} \caption{\label{fig:gaussplots} Distribution of the number of simple jump paths of varying lengths versus the best-fit Gaussian. Left: Starting at $(10,10)$. Right: Starting at $(40, 40)$. In both cases the horizontal axis is the number of summands and the vertical axis is the probability of obtaining a simple jump path with that number of summands when selecting one from all simple jump paths uniformly at random. } \end{center} \end{figure} Our main result is that as $n\to \infty$, we converge to Gaussian behavior in any number of dimensions. \begin{thm}{($d$-dimensional Gaussianity)} \label{ddgauss} Let $n$ be a positive integer, and consider the distribution of the number of summands among all simple jump paths of dimension $d$ with starting point $(i, i, \dots, i)$ where $1 \leq i \leq n$; each such path represents a (not necessarily unique) decomposition of some positive integer. This distribution converges to a Gaussian as $n \rightarrow \infty$. \end{thm} In Section \ref{sec:simpleJumpPaths} we motivate our problem further, explore the notion of a simple jump path in more depth, and prove some needed lemmas. Then, we prove Theorem \ref{ddgauss} in Section \ref{sec:gaussianDDim}. The result is just the Central Limit Theorem for a binomial random variable if $d=1$. If $d=2$ it can be proved directly through combinatorial identities, but for larger $d$ the combinatorial lemmas do not generalize and we are forced to resort to analytic techniques. We show that the functional dependence is that of a Gaussian, and thus as the probabilities must sum to 1 the normalization constant, which depends on the number of paths, must have a certain asymptotic formula. Thus, as an immediate consequence, we obtain new proofs for the asymptotic number of paths (the approach mentioned on the OEIS uses generating functions and expansions). We end with a discussion of future work and generalizations of the simple jump paths. \section{Properties of Simple Jump Paths}\label{sec:simpleJumpPaths} We first set some notation for our simple jump paths. We have walks in $d$ dimensions starting at some initial point $(a_1, a_2, \dots, a_d)$ with each $a_j > 0$, and ending at the origin $(0, 0, \dots, 0)$. Note that our simple jump paths must always have movement in all dimensions at each step. We are just adding one extra point, at the origin, and saying every path must end there. Note that as we always change all of the indices during a step, we never include a point where only some of the coordinates are zero, and thus there is no issue in adding one extra point and requiring all paths to end at the origin. Our walks are sequences of points on the lattice grid with positive indices or the origin, and we refer to movements between two such consecutive points as \textbf{steps}. Thus a simple jump path is a walk where each step has a strict movement in all $d$ dimensions. More formally, a simple jump path of length $k$ starting at $(a_1, a_2, \dots, a_d)$ is a sequence of points $\{(x_{i, 1}, \dots, x_{i, d})\}^{k}_{i = 0}$ where the following hold: \begin{itemize} \item $(x_{0, 1}, \dots, x_{0, d}) \ = \ (a_1, \dots, a_d)$, \item $(x_{k, 1}, \dots, x_{k, d}) \ = \ (0, \dots, 0)$, and \item for each $i \in \{0, 1, \dots, k - 1\}$ and $j \in \{1, \dots, d\}$, $x_{i, j} \ > \ x_{i + 1, j}$.
\end{itemize} For a fixed $d$ and any choice of starting point $(n, n, \dots, n) \in \ensuremath{\mathbb{Z}}^d$, we let $s_d(n)$ denote the number of simple jump paths from $(n, n, \dots, n)$ to the origin, and $t_d(k, n)$ the number of these paths with exactly $k$ steps. As we must reach the origin, every path has at least 1 step, the maximum number of steps is $n$, and \begin{equation} \label{simpleJumpPathPartitionByNumSteps} s_d(n) \ = \ \sum_{k = 1}^{n} t_d(k, n). \end{equation} We now determine $t_d(k, n)$. In one dimension we have $t_1(k, n) = \ncr{n-1}{k-1}$, as we must choose exactly $k-1$ of the first $n-1$ terms (we must choose the $n$\textsuperscript{th} term as well as the origin, and thus choosing $k-1$ additional places ensures there are exactly $k$ steps). The generalization to higher dimensions is immediate as we are looking at simple paths, and thus there is movement in each dimension in each step; this is why we restrict ourselves to simple paths, as in the general case we do not have tractable formulas like the one below. \begin{lemma} \label{lem:enumerateSimpleJumpPaths} For $a_1, \dots, a_d$ positive integers let $t_d(k; a_1,\dots,a_d)$ denote the number of simple paths of length $k$ starting at $(a_1, \dots, a_d)$ and ending at $(0, \dots, 0)$. Then for $1 \leq k \le \min(a_1,\dots, a_d)$, \begin{eqnarray} t_d(k; a_1, \dots, a_d) \ = \ncr{a_1 - 1}{k - 1}\ncr{a_2 - 1}{k - 1} \cdots \ncr{a_d - 1}{k - 1}; \end{eqnarray} if $a_1 = \cdots = a_d = n$ we write $t_d(k,n)$ for $t_d(k;a_1, \dots, a_d)$. We have \begin{eqnarray} s_d(n) \ = \ \sum_{k=1}^n t_d(k,n), \end{eqnarray} and $s_1(n) = 2^{n-1}$, $s_2(n) = \ncr{2n - 2}{n - 1}$ (for higher $d$ there are no longer simple closed form expressions\footnote{We will find excellent approximations for large $n$ and fixed $d$ later.}). \end{lemma} The proof is an immediate, repeated application of the one-dimensional result, with the two formulas (for $s_1(n)$ and $s_2(n)$) being well-known binomial identities (see for example \cite{Mil}). \section{Gaussianity in $d$-Dimensional Lattices}\label{sec:gaussianDDim} \subsection{Mean and Variance}\label{sec:meanVar} To prove Theorem \ref{ddgauss}, we start by determining the density, $p_d(k, n)$, for the number of simple jump paths of length $k$ starting at $(n, \dots, n)$: \begin{eqnarray} p_d(k, n) \ := \ \frac{t_d(k, n)}{s_d(n)}. \label{dDimDensityCondensed} \end{eqnarray} Much, though not all, of the proof when $d = 1$ carries over to general $d$. We therefore concentrate on $d = 1$ initially and then remark on what issues arise when we generalize, and discuss the resolution of these problems. We begin by determining the mean and standard deviation. The analysis for the mean holds for all $d$, but the combinatorial argument for the variance requires $d \le 2$. Due to the presence of $n-1$ in the formula for $t_d(k,n)$, we work with $n+1$ below to simplify some of the algebra. \begin{lem}\label{simpleMeanStdDev} Consider all simple jump paths from $(n+1, \dots, n+1)$ to the origin in $d$ dimensions. If $K$ is the random variable denoting the number of steps in each path, then its mean $\mu_d(n + 1)$ and standard deviation $\sigma_d(n + 1)$ are \begin{eqnarray} \mu_d(n + 1) & \ = \ & \frac{1}{2}n + 1 \label{simpleSquareLatticeMean} \end{eqnarray} and \begin{eqnarray}\label{simpleSquareLatticeStdDev} \sigma_1(n + 1)\ = \ \frac{\sqrt{n}}{2}, \ \ \ \ \ \sigma_2(n+1) \ = \ \frac{n}{2\sqrt{2n-1}} \ \approx \ \frac{\sqrt{n}}{2\sqrt{2}}.
\end{eqnarray} Further, we have \begin{equation} \sigma_d(n+1) \ \le \ \sigma_1(n+1) \ \le \ \sqrt{n}/2. \end{equation} \end{lem} \begin{proof} The results for $d=1$ are well known, as we have a binomial random variable. For $d=2$ one can compute the mean and the variance by combinatorial arguments (see Appendix \ref{sec:derivMeanStdDev}); unfortunately, while these can be generalized to give the mean for any $d$, they do not generalize for the variance. Because we must end at the origin, note that each path must have length at least 1. Thus instead of studying the number of paths of length $k \in \{1, \dots, n+1\}$ we instead study the number of paths of length $\kappa \in \{0, \dots, n\}$ and then add 1 to obtain the mean (there is no need to add 1 for the variance, as the variances of $K$ and $K-1$ are the same). As \be p_d(\kappa + 1, n+1) \ = \ \frac{\ncr{n}{\kappa}^d}{s_d(n+1)}, \ee the symmetry of the binomial coefficients about $n/2$ implies the mean of $K-1$ is $n/2$. All that remains is to prove the variance bound for $d \ge 2$. Note that the variance of $K-1$ is \be \sigma_d(n+1)^2 \ = \ \sum_{\kappa=0}^{n} \left(\kappa - n/2\right)^2 \frac{\ncr{n}{\kappa}^d}{s_d(n+1)}. \ee By symmetry it suffices to investigate $\kappa \ge n/2$. Since the binomial coefficients are strictly decreasing as we move further from the mean, for such $\kappa$ we find that \be \frac{p_d(\kappa)}{p_d(\kappa+1)} \ = \ \frac{\ncr{n}{\kappa}^d}{\ncr{n}{\kappa+1}^d} \ \ge \ 1,\ee and thus for every $g > 0$ we see that the probability of $K-1$ being within $g$ of the mean increases as $d$ increases. Thus the variance is largest at $d=1$, completing the proof. \end{proof} Next, we show with high probability that $K$ is close to the mean. \begin{lem}\label{lem:chebyshevbound} Consider all simple jump paths from $(n+1, \dots, n+1)$ to the origin in $d$ dimensions. If $K$ is the random variable denoting the number of steps in each path, then the probability that $K$ is at least $n^{\epsilon} n^{1/2}/2$ from the mean is at most $n^{-2\epsilon}$. \end{lem} \begin{proof} By Chebyshev's Inequality, \be {\rm Prob}\left(|K - (n/2 + 1)| \ \ge\ n^\epsilon \sigma_d(n+1)\right) \ \ \le\ \ \frac1{n^{2\epsilon}}. \ee As $\sigma_d(n+1) \le n^{1/2}/2$ by Lemma \ref{simpleMeanStdDev}, we only decrease the probability on the left if we replace $\sigma_d(n+1)$ with $n^{1/2}/2$, and thus the claim follows. \end{proof} One important consequence of the above lemma is that if we write $k$ as $\mu_{d}(n + 1) + \ell n^{1/2}/2$, then with probability tending to 1 we may assume $|\ell| \le n^\epsilon$. \subsection{Gaussianity} \label{sec:GaussianityProof} The proof of Theorem \ref{ddgauss} in general proceeds similarly to the $d=1$ case. For $d \le 2$ we have explicit formulas for both the variance and $s_d(n+1)$, which simplify the proof. For general $d$ we show that the resulting distribution has the same functional form as a Gaussian, and from this we obtain asymptotics for both the variance and the number of paths. \begin{proof}[Proof of Theorem \ref{ddgauss}] From Lemma \ref{lem:chebyshevbound}, if we write \begin{equation} \label{meanStdDevApproxOfK} k \ = \ \mu_{d}(n + 1) + \ell n^{1/2}/2 \end{equation} then the probability of $|\ell|$ being at least $n^{1/9}$ is at most $n^{-2/9}$, so in the arguments below we assume $|\ell| \le n^{1/9}$. In particular, this means that both $k$ and $n-k$ are close to $n/2$ with probability tending to 1 as $n\to\infty$.
We are using $n^{1/2}/2$ and not $\sigma_d(n+1)$ as this way a quantity below will perfectly match the $d=1$ case. For $m$ large, Stirling's Formula states that \begin{equation} \label{stirlingCite} m! \ = \ m^m e^{-m} \sqrt{2\pi m} \left(1 + O\left(\frac1{m}\right)\right). \end{equation} Thus \begin{eqnarray}\label{equation:StirlingExpansion} p_d(k, n+1) & \ = \ & \frac{\ncr{n}{k}^d}{s_d(n+1)} \ = \ \frac1{s_d(n+1)} \left(\frac{n!}{k!(n-k)!}\right)^d \nonumber\\ & \ = \ & \frac1{s_d(n+1)}\left(\frac{\sqrt{2\pi n} n^n}{\sqrt{4\pi^2 k (n-k)} k^k (n-k)^{n-k}} \cdot \frac{\left(1 + O\left(\frac{1}{n}\right)\right)}{\left(1+O\left(\frac{1}{n-k}\right)\right)\left(1+O\left(\frac{1}{k}\right)\right)}\right)^d,\nonumber\\ \end{eqnarray} and the ratio of the big-Oh terms is $1 + O(1/n)$ since $k$ and $n-k$ are approximately $n/2$ (note the big-Oh constant here is allowed to depend on $d$, which is fixed). We now turn to the other part of the above expression. If we divide the rest of the quantity in parentheses by $2^n$ then we have the probability in 1-dimension, whose analysis is well-known; thus \be p_d(k, n+1) \ = \ \frac{2^{nd} n^{d/2}}{s_d(n+1)} \left(\frac{n^n}{2^n k^k (n-k)^{n-k} \sqrt{2\pi k(n-k)}} \right)^d \cdot \left(1 + O(1/n)\right). \ee The quantity to the $d$-th power converges (up to the normalization factor) to a Gaussian by the Central Limit Theorem for a binomial random variable; for completeness we sketch the proof. Using that $k$ and $n-k$ are close to $n/2$, we find \begin{eqnarray} \label{1DGaussMainTermExpandA} p_{{\rm main}, 1}(k) & \ := \ & \frac{n^n}{2^n k^k (n-k)^{n-k} \sqrt{2\pi k(n-k)}} \nonumber\\ &= & \frac{1}{\sqrt{\frac{1}{2}\pi n^2}} \cdot \frac{1}{\left(1-\frac{\frac{\ell\sqrt n}{2} }{n/2}\right)^{n/2-\frac{\ell\sqrt n}{2}+\frac12}\left(1+\frac{\frac{\ell\sqrt n}{2}}{n/2}\right)^{n/2+\frac{\ell\sqrt n}{2}+\frac12}}. \end{eqnarray} Let $q_n$ be the denominator of the second fraction above. We approximate $\log(q_n)$ and then exponentiate to estimate $q_n$. As $|\ell| \le n^{1/9}$, when we take the logarithms of the terms in $q_n$ only the first two terms in the Taylor expansion of $\log(1+u)$ contribute as $n\to\infty$. Thus \begin{eqnarray} \label{gaussFracExpanC} \log q_n & \ = \ & \left(\frac{n}{2} - \frac{\ell\sqrt n}{2} + \frac{1}{2}\right)\left(-\frac{\ell}{\sqrt n} - \frac{\ell^2}{2n}+ O\left(\frac{\ell^3}{n^{3/2}}\right)\right) \nonumber\\ & & \ \ \ \ + \ \left(\frac{n}{2} + \frac{\ell\sqrt n}{2} + \frac{1}{2}\right)\left(\frac{\ell}{\sqrt n} - \frac{\ell^2}{2n}+ O\left(\frac{\ell^3}{n^{3/2}}\right)\right)\nonumber\\ &= & \frac{\ell^2}{2}+O\left(n\cdot \frac{n^{1/3}}{n^{3/2}} + \frac{\ell^2}{2n}\right) \ = \ \frac{\ell^2}{2} + O\left(n^{-1/6}\right), \end{eqnarray} which implies (since $k = \mu_d(n+1) + \ell \sqrt{n}/2$) \begin{equation}\label{gaussFracExpanD} q_n \ = \ e^{\frac{(k-\mu_d(n+1))^2}{n/2}} e^{O(n^{-1/6})}. \end{equation} Thus collecting our expansions yields, for $|\ell| \le n^{1/9}$, \begin{equation} p_d(k, n+1) \ = \ \frac{2^{nd} n^{d/2}}{s_d(n+1)(\pi n^2 /2)^{d/2}}\ e^{-\frac{d(k-\mu_d(n+1))^2}{n/2}} \cdot e^{O(n^{-1/6})}. \end{equation} Note the second exponential is negligible as $n\to\infty$, and the first exponential is that of a Gaussian with mean $\mu_d(n+1)$ and variance $\sigma_d(n+1)^2 = n/4d$.
As this is a probability distribution it must sum to 1 (the terms with $|\ell|$ large contribute negligibly in the limit), and thus $2^{nd} / (s_d(n+1) (\pi n /2)^{d/2})$ must converge to the normalization constant of this Gaussian, which is $1/\sqrt{2\pi \sigma_d(n+1)^2}$. In particular, we obtain\footnote{One can check this asymptotic by computing $s_d(n+1)$ for various $d$ and looking up the resulting sequences on the OEIS, which agree; for example, see the entry A182421 for the sequence when $d=7$.} \begin{equation} s_d(n+1) \ \sim \ \frac{2^{nd} n^{d/2}}{(\pi n^2 /2)^{d/2}} \cdot \sqrt{2 \pi n/4d} \ = \ 2^{nd} \left(\frac{\pi n}{2}\right)^{-\frac{d}{2}+\frac12} d^{-1/2}. \end{equation} \end{proof} \section{Future Work and Concluding Remarks}\label{sec:futureWork} We could also consider the \textbf{Compound Zeckendorf Diagonal Sequence in $d$ dimensions}, which is constructed in a similar way to \eqref{ZeckendorfDiagonalSequenceSimp1D} and \eqref{ZeckendorfDiagonalSequenceSimp2D}, but allows more paths to be legal (explicitly, each step is no longer required to move in all of the dimensions). While the $d = 1$ Compound Zeckendorf Diagonal Sequence is the same as the simple one, the two notions of paths give rise to different sequences when $d = 2$. In that case, the Compound Zeckendorf Diagonal Sequence is denoted $\{z_{a, b}\}^{\infty}_{a, b = 1}$, and is constructed as follows. \begin{enumerate} \item{Set $z_{1,1} := 1$.} \item{Iterate through the natural numbers. For each such number, check if any path of distinct numbers without upward or rightward movements sums to the number. If no such path exists, add the number to the sequence so that it is added to the shortest unfilled diagonal moving from the bottom right to the top left.} \item{If a new diagonal must begin to accommodate a new number, set the value $z_{k, 1}$ to be that number, where $k$ is minimized so that $z_{k, 1}$ has not yet been assigned.} \end{enumerate} The difference between this and the Simple Zeckendorf Diagonal Sequence is that we now allow movement in just one direction. This greatly complicates the combinatorial analysis because now the simultaneous movements in different dimensions depend on each other. In particular, if a step contains a movement in one direction, it no longer needs to contain a movement in other directions to be regarded as a legal step. In \eqref{ZeckendorfDiagonalSequenceComp} we illustrate several diagonals' worth of entries, where the elements are always added in increasing order. \begin{eqnarray} \begin{array}{cccccccccc}6992 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\2200 & 6054 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\954 & 2182 & 5328 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\364 & 908 & 2008 & 5100 & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\138 & 342 & 862 & 1522 & 4966 & \cdots & \cdots & \cdots & \cdots & \cdots \\44 & 112 & 296 & 520 & 1146 & 2952 & \cdots & \cdots & \cdots & \cdots \\16 & 38 & 94 & 184 & 476 & 1102 & 2630 & \cdots & \cdots & \cdots \\4 & 10 & 22 & 56 & 168 & 370 & 1052 & 2592 & \cdots & \cdots \\1 & 2 & 6 & 18 & 46 & 140 & 366 & 1042 & 2270 & \cdots\end{array} \label{ZeckendorfDiagonalSequenceComp} \end{eqnarray} Just as in \eqref{ZeckendorfDiagonalSequenceSimp2D}, uniqueness of decompositions does not hold in the compound case.
Moreover, just like the Simple Zeckendorf Diagonal Sequences \eqref{ZeckendorfDiagonalSequenceSimp1D} and \eqref{ZeckendorfDiagonalSequenceSimp2D}, Compound Zeckendorf Diagonal Sequences can be built in higher dimensions, with multiple ways of formulating how to add terms to the sequence. Many of the articles in the literature use combinatorial methods and manipulations of binomial coefficients to obtain similar results (see, for instance, \cite{Eg, Len, Mw2}). Thus a question worth future study is to extend the combinatorial variance calculation to $d$ dimensions (see Lemma \ref{2DimStdDev}). Finally, similar to \cite{Bow, Ko} and related work, we can investigate the distribution of gaps between summands in legal paths. One can readily obtain explicit combinatorial formulas for the probability of a given gap; the question is whether nice limits exist in this case, as they do for the one-dimensional recurrences previously studied.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} There is a wide variety and a huge number of sources of visual data in today's world. The most common form of visual data is the RGB image collected from standard cameras: the spectrum of the scene is mapped to three values, matching the three-cone system of human vision. However, capturing and analyzing a wider range of the spectrum offers benefits. Hyperspectral data are used extensively in medical applications \cite{dicker2006differentiation,randeberg2006hyperspectral,stamatas2003hyperspectral} as well as in segmentation tasks \cite{tarabalka2010segmentation,camps2014advances}. Remote sensing is another area in which hyperspectral systems are used \cite{lillesand2014remote,borengasser2007hyperspectral,hege2004hyperspectral}. The problem, however, is that capturing more spectral information and more spatial information creates a trade-off, and most systems have evolved to focus on spatial resolution rather than hyperspectral information. The focus of this work is increasing the spectral resolution of a single RGB image by reconstructing channels/images for a desired set of wavelengths. In other words, given a coarse description of the spectrum of the scene, such as an RGB image, we infer the missing spectral information. This problem is known as spectral reconstruction or spectral super resolution. Spectral super resolution is underconstrained, since the aim is to estimate a large number of spectral bands (over 30) from a generally much lower number of channels (usually the R, G and B channels). However, it has been shown that there is significant correlation between spectral bands~\cite{hyperspectral_remote}; this correlation can be used to infer the missing information. The complementary problem of spatial super resolution is extensively studied in the literature~\cite{Nasrollahi2014,Agustsson_2017_CVPR_Workshops,Timofte_2017_CVPR_Workshops}. In contrast, there are far fewer studies on spectral super resolution, the most recent of which are summarized in Section~\ref{ssc:related_work}. In this paper, a Convolutional Neural Network (CNN) based approach is proposed. Considering the limitations inherent to the problem, such as the lack of data and the differences among the response functions of hyperspectral sensors, we argue that a relatively shallow CNN can avoid overfitting and manage to learn the latent mapping from RGB images to the desired spectral resolution. \vspace{-0.4cm} \subsection{Related Work} \label{ssc:related_work} \vspace{-0.1cm} The extensive study of the spatial super resolution problem in the literature has led to methods with impressive performance~\cite{DBLP:conf/accv/TimofteSG14-short,DBLP:conf/eccv/DongLHT14-short,DBLP:conf/cvpr/KimLL16a-short,Ledig_2017_CVPR,Timofte_2017_CVPR_Workshops}; the current state-of-the-art methods are CNN-based. In comparison, the complementary spectral reconstruction problem has attracted much less attention. One proposed method estimates the illumination conditions and uses radial basis functions to model the correspondence between the RGB values of the given image and the reflectance of the objects in the scene~\cite{DBLP:conf/eccv/NguyenPB14-short}. Other approaches focus on sparse dictionary learning. Arad~{\textit{et al.}}~\cite{DBLP:conf/eccv/AradB16-short} learn a sparse dictionary method with K-SVD and OMP; they also introduce ICVL, which with 201 image pairs is the largest dataset for spectral reconstruction to date.
Recently, Galliani~{\textit{et al.}}~\cite{DBLP:journals/corr/GallianiLMBS17} proposed a CNN architecture based on the Tiramisu network~\cite{DBLP:conf/cvpr/JegouDVRB17-short} to map RGB values to hyperspectral bands. Their (very) deep CNN has 56 layers and learns an end-to-end mapping: the input patch is downscaled by max pooling through several layers and then upscaled back to the original size through layers of sub-pixel upsampling. Very recently, Aeschbacher~{\textit{et al.}}~\cite{Aeschbacher-ICCVW-2017} proposed an A+ based method, building upon the A+ method of Timofte~{\textit{et al.}}~\cite{DBLP:conf/accv/TimofteSG14-short}, originally introduced for the spatial super resolution problem. It is a sparse dictionary representation based method that operates directly on pixel values and trains dictionaries using K-SVD and OMP; offline anchored regressors are learned from the training samples to map the low spectral resolution space (RGB) to the higher spectral resolution. Aeschbacher~{\textit{et al.}} also reimplemented and improved the performance of the method of Arad~{\textit{et al.}} In this paper, we compare with these state-of-the-art methods: Galliani's deep CNN, Aeschbacher's A+ based method, and Aeschbacher's reimplementation of Arad's method. \section{Proposed Method} \label{sec:pagestyle} Spectral reconstruction from an input RGB image is heavily ill-posed. Moreover, the existing datasets and training data are relatively small compared with those available for the related problem of spatial super resolution. We place ourselves in between the shallow A+ method of Aeschbacher~{\textit{et al.}}~\cite{Aeschbacher-ICCVW-2017} and the (very) deep Tiramisu-based method of Galliani~{\textit{et al.}}~\cite{DBLP:journals/corr/GallianiLMBS17}, and avoid overfitting to the training data with a novel, moderately deep, rather shallow CNN with residual blocks. \begin{figure}[h] \centering \begin{tabular}{cc} \includegraphics[width=0.55\linewidth]{network}& \includegraphics[width=0.3\linewidth]{resblock}\\ a) network & b) res. block\\ \end{tabular} \vspace{-0.1cm} \caption{The proposed CNN architecture.} \label{fig:proposed_architecture} \vspace{-0.2cm} \end{figure} Figure~\ref{fig:proposed_architecture} gives the schematic representation of the proposed CNN network and of the residual block it is built from. `Conv' refers to a convolutional layer; the number next to it gives the number of feature maps (filters) in that layer, and the next element gives the filter size. Arrows show the direction of flow. Wherever arrow heads meet, the results of the layers at the sources of the arrows are summed element-wise. The results of all layers except the last are passed through a PReLU, given in~\eqref{eq:PReLU}, as the non-linearity. PReLU~\cite{DBLP:conf/iccv/HeZRS15-short} was shown to improve over the traditional non-parametric ReLU. \begin{equation}\label{eq:PReLU} f(y_i)= \begin{cases} y_i, & \text{if } y_i > 0\\ a_i y_i, & \text{if } y_i\le 0 \end{cases} \end{equation} The proposed architecture can be considered as two networks: the main network and the $7\times7$ conv layer. The architecture is designed to form residual blocks all along the network: the $7\times 7$ conv layer can be considered the skip connection of an outer residual block whose core is the main network. The $7\times7$ conv layer estimates the basic mapping from RGB to hyperspectral.
For standard spatial super resolution, estimating the difference between the high resolution image and the bicubic-upsampled low resolution image is a common practice~\cite{Timofte_2017_CVPR_Workshops}. This convolutional layer implements essentially the same operation, but instead of a hand-crafted method it learns the upsampling mapping. In the main network, the sub-network formed by the 2nd to 6th layers can be regarded as the residual block of the last layer, and so on. Apart from the idea of forming residual sub-networks, we use regular residual blocks in the network. As shown in Figure~\ref{fig:proposed_architecture}, we opted for 2 residual blocks. Increasing the number of residual blocks brings only small benefits at the expense of runtime and can potentially lead to overfitting, especially in our setting, where some hyperspectral datasets have a small number of samples. The initial features extracted from the input are shrunk with $1\times1$ conv layers to form a bottleneck and decrease the computation cost and time. The bottleneck reduces overfitting and forces the network to learn more compact and relevant features. However, the pre-shrink features are utilized further in the network through the skip connections, so the source of the learned complex features is also used. This idea aligns with the main concept of the network, which is forming residual parts all along the network. Generally, the initial layers of CNNs are responsible for learning simple features such as gradients or blobs. Although combining them to form more complex features and using those for decisions in later segments of the network is beneficial, the simple features can also be useful in the later stages. The shrunken features are then processed by the residual blocks. The blocks are composed of $3\times3$ convolutional filters, just like in the original Residual Network paper~\cite{he2016deep}. Different from the original block, we use PReLU as the activation function. \begin{figure*}[htb] \includegraphics[width=\linewidth]{aplus_comparison_blurre.eps} \vspace{-0.4cm} \caption{Visual comparison on ICVL between ground truth (GT) and reconstructed bands by A+~\cite{Aeschbacher-ICCVW-2017} and our method.} \label{fig:visual_comparison} \vspace{-0.3cm} \end{figure*} The output of the residual blocks is expanded from 32 back to the original feature map count of 128. This ensures that more features are available to the final layer. Since the bulk of the processing has passed, this expansion does not increase computation time heavily. This layer can be seen as the counterpart of the second layer, where the initial features were shrunk. After the expansion of the residual block output, the resulting maps are passed to the final layer of $5\times5$ convolution. There are 31 maps in this layer, corresponding to the 31 channel images we are reconstructing. The spatial extent of this layer is kept high to ensure that nearby pixels are also taken into consideration. Finally, the result of the $7\times7$ convolution layer is added to form the output. The network's receptive field is $17\times17$, meaning a pixel in the output is calculated using a local neighborhood of $17\times17$ pixels; through the $7\times7$ convolution layer, the local neighborhood of $7\times7$ pixels has an extra effect on the resulting pixel.
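For concreteness, a minimal Keras sketch of our reading of this architecture follows. It is a hedged reconstruction rather than the released code: the $5\times5$ size of the first convolution and the exact point where the pre-shrink features re-enter are our assumptions (the former chosen so that the receptive field works out to the stated $17\times17$); all convolutions use no padding, hence the cropped skip connections (see the implementation details in Section~\ref{sec:experiments}).
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def prelu():
    return layers.PReLU(shared_axes=[1, 2])  # one slope per feature map

def res_block(x, filters=32):
    # Two 3x3 valid convolutions; the skip path is cropped by 2 pixels per
    # side so its spatial size matches the convolved path.
    y = prelu()(layers.Conv2D(filters, 3)(x))
    y = prelu()(layers.Conv2D(filters, 3)(y))
    return layers.Add()([y, layers.Cropping2D(2)(x)])

def build_model(bands=31):
    inp = layers.Input(shape=(36, 36, 3))        # RGB training patches
    feats = prelu()(layers.Conv2D(128, 5)(inp))  # first conv: assumed 5x5
    x = prelu()(layers.Conv2D(32, 1)(feats))     # 1x1 bottleneck
    x = res_block(x)
    x = res_block(x)
    x = prelu()(layers.Conv2D(128, 1)(x))        # 1x1 expansion
    x = layers.Add()([x, layers.Cropping2D(4)(feats)])  # pre-shrink skip
    x = layers.Conv2D(bands, 5)(x)               # 5x5 reconstruction, 31 maps
    base = layers.Cropping2D(5)(layers.Conv2D(bands, 7)(inp))  # 7x7 branch
    return tf.keras.Model(inp, layers.Add()([x, base]))

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(5e-4), loss="mse")  # l2 loss
\end{verbatim}
With $36\times36$ inputs this yields $20\times20\times31$ outputs, matching the patch and label sizes used in training below.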
\vspace{-0.1cm} \section{Experimental results} \label{sec:experiments} We compare our proposed approach against the methods of Galliani~{\textit{et al.}}~\cite{DBLP:journals/corr/GallianiLMBS17} and Aeschbacher~{\textit{et al.}}~\cite{Aeschbacher-ICCVW-2017}, as roughly described in Section~\ref{sec:introduction}. We adhere to the experimental setup from~\cite{Aeschbacher-ICCVW-2017,DBLP:journals/corr/GallianiLMBS17} and report RMSE and relative RMSE (rRMSE) as defined in~\cite{Aeschbacher-ICCVW-2017} for 3 benchmarks: ICVL~\cite{DBLP:conf/eccv/AradB16-short}, CAVE~\cite{DBLP:journals/tip/YasumaMIN10}, and NUS~\cite{DBLP:conf/eccv/NguyenPB14-short}. Because Galliani measured the errors on 8-bit images, we also report our results \textit{w.r.t.} 8-bit images. On the NUS dataset we use the provided train/test split, and for ICVL and CAVE we apply a 2-fold validation by dividing the images into two sets and training 2 models, one per set. On each test RGB image we employ the model that did not use it in training. The results are averaged to give the final test error. \vspace{-0.1cm} \subsection{Datasets} \label{ssec:subhead} \noindent\textbf{ICVL} The dataset of Arad~{\textit{et al.}}~\cite{DBLP:conf/eccv/AradB16-short} includes 201 hyperspectral images with $1392\times1300$ resolution over 519 spectral bands (400-1,000nm). The images were captured by a line scanner camera (Specim PS Kappa DX4 hyperspectral). Although there are 519 bands, we used the downsampled version, which has 31 bands from 400nm to 700nm in 10nm increments. Following the practice of Galliani and Aeschbacher, we use the CIE 1964 color matching functions to prepare the corresponding RGB images of the hyperspectral images. \begin{table*}[th!] \caption{Quantitative comparison on ICVL~\cite{DBLP:conf/eccv/AradB16-short}, CAVE~\cite{DBLP:journals/tip/YasumaMIN10} and NUS~\cite{DBLP:conf/eccv/NguyenPB14-short} datasets.
Best results are in bold.} \label{tab:MethodComparison} \centering \vspace{0.1cm} \resizebox{\linewidth}{!} { \begin{tabular}{l||ccccc|ccccc|cccccc} &\multicolumn{5}{|c|}{\textbf{ICVL dataset~\cite{DBLP:conf/eccv/AradB16-short}}}&\multicolumn{5}{|c|}{\textbf{CAVE dataset~\cite{DBLP:journals/tip/YasumaMIN10}}}&\multicolumn{6}{|c}{\textbf{NUS dataset~\cite{DBLP:conf/eccv/NguyenPB14-short}}} \\ & Galliani & \textbf{Arad} & \textbf{A+} & \textbf{ours}& \textbf{ours+E} & Galliani &Arad & \textbf{A+} & \textbf{ours} & \textbf{ours+E} & Nguyen & Galliani & Arad & \textbf{A+} & \textbf{ours} & \textbf{ours+E} \\ & \cite{DBLP:journals/corr/GallianiLMBS17}& \cite{Aeschbacher-ICCVW-2017} &\cite{Aeschbacher-ICCVW-2017}& & &\cite{DBLP:journals/corr/GallianiLMBS17}& \cite{Aeschbacher-ICCVW-2017} &\cite{Aeschbacher-ICCVW-2017}& & &\cite{DBLP:conf/eccv/NguyenPB14-short} & \cite{DBLP:journals/corr/GallianiLMBS17}& \cite{Aeschbacher-ICCVW-2017} &\cite{Aeschbacher-ICCVW-2017}& & \\ \hline \small{rRMSE} & -& 0.0507& 0.0344& 0.0168& \textbf{0.0166} &- &0.4998 & 0.4265 & 0.4697& \textbf{0.178} & 0.2145 & -& 0.1904& \textbf{0.1420}&0.1524&0.1471 \\ \small{rRMSE$_G$} & -& 0.0873& 0.0584& 0.0401& \textbf{0.0399} &- & 0.7755& 0.3034 &0.246 &\textbf{0.239} & 0.3026&- & 0.3633 & 0.2242 & 0.2317& \textbf{0.2168}\\ \small{rRMSE$_G^{uint}$} & 0.0587& -& - & 0.0353& \textbf{0.0350}& 0.2804 & -& - &0.1525&\textbf{0.1482} & 0.3026&0.234 & -&- & 0.1796& \textbf{0.1747}\\ \small{RMSE}& -& 1.70& 1.04& 0.6407& \textbf{0.6324} & -& 5.61&2.74&\textbf{2.550}&2.613 &12.44& -& 4.44& 2.92& 2.86& \textbf{2.83}\\ \small{RMSE$_G$} & -& 3.24& 1.96& 1.35& \textbf{1.33}&- & 20.13& 6.70&\textbf{5.77}&5.80& 8.06& -& 9.56 & 5.17&5.12&\textbf{4.92}\\ \small{RMSE$_G^{uint}$} & 1.98& -& -& 1.25& \textbf{1.23}&4.76 & -& -&\textbf{3.4924}&3.5275& 8.06& 5.27& - & -&\textbf{3.66}&\textbf{3.66}\\ \end{tabular} } \vspace{-0.3cm} \end{table*} \noindent\textbf{CAVE} The database proposed by Yasuma~{\textit{et al.}}~\cite{DBLP:journals/tip/YasumaMIN10} has 32 images with $512\times512$ resolution. There are 31 bands for each image, ranging from 400 to 700 nm in 10 nm increments. The pictures were taken with a cooled CCD camera (Apogee Alta U260). The dataset contains various objects, including food, fabric, faces and paints. On this dataset, Aeschbacher~{\textit{et al.}}~\cite{Aeschbacher-ICCVW-2017} followed a 4-fold cross validation. \noindent\textbf{NUS} The dataset introduced by Nguyen~{\textit{et al.}}~\cite{DBLP:conf/eccv/NguyenPB14-short} contains 66 spectral images and their corresponding illuminations. Just like in the other 2 datasets, the spectral bands range between 400 and 700nm, in 10nm increments. The pictures were taken with a Specim PFDCL-65-V10E spectral camera under different illumination conditions; natural daylight and metal halide lamps were utilized to form a diverse set. Here, following Galliani and Aeschbacher, instead of the CIE 1964 mapping, the Canon 1D Mark III response function was used to map the hyperspectral images to RGB space. \vspace{-0.1cm} \subsection{Implementation Details} \label{ssc:implementation_details} \noindent{\textbf{Training }} The proposed network was trained with TensorFlow from scratch with the Adam optimizer. The learning rate was initially set to 0.0005 and multiplied by 0.93 every 50,000 iterations. The networks were trained for 400,000 iterations. The Xavier initializer was used to initialize the weights, and the batch size was 64. The network was trained to minimize the $l_2$-loss.
The convolutions are implemented with no padding. Therefore, for each skip connection, the output of the earlier layer is cropped to match the layer it joins. For the training process, patches of size $36\times36$ were used. Because we use convolutions with no padding, the patches get smaller at every layer and the output of the network is $20\times20$; thus, while the input patch is $36\times36$, the corresponding label is of size $20\times20$. For each image in the set, as suggested in~\cite{Timofte_2016_CVPR}, data augmentation is performed by rotating the image by 90, 180 and 270 degrees, flipping, and downscaling by 0.9, 0.8 and 0.7. This produces 32 image pairs for each training pair of low (RGB) and corresponding high resolution spectral image. \noindent{\textbf{Testing }} At test time, we use our model without (`ours' setting) and with the enhanced prediction (`ours+E'), as suggested in~\cite{Timofte_2016_CVPR}. For the enhanced prediction, the input image is rotated and flipped to obtain 8 images, which are processed separately, mapped back to their original state, and then averaged for the final result. Generally, using the enhanced prediction is beneficial accuracy-wise (see Table~\ref{tab:MethodComparison}).
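A minimal NumPy sketch of this test-time self-ensemble (the function name is ours) could look as follows; it assumes square inputs so that the rotations are well defined:
\begin{verbatim}
import numpy as np

def enhanced_prediction(model, rgb):
    """'ours+E': average the model outputs over the 8 flipped/rotated
    versions of the input, undoing each transform on the output."""
    outs = []
    for flip in (False, True):
        img = np.flip(rgb, axis=1) if flip else rgb
        for k in range(4):
            out = model.predict(np.rot90(img, k)[None], verbose=0)[0]
            out = np.rot90(out, -k)          # undo rotation
            if flip:
                out = np.flip(out, axis=1)   # undo flip
            outs.append(out)
    return np.mean(outs, axis=0)
\end{verbatim}
Note that with no-padding convolutions the output is spatially smaller than the input, but since the implied cropping is symmetric, the 8 outputs still align after the inverse transforms.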
\vspace{-0.1cm} \subsection{Design choices vs. performance} \label{ssec:design_choices} Figure~\ref{fig:design_choices} shows validation errors for our model in 4 different settings, varying the number of residual blocks, the number of feature maps, and the patch size with respect to the default configuration. For this comparison the ICVL dataset with the Canon 1D Mark III response function was used. The dataset was divided into 2 sets, and for each set 10 images were set aside as validation images. For each model, one network is trained on 90 images and tested on the corresponding validation images; the results are averaged. As can be seen from the figure, after 400,000 iterations the default configuration of our model, with 2 residual blocks, 128 feature maps and patch size 20, performs best. The model with patch size 40 suffers from a significant drawback, since training with a larger patch size results in a substantial increase in training time. Moreover, due to memory restrictions, one cannot extract as many patches from the training images as in the smaller patch size setting, which results in a higher number of epochs for the same number of iterations. The runtime is also directly affected by the number of feature maps and the number of residual blocks/layers in the model. \vspace{-0.1cm} \subsection{Quantitative results} \label{ssec:quantitative_results} Table~\ref{tab:MethodComparison} reports the quantitative results of our method and the compared methods. On the ICVL and CAVE benchmarks, our method substantially improves over the competing methods on all metrics. The mean values of the samples in the CAVE dataset are generally lower than those in ICVL, resulting in smaller differences in absolute RMSE. The NUS benchmark proved more challenging for our network, but on most metrics our method still manages to surpass the state of the art. \vspace{-0.1cm} \subsection{Runtime} Apart from surpassing the state of the art, our method is fast due to its shallow architecture. The spectral reconstruction of an RGB image patch of $722\times644$ pixels takes 0.29 seconds on GPU. In order to avoid boundary artifacts, patches with overlap are usually given to the network at test time. The shallow architecture of our system allows it to operate on larger patches, possibly on the whole image, without running into RAM issues, which leads to an additional increase in reconstruction speed. \vspace{-0.1cm} \subsection{Visual results} Figure~\ref{fig:visual_comparison} depicts a qualitative comparison between the reconstruction results at 3 wavelengths achieved by our method and by A+ for an image from the ICVL dataset. For reference we also show the ground truth images. For all 3 wavelengths there is a large and visible improvement, in line with the quantitative results on ICVL (Table~\ref{tab:MethodComparison}). \begin{figure}[h] \centering \vspace{-0.1cm} \includegraphics[width=\linewidth]{validation2.eps} \vspace{-0.6cm} \caption{Validation errors (on ICVL) for our method with different design choices.} \label{fig:design_choices} \vspace{-0.3cm} \end{figure} \vspace{-0.25cm} \section{Conclusion} We proposed a novel method for spectral reconstruction from a single RGB image. We avoid overfitting by designing a moderately deep (6 layers) CNN model and through careful training. The power of our solution is shown by its relatively low runtime and by the state-of-the-art results achieved on the 3 most used spectral reconstruction benchmarks. \vspace{-0.2cm} \bibliographystyle{IEEEbib}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Related Work} \subsection{Effectiveness of Speeches} There is disagreement among public speaking experts and academics about which techniques and theories support effective speeches. In the $20^{\rm th}$ century, various scholars sought to better understand effective communication by combining manual annotation of speech metrics with quantitative analysis\cite{barker1968two,lull1940effectiveness,haiman1949experimental}. Speech effectiveness has been measured in a number of ways, including recall-comprehension, attitude change, and the perceived credibility of the speaker\cite{gundersen1976relationships}. Many of the public speaking principles in common use today originated from these studies, even if they are misapplied. Possibly the most famous statistic related to communication is Mehrabian's research \cite{lapakko2007communication}, which states that ``7\% of communication comes from spoken word, 38\% comes from speech tone and 55\% comes from body language.'' Lapakko noted the widespread misuse of this research in public speaking textbooks. Still other researchers diametrically opposed applying this research to public speaking: ``the fact is Professor Mehrabian's research had nothing to do with giving speeches ... I have no doubt that the verbal (what you say) must dominate by a wide margin.'' \cite{yaffe20117} Among public speaking experts, the role of each speech factor is just as disputed as the relative contribution of different factors to effectiveness, as shown by our research survey in Sect. 4.3 of the supplementary material. Academic research and speaking experts often reach different conclusions. In a book written by a past champion of the World Championship of Public Speaking, the author claimed a general rule is to ``use ... short sentences to express your message.''\cite{donovan2014speaker} However, an academic study of educational videos concluded that ``the complexity of the sentences is higher than the ones in the less popular videos.''\cite{kravvaris2014speakers} Finding out whether public speaking rules can be generalized to different contexts, such as different audiences or speaking purposes, and evaluating supposed rules of public speaking effectiveness could potentially be achieved by an analytic system with sufficient data. Automated analysis of speaking data can allow the effectiveness of speeches to be understood in different contexts. Recently, a few works have dealt with assessing speech effectiveness. In terms of winning and losing results in a speech contest, Fuyuno et al. \cite{Fuyuno2017SemanticSS} studied face rotation angle and speech pause data as correlated with speech contest winners. Ramanarayanan et al. \cite{ramanarayanan2015evaluating} automated the scoring of multimodal speech data along multiple dimensions of effectiveness. Kravvaris et al. \cite{kravvaris2014speakers} used YouTube's automated data extraction on speeches. Outside of academia, experts in public speaking have written books on how to win speech contests and claim to understand the most important factors for winning such competitions effectively. Emotion is considered crucial in the published books of speech contest winners, leading one world champion to define a speech at the competition as ``an emotional rollercoaster ride for the speaker and his audience.''~\cite{jhingran2014emote} In their work, experts in these competitions advise candidates how to plan the emotions in their speeches.
A different world champion advises: ``Speeches that focus on only one or two emotions risk disengaging the audience. . . the most effective choices are anger, disgust, fear, happiness, love, sadness, and surprise.''\cite{donovan2014speaker} Claims by such speech experts about the effectiveness of emotion in speech outcomes have yet to be assessed by quantitative methods. In addition to a survey of the literature, in \autoref{section:interviews} we interview speech experts to determine the emotional factors they consider important for effectiveness. \subsection{Analysis Systems Supporting the Understanding of Speaking Effectiveness} To better evaluate different factors in public speaking, visual analytical tools have been developed that allow for large-scale understanding. Wu et al. \cite{wu2018multimodal} developed a system that allows users to understand posture, gesture, and rhetoric in TED videos, helping them ``gain empirical insights into effective presentation delivery.'' Zeng et al. \cite{zeng2019emoco} developed a system that explores emotional coherence, with a case study in which experts help teach how to ``express emotions more effectively.'' These systems focus on describing different patterns of emotions in speakers considered to be high level. However, there is a lack of analytical systems that focus on a clear metric for measuring success and thus give insight into what differentiates more successful speeches. Claims by speech experts outside academia, as well as predicted patterns from the speech experts in our user survey, can be empirically evaluated with our system. In the past, much work relied on small-scale manual annotation to understand speech effectiveness data; our system extends insights from large-scale analysis to public speaking novices and experts. \subsection{Emotion Visualization} Visual analysis has been shown to provide valuable insights into emotional speech data. Recently, several works have created novel visual analysis techniques for presenting emotional data that provide meaningful insights. New visual forms were developed to show changing group emotion\cite{zeng2020emotioncues} and emotional coherence\cite{zeng2019emoco} in an intuitive way. As we will later show, emotional data is critical to the effectiveness of the speeches we study. We created visualizations to express several critical metrics in a way designed to be intuitively understandable to a non-expert audience. \section{Domain-Centered Design} We adopted a domain-centered design procedure to investigate the comprehension of effective strategies in the domain of public speaking. In this section, we first introduce how we conducted interviews with public speaking experts. Next, we summarize their opinions into overall goals and the design tasks of the system. \subsection{In-Depth Interviews} \label{section:interviews} We conducted initial in-depth interviews with public speaking experts in order to analyze which factors they thought were critical to speech effectiveness, as well as to establish design requirements for an interface that supports their comprehension. We focused our studies on the World Championship of Public Speaking, which in some years has claimed to be the world's most popular speech contest. The seven experts we interviewed were all professional public speaking trainers who deliver courses in preparation for speech contests. Six of the seven had participated multiple times in the World Championship of Public Speaking.
The interviews were semi-structured, with all participants asked to list possible critical factors to study. They were also prompted with factors surveyed from the literature, to obtain their opinion on the importance and role of those factors in contest effectiveness. Consistent with the literature, all experts thought that the emotion of speakers would have an important impact on the outcome of the contest. Among the factors they listed are some that we did not include in our research but that are likely to influence speech effectiveness, such as gestures and voice pitch. However, in this contest emotion is viewed as critical, with entire books written about how to plan one's emotion~\cite{jhingran2014emote}. We saw addressing the lack of quantitative methods to evaluate such emotion as a contribution to domain knowledge. The emotional factors are listed in \autoref{tab:pvalue}. The modality of each factor is indicated by V (voice), F (facial), or T (text). Some of the factors to be studied were determined by a survey of the literature; our later pre-survey in \autoref{sec:user survey} confirmed that experts view these factors as significant for speakers in the competition. We also established non-emotional factors that would be critical for assessing speech effectiveness. For example, in the interviews, unprompted, three experts suggested that cross-cultural influences would be important for the outcome of the contest, and that the effect of culture on factors in the contest would be prominent. In \autoref{tab:pvalue}, two additional non-emotional factors for comparison, namely pauses and vocabulary, were included, which the experts estimated to have a significant impact on effectiveness. \subsection{Overall Goals} The preliminary interviews and literature review together led to the overall goals that guide our research. \textbf{G1: To understand the relation between speech effectiveness and various speech factors.} The relative importance of different speech factors and the role of factors in contest effectiveness are critical for users. \textbf{G2: To understand the spatio-temporal distribution of factors across multiple speeches.} Referenced work showed that experts believe certain factors are more important at different moments of speeches, or that the time order of certain factors is important. The geographical distribution was also considered important by some domain experts. \textbf{G3: To understand the effectiveness of individual speeches in context.} Speech experts in the preliminary interviews expressed interest in understanding the patterns of an individual speech, and, furthermore, how the factors of one speech relate to the factors and effectiveness metrics of all speeches. \textbf{G4: To compare speeches on various speech factors.} Observing similarities and differences between speeches allows effectiveness metrics to be connected with speaking styles. \textbf{G5: To understand speaking strategies among speech factors.} As revealed in our literature survey, there are different opinions about how different factors are effectively used. These theories could be evaluated.
\begin{table}[]\small \centering \caption{Emotional and non-emotional factors and the p-values of factors.} \label{tab:pvalue} \begin{tabular}{ccll} \hline Factor & Modality & Type (p-value) & Type (p-value) \\ \hline \multirow{3}{*}{Average} & Facial & Arousal($0.006^*$) & Valence(0.431) \\ & Textual & Arousal(0.215) & Valence(0.088) \\ & Vocal & Arousal($0.016^*$) & Valence($0.017^*$) \\ \hline \multirow{3}{*}{Volatility} & Facial & Arousal($0.020^*$) & Valence($0.006^*$) \\ & Textual & Arousal(0.433) & Valence(0.438) \\ & Vocal & Arousal(0.235) & Valence(0.845) \\ \hline Diversity & Facial & \multicolumn{2}{c}{Across Emotion Type(0.120)} \\ \hline Final & Facial & Arousal($0.002^*$) & Valence($0.020^*$) \\ \hline Coherence & All & Arousal(0.124) & Valence(0.051) \\ \hline \multirow{4}{*}{Ratio} & \multirow{4}{*}{Facial} & Happy($0.001^*$) & Sad(0.0736) \\ & & Fear(0.582) & Angry(0.292) \\ & & Surprise(0.115) & Disgust(0.306) \\ & & Neutral(0.488) & \multicolumn{1}{c}{-} \\ \hline Pauses & Vocal & Pauses(0.271) & \multicolumn{1}{c}{-} \\ \hline Vocabulary & Textual & Vocabulary(0.089) & \multicolumn{1}{c}{-} \\ \hline \end{tabular} \begin{tablenotes} \item *: The factor has a significant correlation with speech effectiveness. \end{tablenotes} \end{table} \subsection{Design Tasks} \label{section:tasks} \begin{table*}[]\small \centering \caption{Categorized Design Tasks and Corresponding Goals.} \label{tab:tasks} \begin{tabular}{llll} \hline \textbf{Category} & \multicolumn{2}{l}{\textbf{Design Task}} & \textbf{Overall Goals} \\ \hline \multirow{2}{*}{Visual Data Fusion} & T1 & To present temporal and geographical distributions of data. & G1-G5 \\ & T2 & To display multi-modal data aggregated as well as in time series. & G1-G3 \\ \hline \multirow{2}{*}{Relation \& Comparison} & T3 & To assist in comparing speeches within one speech level and between different levels. & G1, G2, G4, G5 \\ & T4 & To present the relations of multiple speeches derived by algorithms. & G4, G5 \\ \hline \multirow{2}{*}{Navigation} & T5 & To support browsing of speech videos guided by multi-modal emotional cues. & G2, G3, G5 \\ & T6 & To enable navigation of speech videos by emotional cues in video collections. & G2, G4, G5 \\ \hline Overview + Detail & T7 & To support understanding of data between speeches, of entire speeches, and within a speech. & G1-G5 \\ \hline Interactive Feature Specification & T8 & To allow selection of specified speeches and factors. & G1-G5 \\ \hline \multirow{2}{*}{Data Abstraction \& Aggregation} & T9 & To show the calculated correlation and distribution between effectiveness and factors. & G1, G2, G4, G5 \\ & T10 & To provide effectiveness estimation on speeches in terms of factors. & G1, G3, G4, G5 \\ \hline \end{tabular} \vspace{-0.5cm} \end{table*} According to the overall goals above, we derived 10 design tasks in \autoref{tab:tasks}, categorized as suggested by Kehrer and Hauser\cite{kehrer2013visual}. T1-2 focus on visual data fusion, i.e., the way of presenting data to support further exploration. T3-4 focus on the relation and comparison between speeches, enabling users to understand similarities and differences. T5-6 focus on the navigation of speeches, providing users with interactions to explore within one speech or across a collection by emotional cues. T7 relates to the analysis from overview to detail. T8 allows the factors of interest to be dynamically specified by users.
T9-10 mainly focus on data abstraction and aggregation, which support users in finding hidden patterns or strategies in speeches and in estimating speech effectiveness using algorithms. \section{Data and Analysis} In this section, as illustrated in the upper part of \autoref{fig:systemoverview}, we describe the details of data collection (I), data pre-processing (II), factor calculation (III) and the method of effectiveness analysis (IV), based on the results of the domain-centered design procedure. \subsection{Data} There are three progressive steps that process the data into three levels: original data, features and factors. Each speech consists of: 1) the original video; 2) the scripts; 3) metadata such as the start and end of the speech in the video; 4) information about the speech, including contest region, year, level and rank; 5) feature data extracted from the original video; 6) data on the factors listed in \autoref{tab:pvalue}. \subsubsection{Data Collection} The entire database includes 203 videos from the World Championship of Public Speaking published online, including on YouTube and WeChat channels. We collected the videos and metadata manually. The contest level of each speech video was recorded as a measurement of effectiveness: area, division, district, semi-final and final. The number of videos per level is approximately balanced, and we ensured that all collected videos are of good quality. Detailed information about our database is provided in Sect. 1 of the supplementary material. \subsubsection{Data Pre-processing} Part II of \autoref{fig:systemoverview} illustrates the data pre-processing step of our system. In order to acquire the previously mentioned factors, we extracted image frames, voice and text from the original video. The voice and text are aligned at the sentence level, while the images remain at the frame level. \textbf{Facial emotional data:} We recognized the discrete emotion types, valence and arousal of the speaker from the frames of the video. Faces in frames and their positions are detected by face\_recognition\cite{face_recognition}. The faces are further clustered by DBSCAN\cite{10.5555/3001460.3001507} with the facial features extracted during detection, to identify the faces of each speaker without interference from other faces in the video; a sketch of this step is given below. AffectNet\cite{AffectNet} is a widely used database and baseline method for facial expression, valence and arousal computing in the wild; we used AffectNet to extract the facial valence and arousal data. We extracted the facial emotion types using an emotion classification convolutional neural network by Arriaga et al.~\cite{arriaga2017real}.
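The following Python sketch illustrates that clustering step; the helper name and the DBSCAN parameters are our own choices, not values reported here:
\begin{verbatim}
import numpy as np
import face_recognition
from sklearn.cluster import DBSCAN

def speaker_frames(frames, eps=0.5, min_samples=5):
    """Detect and embed all faces in the sampled RGB frames of one video,
    cluster the embeddings, and keep the largest cluster, assumed to be
    the speaker (video decoding is omitted here)."""
    encodings, owners = [], []
    for i, frame in enumerate(frames):
        boxes = face_recognition.face_locations(frame)
        for enc in face_recognition.face_encodings(frame, boxes):
            encodings.append(enc)
            owners.append(i)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        np.array(encodings))
    speaker = np.bincount(labels[labels >= 0]).argmax()  # -1 marks noise
    return [f for f, l in zip(owners, labels) if l == speaker]
\end{verbatim}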
\textbf{Textual emotional data:} We applied the Azure Cognitive Speech to Text Service\footnote{\url{https://azure.microsoft.com/en-us/services/cognitive-services/speech-to-text/}} to convert the spoken audio to script text with timestamps for each sentence and word. The method to extract the textual valence and arousal data comes from the work of Wang et al.\cite{wang-etal-2016-dimensional}, which uses a regional CNN-LSTM model to analyze dimensional sentiment. \textbf{Vocal emotional data:} With the timestamps of sentences, we split the audio into sentence-level clips and applied an open-source toolbox for multimodal emotion analysis developed by Buitelaar et al.\cite{8269329} to obtain the vocal valence and arousal data of the clips. \textbf{Non-emotional data:} We considered two non-emotional factors. The pauses between words and sentences, as well as the words spoken per minute, were calculated from the timestamps. The vocabulary level was measured with the Dale-Chall measure\cite{enwiki:1026007428}, calculated with an open-source tool for assessing the readability level of a given text \cite{readability}. \subsubsection{Factor Calculation} \label{section:factor calculation} The raw data extracted from the videos are time series of multi-modal valence and arousal data. As raw time series are not intuitive for users to explore, we calculate factors based on them, as in part III of \autoref{fig:systemoverview}. The calculation methods are as follows. The time series of multi-modal valence or arousal data is represented as $D=\{d_t^m\}_{t=1}^T$, where $d_t^m$ indicates the $t$-th sample of time series data in modality $m$. Similarly, the time series of multi-modal emotion type data is represented as $E = \{e_t^m\}_{t=1}^T$. The average factor represents the average value of a specific data modality over time: \begin{equation} \label{eq:1} average = \frac{\sum_{t=1}^{T}d_t^m}{T}\\ \end{equation} Volatility represents the change of the data over time. We first normalize the data, and then compute volatility according to Equations (\ref{eq:2})-(\ref{eq:3}). \begin{equation} \label{eq:2} D_{diff} = \{d^m_t-d^m_{t-1}\}_{t=2}^T\\ \end{equation} \begin{equation} \label{eq:3} volatility = \sqrt{D_{diff}\cdot D_{diff}}\\ \end{equation} Diversity represents the variety and relative abundance of the emotions\cite{quoidbach2014emodiversity}. We calculate it with Equation \ref{eq:4}, the Shannon form: \begin{equation} \label{eq:4} diversity = -\sum_{i=1}^{e}(r_i\times \ln{r_i})\\ \end{equation} Here $e$ equals the total number of emotion types in $E$ and $r_i$ equals the proportion of $E$ containing the $i$-th emotion. In \cite{zeng2019emoco}, Zeng et al. explore the effect of emotion coherence across the facial, text and audio modalities. Similarly, we calculate the value of arousal and valence coherence (defined in \autoref{eq:5}), where $std$ and $mean$ indicate the standard deviation and the mean value respectively, and the superscripts w, v and f represent the textual, vocal and facial modalities. \begin{equation} \label{eq:5} coherence = \frac{1}{T}{\sum_{t=1}^{T}\frac{std(d_{t}^{w},d_{t}^{v},d_{t}^{f})}{mean(d_{t}^{w},d_{t}^{v},d_{t}^{f})}}\\ \end{equation} In the interviews, experts estimated the last 20\% of a speech to be more important than the other parts, so we calculate the final valence and arousal with Equation \ref{eq:6}. \begin{equation} \label{eq:6} final = \frac{\sum_{t=0.8T}^{T}d_t^m}{T}\\ \end{equation} For the final emotion, diversity and type ratio factors, we only use facial data as input, since textual and vocal data are much sparser than facial data over time: we can extract an item of facial data from every video frame, but only one item of textual or vocal data from a complete sentence. We found the textual/vocal results of these factors less convincing. The sketch below summarizes these computations.
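The following NumPy sketch (our own condensation; the function names are not from the system) mirrors the per-modality factor computations of Equations (\ref{eq:1})-(\ref{eq:6}):
\begin{verbatim}
import numpy as np

def factors(d, emotions):
    """d: 1-D array of valence or arousal samples for one modality;
    emotions: discrete per-frame emotion labels (facial)."""
    average = d.mean()                               # Eq. (1)
    z = (d - d.mean()) / d.std()                     # normalize, then Eq. (2)
    diff = np.diff(z)
    volatility = np.sqrt(diff @ diff)                # Eq. (3)
    _, counts = np.unique(emotions, return_counts=True)
    r = counts / counts.sum()
    diversity = -(r * np.log(r)).sum()               # Eq. (4)
    final = d[int(0.8 * len(d)):].sum() / len(d)     # Eq. (6)
    return average, volatility, diversity, final

def coherence(dw, dv, df):
    """Eq. (5): mean ratio of the per-time std to the per-time mean across
    the textual (w), vocal (v) and facial (f) streams, assumed aligned."""
    stacked = np.stack([dw, dv, df])
    return np.mean(stacked.std(axis=0) / stacked.mean(axis=0))
\end{verbatim}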
\subsection{Factor Effectiveness Analysis} \label{ordinal} According to G1, we want to find the relation between speech factors and effectiveness. Contest speech levels can be regarded as ordinal variables, whose relative ordering is significant. For example, grades 1-5 may respectively represent the five levels ``area'', ``division'', ``district'', ``semi final'' and ``world final''. In the World Championship of Public Speaking, only those who rank at the top of a level advance to the next one, so we hypothesize that the higher the level, the more effective the speech. Given the factors of a certain speech, the problem of predicting its level can be considered an intermediate problem between regression and classification~\cite{gutierrez2015ordinal}. We first conducted the test of parallel lines and found a p-value smaller than 0.05, indicating that the level prediction problem is suited to multi-class ordinal regression. We then split the prediction problem into four sub-problems, as shown in \autoref{fig:systemoverview} IV. In each sub-problem, we performed logistic regression on the odds ratio of each factor. Finally, we obtained the p-value of each factor in \autoref{tab:pvalue}, where we mark the factors computed as significantly related to effectiveness. The result of our factor effectiveness analysis shows that the average of facial arousal, the average of vocal arousal and valence, the volatility of facial arousal and valence, the final facial arousal, and the ratio of happy facial expressions all have a significant correlation with speech effectiveness. Taking the experts' advice into consideration, we selected typical factors and embedded them into our system. Based on the results of the four sub-problems, we calculated the probability of each of the five levels as the factors change value; a sketch of this split follows.
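A compact sketch of one standard way to realize this split into binary sub-problems is given below; the data are random stand-ins, the exact binarization used in our analysis is not spelled out here, and the recovered per-level probabilities are not forced to be monotone, which is the usual caveat of such decompositions:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(203, 5)             # stand-in for per-speech factor values
y = np.random.randint(1, 6, size=203)  # contest level, 1 (area) .. 5 (final)

# One binary sub-problem "level > k" for k = 1..4.
models = [LogisticRegression().fit(X, (y > k).astype(int))
          for k in range(1, 5)]

def level_probabilities(x):
    """P(level = j) recovered from adjacent sub-problems."""
    gt = [1.0] + [m.predict_proba(x[None])[0, 1] for m in models] + [0.0]
    return [gt[j] - gt[j + 1] for j in range(5)]

print(level_probabilities(X[0]))
\end{verbatim}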
\section{System Overview} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/systemoverview.png} \setlength{\belowcaptionskip}{-0.5cm} \caption{An overview of our analytic system.} \label{fig:systemoverview} \end{figure} The design of the system takes into account expert opinion, user feedback, and insights from the factor effectiveness analysis. \autoref{fig:systemoverview} illustrates the overall system architecture, which includes two major parts: a data \& analysis part and a visualization part. The visualization part adopts multi-modal and multi-level data from the first part, providing an interactive analytic interface for insights. \textbf{Views.} In the visualization part, as shown in \autoref{fig:MainInterface}, our system consists of four views: (A) the view of factors, (B) the view of all speeches, (C) the view of the selected speech and (D) the view of the selected speech context. In view A, the table displays the factors and p-values obtained from the factor calculation and effectiveness analysis steps. It helps users understand the relation between speech effectiveness and various speech factors (G1), as well as the connection of emotional data to other speech factors. Given the range of factors of interest to the audience, a visualization system that provides an overview of all factors is ideal for understanding their relation to effectiveness. View B provides a panel for viewing all speeches for comparison, navigation and exploration, with five novel sub-views: E-factor, E-similarity, E-spiral, E-script and E-type (G1, G2, G4, G5); see \autoref{section:visualization} for details. We utilize the raw arousal and valence data from the pre-processing phase and the factor values from the factor calculation phase to generate the visualizations. View C contains four sub-views showing the data and visualizations of the selected individual speech (G3); it helps users analyze a selected speech in more detail. View D contains information about our database and detailed context information about the speaker (G2, G3). The four views of our system assist users in exploring our database and finding out which factors affect speech effectiveness (G1, G5). \textbf{Sub-views.} As mentioned above, views B and C contain several sub-views. The sub-views in view B assist users in analyzing overall trends and in navigating to and locating speeches of interest in our database. E-factor and E-similarity show the distribution of all speeches (G2). For E-spiral, E-script and E-type, we provide sub-views that aggregate the visualizations of all speeches (G5). The sub-views in view C help users observe the visualization of a selected individual speech in more detail using visualization tools such as E-spiral, E-script and E-type. We also visualize the original time series of valence and arousal data in the timeline sub-view (G3). \textbf{Interaction.} We designed the interactive steps of the system with Shneiderman's information seeking mantra as a guideline: ``overview first, zoom and filter, then details-on-demand.'' \cite{shneiderman2003eyes} The views are organized from left to right by the level of abstraction of the data. We provide interactions to support the overview-to-detail exploration process (G5). Upon selection of an effectiveness factor, E-factor shows the distribution of the factor values over all speeches. Users can also hover the mouse over the speeches to see the individual speech data, and click to change the view of the selected speech. This interaction is supported by each of the sub-views in the view of all speeches (B). These sub-views aggregate all the visualizations of speeches and organize them by level. For deeper exploration, users can click the sub-views in view (C) to generate a bigger visualization in a floating window. In E-similarity, upon clicking the dot representing an individual speech, a radar-like chart is displayed on the right to show the predicted level of the critical factors of the selected speech (G4). \section{Visualization Design} \label{section:visualization} \begin{table*}[tb] \centering \caption{Visualization Methods and Corresponding Tasks.} \label{tab:module} \begin{tabular}{|c|c|c|} \hline Module & Description & Task\\ \hline E-factor & To evaluate hypotheses of interest about speech factors using the cumulative data of all speeches. & T1-T3, T6-T8 \\ E-type & To understand discrete emotional data contained in emotional types, as well as their distribution over time. & T1, T3, T5, T7 \\ E-script & To understand the emotion in speech scripts. & T1, T3, T6-T7 \\ E-spiral & To provide an intuitive way of understanding the emotional shifts within speeches. & T1, T3, T6-T7\\ E-similarity & To understand the similarity and the effectiveness estimation of speech factors in speeches. & T3, T4, T7, T10\\ E-distribution & To understand the distribution of factor effectiveness among speech levels. & T2, T3, T7-T9\\ \hline \end{tabular} \end{table*} As with the system itself, our visualizations were designed in iterations that began with interviews and were refined with user feedback. In this section we present the final designs used in our system, as well as the reasoning behind our visualization methods.
The relation of our visualizations to our design tasks can be seen in \autoref{tab:module}. We introduce the visualization techniques in two parts: visualizations generated from the data of all speeches, including E-factor, E-similarity and E-distribution; and visualizations generated from individual speech data, including E-spiral, E-script and E-type. \subsection{Visualizations Generated from All Speech Data} \subsubsection{E-Factor} In our literature review of effectiveness in this inspirational speech contest, we found many claims by experts that particular speech factors have significant relationships to speech performance. This visualization aims to allow users to evaluate hypotheses of interest about speech factors at the macroscopic level of all speeches. Upon selection of a factor in (A), the user is presented with many dots, each positioned horizontally according to the cumulative amount of the factor in a speech (T2). The speeches are sorted vertically by the level of the speech. As shown in \autoref{fig:MainInterface}(B1), the light blue rectangle for each level covers the middle 50\% of the distribution of the speeches, and the dark blue line indicates the median of each level (T3). A geographical analysis of factors is provided in view (D): by clicking a country on the map, the speakers from that country are highlighted, so users can analyze regional differences between countries (T1). \subsubsection{E-Similarity} According to the experts' feedback, we found that they desired to compare the similarity between speeches (T4). To allow this comparison, we chose the five most significant factors as each speech's feature vector and used t-SNE\cite{van2008visualizing} to reduce the dimensionality of the feature vectors, displaying all speeches on a two-dimensional map (a minimal sketch of this projection step appears below). The closer two speeches are to each other, the more similar they are. In order to better understand this relation, a radar chart displays the five most significant factors and a given speech's estimated level based on the amount of each factor (T10). In \autoref{ordinal}, the result of the ordinal regression contains the probability of the contest levels at a certain value of a factor. For a particular speech, we use the value of a certain factor to predict its level and use the radial coordinate of the radar chart to represent the predicted level. The larger the area of the polygon in the radar chart, the higher the predicted effectiveness of the speech. As shown in \autoref{fig:MainInterface} (B5), clicking a dot representing an individual speech in the scatter plot brings up a radar chart. Dots are color-encoded, allowing rapid comparison of a speech's estimated level with its true value (T3). \subsubsection{E-Distribution} We designed E-distribution to show how each factor changes the calculated probability of each level, which can be interpreted as a metric of effectiveness (T10). In \autoref{ordinal}, we obtained the probability of the five levels with respect to each factor. The five lines in the graph represent the probability distributions of the five contest levels, with the same color encoding as in E-similarity. E-distribution can be observed in \autoref{fig:MainInterface} (A). For example, for the factor arousal mean we can observe that larger values on the right of E-distribution result in higher probabilities of the darker line, i.e., the final level of the contest.
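As referenced above, the E-similarity layout reduces each speech's five most significant factors to a 2-D position. A minimal sketch (with a random stand-in for the real factor matrix) is:
\begin{verbatim}
import numpy as np
from sklearn.manifold import TSNE

factors = np.random.rand(203, 5)   # stand-in: one row of 5 factors per speech
xy = TSNE(n_components=2, perplexity=30,
          random_state=0).fit_transform(factors)
# xy[i] is the 2-D position of speech i; nearby dots mean similar profiles.
\end{verbatim}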
\subsection{Visualizations Generated from Individual Speech Data} \subsubsection{E-Spiral} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figures/spiral-demo.PNG} \setlength{\belowcaptionskip}{-0.5cm} \caption{E-spiral: a novel spiral-type visualization of time-series emotion data. A comparison is given: (a) Spiral without turning points. (b) Spiral with turning points. Emotion shifts during speeches are intuitively presented, with an interaction showing the detailed data.} \label{fig:Espiral} \end{figure} In our preliminary interviews, experts suggested that emotional twists and turns may be important to consider in contest speeches. Preliminary data analysis early in our collection of speeches confirmed this hypothesis: we found statistically significant results for shifts in arousal (p-value 0.020) and valence (p-value 0.006). We sought to create a visualization that would present a compact view of these emotional shifts so that speeches could be rapidly compared. One option is spirals. According to a survey of timeline visualizations, spirals have advantages for displaying certain kinds of information, and their potential may not have been exhausted: Brehmer et al. found that spirals are ``appropriate for presenting many events within a single dense display.'' \cite{2017Timelines} We therefore created a new form of spiral that shows the emotional twists and turns in a visually dramatic way. Clockwise and counterclockwise turns in the spiral indicate shifting negative and positive emotions, with sharp angles in the visualization showing the emotional turning points. Due to the compact structure, large-scale comparison is possible, supporting comparison and navigation between speeches and within a speech (T7), as shown in \autoref{fig:MainInterface}(B2). Based on the results extracted from the speech video, we identified the speaker's valence and arousal at regular intervals. Each circle appears in chronological order, starting at the center of the spiral. Significant shifts in the valence of speeches are reflected in the change of direction. As comparison of the emotional diversity of speeches was stated to be a priority in our pre-survey of experts, we further indicate the emotional type of each interval by the color of its circle. The circle radius represents the arousal of the emotion, and transparency represents the probability that the emotion is labeled correctly. E-spiral is generated in polar coordinates with $\theta_n = \theta_{n-1} + 2\pi \Delta_{r} p_{i}$, where $\theta_n$ is the polar angle of the center of the $n$-th circle and $\Delta_{r} = r_n - r_{n-1}$ is the variation of the polar radius between the $n$-th and $(n-1)$-th circles, which is constant since the spiral expands at a constant rate. The emotional turning points are generated from the positive and negative changes of accumulated emotion in intervals: $E_i=\sum_n a_n$ is the accumulated emotion in an interval of 5 seconds, where $a_n$ ranges over the valence samples in interval $i$. The spiral turns clockwise when $p=1$ and counterclockwise when $p=-1$. The changes of $p$ determine the emotional turning points in the spiral, computed in \autoref{eq:9}; the initial value of $p$ is defined by the emotion in the first interval, as shown in \autoref{eq:8}. \begin{equation} \label{eq:8} p_0= \left\{ \begin{aligned} 1 & , & E_0\geqslant0, \\ -1 & , & E_0<0. \end{aligned} \right. \end{equation}
\begin{equation} \label{eq:9} p_{i\geqslant1}= \left\{ \begin{aligned} -p_{i-1} & , & E_i * E_{i-1} < 0 \text{ and } | E_i - E_{i-1} | > 10, \\ p_{i-1} & , & \text{otherwise.} \end{aligned} \right. \end{equation} With the help of E-spiral, as shown in \autoref{fig:Espiral}, we can clearly see the changes of emotion during the speech via the turning spiral. Clicking a circle on the spiral skips directly to the corresponding video frame of the selected speech, supporting rapid browsing with emotional cues (T5). A small sketch of the layout rule follows.
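The sketch below reflects our reading of Equations (\ref{eq:8})-(\ref{eq:9}), with one circle per accumulation interval; the function name, the interval granularity and the radius step are our own assumptions:
\begin{verbatim}
import numpy as np

def espiral_centers(valence, samples_per_interval=5, dr=0.05):
    """Place one circle per interval; the accumulated valence E_i flips the
    turning direction p at emotional turning points (Eq. (9))."""
    chunks = np.array_split(
        valence, max(1, len(valence) // samples_per_interval))
    E = [c.sum() for c in chunks]
    p = 1 if E[0] >= 0 else -1                 # Eq. (8)
    theta, r, centers = 0.0, 0.0, []
    for i, e in enumerate(E):
        if i >= 1 and e * E[i - 1] < 0 and abs(e - E[i - 1]) > 10:
            p = -p                             # turning point
        r += dr                                # constant radial growth
        theta += 2 * np.pi * dr * p            # theta_n = theta_{n-1}+2*pi*dr*p_i
        centers.append((r * np.cos(theta), r * np.sin(theta)))
    return centers
\end{verbatim}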
\subsubsection{E-Script} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figures/Escript.png} \setlength{\belowcaptionskip}{-0.8cm} \caption{E-script: a novel visualization allowing key information about the text expression and emotional trends to be compared across whole speeches.} \label{fig:script} \end{figure} How speech script content relates to key speech delivery information is the subject of much literature on speech giving. E-script allows a fine-grained understanding of how multi-modal speech emotion (T5), word speed, and pauses relate to the timing of each word of a speech (T1). As experts advised that effective use of pauses and word speed is important to the emotional delivery of script content, we sought to indicate these factors in an intuitive way: word speed is indicated by the tracking between the letters of a word, and pauses by the spaces between words. In a novel approach, E-script aims to provide an audience with ordinary visualization literacy an intuitive way of understanding script emotion data by changing letter shape to encode quantitative information (quantitative glyphs). E-script highlights emotionally intense, high-arousal moments. Sievers et al.~\cite{2020A} found that the shape of lines can visually encode emotion, is closely related to arousal, and can be seen as independent of valence. We supposed that this visual mapping might be applied in visualizations with line shape, and implemented it directly in the speech script by connecting the line-based JavaScript font Leon Sans~\cite{leonsans} with the arousal information. The size of the text is also changed in order to highlight emotionally intense moments in the speech. The method of mapping color to the valence and arousal in the speech script follows \cite{2020EmotionMap}. We found no prior work in visual analytics that adjusts letter shape in order to convey quantitative emotion: Brath et al.~\cite{BRATH201659} survey existing text visualizations, noting only one case of quantitative glyphs, used for indicating prosody. The use here for intuitive communication of emotion can thus be seen as a novel expansion. \subsubsection{E-Type} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/Etype.PNG} \caption{E-type: a linear visualization of discrete emotional type together with continuous emotional data.} \label{fig:Emo-type} \end{figure} As discussed in the interview section, an open question for speech experts is the role of the categories of emotions in speeches, as well as the temporal distribution of emotions (T1). E-type allows continuous valence and arousal data to be compared to discrete emotional type data across the duration of a speech. We supposed that a linear visualization might be more intuitively understandable to non-expert audiences. As shown in \autoref{fig:Emo-type}, we use a series of rectangles to show the categories of emotions, sampling the emotional data of each speech evenly 200 times to obtain 200 rectangles. The color of a rectangle represents the category of emotion, while its height and $y$-coordinate represent the arousal and valence values respectively. A red line connecting the center points of the rectangles indicates the change of valence. With E-Type, users can grasp the proportion of each category of emotion more clearly. In contrast to E-spiral, E-type offers a more fine-grained view of the continuous emotional data.
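As an illustration of this encoding, the rectangles can be derived from frame-level records as in the following sketch (ours; the per-record field names are hypothetical, and the system's actual sampling may differ):
\begin{verbatim}
from collections import Counter

def etype_rectangles(frames, n_bins=200):
    # frames: list of dicts with keys "emotion", "valence", "arousal".
    # Each bin becomes one rectangle: colour = majority emotion label,
    # height = mean arousal, vertical position = mean valence.
    size = max(1, len(frames) // n_bins)
    rects = []
    for b in range(n_bins):
        chunk = frames[b * size:(b + 1) * size]
        if not chunk:
            break
        rects.append({
            "x": b,
            "color": Counter(f["emotion"]
                             for f in chunk).most_common(1)[0][0],
            "height": sum(f["arousal"] for f in chunk) / len(chunk),
            "y": sum(f["valence"] for f in chunk) / len(chunk),
        })
    return rects
\end{verbatim}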
\section{Evaluation} Our evaluation focuses on two main goals as categorized by Lam et al.~\cite{lam2011empirical}: user experience (UE), which we address with a usability test, and visual data analysis and reasoning (VDAR), which we address with a user survey and a case study. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/EvaluationProcedure.png} \setlength{\belowcaptionskip}{-0.5cm} \caption{Our evaluation procedure.} \label{fig:evaluation} \end{figure} \subsection{Study Design} The procedure of our evaluation is shown in \autoref{fig:evaluation}. Before participants experienced E-ffective, we introduced the project and related terms, and pre-surveyed their opinions of speech factors (VDAR). Next, they were introduced to the system, instructed to complete a series of user tasks, and then freely analyzed speeches using the system. Finally, they finished a user experience interview (UE) and a post-survey (VDAR). \textbf{User Tasks:} The user tasks focused on instructing the participants to better understand the main functions of the visual analysis methods and the whole system. We designed 10 user tasks aimed at both visualization design (UT1-UT6) and system design (UT7-UT10). The detailed tasks are listed in Sect. 4.2 of the supplementary material. Participants were limited to using specified visualizations to complete UT1-UT6, while they were free to use any visual analysis methods to complete UT7-UT10. \textbf{Free Analysis:} 15 minutes of the evaluation were dedicated to participants analyzing speeches with the whole system in a self-directed manner. During the analysis, participants were encouraged to freely speak out their intent in using the system as well as their findings on speech effectiveness. The purpose of the free analysis procedure was to observe the creative ways users reasoned using the system and to record meaningful insights made by participants. \subsection{Participants} Our system is designed for both experts and novices. We recruited 16 participants for the evaluation, including 8 experts in public speaking and 8 novices. All participants had competed in the World Championship of Public Speaking, and all were active in the field of public speaking. None of the participants had expert-level visualization literacy. The expert participants (E1-E8) all had rich experience in public speaking. Almost all of them (7 of 8) had participated in over five speech contests, and five of them had trained others for public speaking as an occupation for more than five years. Half had STEM educational backgrounds. Five of them had watched more than fifty speech contests. It is worth mentioning that one of the volunteers once won second place in the world final of the contest, and another was a two-time world semifinal competitor. Of the novice participants (N1-N8), seven had competed in the contest at least twice. Half of them had STEM educational backgrounds, the same ratio as the experts. Most novices had watched more than ten speeches, and all desired to improve their speech skills by learning from excellent speakers. Detailed information about the participants is provided in Sect. 4.1 of the supplementary material. \subsection{Usability Testing} In our usability test, we wanted to evaluate how useful and easy to use our system and visualization methods are. Following the completion of both the user tasks and the free analysis, the user experiences of participants were scored on a 7-point Likert scale for usefulness and ease of use in a user experience questionnaire. As shown in \autoref{fig:userexperience}, the questionnaire evaluates both visualization design and system design. \begin{table}[]\small \caption{Statistical Data about the Results of the Questionnaire.} \label{table:question} \centering \begin{tabular}{cccclcc} \hline & \multirow{2}{*}{Question} & \multicolumn{2}{c}{Visualization Design} & & \multicolumn{2}{c}{System Design} \\ \cline{3-4} \cline{6-7} & & Mean & Std & & Mean & Std \\ \hline \multirow{2}{*}{Experts} & Usefulness & 5.950 & 1.218 & & 6.200 & 0.966 \\ & Ease of Use & 6.075 & 1.047 & & 6.200 & 0.822 \\ \hline \multirow{2}{*}{Novices} & Usefulness & 6.150 & 1.051 & & 6.575 & 0.636 \\ & Ease of Use & 5.900 & 1.033 & & 6.050 & 0.876 \\ \hline \end{tabular} \end{table} \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{figures/userexperience.png} \setlength{\belowcaptionskip}{-0cm} \caption{Result of the user experience questionnaire.} \label{fig:userexperience} \end{figure} The description and results of the user experience questionnaire are provided in \autoref{fig:userexperience}, and the means and standard deviations of the results are shown in \autoref{table:question}. All mean scores are higher than 5.9, showing that our system and visualizations are useful and easy to use for both experts and novices. The standard deviations for the visualization design questions are higher than for the system design questions. Given the relatively low visualization literacy of the participants, these novel visualizations may be relatively complex, leading to different levels of understanding. In \autoref{fig:userexperience}, we can see that both experts and novices thought that E-similarity is less useful and less easy to use than the other visualizations. Several participants additionally gave the feedback that the five significant factors used in this visualization were not comprehensive enough for understanding a speech. Also, several participants had difficulty conceptually understanding the t-SNE layout, leading to further difficulty in understanding why two speeches appeared similar. Experts preferred E-distribution, possibly because E-distribution can help them intuitively find out how effectiveness changes with regard to factors they are more familiar with. Novices preferred E-script, and several of them said that they thought it was very useful to explore the emotion shown inside the original speech script. Many found the pause and word-speed information, which was missing in the other visualizations, useful. For questions about system design, participants reported more satisfaction with the guidance of emotional cues. They agreed that emotion plays a key role in inspirational speeches. Novices remarked that our system can help explore the effectiveness of speech factors.
They gave the system higher scores than the experts in terms of usefulness, perhaps because experts have more experience with speech effectiveness than novices and thus hold opinions of their own that may differ from our system's results. Novices scored the system lower than experts in terms of ease of use, perhaps because they lack experience in public speaking and with the terms associated with competition success. On the whole, they needed more time to understand and grasp how to use the system. \subsection{User Survey} \label{sec:user survey} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures/prepost.png} \setlength{\belowcaptionskip}{-0.5cm} \caption{Result of the user survey, comparing the opinions before and after using the system. Darker blue values mean the opinion of a user changed to be closer to the system suggestion, and brown means the opposite.} \label{fig:prepost} \end{figure} Speech experts and novices were interviewed before using the system to determine their views about the importance of factors in the contest. The participants were asked to give their opinions on whether they thought a factor would significantly impact performance between contestants, and then to predict the typical role the factor plays in performance. Afterwards, they were surveyed again to see whether their opinions on the surveyed questions had changed. The participants generated their hypotheses on the speech factors when they were pre-surveyed and further examined these hypotheses by using the system. In the post-survey, we collected their results, which we later used to understand their hypothesis generation and reasoning. \textbf{Surveyed factors and measurement.} The factors included one factor (average arousal) determined by our analysis in \autoref{ordinal} to have significant influence, two factors (valence coherence, vocabulary level) with near-significant influence, and two factors (emotional diversity, average valence) with insignificant influence. Significance is measured on a scale from 1 (very insignificant) to 7 (very significant), and the relationship on a scale from $-3$ (very negative) to $3$ (very positive). \textbf{Results processing.} We adopt our analysis results as ground truth for judging the directions of opinion changes, so that we can conclude whether the participants were influenced positively or negatively. The results of the pre-survey ($R_{pre}$) and post-survey ($R_{post}$) were further processed. For significance results, the final score of each question is $F_{s}=R_{post}-R_{pre}$ if the suggested significance in our factor analysis is significant and $F_{s}=R_{pre}-R_{post}$ if insignificant. For relationship results, the final score of each insignificant factor is $F_{r}=\vert R_{pre}\vert - \vert R_{post} \vert$, measuring the change towards $0$. The final score of each significant factor is $F_{r}=R_{post}-R_{pre}$ if the relation is positive and $F_{r}=R_{pre}-R_{post}$ if the relation is negative. The processed results are shown in \autoref{fig:prepost} as a heat map.
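For clarity, the score processing above can be sketched as follows (a minimal illustration with hypothetical field names, not our actual analysis script):
\begin{verbatim}
def final_scores(r_pre, r_post, significant, relation_sign=0):
    # r_pre, r_post: dicts with keys "sig" (1..7) and "rel" (-3..3).
    # significant: ground-truth significance from our factor analysis.
    # relation_sign: +1 / -1 for significant factors, 0 otherwise.
    if significant:
        f_s = r_post["sig"] - r_pre["sig"]
    else:
        f_s = r_pre["sig"] - r_post["sig"]
    if relation_sign == 0:      # insignificant: movement towards 0
        f_r = abs(r_pre["rel"]) - abs(r_post["rel"])
    elif relation_sign > 0:
        f_r = r_post["rel"] - r_pre["rel"]
    else:
        f_r = r_pre["rel"] - r_post["rel"]
    return f_s, f_r

# Average arousal (significant, positive relation):
print(final_scores({"sig": 4, "rel": 1}, {"sig": 6, "rel": 2},
                   significant=True, relation_sign=+1))  # (2, 1)
\end{verbatim}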
\textbf{Survey prediction.} The results of the user survey show that our system influenced the participants' opinions. In general, our system had a positive impact on participants' understanding of speech factors. Changes in the opinions of experts show that they were slightly more affected than the novices, although more experts than novices insisted on their original opinions about factors. With respect to individual factors, the biggest change for both experts and novices was in opinion on the significance of vocabulary. Across all factors, however, participants' opinions changed differently for significance than for relationship: the impact of the system on factor significance shows more negative results than its impact on factor relationship. The reason may be that in the pre-survey participants rated the significance too high, but from the system they found the factors less significant than they had thought, so that in the post-survey they overcompensated and rated the factors as less significant than suggested. Overall, the results of the user survey indicate that our system enables users to update their opinions based on findings made while exploring speech factor effectiveness. \subsection{Case Study} 1) In our evaluation sessions we often found experts identifying effective and ineffective contest strategies (G5). Upon finding that our system showed a trend of speakers at the lower levels having large ratios of happy expressions, a past world second-place winner of the contest commented: ``In my experience there are two kinds of speakers at the lower levels: one shows a lot of happiness and jokes. Another kind just tells a sad story. The emotion doesn't change so much.'' Later this participant viewed the speeches sorted by emotional diversity with E-factor and clicked on a speaker with very low emotional diversity. The participant was surprised to find it was the last world champion. This countered her earlier pre-survey opinion that diversity is very significant and that high diversity is better. She then carefully verified the accuracy of the facial data using E-spiral by mousing over dots on the visualization to reveal facial expressions. She then reasoned that online speeches may not need as much emotional diversity: ``this is an online speech, it is a little bit different from a speech in a big meeting room. When I was on the stage, I was facing a lot of people and they really couldn't see my face, they had to look at the big screens.'' This new hypothesis revealed a limitation of our system: we had not separately labeled and processed the results of online and offline speeches. In other interviews additional tags were suggested regarding yearly trends, gender, and other factors. Exploring the context of different speaking strategies was often a focus of experts during the evaluations. One expert with contest experience both in China and at the world semifinal level in the United States explored the geographical differences among competitors (G2). He found the vocabulary of contestants in China to be lower; after exploring this difference between China and the USA, he sorted contestants by vocabulary with E-factor to find a world-finalist contestant. He then used E-script to find the many difficult words used by the competitor: ``If he is competing in China, he is going to lose the competition. Native speakers tend to have high levels of English vocabulary, but when they are addressing different groups of audiences they probably should use a lower level of vocabulary.'' This new hypothesis countered his pre-survey prediction that winning speeches were more likely to have higher vocabularies. Expert opinion about the use of vocabulary differed strongly in our interviews, as well as in the existing literature.
A previous world champion supported his view that the best speeches have simpler vocabularies with the claim that ``winning (world champion) speeches over the preceding 17 years ranged at grade levels between 3.5 and 7.7''~\cite{donovan2014speaker}. What this claim does not show is that, in our survey, speeches at the highest level of the contest have larger vocabularies on average than at any other level. 2) Several users and experts found additional applications for E-script in understanding critical factors of speech delivery. One novice and former district contestant found E-script intuitive to understand: ``(E-script is) useful for me because if I can see where I need to slow down my pace. And I can also see where the color matches the emotion and if it fits.'' In our evaluation we also observed other users applying E-script to evaluate emotional coherence across modalities, as well as paces and pauses. Furthermore, an expert who works full time in training for public speaking gave E-script full marks for usefulness and ease of use, and provided suggestions for development: ``I love it, I can see how people can use pauses to make their speech more powerful. In the future there could be a function where people could view and collaborate directly on the use of pauses in the script.'' \section{Discussion} In this section, we elaborate on the limitations and future directions of our system. \subsection{Data and Processing} While collecting over 200 inspirational speech videos from open-access channels, we tried to keep a balance between quality and quantity. The size of the database is still insufficient for conclusive evidence about the factors we studied. We therefore plan to enlarge the database while further applying the system to other practical domain applications. \subsection{Factors and Analysis} In this paper we focused on the major factors taken from our literature review and interviews with domain experts. However, determining what factors affect the performance of speeches remains a complicated problem. It is hard to include all the factors that matter, and reduction is necessary for better quantitative analysis. In order to better identify the factors affecting the effectiveness of a speech, we extracted factors from the valence and arousal data instead of analyzing the raw valence and arousal data directly. While this allows users to more intuitively explore which factors affect effectiveness, it may also be an oversimplification. Our current effectiveness analysis is limited to univariate linear regression; the system does not consider interactions between variables or other complex relations. \subsection{Limitations in Evaluation} In our evaluation we aimed to assess user experience and visual data analysis and reasoning. There are limitations to the results we obtained, especially in the case of the user survey. While we compare results before and after the use of E-ffective to the estimated results of our model, these results cannot be construed to suggest learning. Problems in the study design may influence the results, including the small sample size. Additionally, factors such as the reputation of the developers of the system or of the evaluation organizer may influence the credibility of our prediction outcomes. Differences in the reliability of our models on one type of data may also influence the perceived reliability of other types of data.
The varying accuracy of the various models we use in our system is likely to skew the results of our post-survey. \subsection{Limitations in Domain} The effectiveness of a speech is a very subjective issue, and there is no clear and quantifiable evaluation standard. Different people may have very different opinions of the same speech, depending on the preferences of the audience, cultural differences, and many other factors. We try our best to analyze the patterns of speeches in an objective, data-driven way. In addition, emotion does not play a key role in all types of speeches; many public speaking experts consider emotion to be of special importance in inspirational speeches in particular. The results of our current visual analysis are therefore not applicable to all situations. \subsection{Generalization} The E-ffective system proposed in this paper focuses on exploring the effectiveness of factors in inspirational speeches. Our evaluation showed it to be useful and easy to use, and to fit the domain requirements. Moreover, insights were made by users with different levels of experience in the domain. However, the potential of our system is not restricted to the domain of inspirational speeches: we see possibilities for extending the system to analyze the effectiveness of factors in other kinds of speeches. \section{Conclusion} In this paper, we propose E-ffective, a visual analytics system built for speech experts and novices to evaluate the effectiveness of speaking strategies. Our system primarily addresses factors involving emotional expression in an inspirational speech contest. Our usability test confirmed the system's utility, and the ability of participants to analyze and reason using the system was demonstrated in the case study and two surveys. In order to support the needs of our users we identified many potential factors influencing effectiveness in the competition, and through our algorithms and visualization methods we determined which factors were tied to effectiveness. The importance and utility of these factors were later verified in our evaluation. Two novel forms of visualization, namely E-spiral and E-script, were developed to further assist users in understanding critical factors and their application in speeches. In future work, we have already begun to expand our database of inspirational contest speeches as well as to create other kinds of speech effectiveness databases. We also plan to improve our analysis methods by considering the interrelation of factors and by further expanding the set of considered factors. Finally, we see the potential for additional visualization methods to more intuitively display factors critical to the effectiveness of speeches. \acknowledgments{This work was supported by the Beijing Natural Science Foundation (4212029), the Natural Science Foundation of China (61872346, 61725204), Alibaba Group through the Alibaba Innovative Research Program and the 2019 Newton Prize China Award (NP2PB/100047).} \bibliographystyle{abbrv-doi}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In this paper we study problems related to \emph{stable cuts} in graphs. A \emph{stable} cut of an edge-weighted graph $G=(V,E)$ is a partition of $V$ into two sets $V_0, V_1$ that satisfies the following property: for each $i\in\{0,1\}$ and $v\in V_i$, the total weight of edges incident on $v$ whose other endpoint is in $V_{1-i}$ is at least half the total weight of all edges incident on $v$. In other words, a cut is stable if all vertices have the (weighted) majority of their incident edges cut. The notion of stable cuts has been very widely studied from two different points of view. First, in the context of local search, a stable cut is a locally optimal cut: switching the side of any single vertex cannot increase the total weight of the cut. Hence, stable cuts have been studied with the aim of furthering our understanding of the basic local search heuristic for \textsc{Max Cut}. Second, in the context of algorithmic game theory a \textsc{Max Cut} game has often been considered, where each vertex is an agent whose utility is the total weight of edges connecting it to the other side. In this game, a stable cut corresponds exactly to the notion of a Nash equilibrium, that is, a state where no agent has an incentive to change her choice. The complexity of producing a Nash stable or locally optimal cut of a given edge-weighted graph has been heavily studied under the name \textsc{Local Max Cut}. The problem is known to be PLS-complete, under various restrictions (we give detailed references below). In this paper we focus on a different but closely related optimization problem: given an edge-weighted graph we would like to produce a stable cut \emph{of minimum total weight}. We call this problem \textsc{Min Stable Cut}. Besides being a fairly natural problem on its own, we believe that \textsc{Min Stable Cut}\ is interesting from the perspective of both local search and algorithmic game theory. In the context of local search, \textsc{Min Stable Cut}\ is the problem of bounding the performance of the local search heuristic on a particular instance. It is folklore (and easy to see) that in general there exist graphs where the smallest stable cut has size half the maximum cut (e.g.\ consider a $C_4$), and this is tight since any stable cut must cut at least half the total edge weight. However, for most graphs this bound is far from tight. \textsc{Min Stable Cut}\ therefore essentially asks to estimate the ratio between the largest and smallest stable cut for a given specific instance. Similarly, in the context of algorithmic game theory, solving \textsc{Min Stable Cut}\ is essentially equivalent to calculating the Price of Anarchy of the \textsc{Max Cut} game on the given instance, that is, the ratio between the smallest stable cut and the maximum cut. Since we will mostly focus on cases where \textsc{Max Cut} is tractable, \textsc{Min Stable Cut}\ can be seen as the problem of computing either the approximation ratio of local search or the Price of Anarchy of the \textsc{Max Cut} game on a given graph. \subparagraph*{Our results} It appears that little is currently known about the complexity of \textsc{Min Stable Cut}. However, since finding a (not necessarily minimum) stable cut is PLS-complete, finding the minimum such cut would be expected to be hard.
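For concreteness, the following minimal Python sketch (ours, purely illustrative) checks whether a given cut of an edge-weighted graph is stable in the above sense:
\begin{verbatim}
def is_stable(n, edges, side):
    # edges: list of (u, v, w) triples; side[v] in {0, 1}.
    # A cut is stable if every vertex has at least half of its
    # incident edge weight crossing the cut.
    deg = [0] * n   # weighted degree d_w(v)
    cut = [0] * n   # weight of cut edges incident on v
    for u, v, w in edges:
        deg[u] += w
        deg[v] += w
        if side[u] != side[v]:
            cut[u] += w
            cut[v] += w
    return all(2 * cut[v] >= deg[v] for v in range(n))

# C4 with unit weights: separating two adjacent vertices from the
# other two is stable, yet cuts only half of the maximum cut.
print(is_stable(4, [(0,1,1), (1,2,1), (2,3,1), (3,0,1)],
                [0, 0, 1, 1]))  # True
\end{verbatim}
This small example matches the folklore bound mentioned above: the stable cut found has weight $2$, while the maximum cut of the $C_4$ has weight $4$.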
Our focus is therefore to study the parameterized complexity of \textsc{Min Stable Cut}\ using structural parameters such as treewidth and the maximum degree of the input graph\footnote{We assume familiarity with the basics of parameterized complexity as given in standard textbooks \cite{CyganFKLMPPS15}.}. Our results are the following. \begin{itemize} \item First, we show that bounding only one of the two mentioned parameters is not sufficient to render the problem tractable. This is not surprising for the maximum degree $\Delta$, where a reduction from \textsc{Max Cut} allows us to show the problem is NP-hard for $\Delta\le 6$ even in the unweighted case (Theorem \ref{thm:npharddegree}). It is, however, somewhat more disappointing that bounded treewidth also does not help, as the problem remains weakly NP-hard on trees of diameter $4$ (Theorem \ref{thm:nphtrees}) and bipartite graphs of vertex cover $2$ (Theorem \ref{thm:nphardvc}). \item These hardness results point to two directions for obtaining algorithms for \textsc{Min Stable Cut}: first, since the problem is ``only'' weakly NP-hard for bounded treewidth, one could hope to obtain a pseudo-polynomial time algorithm in this case. We show that this is indeed possible and the problem is solvable in time $(\Delta\cdot W)^{O(\mathrm{tw})}n^{O(1)}$, where $W$ is the maximum edge weight (Theorem \ref{thm:algpseudo}). Second, one may hope to obtain an FPT algorithm when both $\mathrm{tw}$ and $\Delta$ are parameters. We show that this is also possible and obtain an algorithm with complexity $2^{O(\Delta\mathrm{tw})}(n+\log W)^{O(1)}$ (Theorem \ref{thm:algdelta}). \item These two algorithms lead to two further questions. First, can the $(\Delta\cdot W)^{O(\mathrm{tw})}n^{O(1)}$ algorithm be improved to an FPT dependence on $\mathrm{tw}$, that is, to running time $f(\mathrm{tw})(nW)^{O(1)}$? And second, can the $2^{O(\Delta\mathrm{tw})}$ parameter dependence of the FPT algorithm be improved, for example to $2^{O(\Delta+\mathrm{tw})}$ or even $\Delta^{O(\mathrm{tw})}$? We show that the answer to both questions is negative, even if we replace treewidth with pathwidth: under the ETH there is no algorithm running in $(nW)^{o(\mathrm{pw})}$ or $2^{o(\Delta\mathrm{tw})}(n+\log W)^{O(1)}$ (Theorem \ref{thm:eth1}). \item Complementing the above, we show that the problem does become FPT by treewidth alone if we allow the notion of approximation to be used in the concept of stability: there exists an algorithm which, for any $\varepsilon>0$, runs in time $(\mathrm{tw}/\varepsilon)^{O(\mathrm{tw})}(n+\log W)^{O(1)}$ and produces a cut with the following properties: all vertices are $(1+\varepsilon)$-stable, that is, no vertex can unilaterally increase its incident cut weight by more than a factor of $(1+\varepsilon)$; and the cut has weight at most equal to that of the minimum stable cut. \item Finally, motivated by the above mostly negative results, we also consider \textsc{Unweighted Min Stable Cut}, the restriction of the problem where all edge weights are uniform. Here our previous results give a much faster algorithm with parameter dependence $\Delta^{O(\mathrm{tw})}$, rather than $2^{\Delta\mathrm{tw}}$ (Corollary \ref{cor:algdelta2}). However, this poses the natural question of whether in this case the problem finally becomes FPT by treewidth alone. Our main result in this part is to answer this question in the negative and show that, under the ETH, \textsc{Unweighted Min Stable Cut}\ cannot be solved in time $n^{o(\mathrm{pw})}$ (Theorem \ref{thm:hard2}).
\end{itemize} Taken together, our results paint a detailed picture of the complexity of \textsc{Min Stable Cut}\ parameterized by $\mathrm{tw}$ and $\Delta$. All our exact algorithms (Theorems \ref{thm:algpseudo}, \ref{thm:algdelta}) are obtained using standard dynamic programming on tree decompositions, the only minor complication being that for Theorem \ref{thm:algdelta} we edit the decomposition to make sure that for each vertex some bag contains all of its neighborhood (this helps us verify that a cut is stable). The main technical challenge is in proving our complexity lower bounds. It is therefore perhaps somewhat surprising that the lower bounds turn out to be essentially tight, as this indicates that for \textsc{Min Stable Cut}\ and \textsc{Unweighted Min Stable Cut}, the straightforward DP algorithms are essentially optimal, if one wants to solve the problem exactly. For the approximation algorithm, we rely on two rounding techniques: one is a rounding step similar to the one that gives an FPTAS for \textsc{Knapsack} by truncating weights so that the maximum weight is polynomially bounded. However, \textsc{Min Stable Cut}\ is more complicated than \textsc{Knapsack}, as an edge which is light for one of its endpoints may be heavy for the other. We therefore define a more general version of the problem, allowing us to decouple the contribution each edge makes to the stability of each endpoint. This helps us bound the largest stability-weight by a polynomial, but is still not sufficient to obtain an FPT algorithm, as the lower bound of Theorem \ref{thm:eth1} applies to polynomially bounded weights. We then go on to apply a technique introduced in \cite{Lampis14} (see also \cite{AngelBEL18,BelmonteLM20,KatsikarelisLP19,KATSIKARELIS2020}) which allows us to obtain FPT approximation algorithms for problems which are W-hard by treewidth by applying a different notion of rounding to the dynamic program. This allows us to produce a solution that is simultaneously of optimal weight (compared to the best stable solution) and almost-stable, using essentially the same algorithm as in Theorem \ref{thm:algpseudo}. However, it is worth noting that in general there is no obvious way to transform almost-stable solutions to stable solutions \cite{BhalgatCK10,CaragiannisFGS15}, so our algorithm is not immediately sufficient to obtain an FPT approximation for \textsc{Min Stable Cut}\ if we insist on obtaining a cut which is exactly stable. \subparagraph*{Related work} From the point of view of local search algorithms, there is an extensive literature on the \textsc{Local Max Cut} problem, which asks us to find a stable cut (of any size). The problem has long been known to be PLS-complete \cite{JohnsonPY88, SchafferY91}. It remains PLS-complete for graphs of maximum degree $5$ \cite{ElsasserT11}, but becomes polynomial-time solvable for graphs of maximum degree $3$ \cite{Loebl91,Poljak95}. The problem remains PLS-complete if weights are assigned to vertices, instead of edges, and the weight of an edge is defined simply as the product of the weights of its endpoints \cite{FotakisKLMPS20}. Even though the problem is PLS-complete, it has long been observed that local search quickly finds a stable solution in most practical instances. One theoretical explanation for this phenomenon was given in a recent line of work which showed that \textsc{Local Max Cut} has quasi-polynomial time smoothed complexity \cite{AngelBPW17,BibakCC19,ChenGVYZ20,EtscheidR17}. 
\textsc{Local Max Cut} is of course polynomial-time solvable if all weights are polynomially bounded in $n$, as local improvements always increase the size of the cut. In algorithmic game theory much work has been done on the complexity of computing Nash equilibria for the cut game and the closely related \emph{party affiliation game}, in which players, represented by vertices, have to pick one of two parties and edge weights indicate how much two players gain if they are in the same party \cite{AwerbuchAEMS08,BalcanBM09,ChristodoulouMS12,FabrikantPT04,GourvesM09}. Note that for general graphical games finding an equilibrium is PPAD-hard on trees of constant pathwidth \cite{ElkindGG06}. Because computing a stable solution is generally intractable, approximate equilibria have also been considered \cite{BhalgatCK10,CaragiannisFGS15}. Note that the notion of approximate equilibrium corresponds exactly to the approximation guarantee given by Theorem \ref{thm:algapprox}, but unlike the cited works, Theorem \ref{thm:algapprox} produces a solution that is both approximately stable and as good as the optimal stable cut. The problem we consider in this paper is more closely related to the problem of computing the \emph{worst} (or best) Nash equilibrium, which in turn is closely linked to the notion of Price of Anarchy. For most problems in algorithmic game theory this type of question is NP-hard \cite{abs-1907-10468,ConitzerS08,ElkindGG07,FotakisKKMS09,gilboa1989nash,GrecoS09,SchoenebeckV12} and hard to approximate \cite{AustrinBC13,BravermanKW15,DeligkasFS18,HazanK11,MinderV09}. Even though these results show that finding a Nash equilibrium that maximizes an objective function is NP-hard under various restrictions (e.g.\ graphical games of bounded degree), to the best of our knowledge the complexity of finding the worst equilibrium of the \textsc{Max Cut} game (which corresponds to the \textsc{Min Stable Cut}\ problem of this paper) has not been considered. Finally, another topic that has recently attracted attention in the literature is that of MinMax and MaxMin versions of standard optimization problems, where we search for the worst solution that cannot be improved using a simple local search heuristic. The motivation behind this line of research is to provide bounds and a refined analysis of such basic heuristics. Problems that have been considered under this lens are \textsc{Max Min Dominating Set} \cite{Bazgan2018,abs-2101-07550}, \textsc{Max Min Vertex Cover} \cite{Bonnet2018,Zehavi2017}, \textsc{Max Min Separator} \cite{Hanaka2019}, \textsc{Max Min Cut} \cite{EtoHKK2019}, \textsc{Min Max Knapsack} \cite{ArkinBMS03,Furini2017,GourvesMP13}, \textsc{Max Min Edge Cover} \cite{KhoshkhahGMS20}, and \textsc{Max Min Feedback Vertex Set} \cite{DubloisHGLM20}. Some problems in this area also arise naturally in other forms and have been extensively studied, such as \textsc{Min Max Matching} (also known as \textsc{Edge Dominating Set} \cite{IwaideN16}) and \textsc{Grundy Coloring}, which can be seen as a \textsc{Max Min} version of \textsc{Coloring} \cite{AboulkerB0S20,BelmonteKLMO20}. \section{Definitions -- Preliminaries} We generally use standard graph-theoretic notation and consider edge-weighted graphs, that is, graphs $G=(V,E)$ supplied with a weight function $w:E\to \mathbb{N}$. The weighted degree of a vertex $v\in V$ is $d_w(v)=\sum_{uv\in E} w(uv)$. A cut of a graph is a partition of $V$ into $V_0, V_1$.
A cut is \emph{stable} for vertex $v\in V_i$ if $\sum_{vu\in E\land u\in V_{1-i}} w(vu) \ge \frac{d_w(v)}{2}$, that is, if the total weight of edges incident on $v$ crossing the cut is at least half the weighted degree of $v$. In the \textsc{Min Stable Cut}\ problem we are given an edge-weighted graph and are looking for a cut that is stable for all vertices and that minimizes the sum of weights of cut edges (that is, edges with endpoints on both sides of the cut). In \textsc{Unweighted Min Stable Cut}\ we restrict the problem so that the $w$ function returns $1$ for all edges. When describing stable cuts we will sometimes say that we ``assign'' value $0$ (or $1$) to a vertex; by this we mean that we place this vertex in $V_0$ (or $V_1$ respectively). For the definitions of treewidth, pathwidth, and the related (nice) decompositions we refer to \cite{CyganFKLMPPS15}. We will use as a complexity assumption the Exponential Time Hypothesis (ETH) \cite{ImpagliazzoPZ01}, which states that there exists a constant $c>1$ such that \textsc{3-SAT} with $n$ variables and $m$ clauses cannot be solved in time $c^{n+m}$. In fact, we will use the slightly weaker and simpler form of the ETH which states that \textsc{3-SAT} cannot be solved in time $2^{o(n+m)}$. \section{Weighted Min Stable Cut} In this section we present our results on exact algorithms for (weighted) \textsc{Min Stable Cut}. We begin with some basic NP-hardness reductions in Section \ref{sec:nphard}, which establish that the problem remains (weakly) NP-hard when either the treewidth or the maximum degree is bounded. These set the stage for two algorithms, given in Section \ref{sec:algs}, solving the problem in pseudo-polynomial time for constant treewidth, and in FPT time parameterized by $\mathrm{tw}+\Delta$. In Section \ref{sec:hardeth} we present a more fine-grained hardness argument, based on the ETH, which shows that the dependence on $\mathrm{tw}$ and $\Delta$ of our two algorithms is essentially optimal. \subsection{Basic Hardness Proofs}\label{sec:nphard} \begin{theorem}\label{thm:nphtrees} \textsc{Min Stable Cut}\ is weakly NP-hard on trees of diameter $4$. \end{theorem} \begin{proof} We describe a reduction from \textsc{Partition}. Recall that in this problem we are given $n$ positive integers $x_1,\ldots, x_n$ such that $\sum_{i=1}^n x_i = 2B$ and are asked if there exists $S\subseteq [n]$ such that $\sum_{i\in S} x_i = B$. We construct a star with $n$ leaves and subdivide every edge once. For each $i\in [n]$ we select a distinct leaf of the tree and set the weight of both edges on the path from the center to this leaf to $x_i$. We claim that the graph has a stable cut of weight $3B$ if and only if there is a partition of $x_1,\ldots, x_n$ into two sets with the same sum. For the first direction, suppose $S\subseteq [n]$ is such that $\sum_{i\in S} x_i = B$. For each $i\in S$ we select a degree-two vertex of the tree whose incident edges have weight $x_i$ and assign it value $1$. We assign all other degree-two vertices value $0$ and assign to all leaves the opposite of the value of their neighbor. We give the center value $0$. This partition is stable, as the center has edge weight exactly $B$ towards each side, and all degree-two vertices have a leaf attached that is placed on the other side and contributes half their total incident weight. The total weight cut is $2B$ from edges incident on leaves, plus $B$ from half the weight incident on the center.
For the converse direction, observe that in any stable solution all edges incident on leaves are cut, contributing a weight of $2B$. As a result, in a stable cut of weight $3B$, the weight of cut edges incident on the center is at most $B$. However, this weight is also at least $B$, since the total edge weight incident on the center is $2B$. We conclude that the neighborhood of the center must be perfectly balanced. From this we can infer a solution to the \textsc{Partition} instance. \end{proof} \begin{remark}\label{rem:rem} Theorem \ref{thm:nphtrees} is tight, because \textsc{Min Stable Cut}\ is trivial on trees of diameter at most $3$. \end{remark} \begin{theorem}\label{thm:nphardvc} \textsc{Min Stable Cut}\ is weakly NP-hard on bipartite graphs with vertex cover $2$. \end{theorem} \begin{theorem}\label{thm:npharddegree} \textsc{Unweighted Min Stable Cut}\ is strongly NP-hard and APX-hard on bipartite graphs of maximum degree $6$. \end{theorem} \subsection{Algorithms}\label{sec:algs} \begin{theorem}\label{thm:algpseudo} There is an algorithm which, given an instance of \textsc{Min Stable Cut}\ with $n$ vertices, maximum weight $W$, and a tree decomposition of width $\mathrm{tw}$, finds an optimal solution in time $(\Delta\cdot W)^{O(\mathrm{tw})}n^{O(1)}$. \end{theorem} \begin{theorem}\label{thm:algdelta} There is an algorithm which, given an instance of \textsc{Min Stable Cut}\ with $n$ vertices, maximum weight $W$, maximum degree $\Delta$, and a tree decomposition of width $\mathrm{tw}$, finds an optimal solution in time $2^{O(\Delta\mathrm{tw})}(n+\log W)^{O(1)}$. \end{theorem} \begin{proof} We describe an algorithm which works in a way similar to the standard algorithm for \textsc{Max Cut} parameterized by treewidth, except that we work in a tree decomposition that is essentially a decomposition of the square of $G$. More precisely, before we begin, we do the following: for each $v\in V$ we add to every bag of the decomposition that contains $v$ all the vertices of $N(v)$. It is not hard to see that we now have a decomposition of width at most $(\Delta+1)(\mathrm{tw}+1)$ and also that the new decomposition is still a valid tree decomposition. Crucially, we now also have the following property: for each $v\in V$ there exists at least one bag of the decomposition that contains all of $N[v]$. The algorithm now performs dynamic programming by storing for each bag the value of the best solution for each partition of $B_t$. As a result, the size of the DP table is $2^{O(\Delta\mathrm{tw})}$. The only difference from the standard \textsc{Max Cut} algorithm (beyond the fact that we are looking for a cut of minimum weight) is that when we consider a bag that contains all of $N[v]$, for some $v\in V$, we discard all partitions which are unstable for $v$. Since the bag contains all of $N[v]$, this can be checked in time polynomial in $n$ and $\log W$ (assuming weights are given in binary). \end{proof}
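The preprocessing step in the proof above can be sketched as follows (an illustration under our assumptions; \texttt{bags} maps decomposition nodes to vertex sets and \texttt{adj} maps each vertex to its neighborhood):
\begin{verbatim}
def augment_bags(bags, adj):
    # bags: dict mapping decomposition node -> set of vertices.
    # adj: dict mapping vertex v -> set of its neighbours N(v).
    new_bags = {}
    for t, bag in bags.items():
        closed = set(bag)
        for v in bag:
            closed |= adj[v]   # add N(v) to every bag containing v
        new_bags[t] = closed   # width grows to <= (Delta+1)(tw+1)
    return new_bags

# Path a-b-c with bags {a,b} and {b,c}: afterwards some bag
# contains all of N[v] for every vertex v.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(augment_bags({1: {"a", "b"}, 2: {"b", "c"}}, adj))
\end{verbatim}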
\subsection{Tight ETH-based Hardness}\label{sec:hardeth} We first give a reduction from \textsc{3-Set Splitting} to \textsc{Min Stable Cut}\ whose main properties are laid out in Lemma \ref{lem:hard}. This reduction gives the lower bound of Theorem \ref{thm:eth1}. \begin{lemma}\label{lem:hard} There is a polynomial-time algorithm which, given a \textsc{3-Set Splitting} instance $H=(V,E)$ with $n$ elements, produces a \textsc{Min Stable Cut}\ instance $G$ with the following properties: (i) $G$ is a Yes instance if and only if $H$ is a Yes instance; (ii) if $\Delta$ is the maximum degree of $G$ and $\mathrm{pw}$ its pathwidth, then $\Delta = O(\log n)$ and $\mathrm{pw}=O(n/\log n)$; (iii) the maximum weight of $G$ is $W=O(2^{\Delta})$. \end{lemma} \begin{figure}[h] \begin{tabular}{l|r} \input{red1}& \input{red2} \end{tabular} \caption{Sketch of the construction of Lemma \ref{lem:hard}. On the left, the general architecture: $m$ columns, each with $n$ vertices, partitioned into groups of size $\log n$. On each column we add a checker vertex (on top). Between the same groups of consecutive columns we add propagator vertices. On the right, more details about the exponentially increasing weights of edges incident on propagators.}\label{fig:red1} \end{figure} \begin{proof} Let $H=(V,E)$ be the given \textsc{3-Set Splitting} instance, $V=\{v_0,\ldots, v_{n-1}\}$, and suppose that $E$ contains $e_2$ sets of size $2$ and $e_3$ sets of size $3$, where $|E|=e_2+e_3$ will be denoted by $m$. Assume without loss of generality that $n$ is a power of $2$ (otherwise add some dummy elements to $V$). Let $\delta = \log n$. We construct a graph by first making $m$ copies of $V$, call them $V_j, j\in [m]$, and label their vertices as $V_j = \{ v_{(i,j)}\ |\ i\in \{0,\ldots, n-1\}\}$. Intuitively, the vertices $\{ v_{(i,j)}\ |\ j\in [m]\}$ are all meant to represent the element $v_i$ of $H$. We now add to the graph the following: \begin{enumerate} \item Checkers: Suppose that the $j$-th set of $E$ contains elements $v_{i_1}, v_{i_2}, v_{i_3}$. Then we construct a vertex $c_j$ and connect it to $v_{(i_1,j)}, v_{(i_2,j)}, v_{(i_3,j)}$ with edges of weight $1$. If the $j$-th set has size two, we do the same (ignoring $v_{i_3}$). \item Propagators: For each $j\in [m-1]$ we construct $\rho = \lceil n/\delta \rceil$ vertices labeled $p_{(i,j)}, i\in \{0,\ldots,\rho-1\}$. Each $p_{(i,j)}$ is connected to (at most) $\delta$ vertices of $V_j$ and $\delta$ vertices of $V_{j+1}$ with edges of exponentially increasing weight. Specifically, for $i\in \{ 0,\ldots, \rho-1\}, \ell\in \{0,\ldots,\delta-1\}$, we connect $p_{(i,j)}$ to $v_{(i\delta+\ell,j)}$ and to $v_{(i\delta + \ell,j+1)}$ (if they exist) with an edge of weight $2^{\ell}$. \item Stabilizers: For each $j\in [m], i\in \{0,\ldots,n-1\}$ we attach to $v_{(i,j)}$ a leaf. The edge connecting this leaf to $v_{(i,j)}$ has weight $3\cdot 2^{(i\bmod \delta)}$. \end{enumerate} This completes the construction of the graph. Let $L_w$ be the total weight of edges incident on leaves and $P$ be the total weight of edges incident on Propagator vertices $p_{(i,j)}$. We set $B=L_w+\frac{P}{2}+e_2+2e_3$ and claim that the new instance has a stable cut of weight $B$ if and only if $H$ can be split. For the forward direction, suppose that $H$ can be split by the partition of $V$ into $L, R=V\setminus L$. We assign the following values for our new instance: for each $j\in [m]$ odd, we set $v_{(i,j)}$ to value $0$ if and only if $v_i\in L$; for each $j\in [m]$ even, we set $v_{(i,j)}$ to value $0$ if and only if $v_i\in R$. In other words, we use the same partition for all copies of $V$, but flip the roles of $0,1$ between consecutive copies.
We place leaves on the opposite side from their neighbors and greedily assign values to all other vertices of the graph to obtain a stable partition. Observe that all vertices $v_{(i,j)}$ are stable with the values we assigned, since the edge connecting each such vertex to a leaf has weight at least half its total incident weight. In the partition we have, we observe that (i) all edges incident on leaves are cut (total weight $L_w$); (ii) all Propagator vertices have balanced neighborhoods, so exactly half of their incident weight is cut (total weight $P/2$); (iii) since $L,R$ splits all sets of $E$, each checker vertex will have exactly one neighbor on the same side (total weight $e_2+2e_3$). So, the total weight of the cut is $B$. For the converse direction, suppose we have a stable cut of weight $B$ in the constructed instance. Because of the stability condition, this solution must cut all edges incident on leaves (total weight $L_w$); at least half of the total weight of edges incident on Propagators (total weight $P/2$); and for each checker vertex all its incident edges except at most one (total weight at least $e_2+2e_3$). We conclude that, in order to achieve weight $B$, the cut must properly balance the neighborhood of all Propagators and make sure that each Checker vertex has one neighbor on its own side. We now argue that because the neighborhood of each Propagator is balanced we have for all $i\in\{0,\ldots, n-1\}, j\in [m-1]$ that $v_{(i,j)}, v_{(i,j+1)}$ are on different sides of the partition. To see this, suppose for contradiction that for two such vertices this is not the case and, to ease notation, consider the vertices $v_{(i\delta+\ell,j)}, v_{(i\delta+\ell,j+1)}$, where $0\le \ell\le \delta-1$. Among all such pairs select one that maximizes $\ell$. Both vertices are connected to the Propagator $p_{(i,j)}$ with edges of weight $2^{\ell}$. But now $p_{(i,j)}$ has strictly larger edge weight connecting it to the side of the partition that contains $v_{(i\delta+\ell,j)}$ and $v_{(i\delta+\ell,j+1)}$ than to the other side because (i) for neighbors of $p_{(i,j)}$ connected to it with edges of higher weight, the neighborhood of $p_{(i,j)}$ is balanced by the maximality of $\ell$; (ii) the total weight of all other edges is $2\cdot (2^{\ell-1}+2^{\ell-2}+\ldots+1) < 2\cdot 2^{\ell}$. We thus have that for all $i,j$, $v_{(i,j)}, v_{(i,j+1)}$ must be on different sides, and therefore all $V_j$ are partitioned in the same way (except some have the roles of $0$ and $1$ reversed). From this, we obtain a partition of $V$. To conclude this direction, we argue that this partition of $V$ must split all sets. Indeed, if not, there will be a checker vertex such that all its neighbors are on the same side, which, as we argued, means that the cut must have weight strictly more than $B$. Finally, let us show that the constructed instance has the claimed properties. The maximum degree is $\Delta = 2\delta = O(\log n)$, attained at the Propagator vertices (all other vertices have degree at most $4$); the maximum weight is $O(2^{\delta}) = O(2^{\Delta})$. Let us also consider the pathwidth of the constructed graph. Let $G_j$ be the subgraph induced by $V_j$ and its attached leaves, the Checker $c_j$, and all Propagators adjacent to $V_j$. We claim that we can build a path decomposition of $G_j$ that contains all Propagators adjacent to $V_j$ in all bags and has width $O(n/\log n)$.
Indeed, if we place all the (at most $\lceil 2n/\delta\rceil$) Propagators and $c_j$ in all bags, we can delete them from $G_j$, and all that is left is a union of isolated edges, which has pathwidth $1$. Now, since the union of all $G_j$ covers all vertices and edges, we can construct a path decomposition of the whole graph of width $O(n/\log n)$ by gluing together the decompositions of each $G_j$, that is, by connecting the last bag of the decomposition of $G_j$ to the first bag of the decomposition of $G_{j+1}$. \end{proof} \begin{theorem}\label{thm:eth1}If the ETH is true then (i) there is no algorithm solving \textsc{Min Stable Cut}\ in time $(nW)^{o(\mathrm{pw})}$ (ii) there is no algorithm solving \textsc{Min Stable Cut}\ in time $2^{o(\Delta \mathrm{pw})}(n+\log W)^{O(1)}$. These statements apply even if we restrict the input to instances where weights are written in unary and the maximum degree is $O(\log n)$. \end{theorem} \section{Approximately Stable Cuts} In this section we present an algorithm which runs in FPT time parameterized by treewidth and produces a solution that is $(1+\varepsilon)$-stable and has weight upper bounded by the weight of the optimal stable cut. Before we proceed, we will need to define a more general version of our problem. In \textsc{Extended} \textsc{Min Stable Cut}\ we are given as input: a graph $G=(V,E)$; a cut-weight function $w:E\to\mathbb{N}$; and a stability-weight function $s:E\times V \to \mathbb{N}$. For $v\in V$ we denote $d_s(v)=\sum_{vu\in E} s(vu,v)$, which we call the stability degree of $v$. If we are also given an error parameter $\rho>1$, we will then be looking for a partition of $V$ into $V_0,V_1$ which satisfies the following: (i) each vertex is $\rho$-stable, that is, for each $i\in\{0,1\}$ and $v\in V_i$ we have $\sum_{vu\in E\land u\in V_{1-i}} s(vu,v) \ge \frac{d_s(v)}{2\rho}$ (ii) the total cut weight $\sum_{u\in V_0, v\in V_1, uv\in E} w(uv)$ is minimum. Observe that this extended version of the problem contains \textsc{Min Stable Cut}\ as a special case if $\rho=1$ and for all $uv\in E$ we have $s(uv,v)=s(uv,u)=w(uv)$. The generalization of \textsc{Min Stable Cut}\ is motivated by three considerations. First, the algorithm of Theorem \ref{thm:algpseudo} is inefficient because it has to store exact weight values to satisfy the stability constraints; however, it can efficiently store the total weight of the cut. We therefore decouple the contribution of an edge to the size of the cut (given by $w$) from a contribution of an edge to the stability of its endpoints (given by $s$). Second, our strategy will be to truncate the values of $s$ so that the DP of the algorithm of Theorem \ref{thm:algpseudo} can be run more efficiently. To do this we will first simply divide all stability-weights by an appropriate value. However, a problem we run into if we do this is that the edge $uv$ could simultaneously be one of the heavier edges incident on $u$ and one of the lighter edges incident on $v$, so it is not clear how we can adjust its weight in a way that minimizes the distortion for both endpoints. As a result it is simpler if we allow edges to contribute different amounts to the stability of their endpoints. In this sense, $s(uv,u)$ is the amount that the edge $uv$ contributes to the stability of $u$ if the edge is cut. 
Observe that with the new definition, if we set a new stability-weight function for a specific vertex $u$ as $s'(uv,u) = c\cdot s(uv,u)$ for all $v\in N(u)$, that is, if we multiply the stability-weights of all edges incident on $u$ by a constant $c$ and leave all other values unchanged, we obtain an equivalent instance, and this does not affect the stability of other vertices. Finally, the parameter $\rho$ allows us to consider solutions where a vertex is stable if its cut incident edges are at least a $(\frac{1}{2\rho})$-fraction of its stability degree. Armed with this intuition we can now explain our approach to obtaining our FPT approximation algorithm. Given an instance of the extended problem, we first adjust the $s$ function so that its maximum value is bounded by a polynomial in $n$. We achieve this by dividing $s(uv,u)$ by a value that depends only on $d_s(u)$ and $n$. This allows us to guarantee that near-stable solutions are preserved. Then, given an instance where the maximum value of $s$ is polynomially bounded, we apply the technique of \cite{Lampis14}, using the algorithm of Theorem \ref{thm:algpseudo} as a base, to obtain our approximation. We give these separate steps in the lemmas below. \begin{lemma}\label{lem:algapprox1} There is an algorithm which, given a graph $G=(V,E)$ on $n$ vertices and a stability-weight function $s:E\times V\to\mathbb{N}$ with maximum value $S$, runs in time polynomial in $n+\log S$ and produces a stability-weight function $s':E\times V\to\mathbb{N}$ with the following properties: (i) the maximum value of $s'$ is $O(n^2)$; (ii) for all partitions of $V$ into $V_0,V_1$, $i\in\{0,1\}$, $v\in V_i$ we have $$\left(\frac{\sum_{vu\in E, u\in V_{1-i}} s(vu,v)}{d_s(v)}\right) \Big/ \left(\frac{\sum_{vu\in E, u\in V_{1-i}} s'(vu,v)}{d_{s'}(v)}\right) \in [1-1/n,1+1/n]. $$ \end{lemma} Using Lemma \ref{lem:algapprox1} we can assume that all stability-weights are bounded by $O(n^2)$. The most important point is that Lemma \ref{lem:algapprox1} guarantees that almost-optimal solutions are preserved in both directions, as for any cut and for each vertex the ratio of the stability-weight going to the other side over the total stability-degree of the vertex does not change by more than a factor $(1+\frac{1}{n})$. Let us now see the second ingredient of our algorithm. \begin{lemma}\label{lem:algapprox2} There is an algorithm which takes as input a graph $G=(V,E)$, a cut-weight function $w:E\to\mathbb{N}$ with maximum $W$, a stability-weight function $s:E\times V\to \mathbb{N}$ with maximum $S$, a tree decomposition of $G$ of width $\mathrm{tw}$, and an error parameter $\varepsilon>0$, and returns a $(1+2\varepsilon)$-stable solution that has cut-weight at most equal to that of the minimum $(1+\varepsilon)$-stable solution. If $S=O(n^2)$, then the algorithm runs in time $(\mathrm{tw}/\varepsilon)^{O(\mathrm{tw})}(n+\log W)^{O(1)}$. \end{lemma} \begin{proof} We use the methodology of \cite{Lampis14}. Before we proceed, let us explain that we are actually aiming for an algorithm with running time roughly $(\log n/\varepsilon)^{O(\mathrm{tw})}$. This type of running time implies the time stated in the lemma using a standard Win/Win argument: if $\mathrm{tw}\le \sqrt{\log n}$ then $(\log n)^{O(\mathrm{tw})}$ is $n^{o(1)}$, so the $(\log n) ^{O(\mathrm{tw})}$ factor is absorbed in the $n^{O(1)}$ factor; while if $\log n\le \mathrm{tw}^2$, then an algorithm running in $(\log n)^{\mathrm{tw}}$ actually runs in time $(\mathrm{tw})^{O(\mathrm{tw})}$.
To be more precise, if the given tree decomposition has height $H$, then we will formulate an algorithm with running time $(H\log S/\varepsilon)^{O(\mathrm{tw})}(n+\log W)^{O(1)}$. This running time achieves parameter dependence $(\log n/\varepsilon)^{O(\mathrm{tw})}$ if we use the fact that $S=O(n^2)$ and a theorem due to \cite{BodlaenderH98} which proves that any tree decomposition can be edited (in polynomial time) so that its height becomes $O(\log n)$, without increasing its width by more than a constant factor. The basis of our algorithm will be the algorithm of Theorem \ref{thm:algpseudo}, appropriately adjusted to the extended version of the problem. Let us first sketch the modifications to the algorithm of Theorem \ref{thm:algpseudo} that we would need to do to solve this more general problem, since the details are straightforward. First, we observe that in solution signatures we would now take into account stability-weights, and signatures would have values going up to $S$. Second, in Forget nodes, since we are happy with a $(1+\varepsilon)$-stable solution, we would only discard solutions which violate this constraint. With these modifications, we can run this exact algorithm to return the minimum $(1+\varepsilon)$-stable solution in time $(2S)^{O(\mathrm{tw})}(n+\log W+\log(1/\varepsilon))^{O(1)}$. The idea is to modify this algorithm so that the DP tables go from size $(2S)^{\mathrm{tw}}$ to roughly $(H\log S)^{\mathrm{tw}}$. To do this, we define a parameter $\delta = \frac{\varepsilon}{5H}$. We intend to replace every value $x$ that would be stored in the signature of a solution in the DP table, with the next larger integer power of $(1+\delta)$, that is, to construct a DP table where $x$ is replaced by $(1+\delta)^{\lceil \log_{(1+\delta)} x \rceil}$. More precisely, the invariant we maintain is the following. Consider a node $t$ of the decomposition at height $h$, where $h=0$ corresponds to leaves. We maintain a collection of solution signatures such that: (i) each signature contains a partition of $B_t$ and for each $v\in B_t$ an integer that is upper-bounded by $\lceil\log_{(1+\delta)} d_s(v)\rceil$; (ii) Soundness: for each stored signature there exists a partition of $B^{\downarrow}_t$ which approximately corresponds to it. Specifically, the partition and the signature agree exactly on the assignment of $B_t$ and the total cut-weight; the partition is $(1+2\varepsilon)$-stable for all vertices of $B^{\downarrow}_t\setminus B_t$; and for each $v\in B_t$, if the signature stores the value $x(v)$ for $v$, that is, it states that $v$ has approximate stability-weight $(1+\delta)^{x(v)}$ towards its own side in $B^{\downarrow}_t\setminus B_t$, then in the actual partition the stability-weight of $v$ to its own side of $B^{\downarrow}_t\setminus B_t$ is at most $(1+\delta)^h(1+\delta)^{x(v)}$. (iii) Completeness: conversely, for each partition of $B^{\downarrow}_t$ that is $(1+\varepsilon)$-stable for all vertices of $B^{\downarrow}_t\setminus B_t$ there exists a signature that approximately corresponds to it. Specifically, the partition and signature agree on the assignment of $B_t$ and the total cut-weight; and for each $v\in B_t$, if the stability-weight of $v$ towards its side of the partition of $B^{\downarrow}_t\setminus B_t$ is $y(v)$, and the signature stores the value $x(v)$, then $(1+\delta)^{x(v)}\le (1+\delta)^h y(v)$. 
In simpler terms, the signatures in our DP table store values $x(v)$ such that we estimate that in the corresponding solution $v$ has approximately $(1+\delta)^{x(v)}$ weight towards its own side in $B_t^{\downarrow}$, that is, we estimate that the DP of the exact algorithm would store approximately the value $(1+\delta)^{x(v)}$ for this solution. Of course, it is hard to maintain this relation exactly, so we are satisfied if, for a node at height $h$, the ``true'' value which we are approximating is at most a factor of $(1+\delta)^h$ off from our approximation. Now, the crucial observation is that the approximate DP tables can be maintained because our invariant allows the error to increase with the height. For example, suppose that $t$ is a Forget node at height $h$ and let $u\in B_t$ be a neighbor of the vertex $v$ we forget. The exact algorithm would construct the signature of a solution in $t$ by looking at the signature of a solution in its child node, and then adding to the value stored for $u$ the weight $s(vu,u)$ (if $u,v$ are on the same side). Our algorithm will take an approximate signature from the child node, whose stored value may be off by a factor of at most $(1+\delta)^{h-1}$ from the correct value, add $s(vu,u)$ to it, and then, perhaps, round the result up to an integer power of $(1+\delta)$. The new approximation will be at most a factor of $(1+\delta)^h$ larger than the value that the exact algorithm would have calculated. A similar argument applies to Join nodes. Furthermore, in Forget nodes we will only discard a solution if, according to our approximation, it is not $(1+2\varepsilon)$-stable. We may be over-estimating the stability-weight a vertex has to its own side of the cut by a factor of at most $(1+\delta)^h \le (1+\frac{\varepsilon}{5H})^H \le 1+\frac{\varepsilon}{2}$, so if for a signature our approximation says that the solution is not $(1+2\varepsilon)$-stable, the solution cannot be $(1+\varepsilon)$-stable, because $(1+\varepsilon)(1+\frac{\varepsilon}{2})<1+2\varepsilon$ (for sufficiently small $\varepsilon$). Finally, to estimate the running time, the maximum value we have to store for each vertex in a bag is $\log_{(1+\delta)} S = \frac{\log S}{\log (1+\delta)} \le O(\frac{\log n}{\delta}) = O(\frac{H\log n}{\varepsilon})$. Using the fact that $H=O(\log n)$ we get that the size of the DP table is $(\log n/\varepsilon)^{O(\mathrm{tw})}$. \end{proof} \begin{theorem}\label{thm:algapprox} There is an algorithm which, given an instance of \textsc{Min Stable Cut}\ $G=(V,E)$ with $n$ vertices, maximum weight $W$, a tree decomposition of width $\mathrm{tw}$, and a desired error $\varepsilon>0$, runs in time $(\mathrm{tw}/\varepsilon)^{O(\mathrm{tw})}(n+\log W)^{O(1)}$ and returns a cut with the following properties: (i) for all $v\in V$, the total weight of edges incident on $v$ crossing the cut is at least $(1-\varepsilon)\frac{d_w(v)}{2}$; (ii) the cut has total weight at most equal to the weight of the minimum stable cut. \end{theorem} \section{Unweighted Min Stable Cut} In this section we consider \textsc{Unweighted Min Stable Cut}. We first observe that applying Theorem \ref{thm:algpseudo} gives a parameter dependence of $\Delta^{O(\mathrm{tw})}$, since $W=1$. We then show that this algorithm is essentially optimal, as the problem cannot be solved in time $n^{o(\mathrm{pw})}$ under the ETH. 
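For concreteness, the following minimal Python sketch (our own illustration; the adjacency-list representation is hypothetical) checks the defining property of a stable cut in the unweighted setting: no vertex can increase the number of cut edges by unilaterally switching sides, which is equivalent to every vertex having at least half of its incident edges cut.

\begin{verbatim}
def is_stable_cut(adj, side):
    """adj: dict vertex -> list of neighbours; side: dict vertex -> 0/1."""
    for v, neighbours in adj.items():
        cut = sum(1 for u in neighbours if side[u] != side[v])
        # v gains by switching sides iff cut < deg(v) - cut.
        if 2 * cut < len(neighbours):
            return False
    return True

# A 4-cycle with alternating sides: every edge is cut, so the cut is stable.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
side = {0: 0, 1: 1, 2: 0, 3: 1}
assert is_stable_cut(adj, side)
\end{verbatim}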
\begin{corollary}\label{cor:algdelta2} There is an algorithm which, given an instance of \textsc{Unweighted Min Stable Cut}\ with $n$ vertices, maximum degree $\Delta$, and a tree decomposition of width $\mathrm{tw}$, returns an optimal solution in time $\Delta^{O(\mathrm{tw})} n^{O(1)}$. \end{corollary} \begin{wrapfigure}{R}{0.3\textwidth} \input{red3} \caption{Checker gadget for Theorem \ref{thm:hard2}. On the right, two Selector gadgets. This Checker verifies that we have not selected both endpoints of an edge whose endpoint indices are $(2,3)$, hence $t^1,t^3$ are connected to the first $2$ and $3$ vertices of the Selectors.}\label{fig:red2} \end{wrapfigure} We first state our hardness result, then describe the construction of our reduction, and finally go through a series of lemmas that establish its correctness. \begin{theorem}\label{thm:hard2} If the ETH is true, then no algorithm can solve \textsc{Unweighted Min Stable Cut}\ on graphs with $n$ vertices in time $n^{o(\mathrm{pw})}$. Furthermore, \textsc{Unweighted Min Stable Cut}\ is W[1]-hard parameterized by pathwidth. \end{theorem} To prove Theorem \ref{thm:hard2} we will describe a reduction from $k$-\textsc{Multi-Colored Independent Set}, a well-known W[1]-hard problem that cannot be solved in $n^{o(k)}$ time under the ETH \cite{CyganFKLMPPS15}. In this problem we are given a graph $G=(V,E)$ with $V$ partitioned into $k$ color classes $V_1,\ldots, V_k$, each of size $n$, and we are asked to find an independent set of size $k$ which selects one vertex from each $V_i$. In the remainder we use $m$ to denote the number of edges of $E$ and assume that the vertices of $V$ are labeled $v_{(i,j)}, i\in [k], j\in [n]$, where $V_i = \{ v_{(i,j)}\ |\ j\in [n]\}$. Before we proceed, let us give some intuition. Our reduction will rely on a $k\times m$ grid-like construction, where each row represents the selection of a vertex in the corresponding color class of $G$ and each column represents an edge of $G$. The main ingredients will be a Selector gadget, which will represent the choice of an index in $[n]$; a Propagator gadget, which will make sure that the choice we make in each row stays consistent throughout; and a Checker gadget, which will verify that we did not select the two endpoints of any edge. Each Selector gadget will contain a path on (roughly) $n$ vertices such that any reasonable stable cut will have to cut exactly one edge of the path. The choice of where to cut this path will represent an index in $[n]$ encoding a vertex of $G$. In our construction we will also make use of a simple but important gadget which we will call a ``heavy'' edge. Let $A=n^5$. When we say that we connect $u,v$ with a heavy edge, we mean that we construct $A$ new vertices and connect them to both $u$ and $v$. The intuitive idea behind this gadget is that the large number of degree-two vertices will force $u$ and $v$ to be on different sides of the partition (otherwise too many edges will be cut). We will also sometimes attach leaves to some vertices with the intention of making it easier for these vertices to achieve stability (as their attached leaves will always be on the other side of the partition). Let us now describe our construction step-by-step. \begin{enumerate} \item Construct two ``palette'' vertices $p_0, p_1$ and a heavy edge connecting them. Note that all heavy edges we will add will be incident on at least one palette vertex. 
\item For each $i\in [k], j\in [m]$ construct the following Selector gadget: \begin{enumerate} \item Construct a path on $n+1$ vertices $P_{(i,j)}$ and label its vertices $P_{(i,j)}^1, \ldots, P_{(i,j)}^{n+1}$. \item If $j$ is odd, then add a heavy edge from $P_{(i,j)}^1$ to $p_1$ and a heavy edge from $P_{(i,j)}^{n+1}$ to $p_0$. If $j$ is even, then add a heavy edge from $P_{(i,j)}^1$ to $p_0$ and a heavy edge from $P_{(i,j)}^{n+1}$ to $p_1$. \item Attach $5$ leaves to each $P_{(i,j)}^{\ell}$ for $\ell\in \{2,\ldots,n\}$. Attach $A+5$ leaves to $P_{(i,j)}^1$ and $P_{(i,j)}^{n+1}$. \end{enumerate} \item For each $i\in [k], j\in [m-1]$ construct a new vertex connected to all vertices of the paths $P_{(i,j)}$ and $P_{(i,j+1)}$. This vertex is the Propagator gadget. \item For each $j\in [m]$ consider the $j$-th edge of the original instance and suppose it connects $v_{(i_1,j_1)}$ to $v_{(i_2,j_2)}$. We construct the following Checker gadget (see Figure \ref{fig:red2}): \begin{enumerate} \item We construct four vertices $t_j^1, t_j^2, t_j^3, t_j^4$. These are connected to existing vertices as follows: $t_j^1$ is connected to $\{P_{(i_1,j)}^1, \ldots, P_{(i_1,j)}^{j_1}\}$ (that is, the first $j_1$ vertices of the path $P_{(i_1,j)}$); $t_j^2$ is connected to $\{P_{(i_1,j)}^{j_1+1}, \ldots, P_{(i_1,j)}^{n+1}\}$ (that is, the remaining $n+1-j_1$ vertices of $P_{(i_1,j)}$); similarly, $t_j^3$ is connected to $\{P_{(i_2,j)}^1, \ldots, P_{(i_2,j)}^{j_2}\}$; and finally $t_j^4$ is connected to $\{P_{(i_2,j)}^{j_2+1}, \ldots, P_{(i_2,j)}^{n+1}\}$. \item We construct four independent sets $T_j^1, T_j^2, T_j^3, T_j^4$ with respective sizes $j_1, n+1-j_1, j_2, n+1-j_2$. We connect $t_j^1$ to all vertices of $T_j^1$, $t_j^2$ to $T_j^2$, $t_j^3$ to $T_j^3$, and $t_j^4$ to $T_j^4$. We attach two leaves to each vertex of $T_j^1\cup T_j^2\cup T_j^3\cup T_j^4$. \item We construct three vertices $a_j, b_j, c_j$. We connect $c_j$ to both $a_j$ and $b_j$. We connect $a_j$ to an arbitrary vertex of $T_j^1$ and an arbitrary vertex of $T_j^3$. We connect $b_j$ to an arbitrary vertex of $T_j^2$ and an arbitrary vertex of $T_j^4$. \end{enumerate} \end{enumerate} Let $L_1$ be the number of leaves of the construction we described above and $L_2$ be the number of degree-two vertices which are part of heavy edges. We set $B=L_1 + L_2 + km + k(m-1)(n+1) + m(2n+6)$. \begin{lemma}\label{lem:red2a} If $G$ has a multi-colored independent set of size $k$, then the constructed instance has a stable cut of size at most $B$. \end{lemma} \begin{lemma}\label{lem:red2b} If the constructed instance has a stable cut of size at most $B$, then $G$ has a multi-colored independent set of size $k$. \end{lemma} \begin{lemma}\label{lem:pw} The constructed graph has pathwidth $O(k)$. \end{lemma} \section{Conclusions} Our results paint a clear picture of the complexity of \textsc{Min Stable Cut}\ with respect to $\mathrm{tw}$ and $\Delta$. As a direction for further work, one could consider stronger notions of stability, such as demanding that switching any set of $k$ vertices cannot increase the cut, for constant $k$. We conjecture that, since the structure of this problem has the form $\exists \forall_k$, its complexity with respect to treewidth will turn out to be double-exponential in $k$ \cite{LampisM17}. Another direction is to consider \emph{hedonic games}, where vertices self-partition into an unbounded number of groups. 
The complexity of finding a stable solution in such games parameterized by $\mathrm{tw}+\Delta$ has already been considered by Peters \cite{Peters16a}, whose algorithm runs in time exponential in $\Delta^5\mathrm{tw}$. Can we bridge the gap between this complexity and the $2^{O(\Delta \mathrm{tw})}$ complexity of \textsc{Min Stable Cut}?
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} \IEEEPARstart{M}{odern} networked control systems (NCSs) integrate communication networks and smart sensors with physical plants and digital controllers (including digital filters) in an effort to achieve better efficiency and productivity than traditional control systems \cite{NCS1,NCS2,NCS3}. However, the wide introduction and usage of communication networks (especially wireless networks \cite{SAT}) raises the following two essential problems. (1) \emph{NCS security.} Compared with self-enclosed physical plants and digital controllers, the cyber space of networks is open to the real world, which makes networks easy for malicious users to access. Recent years have witnessed serious cyber threats to NCSs, such as the attack on U.S. fuel pipelines \cite{USC} and the Stuxnet worm \cite{STU} targeting Iranian nuclear facilities. (2) \emph{Energy scarcity.} When battery-powered distributed sensors measuring plant performance communicate with digital controllers via a wireless network, the energy supply of the sensors is limited, and frequent battery replacement is usually unrealistic. It is widely known \cite{EHS} that, compared with data sampling, data transmission for sensors equipped with radio modules usually consumes more energy; \emph{e.g.}, the power consumption of temperature sampling for the STCN75 sensor produced by STM is \emph{0.4 mW}, while the power consumption of data transmission for the CC2420 radio module produced by Texas Instruments is \emph{35 mW (at 0 dBm)}. In this context, it is not surprising that the critical questions of NCS security and energy scarcity have attracted wide attention. NCS security can be greatly strengthened by addressing the problems of attack modelling and detection. A popular class of attacks is deception attacks, a major type of which is the replay attack \cite{STU}. A generalization of replay attacks, termed generalized replay attacks (GRAs) \cite{GRA,GRA2}, is simple to implement and has been applied in real-world attacks. It has been found \cite{OLA,DWM1} that both replay attacks and GRAs can bypass passive detection methods (\emph{e.g.}, the $\chi^2$ test) while destroying the stability of systems with unstable open-loop dynamics. To improve the attack detection capability of passive detection methods, active detection methods have been widely investigated. Dynamic watermarking is one particular active detection method, which mainly includes the conventional dynamic watermarking (CDW) scheme \cite{DWM1,DWM2,DWL} and the new dynamic watermarking (NDW) scheme \cite{NCS3,SCN}. In the CDW scheme, the control or actuator signals are encrypted by injecting a watermarking signal, generally with a Gaussian distribution. In the NDW scheme, the before-transmission system outputs are encrypted by injecting a watermarking signal, and the after-transmission system outputs are then decrypted with the same watermarking signal. In both schemes, hypothesis tests (\emph{e.g.}, the $\chi^2$ test \cite{DWM1}, KL-divergence-based tests \cite{NCS3}, or consistent tests \cite{DWM2,DWL,SCN}) are used to detect cyber attacks. Generally, the consistent tests are first designed to be asymptotic (\emph{i.e.}, with infinite window sizes) and are then transformed into statistical tests (\emph{e.g.}, with finite window sizes \cite{GRA,DWM4} or finite samples \cite{FSDW}) for practical use in the real world. Existing dynamic watermarking schemes have the following two features. 
(1) Under no attack, nonzero system performance loss is introduced by the watermarking signals in the CDW scheme, while the NDW scheme guarantees zero system performance loss. (2) For linear systems with \emph{time-triggered communication} (TTC, \emph{i.e.}, periodic sampling and data transmission), the replay-attack detection capability of the CDW scheme increases with increasing watermarking intensity \cite{DWM1}; the detection of attacks is guaranteed by the asymptotic CDW \cite{DWM2} and NDW schemes \cite{SCN}; finite failure on GRA detection is guaranteed by the finite sample CDW scheme \cite{FSDW}; and the detection of GRAs is guaranteed by the asymptotic time-varying CDW scheme \cite{GRA}. In a word, existing dynamic watermarking schemes have focused only on TTC. While TTC serves the stable operation and superior performance of systems well, its use may be limited by energy scarcity. Energy scarcity compels the extensive use of energy-recharging solutions \cite{EES} (\emph{e.g.}, energy harvesting and transferring) or energy-saving solutions, where the energy-saving solutions are indispensable when there are no other resources (\emph{e.g.}, wind and solar power) for recharging energy. \emph{Event-triggered communication} (ETC) \cite{NCS3,ETC1,ETC2} is one popular energy-saving solution, where sampled data are sent through the network immediately when a predefined event-triggered condition is violated at a certain time instant. In particular, if the state of the physical plant is only partially available, event-triggered state estimation (ETSE) \cite{ETSE1,ETSE2}, developed from the standard Kalman filter, is commonly used to obtain the state estimate. The primary advantage of ETC over TTC is that it saves energy while maintaining comparable system performance \cite{NCS3}. However, while ETC provides the above advantages, existing dynamic watermarking schemes may not always be applicable to ETC. Motivated by the above observations, the following challenges will be addressed: \begin{enumerate} \item[(1)] What are the limitations of the CDW scheme for ETSE-based NCSs? \item[(2)] How can a new dynamic watermarking scheme be designed for ETSE-based NCSs? What is the security property of such a scheme? \item[(3)] For the new dynamic watermarking scheme, how can new statistical tests be designed that are usable for real-world ETSE-based NCSs? \end{enumerate} \begin{table}[!t] \centering \caption{Comparison between the Existing Dynamic Watermarking Schemes and Our Proposed Scheme.} \label{Tabsum} \begin{tabular}{llcll} \toprule Refs.$^1$ & DWF$^2$ & ECS$^3$/ESO$^4$/SPL$^5$ & LSF$^6$ & TCM$^7$ \\ \midrule \cite{GRA} & Asym.$^8$ \& FWS$^9$ & \CheckmarkBold/\XSolidBrush/\CheckmarkBold & LTV$^{10}$ & TTC$^{11}$ \\ \cite{DWM1} & FWS & \CheckmarkBold/\XSolidBrush/\CheckmarkBold & LTI$^{12}$ & TTC \\ \cite{DWM2} & Asym. & \CheckmarkBold/\XSolidBrush/\CheckmarkBold & LTI & TTC \\ \cite{DWL} & Asym. \& FWS & \CheckmarkBold/\XSolidBrush/\CheckmarkBold & LTI & TTC \\ \cite{DWM4} & FWS & \CheckmarkBold/\XSolidBrush/\CheckmarkBold & LTI & TTC \\ \cite{FSDW} & Asym. \& FIS$^{13}$ & \CheckmarkBold/\XSolidBrush/\CheckmarkBold & LTI & TTC \\ \cite{NCS3} & FWS & \XSolidBrush/\CheckmarkBold/\XSolidBrush & LTI & TTC \\ \cite{SCN} & Asym. \& FWS & \XSolidBrush/\CheckmarkBold/\XSolidBrush & LTI & TTC \\ $\dag^{14}$ & Asym. \& FIS & \XSolidBrush/\CheckmarkBold/\XSolidBrush & LTI & \textbf{ETC}$^{15}$ \\ \bottomrule \multicolumn{5}{l}{$^1$References. $^2$Dynamic watermarking form. 
}\\ \multicolumn{5}{l}{$^3$Encryption of control signal. $^4$Encryption of system outputs.}\\ \multicolumn{5}{l}{$^5$System performance loss. $^6$Linear system form.}\\ \multicolumn{5}{l}{$^7$Triggering communication mode. $^8$Asymptotic. $^9$Finite window size.}\\ \multicolumn{5}{l}{$^{10}$Linear time-varying. $^{11}$Time-triggered communication.}\\ \multicolumn{5}{l}{$^{12}$Linear time-invariant. $^{13}$Finite sample.}\\ \multicolumn{5}{l}{$^{14}$This paper. $^{15}$Event-triggered communication.}\\ \end{tabular} \end{table} To deal with these challenges, this paper extends the CDW scheme into a new event-triggered dynamic watermarking (ETDW) scheme. A comparative analysis against existing results in the literature is listed in Table~\ref{Tabsum}. It can be clearly seen that existing results have focused only on dynamic watermarking for TTC, whereas the proposed ETDW scheme addresses dynamic watermarking for ETC. The main contributions of this paper are summarized as follows: \begin{enumerate} \item[(1)] The limitations of the CDW scheme for ETSE-based NCSs are revealed, namely, system performance loss from watermarking signals and event-triggered covariance of the signals used for attack detection; \item[(2)] A new ETDW scheme is designed by treating watermarking as symmetric key encryption, based on the limit convergence theorem in probability. Furthermore, the security property of this scheme is proven: the asymptotic power of undetected GRAs is guaranteed to be no more than the power of attack-free residuals. \item[(3)] Two new finite sample ETDW tests are designed using matrix concentration inequalities. Furthermore, the attack detection capability of these tests is proven, with finitely many false alarms under no attack and finitely many failures in GRA detection. \end{enumerate} The rest of this paper is organized as follows. Section~\uppercase\expandafter{\romannumeral+2} presents the problem formulation, where ETSE-based systems with the CDW scheme under GRAs and the limitations of the CDW scheme are analysed. Section~\uppercase\expandafter{\romannumeral+3} presents the new ETDW scheme, covering the design, security property analysis, and performance analysis of our proposed approach. Experimental results for a networked inverted pendulum system are given in Section~\uppercase\expandafter{\romannumeral+4}, followed by the conclusion in Section~\uppercase\expandafter{\romannumeral+5}. \emph{Notation:} The Euclidean norm of a vector $x$ is denoted as $\|{x}\|$. The spectral norm and spectral radius of a matrix $X$ are denoted as $\|{X}\|$ and $\rho(X)$, respectively. A multivariate Gaussian distribution with mean $\mu$ and covariance $\mathcal{E}$ is denoted $\mathcal{N}(\mu,\mathcal{E})$. The expectation of a random variable $a$ conditional on the variable $b$ is denoted $\mathds{E}[a|b]$. Given two events $E_1$ and $E_2$, the probability of $E_1$ conditional on $E_2$ is denoted $\mathds{P}(E_1|E_2)$, and the inverse event of $E_1$ is denoted $\neg {E_1}$. Table~A.\uppercase\expandafter{\romannumeral+2} of Section~\uppercase\expandafter{\romannumeral+1} in the supplementary materials summarizes the notation most frequently used throughout the rest of the paper. 
\section{Problem Formulation} \subsection{ETSE-Based NCSs with CDW Scheme} \begin{figure}[!t] \centering \includegraphics[width=0.485\textwidth]{FIG1_TII-21-4490.eps} \caption{Framework of ETSE-based NCSs with CDW scheme.} \label{fig1} \end{figure} The general framework of ETSE-based NCSs with the CDW scheme is shown in Fig.~\ref{fig1}. The system output $y(k)$ is first measured by the sensor periodically. After having received $y(k)$, the trigger generates a binary triggering signal $\gamma(k)$ according to the preset triggering condition and then decides accordingly whether or not $y(k)$ is sent to the network; as a consequence, $y(k)$ becomes $\bar y(k)$. Then, $\bar y(k)$ and $\gamma(k)$ are transmitted to the event-triggered estimator and the CDW detector via the network, which may be attacked, so that $\bar y(k)$ becomes $\bar y_a(k)$. Using the received $\bar y_a(k)$, $\gamma(k)$ and the control input $u(k-1)$, the event-triggered estimator calculates the \emph{a priori} state estimate $\hat x(k|k-1)$, the \emph{a posteriori} state estimate $\hat x(k|k)$, the prediction error covariance $P_{k|k-1}$ and the upper bound $\bar P_{k|k}$ of the estimation error covariance. Furthermore, using $\hat x(k|k-1)$, $P_{k|k-1}$, $\bar P_{k|k}$ and the watermarking signal $d(k-k'-1)$ generated by GenCDW, the event-triggered CDW detector judges whether or not an attack has taken place; if so, the alarm sounds, otherwise the alarm remains silent. Furthermore, using $\hat x(k|k)$, the controller calculates $u_c(k)$, which is encrypted by $d(k)$ and becomes $u(k)$. Finally, the actuator applies $u(k)$ to stabilize the plant. \begin{remark} It is not necessary to transmit $\gamma(k)$ in the real world. The event-triggered estimator and the CDW detector can judge whether or not a data packet has been received, \emph{e.g.}, via the mechanism of TCP-ACK transmission \cite{OPU}. \end{remark} To analyse the limitations of the CDW scheme for ETSE-based NCSs, we consider a discrete-time linear time-invariant plant \begin{align} x(k + 1) &= Ax(k) + B u(k) + w(k), \hfill \label{eq21}\\ y(k) &= C x(k) + v(k) \hfill \label{eq22} \end{align} where $x(k) \in \mathbb{R}^{n_x}$, $u(k) \in \mathbb{R}^{n_u}$ and $y(k) \in \mathbb{R}^{n_y}$ are the system state, control input and system output at the $k$-th sampling instant, respectively; the process noise $w(k)$ and measurement noise $v(k)$ are mutually independent and distributed as $w(k) \sim \mathcal{N}(0,\mathcal{E}_w)$, $v(k) \sim \mathcal{N}(0,\mathcal{E}_v)$; and $A$, $B$ and $C$ are constant matrices with appropriate dimensions. \subsubsection{ETC with Send-on-Delta Condition} The send-on-delta condition for ETC is used to decide when to send $y(k)$. To formulate the send-on-delta condition, a triggering signal $\gamma(k)$ is defined by \be \gamma (k) := \left\{ \begin{gathered} 1,\ \text{if}\ {\epsilon ^T}(k)\epsilon (k) > \delta \hfill \\ 0,\ \text{otherwise} \hfill \\ \end{gathered} \right. \label{eq23} \ee where $\epsilon (k) := y(k) - y(\tau_k)$ is the difference between $y(k)$ and the previously transmitted measurement $y(\tau_{k})$ (with $\tau_{k} < k$), and $\delta >0$ is the user-defined event-triggered threshold. Note that $y(k)$ is sent via the network if and only if $\gamma(k)=1$. Consequently, the network's input becomes \be \bar y(k) = y(k) - \left( {1 - \gamma (k)} \right)\epsilon (k). \label{eq24} \ee \subsubsection{The GRAs} $\bar y(k)$ could be compromised by cyber attacks from the network. Note that $\bar y_a(k) = \bar y(k)$ if and only if there is no attack. 
When there is a persistent event-triggered version of GRAs \cite{GRA}, $\bar y_a(k)$ can be formulated as \begin{align} {{\bar y}_a}(k) &= \bar y(k) + a(k), \hfill \label{eq25}\\ a(k) &= \gamma (k)\left( {s\bar y(k) + C{x_a}(k) + v_a(k)}\right), \hfill \label{eq26}\\ {x_a}(k+1) &= {A_a}{x_a}(k) \hfill \label{eq27} \end{align} where $a(k)$ is the false data with respect to $\gamma(k)$, generated by the hidden Markov model (\ref{eq26}) and (\ref{eq27}); $s \in \mathbb{R}$ is called the attack scaling factor; $x_a(k) \in \mathbb{R}^{n_x}$ is the hidden state of the GRAs; the noise ${v_a}(k) \sim \mathcal{N}(0,\mathcal{E} _{v_a})$ is mutually independent of $v(k)$ and $w(k)$; and $A_a$ is Schur stable, \emph{i.e.}, $\rho(A_a)<1$. To further quantify the persistent additional distortion from GRAs, we consider the following Definition~\ref{D1}. \begin{definition} \label{D1} The \emph{asymptotic attack power} is defined as, cf. \cite{DWL}, \be \mathop {{\text{\rm as-lim}}}\limits_{i \to \infty } \frac{1}{i}\sum\nolimits_{k = 1}^{i} {{a^T}(k)a(k)} \label{eq28} \ee where $\text{\rm as-lim}$ represents the almost sure limit, or, cf. \cite{GRA}, \be \mathop {{\text{\rm p-lim}}}\limits_{i \to \infty } \frac{1}{i}\sum\nolimits_{k = 1}^{i} {{a^T}(k)a(k)} \label{eq29} \ee where $\text{\rm p-lim}$ represents the limit in probability. \end{definition} \subsubsection{ETSE with Send-on-Delta Condition} Using $\bar y_a(k)$, $\gamma(k)$ and $u(k-1)$, an event-triggered estimator can be designed to estimate the system states for calculating the control input, \emph{i.e.}, \begin{align} \hat x(k|k - 1) &= A\hat x(k - 1|k - 1) + Bu(k - 1), \hfill \label{eq210}\\ \hat x(k|k) &= \hat x(k|k - 1) + L(k,\gamma,\delta)r(k) \hfill \label{eq211} \end{align} where $r(k):={\bar y_a(k) - C\hat x(k|k - 1)}$ is the residual (or innovation), and $L(k,\gamma,\delta)$ is the event-triggered estimator gain, which can be designed according to Theorem~A.1 of Section~II.A in the supplementary materials. \subsubsection{Controller and CDW Scheme} Using $\hat x(k|k)$ generated by the above event-triggered estimator, the controller output can be calculated by minimizing the linear quadratic Gaussian performance $J$ \cite{DWM1}, and the optimal solution is \be {u_c}(k) = K\hat x(k|k) \label{eq216} \ee where $K = - {\left( {{B^T}SB + R} \right)^{ - 1}}{B^T}SA$ is the controller gain; $S$ is the unique positive definite solution of the algebraic Riccati equation $S = {A^T}SA + Q - {A^T}SB{\left( {{B^T}SB + R} \right)^{ - 1}}{B^T}SA$; and $Q$ and $R$ are designed constant matrices. Furthermore, to detect cyber attacks, ${u_c}(k)$ is encrypted by injecting $d(k)$, \emph{i.e.}, \be u(k) = {u_c}(k) + d(k) \label{eq217} \ee where $d(k) \sim \mathcal{N}(0,\mathcal{E}_d)$ is independent of $u_c(k)$, and $\mathcal{E}_d$ is full rank. The ETSE-based NCSs with the CDW scheme have now been partially established as (\ref{eq21})--(\ref{eq27}) and (\ref{eq210})--(\ref{eq217}), with the attack detection formulas still remaining to be designed. To this end, two asymptotic CDW tests have been designed \cite{DWL}, and the corresponding security property, which restricts the attack power (\ref{eq28}) of undetected attacks to zero, is given in Theorem~A.2 of Section~II.C in the supplementary materials. 
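As a minimal illustration of the closed loop described by (\ref{eq21})--(\ref{eq24}) and (\ref{eq210})--(\ref{eq217}), the following Python sketch simulates, in the attack-free case, a scalar plant with the send-on-delta trigger (\ref{eq23}) and the watermarked control input (\ref{eq217}); all numeric values, including the gains $L$ and $K$, are placeholders rather than the designed values.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A, B, C = 1.01, 0.1, 1.0          # scalar plant (placeholder values)
K, L = -2.0, 0.5                  # controller/estimator gains (placeholders)
delta, E_d = 1e-4, 0.01           # trigger threshold, watermark covariance

x, x_hat, u, y_sent = 0.1, 0.0, 0.0, 0.0
for k in range(100):
    y = C * x + rng.normal(0.0, 1e-3)         # measurement
    eps = y - y_sent                          # deviation from last sent value
    gamma = 1 if eps * eps > delta else 0     # send-on-delta trigger
    if gamma:
        y_sent = y                            # transmit y(k)
    y_bar = y_sent                            # value seen after the network
    x_pred = A * x_hat + B * u                # a priori estimate
    r = y_bar - C * x_pred                    # residual used for detection
    x_hat = x_pred + L * r                    # a posteriori estimate
    d = rng.normal(0.0, np.sqrt(E_d))         # watermark d(k)
    u = K * x_hat + d                         # encrypted control input
    x = A * x + B * u + rng.normal(0.0, 1e-3) # plant update
\end{verbatim}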
\begin{remark} The above event-triggered estimator (\ref{eq210}), (\ref{eq211}) and controller (\ref{eq217}) can theoretically keep the system state bounded, \emph{i.e.}, if $\rho \left( {A+BK} \right) < 1$ and $\rho \left( {(\mathds{I}-L(k,\gamma,\delta)C)A} \right) < 1$, then there exists ${\varsigma} < \infty$ such that ${\lim _{k \to \infty }}\mathds{E}\left[ {\left\| {x(k)} \right\|} \right] < {\varsigma}$, where the proof is given in Section~II.B of the supplementary materials. Moreover, in the experiments of Section~\uppercase\expandafter{\romannumeral+4}, appropriate $Q$ and $R$ are selected to guarantee $\rho \left( {A+BK} \right) < 1$, and a proper $\delta$ is selected to guarantee $\rho \left( {(\mathds{I}-L(k,\gamma,\delta)C)A} \right) < 1$; meanwhile $Q$, $R$ and $\delta$ are carefully tuned so that the system states do not exceed their limits. \end{remark} \begin{figure*}[!t] \centering \includegraphics[width=0.7\textwidth]{FIG2_TII-21-4490.eps} \caption{Framework of ETSE-based NCSs with new ETDW scheme.} \label{fig2} \end{figure*} \begin{remark} The infinite limits $i \to \infty$ in the asymptotic CDW tests (A.14) of Section~II.C in the supplementary materials are not well suited for real-time attack detection. To deal with this problem, the finite sample CDW tests \cite{FSDW} have been derived from the asymptotic CDW tests (A.14) of Section~II.C in the supplementary materials, where two events are defined by \begin{subequations} \label{eq219} \begin{align} {\Xi _{1,i}} &:= \left\{ {\left\| {\frac{1}{i}{\mathcal{D}_i}} \right\| \geqslant {\vartheta _{1,i}}} \right\}, \hfill \label{eq219a}\\ {\Xi _{2,i}} &:= \left\{ {\left\| {\frac{1}{i}\left( {{\mathcal{R}_i} - i\mathcal{E}_r^f} \right)} \right\| \geqslant {\vartheta _{2,i}}} \right\} \hfill \label{eq219b} \end{align} \end{subequations} where ${\vartheta _{1,i} }$ and $\vartheta _{2,i}$ are time-varying threshold functions. Furthermore, define ${\Xi _i}: = \left\{ {{\Xi _{1,i}} \cup {\Xi _{2,i}}} \right\}$; if ${\Xi _i}$ is true, then an attack alarm is raised at the $i$-th sampling instant, otherwise the alarm remains silent. \end{remark} \subsection{Limitations of CDW Scheme for ETSE-Based NCSs} The above discussion concerns ETSE-based NCSs with the CDW scheme. However, due to the introduction of watermarking signals and ETSE, the CDW scheme has two limitations for ETSE-based NCSs. \emph{Limitation 1--System Performance Loss From Watermarking:} Considering the system (\ref{eq21})--(\ref{eq27}) and (\ref{eq210})--(\ref{eq217}) under no attack and with every sample transmitted (\emph{i.e.}, $a(k)=0$, $\gamma(k)=1$, $\forall k \geqslant 0$), the system performance loss is, cf. \cite[Th. 3]{DWM1}, \be \Delta {J} = tr\left( {({B^T}SB + R){\mathcal{E} _d}} \right) \label{eq220} \ee where $S$ and $R$ have been given in (\ref{eq216}). \emph{Limitation 2--Event-Triggered Covariance:} Considering the system (\ref{eq21})--(\ref{eq27}) and (\ref{eq210})--(\ref{eq217}) under no attack (\emph{i.e.}, $a(k)=0, \forall k \geqslant 0$), the cross covariance of $r(k)$ and $d(k-1)$ and the auto covariance of $r(k)$ satisfy \begin{subequations} \label{eq221} \begin{align} \mathds{E}\left[ {\left. {r(k){d^T}(k - 1)} \right|\gamma (k)} \right] &= \left( {\gamma (k)-1} \right)CB{\mathcal{E} _d}, \hfill \label{eq221a}\\ \mathds{E}\left[ {\left. {r(k){r^T}(k)} \right|\gamma (k)} \right] &\leqslant {\Psi _\beta }\left({k,\gamma,\delta}\right) \hfill \label{eq221b} \end{align} \end{subequations} 
where ${\Psi _\beta }\left({k,\gamma,\delta}\right)$ is the same as (A.3) of Section~II.A in the supplementary materials. The proof is given in Section~\uppercase\expandafter{\romannumeral+2}.D of the supplementary materials. \begin{remark} From limitations 1 and 2, it can be clearly seen that there are three main problems of the CDW scheme for ETSE-based NCSs to be solved: 1) a bigger $\mathcal{E}_d$ brings a larger $\Delta {J}$; 2) the asymptotic CDW tests (A.14) of Section~II.C in the supplementary materials may not be usable; and 3) the finite sample CDW tests (\ref{eq219}) may likewise not be usable. To overcome limitations 1 and 2, it is necessary to develop a new attack detection scheme. \end{remark} \section{New Event-Triggered Dynamic Watermarking} The above has presented the framework of ETSE-based NCSs with the CDW scheme and analysed the corresponding two limitations. To cope with these limitations, a new ETDW scheme is designed and analysed, from its \emph{asymptotic} form to its \emph{finite sample} form. As a comparison, a candidate ETDW scheme is designed and analysed in Section \uppercase\expandafter{\romannumeral+3}.A of the supplementary materials. \subsection{Design and Security Property Analysis of New ETDW Scheme for ETSE-Based NCSs} The designed framework of ETSE-based NCSs with the new ETDW scheme is shown in Fig.~\ref{fig2}. First, $y(k)$ is measured by the sensor periodically. After having received $y(k)$, the trigger generates $\gamma(k)$ by (\ref{eq23}) and then decides accordingly whether or not $y(k)$ is sent to the network; as a result, $y(k)$ becomes $\bar y(k)$. Second, $\bar y(k)$ is encrypted by injecting $d_n(k)$, which is generated by EnENDW, becoming $\bar y^{+}(k)$. Third, $\bar y^+(k)$ and $\gamma(k)$ are transmitted to the event-triggered estimator and the ETDW detector via the network, which may be attacked, so that $\bar y^+(k)$ becomes $\bar y^+_a(k)$. Then, $\bar y_a^+(k)$ is decrypted by subtracting $d_n(k)$, which is generated by DeENDW, becoming $\bar y_a^{-}(k)$. Using the received $\bar y _a^-(k)$, $\gamma(k)$ and $u(k-1)$, the event-triggered estimator calculates $\hat x_n(k|k-1)$, $\hat x_n(k|k)$, $P_{n,k|k-1}$ and $\bar P_{n,k|k}$. Furthermore, using $\hat x_n(k|k-1)$, $P_{n,k|k-1}$, $\bar P_{n,k|k}$ and $d_n(k)$, the ETDW detector evaluates whether or not an attack has taken place; if so, the alarm sounds, otherwise the alarm remains silent. Finally, using $\hat x_n(k|k)$, the controller calculates $u(k)$, which is applied by the actuator to stabilize the plant. \subsubsection{Design of Watermarking as Symmetric Key Encryption, Event-Triggered Estimator and Controller} Consider the plant and trigger (\ref{eq21})--(\ref{eq24}). To monitor the information integrity and protect the information confidentiality, $\bar y(k)$ is encrypted by $d_n(k)$, becoming \be {\bar y^ + }(k) = \bar y(k) + d_n(k) \label{eq3A1} \ee where $d_n(k) \sim \mathcal{N}(0,\mathcal{E}_{d_n})$ is independent of $\bar y(k)$, and $\mathcal{E}_{d_n}$ has full rank. Under the GRAs, ${\bar y^ + }(k)$ becomes \be \begin{aligned} \bar y_a^ + (k) &= {{\bar y}^ + }(k) + a(k), \hfill \\ a(k) &= \gamma (k)\left( {s{{\bar y}^ + }(k) + C{x_a}(k) + {v_a}(k)} \right), \hfill \\ x_a(k+1) &= A_a x_a(k). 
\hfill \end{aligned} \label{eq3A2} \ee To prevent the watermarking signal from exciting the system operation, $\bar y_a^ + (k)$ is decrypted by $d_n(k)$, becoming \be \bar y_a^ - (k) = \bar y_a^ + (k) - d_n(k). \label{eq3A3} \ee At this point, the mechanism of watermarking as symmetric key encryption (\ref{eq3A1}) and (\ref{eq3A3}) is complete; it was developed in \cite{SCN} and guarantees that, under no attack and with the trigger inactive, \be \Delta{J} = 0. \label{eq3A4} \ee Using $\bar y_a^ - (k)$, the event-triggered estimator is designed as \begin{align} \label{eq3A5} \hat x_n(k|k-1) &= A \hat x_n(k-1|k-1) + B u(k-1), \hfill \\ \label{eq3A6} \hat x_n(k|k) &= \hat x_n(k|k - 1) + L_n(k,\gamma,\delta)r_n(k) \hfill \end{align} where $r_n(k) := \bar y_a^ - (k) - C\hat x_n(k|k - 1)$, and $L_n(k,\gamma,\delta)$ is designed like $L(k,\gamma,\delta)$, \emph{i.e.}, \be \begin{gathered} {L_n}(k,\gamma,\delta) = \left( {1 + {\beta _1}\left( {1 - \gamma (k)} \right)} \right){P_{n,k|k - 1}} \hfill \\ \qquad \qquad \qquad \qquad \qquad \qquad \times {C^T}\Psi _{n,\beta }^{ - 1}\left( {k,\gamma,\delta} \right), \hfill \\ \end{gathered} \label{eq3A7} \ee \be {P_{n,k|k - 1}} = A{P_{n,k - 1|k - 1}}{A^T} + {\mathcal{E} _w}, \label{eq3A8} \ee \begin{align} &\label{eq3A9}{\Psi _{n,\beta }}\left( {k,\gamma,\delta} \right) = \left( {1 + {\beta _1}\left( {1 - \gamma (k)} \right)} \right)C{P_{n,k|k - 1}}{C^T} \hfill \\ &+ \left( {1 + {\beta _2}\left( {1 - \gamma (k)} \right)} \right){\mathcal{E}_v} + \left( {1 - \gamma (k)} \right)\left( {1 + \beta _1^{ - 1} + \beta _2^{ - 1}} \right)\delta \mathds{I}, \hfill \nonumber \end{align} \be {{\bar P}_{n,k|k}} = \left( {1 + {\beta_1}\left( {1 - \gamma (k)} \right)} \right)\left( {\mathds{I} - L_n(k,\gamma,\delta)C} \right){P_{n,k|k - 1}}, \label{eq3A10} \ee where ${P_{n,k|k - 1}} := \mathds{E}\left[ {\left. {{e_n}(k|k - 1)e_n^T(k|k - 1)} \right|\gamma (k)} \right]$ with ${e_n}(k|k - 1) := x(k) - {{\hat x}_n}(k|k - 1)$; ${P_{n,k|k}} := \mathds{E}\left[ {\left. {{e_n}(k|k)e_n^T(k|k)} \right|\gamma (k)} \right]$ with ${e_n}(k|k) := x(k) - {{\hat x}_n}(k|k)$; and ${P_{n,k|k}} \cong {{\bar P}_{n,k|k}}$. To ensure the stability of the plant, using $\hat x_n(k|k)$ from the above event-triggered estimator, the control signal (\ref{eq217}) deployed by the actuator can be re-written as \be u(k)=K \hat x_n(k|k). \label{eq3A11} \ee \subsubsection{Design of New Asymptotic ETDW Tests} With the design so far, the new ETDW scheme for ETSE-based NCSs is partially complete; the attack detection formulas remain to be designed. To design these formulas, the features of $r_n(k)$ and $d_n(k)$ under no attack are presented in the following Theorem~\ref{T3}. \begin{theorem} \label{T3} Considering the system (\ref{eq21})--(\ref{eq24}) and (\ref{eq3A1})--(\ref{eq3A11}) under no attack (\emph{i.e.}, $a(k)=0, \forall k \geqslant 0$), the cross covariance of $r_n(k)$ and $d_n(k)$ and the auto covariance of $r_n(k)$ satisfy \begin{subequations} \label{eq3A12} \begin{align} \label{eq3A12a}\mathds{E}\left[ {\left. {r_n(k){d_n^T}(k)} \right|\gamma (k)} \right] &= 0, \hfill \\ \label{eq3A12b}\mathds{E}\left[ {\left. {r_n(k){r_n^T}(k)} \right|\gamma (k)} \right] &\leqslant {\Psi _{n,\beta} }\left({k,\gamma,\delta}\right) \hfill \end{align} \end{subequations} where $\Psi_{n,\beta} \left({k,\gamma,\delta}\right)$ is the same as (\ref{eq3A9}). \end{theorem} \emph{Proof:} The proof is given in Section \uppercase\expandafter{\romannumeral+3}.C of the supplementary materials. 
\endproof Using (\ref{eq3A12}) in Theorem~\ref{T3}, together with the limit convergence theorem in probability \cite[Th. A.6]{GRA}, two new asymptotic ETDW tests can be designed as \begin{subequations} \label{eq3A13} \begin{align} \label{eq3A13a} \mathop {{\text{p-lim}}}\limits_{i \to \infty } \frac{1}{i}{\mathcal{D}_{n,i}} &= 0, \hfill \\ \label{eq3A13b} \mathop \text{\rm p-lim}\limits_{i \to \infty } \frac{1}{i}{\mathcal{R}_{n,i}} &\leqslant {\Psi _{n,\beta} }\left( {k,\gamma ,\delta} \right) \hfill \end{align} \end{subequations} where the summations are ${\mathcal{D}_{n,i}} := \sum\nolimits_{k = 1}^{i} {r_n(k){d_n^T}(k)}$ and ${\mathcal{R}_{n,i}} := \sum\nolimits_{k = 1}^{i} {r_n(k){r_n^T}(k)}$. \begin{remark} The new ETDW tests (\ref{eq3A13}), with the mechanism of watermarking as symmetric key encryption (\ref{eq3A1}) and (\ref{eq3A3}), can detect denial-of-service (DoS) attacks and replay attacks, where the proofs are given in Section~III.C of the supplementary materials. Furthermore, the new ETDW tests against other types of attacks can be investigated in future work. \end{remark} \subsubsection{Security Property of New Asymptotic ETDW Tests} Now the new ETDW scheme for ETSE-based NCSs is complete. The security property of the new ETDW scheme is analysed in the following Theorem~\ref{T4}. \begin{theorem} \label{T4} Considering the system (\ref{eq21})--(\ref{eq24}) and (\ref{eq3A1})--(\ref{eq3A11}), if both (\ref{eq3A13a}) and (\ref{eq3A13b}) are true, then the asymptotic attack power of GRAs is constrained by \be \mathop \text{\rm p-lim}\limits_{i \to \infty } \frac{1}{i}\sum\nolimits_{k = 1}^i {{a^T}(k)a(k)} \leqslant tr\left({\Psi _{n,\beta} }\left( {k,\gamma,\delta} \right)\right) \label{eq3A15} \ee where $\Psi_{n,\beta} \left({k,\gamma,\delta}\right)$ is the same as (\ref{eq3A9}). \end{theorem} \emph{Proof:} The proof is given in Section \uppercase\expandafter{\romannumeral+3}.D of the supplementary materials. \endproof \begin{remark} Unlike the zero asymptotic power of undetected attacks achieved by the trigger-free CDW scheme in Theorem~A.2, Theorem~\ref{T4} reveals that, with the new ETDW scheme in use, the asymptotic power of undetected GRAs is no \emph{more than} the power $tr \left({\Psi _{n,\beta} }( {k,\gamma,\delta}) \right)$ of the attack-free $r_n(k)$. \end{remark} \subsection{Design and Attack Detection Performance Analysis of New Finite Sample ETDW Tests for ETSE-Based NCSs} We have established a new ETDW scheme for ETSE-based NCSs and analysed its security property. However, the new asymptotic ETDW tests used in the new ETDW scheme require the infinite limit $i \to \infty$, which is unrealistic. To solve this problem, new finite sample ETDW tests for ETSE-based NCSs are designed and their attack detection performance is analysed as follows. \subsubsection{New Ideal Finite Sample ETDW Tests} To construct finite sample ETDW tests, three necessary conditions are required: \begin{enumerate} \item[c1)] Make $i$ finite; \item[c2)] The summations $\mathcal{D}_{n,i}$ and $\mathcal{R}_{n,i}$ need to be formulated; \item[c3)] To use the \emph{matrix concentration inequality} \cite[Prop. 1]{FSDW}, zero-mean matrices need to be defined. \end{enumerate} To satisfy conditions (c1)--(c3), one solution is to subtract the expectation of $\mathcal{D}_{n,i}$ or $\mathcal{R}_{n,i}$ from $\mathcal{D}_{n,i}$ or $\mathcal{R}_{n,i}$ respectively, \emph{i.e.}, \begin{align} \label{eq3B1}&\frac{1}{i}\left( {{\mathcal{D}_{n,i}} - \sum\nolimits_{k = 1}^i {\mathds{E}\left[ {\left. {{r_n}(k){d_n^T}(k)} \right|\gamma (k)} \right]} } \right), \hfill \\ \label{eq3B2}&\frac{1}{i}\left( {{\mathcal{R}_{n,i}} - \sum\nolimits_{k = 1}^i {\mathds{E}\left[ {\left. {{r_n}(k)r_n^T(k)} \right|\gamma (k)} \right]} } \right). \hfill \end{align} 
Note that (\ref{eq3B1}) and (\ref{eq3B2}) meet conditions (c1)--(c3) well. Following Theorem \ref{T3}, we substitute (\ref{eq3A12a}) and (\ref{eq3A12b}) into (\ref{eq3B1}) and (\ref{eq3B2}) respectively; then, \begin{align} \label{eq3B3}&\frac{1}{i}\left( {{\mathcal{D}_{n,i}} - \sum\nolimits_{k = 1}^i {\mathds{E}\left[ {\left. {{r_n}(k)d_n^T(k)} \right|\gamma (k)} \right]} } \right) = \frac{1}{i}{\mathcal{D}_{n,i}}, \hfill \\ &\frac{1}{i}\left( {{\mathcal{R}_{n,i}} - \sum\nolimits_{k = 1}^i {\mathds{E}\left[ {\left. {{r_n}(k)r_n^T(k)} \right|\gamma (k)} \right]} } \right) \geqslant \hfill \nonumber\\ \label{eq3B4}&\qquad \qquad \qquad \quad\frac{1}{i}\left( {{\mathcal{R}_{n,i}} - \sum\nolimits_{k = 1}^i {{\Psi _{n,\beta }}\left( {k,\gamma ,\delta} \right)} } \right). \hfill \end{align} It can be seen from (\ref{eq3A12a}) that $\mathds{E}\left[ {\left. {\mathcal{D}_{n,i}} \right|\gamma (k)} \right]=0$ in (\ref{eq3B3}), but the expectation of the right-hand side of (\ref{eq3B4}) is not zero, which violates condition (c3), \emph{i.e.}, it prevents us from using the matrix concentration inequality to develop new finite sample ETDW tests. To cope with the non-zero expectation of the right-hand side of (\ref{eq3B4}), let us first focus on the following proposition: (\ref{eq3A12b}) is true if and only if $\exists {X_n} \geqslant 0$ such that \be \mathds{E}\left[ {\left. {r_n(k){r_n^T}(k)} \right|\gamma (k)} \right] + X_n ={\Psi _{n,\beta} }\left({k,\gamma,\delta}\right). \label{eq3B5} \ee The proof is omitted. Using (\ref{eq3B5}), we can define a zero-mean matrix: \be \mathcal{\tilde R}_{n,i}^{X_n} := {\mathcal{R}_{n,i}} - \sum\nolimits_{k = 1}^i {{\Psi _{n,\beta} }\left( {k,\gamma,\delta} \right)} + i X_n \label{eq3B6} \ee where $\mathds{E}\left[ {\left. {\mathcal{\tilde R}_{n,i}^{X_n}} \right|\gamma (k)} \right]=0$. Substituting (\ref{eq3B6}) into the left-hand side of (\ref{eq3B4}) and then using the above proposition, the left-hand side of (\ref{eq3B4}) can be re-written as \be \frac{1}{i}\left( {{\mathcal{R}_{n,i}} - \sum\nolimits_{k = 1}^i {\mathds{E}\left[ {\left. {{r_n}(k)r_n^T(k)} \right|\gamma (k)} \right]} } \right)= \frac{1}{i}\mathcal{\tilde R}_{n,i}^{X_n}. \label{eq3B7} \ee Now (\ref{eq3B3}) and (\ref{eq3B7}) satisfy conditions (c1)--(c3). Furthermore, based on the matrix concentration inequality and using (\ref{eq3B3}) and (\ref{eq3B7}), the new ideal finite sample ETDW tests can be given by the following two events: \begin{subequations} \label{eq3B8} \begin{align} \label{eq3B8a}{\Xi _{n,1,i}} &:= \left\{\left. {\left\| {\frac{1}{i}{{\mathcal{D}}_{n,i}}} \right\| \geqslant {{\tilde \vartheta }_{n,1,i}}} \right| {\gamma(k)} \right\}, \hfill \\ \label{eq3B8b}{\Xi ^{X_n}_{n,2,i}} &:= \left\{\left. {\left\| {\frac{1}{i} \mathcal{\tilde R}^{X_n}_{n,i}} \right\| \geqslant {{\tilde \vartheta }_{n,2,i}}} \right| {\gamma(k)}\right\} \hfill \end{align} \end{subequations} where the detection thresholds are ${{\tilde \vartheta }_{n,1,i}} := \sqrt {(1 + {\iota _{n,1}}){\kappa _{n,1}}{{\ln i} \mathord{\left/{\vphantom {{\ln i} i}} \right. \kern-\nulldelimiterspace} i}}$ and ${{\tilde \vartheta }_{n,2,i}} := \sqrt {(1 + {\iota _{n,2}}){\kappa _{n,2}}{{\ln i} \mathord{\left/ {\vphantom {{\ln i} i}} \right. \kern-\nulldelimiterspace} i}}$, and $\iota_{n,1}$, $\iota_{n,2}$, $\kappa_{n,1}$, $\kappa_{n,2}$ are positive scalars. 
\subsubsection{New Adding-Threshold Finite Sample ETDW Tests} Even though the above new ideal finite sample ETDW tests seem to resolve the considered problem directly, there is still a huge gap between (\ref{eq3B8}) and the desired finite sample ETDW tests. This is because the $X_n$ used in (\ref{eq3B8b}) is unknown, and thus it is impossible to evaluate (\ref{eq3B8b}) in general. Yet we observe that the concept of a \emph{subset} helps to cope with this situation. Specifically, a subset of the event (\ref{eq3B8b}) allows us to eliminate the matrix ${X_n}$ by adding an adjustable real threshold. Motivated by this, we construct the new adding-threshold finite sample ETDW tests. To design the adding-threshold finite sample ETDW tests, the part of $\mathcal{\tilde R}^{X_n}_{n,i}$ in (\ref{eq3B6}) without $X_n$ is defined as \be {\mathcal{\tilde R}_{n,i}}: = {\mathcal{R}_{n,i}} - \sum\nolimits_{k = 1}^i {{\Psi _{n,\beta} }} \left( {k,\gamma,\delta} \right) \label{eq3B9} \ee and thus (\ref{eq3B6}) can be re-written as ${\mathcal{\tilde R}^{X_n}_{n,i}} = {\mathcal{\tilde R}_{n,i}}+i X_n$. The relation between $\mathcal{\tilde R}^{X_n}_{n,i}$ and $\mathcal{\tilde R}_{n,i}$ is given in the following Lemma~\ref{T5L1}. \begin{lemma} \label{T5L1} For ${\mathcal{\tilde R}_{n,i}}$ and ${\mathcal{\tilde R}^{X_n}_{n,i}}$, there are two cases: \begin{enumerate} \item[i)] If $\left\| {\frac{1}{i}{\mathcal{\tilde R}_{n,i}}} \right\| \geqslant {\tilde \vartheta _{n,2,i}} + \Im_n $, then $\left\| {\frac{1}{i}\mathcal{\tilde R}_{n,i}^{X_n}} \right\| \geqslant {{\tilde \vartheta }_{n,2,i}}$; \item[ii)] If $\left\| {\frac{1}{i}{\mathcal{\tilde R}_{n,i}}} \right\| < {\tilde \vartheta _{n,2,i}} + \Im_n$, then either $\left\| {\frac{1}{i}\mathcal{\tilde R}_{n,i}^{X_n}} \right\| < {{\tilde \vartheta }_{n,2,i}}$ or $\left\| {\frac{1}{i}\mathcal{\tilde R}_{n,i}^{X_n}} \right\| \geqslant {{\tilde \vartheta }_{n,2,i}}$ is possible, \end{enumerate} where $\Im_n = \left\| X_n \right\|$. \end{lemma} \emph{Proof:} The proof is given in Section~\uppercase\expandafter{\romannumeral+3}.E of the supplementary materials. \endproof According to Lemma~\ref{T5L1}, the adding-threshold finite sample ETDW tests can be given by (\ref{eq3B8a}) and \be {\Xi _{n,2,i}} := \left\{\left. {\left\| {\frac{1}{i} \mathcal{\tilde R}_{n,i} } \right\| \geqslant {{\tilde \vartheta }_{n,2,i}}+\tilde \Im_n} \right| {\gamma(k)}\right\} \label{eq3B10} \ee where the added threshold $\tilde \Im_n >0$ is used to approximate $\Im_n$ and needs to be carefully designed, and the detection threshold functions ${{\tilde \vartheta }_{n,1,i}}$ and ${{\tilde \vartheta }_{n,2,i}}$ have been given in (\ref{eq3B8}). Furthermore, we can define \be {\Xi _{n,i}} := \left\{{\Xi _{n,1,i}} \cup {\Xi _{n,2,i}}\right\}. \label{eq3B11} \ee We expect that if ${\Xi _{n,i}}$ is true, an attack alarm is raised at the $i$-th sampling instant; if $\neg {\Xi _{n,i}}$ is true, there is no attack alarm. \subsubsection{False Alarm Analysis under No Attack and Detection Performance Analysis for the GRAs of New Adding-Threshold Finite Sample ETDW Tests} We have now fully specified the new adding-threshold finite sample ETDW tests. The false-alarm behaviour of these tests under no attack is analysed in the following Theorem~\ref{T5}.(i), based on Lemma~\ref{T5L1}, and the GRA detection performance of these tests is presented in the following Theorem~\ref{T5}.(ii). 
\begin{theorem} \label{T5} Considering the system (\ref{eq21})--(\ref{eq24}) and (\ref{eq3A1})--(\ref{eq3A11}): \begin{enumerate} \item[i)] If there is no attack (\emph{i.e.}, $a(k)=0, \forall k \geqslant 0$), $\left\| {w(k)} \right\|<\infty$, $\left\| {v(k)} \right\|<\infty$, and $\tilde \Im_n = \left\| X_n \right\|$, then \be \mathds{P}\left( {\mathop {\limsup }\limits_{i \to \infty } {\Xi _{n,i}}} \right) = 0 \label{eq3B112} \ee \item[ii)] If the GRAs do not satisfy (\ref{eq3A15}), $\left\| {w(k)} \right\|<\infty$, $\left\| {v(k)} \right\|<\infty$, and $\tilde \Im_n = \left\| X_n \right\|$, then \be \mathds{P}\left( {\mathop {\limsup }\limits_{i \to \infty } \left. {\neg {\Xi _{n,i}}} \right|s} \right) = 0. \label{eq3B13} \ee \end{enumerate} \end{theorem} \emph{Proof:} The proof is given in Section \uppercase\expandafter{\romannumeral+3}.F of the supplementary materials. \endproof \begin{remark} As an event-triggered extension of the finite sample CDW tests (\ref{eq219}) \cite[Thms. 5 and 7]{FSDW}, Theorem~\ref{T5} reveals that i) under no attack, the new adding-threshold finite sample ETDW tests will raise only a finite number of attack alarms for ETSE-based NCSs, and ii) the new adding-threshold finite sample ETDW tests fail to detect GRAs violating (\ref{eq3A15}) at only a finite number of instants for ETSE-based NCSs, \emph{i.e.}, GRAs violating (\ref{eq3A15}) will always eventually be detected by these tests. \end{remark} \begin{remark} It is difficult to achieve $\tilde \Im_n = \left\| X_n \right\|$ exactly, because we do not know the value of $X_n$. Nevertheless, by performing multiple attack-free experiments on the system, an appropriate $\tilde \Im_n$ can be selected so that false alarms under no attack are avoided as much as possible, as shown in Section~\uppercase\expandafter{\romannumeral+4}. \end{remark} \begin{remark} At this point, limitations 1 and 2 of the CDW scheme for ETSE-based NCSs have been handled as follows: \[\begin{gathered} \text{limitation 1}:(\ref{eq220})\mathop \to \limits^{1st} (\ref{eq3A4}), \hfill \\ \text{limitation 2}:(\ref{eq221})\mathop \to \limits^{1st} (\ref{eq3A12})\mathop \to \limits^{2nd} (\ref{eq3A13})\mathop \to \limits^{2nd} (\ref{eq3B8})\mathop \to \limits^{3rd} (\ref{eq3B8a}), (\ref{eq3B10}). \hfill \\ \end{gathered} \] First, to overcome limitation 1 (\ref{eq220}) of the CDW scheme, the new ETDW scheme guarantees (\ref{eq3A4}) by treating watermarking as symmetric key encryption (\ref{eq3A1}) and (\ref{eq3A3}). Meanwhile, limitation 2 (\ref{eq221}) of the CDW scheme is transformed into the feature (\ref{eq3A12}) under the new ETDW scheme. Second, based on (\ref{eq3A12}), the new asymptotic ETDW tests (\ref{eq3A13}) are designed, which are used to produce the new ideal finite sample ETDW tests (\ref{eq3B8}). Third, based on (\ref{eq3B8}), the new adding-threshold finite sample ETDW tests (\ref{eq3B8a}) and (\ref{eq3B10}) are designed. Finally, Theorems~\ref{T4}, \ref{T5} and the experimental results in Section~IV show the reasonable GRA detection performance of the proposed new ETDW scheme. \end{remark} \section{Experiments} \begin{figure}[!t] \centering \includegraphics[width=0.4\textwidth]{FIG3_TII-21-4490.eps} \caption{Experimental platform of NIPVSSs with new and candidate ETDW schemes. NETDW: New ETDW scheme. 
CETDW: Candidate ETDW scheme.} \label{fig3} \end{figure} To demonstrate the effectiveness of the new ETDW scheme, we consider the scenario in which GRAs enter networked inverted pendulum visual servo systems (NIPVSSs) \cite{NIP} equipped with the new ETDW scheme and with the candidate ETDW scheme of Section \uppercase\expandafter{\romannumeral+2} of the supplementary materials, as shown in Fig.~\ref{fig3}. \subsection{Platform of NIPVSS} The discrete-time system state of the NIPVSS is denoted as $x(k) := \left[ {\alpha (k);\theta (k);\dot \alpha (k);\dot \theta (k)} \right]$, where $\alpha(k)$ and $\theta(k)$ are the cart position and pendulum angle at the $k$-th sampling instant, respectively. The state-space model of the NIPVSS is \[\begin{gathered} A = \left[ {\begin{array}{*{20}{c}} 1&0&{0.0100}&0 \\ 0&{1.0015}&0&{0.0100} \\ 0&0&1&0 \\ 0&{0.2945}&0&{1.0015} \end{array}} \right],B = \left[ {\begin{array}{*{20}{c}} 0 \\ {0.0002} \\ {0.0100} \\ {0.0300} \end{array}} \right] \hfill \\ \end{gathered} \] and the covariance of $w(k)$ is ${\mathcal{E} _w} = diag\left\{ {0,0,{{10}^{ - 5}},{{10}^{ - 5}}} \right\}$. The system output is $y(k)=\left[{\alpha(k);\theta(k)}\right]$, thus $C = \left[ {\mathds{I},0} \right]$; the covariance of $v(k)$ is ${\mathcal{E} _v} = diag\left\{ {2.7 \times {{10}^{ - 7}},5.5 \times {{10}^{ - 6}}} \right\}$. The trigger (\ref{eq23}) is set with $\delta = 0.00001$. The event-triggered estimators (\ref{eq210}), (\ref{eq211}), (\ref{eq3A5}) and (\ref{eq3A6}) are initialized with ${P_{ - 1| - 1}} = {P_{n, - 1| - 1}}= 0$, ${\beta _1} = {\beta _2} = 0.02$. By selecting $Q=diag\{10,10,10,10\}$ and $R=1$, the controllers (\ref{eq217}) and (\ref{eq3A11}) are designed with $K=[2.8889,-36.6415,4.9141,-7.3267]$. To analyse the power of the GRAs, an attack-power quantity is defined as $\mathcal{A}(i) = \frac{1}{i}\sum\nolimits_{k = 1}^i {{a^T}(k)a(k)}$. Considering physical limits, there are two bounds: $\left| {\alpha (k)} \right| \leqslant 0.3\,$m and $\left| {\theta (k)} \right| \leqslant 0.8\,$rad. Once one of these two bounds is crossed, the servo motor of the NIPVSS is switched ``OFF'', \emph{i.e.}, the NIPVSS goes out of control. \subsection{CDW Scheme on TTC} To analyse the experimental results of NIPVSSs with the CDW scheme on TTC, the following two steps are performed. Step 1: Six experiments on NIPVSSs with the CDW scheme of $\mathcal{E}_{d}=0.01$ on TTC under no attack are carried out. To save space, one of the six experiments is shown in Fig.~A.1 of Section~\uppercase\expandafter{\romannumeral+4}.A in the supplementary materials. The six experiments are used to determine the detection threshold functions for the CDW tests on TTC with $\mathcal{E}_{d}=0.01$. The results of the attack-free CDW tests on TTC with $\mathcal{E}_{d}=0.01$ are shown in Fig.~A.2, and the value of $\mathcal{E}_r^{f}$ and the concrete parameters of $\vartheta_{1,i}$ and $\vartheta_{2,i}$ are given in Section \uppercase\expandafter{\romannumeral+4}.A of the supplementary materials. 
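To illustrate how the test statistics in (\ref{eq219}) are evaluated from logged data, the following Python sketch (our own illustration; the threshold constants are placeholders, not the calibrated values) computes the running matrices $\frac{1}{i}\mathcal{D}_i$ and $\frac{1}{i}({\mathcal{R}_i} - i\mathcal{E}_r^f)$ and compares their spectral norms against the time-varying thresholds.

\begin{verbatim}
import numpy as np

def cdw_tests(r_log, d_log, E_r_f, kappa1, kappa2, iota1=1.0, iota2=1.0):
    """r_log[j]: residual r(k); d_log[j]: watermark d(k-1) aligned with r(k)."""
    D = np.zeros((r_log[0].size, d_log[0].size))
    R = np.zeros((r_log[0].size, r_log[0].size))
    alarms = []
    for i, (r, d) in enumerate(zip(r_log, d_log), start=1):
        D += np.outer(r, d)
        R += np.outer(r, r)
        if i < 2:
            continue  # ln(1) = 0 would make the thresholds vanish
        th1 = np.sqrt((1 + iota1) * kappa1 * np.log(i) / i)
        th2 = np.sqrt((1 + iota2) * kappa2 * np.log(i) / i)
        test1 = np.linalg.norm(D / i, 2) >= th1          # spectral norms,
        test2 = np.linalg.norm((R - i * E_r_f) / i, 2) >= th2
        alarms.append(test1 or test2)
    return alarms
\end{verbatim}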
\begin{figure}[!t] \centering \subfigure[Cart position]{\includegraphics[width=0.4\textwidth]{FIG4A_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[Pendulum angle]{\includegraphics[width=0.4\textwidth]{FIG4B_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\mathcal{A}(k)$]{\includegraphics[width=0.4\textwidth]{FIG4C_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\gamma(k)$]{\includegraphics[width=0.4\textwidth]{FIG4D_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\frac{1}{i}{\mathcal{D}_i}$]{\includegraphics[width=0.4\textwidth]{FIG4E_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\frac{1}{i}{(\mathcal{R}_i-i\mathcal{E}_r^f)}$]{\includegraphics[width=0.4\textwidth]{FIG4F_TII-21-4490.eps}} \\ \caption{States, attack power, triggering signal and detection results of NIPVSSs with the CDW scheme on TTC of $\mathcal{E}_d=0.01$ under the GRAs from $k \geqslant 400$. (a), (b), (d)-(f): Blue Line, attack-free; Red Line, under the GRAs; Black Line, detection threshold function. (c): Blue Line, zero line; Red Line, attack power $\mathcal{A}(k)$.} \label{fig4} \end{figure} Step 2: We construct the GRAs with $s=-1$, $v_a(k)=0$ and $A_a=diag\left\{{0.1,0.1,0.1,0.1}\right\}$ from $k \geqslant 400$. The detection results of NIPVSSs with the CDW scheme on TTC of $\mathcal{E}_d=0.01$ under the GRAs are shown in Fig.~\ref{fig4}, where it can be seen that 1) the pendulum angle is driven to cross 0.8 rad, and the corresponding attack power $\mathcal{A}(k)$ considerably exceeds the zero baseline, and 2) the CDW tests on TTC of $\mathcal{E}_d=0.01$ fail to detect the GRAs. \subsection{Triggering Rates of ETC} \begin{table}[!t] \centering \caption{Triggering Rates Analysis of Attack-Free NIPVSSs with New and Candidate ETDW Schemes when $\delta=0.00001$} \label{Tab2} \begin{tabular}{lccc} \toprule ~ & TR for NETDW$^1$ & ~ &TR for CETDW$^2$ \\ \midrule 1st Exp.$^3$ & 38.6008\% & 7th Exp. & 44.1856\% \\ 2nd Exp. & 43.6083\% & 8th Exp. & 40.4984\% \\ 3rd Exp. & 39.9198\% & 9th Exp. & 40.5813\% \\ 4th Exp. & 40.7949\% & 10th Exp. & 38.5386\% \\ 5th Exp. & 42.5484\% & 11th Exp. & 45.3508\% \\ 6th Exp. & 41.1089\% & 12th Exp. & 37.4888\% \\ \bottomrule \multicolumn{4}{l}{$^1$New ETDW. $^2$Candidate ETDW. $^3$Experiment.}\\ \end{tabular} \end{table} To analyse the triggering rates (TRs) of ETC with $\delta=0.00001$, we perform twelve experiments on NIPVSSs with the new ETDW scheme ($\mathcal{E}_{d_n}=0.01\mathds{I}$, six times) and the candidate ETDW scheme ($\mathcal{E}_d=0.01$, six times) under no attack. To save space, two of the twelve experiments are shown in Figs.~A.3 and~A.4 of Section~\uppercase\expandafter{\romannumeral+4}.B in the supplementary materials, and the values of the TRs are shown in Table~\ref{Tab2}, from which 1) it can be calculated that when $\delta=0.00001$, the average TRs of NIPVSSs with the new ETDW scheme of $\mathcal{E}_{d_n}=0.01\mathds{I}$ (six experiments) and the candidate ETDW scheme of $\mathcal{E}_d=0.01$ (six experiments) are 41.0969\% and 41.1073\%, respectively, which means that the trigger can save considerable communication resources, and 2) the trigger keeps the NIPVSSs operating stably. 
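For reference, the TR reported in Table~\ref{Tab2} is, under our reading, simply the percentage of sampling instants at which the trigger fires; a one-line Python sketch (our own illustration) is:

\begin{verbatim}
def triggering_rate(gamma_log):
    """Percentage of sampling instants with gamma(k) = 1."""
    return 100.0 * sum(gamma_log) / len(gamma_log)
\end{verbatim}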
\subsection{Detection Threshold Functions and False Alarm for Candidate and New ETDW Tests on ETC} \begin{table}[!t] \centering \caption{Concrete Parameters in $\tilde \vartheta_{1,n,i}$ and $\tilde \vartheta_{2,n,i}+\tilde \Im_n$ ($\tilde \vartheta_{1,i}$ and $\tilde \vartheta_{2,i} + \tilde \Im$)} \label{Tabthd} \begin{tabular}{cccccc} \toprule ~&\multicolumn{2}{c}{\tabincell{c}{$\tilde \vartheta_{1,n,i}$\\($\tilde \vartheta_{1,i}$)}} & \multicolumn{3}{c}{\tabincell{c}{$\tilde \vartheta_{2,n,i}+\tilde \Im_n$\\($\tilde \vartheta_{2,i} + \tilde \Im$)}}\\ \cmidrule(r){2-3} \cmidrule(r){4-6} ~&\tabincell{c}{$\iota_{n,1}$\\($\iota_{1}$)} & \tabincell{c}{$\kappa_{n,1}$\\($\kappa_{1}$)} &\tabincell{c}{$\iota_{n,2}$\\($\iota_{2}$)} & \tabincell{c}{$\kappa_{n,2}$($\kappa_{2}$)} & \tabincell{c}{$\tilde \Im_n$($\tilde \Im$)}\\ \midrule NETDW$^1$ & 1.0 & 1.8e-7 & 1.0 & 1.0e-6 & 1.0e-3\\ (CETDW$^2$) & (1.0) & (1.0e-5) & (1.0) & (1.0e-6) & (1.0e-3)\\ \bottomrule \multicolumn{6}{l}{$^1$New ETDW. $^2$Candidate ETDW.}\\ \end{tabular} \end{table} The above 12 experiments are also used to determine the detection threshold functions for the new and candidate finite sample adding-threshold ETDW tests with any $\mathcal{E}_{d_n}$ and with $\mathcal{E}_d = 0.01$, respectively. The results of the candidate and new finite sample adding-threshold tests are shown in Figs.~A.5-A.7 of Section \uppercase\expandafter{\romannumeral4}.C in the supplementary materials, and the concrete parameters of the detection threshold functions in $\tilde \vartheta_{1,i}$, $\tilde \vartheta_{2,i} + \tilde \Im$, $\tilde \vartheta_{1,n,i}$ and $\tilde \vartheta_{2,n,i} + \tilde \Im_n$ are given in Table~\ref{Tabthd}, which shows that false alarms under no attack can be avoided as much as possible by selecting appropriate $\tilde \vartheta_{1,i}$, $\tilde \vartheta_{2,i} + \tilde \Im$, $\tilde \vartheta_{1,n,i}$ and $\tilde \vartheta_{2,n,i} + \tilde \Im_n$. \subsection{The GRAs Detection Effectiveness for Candidate and New ETDW Tests on ETC} \begin{figure}[!t] \centering \subfigure[Cart position]{\includegraphics[width=0.4\textwidth]{FIG5A_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[Pendulum angle]{\includegraphics[width=0.4\textwidth]{FIG5B_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\mathcal{A}(k)$]{\includegraphics[width=0.4\textwidth]{FIG5C_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\gamma(k)$]{\includegraphics[width=0.4\textwidth]{FIG5D_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\frac{1}{i}{\mathcal{D}_i}$]{\includegraphics[width=0.4\textwidth]{FIG5E_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\frac{1}{i}{\mathcal{\tilde R}_i}$]{\includegraphics[width=0.4\textwidth]{FIG5F_TII-21-4490.eps}} \\ \caption{States, attack power, triggering signal and detection results of NIPVSSs with the candidate ETDW scheme of $\mathcal{E}_d=0.01$ under the GRAs from $k \geqslant 400$. (a), (b), (d)-(f): Blue Line, attack-free; Red Line, under the GRAs; Black Line, detection threshold function.
(c): Blue Line, $tr\left( {{\Psi _\beta }(k,\gamma,\delta)} \right)$; Red Line, attack power $\mathcal{A}(k)$.} \label{fig5} \end{figure} \begin{figure}[!t] \centering \subfigure[Cart position]{\includegraphics[width=0.4\textwidth]{FIG6A_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[Pendulum angle]{\includegraphics[width=0.4\textwidth]{FIG6B_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\mathcal{A}(k)$]{\includegraphics[width=0.4\textwidth]{FIG6C_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\gamma(k)$]{\includegraphics[width=0.4\textwidth]{FIG6D_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\frac{1}{i}{\mathcal{D}_{n,i}}$]{\includegraphics[width=0.4\textwidth]{FIG6E_TII-21-4490.eps}} \\ \vspace{-0.15in} \subfigure[$\frac{1}{i}{\mathcal{\tilde R}_{n,i}}$]{\includegraphics[width=0.4\textwidth]{FIG6F_TII-21-4490.eps}} \\ \caption{States, attack power, triggering signal and detection results of NIPVSSs with the new ETDW scheme of $\mathcal{E}_{d_n}=0.01\mathds{I}$ under the GRAs from $k \geqslant 400$. (a), (b), (d)-(f): Blue Line, attack-free; Red Line, under the GRAs; Black Line, detection threshold function. (c): Blue Line, $tr\left( {{\Psi _{n,\beta }}(k,\gamma,\delta)} \right)$; Red Line, attack power $\mathcal{A}(k)$.} \label{fig6} \end{figure} We construct the GRAs with $s=-1$, $v_a(k)=0$ and $A_a=\mathrm{diag}\left\{{0.1,0.1,0.1,0.1}\right\}$ from $k \geqslant 400$. The detection results of NIPVSSs with the candidate finite sample adding-threshold ETDW tests of $\mathcal{E}_d=0.01$ and the new finite sample adding-threshold ETDW tests of $\mathcal{E}_{d_n}=0.01\mathds{I}$ under the GRAs are shown in Figs.~\ref{fig5} and \ref{fig6}, respectively, where it can be seen that 1) the candidate finite sample adding-threshold ETDW tests of $\mathcal{E}_d=0.01$ fail to detect the GRAs, although the pendulum angle is driven to cross 0.8 rad and the corresponding attack power $\mathcal{A}(k)$ considerably exceeds the value of $tr\left( {{\Psi _\beta }(k,\gamma,\delta)} \right)$; and 2) the new finite sample adding-threshold ETDW tests of $\mathcal{E}_{d_n}=0.01\mathds{I}$ succeed in detecting the GRAs when the pendulum angle is driven to cross 0.8 rad and the corresponding attack power $\mathcal{A}(k)$ considerably exceeds the value of $tr\left( {{\Psi _{n,\beta} }(k,\gamma,\delta)} \right)$. \begin{remark} It can be clearly seen from Sections IV.B-E that the time-triggered strategy provides worse detection performance than the event-triggered one. Specifically, the CDW tests on TTC of ${\mathcal{E}_d} = 0.01$ fail to report the GRAs as shown in Fig.~\ref{fig4}, while the ETDW tests of ${\mathcal{E}_{{d_n}}} = 0.01\mathds{I}$ succeed in reporting the GRAs as shown in Fig.~\ref{fig6}. With a larger ${\mathcal{E}_d}$ (\emph{e.g.}, ${\mathcal{E}_d} = 10$), the CDW tests on TTC could succeed in reporting the GRAs. However, a large ${\mathcal{E}_d}$ for the CDW tests will degrade the control performance, or even crash the system. In comparison with the CDW tests, the ETDW tests with watermarking signals of arbitrary ${\mathcal{E}_{{d_n}}}$ will not degrade the control performance, as shown in (\ref{eq3A4}), whereas a larger $\delta$ (\emph{i.e.}, a lower triggering frequency) will degrade the control performance. Therefore, co-design between the event-triggered threshold $\delta$ and the controller gain $K$ provides a path to guarantee the control performance.
\end{remark} \subsection{The GRAs Detection Effectiveness of New ETDW Tests under Different Watermarking Intensities on ETC} To further investigate the impact of $\mathcal{E}_{d_n}$ on the GRAs detection effectiveness, we repeat the experiments of Fig.~\ref{fig6} with the new finite sample adding-threshold ETDW tests of $\mathcal{E}_{d_n}=0.0001\mathds{I}$. The experimental results are shown in Fig.~A.8 of Section \uppercase\expandafter{\romannumeral4}.D in the supplementary materials, where it can be seen that the new finite sample adding-threshold ETDW tests of $\mathcal{E}_{d_n}=0.0001\mathds{I}$ fail to detect the GRAs (the pendulum angle is driven to cross 0.8 rad). Therefore, a sufficiently large watermarking intensity (\emph{e.g.}, $\mathcal{E}_{d_n}=0.01\mathds{I}$) should be selected in the new finite sample adding-threshold ETDW tests for successful attack detection. \section{Conclusion} A linear event-triggered extension to the CDW scheme has been developed. Specifically, a new ETDW scheme was designed, from new asymptotic ETDW tests to new ideal finite sample ETDW tests, which limits the power of undetected GRAs and guarantees finitely many false alarms under no attack and finitely many failures in GRAs detection. Experimental results on NIPVSSs verified the proposed scheme. The ETDW scheme can also be applied to nonlinear systems. Through linearization, it can be directly applied to nonlinear systems with a low degree of nonlinearity. However, for nonlinear systems with a high degree of nonlinearity, the properties of saturation, dead zone, backlash, relay, and so forth must be incorporated to analyse the covariance of the signals. Extending the ETDW scheme to such nonlinear systems is therefore an interesting direction for future work.
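As a minimal illustration of the linearisation-plus-discretisation route mentioned above, the following sketch obtains discrete-time matrices with the same structure as the $A$ and $B$ quoted in the platform description; the continuous-time pendulum parameters are assumptions chosen for illustration, not the identified NIPVSS values.
\begin{verbatim}
# Hedged sketch: zero-order-hold discretisation of a linearised inverted
# pendulum.  g and l are illustrative assumptions chosen so that the
# result resembles (not reproduces) the A matrix of the platform.
import numpy as np
from scipy.signal import cont2discrete

g, l = 9.81, 0.333   # assumed gravity and effective pendulum length
Ts = 0.01            # sampling period implied by A[0, 2] = 0.0100

# states: [cart position, pendulum angle, cart velocity, angular velocity]
Ac = np.array([[0, 0,     1, 0],
               [0, 0,     0, 1],
               [0, 0,     0, 0],
               [0, g / l, 0, 0]])
Bc = np.array([[0], [0], [1], [3]])   # illustrative input coupling

Ad, Bd, *_ = cont2discrete((Ac, Bc, np.eye(4), np.zeros((4, 1))), Ts,
                           method='zoh')
print(np.round(Ad, 4))   # compare with the quoted A matrix
\end{verbatim}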
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The title is intended to provoke questions about the use of classical concepts, strings being the example, in descriptions of relativistic quantum physics. Why are distinguishable, classical geometric objects considered to describe relativistic quantum dynamics? As the more general description of nature, quantum mechanics should stand on its own without a foundation in classical concepts. While it is our experience that classical limits of quantum mechanics are usefully interpreted using the geometric concepts of point or string in spacetime, this interpretation applies to a limited set of approximations to quantum physics. For example, classical gravity predicts spacetime singularities, while one perspective of quantum physics is that the interpretation of states as distinguishable objects falling into a spacetime singularity fails at small distances. The description of states as elements of a Hilbert space remains when the approximation and the geometric interpretation fail. The classical approximation develops a singularity, while the appropriate description of physical states as Hilbert space elements continues a unitary evolution. Indeed, in the rigged Hilbert spaces of interest in relativistic quantum physics, the arguments of functions that label states are variables of integration in representations of generalized functions\footnote{Chapter II, section 4 in [\ref{gel2}]}. A spacetime geometry is the result of interpretation of particular states that are perceived to evolve along trajectories on manifolds according to classical models. The scalar product provides the geometry native to the Hilbert space of states. The intended analogy with aether is to emphasize that effective methods are not necessarily elaborations of prevailing methods. At one time, physics was interested in an aether that exhibited electromagnetic waves and had properties consistent with the observed independence of the speed of light from an observer's velocity. This inquiry was not insightful, and physics abandoned it to describe electromagnetism without considering electromagnetic waves as disturbances in a medium. Similarly, limiting descriptions of relativistic quantum physics to quantizations of classical models may unnaturally constrain development. Indeed, quantum mechanics developed when observation contradicted the classical concept of distinguishable objects having trajectories. Of course, difficulties in the canonical quantization of relativistic field theory were recognized from the start [\ref{wightman-hilbert}], and mitigations such as S-matrix and string theories [\ref{witten}] have been suggested. Feynman series [\ref{weinberg}] provide a phenomenologically successful development for relativistic quantum physics, and the analysis of diagrams appeals to classical intuition, but whether the development realizes quantum mechanics remains undemonstrated. In contrast, there are Hilbert space realizations of relativistic quantum physics that exhibit interaction in physical spacetimes that are not derived from, and indeed are excluded by, a characterization as canonical quantizations [\ref{intro}]. These constructions are uniquely quantum mechanical, possess anticipated classical limits [\ref{limits}], include $1/r$ and Yukawa equivalent potentials [\ref{feymns}], and display substantive characteristics of nature: causality, invariance of transition amplitudes under Poincar\'{e} transformations, nonnegative energy, and a Hilbert space realization.
Scattering amplitudes from the constructions have asymptotes at weak coupling and small momentum exchanges that approximate Feynman series amplitudes. But, the constructions that exhibit interaction violate prevailing technical conjecture for quantum field theories: the real fields are not Hermitian Hilbert space field operators, no states have strictly bounded spacetime support, and the interaction Hamiltonians vanish while interaction is manifest. The constructions would not be considered as realizations of quantum field theory in prevailing developments. Nevertheless, the constructions provide explicit examples of alternatives to canonical quantizations. Although abandonment of canonical quantization leaves physics without familiar characterizations for dynamics, the abandonment achieves realizations of relativistic quantum physics that exhibit familiar interactions in physical spacetime. Explicit examples enable critical review of preconceptions. For example, the constructions escape Haag's theorem in an unanticipated way: in the example of a single Lorentz scalar field, the constructions have a two-point function that can be taken to be the Pauli-Jordan positive frequency function but with a field that is not a Hermitian Hilbert space operator. As a consequence, the field is not unitarily similar to the Hermitian free field operator. The constructions abandon reliance on quantizations of classical models and perturbation of free fields. Consistency of classical limits with models of classical physics is a weaker constraint than requiring that quantum dynamics results from quantizations of classical limits. Association of classical dynamic variables with self-adjoint Hilbert space operators is one questionable extrapolation of canonical quantization to relativistic quantum physics. This association provides a translation from classical to quantum mechanics but the extrapolation to relativity and gravity results in contradictions and requires generalizations that are not evidently motivated except to mitigate those contradictions. Association of classical dynamic variables with self-adjoint Hilbert space operators is essential to canonical quantization and canonical quantization is used despite counterexamples to the necessity of such an association. Indeed, due to Lorentz covariance, location is not directly associated with a self-adjoint operator [\ref{wigner},\ref{yngvason-lqp},\ref{johnson}]. That is, multiplications by the values of arguments of functions that label states do not correspond to self-adjoint Hilbert space operators except in nonrelativistic limits. The successes of nonrelativistic quantum mechanics and Feynman series are indications that quantum physics constrained to have appropriate classical limits does approximate nature. Nevertheless, the constructions suggest that inclusion of relativity with interaction is inconsistent with canonical quantizations, in particular, is inconsistent with a correspondence of relativistic classical fields with self-adjoint Hilbert space field operators. The constructions of relativistic quantum physics that exhibit interaction and trivial interaction Hamiltonians are inconsistent with a canonical quantization. Canonical quantizations are impressively successful in nonrelativistic quantum mechanics and Feynman series, and to many, these concepts are the essence of quantum mechanics, but the constructions demonstrate that relativistic quantum physics can be implemented without Hermitian Hilbert space field operators. 
After nearly one hundred years of development\footnote{Counting from Max Born, {\em \"{U}ber Quantenmechanik}, {\bf Z. Phys.} 26, pp.~379-395, 1924.}, controversy remains over basic understanding of quantum mechanics as the description of reality. Is quantum mechanics a new dynamics for classically idealized objects, or is quantum dynamics a distinct description of objects as elements of Hilbert spaces? Is quantum dynamics necessarily described by canonical quantizations of classical dynamics? Here, the derivation of quantum dynamics from classical dynamic models is designated a {\em quantization}, use of the canonical commutation or anticommutation relations in a quantization is a {\em canonical quantization}, and the essence of {\em classical} dynamics is a description of distinguishable geometric objects using Cartesian or locally Minkowski coordinates. Classical dynamics considers descriptions of trajectories of distinguishable objects to be reality. Quantum dynamics describes physical states as elements of Hilbert spaces, with unitary transformations, homomorphic to the translation group, determining the temporal evolution. Objects are distinguished by mass and quantum numbers, but objects are otherwise indistinguishable in quantum mechanics. The early development of quantum mechanics focused on canonical quantization and developed a correspondence of quantum with classical for nonrelativistic mechanics. However, when relativity is considered, the elevation of classical field theories to quantum mechanical theories incurs considerable difficulty: divergences in conjectured expansions, physically inequivalent realizations due to inapplicability of the Stone-von Neumann theorem, the `no go' result of the Haag (Hall-Wightman-Greenberg) theorem, and anomalies, although the renormalizable development has impressive phenomenological successes. Nevertheless, even a nonrelativistic description of quantum mechanics suffers the Einstein-Podolsky-Rosen and Schr\"{o}dinger's cat paradoxes when classical concepts are retained. These measurement description paradoxes highlight the implausibility of maintaining classical descriptions of states of nature. Probabilistic, hidden variable, and ``we just don't understand it'' interpretations of quantum mechanics are characterized here as methods to preserve a classical interpretation of quantum mechanics. Alternatively, quantum mechanics is considered a complete theory in the Everett-Wheeler-Graham (EWG) relative state description [\ref{ewg}] and in axiomatic developments. Axiomatic developments rely only on explicit assumptions, but realizations of relativistic quantum physics that exhibit interaction remain an unattained goal. It is argued here that this lack of realizations is due to remnants of strong, canonical quantization-based assumptions in the axioms. Examples demonstrate realizations of relativistic quantum physics that violate prevailing technical conjecture. That is, relaxation of technical conjecture originating in canonical quantization results in explicit realizations of relativistic quantum physics. These constructed examples suggest that revisiting the foundations of quantum mechanics is productive. Free field theories, the archetype for the mathematics of relativistic quantum physics, are considered here to be misleading, isolated cases. Here, low dimensional spacetime models and free fields plus the related physically trivial models are excluded from consideration.
Indeed, free fields are defined in physical spacetimes as Hermitian Hilbert space operators labeled by functions from sets that include functions of bounded spacetime support, and this result has presumably contributed to the retention of technical conjecture inappropriate to interacting field theories. The constructions that exhibit interaction are singularly removed in a substantial sense from these free and related models. The constructions do not result from perturbations of free field operators. It is not suggested here that the properties of these constructions are necessary to relativistic quantum physics, and so the indicated contradictions are between the constructions and canonical quantization-based conjecture, but the longstanding lack of explicit realizations of interacting quantum fields in physical spacetimes suggests that the contradictions apply to more general cases. The interactions of our common experience often correlate us with localized states, so the classical limit dominates our experience and intuition, but the truer representation of nature, one in which states can manifest in complementary ways, particles and waves, is quantum mechanics. In relativistic quantum physics, the states are elements of rigged Hilbert spaces and are labeled by functions. A Poincar\'{e} invariant scalar product necessitates that the states are labeled by functions dual to generalized functions. Relativistic Hilbert space field operators with expectation values that are functions of spacetime are inconsistent with Poincar\'{e} invariant likelihoods [\ref{bogo},\ref{wizimirski}]. The Feynman rules formalize ``interaction at a point'', an unnatural concept in relativistic quantum descriptions. The concept of a spacetime geometry for function arguments is imposed as an interpretation of the perceived trajectories of point-like states. These observations are classical approximations of relativistic quantum physics. Particular selections for the state labels simplify the description of motion. The representations of generalized functions are selected to result in straight-line trajectories for point-like, free particle states. This linear description of free motion is preserved with Poincar\'{e} transformations and scale changes. Deflections from linear motion indicate interaction. At small scales, or when packet spread or entanglement is appreciable, classical interpretations no longer necessarily apply, and the evolution of the elements of the Hilbert space substitutes for the classical spacetime trajectory approximations. Elements of Hilbert spaces describe nature. For nonrelativistic motion, these states can be considered as superpositions over classically idealized descriptions, with the consideration that these classically idealized descriptions typically correspond to generalized functions and not elements of the Hilbert space. Ehrenfest's theorem provides an intuitive correspondence of classical trajectories with quantum dynamics in the case of nonrelativistic motion and point-like states, but once particle production is energetically possible, such a correspondence no longer applies. Together with consideration of observers' interpretations of states in terms of classical concepts, quantum mechanics is a complete description of nature with no need for external observers or ad hoc collapse to particular, idealized states.
The properties of operators realized in the Hilbert spaces of interest to physics are determined, and their algebraic properties are not necessarily available to be specified in a canonical quantization. The generators of symmetries, such as the generators of translations, energy and momentum, are self-adjoint operators, but additional conjectured operators such as locations and fields are not necessarily self-adjoint Hilbert space operators. Indeed, the constructed real quantum fields are the conventional multiplication in the algebra of function sequences [\ref{borchers}], but in the case of the constructions this multiplication does not result in self-adjoint Hilbert space field operators [\ref{intro}]. The indistinguishable particles of quantum mechanics resolve the Gibbs paradox, but this description is in violent conflict with intuition from considering distinguishable objects traveling along trajectories. For dominantly-peaked packets, the state evolution of nonrelativistic objects is well approximated by the classical description while packet overlaps and entanglements are negligible. At a large scale, for functions with isolated support concentrated in small areas, the quantum description appears point-like. Such states are designated here as {\em dominantly-peaked packet states}. However, at a small scale, or when entanglement is significant, or for multiple overlapping or energetic particles, the differences of the quantum and classical descriptions are not negligible. The scale for {\em small} here is set by the Compton wavelength, $\hbar/mc$, and corresponds to a nonrelativistic limit. Minimum packet states are the most classical states in the sense that the geometric mean of the packet extent in position and momentum is minimal. A classical limit applies to a multiple object state when dominantly-peaked behavior is exhibited in each spacetime argument and the peaks are jointly isolated. The peaks of packets with appropriate spreads propagate along classically described trajectories until spreads become too large or packets collide or bound states decay. A classical limit applies as an approximation and neglects rare events. The growth of the packet spread is negligible when the spread significantly exceeds the Compton wavelength and the dominant weight of the packet is sufficiently small to be spatially isolated [\ref{limits}]. Quantum mechanics is a description for the dynamics of elements of Hilbert spaces. The quantum state, an element of a Hilbert space, is the description of reality, and paradoxes of measurement originate in attempts to consider quantum states as statistical descriptions of objects that are actually located at points, travel along trajectories, and as a consequence, go through one or the other of the two slits in a Young's double slit interferometer. The description of a particle as a point traveling on a trajectory provides a useful approximation for localized states, for intervals of time, and when entanglement can be neglected. A localized state may (mostly) pass through one slit of the interferometer, but broadly supported states interact significantly with both slits. Quantum mechanics provides the general case, while classical limits are more common in our experience. The Einstein-Podolsky-Rosen paradox [\ref{epr}], originally intended as a criticism of quantum mechanics, demonstrates that a classical concept for state, that is, an object with a determined, classically described state independent of the states of other objects, is not viable.
A consequence of the observation that the principles of physics are independent of orientation is that angular momentum is conserved. If a spin zero particle decays into two spin one-half particles, then to conserve angular momentum the spins of the two product particles must add to zero. For any axis, one spin one-half particle must be spin up and the other spin down. Should these two particles fly apart, we can observe the spin of the near particle and infer the spin of the distant particle. Should the near particle be spin up, then we know that to conserve angular momentum, the distant particle must be spin down. Or, if before the near measurement we rotate our measuring apparatus and measure the spin of the near particle along another axis, it might be spin down or spin up on this new axis. But then the distant state must be in the corresponding spin up or down state on that axis. The issue is that if the description of the distant state has reality and is unaffected by our description of the near particle, how can angular momentum be conserved? How can our selection for measurement affect the distant description? The second particle is assumed to be sufficiently distant that the first particle does not causally affect the distant particle. If one accepts a classical description in which the distant particle must be described by a determined spin, then conservation of angular momentum in quantum mechanics has a problem. Alternatively, the classical concept of a determined state for the distant particle in the pair has a problem. The quantum mechanical description is that the spin states of the pair are entangled as paired spins that conserve angular momentum. This entanglement was created when the original particle decayed and is not captured in a classical description. The observed states of the particle pair are described as correlated pairs, a description that preserves angular momentum. The classical idealization attributes reality to the spin of a distinguishable object independently of the states of other objects. This classical concept is not viable as a description of nature. Another argument against classical descriptions of states is the celebrated Schr\"{o}dinger's cat paradox. To maintain the idea of a determined classical description and yet use the phenomenologically successful formalism of quantum mechanics necessitates ``collapse of the wave packet'' upon observation. This collapse from a quantum mechanical superposition of states to a classically described, determined state upon observation is imposed ad hoc to justify a classical description of the observer and the result of an observation. The Schr\"{o}dinger cat paradox takes consideration of a superposition of states from the microscopic, where it is less evident and more acceptable, to the macroscopic. Decay of an unstable isotope results in a dead cat in a box. There is a finite probability for the isotope to have decayed at any time. That is, at any time, the description of the quantum state includes components with decayed and undecayed isotopes. Schr\"{o}dinger's cat provides a second example of how the entanglement of states in quantum mechanics works. The live cat is entangled with the undecayed isotope, and the dead cat is entangled with the decayed isotope. An observation entangles states of the observer with one or the other correlated pair of states.
Again, the classical concept that at any instant the cat is in a determined state, necessarily either alive or dead independently of the state of the observer, is not consistent with nature. Superposition applies, and the quantum mechanical description includes entanglement of distinct states of the observer with the various states in the superposition of possibilities. This description is sometimes called a ``many-worlds'' interpretation, but bizarre images of splitting worlds do not apply. This relative state interpretation is completely consistent with our experience. The EWG relative state interpretation, due to Hugh Everett, John Archibald Wheeler and Neill Graham [\ref{ewg}], demonstrates that there is no discernible difference between keeping a complete history of the evolution of the quantum state as a superposition of all possibilities and keeping only the history relative to one selected sequence of results of observations. The relevant physical description is completely indifferent to whether the alternative histories are accounted for or not. The alternative histories have no discernible effect on our future observations. This result follows from the Hilbert space description of states. And finally, classical physics has its own flaws, such as the description of the electromagnetic radiation reaction force. These flaws degrade the plausibility of extrapolations of classical models to small scales. \section{A construction of relativistic quantum physics} The constructions provide alternative technical approaches to relativistic quantum physics [\ref{intro}]. The constructions result from consideration of possibilities within quantum mechanics that do not support the description of dynamics as canonical quantization. The constructions display causality, appropriate invariance under Poincar\'{e} transformations, nonnegative energy, and provide explicit Hilbert space realizations exhibiting interaction in physical spacetimes. The properties of these constructions suggest that difficulties in the union of relativity with quantum mechanics result from remnants of classical concepts in quantum mechanics and from an insufficient consideration of the properties of generalized functions. Physical states are elements of a Hilbert space with a dense set of elements labeled by test function sequences. The Hilbert spaces appropriate for relativistic quantum physics derive from rigged (equipped) Hilbert spaces. A {\em rigged Hilbert space} is also designated a Gelfand triple after Israel Gelfand. In a Gelfand triple, a set of countably normed test functions is contained in a normed set of functions that label the elements of the Hilbert space, and the functions that label the elements of the Hilbert space are contained in the set of generalized functions (functionals) dual to the set of test functions. The Hilbert spaces result from isometries of equivalence classes of elements of the linear vector ({\em pre-Hilbert}) space of test function sequences to dense sets of elements for the Hilbert spaces. These sequences consist of functions with increasing numbers of spacetime arguments. This is entirely conventional. The revision is to select a subset of the generalized functions to consider and to label the states with functions that have Fourier transforms with support only on positive energies. That is, the Fourier transforms of the functions lack support on the negative energy support of the generalized functions that define a scalar product of function sequences.
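Schematically (the symbols $\mathcal{S}$, $\mathcal{H}$, $\mathcal{S}'$ and $\widetilde{f}_n$ are notation introduced only for this summary), the structure just described combines a Gelfand triple with a positive-energy support condition on the label functions, \[ \mathcal{S} \;\subset\; \mathcal{H} \;\subset\; \mathcal{S}',\qquad \widetilde{f}_n(p_1,\ldots,p_n)=0 \;\;\mbox{unless each } E_k>0, \] where $p_k:=(E_k,{\bf p}_k)$ and $f_n$ is the $n$-argument component of a label sequence.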
The observation that nature includes only positive energies is implemented by limiting the supports of the Fourier transforms of the functions that label physical states to positive energies. As a consequence, the support of these functions is not contained in a bounded region in spacetime. The scalar product provides a semi-norm on these sequences and implies the realization of states as elements of a Hilbert space. The implied scalar product of the Hilbert space is Poincar\'{e} invariant and local. Interaction is observed in the scalar products of plane wave limits of states. The selection of functions achieves concurrent satisfaction of the spectral support and microlocality conditions of relativistic quantum physics. The generalized functions that define the scalar product of the Hilbert space are the vacuum expectation values (VEV) of the fields and are generalized functions for an enveloping set of functions that includes the functions that lack negative energy support. The Fourier transforms of the enveloping set of functions are the span of products of multipliers of Schwartz functions of energy-momenta with Schwartz functions of momenta. The Fourier transforms of these functions are test functions in one less dimension than spacetime when $E^2=m^2+{\bf p}^2$, with $p:=(E,{\bf p})$ the energy-momentum Lorentz vector. These functions include the spacetime Schwartz functions; generalized functions, for example, functions with temporal support concentrated at a point; and the functions used by Lehmann, Symanzik and Zimmermann in calculations of scattering amplitudes. The enveloping set of functions also includes functions of bounded spacetime support, used to define and test the local properties of the VEV. The quantum fields satisfy the established definition as multiplication in the algebra of function sequences, but the algebra consisting of sequences of functions that lack negative energy support is not $*$-involutive. As a consequence, the real field does not satisfy Hermiticity, necessary to self-adjointness of a Hilbert space field operator, and it is unresolved whether the fields are generally Hilbert space operators. The ``operator-valued distributions'' $\Phi(x)$ are formally Hermitian, that is, real, but the lack of real functions among the state labels precludes Hermiticity of Hilbert space field operators. That the constructions do not satisfy a canonical quantization is emphasized by evaluation of the Hamiltonian. The Hamiltonian for the constructions with a single Lorentz scalar field is $\sqrt{m^2+{\bf p}^2}$, assigned by canonical quantization as the Hamiltonian of a free field. As a consequence, the Lagrangian density associated with the construction is trivial, while the development manifests non-trivial interaction. A description of quantum dynamics based upon quantization of classical Lagrangians excludes these constructions. The constructions exhibit the physical properties of relativistic quantum physics but violate two prevailing technical assumptions. In the constructions:\begin{enumerate} \item the quantum fields are not self-adjoint operators; \item no elements of the constructed Hilbert space are labeled by functions with support strictly limited to bounded regions of spacetime.\end{enumerate} The first technical difference results from the distinction between real fields and Hermitian Hilbert space field operators. The sets of functions that label the elements in the constructed Hilbert spaces lack real functions.
The second difference is the result of the distinction between zero and arbitrarily small. Physically, there is a negligible difference between zero and arbitrarily small, but the implications of the distinction are decisive. The sets of functions that label states include dominantly-peaked packet states that are arbitrarily dominantly weighted within small bounded regions but never vanish in finite regions. There is an analogy with quasi-analytic functions: the entire function $\exp(-(z/\sigma)^2)$ is real for real values of $z$ when $\sigma \in {\bf R}$ and is arbitrarily dominantly supported in a region of size proportional to $\sigma$, but it does not vanish in any finite region. The only quasi-analytic function that vanishes in a finite region is zero. Acceptance of these two deviations from established technical conjecture results in realizations of interacting fields in physical spacetime. With these technical revisions, challenging `no go' theorems, in particular, demonstrations of the uniqueness of the two-point function ([\ref{feder}], the Jost-Schroer theorem and similar results [\ref{greenberg}]), do not apply to the constructions since the constructions lack Hermitian field operators, an assertion underlying the theorems. The constructions violate technical conjecture in the Wightman-G\aa rding, Wightman functional analytic, and Haag-Kastler algebraic axioms for relativistic quantum physics. The constructions violate aspects of each of the sets of axioms:\begin{itemize} \item[-] that fields are necessarily Hermitian Hilbert space operators in the Wightman-G\aa rding development. \item[-] that the spectral support condition and the semi-norm apply for all spacetime Schwartz functions in the Wightman functional analytic development. \item[-] that there are local observables strictly associated with bounded spacetime regions in the isotony condition of the Haag-Kastler development.\end{itemize}In the case of isotony, exclusion of the possibility of local observables has not been demonstrated, but none of the projections onto subspaces of states is strictly localized. For the constructed examples of relativistic quantum physics, no states are labeled by functions of bounded spacetime support. The functions may be arbitrarily dominantly supported in a bounded spacetime region, but the functions do not vanish in any region of spacetime, a property known as {\em anti-locality} [\ref{segal},\ref{masuda}]. Application of the spectral support condition and the semi-norm to spacetime Schwartz functions in the Wightman functional analytic development implies that real quantum fields are necessarily Hermitian Hilbert space operators. In contrast, the spectral support and the semi-norm axioms applied to the sets of functions with limited energy support do not imply that there are Hermitian Hilbert space field operators. These two technical assertions, self-adjointness and bounded spacetime support, preclude the constructed realizations that exhibit interaction. The established axioms are too strong to admit the constructions. Indeed, the only realizations discovered for these axioms exhibit no interaction. The axioms exclude forms such as $\delta(p_1\!+\!p_2\!+\cdots+\!p_n) \prod_{k=1}^n \delta(p_k^2-m^2)\,f(p_1,\ldots,p_n)$ that are an evident choice for the Fourier transforms of non-trivial components of Lorentz scalar VEV and are generalized functions in four dimensions. Here, $f$ is a symmetric, Lorentz invariant function.
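Displayed in dedicated notation (the symbol $\widetilde{W}_n$ for the Fourier transform of the $n$-point VEV is introduced here only for readability), the quoted form reads \[ \widetilde{W}_n(p_1,\ldots,p_n) = \delta\Big(\sum_{k=1}^{n} p_k\Big) \prod_{k=1}^{n}\delta\big(p_k^2-m^2\big)\, f(p_1,\ldots,p_n), \] combining total energy-momentum conservation, mass-shell support in each argument, and the symmetric, Lorentz invariant weight $f$.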
Nonnegative energy is established by the selection of functions that label the physical states. Such forms suffice physically and result in many Hilbert space realizations that exhibit the physical properties of relativistic quantum physics. The choice provided by the constructions is either to revise the axioms or to reject the sole elementary examples of relativistic quantum physics that exhibit interaction in physical spacetimes. \section{Discussion} Canonical quantization emphasizes extrapolation of a modeled classical limit back to specify the general case. Although extrapolation of classical methods both facilitated acceptance of the quantum theory and provides a method to predict and classify quantum dynamics, the method appears not to extend simply to include relativity and gravity. Constructions provide the alternative of quantum mechanical realizations with appropriate classical limits that are not canonical quantizations of classical models. A characterization of the explicit realizations of relativistic quantum physics that extend the constructions is an alternative to canonical quantization. The constructions apply selection of appropriate functions as state labels, functions in four spacetime dimensions with Fourier transforms supported only on positive energies, to realize relativistic quantum physics. These constructions, designated here as unconstrained QFT, are unconstrained by conjecture to implement canonical quantization. An assumption that lingers in the established axioms for quantum mechanics is that observable, dynamic quantities correspond to self-adjoint Hilbert space operators. The constructions exploit that it is not necessary that classically observable quantities correspond to self-adjoint Hilbert space operators. More appropriate developments have been used in the practice of quantum mechanics, but these refinements are not explicitly captured in axiomatic descriptions of quantum mechanics [\ref{dirac}]. Quantum mechanics includes the change from a description of nature as distinguishable geometric objects with determined positions in configuration spaces to a description of elements in Hilbert spaces. Quantum mechanics resolves observable flaws of classical mechanics, from the extensivity of entropy to conservation laws for quantized quantities to discrete atomic spectra. Despite this, classical descriptions continue to be considered foundations for quantum dynamics. When canonical quantization is disregarded, the question of ``how is a classical interaction quantized'' is replaced by the more general ``what interactions are exhibited in the classical limits of quantum mechanics?'' This latter question results from consideration of explicit realizations of relativistic quantum physics such as the constructions. The question of whether there is a classical unification of the forces is not fundamental in this view of quantum mechanics without ``quantization''. In this view, no force is due to the geometry of spacetime, and all forces are manifest in appropriate, classical limits of the quantum theory. Dynamics are determined given the VEV of the quantum fields. The VEV are generalized functions dual to the dense set of functions selected as state labels, and the VEV determine the scalar product in the Hilbert space. The intended analogy with aether questions the use of classical, geometric objects to describe relativistic quantum physics.
Points or strings do not directly appear in a relativistic quantum mechanical development other than as interpretations of particular states. Difficulties in relativistic quantum physics may originate in the maintenance of familiar but ultimately unproductive concepts, analogous to aether. The risk from aether-like preconceptions is imposition of unnecessary constraints. The constructions are an example that realizes relativistic quantum physics by more fully exploiting quantum mechanical possibilities. \begin{quote} Otherwise the ``paradox'' is only a conflict between reality and your feeling of what reality ``ought to be.'' Richard Feynman, sect.~18, p.~9 [\ref{feynman}].\end{quote} \section*{References} \begin{enumerate} \item \label{gel2} I.M.~Gelfand and G.E.~Shilov, {\em Generalized Functions, Vol.~2}, trans.~M.D.~Friedman, A.~Feinstein, and C.P.~Peltzer, New York, NY: Academic Press, 1968. \item \label{wightman-hilbert} A.S.~Wightman, ``Hilbert's Sixth Problem: Mathematical Treatment of the Axioms of Physics'', {\em Mathematical Development Arising from Hilbert Problems}, ed.~by F.~E.~Browder, {\em Symposia in Pure Mathematics 28}, Providence, RI: Amer.~Math.~Soc., 1976, p.~147. \item \label{witten} {\em Quantum Fields and Strings: A Course for Mathematicians}, edited by P.~Deligne, D.~Kazhdan, P.~Etingof, J.W.~Morgan, D.S.~Freed, D.R.~Morrison, L.C. Jeffrey, E.~Witten, American Mathematical Society, 1999. \item \label{weinberg} S.~Weinberg, {\em The Quantum Theory of Fields, Volume I, Foundations}, New York, NY: Cambridge University Press, 1995. \item \label{intro} G.E.~Johnson, ``Introduction to quantum field theory exhibiting interaction'', Feb. 2015, arXiv:math-ph/\-1502.\-07727. \item \label{limits} G.E.~Johnson, ``Classical limits of unconstrained QFT'', Dec. 2014, arXiv:math-ph/\-1412.\-7506. \item \label{feymns} G.E.~Johnson, ``Fields and Quantum Mechanics'', Dec.~2013, arXiv:math-ph/\-1312.\-2608. \item \label{wigner} T.D.~Newton and E.P.~Wigner, ``Localized States for Elementary Systems'', {\em Rev.~Modern Phys.}, Vol.~21, 1949, p.~400. \item \label{yngvason-lqp} J.~Yngvason, ``Localization and Entanglement in Relativistic Quantum Physics'', Jan.~2014, arXiv:quant-ph/1401.2652. \item \label{johnson} G.E.~Johnson, ``Measurement and self-adjoint operators'', May 2014, arXiv:quant-ph/\-1405.\-7224. \item \label{ewg} B.S.~DeWitt, H.~Everett III, N.~Graham, J.A.~Wheeler in {\em The Many-worlds Interpretation of Quantum Mechanics}, ed.~B.S.~DeWitt, N.~Graham, Princeton, NJ: Princeton University Press, 1973. \item \label{bogo} N.N.~Bogolubov, A.A.~Logunov, and I.T.~Todorov, {\em Introduction to Axiomatic Quantum Field Theory}, trans.~by Stephen Fulling and Ludmilla Popova, Reading, MA: W.A.~Benjamin, 1975. \item \label{wizimirski} Z.~Wizimirski, ``On the Existence of a Field of Operators in the Axiomatic Quantum Field Theory'', {\em Bull. Acad. Polon. Sci., s\'{e}r. Math., Astr. et Phys.}, Vol.~14, 1966, p.~91. \item \label{borchers} H.J.~Borchers, ``On the structure of the algebra of field operators'', {\em Nuovo Cimento}, Vol.~24, 1962, p.~214. \item \label{epr} A.~Einstein, B.~Podolsky, N.~Rosen, ``Can Quantum-Mechanical Description of Physical Reality be Considered Complete?'', {\em Phys.~Rev.}, Vol.~47, 1935, p.~777. \item \label{feder} P.G.~Federbush and K.A.~Johnson, ``The Uniqueness of the Two-Point Function'', {\em Phys.~Rev.}, Vol.~120, 1960, p.~1926.
\item \label{greenberg} O.W.~Greenberg, ``Heisenberg Fields which vanish on Domains of Momentum Space'', {\em Journal of Math.~Phys.}, Vol.~3, 1962, pp.~859-866. \item \label{segal} I.E.~Segal and R.W.~Goodman, ``Anti-locality of certain Lorentz-invariant operators'', {\em Journal of Mathematics and Mechanics}, Vol.~14, 1965, p.~629. \item \label{masuda} K.~Masuda, ``Anti-Locality of the One-half Power of Elliptic Differential Operators'', {\em Publ. RIMS, Kyoto Univ.}, Vol.~8, 1972, p.~207. \item \label{dirac} P.A.M.~Dirac, {\em The Principles of Quantum Mechanics}, Fourth Edition, Oxford: Clarendon Press, 1958. \item \label{feynman} R.P.~Feynman, R.B. Leighton, and M.~Sands, {\em The Feynman Lectures on Physics, Volume III}, Reading, MA: Addison-Wesley Publishing Co., 1965. \end{enumerate} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{intro} Young star clusters and star-forming regions are benchmarks where current theories of star formation and stellar evolution can be tested and validated. However, this validation requires statistically representative samples where biases are minimal. The most common biases affecting the parameters of nearby young clusters are observational and thus related to the survey characteristics, such as completeness, limiting magnitude, and spatial extent. Methodological biases can also appear, for example, due to cuts in the observational space \citep{2018A&A...616A...9L} or due to the membership selection. In young clusters, for example, activity related to youth can result in variable luminosity and colour indices, which can impact the membership analysis and derived properties. Moreover, in the analysis of young star clusters and star-forming regions, the remnants of dust and gas from the parent molecular cloud can also extinguish the light of the newborn stars and introduce further biases into the inferred parameters of these young populations. In this article, we analyse the properties of the stellar groups in the nearby Perseus star-forming region using the \textit{Gaia} Data Release 3 \citep[DR3,][]{2022arXiv220800211G}, complemented with publicly available catalogues of radial velocity from the Apache Point Observatory Galactic Evolution Experiment \citep[APOGEE,][]{2022ApJS..259...35A} and the Set of Identifications, Measurements and Bibliography for Astronomical Data \citep[SIMBAD,][]{2000A&AS..143....9W}, and photometry from the Two Micron All Sky Survey \citep[2MASS,][]{2006AJ....131.1163S} and the Panoramic Survey Telescope and Rapid Response System \citep[PanSTARRS,][]{2016arXiv161205560C}. In particular, we focus on the phase-space (the joint space of 3D positions and 3D velocities), mass, and energy distributions of \object{IC348} and \object{NGC1333}, which are two of the youngest, nearest, and richest star-forming clusters in the solar vicinity \citep{2016ApJ...827...52L}. The phase-space structure, mass, and energy distributions of the young Perseus groups provide observational constraints to compare competing star-formation scenarios, theories about the origin and variability of the initial mass function, and models for the star formation history of this region. The rest of this article is organized as follows. In Sect. \ref{review}, we review the recent publications about the membership, the spatial, kinematic, and mass distributions, as well as the star formation history of young groups in the Perseus region. In Sects. \ref{dataset} and \ref{methods}, we introduce the data set and methodologies we use. Then, in Sect. \ref{results}, we describe our results about the membership of the groups, their phase-space structure, mass, extinction, and energy distributions, and their dynamical state. Afterwards, in Sect. \ref{discussion}, we compare our results with those from the literature and discuss the differences and implications. In Sect. \ref{conclusions}, we present our conclusions and future perspectives. Finally, Appendix \ref{appendix:assumptions} lists the assumptions that we adopt throughout this work. \section{Literature review} \label{review} \subsection{Membership} \label{intro:membership} IC348 and NGC1333 have been the subject of numerous studies in the literature. Concerning the membership status, we refer the reader to the excellent review by \citet{2016ApJ...827...52L}.
These authors not only compiled and assessed the membership status of previously known candidate members but also identified new ones based on several indicators, in particular near-IR spectroscopy. With the updated lists of candidate members, these authors analysed the clusters' ages, mass functions, disk fractions and spatial distributions. As a result, they identified 478 and 203 candidate members of IC348 and NGC1333, respectively (see Tables 1 and 2 of the aforementioned authors). Their survey is complete down to $K_s<16.8$ mag and $K_s<16.2$ mag, and within sky regions of 14\arcmin\ and 9\arcmin, for IC348 and NGC1333, respectively. Shortly after, \citet{2017AJ....154..134E} obtained spectra of 11 members from \citet{2016ApJ...827...52L} and confirmed that two and six are candidate members of IC348 and NGC1333, respectively. However, only 364 and 93 of \citet{2016ApJ...827...52L} members of IC348 and NGC1333, respectively, have \textit{Gaia} DR3 parallax, proper motions, and photometry. \citet{2018A&A...618A..93C} used \textit{Gaia} Data Release 2 \citep[DR2,][]{2018A&A...616A...1G} astrometry and a modified version of the Unsupervised Photometric Membership Assignment in Stellar Clusters algorithm \citep[UPMASK,][]{2014A&A...561A..57K} to identify candidate members of hundreds of open clusters in the Milky Way. These authors found 144 and 50 candidate members of IC348 and NGC1333, respectively. \citet{2018A&A...618A..59C} utilised \textit{Gaia} DR2 astrometry and the Density Based Spatial Clustering of Applications with Noise algorithm \citep[DBSCAN,][]{10.5555/3001460.3001507} in combination with an artificial neural network to discover 31 new candidate open clusters. In the Perseus region, they found three cluster candidates: UBC4, UBC19 and UBC31, with 44, 34, and 84 candidate members, respectively. They proposed that these clusters are substructures of the Per OB2 complex, although they noticed that UBC4 is located farther away at 570 pc (see their Sect. 5.4). \citet{2018ApJ...865...73O} combined observations of the Very Long Baseline Array (VLBA) with \textit{Gaia} DR2 and measured the distance and kinematics of IC348 and NGC1333. Based on $3\sigma$ clipping in the independent spaces of parallax and proper motions, these authors identified 133 and 31 members of IC348 and NGC1333, respectively (see their Table 7), of which 162 have \textit{Gaia} DR3 parallax, proper motions and photometry. \citet{2020AJ....160...57L} identified 12 new candidates for planetary-mass brown dwarfs in IC348 based on infrared images obtained with the Wide Field Camera 3 of the \textit{Hubble Space Telescope}. Their candidates have spectral types later than M8, while their faintest candidate reaches down to 4-5 $M_{Jup}$. Unfortunately, none of these sources has proper motions or parallax in the \textit{Gaia} DR3 catalogue as they are too faint. \citet{2020PASP..132j4401A} designed and implemented a medium band near-IR filter to detect low-mass stars and brown dwarfs. Using this filter, these authors surveyed $1.3$ square degrees of the IC348 and NGC1333 clusters, for which they identify 19 and 9 candidate members; however, only 13 and 3 of these sources, respectively, have \textit{Gaia} DR3 parallax, proper motions and photometry. \citet{2021MNRAS.503.3232P} used the \textit{Gaia} DR2 data to study the entire Perseus star-forming region. Through successive cuts and clusterings in the astrometric and photometric features, these authors report the discovery of several hundred members of five stellar groups with ages of 1-5 Myr.
Their list of members recovers 50\% and 78\% of the \citet{2016ApJ...827...52L} members of NGC1333 and IC348, respectively. In addition, they identify 170, 27, 329, 85, and 302 candidate members of the groups Alcaeus, Autochthe, Electryon, Heleus, and Mestor, respectively. Although these authors claimed the discovery of the previous five groups, 29 members of Electryon and another 29 members of Heleus belong to the UBC31 and UBC19 clusters found by \citet{2018A&A...618A..59C}. \citet{2021ApJ...917...23K} used the \textit{Gaia} DR2 data to identify $\sim\!3\times10^{4}$ young stars within a distance of 333 pc. Applying the Hierarchical Density Based algorithm \citep[HDBSCAN,][]{2017JOSS....2..205M} to this sample, the authors recover young associations like Orion, Perseus, Taurus, and Sco-Cen. They analyse the star-formation history of each group and find evidence of sequential star-formation propagating at a speed of $\sim\!4\ \rm{km\, s^{-1}}$. In Perseus, they identified 264 candidate members that were broadly classified into groups 1A, 1B, 2A and 2B, based on cuts in the plane of the sky to separate the eastern populations (Per 1A and Per 1B) from the western ones (Per 2A and Per 2B), and in age to separate the youngest (Per 1A and Per 2A) from the oldest (Per 1B and Per 2B). \citet{2022ApJ...931..156P} identified members of 65 open clusters using \textit{Gaia} Early Data Release 3 \citep[EDR3,][]{2021A&A...649A...1G} and an unsupervised algorithm based on the technique of self-organising maps after proper-motion cuts. In the sky region of Perseus, these authors found 211 candidate members in IC348, 353 in UBC31, and 80 and 230 in two substructures related to UBC31, which they claimed as new and called UBC31 group 1 and UBC31 group 2, respectively. However, comparing their candidate members with those of \citet{2021MNRAS.503.3232P}, we find that 177 (50\%) of UBC31 belong to Electryon, and 176 (82\%) of UBC31 group 2 belong to Mestor. Except for IC348, all these groups fall outside the sky region analysed here. \citet{2022AJ....164...57K} identified 810 members in the Perseus region using \textit{Gaia} EDR3 and HDBSCAN. In a previous application of HDBSCAN to other star-forming regions \citep{2019AJ....158..122K}, the authors normalised the data and used the same value for the HDBSCAN parameters of minimum sample size and minimum cluster size. However, in the Perseus region, they did not normalise the data and used a minimum sample size of 10 stars and a minimum cluster size of 25 stars. As a result, they found 43 Perseus groups, out of which they selected nine based on their absolute colour-magnitude diagram; the remaining groups were deemed unrelated to the Perseus region. Out of their nine groups, seven correspond to the groups identified by \citet{2021MNRAS.503.3232P}, while the other two correspond to the California cloud and a new one, called Cynurus, located outside the region analysed by \citet{2021MNRAS.503.3232P}. \citet{2022ApJ...936...23W} identified 211 members in the Perseus region by applying astrometric, photometric, radial velocity, and quality cuts to the \textit{Gaia} EDR3 data and confirming membership with spectroscopy from the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST). In addition to IC348 and NGC1333, these authors also identified two subgroups corresponding to the Autochthe group of \citet{2021MNRAS.503.3232P} and a new one associated with the Barnard 1 cloud.
\citet{2022AJ....164..125L} searched for substellar objects in the Perseus cloud regions of IC348 and Barnard 5 using narrow-band imaging centred on the water absorption feature at 1.45 $\mu$m. They confirmed three brown dwarfs in IC348 and discovered the first one in Barnard 5. In addition, these authors used \textit{Gaia} EDR3 to analyse the distances and proper motions of the Perseus regions of Barnard 1, L1448, and NGC1333. They confirmed that the western part of the region is closer than the eastern part, with Barnard 5 standing alone 100 pc away from the rest of the groups. \subsection{Spatial distribution} \label{intro:spatial_distribution} The cluster radius of IC348 was measured by \citet{1999A&AS..137..305S}, \citet{2000AJ....120.3139C}, and \citet{2003AJ....125.2029M}, all finding consistent values of about 10 to 15 arcmin. However, \citet{2003AJ....125.2029M}, studying the radial profile of the cluster, noticed that it shows two subregions (see their Fig. 4): a core within 5\arcmin, and a halo between 5\arcmin~and 10.33\arcmin, with this latter value corresponding to the limit of their survey. Nevertheless, the authors mentioned that they could not exclude the possibility that the cluster extends beyond their survey area to larger radii. Converting the previous angular measurements using the cluster distance \citep[320 pc,][]{2018ApJ...865...73O}, we obtain cluster radii of 0.9-1.4 pc, while the core radius of \citet{2003AJ....125.2029M} corresponds to 0.5 pc. \citet{2003AJ....125.2029M} also warned about the possibility of halo sources appearing within the core radius due to projection effects. The core and halo populations of IC348 were identified long ago by the pioneering work of \citet{1998ApJ...497..736H}. He identified these populations based on their $\rm{H_{\alpha}}$ emission, ages, and spatial locations and concluded that the core of IC348 ``is projected upon a population of somewhat older pre-main-sequence stars whose $\rm{H_{\alpha}}$ emission has largely decayed away''. He also observed an age gradient in which older ages are located at larger distances. Moreover, he proposed that this extended and somewhat older population formed in the same molecular cloud that gave birth to IC348 and NGC1333. \citet{2002A&A...384..145B} created a Compiled Catalogue of the Perseus region with astrometry and photometry of about 30\,000 stars distributed over an area of 10\degr~radius. The $\sim$1000 identified members of the Perseus OB2 complex are distributed in two populations occupying the same sky region but differing in proper motions and distance (see their Table 8). \citet{2007A&A...471L..33K} used deep infrared K-band data to analyse the spatial distribution of IC348 with the minimum spanning tree (MST) method. They concluded that the stellar population displays a clustered distribution, while the substellar one is homogeneously distributed in space within two times the cluster core radius. Although the substellar population is unbound, it remains within the cluster limits. We notice that these authors established the cluster limiting radius as the point where the density of the radial profile merges with that of the background, which is not an intrinsic property of the cluster but rather of its contrast with the background. \citet{2008MNRAS.389.1209S} analysed the spatial distributions of IC348 and NGC1333 by applying the nearest-neighbour and MST methods to infrared data.
They found that both clusters are centrally concentrated and were assembled from a hierarchical filamentary configuration that eventually built up a centrally concentrated distribution. They also found that the stellar populations of both clusters are mass segregated. \citet{2011ApJ...727...64K} identified substructures in nearby star-forming regions (including IC348) by applying the MST method to catalogues of young stellar objects (YSO) from the literature. In all the groups, particularly in the two at IC348, they found that the maximum-mass member is typically more than five times more massive than the median-mass member. Furthermore, these massive members are clustered in the central region and are associated with a relatively large density of low-mass stars. Given that all these groups are young, \cite{2011ApJ...727...64K} concluded that the observed configurations should be similar to the primordial ones. \citet{2017MNRAS.468.4340P} analysed IC348 and NGC1333 using the membership lists of \citet{2016ApJ...827...52L}. They estimated the spatial structure, mass segregation and relative local surface density of these two systems using methods like the MST and the $\mathcal{Q}$-parameter. They found that both clusters are centrally concentrated, with no evidence of mass segregation. They argue that the results of \citet{2011ApJ...727...64K} may be biased by the smaller list of members and their binning method. Afterwards, they compared their results with numerical simulations to estimate possible initial values for the densities and velocities. \citet{2018ApJ...865...73O} studied the distance and structure of the Perseus region by combining data from the VLBA and \textit{Gaia} DR2. They estimated distances of $320\pm26$ pc and $293\pm22$ pc to IC348 and NGC1333, respectively, and concluded that, given that the median parallax uncertainty is larger than the parallax dispersion, the depth of the groups cannot be extracted from these measurements. \citet{2018ApJ...869...83Z} determined distances to major star-forming regions in the Perseus molecular cloud using a new method that combines PanSTARRS and 2MASS stellar photometry, astrometric \textit{Gaia} DR2 data, and $\rm{^{12}CO}$ spectral-line maps. For IC348 and NGC1333, they estimated distances of $295\pm18$ pc and $299\pm17$ pc, respectively. These uncertainties result from the simple addition of the statistical and systematic ones reported in their Table 3. \citet{2021A&A...647A..14G} analysed the spatial and kinematic structure of four star-forming regions, including IC348. The authors applied their newly developed Small-scale Significant substructure DBSCAN Detection (S2D2) algorithm to the \citet{2016ApJ...827...52L} list of members (after removing binaries). They found that: i) the densities of the six identified substructures indicate that these are Poisson fluctuations rather than imprints of star-formation sites, and ii) the members are centrally concentrated in a radial distribution with an exponent between one and two. The results of these authors are thus consistent with a single and centrally concentrated star-formation event in IC348. \subsection{Velocity distributions} \label{intro:velocity} \citet{2015ApJ...807...27C} analysed the velocity distribution of IC348 utilising a subpopulation of 152 members with APOGEE radial velocity measurements.
They fitted these measurements using Gaussian Mixture Models \citep[GMMs, see][for examples of applications of mixture models in astronomy]{2017arXiv171111101K} with one and two components and found that the second component improved the fit. Their method allowed them to marginalise over the contribution of binaries thanks to a binary model that incorporates a wide range of binary configurations into the expected radial velocities. The authors hypothesised that this second component could arise from: i) contaminants from nearby groups, ii) a halo of dispersed/evaporated stars, or iii) a cluster that has not yet relaxed to a single Gaussian distribution. The authors argued that the measured velocity dispersion ($0.72\pm0.07\,\rm{km\, s^{-1}}$, or $0.64\pm0.08\,\rm{km\,s^{-1}}$ with two components) implies a super-virial state unless the gravitational potential has been underestimated due to, for example, unaccounted-for gas. They accounted for the gas and dust mass by adding 40 $\rm{M_\odot}$ and 210 $\rm{M_\odot}$ as lower and upper limits. They found no evidence of a gradient in the velocity dispersion with distance to the cluster centre or with stellar mass. However, they found evidence of convergence along the line of sight, which the small cluster rotation ($0.024\pm0.013\,\rm{km\,s^{-1}\,arcmin^{-1}}$) cannot explain. Using APOGEE radial velocity measurements and a methodology similar to that of \citet{2015ApJ...807...27C}, \citet{2015ApJ...799..136F} determined the velocity distribution of NGC1333 based on a sample of 70 members. They found a radial velocity dispersion of $0.92\pm0.12\,\rm{km\, s^{-1}}$, which is consistent with the virial velocity of the group. \citet{2018ApJ...865...73O} analysed the velocity distributions of IC348 and NGC1333 using VLBA and \textit{Gaia} DR2 data. They concluded that there is no evidence of expansion or rotation, given that the velocities they measured ($V_{\rm{Exp;IC348}}\!=\!-0.06\,\rm{km\, s^{-1}}$, $V_{\rm{Exp;NGC1333}}\!=\!0.19\,\rm{km\, s^{-1}}$, $V_{\rm{Rot;IC348}}\!=\![-0.16, 0.0, -0.10]\, \rm{km\, s^{-1}}$ and $V_{\rm{Rot;NGC1333}}\!=\![-0.10, 0.10, 0.19]\, \rm{km\, s^{-1}}$) are smaller than the observed velocity dispersions: $\sigma_{\rm{IC348}}=2.28\,\rm{km\, s^{-1}}$ and $\sigma_{\rm{NGC1333}}=2.0\,\rm{km\, s^{-1}}$. \citet{2019ApJ...870...32K} studied the internal kinematics of 28 young ($\lesssim$ 5 Myr) clusters and associations with \textit{Gaia} DR2 data. Using proper motions and distances, these authors computed transverse velocities in the plane of the sky. After correcting for perspective effects, they derived outward and azimuthal velocities, which are 2D proxies for the internal motions of expansion and rotation. In IC348 and NGC1333, these authors found no evidence of contraction or expansion. With respect to rotation, although they found non-zero azimuthal velocities of $-0.45\pm0.21\,\rm{km\, s^{-1}}$ and $-0.27\pm0.23\,\rm{km\, s^{-1}}$ in NGC1333 and IC348, respectively, they deemed these values not significant given the large observational uncertainties, and identified only one system, Tr 15, as having significant rotation. \subsection{Age distribution} \label{intro:ages} \citet{2003ApJ...593.1093L} determined the age of IC348 by comparing the \citet{1998A&A...337..403B} and \citet{2000ApJ...542..464C} theoretical isochrones of 1, 3, 10, and 30 Myr to their observational Hertzsprung-Russell (HR) diagram, which they constructed from spectroscopic measurements of effective temperatures and bolometric luminosities.
These authors found that the cluster members have ages compatible with 1 to 10 Myr, with a median value of 1-2 Myr (see their Fig. 9). \citet{2015MNRAS.454..593B} obtained an age of 6 Myr for IC348 based on photometric data for a list of confirmed members from the literature. These authors derived ages by comparing the extinction-corrected photometry to main-sequence evolutionary models as well as to their own semi-empirical pre-main-sequence isochrones. They concluded that for star-forming regions younger than 10 Myr, the age estimates in the literature are too young by a factor of two. \citet{1996AJ....111.1964L} estimated the age of NGC1333 at 1-2 Myr based on its large fraction of members bearing a disk (61\%). Later on, \citet{2008ApJ...674..336G} measured a fraction of $83\pm11$\% of members with a disk, confirming the young age of this group. \citet{2021MNRAS.503.3232P} derived the ages of the newly identified groups by comparing the photometry of their candidate members to that predicted by theoretical isochrones. According to their results, Autochthe is coeval with NGC1333, while Heleus and Mestor are slightly older, with some of their members close to or below the 1 Myr isochrone. Finally, Electryon and Alcaeus appear older, with lower disk fractions and members compatible with the 5 Myr isochrone. We notice that their age estimates were obtained without applying any extinction correction. These authors assumed that extinction was negligible based on the observation that the sequences of their candidate young stars were close to the theoretical isochrones of young stars. \cite{2021ApJ...917...23K} used PAdova and TRieste Stellar Evolution Code models \citep[PARSEC,][and references therein]{2012MNRAS.427..127B,2015MNRAS.452.1068C} and \textit{Gaia} photometry to determine isochrone ages of $6.0\pm3.2$ Myr and $4.7\pm0.5$ Myr for NGC1333 (Per1A) and IC348 (Per2A), respectively. These authors also derived ages of $\sim$17 Myr for the older, eastern populations of Perseus (Per1B and Per2B). Recently, several groups have estimated isochrone ages for the Perseus groups. \citet{2022AJ....164...57K} estimated group and individual-star ages and found that the age of the new Cynurus group is 7 Myr, whereas the rest of the Perseus groups have ages similar to those of \citet{2021MNRAS.503.3232P}. Also, \citet{2022ApJ...936...23W} used ancillary data from several photometric surveys to estimate ages of 5.4 Myr, 2.9 Myr, and 5.7 Myr for IC348, NGC1333, and the remaining cloud regions (i.e., Autochthe and the Barnard 1 group), respectively. Moreover, \citet{2022AJ....164..125L} estimated ages of 5 Myr for IC348 and the Barnard 5 group. \subsection{Mass distribution} \label{intro:mass} \citet{2003AJ....125.2029M} derived the mass distribution of IC348 down to 10 $M_{Jup}$ based on infrared (JHK) photometry. They found a mass distribution similar to that of the Trapezium, a brown-dwarf-to-star ratio of 25\%, and radial variations of the mass distribution on the parsec scale. They identified two distinct peaks in the mass distribution, which they attributed to the core and halo populations. They mentioned that to reconcile these two peaks, the age of the halo needed to be 5-10 Myr, and that although the age gradient reported by \cite{1998ApJ...497..736H} was in the correct direction, it was not large enough to account for this difference. \citet{2007ApJ...671..767T} analysed the mass distributions of the Trapezium, Taurus-Auriga, IC348 and the Pleiades.
They found evidence for correlated but disjoint populations of stars on the one hand and very low-mass stars and brown dwarfs on the other, which suggests different dynamical histories for the two populations. They obtained a ratio of one brown dwarf for every five stars, although in IC348 this ratio reaches up to 30\%. \citet{2012ApJ...745..131K} analysed the mass distributions of Taurus, Lupus 3, Cha I, and IC348 based on their previous results on the spatial distributions of these star-forming regions \citep{2011ApJ...727...64K}. They found that massive stars are preferentially located in regions with higher stellar surface density. Their results suggest strong evidence of this effect in Taurus and IC348, where stars typically have 10-20\% higher mean masses in the clustered environments. \citet{2013A&A...549A.123A} performed a large survey of IC348 to uncover its brown dwarf population. To this end, they used deep optical and near-infrared images from MegaCam and the Wide-Field Infrared Camera (WIRCam) to photometrically select candidate members. They also conducted a spectroscopic follow-up of their candidate members, which confirmed 16 new members, including 13 brown dwarfs. Five of these new members have L0 spectral types, corresponding to masses of about 13 $M_{Jup}$. Combining their new members with those from the literature, they constructed the cluster mass distribution and found no significant differences with the mass distributions of other young clusters. Based on a Kolmogorov-Smirnov (KS) test, they concluded that the IC348 mass distribution is well fitted by a \citet{2003PASP..115..763C} log-normal distribution. However, we notice that their mass bin at $\log(M/M_{\odot})\sim-1.2$ shows a deficit with respect to the density predicted by the \citet{2003PASP..115..763C} mass distribution. Interestingly, this feature is predicted by the mass distribution of \citet{2007ApJ...671..767T}, as can be observed when comparing Fig. 7 of \citet{2007ApJ...671..767T} with Fig. 11 of \citet{2013A&A...549A.123A}. \citet{2013ApJ...775..138S} determined the mass distributions of IC348 and NGC1333 for a wide range of isochrone ages, models, extinctions and distances. They warned about the strong dependence of the results on these parameters and pointed out the importance of comparing under similar assumptions. They found brown-dwarf-to-star ratios of 40\% to 50\% in NGC1333 and 25\% to 35\% in IC348. This latter value is in agreement with that of \citet{2007ApJ...671..767T}. Comparing these two clusters, they found differences in their cumulative distributions that result from a relative excess of low-mass stars in NGC1333. They concluded that the environment plays an important role, with higher-density regions producing larger fractions of low-mass objects, as predicted by gravitational fragmentation models in which filaments fall into the cluster potential. \subsection{Extinction} \label{intro:extinction_distribution} \citet{2013MNRAS.428.1606F} investigated the shape of the extinction law in two regions of the Perseus molecular cloud. They combined red-optical and near-infrared images from Megacam and the UKIRT Infrared Deep Sky Survey (UKIDSS) to measure the colours of background stars. They developed a Bayesian hierarchical model to simultaneously infer the parameters of individual stars as well as those of the population, and found a strong correlation between the extinction ($A_v$) and the slope of the extinction law ($R_v$), which they interpreted as evidence of grain growth.
Later on, based on the correlation found by the previous authors, \citet{2018ApJ...869...83Z} adopted $R_v=3.3$ for moderate extinction values of $A_v$ up to 4 mag. These latter authors determined distances to the Perseus molecular clouds using CO spectral-line maps, photometry and \textit{Gaia} DR2 parallaxes. \citet{2016ApJ...826...95C} obtained dust emissivity spectral index, dust temperature and optical depth maps of the Perseus molecular cloud by fitting spectral energy distributions to combined \textit{Herschel} and James Clerk Maxwell Telescope (JCMT) data. They found that the distribution of the dust emissivity spectral index varies from cloud to cloud, which indicates grain growth. This effect was already reported by \citet{1974PASP...86..798S} based on multiband photometry of 20 bright IC348 members. \citet{2016ApJ...826...95C} also found evidence of heating from B stars and embedded protostars, as well as of outflows. \citet{2016A&A...587A.106Z} derived optical depth and temperature maps of the clouds in the Perseus region based on \textit{Planck}, \textit{Herschel} and 2MASS data. Their maps have resolutions from 36 arcsec to 5 arcmin and a dynamic range such that the extinction in this region reaches up to 20 mag in $A_K$. \citet{2019ApJ...887...93G} determined 3D maps of dust reddening based on \textit{Gaia} parallaxes and stellar photometry from PanSTARRS and 2MASS. Thanks to a spatial prior, they obtained smooth maps with isotropic clouds and small distance uncertainties. They made their map available online through the \textit{dustmaps} package. Later on, \citet{2020A&A...639A.138L} used variational inference and a Gaussian process to infer highly resolved 3D dust maps out to 400 pc. A detailed comparison between the 3D dust maps of \citet{2020A&A...639A.138L} and \citet{2019ApJ...887...93G} is shown in Figs. 9 to 11 of the former authors. We notice that although the maps of \citet{2020A&A...639A.138L} show better spatial resolution than those of \citet{2019ApJ...887...93G}, the latter have better 2D sky resolution \citep[see Fig. 11 of][]{2020A&A...639A.138L}. \cite{2021ApJ...914..122D} used optical and near-infrared stellar polarimetry in combination with \textit{Gaia} DR2 parallaxes to study the magnetic field polarisation in the Perseus molecular cloud. They found a bimodal distribution of polarisation angles, which they identified with foreground and background molecular clouds. The foreground cloud is located at $\sim\!150$ pc and contributes $A_G\!\sim\!0.3$ mag to the extinction. On the other hand, the background cloud is at $\sim\!300$ pc and has a larger contribution to the extinction: $A_G\!\sim\!1.6$ mag. These authors interpreted the two clouds as the edges of an ellipsoidal HI shell of about 100-160 pc in size created by the Per OB2 association, with its foreground edge coinciding with the Taurus molecular cloud. \subsection{Star-formation history} \label{intro:history} \citet{1996AJ....111.1964L} suggested that the ongoing star formation in IC348 and NGC1333, located at the opposite ends of the Perseus cloud complex, is produced by a similar physical mechanism, with the main difference between the two clusters being that IC348 has produced stars for a longer time than NGC1333. \citet{1998ApJ...497..736H} proposed the following four-point scenario for the star-formation history of the Perseus region. First, the formation of stars in this region has been taking place for at least 10-20 Myr, with the latest episodes corresponding to IC348 and NGC1333.
This first star-formation episode created the OB stars $o$ Per and $\zeta$ Per together with low-mass members that are expected to be spread over a large region. Most of this region has been emptied of molecular gas and dust, except for the Perseus ridge. Second, there is a population of young stars with ages of 1-12 Myr in and around IC348. It is composed of the bright A, B, and F stars as well as ``field'' ones entangled with IC348 and at its boundaries. The $\rm{H_{\alpha}}$ emission lines of these stars have decayed below the limit of detection. Third, within IC348, there is a population of stars with $\rm{H_{\alpha}}$ emission that are probably younger than the population of the previous point but entangled with it. Fourth, in the densest parts of the Perseus ridge, there is ongoing star formation, as suggested by the highly embedded sources. \citet{1999AJ....117..354D} compiled a comprehensive census of OB associations within 1 kpc of the Sun based on \textit{Hipparcos} data. The results of their census are in qualitative agreement with a large-scale star-formation scenario in which the Scorpius-Centaurus-Lupus-Crux, Orion, Perseus, and Lacerta associations formed $\sim$20 Myr ago out of the bubble blown by the high-mass stars of the Cas-Tau association. \cite{2002A&A...387..117B} considered that the two populations they identified in \cite{2002A&A...384..145B} constitute an example of propagated star formation. It started in the Per OB2b region approximately 30 Myr ago, continued in Per OB2a 10 Myr ago, and is now in progress at the southern border of Per OB2, where IC348 is located. \cite{2021MNRAS.503.3232P} suggested that the older groups (Alcaeus, Electryon, Heleus, and Mestor) are closer to the Galactic plane, at latitudes $b>-19\degr$, while the younger ones (NGC1333 and Autochthe) are at higher latitudes ($b<-19\degr$). They also point out that NGC1333 and Autochthe are part of the same star-formation event, particularly due to the similarity of their properties and their proximity. These two groups, together with IC348, are the only ones with ongoing star formation, while the other, older groups in the region have stopped forming stars. \cite{2021ApJ...917...23K}, using \textit{Gaia} and HDBSCAN, identified two populations in Per OB2, Per A and Per B, corresponding to the younger and older populations of the complex, respectively. They subdivided each of these populations into two distance subgroups, Per-1 at 283 pc and Per-2 at 314 pc, with Per-1A and Per-2A corresponding to NGC1333 and IC348, respectively, and Per-1B and Per-2B to their corresponding eastern extensions. While Per A is young (NGC1333: $6.0\pm3.2$ Myr and IC348: $4.7\pm0.5$ Myr) and concentrated, Per B is older (Per1B: $17.5\pm0.9$ Myr and Per2B: $17.1\pm1.1$ Myr) and sparser (35 pc). These authors suggest that, due to their kinematic similarities, Per 1 and Per 2 most likely formed in the same star-forming process. However, they notice that a continuous star-forming process between the two populations seems unlikely due to the considerable time lag and the lack of an age gradient. Instead, they hypothesise that, in a parallel fashion, the feedback from the first generation (Per B) dispersed the gas of the parent cloud but did not prevent the continuous in-falling flow of external material, which resulted in a new star-formation burst (Per A). In the end, this process produced two distinct epochs of star formation in both Per 1 and Per 2.
\cite{2021ApJ...919L...5B} used different indicators (e.g., X-rays, HI and $^{26}$Al) to identify a dust-free cavity between the Perseus and Taurus star-forming regions, which they call the Per-Tau shell. It most likely formed through one or multiple supernova episodes that created a super-bubble, which swept up the interstellar medium and created today's extended shell, whose age is $\simeq$6-22 Myr. These authors hypothesise that the supernova that created the shell may have had its origin in: a) a young (<20 Myr) star cluster with a total mass between 800 and 3300 $M_\odot$, b) a single supernova from a dynamically ejected O or B star, or c) an ultra-luminous X-ray source. They mention that the most likely scenario is the first one, given that evidence supporting the existence of a young (<20 Myr) population has been found in Taurus and Perseus. In Perseus, this young population corresponds to the groups identified by \citet{2021MNRAS.503.3232P}. \citet{2022Natur.601..334Z} found that almost all the star-forming complexes in the solar neighbourhood lie on the surface of the Local Bubble, with the young stars expanding and their motions perpendicular to the Bubble's surface. Their trace-back analysis supports a scenario in which the Local Bubble was formed by supernovae approximately 14 Myr ago. The only nearby star-forming complex that does not lie on the Local Bubble's surface is the Perseus complex, which is related to the Local Bubble through the Taurus star-forming complex and the Per-Tau shell. \citet{2022AJ....164...57K} analysed three star-formation scenarios to explain the observed kinematics of the Perseus region: a supernova explosion, a cloud-cloud interaction, and the first generation of stars from the Per-Tau shell, with the most likely scenario being the collision of two clouds. According to these authors, the evidence supporting the other two scenarios is not conclusive. \citet{2022ApJ...936...23W} found that the Perseus region also follows the star-formation scenario of the expanding Local Bubble \citep{2022Natur.601..334Z,1987ARA&A..25..303C}, except that in this case the Per-Tau shell interacts with both the Local Bubble and the Perseus molecular cloud, resulting in the elongated shape of the latter. The ages and positions of the Perseus groups suggest that Electryon, Heleus, and Mestor are far away from the Per-Tau shell and unrelated to it. On the contrary, Alcaeus, Autochthe, IC348 and NGC1333 are near the edge of the shell and may have formed during the same star-formation event. \cite{2022AJ....164..125L} propose a star-formation scenario in which a supernova explosion in the Perseus region triggered the star formation of the region by driving an HI super-shell, whose centre lies in the eastern part of the Perseus region. \section{Data} \label{dataset} \subsection{\textit{Gaia} Data Release 3} \label{dataset:GDR3} We downloaded\footnote{\url{https://gea.esac.esa.int/archive/}} the astrometry and photometry of 164 502 \textit{Gaia} DR3 sources within the sky region 51\degr<\texttt{ra}<59\degr~and 30\degr<\texttt{dec}<33\degr, with proper motions within -100 $\rm{mas\, yr^{-1}}$ < \texttt{pmra} < 200 $\rm{mas\, yr^{-1}}$ and -200 $\rm{mas\, yr^{-1}}$ < \texttt{pmdec} < 100 $\rm{mas\, yr^{-1}}$. From these, 163 178 sources (99.2\%) have observed proper motions and photometry, which are necessary to apply our membership methodology.
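A minimal sketch of this query, assuming the \texttt{astroquery} package (the exact column selection is illustrative), is the following:
\begin{verbatim}
# Sketch of the Gaia archive query described above.
from astroquery.gaia import Gaia

adql = """
SELECT source_id, ra, dec, parallax, parallax_error,
       pmra, pmra_error, pmdec, pmdec_error,
       phot_g_mean_mag, phot_bp_mean_mag, phot_rp_mean_mag
FROM gaiadr3.gaia_source
WHERE ra BETWEEN 51 AND 59 AND dec BETWEEN 30 AND 33
  AND (pmra  IS NULL OR pmra  BETWEEN -100 AND 200)
  AND (pmdec IS NULL OR pmdec BETWEEN -200 AND 100)
"""

job = Gaia.launch_job_async(adql)  # asynchronous: large result set
sources = job.get_results()        # astropy Table of sources
\end{verbatim}
The \texttt{IS NULL} clauses keep sources without measured proper motions, consistent with the fact that a small fraction of the downloaded sources lack them.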
Our initial list of members comprises the 194 candidate members, with membership probabilities > 0.5, found by \citet{2018A&A...618A..93C} in the IC348 and NGC1333 open clusters. We use this sample of members due to its purely \textit{Gaia} origin and the simplicity of its membership algorithm. Although the pre-\textit{Gaia} sample of \cite{2016ApJ...827...52L} is the most extensive one in the literature, we do not use it because it contains several contaminants in the form of astrometric outliers (see Sect. \ref{discussion:members}). We processed the astrometric data by only applying a parallax zero-point correction of $-17\,\rm{\mu as}$ \citep[see Sect. 2.2 of][]{2021A&A...649A...1G}. As stated in Sect. 7 of \citet{2021A&A...649A...1G}, the current parallax bias correction as a function of magnitude, colour, and ecliptic latitude is only a tentative recipe. \subsection{Complementary data} \label{dataset:complementary_data} We complement the \textit{Gaia} DR3 data of our candidate members with APOGEE and SIMBAD radial velocities, as well as 2MASS and PanSTARRS photometry. Out of our 1052 candidate members (see Sect. \ref{results:membership}), 428, 407, and 149 have radial velocity entries in APOGEE, SIMBAD, and \textit{Gaia} DR3, respectively. Whenever a source has multiple radial velocity entries, we select one based on the following order of preference: APOGEE, \textit{Gaia} DR3, and SIMBAD. In the case of APOGEE, we use as radial velocity uncertainty the dispersion of the several measurements (i.e., the VSCATTER column), except when it is zero, in which case we use the individual uncertainty (i.e., the VERR column). In some cases, the radial velocity catalogues report missing or zero-valued uncertainties for sources with non-missing radial velocities. Thus, we process the radial velocities as follows. If a star has either a missing uncertainty but a non-missing value, or an uncertainty larger than 50 $\rm{km\, s^{-1}}$, then we set the uncertainty to 50 $\rm{km\, s^{-1}}$. This large value diminishes the contribution of the source while avoiding discarding it. Similarly, if the uncertainty is smaller than 0.01 $\rm{km\, s^{-1}}$, we replace it with the latter value. This uncertainty floor avoids convergence issues in our kinematic inference methodology (see Sect. \ref{methods:6D_structure}). After the previous processing, a total of 626 (60\%) of our candidate members have radial velocity measurements, with a median uncertainty of 0.75 $\rm{km\, s^{-1}}$.
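The following minimal sketch illustrates these processing rules, assuming a \texttt{pandas} data frame with hypothetical columns \texttt{rv} and \texttt{rv\_error} (in $\rm{km\,s^{-1}}$):
\begin{verbatim}
import numpy as np
import pandas as pd

# Hypothetical inputs: radial velocities and their uncertainties.
df = pd.DataFrame({"rv":       [15.3, 9.8, np.nan, 12.1],
                   "rv_error": [np.nan, 120.0, 0.5, 0.001]})

RV_ERR_CEIL = 50.0   # km/s: down-weights dubious measurements
RV_ERR_FLOOR = 0.01  # km/s: avoids convergence issues

has_rv = df["rv"].notna()
# Missing or overly large uncertainties are set to the ceiling value.
bad = has_rv & (df["rv_error"].isna() | (df["rv_error"] > RV_ERR_CEIL))
df.loc[bad, "rv_error"] = RV_ERR_CEIL
# Tiny uncertainties are raised to the floor value.
small = has_rv & (df["rv_error"] < RV_ERR_FLOOR)
df.loc[small, "rv_error"] = RV_ERR_FLOOR
\end{verbatim}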
In addition, we query the \textit{Hipparcos} \citep{1997A&A...323L..49P} data in the same sky region as defined above and find 47 sources. Out of these, 46 have a cross-match in \textit{Gaia} DR3, and the remaining one has parallax and proper motions ($\mu_{\alpha}=-139.6\pm0.89\, \rm{mas\, yr^{-1}}$, $\mu_{\delta}=20.19\pm1.05\, \rm{mas\, yr^{-1}}$, $\varpi=18.32\pm0.91$ mas) clearly incompatible with the Perseus groups. For this reason, we assume that the \textit{Gaia} DR3 catalogue of the Perseus region is complete at the bright end. \section{Methods} \label{methods} The following sections describe the methodology that we use to determine the properties of the stellar content of the Perseus region. We start by describing the membership methodology; afterwards, we describe the steps to identify the distinct physical populations as well as their properties: empirical isochrones, magnitude distributions and mass distributions. We base the inference of the properties of a physical group on the obtained list of members (see Assumption \ref{assumption:groups_independency} in Appendix \ref{appendix:assumptions}). \subsection{Membership selection} \label{methods:membership} We determine members of the Perseus star-forming region using the \textit{Miec} code \citep{2021A&A...649A.159O}, which is an improvement over the Bayesian hierarchical methodology developed by \citet{2018A&A...617A..15O} and focuses on the analysis of extincted nearby stellar clusters. Briefly, \textit{Miec} is a statistical model that describes the observed astrometry and photometry of possibly hundreds of thousands of sources in a sky region that encompasses an open cluster. It delivers candidate members of the open cluster as well as its astrometric (proper motions and parallax, if available) and photometric (colour index and photometric bands) distributions. The likelihood of the data is a mixture of the field and cluster models, where the former consists of independent multivariate GMMs in the astrometric and photometric spaces, and the latter is made of a GMM in the astrometric space and a multivariate Gaussian in the photometric one. The median value of the latter corresponds to the cluster photometric sequence, in which each photometric band is modelled by a spline function of the colour index. The model is Bayesian because it infers the posterior distribution of the cluster parameters given the likelihood of the data and the prior distribution, with the latter constructed from the initial list of members. Once the posterior distributions of the model parameters have been inferred (through Markov Chain Monte Carlo methods), the cluster membership probability of each source in the data set is computed using Bayes' theorem and the cluster and field likelihoods as follows: \begin{equation} \label{equation:probability} \begin{split} & Probability(cluster|data,\mathcal{M}_{cluster},\mathcal{M}_{field})= \\ & \frac{\mathcal{L}(data|\mathcal{M}_{cluster})\cdot \mathcal{P}(cluster)}{\mathcal{L}(data|\mathcal{M}_{cluster})\cdot \mathcal{P}(cluster)+\mathcal{L}(data|\mathcal{M}_{field})\cdot \mathcal{P}(field)}, \end{split} \end{equation} where $\mathcal{L}$, $\mathcal{M}$, and $\mathcal{P}$ stand for likelihood, model, and prior, respectively. We use as prior probabilities for the field and cluster their fractions of sources in the entire data set.
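As an illustration, Eq. \ref{equation:probability} can be evaluated in log space for numerical stability; a minimal sketch with hypothetical log-likelihood inputs is:
\begin{verbatim}
import numpy as np

def membership_probability(loglike_cluster, loglike_field,
                           prior_cluster, prior_field):
    # Posterior cluster probability of the equation above,
    # computed as exp(a) / (exp(a) + exp(b)) via logaddexp.
    a = np.log(prior_cluster) + loglike_cluster
    b = np.log(prior_field) + loglike_field
    return np.exp(a - np.logaddexp(a, b))

# Priors taken as the fractions of sources in the entire data set.
p = membership_probability(-10.2, -15.8, 0.006, 0.994)
\end{verbatim}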
The \textit{Miec} code was designed for open clusters, and thus it models the astrometric features of the representation space using GMMs in which all Gaussian components share the same mean value, that is, they are concentric \citep[for more details see][]{2018A&A...617A..15O,2021A&A...649A.159O}. However, the members of star-forming regions have proper motions and parallax distributions that are not necessarily concentric (see, for example, the parallax and proper motions of the Taurus candidate members depicted in Fig. 5 of \citealt{2019A&A...630A.137G}). For this reason, we modify the \textit{Miec} code to deal with the dispersed populations present in star-forming regions by allowing non-concentric GMMs in the proper motions and parallax features. In the \textit{Miec} methodology, the representation space (i.e., the set of observable features) is of paramount importance since it allows the disentanglement of the field and target populations. Given the known issue of the overestimated \textit{Gaia} \texttt{BP} flux for faint red sources \citep[see Sect. 8.1 of][]{2021A&A...649A...3R}, we use as colour index \texttt{G-RP} instead of \texttt{BP-RP}. Thus, our choice for the representation space comprises the following \textit{Gaia} features: \texttt{pmra, pmdec, parallax, G-RP, BP,} and \texttt{G}. The \textit{Miec} code requires that the spline functions describing the cluster photometric sequence be injective functions of the colour index \citep{2018A&A...612A..70O,2021A&A...649A.159O}. However, the previous representation space only allows this condition to be fulfilled for \texttt{G} values fainter than 5-6 mag. For this reason, we search for Perseus candidate members brighter than this magnitude limit using only their astrometric membership probabilities, which are also delivered by \textit{Miec}. We classify sources brighter than \texttt{G}$\sim$5 mag as candidate members if their astrometric membership probability is larger than the $3\sigma$ equivalent of 0.997. We choose this highly conservative probability threshold given that, in these cases, the discriminant photometric information is not taken into account. In addition to the membership probabilities, the code delivers the astrometric and photometric distributions of the target population. While the astrometric distributions are multivariate mixtures in the joint space of proper motions and parallax, the photometric ones are multivariate mixtures in the joint space of colour index and photometric bands. More details about the methods used to obtain the astrometric, colour-index and magnitude distributions can be found in \citet{2018A&A...617A..15O}. Since the Perseus region contains several dust and gas clouds, we use the extinction module of \textit{Miec} \citep[see Sect. 2.2 of][]{2021A&A...649A.159O}. Briefly, this module permits the extraction of the extinction-free population parameters (i.e., those that describe the cluster's or group's colour-index and magnitude distributions) by marginalising over the possible extinction values, $A_v\in[0,A_{v,max}]$, of each source. For the maximum extinction value, $A_{v,max}$, we use the 3D extinction map of \citet{2019ApJ...887...93G}\footnote{We query the extinction map at each source position using the \textit{dustmaps} python package \citep{2018JOSS....3..695M}.} at the group distance. We prefer the maps of \citet{2019ApJ...887...93G} over those of \citet{2020A&A...639A.138L}, given their better 2D sky resolution and because we are interested in individual stars rather than in the 3D structure of the dust clouds. As explained in \citet{2021A&A...649A.159O}, the extinction module of the \textit{Miec} code faces two main caveats: an increased contamination rate due to sources with missing values, and a reduced recovery rate for sources with high extinction values. Given that our data set comprises sources with complete astrometry and photometry (see Sect. \ref{dataset}), that the mean extinction towards the Perseus region provided by the 3D extinction map is $A_{v,max}\sim 3\pm2$ mag, and that we classify members based on a probability threshold that is optimised as a function of the \texttt{G} magnitude, we expect the performance of the \textit{Miec} code under our specific conditions to be better than under the extreme conditions reported in Table C.3 of \citet{2021A&A...649A.159O}. In other words, we expect a recovery rate better than 87\% and a contamination rate lower than 7\%.
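As an illustration, the per-source query of the \citet{2019ApJ...887...93G} map through the \textit{dustmaps} package can be sketched as follows (the coordinates are hypothetical, the map data must have been fetched beforehand, and the conversion of the returned native reddening units to $A_v$ depends on the adopted extinction coefficients):
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord
from dustmaps.bayestar import BayestarQuery

bayestar = BayestarQuery(version='bayestar2019')  # Green et al. map

# Query at a (hypothetical) source position and group distance.
coord = SkyCoord(ra=56.1*u.deg, dec=32.2*u.deg,
                 distance=315.*u.pc, frame='icrs')
reddening = bayestar(coord, mode='median')  # native reddening units
\end{verbatim}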
Although the physical members of distant open clusters can be identified using clustering methods working in the proper-motions-parallax space, nearby clusters and dispersed stellar populations extending several degrees on the sky are affected by projection effects that distort the proper-motions-parallax space and make the identification of their members difficult. Thanks to the improved non-concentric GMM, the \textit{Miec} code is now flexible enough to accommodate the possible distortions in the proper-motions-parallax space created by these projection effects. Nonetheless, these distortions increase the mixing of the physical groups in the proper-motions-parallax space and hinder their disentanglement. This effect can be seen in Fig. 10 of \citet{2021MNRAS.503.3232P}, where the proper motions of the Perseus groups are heavily mixed. To disentangle these populations, we iteratively run the \textit{Miec} and \textit{Kalkayotl} \citep{2020A&A...644A...7O} codes. The first one identifies the candidate members in the astro-photometric space, while the second one separates the groups in phase space (see below). \subsection{Phase-space structure} \label{methods:6D_structure} We analyse the phase-space distribution of the Perseus star-forming region using the \textit{Kalkayotl} code \citep[][a new version of the code is in preparation]{2020A&A...644A...7O}. This code implements a Bayesian hierarchical model that allows the joint inference of stellar positions, velocities, and population parameters without imposing a fixed prior. On the contrary, the code allows us to test different 1D (distance), 3D (positions) or 6D (positions+velocities) prior families and infer their parameters based on the \textit{Gaia} data. Moreover, it corrects for the parallax and proper-motion angular spatial correlations and zero points using the values provided by \citet{2021A&A...649A...2L}. We notice that the phase-space Cartesian coordinates returned by \textit{Kalkayotl} are in the equatorial ICRS reference system rather than in a Galactic Cartesian one. Throughout the rest of this work, unless stated otherwise, we will use this equatorial reference system and the names X, Y, and Z for the 3D positions and U, V, and W for the 3D velocities. As mentioned in the previous section, the identification of the physical groups is a mandatory step for the subsequent astrophysical analyses. Given that the region is known to host several populations (see Sect. \ref{intro}), we use the \textit{Kalkayotl} code to probabilistically disentangle the possible physical groups. We model the stellar positions and velocities of the Perseus star-forming region using 6D GMMs (see Assumption \ref{assumption:gaussian} in Appendix \ref{appendix:assumptions}). We classify the candidate members into the physical groups by assigning each one to the Gaussian component that maximises its membership probability. It is important to notice that the modelling of the phase-space structure is computationally expensive because the number of inferred parameters, $N_p$, grows linearly with the number of sources: $N_p=(6\times N_s) + (28\times N_c - 1)$, with $N_s$ and $N_c$ the number of sources and Gaussian components, respectively. In this equation, the first term corresponds to the parameters of the individual sources, with six phase-space coordinates for each of them, and the second term corresponds to the global or population parameters: each Gaussian component needs 28 parameters, namely 21 for the covariance matrix, six for the median, and one for the weight. Given that the components' weights are restricted to add up to one, there is one non-free weight. Our iterative approach to identifying candidate members and physical groups proceeds as follows. Once the candidate members of the entire region have been identified by the \textit{Miec} code, we use the \textit{Kalkayotl} code to fit their observables using 6D GMMs with one to six components. We consider that several Gaussian components pertain to the same physical group if their medians are mutually contained within one Mahalanobis distance (see Assumption \ref{assumption:gaussian} in Appendix \ref{appendix:assumptions}); otherwise, each Gaussian component corresponds to a distinct physical group. We reject as physical groups those Gaussian components for which: i) the Hamiltonian Monte Carlo sampler does not converge \citep[see][]{2020A&A...644A...7O}, or ii) the contribution to the mixture is $\lesssim 5\%$. We recursively fit 6D GMMs to each physical group until Assumption \ref{assumption:gaussian} (see Appendix \ref{appendix:assumptions}) is fulfilled. This recursive fit allows us to reject non-physical members in the 3D space that, due to projection effects, have astrometric features similar to those of the bulk of the group.
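The following minimal sketch illustrates this mutual-containment criterion for two components with hypothetical means \texttt{mu\_a}, \texttt{mu\_b} and covariance matrices \texttt{cov\_a}, \texttt{cov\_b}:
\begin{verbatim}
import numpy as np

def mahalanobis(mu_ref, cov_ref, mu_other):
    # Mahalanobis distance of mu_other from N(mu_ref, cov_ref).
    d = mu_other - mu_ref
    return float(np.sqrt(d @ np.linalg.solve(cov_ref, d)))

def same_physical_group(mu_a, cov_a, mu_b, cov_b):
    # Merge only if the centres are *mutually* within one
    # Mahalanobis distance of each other.
    return (mahalanobis(mu_a, cov_a, mu_b) <= 1.0 and
            mahalanobis(mu_b, cov_b, mu_a) <= 1.0)
\end{verbatim}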
Once the physical groups have been disentangled, we join the list of members of each of them to the field population and run the \textit{Miec} code again, independently, on the resulting data set. The independent run of \textit{Miec} on each identified group ensures that the candidate members' uncertainties (the photometric ones in particular) are propagated into the group's empirical isochrone and mass distribution (see Sects. \ref{methods:isochrones} and \ref{methods:mass}). We iterate the procedure until the number of candidate members of each physical group converges under Poisson uncertainties. To prevent the radial velocities of unresolved binary stars from biasing the group-level parameters (e.g., the internal velocity dispersion) of the Perseus groups, we make an additional run of \textit{Kalkayotl} using as input the final list of members of each group; this time, however, we set as missing the radial velocities of sources lying more than $3\sigma$ away from any of the group's mean U, V, and W space velocities. Setting the radial velocity as missing rather than removing the source avoids discarding the astrometric information of these sources. Finally, we also analyse the internal kinematics of the identified physical groups by searching for evidence of expansion or rotation. To do this, we compute the dot and cross products of the position and velocity vectors of each candidate member in the reference system of its parent group. The average values of these vector products are proxies for the expansion and rotation rates of stellar groups \citep[see][]{2019A&A...630A.137G,2021A&A...654A.122G}.
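A minimal sketch of these proxies, assuming hypothetical \texttt{(N, 3)} arrays of member positions and velocities together with the group mean position and velocity, is:
\begin{verbatim}
import numpy as np

def expansion_rotation_proxies(pos, vel, mu_pos, mu_vel):
    # Positions and velocities relative to the group centre.
    r = pos - mu_pos
    v = vel - mu_vel
    dot = np.einsum("ij,ij->i", r, v)  # > 0 on average: expansion
    cross = np.cross(r, v)             # net r x v: rotation signal
    return dot.mean(), cross.mean(axis=0)
\end{verbatim}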
\subsection{Empirical isochrones and age estimates} \label{methods:isochrones} The empirical isochrones of the physical groups are inferred from the data and delivered by \textit{Miec} as a by-product. These empirical isochrones are cubic spline functions that model the mean value of the \texttt{BP} and \texttt{G} magnitudes as functions of the colour index \texttt{G-RP}. We notice that, thanks to the use of the extinction module (see Sect. \ref{methods:membership}), these empirical isochrones are free of extinction. We estimate the age of each physical group by comparing its extinction-free empirical isochrone with the theoretical ones from the PARSEC \citep{2020MNRAS.498.3283P,2013MNRAS.434..488M}, MESA Isochrones \& Stellar Tracks \citep[MIST,][]{2016ApJS..222....8D,2016ApJ...823..102C}, and BT-Settl \citep{2014IAUS..299..271A} models. We are aware that this dating method provides only a rough estimate of the group age due to the well-known issues of theoretical isochrones in reproducing the observed colour-magnitude diagrams of young clusters \citep[e.g.,][]{2015MNRAS.454..593B,2015A&A...577A.148B,2022MNRAS.513.5727B}. \subsection{Mass distributions} \label{methods:mass} We infer the mass distribution of each physical group using two different methods, both based on the PARSEC, MIST and BT-Settl theoretical isochrones at each group's estimated age (see Sect. \ref{methods:isochrones}). The first method uses the \textit{Sakam} code \citep{2019A&A...625A.115O} to independently infer the mass of each candidate member, while the second method transforms the group's magnitude distributions delivered by \textit{Miec} into mass distributions. None of the theoretical isochrone models that we use fully covers the magnitude interval of our candidate members. Thus, when used independently, these models introduce border artefacts in the resulting mass distributions. We overcome this problem by computing a unified theoretical model that we call PMB (standing for PARSEC-MIST-BT-Settl), in which we fit cubic splines to the grid values of mass and magnitudes provided by the three theoretical models. We use cubic splines because they provide continuous derivatives of the magnitude-mass relations and thus avoid the typical problems of simple polynomials (i.e., Runge's phenomenon). \textit{Sakam} is a Bayesian inference code that samples the joint posterior distribution of the mass, $A_v$ and $R_v$ of individual stars based on theoretical isochrones, the star's distance (see Sect. \ref{methods:6D_structure}) and the available photometry (see Sect. \ref{dataset:complementary_data}). As prior distributions for the mass, $A_v$ and $R_v$, we use the \citet{2005ASSL..327...41C} distribution, a uniform distribution ($A_v\in[0,10]$ mag), and a Gaussian distribution ($R_v\sim\mathcal{N}(3.1,0.5)$), respectively. Once the posterior distributions of all the candidate members are inferred, we compute the group's mass distribution as a kernel density estimate on the aggregated mass samples of all the group members. In the second method, we use the mass-magnitude relations provided by the unified theoretical isochrones (at the group's estimated age, see Sect. \ref{methods:isochrones}) together with the group distance to transform \textit{Miec}'s magnitude distributions of each group into mass distributions. We notice that the theoretical mass-magnitude relations are not one-to-one and have abrupt changes of slope in the 1.5--2.5 $M_{\odot}$ mass interval, resulting in mass distributions with a large scatter in this region (see the discussion in Sect. \ref{discussion:mass}). Working with these two methods offers the following advantages. The \textit{Sakam} method allows us to study possible variations of the $R_v$ value across the Perseus groups. These variations have already been suggested in the literature; see, for example, \citet{2013MNRAS.428.1606F,2018ApJ...869...83Z}.
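A minimal sketch of the magnitude-to-mass transformation for a single star, assuming a hypothetical, strictly monotonic section of an isochrone grid, is:
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical, monotonic section of a theoretical isochrone grid.
mass_grid = np.array([0.1, 0.3, 0.6, 1.0, 1.5])   # Msun
Gmag_grid = np.array([11.9, 9.4, 7.6, 5.8, 4.3])  # absolute G

order = np.argsort(Gmag_grid)  # CubicSpline needs increasing x
mass_of_mag = CubicSpline(Gmag_grid[order],
                          np.log10(mass_grid[order]))

def mass_from_apparent_G(G, distance_pc):
    # Distance modulus; extinction is assumed already removed.
    G_abs = G - 5.0 * np.log10(distance_pc) + 5.0
    return 10.0 ** mass_of_mag(G_abs)

m = mass_from_apparent_G(14.5, 315.0)  # mass in Msun
\end{verbatim}
As noted above, the full mass-magnitude relations are not one-to-one, so in practice the transformation must be handled piecewise over monotonic sections.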
On the other hand, the \textit{Miec} method has two advantages over the \textit{Sakam} one. First, it obtains the group's magnitude distributions using the entire data set, weighting each source by its membership probability to the group. This approach avoids the sample bias introduced when working only with the subsample of the most probable group members. Second, it fully propagates the model and observational uncertainties to the magnitude and mass distributions, whereas the \textit{Sakam} method only propagates the photometric and distance uncertainties. Although the \textit{Miec} method offers a statistically more robust approach than the \textit{Sakam} one, neither of them is a perfect solution to the mass-distribution inference problem. Nonetheless, until the arrival of a complete, spectroscopically confirmed and characterised list of the groups' members (see Assumption \ref{assumption:groups_independency} in Appendix \ref{appendix:assumptions}), the comparison of these two methods offers what we consider the best strategy to derive the mass distributions of the groups. \subsection{Dynamical analysis} \label{method:dynamical_analysis} We perform a dynamical analysis of the Perseus groups based on the source- and group-level parameters inferred with the methods presented in the previous subsections. We determine the dynamical state of each physical group with two methods. The first method takes the mass, position, and velocity posterior distributions of each candidate member and computes its energy distribution with respect to its parent group, under the assumption that the latter is self-gravitating (see Assumption \ref{assumption:self_gravitating} in Appendix \ref{appendix:assumptions}). We propagate uncertainties by taking samples from the posterior distributions of each candidate member and computing the energy of each sample as follows: \begin{equation} \label{equation:energy} E=\frac{1}{2}m \cdot v^2 - \frac{G\cdot M\cdot m}{r}, \end{equation} where $r$ and $v$ are the distance and speed in the reference system of the stellar group, $M$ is the total group mass enclosed within the distance $r$ from its centre, $m$ is the sample's mass, and $G$ is the gravitational constant. To obtain $r$ and $v$ in the reference system of the stellar group, we use the population parameters delivered by \textit{Kalkayotl} (see Sect. \ref{methods:6D_structure}). The second method compares the observed velocity dispersion of each group with the theoretical one expected if the stellar system were in virial equilibrium. To compute the latter, we follow the approach that \citet{2015ApJ...807...27C} took in the analysis of IC348 (see their Sect. 4.3.3). Briefly, these authors assumed that the velocity dispersion at virial equilibrium, $\sigma_{vir}$, can be estimated using the total mass of the cluster, $M$, its half-mass radius, $r_{hm}$, and a structural parameter, $\eta$ (see Assumption \ref{assumption:virial_equilibrium} in Appendix \ref{appendix:assumptions}), following Eq. 4 of \citet{2010ARA&A..48..431P}. Moreover, we follow the additional assumption of \citet{2015ApJ...807...27C} that these parameters can be obtained by fitting an \citet{1987ApJ...323...54E} profile (hereafter EFF) to the 2D stellar number density of the system (see Assumption \ref{assumption:EFF_profile} in Appendix \ref{appendix:assumptions}). We do this fitting for the Perseus physical groups with the free and open-source code \textit{PyAspidistra} \citep{2018A&A...612A..70O}.
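A minimal sketch of the energy computation of Eq. \ref{equation:energy} for one posterior sample, using \texttt{astropy} units (the input values are hypothetical), is:
\begin{verbatim}
import astropy.units as u
from astropy.constants import G

def energy(m, r, v, M_enclosed):
    # Kinetic plus potential energy in the group reference system.
    return (0.5 * m * v**2 - G * M_enclosed * m / r).to(u.erg)

E = energy(0.5 * u.Msun, 1.2 * u.pc,
           0.8 * u.km / u.s, 150 * u.Msun)
bound = E < 0  # negative energy: gravitationally bound sample
\end{verbatim}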
In addition, given that our methods also deliver masses and 3D positions for each member of the Perseus groups, we also estimate the half-mass radius by finding the radial distance at which the enclosed mass reaches 50\% of the group's total. Furthermore, we notice the following two aspects of the Perseus star-forming region. First, it is known that this region is still embedded in the dust and gas of its parent molecular cloud and that the contribution of this non-stellar mass to the total mass of the groups is non-negligible. \citet{2015ApJ...807...27C} accounted for this non-stellar mass (see Sect. \ref{intro:velocity}) by assuming that the dust and gas follow the observed distribution of stars and that their total mass contribution has lower and upper limits equal to 65\% (80 $\rm{M_\odot}$) and 169\% (210 $\rm{M_\odot}$), respectively, of the total stellar mass of IC348. Here, we adopt these two assumptions and extend them to the rest of the Perseus groups (see Assumption \ref{assumption:dust_mass} in Appendix \ref{appendix:assumptions}). Second, it is known that the fraction of binary systems in open clusters varies between 11\% and 70\% \citep{2010MNRAS.401..577S}, with the fraction of unresolved binaries between 12\% and 20\% \citep[e.g.,][]{2021AJ....162..264J}. However, our methodologies are unable to identify possible unresolved binaries or to infer their masses, and the gravitational potential will therefore be underestimated due to this unaccounted-for mass. We correct this bias by increasing by 20\% the mass contribution of the individual stars when computing the gravitational potential of the groups (see Assumption \ref{assumption:binaries_mass} in Appendix \ref{appendix:assumptions}). \section{Results} \label{results} \subsection{Membership} \label{results:membership} We iteratively apply the \textit{Miec} and \textit{Kalkayotl} codes (as described in Sects. \ref{methods:membership} and \ref{methods:6D_structure}) to the Perseus data set (see Sect. \ref{dataset}). In the first iteration, the code recovered 920 candidate members, and after successive iterations utilising the extinction module, we recovered 130 more candidate members. Our search for bright (\texttt{G}<5 mag) members delivered only two astrometric candidate members: $\zeta$ Per and $o$ Per, with astrometric membership probabilities of 0.99989 and 0.99988, respectively, which implies a $\sim\!4\sigma$ discovery. According to our iterative methodology, the final 1052 candidate members (see Table \ref{table:list_of_members}) are distributed into eight statistical groups (see Sect. \ref{results:6D_structure}). Table \ref{table:groups_members} shows the names, numbers of members, mean distances, ages, and mass estimates of these groups. In Sect. \ref{results:core_and_halo}, we will show that two of these statistical groups pertain to the same physical one, thus effectively reducing the number of physical groups to seven. We identify the well-known IC348 (with its core and halo) and NGC1333 young clusters (see Fig. \ref{fig:sky}) and three of the recently discovered populations of \cite{2021MNRAS.503.3232P}: Heleus, Alcaeus and Autochthe. In addition, we discover a putatively new young physical group of $\sim$7 Myr and 191 candidate members that is composed of core and halo populations. Following the nomenclature style of \cite{2021MNRAS.503.3232P}, we call this group Gorgophone.
In Sect. \ref{discussion:members}, we present a detailed comparison between the candidate members that we find in this work and those from the literature. \begin{table}[ht!] \caption{The name, number of members, mean distance, age estimate, and mass lower limit of the Perseus groups.} \label{table:groups_members} \centering \begin{tabular}{c|c|c|c|c} \toprule Name & Number & Distance & Age & Mass \\ {} & {} & [pc] & [Myr] & [$\rm{M_{\odot}}$] \\ \midrule IC348 core & 329 & $315\pm 1$ & 3 & $146\pm 5$\\ IC348 halo & 172 & $312\pm 6$ & 5 & $125\pm 5$\\ Heleus & 124 & $365\pm 30$ & 5 & $ 47\pm 2$\\ Alcaeus & 127 & $286\pm 5$ & 10 & $ 93\pm 4$\\ Gorgophone core & 46 & $291\pm 5$ & 7 & $ 36\pm 2$\\ Gorgophone halo & 145 & $290\pm 19$ & 7 & $109\pm 4$\\ NGC1333 & 84 & $292\pm 1$ & 3 & $ 39\pm 2$\\ Autochthe & 25 & $295\pm 2$ & 3 & $ 19\pm 2$\\ \bottomrule \end{tabular} \end{table} \begin{figure*}[ht!] \resizebox{\hsize}{!}{\includegraphics{Figures/Sky.png}} \caption{Sky coordinates of the Perseus candidate members. The colour code shows the probabilistic classification, and the background image shows the thermal dust emission (545 GHz) from \citet{2020A&A...643A..42P}.} \label{fig:sky} \end{figure*} \subsection{Phase-space structure} \label{results:6D_structure} \begin{figure}[ht!] \centering \includegraphics[width=\columnwidth,page=1]{Figures/Model.pdf} \includegraphics[width=\columnwidth,page=2]{Figures/Model.pdf} \caption{Cartesian equatorial (ICRS) positions (top panel) and velocities (bottom panel) of the 1052 Perseus candidate members. The colour code shows the probabilistic classification, and the orange ellipses depict samples from the posterior distribution of the group-level parameters, corresponding to the one-sigma covariance matrices centred at the mean positions and velocities of the groups.} \label{fig:kalkayotl} \end{figure} As explained in Sect. \ref{methods:6D_structure}, we infer the phase-space structure of the Perseus groups by fitting 6D GMMs using the \textit{Kalkayotl} code. We jointly inferred the parameters of all candidate members and chose the model with six components as the best one. We based this decision on the convergence properties of the sampler and the weights of the components. Models with more than six components resulted in inefficient sampling and negligible weights for the additional components. Figure \ref{fig:kalkayotl} shows the inferred phase-space coordinates of the candidate members as well as the six-component GMM (the orange lines show samples from the posterior distribution of the one-sigma covariance matrices). The colour code shows the probabilistic classification into each of the components (i.e., IC348 core, IC348 halo, Alcaeus, Heleus, Gorgophone, and NGC1333+). In the joint inference with the 1052 candidate members, our method was unable to completely disentangle the NGC1333 members from those of Autochthe (see the group called ``NGC1333+'' in Fig. \ref{fig:kalkayotl}). To overcome this problem and break any possible entanglement, we iteratively fitted two-component GMMs to the candidate members of each identified group. At the end of this hierarchical-tree exploration, we found that all the groups except Gorgophone required only one Gaussian component to describe their phase-space structure: in these groups, the additional components showed negligible weights ($\leq$5\%) and convergence issues. In the two-component GMM of Gorgophone, on the contrary, the weight of the additional component was non-negligible (>5\%).
Our iterative methodology delivers phase-space parameters of the Perseus groups as well as posterior distributions of the positions and velocities of each candidate member. The group-level parameters (mean and standard deviation) of the identified groups are shown in Tables \ref{table:groups_mean} and \ref{table:groups_sd}. As an example of the inferred group- and source-level parameters, Fig. \ref{fig:6d_K5} shows the Cartesian (ICRS) positions and velocities of the Alcaeus group. The dots and error bars show the mean and standard deviation of the posterior distributions of the source-level parameters (i.e., the 6D Cartesian coordinates of each candidate member), while the orange ellipses show samples from the posterior distribution of the group-level parameters (i.e., the one-sigma covariance matrix centred at the mean position and velocity of the group). The total spatial and velocity dispersions of the identified groups are shown in the second and third columns of Table \ref{table:kinematic_indicators}. As can be observed from this table, the most distant group, Heleus, is also the most extended in XYZ space. On the contrary, the core of IC348 is the most compact one, with a radius of only 0.65 pc.

\begin{table}[ht!]
\caption{The mean values of the groups' Cartesian coordinates.}
\label{table:groups_mean}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\toprule
{} & $X$ & $Y$ & $Z$ & $U$ & $V$ & $W$ \\
{} & $\rm{[pc]}$ & $\rm{[pc]}$ & $\rm{[pc]}$ & $\rm{[km \cdot s^{-1}]}$ & $\rm{[km \cdot s^{-1}]}$ & $\rm{[km \cdot s^{-1}]}$ \\
Group & & & & & & \\
\midrule
IC348 core & $148.6 \pm 0.5$ & $221.3 \pm 0.7$ & $167.6 \pm 0.5$ & $4.5 \pm 0.1$ & $18.6 \pm 0.1$ & $-0.0 \pm 0.1$ \\
IC348 halo & $148.9 \pm 0.6$ & $218.0 \pm 1.0$ & $166.3 \pm 0.7$ & $5.0 \pm 0.1$ & $19.0 \pm 0.1$ & $0.7 \pm 0.1$ \\
Heleus & $170.3 \pm 1.7$ & $261.8 \pm 2.6$ & $189.3 \pm 1.6$ & $7.3 \pm 0.2$ & $21.5 \pm 0.3$ & $2.7 \pm 0.2$ \\
Alcaeus & $129.6 \pm 0.6$ & $204.5 \pm 0.9$ & $151.4 \pm 0.6$ & $4.0 \pm 0.1$ & $22.3 \pm 0.2$ & $-2.3 \pm 0.2$ \\
Gorgophone core & $146.4 \pm 1.0$ & $200.9 \pm 1.1$ & $150.9 \pm 0.7$ & $3.7 \pm 0.3$ & $20.9 \pm 0.3$ & $-1.9 \pm 0.3$ \\
Gorgophone halo & $140.5 \pm 1.3$ & $203.1 \pm 1.4$ & $151.1 \pm 1.1$ & $3.0 \pm 0.2$ & $20.4 \pm 0.3$ & $-2.8 \pm 0.3$ \\
NGC1333 & $152.5 \pm 0.7$ & $197.3 \pm 0.9$ & $151.9 \pm 0.7$ & $4.3 \pm 0.2$ & $21.5 \pm 0.2$ & $-4.3 \pm 0.2$ \\
Autochthe & $157.7 \pm 0.7$ & $197.4 \pm 0.8$ & $151.7 \pm 0.6$ & $1.0 \pm 0.3$ & $19.3 \pm 0.4$ & $-4.0 \pm 0.3$ \\
\bottomrule
\end{tabular}
}
\end{table}

\begin{table}[ht!]
\caption{The standard deviations of the groups' Cartesian coordinates.}
\label{table:groups_sd}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\toprule
{} & $\sigma_X$ & $\sigma_Y$ & $\sigma_Z$ & $\sigma_U$ & $\sigma_V$ & $\sigma_W$ \\
{} & $\rm{[pc]}$ & $\rm{[pc]}$ & $\rm{[pc]}$ & $\rm{[km \cdot s^{-1}]}$ & $\rm{[km \cdot s^{-1}]}$ & $\rm{[km \cdot s^{-1}]}$ \\
Group & & & & & & \\
\midrule
IC348 core & $0.3 \pm 0.1$ & $0.4 \pm 0.2$ & $0.4 \pm 0.1$ & $0.9 \pm 0.0$ & $0.8 \pm 0.0$ & $0.8 \pm 0.0$ \\
IC348 halo & $2.9 \pm 0.4$ & $5.1 \pm 0.5$ & $3.7 \pm 0.4$ & $0.9 \pm 0.1$ & $1.0 \pm 0.1$ & $1.0 \pm 0.1$ \\
Heleus & $15.0 \pm 1.1$ & $23.3 \pm 1.7$ & $13.6 \pm 1.1$ & $1.4 \pm 0.2$ & $1.1 \pm 0.3$ & $1.1 \pm 0.2$ \\
Alcaeus & $4.4 \pm 0.4$ & $4.1 \pm 0.5$ & $2.1 \pm 0.4$ & $0.4 \pm 0.1$ & $0.4 \pm 0.2$ & $0.5 \pm 0.1$ \\
Gorgophone core & $5.2 \pm 0.7$ & $4.5 \pm 0.6$ & $1.6 \pm 0.4$ & $1.0 \pm 0.2$ & $1.2 \pm 0.2$ & $1.0 \pm 0.2$ \\
Gorgophone halo & $13.0 \pm 0.9$ & $13.0 \pm 1.2$ & $10.4 \pm 0.9$ & $2.0 \pm 0.2$ & $2.4 \pm 0.3$ & $2.2 \pm 0.2$ \\
NGC1333 & $0.7 \pm 0.2$ & $0.5 \pm 0.3$ & $0.3 \pm 0.2$ & $1.4 \pm 0.1$ & $1.6 \pm 0.1$ & $1.2 \pm 0.1$ \\
Autochthe & $0.7 \pm 0.3$ & $0.6 \pm 0.3$ & $0.8 \pm 0.2$ & $0.6 \pm 0.2$ & $0.5 \pm 0.3$ & $0.6 \pm 0.2$ \\
\bottomrule
\end{tabular}
}
\end{table}

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth,page=1]{Figures/Alcaeus.pdf}
\includegraphics[width=\columnwidth,page=2]{Figures/Alcaeus.pdf}
\caption{Cartesian equatorial (ICRS) positions (top panel) and velocities (bottom panel) of the Alcaeus group. The colour code shows the speed (top panel) and the distance (bottom panel) in the radial direction, both relative to the group centre. The orange ellipses show samples from the posterior distribution of the group-level parameters (see Fig. \ref{fig:kalkayotl}).}
\label{fig:6d_K5}
\end{figure}

\subsubsection{Core and halo populations of IC348 and Gorgophone}
\label{results:core_and_halo}

In IC348 and Gorgophone, our methodology finds two Gaussian components that we call the core and halo populations, with the core having the smallest dispersion in 3D positional space and the halo the largest (see the values of $\sigma_X$, $\sigma_Y$, and $\sigma_Z$ in Table \ref{table:groups_sd}). In IC348, the medians and covariance matrices of these two Gaussians result in Mahalanobis distances between them of 10.16 (halo with respect to the core) and 1.23 (core with respect to the halo). Given that each of these distances exceeds one, we conclude that the two components correspond to independent physical groups (see Assumption \ref{assumption:gaussian} in Appendix \ref{appendix:assumptions}). In the case of Gorgophone, however, the Mahalanobis distances are 1.6 (halo with respect to the core) and 0.96 (core with respect to the halo), which prevents us from concluding that they pertain to independent physical groups (see Assumption \ref{assumption:gaussian} in Appendix \ref{appendix:assumptions}). For historical reasons, we continue to use the core and halo nomenclature of \citet{1998ApJ...497..736H} for the two identified physical groups of IC348 (see Sects. \ref{intro:spatial_distribution} and \ref{intro:history}).
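A minimal sketch of this independence criterion, using point estimates of the means and covariance matrices of the two components (in our analysis, the distances are evaluated on posterior samples):

\begin{verbatim}
import numpy as np

def mahalanobis(mu_a, mu_b, cov_b):
    """Mahalanobis distance of mean mu_a with respect to the Gaussian
    component with mean mu_b and covariance matrix cov_b."""
    d = mu_a - mu_b
    return float(np.sqrt(d @ np.linalg.solve(cov_b, d)))

def independent_groups(mu_1, cov_1, mu_2, cov_2):
    """Two components are deemed independent physical groups only if
    each mean lies more than one Mahalanobis distance away from the
    other component (Assumption on Gaussian groups)."""
    return (mahalanobis(mu_1, mu_2, cov_2) > 1.0 and
            mahalanobis(mu_2, mu_1, cov_1) > 1.0)
\end{verbatim}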
\subsubsection{Internal kinematics}

We analyse the internal kinematics of the groups using the inferred positions and velocities of both the sources and the groups. As an example, Fig. \ref{fig:6d_K5} shows the 3D positions (top panel) and 3D velocities (bottom panel) of the Alcaeus group. The colour code of this figure shows the distance (bottom panel) and speed (top panel) in the radial direction, both with respect to the group's centre. As shown in this figure, there are no observable trends of expansion. Appendix \ref{appendix:3D_velocities} shows figures with the Galactic Cartesian positions and velocities of the candidate members of our eight statistical groups. As can also be seen in those figures, there are no observable trends of expansion in any of the Perseus groups. As explained at the end of Sect. \ref{methods:6D_structure}, to objectively quantify the internal kinematics of the groups, we computed, over all the group's members, the average of the dot product and the average magnitude of the cross product between the unit radial vector and the velocity vector, which are proxies of the group's expansion and rotation, respectively (see the sketch below). The fourth and fifth columns of Table \ref{table:kinematic_indicators} show these two indicators, respectively. Although these values show some trend of contraction, particularly in NGC1333 and the core of IC348, the current uncertainties do not allow us to draw firm conclusions. Similarly, the uncertainties in the cross product show that the observed trends of rotation are significant at the one-sigma level only and fail to exceed the two-sigma one.

\begin{table}[ht!]
\caption{Total dispersions and kinematic indicators of the physical groups.}
\label{table:kinematic_indicators}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c}
\toprule
Group & $\lVert\vec{\sigma}_{XYZ}\rVert$ & $\lVert\vec{\sigma}_{UVW}\rVert$ & $\overline{\hat{\vec{e}}_r \cdot \vec{v}}$ & $\overline{ \lVert\hat{\vec{e}}_r \times \vec{v}\rVert}$ \\
{} & [pc] &$\rm{[km \cdot s^{-1}]}$ &$\rm{[km \cdot s^{-1}]}$ & $\rm{[km \cdot s^{-1}]}$\\
\midrule
IC348 core &$0.65\pm0.20$ & $1.44\pm0.08$ & $-0.2\pm0.8$ & $1.0\pm0.6$ \\
IC348 halo &$6.89\pm0.75$ & $1.68\pm0.16$ & $+0.1\pm1.0$ & $1.2\pm0.6$ \\
Heleus &$30.84\pm2.32$ & $2.08\pm0.35$ & $+0.5\pm1.5$ & $1.1\pm0.7$ \\
Alcaeus &$6.38\pm0.75$ & $0.76\pm0.21$ & $-0.1\pm0.4$ & $0.6\pm0.3$ \\
Gorgophone core &$7.09\pm0.99$ & $1.81\pm0.31$ & $+0.1\pm1.1$ & $1.2\pm0.8$ \\
Gorgophone halo &$21.13\pm1.72$ & $3.79\pm0.41$ & $+0.4\pm2.2$ & $2.6\pm1.4$ \\
NGC1333 &$0.90\pm0.39$ & $2.37\pm0.23$ & $-0.1\pm1.1$ & $1.4\pm0.8$ \\
Autochthe &$1.24\pm0.44$ & $1.04\pm0.38$ & $+0.2\pm0.8$ & $0.7\pm0.6$ \\
\bottomrule
\end{tabular}
}
\end{table}
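A minimal sketch of these two indicators, computed from point estimates of the members' positions and velocities relative to the group centre (the quoted uncertainties follow from evaluating them on posterior samples):

\begin{verbatim}
import numpy as np

def kinematic_indicators(positions, velocities):
    """Expansion and rotation proxies of a group.

    positions  : (N, 3) positions relative to the group centre [pc]
    velocities : (N, 3) velocities relative to the group centre [km/s]
    """
    e_r = positions / np.linalg.norm(positions, axis=1, keepdims=True)
    # mean radial velocity component: >0 expansion, <0 contraction
    expansion = np.mean(np.sum(e_r * velocities, axis=1))
    # mean magnitude of the tangential component: rotation proxy
    rotation = np.mean(np.linalg.norm(np.cross(e_r, velocities), axis=1))
    return expansion, rotation
\end{verbatim}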
\subsection{Empirical isochrones and age estimates}
\label{results:isochrones}

As described in Sect. \ref{methods:isochrones}, the \textit{Miec} code delivers, as a by-product, the extinction-free empirical isochrone of the group under analysis. Figure \ref{fig:relative_ages} shows the empirical isochrones of the identified groups. We notice that in the case of Gorgophone, both the core and halo have the same isochrone. In addition, Fig. \ref{fig:ages} shows, for each of the groups, the absolute CMD of the candidate members (black dots) as well as their extinction-free empirical isochrones (solid black lines). The apparent magnitudes of both the candidate members and the empirical isochrones were transformed into absolute ones using the source- and group-level distances (see Sect. \ref{results:6D_structure}), respectively. The figures also show the theoretical isochrones from the PARSEC, MIST, and BT-Settl models for ages of 1, 3, 5, 7, and 10 Myr. We notice that, due to the scarcity of candidate members in the high-luminosity region (absolute G < 7 mag), the empirical isochrones do not follow the curvature of the theoretical isochrones but that of our prior, which is a simple linear regression \citep[see Fig. B.3 of][]{2021A&A...649A.159O}. We estimate the ages of the physical groups by comparing their extinction-free empirical isochrones to the theoretical ones (see Sect. \ref{methods:isochrones}) in the faint region (G > 8 mag), where the bulk of the candidate members are located. Our age estimates are shown in the fourth column of Table \ref{table:groups_members}. Given that these estimates are based on a simple visual comparison, we do not provide uncertainties. We stress that these ages depend strongly on the isochrone models and the extinction map, and will thus benefit from further refinement.

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figures/Age.png}
\caption{Empirical isochrones of the Perseus groups as obtained by \textit{Miec}.}
\label{fig:relative_ages}
\end{figure}

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figures/CMDs.png}
\caption{Absolute CMDs of the Perseus groups' candidate members (black dots). The black and coloured lines show the empirical isochrones and the theoretical ones of different evolutionary models, respectively.}
\label{fig:ages}
\end{figure}

\subsection{Mass distributions}
\label{results:mass}

We infer the mass distribution of each of the physical groups using the two methods described in Sect. \ref{methods:mass}. Figure \ref{fig:mass}\footnote{The electronic data to reconstruct this figure will be available at \url{www.project-dance.com}.} shows the result of these inferences. The orange lines depict one hundred realisations of the mass distribution, obtained by propagating the same number of samples from the posterior distributions of the \textit{Miec} parameters that describe the G-band magnitude distribution. The magnitude distributions are transformed into mass distributions using the theoretical mass-magnitude relations of the unified theoretical model PMB (see Sect. \ref{methods:mass}) at the group's age and distance (see Table \ref{table:groups_members} and the sketch below). The mass distributions inferred using the \textit{Sakam} code with the theoretical (i.e., PARSEC, MIST, BT-Settl) and unified (PMB) models are shown in the same figure as coloured lines. The grey line shows the \citet{2005ASSL..327...41C} mass prior, and the grey area depicts the incompleteness region of the \textit{Gaia} data, which corresponds to G=19 mag \citep[see Tables 4 and 5 of][]{2021A&A...649A...2L} and has been extinction corrected. As can be observed in Fig. \ref{fig:mass}, the mass distributions inferred with the \textit{Sakam} code agree for all the theoretical models except at the borders of their mass domains. Indeed, the peaks observed at $\log_{10}(M/M_{\odot})\sim-1$ and $\sim0.2$ correspond to border effects introduced by the lower mass limits of the PARSEC and MIST models and the upper one of the BT-Settl model, respectively. As shown in the figure, the unified PMB model does not show these artefacts. We observe that the uncertainty in the mass distributions obtained with the \textit{Miec} code (the dispersion of the orange lines) decreases with the population size of the group: the uncertainty of the IC348 mass distribution is the smallest, while those of NGC1333 and Autochthe are the largest.
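The transformation behind the orange curves can be sketched as follows, with \texttt{iso\_g\_abs} and \texttt{iso\_mass} denoting an illustrative tabulation of the PMB mass-magnitude relation at the group's age (variable names are ours):

\begin{verbatim}
import numpy as np

def magnitudes_to_masses(g_apparent, distance_pc, iso_g_abs, iso_mass):
    """Map a sample of apparent G magnitudes onto masses.

    iso_g_abs : absolute G magnitudes of the isochrone (increasing)
    iso_mass  : corresponding masses [Msun]
    """
    g_abs = g_apparent - 5.0 * np.log10(distance_pc) + 5.0  # dist. modulus
    return np.interp(g_abs, iso_g_abs, iso_mass)
\end{verbatim}

Repeating this mapping for each of the one hundred posterior realisations of the magnitude distribution propagates the \textit{Miec} parameter uncertainties into the mass distributions.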
Comparing the mass distributions inferred with the two methods, we find the largest discrepancies in IC348 and the smallest in Gorgophone and Alcaeus. The extent of these discrepancies increases with the extinction of the group. As will be shown in the next section, the mode of the inferred extinction distributions is the largest in IC348 and the smallest in Gorgophone (see Fig. \ref{fig:av}). The discrepancy in the mass distributions is explained by the difficulties that the \textit{Miec} code has in inferring the magnitude distributions of extincted regions from low-information-content data sets \citep[see the discussion of][]{2021A&A...649A.159O}, in this particular case, the visual bands of \textit{Gaia}. Thus, we expect that in the heavily extincted groups, the mass distributions inferred with the \textit{Sakam} method are more realistic than those of the \textit{Miec} one, because \textit{Sakam} uses additional photometric bands (see Sect. \ref{dataset}), in particular the infrared ones, which are less affected by extinction.

\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth]{Figures/Masses.png}
\caption{Mass distributions of the Perseus groups. The orange lines show 100 realisations from the mass distribution obtained after transforming \textit{Miec}'s magnitude distributions (see text). The rest of the coloured lines (those of the legend) depict the mass distributions computed with \textit{Sakam}. The grey area shows the \textit{Gaia} incompleteness region.}
\label{fig:mass}
\end{figure*}

\subsection{$A_v$ and $R_v$ distributions}
\label{results:extinction_and_rv}

The mass inference done with the \textit{Sakam} code also delivers samples from the posterior distributions of the $A_v$ and $R_v$ values of each source. Figures \ref{fig:av} and \ref{fig:rv} show histograms and kernel density estimates of the posterior samples of $A_v$ and $R_v$, respectively, for all the candidate members of each physical group. As in Fig. \ref{fig:mass}, the coloured lines indicate the theoretical and unified models, and the assumed prior distribution. The figures show that the $A_v$ extinction is highly variable, with the mode ranging from 0.8 mag in Gorgophone to 2.3 mag in IC348. In addition, the within-group differential extinction has a large dispersion, with the exception of the Alcaeus group, in which the internal dispersion is only $\sim0.5$ mag. We notice that, in all physical groups except Gorgophone (for which we use the 7 Myr isochrone), the PARSEC models produce lower values of $A_v$ than those obtained with the MIST and BT-Settl models. This effect is a consequence of the redder theoretical isochrones of the PARSEC model compared to the other two models, particularly at the faintest magnitudes of the infrared bands (absolute K > 4 mag). However, at 7 Myr, the isochrones of the three models completely agree, which explains the negligible differences among the $A_v$ values inferred with the three theoretical models in the Gorgophone group. Concerning the inferred distributions of $R_v$, we observe that the three theoretical models result in similar distributions in all the groups; the results are thus mutually consistent. We notice, though, that in the case of NGC1333, the mode of the distribution is shifted from that of the prior towards a larger value of 3.4. We will further discuss the consequences of this latter value in Sect. \ref{discussion:extinction}.

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figures/Av.png}
\caption{Extinction distributions of the Perseus groups. The colour code indicates the results of the theoretical and unified models as well as the uniform prior.}
\label{fig:av}
\end{figure}

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figures/Rv.png}
\caption{$R_v$ distributions of the Perseus groups. Colour code as in Fig. \ref{fig:av}. The vertical dotted line at $R_v=3.1$ depicts the mode of the Gaussian prior.}
\label{fig:rv}
\end{figure}

\subsection{Dynamical analysis}
\label{results:dynamical_analysis}

Using the group-level parameters inferred in the previous sections, we now analyse the dynamical state of each of the Perseus physical groups. As mentioned in Sect. \ref{method:dynamical_analysis}, we perform this analysis with two methods: the observed energy distributions and the comparison of the groups' internal velocity dispersions with those expected for virial equilibrium.

\subsubsection{Energy distributions}
\label{results:energy}

We compute the energy distributions of the Perseus groups using Eq. \ref{equation:energy} and samples from the posterior mass, position, and velocity distributions of each candidate member. As explained in Sect. \ref{method:dynamical_analysis}, we correct our mass estimates for unresolved binaries (see Assumption \ref{assumption:binaries_mass} in Appendix \ref{appendix:assumptions}) and for the non-stellar mass of dust and gas (see Assumption \ref{assumption:dust_mass} in Appendix \ref{appendix:assumptions}).

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figures/CDFs.png}
\caption{Cumulative energy distributions of the Perseus groups. The dotted and dashed lines depict the results of accounting for the dust and gas mass by increasing the stellar mass by 65\% and 169\%, respectively.}
\label{fig:cedfs}
\end{figure}

Figure \ref{fig:cedfs} shows the cumulative energy distribution functions (CEDFs) of the Perseus physical groups. As can be observed, all the Perseus groups have positive energies, except for Alcaeus and the core of IC348. The fraction of gravitationally bound stars varies between 10\% and 25\% in Alcaeus and between 15\% and 40\% in the core of IC348. In this latter group, the fraction of bound stars depends strongly on the applied correction for the non-stellar mass fraction (dust and gas), with the larger fraction of bound stars corresponding to the most massive gravitational potential, as expected. We also observe that this non-stellar mass correction has a negligible impact on the energy distributions of the other Perseus groups. Given that our non-stellar mass correction follows the distribution of the observed stars (see Assumption \ref{assumption:dust_mass} in Appendix \ref{appendix:assumptions}) rather than being a massive particle located at the group's centre, its effect is most pronounced in the most compact group, which is the core of IC348 (see the second column of Table \ref{table:kinematic_indicators}).

\subsubsection{Virial state}
\label{results:virial_state}

We now estimate the virial state of the Perseus physical groups. First, we compute the velocity dispersion, $\sigma_{vir}$, that the groups would have if they were in virial equilibrium (see Sect. \ref{method:dynamical_analysis} and Assumptions \ref{assumption:virial_equilibrium} and \ref{assumption:EFF_profile} in Appendix \ref{appendix:assumptions}). Then, we compare these $\sigma_{vir}$ with the observed velocity dispersions $\sigma_{UVW}$ (see Sect.
\ref{results:6D_structure}) and estimate the dynamical state of each group. For completeness, we also compute the total mass that would be needed for the groups to be in virial equilibrium given their observed velocity dispersions.

\begin{table*}[ht!]
\caption{The virial velocity dispersions of the Perseus groups.}
\label{table:virial_state}
\resizebox{\textwidth}{!}{
\begin{tabular}{cccccccccccc}
\toprule
{} & $r_c$ & $r_{hm}$ & $\rm{\gamma_{EFF}}$ & $\rm{\sigma_{vir;EFF;0\%}}$ & $\rm{\sigma_{vir;EFF;65\%}}$ & $\rm{\sigma_{vir;EFF;169\%}}$ & $\rm{\sigma_{vir;HM;0\%}}$ & $\rm{\sigma_{vir;HM;65\%}}$ & $\rm{\sigma_{vir;HM;169\%}}$ & $\rm{\pi_{vir;EFF;0\%}}$ & $\rm{\pi_{vir;HM;0\%}}$ \\
{} & [pc] & [pc] & - & $\rm{[km\ s^{-1}]}$ & $\rm{[km\ s^{-1}]}$ & $\rm{[km\ s^{-1}]}$ & $\rm{[km\ s^{-1}]}$ & $\rm{[km\ s^{-1}]}$ & $\rm{[km\ s^{-1}]}$ & - & - \\
\midrule
IC348 core & $0.69 \pm 0.18$ & $1.41 \pm 0.67$ & $4.38 \pm 0.50$ & $0.57 \pm 1.10$ & $0.71 \pm 1.37$ & $0.89 \pm 1.71$ & $0.40 \pm 0.43$ & $0.50 \pm 0.53$ & $0.62 \pm 0.67$ & $ 6$ & $ 13$ \\
IC348 halo & $5.78 \pm 2.60$ & $6.22 \pm 3.36$ & $8.36 \pm 4.50$ & $0.18 \pm 0.21$ & $0.23 \pm 0.26$ & $0.28 \pm 0.32$ & $0.18 \pm 0.17$ & $0.22 \pm 0.21$ & $0.27 \pm 0.26$ & $ 84$ & $ 91$ \\
Heleus & $16.91 \pm 6.70$ & $27.32 \pm 15.69$ & $9.29 \pm 4.90$ & $0.07 \pm 0.08$ & $0.08 \pm 0.10$ & $0.10 \pm 0.13$ & $0.05 \pm 0.05$ & $0.06 \pm 0.06$ & $0.08 \pm 0.07$ & $1005$ & $1624$ \\
Alcaeus & $5.26 \pm 6.10$ & $5.90 \pm 2.64$ & $4.83 \pm 2.40$ & $0.17 \pm 0.07$ & $0.21 \pm 0.09$ & $0.26 \pm 0.12$ & $0.16 \pm 0.18$ & $0.19 \pm 0.22$ & $0.24 \pm 0.28$ & $ 21$ & $ 24$ \\
Gorgophone core & $3.73 \pm 3.70$ & $6.42 \pm 2.74$ & $2.60 \pm 3.60$ & $0.12 \pm 0.07$ & $0.15 \pm 0.08$ & $0.19 \pm 0.10$ & $0.09 \pm 0.11$ & $0.12 \pm 0.14$ & $0.14 \pm 0.17$ & $ 219$ & $ 377$ \\
Gorgophone halo & $7.16 \pm 5.00$ & $18.35 \pm 10.74$ & $2.55 \pm 2.30$ & $0.15 \pm 0.11$ & $0.19 \pm 0.14$ & $0.24 \pm 0.17$ & $0.10 \pm 0.08$ & $0.12 \pm 0.10$ & $0.15 \pm 0.13$ & $ 609$ & $1562$ \\
NGC1333 & $0.58 \pm 1.34$ & $1.37 \pm 0.64$ & $4.76 \pm 2.88$ & $0.32 \pm 0.08$ & $0.40 \pm 0.10$ & $0.50 \pm 0.12$ & $0.21 \pm 0.23$ & $0.26 \pm 0.29$ & $0.33 \pm 0.36$ & $ 54$ & $ 127$ \\
Autochthe & $1.65 \pm 2.69$ & $1.91 \pm 0.91$ & $8.25 \pm 6.05$ & $0.13 \pm 0.05$ & $0.17 \pm 0.06$ & $0.21 \pm 0.07$ & $0.12 \pm 0.14$ & $0.15 \pm 0.17$ & $0.19 \pm 0.21$ & $ 61$ & $ 70$ \\
\bottomrule
\end{tabular}
}
\end{table*}

Table \ref{table:virial_state} shows, for each Perseus group, the core radius ($r_c$) and $\gamma$ parameters of the EFF profile, the half-mass radius ($r_{hm}$), the virial velocity dispersions ($\sigma_{vir}$), and the mass factors ($\pi_{vir}$) by which the observed stellar mass should be multiplied for the group to be in virial equilibrium (assuming a binary mass fraction of 20\%). For the virial velocity dispersions and mass factors, we show the values computed using the core and half-mass radii (sub-indices EFF and HM, respectively) as well as the original, lower, and upper limits of the cluster mass (sub-indices 0\%, 65\%, and 169\%, respectively). As expected, the most massive clusters (i.e., those with the 169\% non-stellar mass correction) have the largest virial velocity dispersions. Comparing the observed velocity dispersions (third column of Table \ref{table:kinematic_indicators}) with the virial ones, we observe that all the Perseus groups are in a super-virial state.
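The following minimal sketch illustrates these quantities; the structure coefficient $\eta$ is a simplifying assumption here (we take $\eta\approx10$, which reproduces the order of magnitude of the tabulated values, whereas the exact coefficients in our analysis follow from the fitted EFF profiles):

\begin{verbatim}
import numpy as np

G = 4.30091e-3  # gravitational constant [pc Msun^-1 (km/s)^2]

def sigma_virial_3d(m_stellar, radius_pc, gas_fraction=0.0, eta=10.0):
    """3D velocity dispersion expected in virial equilibrium.

    m_stellar    : stellar mass [Msun], already binary-corrected
    radius_pc    : core or half-mass radius [pc]
    gas_fraction : non-stellar mass correction (0.0, 0.65 or 1.69)
    """
    m_total = m_stellar * (1.0 + gas_fraction)
    return np.sqrt(3.0 * G * m_total / (eta * radius_pc))

def virial_mass_factor(sigma_obs_3d, m_stellar, radius_pc, eta=10.0):
    """Factor multiplying the stellar mass needed for the observed 3D
    dispersion to be the virial one (pi_vir in the table above)."""
    return (sigma_obs_3d
            / sigma_virial_3d(m_stellar, radius_pc, eta=eta)) ** 2

def bound_fraction(energies):
    """Fraction of members with E < 0 (cf. the CEDFs above)."""
    return float(np.mean(np.asarray(energies) < 0.0))

# Example: IC348 core, half-mass radius, no gas correction:
# sigma_virial_3d(1.2 * 146.0, 1.41)  # ~0.40 km/s, cf. the table
\end{verbatim}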
In other words, for the groups to be in virial equilibrium, their stellar masses would need to be increased by factors ranging from six to more than a thousand. We will further discuss these results in Sect. \ref{discussion:dynamical_analysis}.

\section{Discussion}
\label{discussion}

We now compare our results with those from the literature and discuss their differences. We proceed in the same order as in the previous sections.

\subsection{Membership}
\label{discussion:members}

When comparing lists of candidate members, it is important to highlight that parallax is the most discriminant feature that \textit{Gaia} DR3 provides to identify candidate members. Thus, we use it to exclude clear outliers from previous results of the literature. We consider that sources with parallaxes smaller than 1.9 mas (farther than 526 pc) or larger than 6.5 mas (closer than $\sim$154 pc) belong to the field population. We choose this highly conservative parallax interval to ensure the inclusion of possible members located up to more than 100 pc in front of and 200 pc behind the traditional Perseus distance \citep[320 pc,][]{2018ApJ...865...73O}. Figure \ref{fig:comparison_literature} shows the representation-space coordinates of the literature candidate members that we recover (dots) and those that we reject. The colour code shows the membership probability of the sources. The following paragraphs present detailed comparisons of our candidate members with selected works from the literature.

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figures/Comparison_literature.png}
\caption{Representation-space coordinates of the confirmed (dots) and rejected members from the literature after removal of clear outliers ($\varpi\notin[1.9,6.5]$ mas). The membership probability is shown as a colour code.}
\label{fig:comparison_literature}
\end{figure}

\citet{2016ApJ...827...52L} identified 478 and 203 members in IC348 and NGC1333, respectively. Only 450 of these members have \textit{Gaia} DR3 parallaxes, proper motions, and photometry. Of these, 16 are clear outliers according to our parallax limits. Our methodology recovers 380 and rejects 54 of their candidate members. As can be seen in Fig. \ref{fig:comparison_literature}, most of the rejected sources lie on the outskirts of the groups' loci. From the list of members of \citet{2018ApJ...865...73O}, 162 have a counterpart in our data set. Out of these, we recover 158 as candidate members and reject one due to its discrepant parallax. We identify the remaining three sources as false negatives of our methodology, given that their coordinates are consistent with the locus of the Perseus groups. Sixteen of the members of \cite{2020PASP..132j4401A} are in our data set, and four of them are clear parallax outliers. Of the remaining sources, nine are recovered by our methodology and three are rejected. Of the latter, only one can be identified as a false negative of our methodology, since the other two lie at the outskirts of the groups' loci. \citet{2021MNRAS.503.3232P} identified 913 candidate members in their five groups, which do not include IC348 or NGC1333. Contrary to the previous authors, our main objective is the analysis of these latter two clusters, and we focus on a smaller region (see Sect. \ref{dataset}). This region contains 183 of their candidate members, 2 of which are clear parallax outliers. Our methodology recovers 166 of their candidate members and rejects 15.
Half of the rejected candidates lie slightly below our probability thresholds (blue and green diamonds) and can be considered false negatives of our methodology. \citet{2022AJ....164...57K} identified 810 candidate members in the Perseus region, but only 429 are in our sky region and 425 in our data set\footnote{The four missing members in our sky region lack both \texttt{BP} and \texttt{RP} bands, which prevents us from applying our membership methodology. Their \textit{Gaia} \texttt{source\_id} are: 123998252751482368, 216690175350832000, 216443575506877824, and 216443884747397504.}. From the latter, we recover 416 and reject nine. Only four of the rejected sources have membership probabilities above 0.5 and can be considered false negatives of our methodology. \citet{2022ApJ...931..156P} found 211 candidate members of IC348; 195 of these are within our sky region, and we recover 192 of them. The three rejected sources have probabilities larger than 0.5 and can thus be considered false negatives of our methodology. \cite{2022ApJ...936...23W} found 207 candidate members, out of which 197 are in our sky region, and we recover 191 of them. Of the six rejected, two have negligible membership probabilities, while the remaining four have membership probabilities >0.5 but lower than our probability threshold. Unfortunately, these authors do not provide a classification of their candidate members into the Perseus groups. Therefore, we cannot make more detailed, group-by-group comparisons with our candidate members. As can be observed in Fig. \ref{fig:comparison_literature}, there are sources with very low membership probabilities that lie at the outskirts of the cluster locus and that we consider field population. Nonetheless, there are 12 sources from \citet{2016ApJ...827...52L}, three from \citet{2018ApJ...865...73O}, one from \citet{2020PASP..132j4401A}, five from \citet{2021MNRAS.503.3232P}, four from \citet{2022AJ....164...57K}, and three from \citet{2022ApJ...931..156P} with coordinates consistent with the cluster locus that lie below our probability thresholds; we consider these members false negatives of our methodology. Under the conservative assumption that all the rejected members from the literature (excluding clear outliers with discrepant parallaxes, as described above) are indeed true Perseus members, the true positive rate of our methodology is 93.2\%, which is better than the 88\% estimated by \citet[][see their Table C.3]{2021A&A...649A.159O} for a cluster with $A_v\sim 6$ mag. The better performance of the current application of the \textit{Miec} code results from the use of the highly discriminant parallax feature and the exquisite precision of the \textit{Gaia} DR3 astrometry. Finally, we highlight that our membership methodology recovers 267 candidate members not previously identified as such in the literature. This represents an increase of 31\% with respect to the previous studies of the Perseus region.

\subsubsection{Group classification}

We now compare the lists of members that we recover for IC348 (core and halo) and NGC1333 with those found in other works. Tables \ref{table:ic348_literature} and \ref{table:ngc1333_literature} show, for IC348 and NGC1333, respectively, the number of members from other works that are in our dataset and that we recover and reject.
The third column of these tables shows the number of recovered members, together with those sources that we still recover as members of the Perseus region but assign to another group (indicated by the first two letters of its name).

\begin{table}[ht!]
\caption{The number of IC348 members in the literature, in our dataset, and in this work.}
\label{table:ic348_literature}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c}
\toprule
Work & Members & In dataset & Recovered & Rejected \\
\midrule
\text{\citet{2016ApJ...827...52L}} & 478 & 357 & 285 + 8Go + 7Al + 6He & 51 \\
\text{\citet{2018ApJ...865...73O}} & 133 & 131 & 126 + 1Go + 1He & 3 \\
\text{\citet{2018A&A...618A..93C}} & 144 & 144 & 140 + 1He + 1Al & 2 \\
\text{\citet{2020PASP..132j4401A}} & 19 & 13 & 7 & 6 \\
\text{\citet{2022ApJ...931..156P}} & 211 & 195 & 190 + 1Go + 1He & 3 \\
\text{\citet{2022AJ....164...57K}} & 270 & 263 & 250 + 4Go + 5He & 4 \\
\bottomrule
\end{tabular}
}
\tablefoot{{\scriptsize Al = Alcaeus, He = Heleus, Go = Gorgophone}}
\end{table}

\begin{table}[ht!]
\caption{The number of NGC1333 members in the literature, in our dataset, and in this work.}
\label{table:ngc1333_literature}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|c|c|c|c}
\toprule
Work & Members & In dataset & Recovered & Rejected \\
\midrule
\text{\citet{2016ApJ...827...52L}} & 203 & 93 & 68 + 6Go & 19 \\
\text{\citet{2018ApJ...865...73O}} & 31 & 31 & 28 + 2Go & 1 \\
\text{\citet{2018A&A...618A..93C}} & 50 & 50 & 46 + 3Go & 1 \\
\text{\citet{2020PASP..132j4401A}} & 9 & 3 & 3 & 0 \\
\text{\citet{2022AJ....164...57K}} & 34 & 34 & 30 + 3Go & 1 \\
\bottomrule
\end{tabular}
}
\tablefoot{{\scriptsize Go = Gorgophone}}
\end{table}

In addition to the well-known IC348 and NGC1333 groups, the Perseus region contains several substructures that have been identified only recently thanks to the high-precision \textit{Gaia} data. For example, \citet{2021MNRAS.503.3232P} identified five Perseus groups, named Alcaeus, Autochthe, Electryon, Heleus, and Mestor. In the rest of this subsection, we discuss the recovery of the members of these groups, with the exception of Mestor, whose members are outside the sky region that we analyse. Concerning the Electryon group, out of the 329 members of \citet{2021MNRAS.503.3232P}, only 21 are in our dataset. Our methodology recovers 15 of these, although it classifies them as members of the Heleus group. The fact that our methodology classifies these Electryon members into the Heleus group is explained by the small fraction of the former group within our dataset (6\%), its negligible contribution to the total number of Perseus members ($<1$\%), and the entanglement that these two groups show in the parallax-proper-motion space. We notice that \citet{2021MNRAS.503.3232P} were able to disentangle Heleus from Electryon due to its relative compactness in the sky coordinates; our methodology, however, identifies members of this group spread over a larger sky region. Concerning the Heleus group, our methodology recovers the 29 members of \citet{2021MNRAS.503.3232P} present in our dataset, and we identify 95 more candidate members. With respect to \citet{2022AJ....164...57K}, out of their 66 candidate members, our dataset contains 22, of which we recover 21, all of them belonging to this group. In addition, we identify 103 more candidate members than these authors.
With respect to Alcaeus, of the 170 members of \citet{2021MNRAS.503.3232P}, 108 are in our dataset, and our methodology recovers 98 of these, with 88 classified into Alcaeus and 10 into Gorgophone. \citet{2022AJ....164...57K} identified 124 candidate members, out of which 83 are in our dataset. We recover 81 of these, although 20 of them are classified into Gorgophone. In addition, our method discovers 64 new candidate members of this group. Finally, of the 27 members of Autochthe identified by \citet{2021MNRAS.503.3232P}, 25 are in our dataset, and our method recovers 24 of them, with two classified into Gorgophone. \citet{2022AJ....164...57K} found 25 candidate members, with 23 of these in our dataset. Our methodology recovers 22 of these, although with one classified into NGC1333 and two into Gorgophone. As shown in the previous paragraphs, further disentanglement of the Perseus substructures remains a difficult task for which more precise data, particularly radial velocities, and further methodological developments are still needed.

\subsection{Phase-space structure}
\label{discussion:6D_structure}

We now compare the distributions of positions and velocities of the identified physical groups with those from the literature. The most striking difference with respect to previous, pre-\textit{Gaia} works is the number of identified substructures. Classically, the region comprised IC348 and NGC1333; the arrival of \textit{Gaia}, however, unravelled multiple populations. Compared to the latest analyses from the literature \citep{2022ApJ...936...23W,2022AJ....164...57K,2021MNRAS.503.3232P}, we identified one additional physical group that we call Gorgophone, which is composed of core and halo populations. In IC348, our kinematic criterion to identify physical groups (see Assumption \ref{assumption:gaussian} in Appendix \ref{appendix:assumptions}) allows us to conclude that the core and halo of IC348 are distinct physical groups. The existence of these two populations is further supported by the age and mass distribution features of these populations, with the halo population being older and having fewer low-mass stars and more high-mass stars than the core. In addition, the most massive stars of the halo population are located off-centre, while those in the core are centrally concentrated. These findings support a distinct halo population that formed earlier than the core population and possibly quenched the formation of high-mass stars in the latter.

\subsubsection{3D distribution}
\label{discussion:3d_structure}

Our results show that the core of IC348 has a ($1\sigma$) radius of $0.65\pm0.20$ pc, with small correlations ($<0.3$) among the X, Y, and Z coordinates. These negligible correlations are in agreement with previous works that found a centrally concentrated distribution (see Sect. \ref{intro:spatial_distribution}). On the contrary, the halo extends over a ($1\sigma$) radius of $6.89\pm0.75$ pc, with non-negligible correlations of 0.3 between X and Y, 0.5 between X and Z, and 0.7 between Y and Z. Concerning IC348, our value of the halo radius is larger than the 0.9-1.4 pc radii reported by \citet{1999A&AS..137..305S}, \citet{2000AJ....120.3139C}, and \citet{2003AJ....125.2029M}. On the other hand, our 3D core radius estimate is in agreement with the 2D radius of 0.5~pc reported by \citet{2003AJ....125.2029M}.
These latter authors also warned about the possibility of a larger cluster radius and of entanglement between the halo and core sources due to projection effects. Evidence of a larger cluster size was also found by \citet{2015ApJ...807...27C}, who identified 63 of their candidate members beyond a radius of 1.8 pc (20\arcmin). Our results confirm the following suggestions proposed by \citet{2003AJ....125.2029M}: a larger cluster size extending into larger areas and elongated in the north-south direction, halo sources entangled within the core radius (see Fig. \ref{fig:sky}), and a density profile that is far from trivial. On the other hand, we do not find evidence of substructures within the halo population, as suggested by the aforementioned authors. Contrary to IC348, NGC1333 is well described by a single Gaussian. It shows an elongation twice as large in the X coordinate as in the Y and Z directions, although with negligible correlations, and this elongation is unrelated to the line of sight. Given the cluster's young age and the low number of stars (see Table \ref{table:groups_members}), the observed elongation may be primordial. Concerning the distances to IC348 and NGC1333, our estimates are in agreement with those of \citet{2018ApJ...865...73O} and \citet{2018ApJ...869...83Z}. The surprising agreement between the early distance estimate of $316\pm22$ pc made by \citet{1974PASP...86..798S} and our $315\pm1$ pc \textit{Gaia} value for the core of IC348 further supports the hypothesis proposed by those authors about dust grain growth in the dense Perseus clouds (see Sects. \ref{intro:extinction_distribution} and \ref{discussion:extinction}). We notice that \citet{2018ApJ...865...73O} warned about the impossibility of disentangling the cluster structure along the line of sight because the parallax dispersion was dominated by the individual uncertainties. However, this is no longer the case, thanks to the high precision of \textit{Gaia} DR3 and the uncertainty deconvolution applied by \textit{Kalkayotl}.

\subsubsection{Velocity distribution}
\label{discussion:velocity_distribution}

Our results on the velocity distributions of the Perseus groups are shown in Tables \ref{table:groups_sd} and \ref{table:kinematic_indicators}. As can be observed, the total velocity dispersion (third column of Table \ref{table:kinematic_indicators}) of these groups varies from 0.76 $\rm{km\, s^{-1}}$ in Alcaeus to 3.79 $\rm{km\, s^{-1}}$ in the halo of Gorgophone. Comparing the halo velocity dispersions of IC348 and Gorgophone with those of their cores, we observe that in IC348 these values are similar (compatible within the 2$\sigma$ uncertainties), whereas in Gorgophone the halo velocity dispersion is significantly larger, more than twice that of its core. The velocity dispersion and size of Gorgophone's halo, both large relative to those of IC348's halo, in combination with their similar ages, suggest that different mechanisms formed these halo populations. We hypothesise that the core and halo of IC348 formed out of the same molecular cloud but at different epochs; thus, although physically independent, they inherited velocity dispersions similar to those of the parent molecular cloud. On the contrary, the core and halo of Gorgophone pertain to the same physical group, but the halo-core dichotomy formed as a result of dynamical interactions; thus, the halo is dynamically hotter, and the core is relatively depleted of low-mass stars (see Fig. \ref{fig:mass}).
\citet{2015ApJ...807...27C} fitted the radial velocity profile of IC348 with a two-component GMM (see Sect. \ref{intro:velocity}). They argue that the presence of the second component may have three possible origins: contaminants, a halo population, or a lack of relaxation into a Gaussian distribution due to the young age of the cluster. Unfortunately, given the 1.8 pc radius cut that these authors applied to their sample, their results cannot be directly compared with those of the IC348 core found here. Indeed, at that radius, we find a mixture of both core and halo populations. Our results confirm the halo-population hypothesis, with the halo found in the 6D space of positions and velocities rather than only in the 1D radial velocities, as those authors hypothesised. On the other hand, the contamination scenario can be ruled out thanks to our robust membership methodology, in which the expected contamination rate of this group is $\lesssim$ 7\%. In the case of NGC1333, \citet{2015ApJ...799..136F} used the same methodology as \citet{2015ApJ...807...27C} and found that the radial velocity distribution can be well described by a single Gaussian component. These results agree with our single-Gaussian model for the phase-space structure of NGC1333.

\subsubsection{Internal kinematics}
\label{discussion:internal_kinematics}

Concerning IC348, in its core we find a 3D velocity dispersion of $1.44\pm0.08\, \rm{km\, s^{-1}}$, which is similar to the 3D value of $1.2 \pm0.1\,\rm{km\, s^{-1}}$ ($0.72\pm0.07\,\rm{km\, s^{-1}}$ for the 1D radial velocity dispersion) found by \citet{2015ApJ...807...27C}, and smaller than the 2.28 $\rm{km\, s^{-1}}$ reported by \citet{2018ApJ...865...73O}. On the one hand, the smaller value found by \citet{2015ApJ...807...27C} can be explained by their better treatment of binaries, which in our analysis are not distinguished from single stars. On the other hand, the larger value found by \citet{2018ApJ...865...73O} can be explained by their use of a single Gaussian to model stars in both the core and halo populations. In NGC1333, the 3D velocity dispersions of $1.6\pm0.2\,\rm{km\, s^{-1}}$ ($0.92\pm0.12\,\rm{km\, s^{-1}}$ for the 1D radial velocity dispersion) and $2.0\,\rm{km\, s^{-1}}$, reported by \citet{2015ApJ...799..136F} and \citet{2018ApJ...865...73O}, respectively, are, in the same order, smaller than and compatible with our value of $2.37\pm0.23\,\rm{km\, s^{-1}}$ (see Table \ref{table:kinematic_indicators}). We notice that the 3D velocity ellipsoid of NGC1333 is 33\% wider in the Y direction than in the Z direction, which makes it far from the spherical shape assumed by \citet{2015ApJ...799..136F}. On the other hand, the velocity dispersions measured by us and by \citet{2018ApJ...865...73O} are also probably inflated by unresolved binaries. Future work is warranted upon the arrival of more precise radial velocity measurements. With respect to expansion and contraction, our results show that the achieved precision is not enough to claim any clear trend. Nonetheless, we notice that the largest contraction values are those of the core of IC348, Alcaeus, and NGC1333. Concerning the signature of internal rotation, our results indicate that it is significant only at the one-sigma level. In IC348, \citet{2015ApJ...807...27C} have already reported a small but significant radial velocity rotation gradient in the plane of the sky: $0.024\pm0.013\,\rm{km\, s^{-1}\, arcmin^{-1}}$ or $0.26\pm0.14\,\rm{km\, s^{-1}\, pc^{-1}}$.
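To make the conversion used in the next sentence explicit (this is our reading of the quoted values, under the assumption of isotropy, i.e., a factor of $\sqrt{3}$ between the 1D gradient and the 3D rotation speed at radius $r$):
\[
v_{\mathrm{rot}}(r) \simeq \sqrt{3}\,\left|\frac{\mathrm{d}v_{r}}{\mathrm{d}r}\right|\, r .
\]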
Scaling their 1D gradient to 3D values at the cluster core and halo radii, we obtain $0.3\pm0.1\,\rm{km\, s^{-1}}$ and $3.1\pm0.1\,\rm{km\, s^{-1}}$, respectively. Although the rotation values we find for the core and halo of IC348 are, respectively, larger and smaller than those reported by \citet{2015ApJ...807...27C}, they are still not significant enough to reject the null hypothesis of no internal rotation. Moreover, \citet{2018ApJ...865...73O} also found non-zero values for the rotation vectors of IC348 and NGC1333. However, they concluded that their values were negligible when compared to the internal velocity dispersions. Similarly, \cite{2019ApJ...870...32K} found non-zero azimuthal velocities in NGC1333 and IC348, but they also judged these to be non-significant when compared to the measurement uncertainties. In light of the previous works and the current evidence, future efforts are still needed, on both the observational and modelling sides, to test the hypothesis of internal rotation in the Perseus groups.

\subsubsection*{The core and halo of IC348}

As mentioned in Sect. \ref{intro:spatial_distribution}, the core and halo populations were first found by \citet{1998ApJ...497..736H} on the basis of the spatial distribution of stars with $H_{\alpha}$ emission. He also found an age gradient of 2.4 $\rm{Myr \, pc^{-1}}$, in which older ages are located at larger radii (1.45 Myr at 4' and 2.8 Myr at 10'). Here, we confirm the existence of these two IC348 populations and observe a smaller but still non-negligible age gradient of $0.32\,\rm{Myr\, pc^{-1}}$, computed on the basis of the ages and radii of the core and halo. We notice that \citet{2015ApJ...807...27C} found a correlation between the radial velocity and the reddening in IC348, for which they proposed three different scenarios: a systemic offset in the radial velocities, the contraction of the cluster as a whole, or the convergence of two subclusters aligned along the line of sight. Their third scenario naturally coincides with the core and halo populations proposed by \citet{1998ApJ...497..736H} and confirmed here. These two populations solve the previous issue because the halo population is redshifted by 1.6 $\rm{km\,s^{-1}}$ with respect to the core population and has a slightly larger spread in extinction (see Fig. \ref{fig:av}). Furthermore, we observe that the halo contains slightly more massive stars than the core (see Figs. \ref{fig:ages} and \ref{fig:mass}). This effect was already predicted by \citet{1998ApJ...497..736H}, who mentioned that the brightest stars of the halo formed first and that the molecular cloud then retreated behind them, where the youngest population is now forming.

\subsection{Empirical isochrones and age estimates}
\label{discussion:isochrones}

Our age estimates for NGC1333 and IC348 are compatible with the literature values. In the case of IC348, our 3 Myr estimate for the core is slightly larger than the 2 Myr median value of \citet{2003ApJ...593.1093L}. Similarly, the 5 Myr age of the IC348 halo is similar to the 6 Myr reported by \citet{2015MNRAS.454..593B} and in agreement with the recent 5 Myr age determinations of \citet{2022ApJ...931..156P} and \cite{2022AJ....164..125L}. Both our core and halo age estimates fall within the age interval reported by \citet{2003ApJ...593.1093L}. In the case of NGC1333, our age estimate of 3 Myr is older than, but still compatible with, the 1-2 Myr derived by \citet{1996AJ....111.1964L}.
Furthermore, our age estimates for the halo of IC348 and for NGC1333 are in clear agreement with those of \citet[][see Sect. \ref{intro:ages}]{2022ApJ...936...23W}, who determined isochrone ages of 5.4 Myr and 2.9 Myr for IC348 and NGC1333, respectively. We notice that this agreement is based not only on the theoretical isochrone models but also on the similar extinction distributions, as observed when comparing their Fig. 7 to our Fig. \ref{fig:av}. Comparing the ages estimated by \citet{2021MNRAS.503.3232P} for the newly identified Perseus groups with those obtained here, we observe that the relative internal ages are in agreement. Furthermore, we confirm the coevality of NGC1333 and Autochthe. However, the absolute ages differ by a factor of roughly two, with our estimates being about twice theirs. In the particular case of the absolute age estimates obtained by \citet{2021MNRAS.503.3232P}, we conclude that their underestimation is a direct consequence of their lack of extinction correction. According to the extinction maps of \citet{2019ApJ...887...93G}, the members of the Perseus groups have extinctions as large as 7 mag in $A_v$, with a median value of $A_v\,=\,2.6\pm1.9$ mag. Neglecting these extinction values when estimating isochronal ages results in younger age determinations. The same reasoning applies to the more recent isochrone age determinations of \citet{2022AJ....164...57K}.

\subsection{Mass distributions}
\label{discussion:mass}

As mentioned in Sect. \ref{results:mass}, the mass distributions obtained with our two methods are compatible within the uncertainties, except in those groups with the largest peaks in extinction, which are the core and halo of IC348, NGC1333, and Autochthe (see Fig. \ref{fig:av}). The differences in the inferred mass distributions arise from the difficulties that the \textit{Miec} method has in recovering the luminosity distributions of extincted regions from data sets lacking infrared bands. The \textit{Sakam} method complements the \textit{Gaia} data with infrared photometry and is thus not affected by this problem. Comparing the \textit{Sakam} mass distributions of all groups within the \textit{Gaia} completeness limits, we observe a general agreement with the \citet{2005ASSL..327...41C} initial mass distribution. This comes as no surprise given the young ages of the groups. Nonetheless, we notice a depletion of stars more massive than $\sim$5$\rm{M_{\odot}}$ in Autochthe, NGC1333, Heleus, and the cores of IC348 and Gorgophone compared to the values expected from the \citet{2005ASSL..327...41C} distribution. Unexpectedly, the halo populations of IC348 and Gorgophone seem to have more massive stars than the core populations. In almost all groups, there is a visible peak at masses ranging from 1.5$\rm{M_{\odot}}$ to 2.5$\rm{M_{\odot}}$, which results from abrupt changes in the slope of the mass-luminosity relations of the PARSEC and MIST models in this particular mass interval (see Sect. \ref{methods:mass}).

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figures/IC348_NGC1333.png}
\caption{Cumulative mass distributions of IC348 and NGC1333.}
\label{fig:mass_cumulative}
\end{figure}

Figure \ref{fig:mass_cumulative} shows the cumulative mass distributions and incompleteness regions (shaded areas) of the core and halo populations of IC348 as obtained from the unified isochrone model (PMB, see Sect. \ref{methods:mass}). As can be observed in the figure, the core and halo populations are not identical.
This observation is confirmed by a two-sample Kolmogorov-Smirnov (KS) test. We applied this test to one hundred bootstrap realisations of the core and halo stars. The mean of these one hundred KS tests results in the rejection (p-value < 1\%) of the null hypothesis that both populations come from the same parent distribution (see the sketch below). We observe that the halo region of IC348 has more massive stars than the core region in spite of its lower number density. Furthermore, according to the unified PMB model, the core mass distribution peaks at $0.12\pm0.03M_{\odot}$, whereas the halo one peaks at $0.25\pm0.07M_{\odot}$. This result is in disagreement with the observations of \citet{2003AJ....125.2029M}, who found that the core and halo mass distributions peak at $0.56\pm0.18M_{\odot}$ and $0.1\pm0.02M_{\odot}$, respectively. As can be observed in the top right panel of Fig. \ref{fig:mass}, our halo mass distribution starts to decrease at $0.2M_{\odot}$, within the \textit{Gaia} completeness limits (>$0.11M_{\odot}$). \citet{2003AJ....125.2029M} observed that the alignment of the peaks of their core and halo mass distributions would require a halo age between 5 and 10 Myr. Our age determination for the halo population, 5 Myr, agrees with the lower limit proposed by those authors. However, they mentioned that the hypothesis of two distinct populations with an older halo was difficult to accept. They based their conclusion on the following argument: if the two-population model were correct, then the similar sizes of their core and halo regions would imply a truncation of the halo population below $0.3M_{\odot}$, which was clearly not observed in their mass distributions. Here, we observe that the number of halo stars is half that of the core and that the halo mass distribution peaks at $0.25\pm0.07M_{\odot}$ with a clear cut-off at $0.2M_{\odot}$. Based on these results, we have no compelling evidence to reject the hypothesis of \citet{1998ApJ...497..736H} of distinct core and halo populations; the kinematic evidence lends it further support. Comparing the cumulative distribution of NGC1333 (also shown in Fig. \ref{fig:mass_cumulative}) with those of the core and halo populations of IC348, we observe the following. On the one hand, comparing the mass distributions of the halo of IC348 and that of NGC1333, we observe that NGC1333 has a general overabundance of stars less massive than $2\rm{M_{\odot}}$. This overabundance of low-mass stars and brown dwarfs was already reported by \citet{2013ApJ...775..138S}, who observed it in several of their mass scenarios (combinations of age, distance, and extinction). On the other hand, comparing the mass distributions of NGC1333 and the core of IC348, we observe that, although in the mass interval of $0.3-1.0\,\rm{M_{\odot}}$ the core of IC348 appears to have slightly more stars than NGC1333, the two distributions are remarkably similar, with a KS p-value of 0.53, which prevents us from rejecting the null hypothesis that the two distributions are random realisations of the same underlying distribution. However, due to the \textit{Gaia} completeness limits, we can only claim the previous findings in the domain of masses $>0.16\,\rm{M_{\odot}}$.
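A minimal sketch of the bootstrap KS comparison described above (based on \texttt{scipy}; the actual treatment of the posterior mass samples may differ in detail):

\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp

def bootstrap_ks_pvalue(masses_a, masses_b, n_boot=100, seed=0):
    """Mean p-value of two-sample KS tests over bootstrap resamples.

    masses_a, masses_b : 1D arrays of stellar masses [Msun]
    """
    rng = np.random.default_rng(seed)
    p_values = []
    for _ in range(n_boot):
        a = rng.choice(masses_a, size=masses_a.size, replace=True)
        b = rng.choice(masses_b, size=masses_b.size, replace=True)
        p_values.append(ks_2samp(a, b).pvalue)
    return float(np.mean(p_values))

# p < 0.01: reject that the core and halo masses share a parent
# distribution; p = 0.53 for NGC1333 vs. the IC348 core (no rejection).
\end{verbatim}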
Therefore, we conclude that, within the completeness limits of the \textit{Gaia} data, we observe no difference between the mass distributions of the core of IC348 and NGC1333, and we thus reject the \citet{2013ApJ...775..138S} hypothesis of a variation of the initial mass distribution with the crowdedness of the environment.

\subsection{Extinction and $R_v$ distributions}
\label{discussion:extinction}

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figures/AvRv_corr.png}
\caption{$A_v$ and $R_v$ 2D histograms for stars in the different Perseus groups.}
\label{fig:av_corr}
\end{figure}

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figures/Av_difference.png}
\caption{Distributions of the $A_v$ differences between the values inferred with \textit{Sakam} and those reported by \textit{Bayestar19} \citep{2019ApJ...887...93G}. Colour code as in Fig. \ref{fig:av}.}
\label{fig:av_diff}
\end{figure}

The $A_v$ and $R_v$ values that we derive here confirm previous findings from the literature. We observe cloud-to-cloud and intra-cloud extinction variations, evidence of which was already reported by \citet{2016ApJ...823..102C} for the dust emissivity spectral index. Furthermore, in the extinction interval of the Perseus groups analysed here, we confirm the correlations reported by \citet{2013MNRAS.428.1606F} in the B5 and West-End clouds (see Fig. 9 of those authors). As can be observed from Fig. \ref{fig:av_corr}, there are no significant correlations between $R_v$ and $A_v$ for $A_v<2$ mag in any of the groups. However, for $A_v>3$ mag, these correlations are clearly observed in NGC1333 and Autochthe, mildly in Gorgophone, barely perceptible in IC348, and clearly absent in Alcaeus and Heleus. Interpreting these correlations as evidence of dust grain growth, and given that Alcaeus and Heleus are off the clouds (see Fig. \ref{fig:sky}), we conclude that there is an east-west gradient in the growth of the dust grain size across the stars of the Perseus ridge. Comparing the $A_v$ values inferred by \textit{Sakam} with those reported by \textit{Bayestar19} \citep{2019ApJ...887...93G} (see Fig. \ref{fig:av_diff}), we observe that they are, in all cases, compatible (i.e., the zero value is covered by the 95\% credible interval of the distribution). However, NGC1333 and the cores of IC348 and Gorgophone show the largest discrepancies (with modes at -2.2, -1.16, and -0.91 mag, respectively), while Heleus, Alcaeus, and the halos of IC348 and Gorgophone display the smallest differences (with modes at 0.35, 0.26, 0.02, and -0.06 mag, respectively). The largest discrepancies, observed in NGC1333 and the cores of IC348 and Gorgophone, can be explained by the lack of stars in the 2MASS and PanSTARRS catalogues for these embedded regions that lie in the Perseus ridge (see Fig. \ref{fig:sky}). The lack of stars for anchoring the extinction values causes the \textit{Bayestar19} algorithm to overestimate them.

\subsection{Dynamical analysis}
\label{discussion:dynamical_analysis}

\begin{figure}[ht!]
\centering
\includegraphics[width=\columnwidth]{Figures/3D.png}
\caption{Galactic Cartesian positions and velocities of the Perseus groups (cones) together with the positions of the individual members (dots). The interactive version of this figure will be available at \url{www.project-dance.com}.}
\label{fig:3d}
\end{figure}

In Sect.
\ref{results:dynamical_analysis}, we presented the CEDFs of each Perseus physical group, together with a comparison of each group's observed velocity dispersion with that expected for virial equilibrium under varying values of the mass and radius parameters. All these results showed that the majority of the stars in the Perseus groups are gravitationally unbound and in a super-virial state. Even in the core of IC348, which is the most massive ($\rm{146\pm5\,M_{\odot}}$) and compact ($0.65\pm0.20$ pc) group, the upper limit on the fraction of gravitationally bound stars reaches only 40\%. The velocity dispersions at virial equilibrium computed from varying radii and total masses are all similar, although the more massive cluster assumptions result in larger virial velocity dispersions (see Sect. \ref{results:dynamical_analysis}). In the case of the core of IC348, the observed velocity dispersion is 1.6 times larger than the most conservative estimate for the virial velocity dispersion ($0.89\pm1.71\,\rm{km\, s^{-1}}$, computed from the core radius and a non-stellar mass correction of 169\%), whereas in the rest of the groups, the difference is even larger. For the core of IC348 to be in virial equilibrium, a total mass six times larger than the stellar mass would be needed (given that $\sigma_{\rm vir}\propto\sqrt{M_{\rm tot}}$ at fixed radius, the required mass grows with the square of the observed-to-virial dispersion ratio). This value for the dust-and-gas mass exceeds our most conservative estimate of the non-stellar mass by more than a factor of three. Moreover, recent studies have shown that the molecular cloud seen in projection towards IC348 is at 280-300 pc \citep{2022Natur.601..334Z} and thus in front of the cluster. This observation implies that the core of IC348 is in an even more super-virial state than our conservative estimates suggest. Thus, we conclude that it is highly unlikely that the young Perseus groups formed in a virial state. Our independent estimates of the dynamical state of the Perseus groups confirm the findings of \citet{2015ApJ...807...27C} about the super-virial state of IC348 and extend them to the rest of the groups. We interpret the super-virial states and large fractions of gravitationally unbound stars in the young ($\lesssim$10 Myr) Perseus groups as evidence that these groups formed through a hierarchical star-formation mechanism \citep[e.g.,][]{2012MNRAS.426.3008K} rather than a monolithic one \citep[e.g.,][]{1991ASPC...13....3L}. For more details about the differences between these star-formation scenarios, see \citet{2018MNRAS.475.5659W} and the references therein. Alcaeus is an interesting physical group with 10-30\% of its stars in an energetically bound state and with a steep CEDF (see Fig. \ref{fig:cedfs}). It also has a tight velocity dispersion ($0.76\pm0.21\,\rm{km\, s^{-1}}$, see Table \ref{table:kinematic_indicators}) similar to those of the well-known Pleiades \citep[$0.8\,\rm{km\, s^{-1}}$,][]{2017A&A...598A..48G} and Coma Berenices \citep[$0.89\pm0.10\,\rm{km\, s^{-1}}$,][]{Olivares2022} open clusters. Therefore, we conclude that this group is a young open cluster in the process of disruption. We notice that its kinematic indicators show hints of contraction. Although this contraction is not significant, if confirmed, it may be explained by either unidentified substructures or contaminants. Future work is warranted. \subsection{Star-formation history} \label{discussion:history} In this section, we use our results to discuss some of the star-formation histories proposed in the literature. Afterwards, we propose our own.
The results that we have collected in this work can be categorised into three major topics: identified physical groups, isochrone ages, and dynamical states. The similar, although not identical, values of the ages, distances, and kinematics of the Perseus groups make it unlikely that all these groups formed randomly out of the same star-formation event and arrived at this display of properties by chance. The observed spatial (see Fig. \ref{fig:3d}), temporal (see Fig. \ref{fig:relative_ages}), energetic (see Fig. \ref{fig:cedfs}), and mass distribution (see Fig. \ref{fig:mass}) gradients all indicate that a latent (hidden) process links the star-formation history of the Perseus groups. Whether this latent process was a supernova explosion, the feedback from nearby OB associations, triggering from within the region itself, or another unknown phenomenon remains to be explained. We notice that our results are insufficient to link the star-formation history of the Perseus groups to that of other nearby star-forming regions, as suggested in the literature. Thus, we can neither confirm nor reject the star-formation histories proposed by \citet{1999AJ....117..354D}, \citet{2021ApJ...919L...5B}, \citet{2022Natur.601..334Z}, \citet{2022ApJ...936...23W}, or \citet{2022AJ....164..125L}. On the contrary, the star-formation histories proposed by \cite{1998ApJ...497..736H}, \cite{2002A&A...384..145B}, \cite{2021MNRAS.503.3232P}, \cite{2021ApJ...917...23K}, and \cite{2022AJ....164...57K} are related, up to a certain point, to the intrinsic properties of the region and thus can be scrutinised in the light of the evidence gathered here. We now discuss these works. \cite{2022AJ....164...57K} discuss two scenarios for the formation of the Perseus regions that are related to the internal properties of the groups (a third scenario, associated with the Per-Tau shell, is also discussed): a past supernova and cloud-cloud interaction. Those authors concluded that although the region shows evidence of expansion, it is unlikely that it resulted from the triggering of a supernova, given that the observed velocities are not consistent with such an origin. Our results show (see Table \ref{table:groups_mean}) that the velocities of the PerOB2a (i.e., the core and halo of IC348 and Heleus) and PerOB2b (i.e., Alcaeus, Autochthe, NGC1333 and Gorgophone) super-groups have distinct directions but similar magnitudes, with the maximum relative difference amongst the velocities of all the groups being only $4\,\rm{km\,s^{-1}}$. Concerning the cloud-cloud interaction scenario, the aforementioned authors argue that the super-groups PerOB2a and PerOB2b may have originated from two distinct clouds that independently started forming stars when they were in close proximity, a scenario that is compatible with the spatial and kinematic properties derived here. Those authors further mention that, after the initial burst, the two super-groups continued forming stars, resulting in today's similar ages for all the groups. Although our age estimates are systematically older than those of the previous authors (see Sect. \ref{results:isochrones} and Sect. \ref{discussion:isochrones}), we observe clear age gradients in the groups of PerOB2a and PerOB2b. Finally, after highlighting the difficulty of measuring the degree of possible mutual triggering \citep[e.g.,][]{2015MNRAS.450.1199D} between these two super-groups, they stress that there appears to be at least some degree of mutual influence.
\cite{2021ApJ...917...23K} suggested that given the kinematic similarities between Per 1 and Per 2 (which roughly correspond to PerOB2a and PerOB2b of \citealt{2022AJ....164...57K}), these super-groups most likely formed in the same star-forming process (see Sect. \ref{intro:history}). The spatial and kinematic results that we gather here support this conclusion. We consider it unlikely that two unrelated star-forming events produced the observed similar spatial, kinematic, and age properties of the Perseus region. The aforementioned authors further argue that, given the lack of an age gradient and the considerable time lag between the older (Per B, $\sim$17 Myr) and younger (Per A, $\sim$5 Myr) populations of the two super-groups, a continuous star-formation process is unlikely to have been at work. Instead, they proposed a scenario in which the formation of the older generation (Per B) dispersed part of the gas but did not entirely prevent the formation of the younger generation (Per A). Our results contradict this scenario because we observe age gradients in both super-groups. On the one hand, in Per 2, we have one intermediate-age generation, comprising Heleus and the halo of IC348, both formed 5 Myr ago, and a younger generation constituted by the core of IC348, which formed 3 Myr ago. On the other hand, in Per 1, there are three generations with a clear age gradient: Alcaeus at 10 Myr, Gorgophone at 7 Myr, and NGC1333 and Autochthe both at 3 Myr. \cite{2021MNRAS.503.3232P} proposed that the older groups (Alcaeus and Heleus) formed closer to the Galactic plane while the younger ones formed at higher latitudes. They also point out that NGC1333 and Autochthe are part of the same star-formation event. Both these conclusions are supported by our spatial and age results. These authors also mentioned that only NGC1333, Autochthe, and IC348 show evidence of continuing star formation. For this latter conclusion, we have no further evidence. \cite{2002A&A...384..145B} considered that the Perseus region constitutes an example of propagated star formation, which started at the edge of Auriga 30 Myr ago, continued to Per OB2a 10 Myr ago, and is now in progress in the southern region where IC348 is located. Our results support these two latter steps, with Alcaeus at 10 Myr and the core of IC348, NGC1333, and Autochthe at 3 Myr. Finally, \cite{1998ApJ...497..736H} proposed a four-step scenario (see Sect. \ref{intro:history}), which is fully supported by the results we gather here. We confirm the following points: a) continuous star formation in the region for at least the last 10 Myr, with Autochthe, NGC1333, and the core of IC348 being the youngest examples; b) $\zeta$ Per and $o$ Per are (4$\sigma$ astrometric) members of Alcaeus, a group formed 10 Myr ago; c) there are low-luminosity members of Alcaeus spread over a large region; d) IC348 is composed of two physical groups: an old 5 Myr halo and a young 3 Myr core. We now propose a star-formation history that closely follows that outlined by \cite{1998ApJ...497..736H}. We highlight the fact that our history is descriptive rather than phenomenological, and that it is based on the ages and kinematics of the identified physical groups.
To arrive at a phenomenological description of the Perseus star-formation history, we still require precise age determinations and the application of methods similar to the ones used here to a wider sky region encompassing the other actors in the scene: the Taurus star-forming region and its connecting bridge with Perseus. Future steps will be taken to address these points. \subsubsection{The first generation} Alcaeus and Gorgophone constitute the first generation of stars in the Perseus region. Alcaeus formed approximately 10 Myr ago, followed by the core and halo of Gorgophone 7 Myr ago. The brightest members of Alcaeus, $\zeta$ Per and $o$ Per, are B-type stars with ages of $12.6\pm1.5$ Myr and $11.1\pm0.5$ Myr, respectively, according to \citet{2011MNRAS.410..190T}. These ages are compatible with our 10 Myr age estimate for the group. Most likely, the winds of this first generation swept up part of the gas and dust from the molecular cloud, resulting in the observed lowest extinction values of about $A_v\sim1$ mag (see Fig. \ref{fig:av}). The birth of this generation is an example of high-mass star formation in a clustered environment (i.e., Alcaeus) together with low-mass star formation in a sparse one (i.e., the core and halo of Gorgophone), which has also been observed in other star-forming regions \citep[e.g.,][]{2022ApJ...937...46K,2022AJ....163..266L} and is predicted by the hierarchical star-formation scenario \citep{2018MNRAS.475.5659W}. We notice that $\zeta$ Per has been related to the Per OB2 association as a runaway member \citep[e.g.,][]{1999AJ....117..354D,1999Ap.....42..247M}, a member with discrepant radial velocity \citep{2003A&A...402..587S}, or a member of the trapezium-type system ADS 2843 \citep{2004RMxAC..21..195A,2018MNRAS.481.3953A}, which is located in the central region of Alcaeus. \subsubsection{The second generation} Heleus and the halo of IC348 constitute the second generation of Perseus stars. Although these two groups are separated by $\sim$54 pc, they have a similar age and direction of their velocity vectors (see Fig. \ref{fig:3d} and Table \ref{table:groups_mean}), which indicates that these two groups were probably born from the same star-formation event. The origin of Heleus may also be related to the $\sim$5 Myr, elongated ($\sim$100 pc) but more distant ($\sim$400 pc) Barnard 5 group identified by \citet{2022AJ....164..125L}. However, we currently lack evidence to relate the origin of these two groups. \subsubsection{The third generation} Finally, Autochthe, NGC1333, and the core of IC348 make up the third and most recent generation of stars. While NGC1333 and the core of IC348 show some hints that they are still contracting (see Table \ref{table:kinematic_indicators}), the value of this contraction is not significant enough to claim this effect. NGC1333 and Autochthe have similar space velocities (see Fig. \ref{fig:3d} and Table \ref{table:groups_mean}) and ages. However, the space velocity of the core of IC348 is different from that of the latter two groups (see Fig. \ref{fig:3d} and Table \ref{table:groups_mean}). Thus, we conclude that the core of IC348 formed in a coeval but rather different, possibly unrelated or parallel \citep[as suggested by][]{2021ApJ...917...23K}, star-formation episode from the one that gave birth to NGC1333 and Autochthe, with these latter two most likely representing the youngest star-formation episode.
\subsubsection*{Internal triggering?} Internal triggering, or self-propagated star formation, has been proposed as a mechanism to explain the origin of the Perseus groups \citep[e.g.,][]{2002A&A...384..145B}. Although a clear demonstration of triggered star formation is extremely complex and elusive \citep[e.g.,][]{2015MNRAS.450.1199D}, the Perseus region gathers the following common indicators of internal triggering. Evidence of heating from B stars, embedded protostars, and outflows was found by \citet{2016ApJ...823..102C}. Outflows have also been found by \cite{2010ApJ...715.1170A} and \cite{2013ApJ...774...22P}. These outflows affect the regulation of the star-formation process, given that they can intensify or diminish it, particularly in the environment surrounding the active star-forming regions. Our results show (see Fig. \ref{fig:3d}) that the super-group PerOB2b (i.e., Alcaeus, Gorgophone, NGC1333 and Autochthe) follows a clear spatio-temporal vector gradient that points away from Alcaeus (10 Myr) and towards NGC1333 and Autochthe (3 Myr), passing through Gorgophone (7 Myr). PerOB2a (Heleus and the core and halo of IC348) also follows a clear age gradient but lacks the neat spatial relation observed in PerOB2b. Although the previous evidence is not conclusive enough to claim internal triggering in either PerOB2a, PerOB2b, or the entire Perseus region, it nonetheless represents a step forward in understanding the star-formation history of this region. \section{Conclusions} \label{conclusions} We applied Bayesian methodologies to the public \textit{Gaia}, APOGEE, 2MASS, PanSTARRS, and SIMBAD catalogues to identify and characterise the physical groups of the Perseus star-forming region. We found 1052 candidate members distributed into seven physically distinct populations: the core and halo of IC348, Autochthe, Alcaeus, Gorgophone, Heleus, and NGC1333. Gorgophone is a new kinematic group composed of a core and a halo. Our new list increases the number of candidate members from the literature by 31\% in the Perseus region analysed here. The following are our most important conclusions. \begin{itemize} \item{Kinematics}. We propose that Alcaeus is an open cluster in a state of disruption. On the other hand, the internal velocities of NGC1333 and the core of IC348 show the largest, although not significant, signals of contraction. In all the Perseus groups, the evidence of internal rotation is not conclusive. \item{Age}. Our age estimates are compatible with those from the literature. We notice that neglecting or underestimating the contribution of the extinction in this partly embedded star-forming region may result in age estimates that are younger by up to a factor of two, such as those determined by \citet{2021MNRAS.503.3232P} and \citet{2022AJ....164...57K}. \item{Mass}. The mass distribution of NGC1333 shows no over-abundance of low-mass stars compared to that of the core of IC348. This contradicts previous claims about environmental differences in the initial mass function \citep[e.g.,][]{2013ApJ...775..138S}. On the contrary, the mass distributions of the young Perseus groups are broadly compatible with that of \citet{2005ASSL..327...41C}, although with distinct features. \item{Extinction}. We found evidence of dust grain growth in NGC1333 and Autochthe, as previously reported by \citet{2016ApJ...826...95C} for the Perseus clouds. \item{Dynamical state}. All of the Perseus groups are in a super-virial state, with CEDFs showing large fractions of energetically unbound stars.
These findings, together with the lack of clear expansion patterns, support the hierarchical star-formation scenario \citep[e.g.,][]{2018MNRAS.475.5659W} over the monolithic one. \item{Star-formation history}. The Perseus region contains stars from at least three generations, which supports the star-formation scenario proposed by \citet{1998ApJ...497..736H}. \end{itemize} \subsection*{Caveats and future perspectives} Thanks to the unprecedented quality and abundance of the \textit{Gaia} data in combination with our comprehensive Bayesian methodologies, the traditional uncertainty sources in mass determinations, like distance, extinction, completeness, and membership status, are now minimised. However, the following issues remain. Age continues to be the largest source of uncertainty for the analysis of the initial mass function and star-formation history of the Perseus region. Precise age determinations, like those provided by dynamical trace-back analyses \citep{2020A&A...642A.179M,2022A&A...667A.163M}, are the next mandatory step before attempting to construct a phenomenological star-formation history of the region. A phenomenological model for the star-formation history of the Perseus region also demands a joint analysis of its neighbouring structures (i.e., Taurus, Auriga, and the Pleiades). This analysis will be the next step to linking the origin of the Perseus region to large-scale structures such as the Per-Tau shell. However, the disentanglement of physically coherent structures in large sky regions is a methodological challenge for which our team has taken but the first steps \citep[e.g.,][]{Olivares2022}. The incompleteness limits of the \textit{Gaia} data still prevent sound comparisons of the brown dwarf ratios in the Perseus groups, particularly between IC348 and NGC1333. Deep, wide, and complete astrophotometric surveys, such as those provided by the COSMIC-DANCe project \citep[e.g.,][]{2013A&A...554A.101B,2015A&A...577A.148B,2022NatAs...6...89M}, are still needed. Our team is currently working on collecting and curating these data. \begin{acknowledgements} We thank the anonymous referee for the useful comments. JO acknowledges financial support from "Ayudas para contratos postdoctorales de investigación UNED 2021". P.A.B. Galli acknowledges financial support from São Paulo Research Foundation (FAPESP) under grants 2020/12518-8 and 2021/11778-9. This research has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 682903, P.I. H. Bouy), and from the French State in the framework of the "Investments for the future" Program, IdEx Bordeaux, reference ANR-10-IDEX-03-02. We gratefully acknowledge the support of NVIDIA Corporation with the donation of one of the Titan Xp GPUs used for this research. We acknowledge Anthony Brown, the Gaia Project Scientist Support Team and the Gaia Data Processing and Analysis Consortium (DPAC) for providing the \textit{PyGaia} code. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard \& Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. \end{acknowledgements} \bibliographystyle{aa}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Continuum quantum gauge theories are defined by their Lagrangian densities, which are functions of fields living in 4D Minkowski space-time. In the path-integral representation physical observables are averages over possible field configurations weighted with an exponential factor depending on the action. By performing a Wick rotation to imaginary time the Minkowski metric becomes Euclidean. Discretization of this 4D Euclidean space results in lattice gauge theory (LGT) -- a regularization of the original continuum theory, which allows one to address non-perturbative problems. For a textbook see, for instance, Ref.~\cite{Rothe}. Physical results are recovered in the quantum continuum limit $a\to 0$, where $a$ is the lattice spacing measured in units proportional to a diverging correlation length. U(1) pure gauge theory, originally introduced by Wilson~\cite{Wi74}, is a simple 4D LGT. Nevertheless, determining its phase structure beyond reasonable doubt has turned out to be a non-trivial computational task. One encounters a phase transition which is believed to be first-order on symmetric $N_s^4$ lattices, e.g., \cite{JeNe83,ArLi01,ArBu03}. For a finite temperature $N_s^3\times N_{\tau}$, $N_{\tau}<N_s$ geometry the situation is less clear: Either second-order for small $N_{\tau}$ and first-order for large $N_{\tau}$ \cite{VeFo04}, or always second-order, possibly corresponding to a novel renormalization group fixed point~\cite{BeBa06}. In 3D, U(1) gauge theory is confining for all values of the coupling on symmetric $N_s^3$ lattices \cite{Polyakov:1976fu,Gopfert:1981er}, while in the finite temperature $N_s^2\times N_{\tau}$, $N_{\tau}<N_s$ geometry a deconfining transition of Berezinsky-Kosterlitz-Thouless type \cite{Berezinsky:1970fr,Kosterlitz:1973xp} is expected; see \cite{Borisenko:2008sc} for recent numerical studies. In lattice gauge theory one can evaluate Euclidean path integrals stochastically by generating an ensemble of field configurations with Markov chain Monte Carlo (MCMC). In this paper we report the MCMC techniques we used in~\cite{BeBa06}. They are based on multicanonical (MUCA) simulations~\cite{BeNe91,Bbook} supplemented by a Wang-Landau (WL) recursion \cite{WaLa01}, both employed in continuum formulations. For updating we use the biased Metropolis heatbath algorithm (BMHA) of~\cite{BaBe05} combined with overrelaxation~\cite{Ad81}. Observables include the specific heat, Polyakov loops and their structure factors (SFs) for low-lying momenta. For the analysis of these data binning is used to render autocorrelations negligible, and a logarithmically coded reweighting procedure calculates averages with jackknife error bars. Our program package $$ {\tt STMC\_U1MUCA.tgz} $$ can be downloaded from the web at \smallskip \centerline{\tt http://www.hep.fsu.edu/\~\,\!berg/research\ .} \smallskip Unfolding of {\tt STMC\_U1MUCA.tgz} with\ \ \ {\tt tar\ -zxvf}\ \ \ creates a tree structure with root directory {\tt STMC\_U1MUCA}. Folders on the first level are {\tt ExampleRuns}, {\tt ForProg} and {\tt Libs}. Copies of all main programs are found in {\tt ForProg}, besides in the subfolders of {\tt ExampleRuns}. Fortran functions and subroutines of our code are located in the subfolders of {\tt Libs}, which are {\tt Fortran}, {\tt Fortran\_par}, {\tt U1}, and {\tt U1\_par}.
Routines in {\tt Fortran} and {\tt U1} are modular, so that they can be called in any Fortran program, while routines in the other two subfolders need user-defined parameter files, which they include at compilation. General purpose routines are in the {\tt Fortran} subfolders and to a large extent taken over from {\tt ForLib} of \cite{Bbook}, while routines specialized to U(1) are in the {\tt U1} folders. Parameter files are \begin{equation} \label{par_cano} \tt bmha.par,\ lat.par,\ lat.dat,\ mc.par \end{equation} for canonical simulations and in addition \begin{equation} \label{par_muca} \tt sf.par,\ u1muca.par \end{equation} for SF measurements and MUCA simulations with WL recursion. The main programs and the routines of the subfolders {\tt Fortran\_par} and {\tt U1\_par} include common blocks when needed. These common blocks with names {\tt common*.f} are also located in the {\tt Fortran\_par} and {\tt U1\_par} subfolders. This paper is organized as follows. In Sec.~\ref{sec_U1cano} we define U(1) LGT and introduce the routines for our BMHA and for measurements of some observables. Sec.~\ref{sec_U1MUCA} is devoted to our code for MUCA runs and to the analysis of these data. Sections~\ref{sec_U1cano} and~\ref{sec_U1MUCA} both finish with explicit example runs, where Sec.~\ref{sec_U1MUCA} uses as input the action parameter estimates obtained in Sec.~\ref{sec_U1cano}. A brief summary and conclusions are given in the final section~\ref{sec_conclusions}. \section{Canonical simulations \label{sec_U1cano}} Our code is written for a variable dimension $d$ and is supposed to work for $d\ge 2$. However, its use has mainly been confined to $d=4$, to which we restrict most of the subsequent discussion. After U(1) gauge theory is discretized, its fundamental degrees of freedom reside on the links of a 4D hypercubic lattice, which we label by $x,\mu$: $x$ is a 4D vector giving the location of a site, and $\mu =1,2,3,4$ is the direction of the link originating from this site; $\mu=4$ corresponds to the temporal direction of extension $N_\tau$ and $\mu=1,2,3$ to the spatial directions of extension $N_s$. The system contains $N_s^3\times N_\tau$ sites and $N_s^3\times N_\tau\times 4$ degrees of freedom $U_{x,\mu}$ that belong to the U(1) gauge group, which we parametrize by \begin{equation}\label{Utheta} U_{x,\mu}=\exp(i\theta_{x,\mu}),\,\,\,\,\, \theta_{x,\mu}\in[0,2\pi)\ . \end{equation} In our code we use Wilson's action \begin{equation}\label{WiPlAct} S=\sum_x\sum_{\mu=1}^{4}\sum_{\nu<\mu} {\rm Re} (U_{x,\mu}U_{x+\hat\mu,\nu}U^+_{x+\hat\nu,\mu}U^+_{x,\nu})\ . \end{equation} The product $U_{x,\mu}U_{x+\hat\mu,\nu}U^+_{x+\hat\nu,\mu}U^+_{x,\nu}$ is taken around a \textit{plaquette}, an elementary square of the lattice. In 4D each link participates in 6 plaquettes ($2\,(d-1)$ in $d$ dimensions). Products such as $U_{x+\hat\mu,\nu}U^+_{x+\hat\nu,\mu} U^+_{x,\nu}$ are called \textit{staples}. In canonical simulations one generates an ensemble of configurations weighted with $\exp(\beta S)$, the Boltzmann factor of a system with energy $-S$, which is in contact with a heatbath at inverse temperature $\beta$. Here $\beta$ is the inverse temperature of the statistical mechanics system on the lattice and not the physical temperature of the LGT. The latter is given by the temporal extent of the lattice: $T=1/(aN_\tau)$. An important property of the action (\ref{WiPlAct}) is \textit{locality}: For a link update only its interaction with a small set of neighbors is needed.
The part of the action involving a link $U_{x,\mu}$ being updated (with all other links frozen) is: \begin{eqnarray}\label{ActLink} S(U_{x,\mu})&=& {\rm Re}\left\{U_{x,\mu}\left[ \sum_{\nu\neq\mu}U_{x+\hat\mu,\nu}U^+_{x+\hat\nu,\mu}U^+_{x,\nu} \right.\right.\nonumber\\ &+&\left.\left.\sum_{\nu\neq\mu} U^+_{x+\hat\mu-\hat\nu,\nu} U^+_{x-\hat\nu,\mu}U_{x-\hat\nu,\nu}\right]\right\}. \end{eqnarray} The sum in square brackets $[...]$ runs over 6 staples and is evaluated before updating the link. We denote it \begin{eqnarray}\label{SumSt} U_\sqcup &=& \alpha\exp(i\theta_\sqcup)\\ \nonumber &=& \sum_{\nu\neq\mu}\left( U_{x+\hat\mu,\nu}U^+_{x+\hat\nu,\mu}U^+_{x,\nu} + U^+_{x+\hat\mu-\hat\nu,\nu}U^+_{x-\hat\nu,\mu}U_{x-\hat\nu,\nu}\right). \end{eqnarray} To simplify the notation we drop the $x,\mu$ subscripts of the link: \begin{equation} S(U)\sim {\rm Re}(UU_\sqcup). \end{equation} Thus the distribution \begin{equation} P(U)\sim e^{\,\beta S}=e^{\,\beta{\rm Re}(UU_\sqcup)} \end{equation} needs to be sampled. In angular variables \begin{equation} {\rm Re}(UU_\sqcup)=\alpha{\rm Re}\left( e^{i(\theta+\theta_\sqcup)}\right)=\alpha\cos(\theta+\theta_\sqcup) \end{equation} and \begin{eqnarray} P(\theta)d\theta\sim e^{\,\beta\alpha\cos(\theta+\theta_\sqcup)} d\theta=e^{\,\beta\alpha\cos(\varphi)}d\varphi,~~ \varphi=\theta+\theta_\sqcup.\nonumber \end{eqnarray} The final probability density function (PDF) is \begin{equation}\label{Pphi} P(\alpha;\varphi)\sim e^{\,\beta\alpha\cos\varphi}. \end{equation} It is straightforward to implement the Metropolis algorithm \cite{Me53} for (\ref{Pphi}). A value $\varphi_{new}$ is proposed uniformly in the interval $[0,2\pi)$ and then accepted with probability \begin{equation}\label{Pmet} \min\left\{1,\frac{P(\alpha;\varphi_{new})} {P(\alpha;\varphi_{old})}\right\}\,. \end{equation} However, it has a low acceptance rate in the region of interest ($0.8\leqslant\beta\leqslant1.2$). An efficient heatbath algorithm (HBA) is hard to design since the cumulative distribution function (CDF) \begin{equation}\label{FPphi} F_P(\alpha;\varphi)= \frac{\int_0^\varphi P(\alpha;\varphi')d\varphi'} {\int_0^{2\pi} P(\alpha;\varphi')d\varphi'} \end{equation} is not easily invertible, because it cannot be represented in terms of elementary functions. Nevertheless, two variations of heatbath algorithms for U(1) do exist \cite{We89,HaNa92}. \begin{figure} \includegraphics[width=\columnwidth]{figs/u1bmha_cdf} \caption{The cumulative distribution function $F_P(\alpha=3.9375;\varphi)$ for an arbitrarily chosen value of the parameter $\alpha$.} \label{fig_FP} \end{figure} \subsection{Biased Metropolis heatbath algorithm} The MCMC updating of our code relies on a BMHA \cite{BaBe05}, which approximates heatbath probabilities by tables as described in the following. The updating variable $\varphi$ is drawn from some distribution $Q(\alpha;\varphi)$ and then accepted with probability \begin{equation}\label{PQmet} \min\left\{1,\frac{P(\alpha;\varphi_{new})} {P(\alpha;\varphi_{old})}\, \frac{Q(\alpha;\varphi_{old})} {Q(\alpha;\varphi_{new})}\right\}\,. \end{equation} This turns out to be a special case of the general acceptance probabilities introduced by Hastings \cite{Ha70}. One refers to the proposals as \textit{biased} when $Q(\alpha;\varphi_{old})/Q(\alpha;\varphi_{new}) \ne 1$ holds. For the heatbath algorithm the proposal probability $Q(\alpha;\varphi)$ is chosen to be identical to the target distribution $P(\alpha;\varphi)$, so that $\varphi_{new}$ is always accepted.
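For orientation, the following self-contained sketch (not part of the package; it uses the g77 {\tt rand} intrinsic in place of the package's random number generator) implements the plain Metropolis update (\ref{Pmet}) for the PDF (\ref{Pphi}) and prints the measured acceptance rate, the reference point against which the biased proposals improve:
\begin{verbatim}
C Sketch (not part of STMC_U1MUCA): plain
C Metropolis updates of a single variable phi
C with PDF P(phi) ~ exp(beta*alpha*cos(phi)).
      program metphi
      implicit none
      double precision pi,beta,alpha,phi,phinew
      integer it,nacpt
      real rand
      pi=4.d0*datan(1.d0)
      beta=1.0d0
      alpha=3.9375d0
      phi=0.d0
      nacpt=0
      do it=1,100000
C Propose phi_new uniformly in [0,2*pi).
         phinew=2.d0*pi*dble(rand(0))
C Accept with probability min{1,P(new)/P(old)}.
         if(dble(rand(0)).lt.dexp(beta*alpha*
     &      (dcos(phinew)-dcos(phi)))) then
            phi=phinew
            nacpt=nacpt+1
         endif
      enddo
      write(*,*) 'acceptance rate:',nacpt/1.d5
      end
\end{verbatim}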
In practice it is sufficient that $Q(\alpha;\varphi)$ is a good approximation of $P(\alpha;\varphi)$. We approximate the CDF (\ref{FPphi}) by a piece-wise linear function $F_Q(\alpha;\varphi)$. Compare Fig.~\ref{fig_FP} and Fig.~\ref{fig_FQ}. We partition the $F_Q$ axis into $n$ equidistant pieces ($n=16$ in the figures), which defines the values $\varphi^1,...,\varphi^n$ via the relation $F_Q(\alpha;\varphi^j)-F_Q(\alpha;\varphi^{j-1})=1/n$, and we call an interval $(\varphi^{j-1},\varphi^j]$ a bin. The approximated PDF \begin{equation}\label{Qmet} Q(\alpha;\varphi) = \frac{dF_Q}{d\varphi} = \frac{1}{n(\varphi^j-\varphi^{j-1})} = \frac{1}{n\Delta\varphi^j} \end{equation} is flat in each bin, and it is easy to generate $\varphi$ from $Q(\alpha;\varphi)$: One first picks a bin $j$ with uniform probability ($1/n$) and then generates $\varphi$ uniformly in this bin. \begin{figure} \includegraphics[width=\columnwidth]{figs/u1bmha_approx} \caption{The cumulative distribution function $F_Q(\alpha=3.9375;\varphi)$ which serves as an approximation of $F_P$. Compare to Fig.~\ref{fig_FP}.} \label{fig_FQ} \end{figure} We call this scheme BMHA because a Metropolis accept/reject step is used and a bias is introduced to enhance the acceptance rate by making the proposal probability an approximation of the one of the heatbath algorithm. When the discretization step goes to zero, \begin{equation} \frac{\Delta\varphi^{j_{new}}}{\Delta\varphi^{j_{old}}} \to\frac{1/P(\alpha;\varphi_{new})}{1/P(\alpha;\varphi_{old})}= \frac{P(\alpha;\varphi_{old})}{P(\alpha;\varphi_{new})} \end{equation} follows from (\ref{Qmet}), and the acceptance rate (\ref{PQmet}) approaches~1. $P(\alpha;\varphi)$ depends on an external parameter $0\leqslant\alpha \leqslant \alpha_{max}=2\,(d-1)$ and the inverse temperature $\beta$. We discretize $\alpha$ for the proposal probability $Q(\alpha;\varphi)$. Before updating a link, the code evaluates the sum of the staples $U_\sqcup$ and decomposes it into the magnitude $\alpha$ and phase $\theta_\sqcup$. At this stage $\alpha$ is then known. The following is a summary of the algorithm we use to generate the $\varphi$ variable with the PDF (\ref{Pphi}). These routines are located in {\tt Libs/U1\_par}. \subsubsection{Table generation} \begin{enumerate} \item Choose $m$ bins for the parameter axis $\alpha$ and $n$ bins for the variable $\varphi$. Two $m\times n$ arrays are needed. We take $m$ and $n$ to be powers of 2, because $n$ being a power of 2 speeds up finding $j_{old}$ (\ref{jold}), though $m$ can, in principle, be arbitrary. \item Calculate a discrete set of $\alpha$ values by \begin{equation} \alpha^i=\left(i-\frac{1}{2}\right)\, \frac{\alpha_{max}}{m},\,\,\,\,\,i=1,...,m. \end{equation} \item For each $\alpha^i$ evaluate \begin{equation} F_Q^i(\varphi)= \frac{\int_0^\varphi P(\alpha^i;\varphi')d\varphi'} {\int_0^{2\pi} P(\alpha^i;\varphi')d\varphi'} \end{equation} by numerical integration with $\beta = \texttt{beta\_table}$ as set in \texttt{mc.par}. The inverse temperature of the canonical simulation is denoted \texttt{beta} in the code. We reserve the possibility of having different values of the inverse temperature for the BMHA table generation and for the simulation. Of course, for $\tt beta\_table=beta$ the table approximates the heatbath distribution at {\tt beta}. When a range of inverse temperatures is used, as in MUCA simulations, one can tune the acceptance rate by playing with \texttt{beta\_table}.
The ranges that we used in multicanonical runs were narrow, so we were content with setting $\tt beta\_table = bmax$. \item Tabulate $F_Q^i(\varphi)$ by values $\varphi^{i,j}$ defined by \begin{equation} \frac{j}{n}=F_Q^i(\varphi^{i,j})~~\Leftrightarrow~~ \varphi^{i,j}=\left(F_Q^i\right)^{-1} \left(\frac{j}{n}\right) \end{equation} and the differences \begin{equation} \Delta\varphi^{i,j}=\varphi^{i,j}-\varphi^{i,j-1}\ . \end{equation} \end{enumerate} The common block {\tt common\_bmhatab.f}, which is listed below, stores the quantities $\varphi^{i,j}$, $\Delta\varphi^{i,j}$ and $\ln\Delta\varphi^{i,j}$, respectively:
\begin{verbatim}
C Table for BMHA.
      common/tabbma/ tabbm(nbm2,nbm1),
     & delbm(nbm2,nbm1),dbmln(nbm2,nbm1)
\end{verbatim}
where the parameters ${\tt nbm1}=m$ and ${\tt nbm2}=n$ are set in the file \texttt{bmha.par}:
\begin{verbatim}
c Biased Metropolis-HeatBath (BMHA) parameters:
      parameter(nbm1=2**5,
     & n2log2=7,nbm2=2**n2log2)
\end{verbatim}
\subsubsection{BMH updating} Our implementation of BMH updates is given in the routine {\tt u1\_bmha.f}. A call to {\tt u1\_bmha.f} performs one \textit{sweep}, which here is defined by updating each variable (U(1) matrix) once in sequential order. For each single BMH update it calls the subroutine {\tt u1\_bmha\_update.f}. Its functions are shortly described in the following. \begin{enumerate} \item After $\alpha$ and $\theta_\sqcup$ are calculated by the \texttt{u1\_getstaple} subroutine, determine the $\alpha$ bin $k={\rm Int} [m\,\alpha/\alpha_{max}]+1$, where ${\rm Int}[x]$ denotes rounding to the largest integer $\leqslant x$. \item For given $k$ determine to which bin the previous value $\varphi_{old}=(\theta_{old}+\theta_\sqcup)\mod 2\pi$ belongs (in the code the $\theta_{old}$ value is stored in the \texttt{aphase} array). This is done with the binary search \begin{equation} \label{jold} j_{old}\to j_{old}+2^i\cdot {\rm sign}(\varphi-\varphi^{k,j_{old}}), ~~i\to i-1\,, \end{equation} starting with $j_{old}=0$, $i=\log_2n-1$. \item Pick a new bin \begin{equation} j_{new}={\rm Int}[nr_1]+1\,, \end{equation} where $r_1$ is a uniform random number in [0,1). \item Pick a new value \begin{equation} \varphi_{new}=\varphi^{k,j_{new}}- \Delta\varphi^{k,j_{new}}r_2\,, \end{equation} where $r_2$ is a uniform random number in [0,1). \item Accept $\varphi_{new}$ with probability (\ref{PQmet}). \item If accepted, store $\theta_{new}=(2\pi+\varphi_{new}-\theta_\sqcup)\mod2\pi$ in the \texttt{aphase} array. \end{enumerate} For U(1) and SU(2) we found \cite{BaBe05} that $m=32$ and $n=128$ are large enough to achieve an acceptance rate of $\sim 0.97$. Thus, the BMHA achieves practically heatbath efficiency. It becomes important for cases where a conventional HBA is difficult to implement and/or computationally inefficient. \subsection{Overrelaxation} We use overrelaxation (OR) to decorrelate the system faster. OR algorithms were introduced by Adler; see \cite{Ad88} and references given therein. In the formulation of Ref.~\cite{Cr87} the idea is to generate a new value of the link matrix that lies as far as possible away from the old value without changing the action too much. This is done by reflecting the old matrix about the link matrix which maximizes the action locally. In U(1) LGT one reflects $\theta_{old}$ about the element $\theta_0$ which maximizes the PDF~(\ref{Pphi}). The $\varphi$ value that maximizes (\ref{Pphi}) is $\varphi_0=\theta_0+\theta_\sqcup=0$, so that $\theta_0=-\theta_\sqcup$.
Reflecting $\theta_{old}$ about $\theta_0=-\theta_\sqcup$ we find \begin{equation}\label{thetaOR} \theta_{new}=\theta_0-(\theta_{old}-\theta_0) = -2\theta_\sqcup-\theta_{old}\,. \end{equation} As $\theta_{new}$ (\ref{thetaOR}) does not change the action, OR constitutes in our case a \textit{microcanonical} update. Our implementation is the subroutine {\tt u1\_over.f}, which performs one overrelaxation sweep. In the code $\theta_{new}= (6\pi-2\theta_\sqcup-\theta_{old})\mod 2\pi$ (see the standalone sketch below). \subsection{Example runs} Short canonical simulations are needed to determine the action range for the multicanonical runs. We perform 1~BMHA sweep followed by 2~OR sweeps. Runs in 3D on $6^2\times 4$ lattices are prepared in the subfolders $${\tt C3D04t06xb1p0}~~{\rm and}~~{\tt C3D04t06xb2p0}~\ $$ of the folder {\tt ExampleRuns} and in 4D on $6^3\times 4$ lattices in $${\tt C4D04t06xb0p9}~~{\rm and}~~{\tt C4D04t06xb1p1}\ . $$ Parameters are set in the {\tt *.par} and {\tt lat.dat} files: the lattice size in {\tt lat.par} and {\tt lat.dat}, run parameters in {\tt mc.par}, and the BMHA table size in {\tt bmha.par}, which is kept the same for all runs. The general structure of our MCMC simulations is that outlined in \cite{Bbook}: Lattice set-up and initialization are done by the routine {\tt u1\_init.f}, followed by {\tt nequi} sweeps for equilibration, which do not record measurements. Afterwards $\tt nrpt\times nmeas$ measurements are carried out, each after {\tt nsw} sweeps (in \cite{Bbook} only $\tt nsw=1$). The 4D $\beta$ values embrace the pseudo-transition region of the $6^3\times 4$ lattice: $\beta=0.9$ lies in the disordered and $\beta=1.1$ in the ordered phase. To avoid divergence of the equilibration time with increasing lattice size, the start configuration has to match the phase: Ordered ($\tt istart=1$) in the ordered and disordered ($\tt istart=2$) in the disordered phase. The production program has to be compiled with $$\tt ../make77\ u1\_tsbmho.f$$ where the {\tt make77} file is located one level up in the tree. Besides Fortran~77 compilation, the {\tt make77} moves parameter files around, so that they are properly included by the subroutines which need them. The Fortran compiler defined in our {\tt make77} is g77. You may have to replace it with the one used on your computing platform. In the tcsh shell CPU times are recorded by running the {\tt a.out} executable with $$\tt ../CPUtime\ >\&\ fln.txt\ \& $$ in the background, where the file {\tt CPUtime} is also located one level up in the tree. While running, one can monitor the progress by looking up the {\tt progress.d} file. In test runs on an Intel E5405 2~GHz quad-core PC using g77 version 3.4.6 (Red Hat 3.4.6-4) the execution times for the prepared simulations were 3m35s in 4D and 41s in 3D. After a production run {\tt ana\_ts1u1.f} is used to analyze the action data. In the present context we only need the mean action values as input for the action ranges set in our multicanonical runs. With {\tt gnuplot h01.plt} or {\tt i01.plt} one plots the action histogram. The BMHA acceptance rate is also returned by {\tt ana\_ts1u1.f}. Further, the integrated autocorrelation length $\tau_{\rm int}$ is estimated, but the statistics may not always be sufficiently large for a good determination (the statistics needed to estimate the average action is much smaller \cite{Bbook}). The relevant parameters for calculating $\tau_{\rm int}$ have to be set in the analysis program, and a plot is obtained with gnuplot {\tt a1tau.plt}.
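As announced above, we add a minimal standalone sketch of the overrelaxation reflection (\ref{thetaOR}) (our illustration; the package routine {\tt u1\_over.f} performs a whole sweep over the lattice):
\begin{verbatim}
C Sketch of the microcanonical OR reflection:
C given the staple phase thsq and the old link
C phase thold (both in [0,2*pi)), return the
C reflected phase in [0,2*pi).
      function thnew(thsq,thold)
      implicit none
      double precision thnew,thsq,thold,pi
      pi=4.d0*datan(1.d0)
C theta_new = -2*theta_staple - theta_old; the
C shift by 6*pi keeps the dmod argument positive.
      thnew=dmod(6.d0*pi-2.d0*thsq-thold,2.d0*pi)
      return
      end
\end{verbatim}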
As rounding errors in floating point operations depend on the compiler, the actual numbers obtained for the average action on different computing platforms may vary within statistical errors. In our 4D runs they were \begin{eqnarray} {\tt actm/np} &=& 0.46942\,(18)~~{\rm at}~~\beta=0.9\,,\label{actm0p9}\\ {\tt actm/np} &=& 0.71728\,(14)~~{\rm at}~~\beta=1.1\,,\label{actm1p1} \end{eqnarray} where $n_p$ is the number of plaquettes. In approximate form, this range across the phase transition will be used in the {\tt u1muca.par} file of our subsequent MUCA simulation: \begin{eqnarray} \label{actmin} {\tt actmin}=S_{\min} &=& {\tt actm}~~{\rm at}~~\beta_{\min}=0.9\,, \\ {\tt actmax}=S_{\max} &=& {\tt actm}~~{\rm at}~~\beta_{\max}=1.1\,. \label{actmax} \end{eqnarray} The procedure for running the 3D examples is the same as for the 4D runs described above. To cover the pseudo-transition region on the $6^2\times 4$ lattice we set the inverse temperatures for the canonical runs at $\beta=1.0$ in the disordered and $\beta=2.0$ in the ordered phase. The choice of a broader range of temperatures is appropriate because of finite-size effects that scale logarithmically with the lattice size. The average action values obtained are \begin{eqnarray} {\tt actm/np}&=&0.47495\,(27)~~{\rm at}~~\beta=1.0\,,\label{actm1p0}\\ {\tt actm/np}&=&0.81079\,(15)~~{\rm at}~~\beta=2.0\,.\label{actm2p0} \end{eqnarray} \section{Multicanonical simulations} \label{sec_U1MUCA} In MUCA \cite{BeNe91} simulations one samples the phase space with weights which are a {\it working estimate} \cite{Bbook} of the inverse spectral density up to an overall factor. This makes the energy histogram approximately flat and allows for efficient reconstruction of canonical expectation values in a range of temperatures $(\beta_{min},\beta_{max})$. This has many applications and is especially useful when studying the vicinity of first-order phase transitions, where canonical histograms exhibit a double-peak structure, suppressing tunneling between phases even for relatively small systems (e.g., see \cite{BBD08} for a recent study of 3D Potts models). For second-order phase transitions the usefulness of MUCA simulations has been discussed in the context of cluster algorithms~\cite{BeJa07}. The most important and successful applications target complex systems like, for instance, biomolecules~\cite{Ya08}. The MUCA method consists of two parts \cite{Bbook}: A recursion to determine the weights and a simulation with frozen weights. For the first part we have programmed a continuous version of the WL recursion \cite{WaLa01}. \subsection{Wang-Landau recursion} Because the WL code of this section is programmed for generic use in statistical physics systems, we use the energy $E$ instead of the action notation. In earlier recursions for MUCA parameters (see, e.g., \cite{Bbook}) one was iterating with a weight function of the energy, $w(E)$, inversely proportional to the number of histogram entries at $E$. In contrast to that, the WL algorithm \cite{WaLa01} increments the weight multiplicatively, i.e., additively in logarithmic coding: \begin{equation} \label{WLupd} \ln w(E^{''}) \to \ln w(E^{''}) - a_{WL}\,,~~a_{WL}>0 \end{equation} at every WL update attempt $E\to E'$, where ${\tt addwl}= a_{WL}$ in our code. Here $E^{''}=E$ when the update is rejected and $E^{''}=E'$ when the update is accepted. After a sufficiently flat histogram has been sampled (we come back to this point), the WL parameter is refined, \begin{equation} \label{WLrec} a_{WL} \to a_{WL}/2\ .
\end{equation} In its original version the WL algorithm deals with discrete systems like Ising or Potts models, for which histogram entries correspond naturally to the discrete values of the energy. However, for continuous systems binsizes are free parameters and we have to deal with their tuning. We discretize the U(1) action range into bins of a size $\triangle S$ large enough that one update cannot jump over a bin. The change of the action in a single link update is bounded by $\triangle S < 4\,(d-1)$, because there are $2\,(d-1)$ plaquettes attached to a link and the plaquette action takes values in the interval $[-1,+1]$, which gives another factor~2. In the program this value $\triangle S$ is defined as {\tt delE}. Next, we are {\it not} using constant weights over each bin, but instead a constant temperature interpolation, as already suggested in \cite{BeNe91}. One WL update has two parts: \begin{enumerate} \item A MUCA update of the energy (in our U(1) code the action) using the weights at hand. \item A WL update (\ref{WLupd}) of the MUCA weights. \end{enumerate} In our code a WL sweep is done by a call to \begin{equation} \label{WLsweep} \tt u1\_mucabmha.f\,, \end{equation} which updates link variables in sequential order through calls to the included routine {\tt u1\_update\_mubmha.f}. This routine generalizes BMH updating to the situation of MUCA weights, which is relatively straightforward (see the code for details) and increases efficiency compared to MUCA Metropolis simulations by a factor of 3 to~5. It calls three modularly coded routines: the functions {\tt fmucaln.f} and {\tt betax.f}, which calculate weights and $\beta$ values as needed for the BMHA, and the subroutine {\tt wala\_updt.f}. After an action update is done, the WL update (\ref{WLupd}) is performed by the call to {\tt wala\_updt.f}, which is at the heart of our modifications of the WL algorithm. The basic point is that {\tt wala\_updt.f} does not only iterate the number of histogram entries, but also the mean value within each histogram bin. The relevant lines of that code are listed below.
\begin{verbatim}
C Put addwl <= zero in 2. part of MUCA.
      if(addwl.gt.zero) then
        wln(ix)=wln(ix)-addwl
        xwl(ix)=(hx(ix)*xwl(ix)+x)
      endif
      hx(ix)=hx(ix)+one
      hx0(ix)=hx0(ix)+one
      if(addwl.gt.zero) then
        xwl(ix)=xwl(ix)/hx(ix)
      else
        xmu(ix)=xmu(ix)+x
      endif
\end{verbatim}
Besides performing the update (\ref{WLupd}), the routine tracks the histogram entries in the array {\tt hx} and, for ${\tt addwl}>0$, the mean value of {\tt x} within bin {\tt ix} as {\tt xwl(ix)}. For $\tt addwl\le 0$ MUCA simulations with fixed weights are performed. Then the {\tt xwl(ix)} values are kept constant, but the array {\tt xmu} allows one to calculate at a later stage the average within a histogram bin. The {\tt xwl(ix)} values are essential entries for the Fortran functions {\tt fmucaln.f} and {\tt betax.f}. Logarithmic WL weights {\tt wln(ix)} correspond to the mean value positions. For general {\tt x} values the function {\tt fmucaln.f} interpolates the logarithmic weights linearly from the {\tt wln(ix)} weights of the two neighboring mean values:
\begin{verbatim}
      x=(E-Emin)/delE
      ix=1+int(x)
      if(x.gt.xwl(1).and.x.lt.xwl(ixmax)) then
        if(x.gt.xwl(ix)) then
          ix1=ix+1
        else
          ix1=ix-1
        endif
        w1=abs(xwl(ix1)-x)
        w2=abs(xwl(ix)-x)
        fmucaln=(w1*wln(ix)+w2*wln(ix1))/(w1+w2)
      elseif(x.le.xwl(1)) then
      ...
\end{verbatim}
With this input the function {\tt betax} calculates the $\beta$ values used by the BMHA:
\begin{verbatim}
      if(x.le.xwl(ix)) then
        if(ix.eq.1) then
          betax=bmax
        else
          betax=(wln(ix-1)-wln(ix))/
     &          (xwl(ix)-xwl(ix-1))/delE
        endif
      else
        if(ix.eq.ixmax) then
          betax=bmin
        else
          betax=(wln(ix)-wln(ix+1))/
     &          (xwl(ix+1)-xwl(ix))/delE
        endif
      endif
\end{verbatim}
Although these routines are modularly coded, the relevant arrays and variables are transferred into the U(1) code through the common block {\tt common\_u1muca.f}.
\begin{verbatim}
C wln     MUCA logarithmic weights (w=exp(wln)).
C hx      Total count of histogram entries.
C hx0     For reconstruction of entries during
C         one recursion segment.
C xwl     Continuously updated mean values of
C         histogram bins
C         (used with MUCA weights).
C xmu     Keeps track of mean values of
C         histogram bins during
C         fixed weights MUCA runs.
C addwl   Wang-Landau parameter.
C flat    Flatness of the histogram as measured
C         by hist_flat.f.
C irup1   Start of WL recursion loop.
C irec    Number of WL recursions done.
C mucarun Number of MUCA run
C         (0 for WL recursion, then 1, 2, ...).
C ntun    Number of tunneling (cycling) events.
C ltun0   Logical used when incrementing ntun.
      common/wln/wln(ixmax),hx(ixmax),
     & hx0(ixmax),xwl(ixmax),addwl,xmu(ixmax),
     & flat,irup1,irec,mucarun,ntun,ltun0
\end{verbatim}
The common block is on the specialized U(1) level, because the array dimension $$\tt ixmax = Int \left[(actmax-actmin)\,(np/(4(nd-1)))\right] $$ depends on the number {\tt np} of plaquettes of the U(1) lattice. The relevant parameters are set by the user in {\tt u1muca.par}. Once the system has cycled from the minimum (\ref{actmin}) to the maximum (\ref{actmax}) action value and back, \begin{equation} \label{cycling} \tt actmin\ \longleftrightarrow\ actmax\ , \end{equation} a WL recursion (\ref{WLrec}) is attempted. The {\it cycling} or {\it tunneling} condition (\ref{cycling}) ensures that the range of interest has indeed been covered. In addition we require that the sampled action histogram is sufficiently flat. The flatness is defined by \begin{equation} \label{flat} {\tt flatness} = h_{\min}/h_{\max}\,, \end{equation} where $h_{\min}$ and $h_{\max}$ are the smallest and largest numbers of histogram entries in the range of interest; they are calculated by our modular routine {\tt hist\_flat.f}. Our cut on this quantity is set by {\tt flatcut} in {\tt u1muca.par}. In our simulations \cite{BeBa06} we used ${\tt flatcut}=0.5$. This is rather weak compared to the requirement in the original WL approach \cite{WaLa01} that ``the histogram $H(E)$ for all possible $E$ is not less than 80\% of the average histogram $\langle H(E) \rangle$'', although their definition of flatness is less stringent than our Eq.~(\ref{flat}). The conceptual difference is that the WL paper aims at iterating all the way towards an accurate estimate of the inverse spectral density, while we are content with a working estimate, which enables cycling~(\ref{cycling}). The estimate of the spectral density is then postponed to our continuation with frozen weights, for which convergence of the MCMC process is rigorously proven~\cite{Bbook}, whereas no such proof exists for the WL algorithm. It needs to be mentioned that the histogram {\tt hx} is accumulated over the entire recursion process, and the same is true for the refinement of the weights {\tt wln}. The histogram {\tt hx0} keeps track of the entries between WL recursions. Presently this information is not used in the code.
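The flatness measure (\ref{flat}) is simple enough to be spelled out; the following standalone function is a sketch only (the package routine {\tt hist\_flat.f} may differ in its argument list and bookkeeping):
\begin{verbatim}
C Sketch of the flatness measure: ratio of the
C smallest to the largest number of histogram
C entries in the range of interest.
      function flatns(hx,ixmax)
      implicit none
      integer ixmax,ix
      double precision flatns,hx(ixmax)
      double precision hmin,hmax
      hmin=hx(1)
      hmax=hx(1)
      do ix=2,ixmax
         if(hx(ix).lt.hmin) hmin=hx(ix)
         if(hx(ix).gt.hmax) hmax=hx(ix)
      enddo
      flatns=hmin/hmax
      return
      end
\end{verbatim}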
One may consider applying a flatness criterion to {\tt hx0} instead of {\tt hx}. This is just one of many fine-tuning options, which we did not explore, because the WL recursion in its present form took just a few percent of the CPU time in our U(1) simulations~\cite{BeBa06}. The desired number of WL recursions (\ref{WLrec}) is set by the parameter {\tt nrec} of {\tt u1muca.par}, typically ${\tt nrec}=20$ or somewhat larger. To achieve this, {\tt nrup} (number (n) recursions (r) upper (up) limit) WL recursion attempts are allowed, each accommodating up to {\tt nupdt} WL update sweeps. The update sweeps are interrupted for a recursion attempt when cycling (\ref{cycling}) is recorded by our modularly coded subroutine {\tt tuna\_cnt.f}, which checks after every sweep. So, ${\tt nrup} \times {\tt nupdt}$ is the maximum number of sweeps spent on the WL recursion part. The process is aborted if the given limit is exhausted before {\tt nrec} WL recursions are completed. \subsection{Fixed weights MUCA simulations and measurements} Fixed weight MUCA simulations are performed by the routines discussed in the previous section, except that the WL update (\ref{WLupd}) is no longer performed, which is programmed to be the case for ${\tt addwl}\le 0$. We still perform updates of the weights in-between (long) MUCA production runs, which is done by a call to {\tt u1mu\_rec1.f} in the initialization routine {\tt u1mu\_init.f}. Overall our simulation consists of $$\tt nequi+nrpt\times nmeas\times nsw$$ BMHA sweeps, each (optionally) supplemented by 2 overrelaxation sweeps (see the schematic skeleton below). During a MUCA simulation several physical quantities (described below in the section on data analysis) are measured by the \texttt{u1sf\_measure} subroutine. Measurements are performed every \texttt{nsw} sweeps and accumulated in the arrays defined in \texttt{common\_u1.f}:
\begin{verbatim}
c aphase(nd,ms): Phase of the U(1) "matrix".
c                (We store aphase and
c                the matrix on the link is
c                e^{i aphase}.)
c act:           Energy (action) variable.
c amin,amax:     Action act minimum and
c                maximum of the MC sweep.
c acpt:          Counts accepted updates.
c tsa:           Time series action array.
c a_min,a_max:   Action minimum and maximum for
c                a series of sweeps.
c tsws,tswt:     Time series for lattice
c                average spacelike and timelike
c                Wilson plaquette loops.
c plreal,plimag: Space arrays for Polyakov
c                loops in (nd-1) dimensions.
c tspr,tspi:     Time series for lattice
c                average Polyakov loops.
c isublat:       sublattice for Polyakov loops
c                (in t=0 slice).
      common /u1/ aphase(nd,ms),act,amin,amax,
     & acpt,tsa(nmeas),a_min,a_max,tsws(nmeas),
     & tswt(nmeas),plreal(mxs),plimag(mxs),
     & tspr(nmeas),tspi(nmeas),isublat(mxs)
\end{verbatim}
and in \texttt{common\_sf.f}:
\begin{verbatim}
C Arrays for structure function measurements.
      common/u1sf/ tssf(ntotalsfbox,nmeas),
     & nsfcomp(1:ntotalsfbox,0:ndimsf)
\end{verbatim}
Then, every $\tt nmeas\times nsw$ sweeps (i.e., on each iteration of the \texttt{nrpt} loop), the arrays with measurements are written to disk by the \texttt{u1mu\_rw\_meas} subroutine, which can also read the data back. With a call to {\tt u1wl\_backup.f} the current state of the lattice is backed up on disk. This allows one to restart the program from the latest iteration of the \texttt{nrpt} loop if it gets interrupted, and ensures that not more than \texttt{1/nrpt} of the total running time is lost in such a case. Typically, we set \texttt{nrpt=32}.
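Schematically, the loop structure of such a fixed-weight production run is as follows (a sketch only; in the package the main program differs in detail and the routines take arguments, which are omitted here):
\begin{verbatim}
C Schematic skeleton of a fixed-weight MUCA run.
C Equilibration, no measurements:
      do iequi=1,nequi
        call u1_mucabmha
        call u1_over
        call u1_over
      enddo
C Production: nrpt patches of nmeas measurements,
C one measurement every nsw sweeps:
      do irpt=1,nrpt
        do imeas=1,nmeas
          do isw=1,nsw
            call u1_mucabmha
            call u1_over
            call u1_over
          enddo
          call u1sf_measure
        enddo
C Write measurements and back up the lattice:
        call u1mu_rw_meas
        call u1wl_backup
      enddo
\end{verbatim}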
\subsection{Data analysis}

The program {\tt ana\_u1mu.f} calculates the action, the specific heat, and two Polyakov loop susceptibilities; the program {\tt sfana\_u1mu.f} calculates Polyakov loop structure factors. Definitions are given in the following, with the names of the corresponding gnuplot driver files in parentheses. The specific heat ({\tt plot\_C.plt}) is
\begin{equation} \label{Cbeta}
  C(\beta)\ =\ \frac{1}{N_p} \left[\langle S^2\rangle - \langle S \rangle^2\right]~~{\rm with}~~N_p=\frac{d\,(d-1)}{2}\,N_s^3\,N_{\tau}
\end{equation}
where $S$ is the action ({\tt plot\_a.plt}). Besides the action we measure Polyakov loops and their low-momentum structure factors. Polyakov loops $P_{\vec{x}}$ are the products of the $U_{ij}$ along the straight lines in the $N_{\tau}$ direction. For U(1) LGT each $P_{\vec{x}}$ is a complex number on the unit circle and $\vec{x}$ labels the sites of the spatial sublattice. From the sum over all Polyakov loops
\begin{equation} \label{PLoop}
  P\ =\ \sum_{\vec{x}} P_{\vec{x}}
\end{equation}
we find the susceptibility of the absolute value $|P|$
\begin{equation} \label{chi}
  \chi_{\max}(\beta)\ =\ \frac{1}{N_s^3} \left[ \langle|P|^2\rangle - \langle|P|\rangle^2\ \right]
\end{equation}
({\tt plot\_CP.plt}), and the susceptibility with respect to~$\beta$
\begin{equation} \label{chi_beta}
  \chi^{\beta}_{\max}(\beta)\ =\ \frac{1}{N_s^3} \frac{d~}{d\beta}\, \langle|P|\rangle
\end{equation}
({\tt plot\_CPb.plt}). The analogues in spin systems are the magnetic susceptibilities with respect to an external magnetic field and with respect to the temperature.

\subsubsection{Structure factors}

Structure factors are defined by
\begin{equation} \label{sf}
  F(\vec{k})=\frac{1}{N_s^3} \left\langle\left|\sum_{\vec{x}} P(\vec{x})\, \exp(i\vec{k}\vec{x}) \right|^2\right\rangle\,, ~~\vec{k} = \frac{2\pi}{N_s}\vec{n}\,,
\end{equation}
where $\vec{n}$ is an integer vector, to which we refer as the momentum in the following (it differs from the structure factor momentum $\vec{k}$ only by a constant prefactor). How we average over different momenta is described below. In \texttt{sf.par}
\begin{verbatim}
C (0,0,1), (0,1,0), (1,0,0), and so on,
C are stored separately.
C ndimsf: number of dimensions for
C         the structure function.
C nsfmax: maximum value of
C         \vec{n}=(n1,n2,n3,...,n_ndimsf)
C         with 0 =< n1,n2,..n_ndimsf =< nsfmax.
C nsfbox: total number of \vec{n} components.
      parameter(ndimsf=nd-1,nsfmax=1,
     & nsfbox=(nsfmax+1)**ndimsf)
\end{verbatim}
the dimension \texttt{ndimsf} of the sublattice on which SFs are calculated is defined, and one sets the maximum value of the components of the momentum, \texttt{nsfmax},
\begin{eqnarray}
  \vec{n}&=&(n_1,n_2,...,n_{\texttt{ndimsf}})\,, \\
  n_i&=&0,...,\texttt{nsfmax},~~i\ =\ 1,...,\texttt{ndimsf}\,.
\end{eqnarray}
As the $n_i$ counting is 0-based, we measure during the simulation
\begin{equation}
  \texttt{ntotalsfbox}=(\texttt{nsfmax}+1)^{\texttt{ndimsf}}
\end{equation}
SF components. Their momenta are stored in the \texttt{nsfcomp} array. In the example of a 4D multicanonical run given in the next section, \texttt{ntotalsfbox}=$(1+1)^3$=8. Initialization of the arrays for structure factor measurements is carried out by the routine \texttt{sf\_box\_init}, which also outputs how the momenta are initialized and numbered:
\begin{verbatim}
 sf_box_init: Structure factors initialized:
 ndimsf = 3   nsfmax = 1   nsfbox = 8
 Integer vectors generated:
  #  n^2  components....
  1   0    0 0 0
  2   1    0 0 1
  3   1    0 1 0
  4   2    0 1 1
  5   1    1 0 0
  6   2    1 0 1
  7   2    1 1 0
  8   3    1 1 1
 sf_box_init done.
\end{verbatim}
These eight SFs are measured during this simulation. For a spatially symmetric lattice, SF components with permuted momenta, i.e., $(0,0,1)$, $(0,1,0)$ and $(1,0,0)$, are equivalent, and there are only
\begin{equation}
  \texttt{nsfdiff}=\frac{(\texttt{nsfmax}+\texttt{ndimsf})!}{\texttt{nsfmax}!\,\texttt{ndimsf}!}
\end{equation}
different modes. In the example \texttt{nsfdiff}=4 and they are
$$(0,0,0)\,,~~(0,0,1)\,,~~(0,1,1)~~{\rm and}~~(1,1,1)\,.$$
To average SFs over permutations of momenta, one needs to identify momenta that differ only by permutations, calculate their multiplicity (i.e., the number of permutations), and construct a mapping from all momenta to the set of non-equivalent momenta. For this purpose the \texttt{sf\_box\_shuffle.f} subroutine is used. It returns three arrays corresponding to the elements just described: \texttt{nsfcomp\_diff}, \texttt{nsfmulti}, and \texttt{nsfmapping}. Using them, the analysis program \texttt{sfana\_u1mu.f} averages SF components and outputs them in files prefixed with the components of the non-equivalent momenta (for instance, the SF with $\vec{n}=(0,1,1)$ is output in {\tt 011sf006x004tmu01.d}). The SF normalization in (\ref{sf}) is defined so that $F(\vec{k})=1$ at $\beta=0$ for all momenta and dimensions. The output of \texttt{sf\_box\_shuffle.f} from our example run is:
\begin{verbatim}
 sf_box_shuffle:
 Different SF components (integer vectors):
  #  multi  n^2  components....
  1    1     0    0 0 0
  2    3     1    0 0 1
  3    3     2    0 1 1
  4    1     3    1 1 1
 Mapping of the components (888 separator):
  1  0 0 0  888  1  888  0 0 0
  2  0 0 1  888  2  888  0 0 1
  3  0 1 0  888  2  888  0 0 1
  4  0 1 1  888  3  888  0 1 1
  5  1 0 0  888  2  888  0 0 1
  6  1 0 1  888  3  888  0 1 1
  7  1 1 0  888  3  888  0 1 1
  8  1 1 1  888  4  888  1 1 1
 sf_box_shuffle done.
\end{verbatim}
The first part of the output shows that four different SF components were identified; the second part shows the mapping from the eight original momenta. If only partial measurements are available, one can set the parameter {\tt nset} in {\tt ana\_u1mu.f} or {\tt sfana\_u1mu.f}, which is preset to $\tt nset=nrpt$, to a value smaller than {\tt nrpt}.

\subsubsection{Reweighting to the canonical ensemble}

The analysis programs reweight the multicanonical data to the canonical ensemble. The simulation is performed with $\exp(\texttt{wln}(E))$ weights, which need to be replaced by the Boltzmann factor $\exp(-\beta E)$. Given a set of $N$ multicanonical configurations, the estimator for an observable $O$ in the canonical ensemble is
\begin{equation}\label{Orew}
  \displaystyle \langle O\rangle(\beta)=\frac{\sum_{i=1}^NO_i\exp(-\beta E_i -\texttt{wln}(E_i))}{\sum_{i=1}^N\exp(-\beta E_i-\texttt{wln}(E_i))}.
\end{equation}
Eq.~(\ref{Orew}) involves large terms in the numerator and denominator that can cause an overflow. To avoid this we use logarithmic coding as described in \cite{Bbook}: instead of adding two numbers, one expresses the logarithm of the sum through their logarithms (see the sketch below). With this strategy one effectively evaluates the logarithms of the numerator and denominator, which are of the same order, and exponentiates the difference. The {\tt u1\_ts\_zln.f} subroutine performs the reweighting of the time series to a given value of $\beta$ according to Eq.~(\ref{Orew}). Since the reweighting procedure is non-linear, one expects a bias, which, for {\tt nrpt} patches, is proportional to $\tau_{\rm int}/({\tt nmeas}\times{\tt nsw})$. Using jackknife error bars, the bias is reduced by a factor of $1/({\tt nrpt}-1)$. This is realized by the {\tt u1\_ts\_zlnj.f} subroutine.
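To make the logarithmic coding explicit (the package implements it in Fortran; the short sketch below is written in R purely for illustration, and the names \texttt{logsumexp} and \texttt{reweight} are ours), Eq.~(\ref{Orew}) can be evaluated for a positive observable as:
\begin{verbatim}
# Illustrative sketch of Eq. (Orew) with logarithmic coding;
# not part of the Fortran package. E and wlnE hold the action
# time series and the MUCA log-weights wln(E_i); O must be a
# positive observable (e.g., O = E itself).
logsumexp <- function(la) {
  m <- max(la)               # factor out the largest term
  m + log(sum(exp(la - m)))  # log of a sum from logs, no overflow
}
reweight <- function(O, E, wlnE, beta) {
  lw <- -beta*E - wlnE       # logarithms of the reweighting factors
  exp(logsumexp(log(O) + lw) - logsumexp(lw))
}
\end{verbatim}
Here the overflow is avoided by the standard max-trick; the Fortran routines achieve the same pairwise, via $\ln(a+b)=\ln a+\ln\left(1+e^{\ln b-\ln a}\right)$ for $a\ge b>0$.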
If one is not yet satisfied with this bias reduction, one can go on and use the jackknife approach to estimate the bias explicitly.

\subsection{Example runs}\label{subsec_MUCAruns}

We have prepared MUCA example runs in the subfolders
$${\tt MUCA3D04t06xb2p0}~~{\rm and}~~{\tt MUCA4D04t06xb1p1}$$
of the folder {\tt ExampleRuns}. The values of {\tt actmin} and {\tt actmax} in {\tt u1muca.par} are estimates from the previously discussed short canonical runs. The last four characters in the subfolder names denote the value of {\tt beta\_table} for which the BMHA table is calculated. The {\tt *.d} test files left in these subfolders were obtained from the analysis of the MUCA data produced by the preset runs. The MUCA data themselves are produced in {\tt *.D} files, which have been removed, because they are unformatted and readability is only guaranteed on the platform on which they are produced.

The MUCA data production goes through three steps of individual runs. First one has to compile and run the program {\tt u1wl\_bmho.f}, which uses our WL recursion to obtain a working estimate of the MUCA weights. Subsequently two runs of MUCA data production are performed by the program {\tt u1mu\_bmho.f}. After each data production step one may execute the data analysis programs {\tt ana\_u1mu.f} and {\tt sfana\_u1mu.f}.

\begin{figure}
\includegraphics[width=\columnwidth]{figs/C}
\caption{Specific heat on a $6^34$ lattice.}
\label{fig_C}
\end{figure}

\begin{figure}
\includegraphics[width=\columnwidth]{figs/CP}
\caption{Polyakov loop susceptibility on a $6^34$ lattice.}
\label{fig_CP}
\end{figure}

\begin{figure}
\includegraphics[width=\columnwidth]{figs/CPb}
\caption{Polyakov loop susceptibility with respect to $\beta$ on a $6^34$ lattice.}
\label{fig_CPb}
\end{figure}

\begin{figure}
\includegraphics[width=\columnwidth]{figs/sfs}
\caption{Structure factors on a $6^34$ lattice (1.\ and 2.\ MUCA runs).}
\label{fig_sfs}
\end{figure}

In our examples the WL recursion is considered complete after 20 successful recursion steps (\ref{WLrec}). In 3D this was achieved after 22 cycling (tunneling) events; in 4D, 23 cycling events were needed. Then, during the simulations with fixed weights, more than 1$\,$000 tunnelings per job were recorded in 3D, while in 4D 214 tunnelings occurred in the first and 247 in the second MUCA run. These numbers vary across different platforms. The results of the analysis programs are shown in Fig.~\ref{fig_C} for the specific heat, in Figs.~\ref{fig_CP} and \ref{fig_CPb} for the susceptibilities, and in Fig.~\ref{fig_sfs} for the first three non-trivial structure factors. On our 2~GHz PC the data production took 74m per job; before that, the WL recursion completed in just 2m27s. In 3D the specific heat does not diverge and the transition is much broader. We show in Fig.~\ref{fig_CP3d} the Polyakov loop susceptibility. On our PC the WL recursion completed in 6s and the data production took 7m3s per job.

\section{Summary and Conclusions}
\label{sec_conclusions}

We think that the open-source Fortran code documented in this paper can be modified for many applications in statistical physics and LGT, considerably beyond the U(1) gauge group. A number of parameters can be varied, but one should bear in mind that most of them have not been tested. Obviously, it is the responsibility of the user to perform rigorous tests and verifications before trusting any of the results.

\section*{Acknowledgments}

This work was in part supported by the U.S. Department of Energy under contracts DE-FG02-97ER41022 and DE-FC02-06ER-41439, and by NSF grant 0555397.
\begin{figure}
\includegraphics[width=\columnwidth]{figs/CP3d}
\caption{Polyakov loop susceptibility on a $6^24$ lattice.}
\label{fig_CP3d}
\end{figure}

\bibliographystyle{elsarticle-num}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section[Introduction]{Introduction}
\label{sec:Introduction}

For $p$-variate data assumed to arise from a continuous random variable, statistical inference is commonly focused on elliptical distributions \citep{Camb:Onth:1981}; in this class, the Gaussian distribution is the most widely considered because of its computational and theoretical convenience. However, for many applied problems, the tails of the Gaussian distribution are lighter than required. The $t$ distribution, thanks to the degrees of freedom, provides a common way to broaden the Gaussian tails (\citealp{Lang:Litt:Tayl:Robu:1989} and \citealp{Kotz:Nada:Mult:2004}). A further elliptical alternative is represented by the contaminated Gaussian distribution, a two-component Gaussian mixture in which one of the components, with a large prior probability, represents the ``good'' observations, and the other, with a small prior probability, the same mean, and an inflated covariance matrix, represents the ``bad'' observations \citep{Aitk:Wils:Mixt:1980}. It constitutes a common and simple theoretical model for the occurrence of outliers, spurious points, or noise (collectively referred to as ``bad'' points herein).

The contaminated Gaussian distribution is the core of this paper. Firstly, for maximum likelihood (ML) estimation of its parameters, we propose an expectation-conditional maximization (ECM) algorithm \citep{Meng:Rubin:Maxi:1993} which stems from the characterization of the contaminated Gaussian distribution as a Gaussian scale mixture model. In contrast with the ECM algorithm illustrated in \citet{Punz:McNi:Robu:2013}, the new algorithm has the advantage that, in each CM step, the mean and the covariance matrix (for the good observations) are updated independently of the other two parameters (the proportion of good observations and the inflation parameter). Secondly, we introduce the contaminated Gaussian factor analysis model, a robust extension of the classical (Gaussian) factor analysis model obtained by adopting the contaminated Gaussian distribution for the errors and the latent factors. Our proposal advantageously embeds the (Gaussian) factor analysis model in a larger model with two additional parameters that: (1) afford protection against elliptical non-Gaussianity (of errors and/or latent factors) and (2) allow for automatic detection of bad points. Thirdly, \citet{Punz:McNi:Robu:2013} have recently proposed mixtures of contaminated Gaussian distributions both as a robust generalization of mixtures of Gaussian distributions and as an improvement of mixtures of $t$ distributions in terms of automatic detection of bad points from a clustering perspective. However, the mixture of contaminated Gaussian distributions, with unrestricted component-covariance matrices of the good observations, say $\boldsymbol{\Sigma}_g$, is a highly parametrized model with $p\left(p + 1\right)/2$ parameters for each $\boldsymbol{\Sigma}_g$, $g=1,\ldots,G$. To introduce parsimony, \citet{Punz:McNi:Robu:2013} also define thirteen variants of the general model obtained, as in \citet{Cele:Gova:Gaus:1995}, via eigen-decomposition of $\boldsymbol{\Sigma}_1,\ldots,\boldsymbol{\Sigma}_G$. But if $p$ is large relative to the sample size $n$, it may not be possible to use this decomposition to infer an appropriate model for $\boldsymbol{\Sigma}_1,\ldots,\boldsymbol{\Sigma}_G$. Even if it is possible, the results may not be reliable due to potential problems with near-singular estimates of $\boldsymbol{\Sigma}_g$ when $p$ is large relative to $n$.
To address this problem, following the literature on the adoption of factor analyzers within mixture models (see, among many others, \citealp[][Chapter~8]{McLa:Peel:fini:2000}, \citealp{McLa:Peel:Bean:Mode:2003}, \citealp{McNi:Murp:Pars:2008}, \citealp{Zhao:Yu:Fast:2008}, \citealp{Mont:Viro:Maxi:2011}, and \citealp{Sube:Punz:Ingr:McNi:Clus:2013}), we propose mixtures of contaminated Gaussian factor analyzers, where a contaminated Gaussian factor analysis model is used for each mixture component. The result is a means of fitting mixtures of contaminated Gaussian distributions in situations where $p$ would be sufficiently large relative to the sample size $n$ to cause potential problems with singular or near-singular estimates of $\boldsymbol{\Sigma}_1,\ldots,\boldsymbol{\Sigma}_G$. The number of free parameters is controlled through the dimension of the latent factor space.

The paper is organized as follows. Section~\ref{sec:The contaminated Gaussian distribution} contextualizes the contaminated Gaussian distribution as a Gaussian scale mixture model. The contaminated Gaussian factor analysis model is introduced in Section~\ref{sec:The contaminated Gaussian factor analysis model} while mixtures of contaminated Gaussian factor analyzers are presented in Section~\ref{sec:Mixtures of contaminated Gaussian factor analyzers}. In particular, in each section, and hence for each model: (1) identifiability is discussed, (2) an EM-based algorithm is outlined for ML estimation of the parameters, (3) computational details are given, and (4) a real data analysis is discussed to appreciate the advantages of the model. Computation is done in the \textsf{R} software environment for statistical computing and graphics \citep{R:2013}. The paper concludes with a discussion in Section~\ref{sec:Discussion and future work}.

\section{The Contaminated Gaussian Distribution}
\label{sec:The contaminated Gaussian distribution}

For the sake of robustness, one of the most common ways to generalize the Gaussian distribution is represented by the Gaussian scale mixture model
\begin{equation}
\int_0^{\infty} \phi\left(\boldsymbol{x};\boldsymbol{\mu},\boldsymbol{\Sigma}/w\right)dH\left(w\right),
\label{eq:Gaussian scale mixture model}
\end{equation}
where $H\left(\cdot\right)$ is a probability distribution function and $\phi\left(\boldsymbol{x};\boldsymbol{\mu},\boldsymbol{\Sigma}/w\right)$ denotes the density of a $p$-variate Gaussian random vector $\boldsymbol{X}$ with mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}/w$. Model~\eqref{eq:Gaussian scale mixture model} is unimodal, symmetrical, and guarantees heavier tails than those of the Gaussian distribution \citep[see, e.g.,][]{Wata:Yama:TheE:2003}. It also includes some well-known models; for example, if $W\sim\text{gamma}\left(\nu/2,\nu/2\right)$, then we obtain the $t$ distribution with location parameter $\boldsymbol{\mu}$, positive definite inner product matrix $\boldsymbol{\Sigma}$, and $\nu$ degrees of freedom. For our purposes, it is important to note that if we focus on the dichotomous random variable
\begin{equation*}
W=\left\{
\begin{array}{ll}
1 & \text{with probability $\alpha$}\\
1/\eta & \text{with probability $1-\alpha$}
\end{array}\right.
,
\label{eq:W}
\end{equation*}
with probability mass function
\begin{equation*}
p_C\left(w;\alpha,\eta\right)=\alpha^{\frac{w-1/\eta}{1-1/\eta}}\left(1-\alpha\right)^{\frac{1-w}{1-1/\eta}},
\label{eq:distribution of W}
\end{equation*}
then, from model~\eqref{eq:Gaussian scale mixture model}, we obtain the contaminated Gaussian distribution
\begin{equation}
p_{CN}\left(\boldsymbol{x};\boldsymbol{\mu},\boldsymbol{\Sigma},\alpha,\eta\right)=\alpha\phi\left(\boldsymbol{x};\boldsymbol{\mu},\boldsymbol{\Sigma}\right)+\left(1-\alpha\right)\phi\left(\boldsymbol{x};\boldsymbol{\mu},\eta\boldsymbol{\Sigma}\right),
\label{eq:contaminated Gaussian distribution}
\end{equation}
where $\alpha\in\left(0,1\right)$ and $\eta>1$ \citep[cf.][]{Punz:McNi:Robu:2013}. In summary,
\begin{equation}
\boldsymbol{X}|w\sim N_{p}\left(\boldsymbol{\mu},\boldsymbol{\Sigma}/w\right),
\label{eq:X|w N}
\end{equation}
\begin{equation}
W\sim C\left(\alpha,\eta\right),
\label{eq:W C}
\end{equation}
and
\begin{equation}
\boldsymbol{X}\sim CN_{p}\left(\boldsymbol{\mu},\boldsymbol{\Sigma},\alpha,\eta\right).
\label{eq:X CN}
\end{equation}
As we can see in \eqref{eq:contaminated Gaussian distribution}, a contaminated Gaussian distribution is a two-component Gaussian mixture in which one of the components, typically with a large prior probability $\alpha$, represents the ``good'' observations, and the other, with a small prior probability, the same mean, and an inflated covariance matrix $\eta\boldsymbol{\Sigma}$, represents the ``bad'' observations \citep{Aitk:Wils:Mixt:1980}. As a special case of \eqref{eq:contaminated Gaussian distribution}, if $\alpha$ and/or $\eta$ tends to one, then $\boldsymbol{X}\sim N_p\left(\boldsymbol{\mu},\boldsymbol{\Sigma}\right)$. An advantage of \eqref{eq:contaminated Gaussian distribution} with respect to the often-used $t$ distribution is that, once the parameters in $\boldsymbol{\vartheta}=\left\{\boldsymbol{\mu},\boldsymbol{\Sigma},\alpha,\eta\right\}$ are estimated, say $\widehat{\boldsymbol{\vartheta}}=\left\{\widehat{\boldsymbol{\mu}},\widehat{\boldsymbol{\Sigma}},\widehat{\alpha},\widehat{\eta}\right\}$, we can establish whether a generic observation $\boldsymbol{x}_i$ is good or bad via the \textit{a~posteriori} probability. That is, compute
\begin{equation}
P\left(\text{$\boldsymbol{x}_i$ is good}\left|\widehat{\boldsymbol{\vartheta}}\right.\right)=\widehat{\alpha}\phi\left(\boldsymbol{x}_i;\widehat{\boldsymbol{\mu}},\widehat{\boldsymbol{\Sigma}}\right)\Big/p_{CN}\left(\boldsymbol{x}_i;\widehat{\boldsymbol{\vartheta}}\right),
\label{eq:probability good}
\end{equation}
and $\boldsymbol{x}_i$ will be considered good if $P\left(\text{$\boldsymbol{x}_i$ is good}\left|\widehat{\boldsymbol{\vartheta}}\right.\right)>1/2$, while it will be considered bad otherwise.
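For illustration, the density \eqref{eq:contaminated Gaussian distribution} and the rule based on \eqref{eq:probability good} can be coded in a few lines of \textsf{R}; the following is a minimal sketch, assuming the \textbf{mvtnorm} package for the multivariate Gaussian density (the function names \texttt{dcn} and \texttt{p\_good} are ours and do not refer to any existing package):
\begin{verbatim}
library(mvtnorm)  # for dmvnorm()

# density of the contaminated Gaussian distribution
dcn <- function(x, mu, Sigma, alpha, eta)
  alpha * dmvnorm(x, mu, Sigma) +
    (1 - alpha) * dmvnorm(x, mu, eta * Sigma)

# a posteriori probability that x is a good observation;
# x is declared good when the returned value exceeds 1/2
p_good <- function(x, mu, Sigma, alpha, eta)
  alpha * dmvnorm(x, mu, Sigma) / dcn(x, mu, Sigma, alpha, eta)
\end{verbatim}
Evaluating \texttt{p\_good} at the ML estimates reproduces the classification rule described above.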
\subsection{Maximum likelihood estimation via the ECME algorithm}
\label{subsec:CN: ML estimation via the ECME algorithm}

Given an observed random sample $\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n$ from $\boldsymbol{X}\sim CN_{p}\left(\boldsymbol{\mu},\boldsymbol{\Sigma},\alpha,\eta\right)$, we now consider the application of the expectation-conditional maximization either (ECME) algorithm of \citet{Liu:Rubi:TheE:1994} for maximum likelihood (ML) estimation of $\boldsymbol{\vartheta}=\left\{\boldsymbol{\mu},\boldsymbol{\Sigma},\alpha,\eta\right\}$; here, based on \eqref{eq:X|w N}, \eqref{eq:W C}, and \eqref{eq:X CN}, $\boldsymbol{\vartheta}$ is partitioned as $\boldsymbol{\vartheta}=\left\{\boldsymbol{\vartheta}_1,\boldsymbol{\vartheta}_2\right\}$, where $\boldsymbol{\vartheta}_1=\left\{\boldsymbol{\mu},\boldsymbol{\Sigma}\right\}$ and $\boldsymbol{\vartheta}_2=\left(\alpha,\eta\right)'$. The ECME algorithm is an extension of the ECM algorithm of \citet{Meng:Rubin:Maxi:1993}. With this extension, some or all of the CM-steps of the ECM algorithm are replaced by steps that directly (conditionally) maximize the observed-data log-likelihood function, rather than the expectation of the complete-data log-likelihood. Both algorithms are variants of the classical expectation-maximization (EM) algorithm \citep{Demp:Lair:Rubi:Maxi:1977}, which is a natural approach for ML estimation when data are incomplete. In our case, incompleteness arises from the characterization of the contaminated Gaussian distribution given by \eqref{eq:X|w N}, \eqref{eq:W C}, and \eqref{eq:X CN}. The complete data are taken to be $\left(\boldsymbol{x}_1',\ldots,\boldsymbol{x}_n',w_1,\ldots,w_n\right)'$, and the complete-data likelihood can be written
$$
L_c\left(\boldsymbol{\vartheta}\right)=\prod_{i=1}^n\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu},\boldsymbol{\Sigma}/w_i\right)p_C\left(w_i;\alpha,\eta\right).
$$
Accordingly, the complete-data log-likelihood can be written
\begin{equation*}
l_c\left(\boldsymbol{\vartheta}\right)=l_{1c}\left(\boldsymbol{\vartheta}_1\right)+l_{2c}\left(\boldsymbol{\vartheta}_2\right),
\label{eq:CN: complete-data log-likelihood}
\end{equation*}
where
\begin{equation*}
l_{1c}\left(\boldsymbol{\vartheta}_1\right)=-\frac{np}{2}\log\left(2\pi\right)-\frac{n}{2}\log\left|\boldsymbol{\Sigma}\right|+\frac{p}{2}\sum_{i=1}^n\log w_i -\frac{1}{2}\sum_{i=1}^n w_i \left(\boldsymbol{x}_i-\boldsymbol{\mu}\right)'\boldsymbol{\Sigma}^{-1}\left(\boldsymbol{x}_i-\boldsymbol{\mu}\right)
\label{eq:CN: complete-data log-likelihood 1}
\end{equation*}
and
\begin{equation*}
l_{2c}\left(\boldsymbol{\vartheta}_2\right)=\log \alpha \sum_{i=1}^n\frac{w_i-1/\eta}{1-1/\eta} + \log \left(1-\alpha\right) \sum_{i=1}^n\frac{1-w_i}{1-1/\eta}.
\label{eq:CN: complete-data log-likelihood 2}
\end{equation*}

\subsubsection{E-step}
\label{subsubsec:E-step: contaminated Gaussian distribution}

The E-step on the $\left(k + 1\right)$th iteration requires the calculation of
\begin{eqnarray*}
Q\left(\boldsymbol{\vartheta};\boldsymbol{\vartheta}^{\left(k\right)}\right)&=&E_{\boldsymbol{\vartheta}^{\left(k \right)}}\left[l_c\left(\boldsymbol{\vartheta}\right)|\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n\right]\nonumber\\
&=& E_{\boldsymbol{\vartheta}^{\left(k \right)}}\left[l_{1c}\left(\boldsymbol{\vartheta}_1\right)|\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n\right] + E_{\boldsymbol{\vartheta}^{\left(k \right)}}\left[l_{2c}\left(\boldsymbol{\vartheta}_2\right)|\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n\right] \nonumber\\
&=& Q_1\left(\boldsymbol{\vartheta}_1;\boldsymbol{\vartheta}^{\left(k\right)}\right)+Q_2\left(\boldsymbol{\vartheta}_2;\boldsymbol{\vartheta}^{\left(k\right)}\right),
\label{eq:contaminated: expected complete-data loglikelihood}
\end{eqnarray*}
where, as indicated by the subscript, the expectation is taken using the current fit $\boldsymbol{\vartheta}^{\left(k\right)}$ for $\boldsymbol{\vartheta}$. Here, we replace $w_i$ by
\begin{equation*}
E_{\boldsymbol{\vartheta}^{\left(k \right)}}\left[W_i|\boldsymbol{x}_i\right]=w_i^{\left(k \right)},
\label{eq:E(W|X)}
\end{equation*}
where
\begin{equation*}
w_i^{\left(k \right)}=\frac{\alpha^{\left(k \right)}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}^{\left(k \right)},\boldsymbol{\Sigma}^{\left(k \right)}\right)+\displaystyle\frac{1-\alpha^{\left(k \right)}}{\eta^{\left(k \right)}}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}^{\left(k \right)},\eta^{\left(k \right)}\boldsymbol{\Sigma}^{\left(k \right)}\right)}{\alpha^{\left(k \right)}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}^{\left(k \right)},\boldsymbol{\Sigma}^{\left(k \right)}\right)+\left(1-\alpha^{\left(k \right)}\right)\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}^{\left(k \right)},\eta^{\left(k \right)}\boldsymbol{\Sigma}^{\left(k \right)}\right)}.
\label{eq:w update}
\end{equation*}

\subsubsection{CM-step 1}
\label{subsubsec:CM-step 1: contaminated Gaussian distribution}

The first CM-step on the $\left(k + 1\right)$th iteration requires the calculation of $\boldsymbol{\vartheta}_1^{\left(k+1\right)}$ by maximizing $Q_1\left(\boldsymbol{\vartheta}_1;\boldsymbol{\vartheta}^{\left(k\right)}\right)$ with $\boldsymbol{\vartheta}_2$ fixed at $\boldsymbol{\vartheta}_2^{\left(k\right)}$. This yields
\begin{equation}
\boldsymbol{\mu}^{\left(k+1\right)}=\sum_{i=1}^n w_i^{\left(k\right)}\boldsymbol{x}_i\bigg/\sum_{i=1}^n w_i^{\left(k\right)}
\label{eq:mean update: contaminated Gaussian}
\end{equation}
and
\begin{equation*}
\boldsymbol{\Sigma}^{\left(k+1\right)}=\frac{1}{n}\sum_{i=1}^n w_i^{\left(k\right)}\left(\boldsymbol{x}_i-\boldsymbol{\mu}^{\left(k+1\right)}\right)\left(\boldsymbol{x}_i-\boldsymbol{\mu}^{\left(k+1\right)}\right)'.
\label{eq:covariance update: contaminated Gaussian}
\end{equation*}

\subsubsection{CM-step 2}
\label{subsubsec:CM-step 2: contaminated Gaussian distribution}

The updating of $\boldsymbol{\vartheta}_2$, with $\boldsymbol{\vartheta}_1$ fixed at $\boldsymbol{\vartheta}_1^{\left(k+1\right)}$, cannot be derived directly from the $Q_2$-function, because it is meaningless to estimate $\eta$ when the $w_i^{\left(k \right)}$ are given. To solve this issue, the updating of $\alpha$ and $\eta$ is performed directly on the observed-data log-likelihood; this choice leads to the ECME algorithm.
In particular, the second CM-step, on the $\left(k + 1\right)$th iteration, requires the calculation of $\boldsymbol{\vartheta}_2^{\left(k+1\right)}$ by maximizing the observed-data log-likelihood
\begin{equation*}
l_{CN}\left(\boldsymbol{\vartheta}_2\big|\boldsymbol{\vartheta}_1^{\left(k+1\right)}\right)=\sum_{i=1}^n\log \left[p_{CN}\left(\boldsymbol{x}_i;\boldsymbol{\mu}^{\left(k+1\right)},\boldsymbol{\Sigma}^{\left(k+1\right)},\alpha,\eta\right) \right],
\label{eq:conditional likelihood for alpha and eta}
\end{equation*}
under the constraints $\eta > 1$ and $\alpha\in\left(\alpha^*,1\right)$; the latter constraint is justified by the fact that, for practical purposes, one could require the proportion of good data to be at least equal to a pre-determined value $\alpha^*$, the most natural choice being $\alpha^*=1/2$, since in robust statistics it is usually assumed that at least half of the points are good \citep[cf.][p.~250]{Henn:Fixe:2002}.

\subsection{Computational considerations and details}
\label{subsec:Computational considerations: contaminated Gaussian distribution}

The second CM-step is operationally performed using the \texttt{optim()} function in the \textbf{stats} package for \textsf{R}. The method of \citet{Neld:Mead:Asim:1965} is used for the numerical search of the maximum.

\subsubsection{Initialization}
\label{subsubsec: CN: Initialization}

The choice of the starting values for the ECM algorithm constitutes an important issue. The standard initialization consists of selecting a value for $\boldsymbol{\vartheta}^{\left(0\right)}$. In particular, a random initialization is usually repeated $t$ times, from different random positions, and the solution maximizing the observed-data log-likelihood $l\left(\boldsymbol{\vartheta}\right)$ among these $t$ runs is selected (see \citealp{Bier:Cele:Gova:Choo:2003}, \citealp{Karl:Xeka:Choo:2003}, and \citealp{Bagn:Punz:Fine:2013} for more complicated strategies). Instead of selecting $\boldsymbol{\vartheta}^{\left(0\right)}$ randomly, we suggest the following technique. The Gaussian distribution can be seen as nested in the corresponding contaminated Gaussian distribution. In particular, the former can be obtained from the latter when $\alpha\rightarrow 1^-$ and $\eta\rightarrow 1^+$. Then, the closed-form ML estimates of $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ for the Gaussian distribution, along with the constraints $\alpha=\alpha^*$ (with $\alpha^*\rightarrow 1^-$) and $\eta=\eta^*$ (with $\eta^*\rightarrow 1^+$), can be used as $\boldsymbol{\vartheta}^{\left(0\right)}$; in the analysis of Section~\ref{subsec: CN: Real data analysis}, we fix $\alpha^*=0.999$ and $\eta^*=1.001$. From an operational point of view, thanks to the monotonicity property of the ECM algorithm \citep[see, e.g.,][p.~28]{McLa:Kris:TheE:2007}, this also guarantees that the observed-data log-likelihood of the contaminated Gaussian distribution will always be greater than or equal to the observed-data log-likelihood of the corresponding Gaussian distribution. This is a fundamental consideration for the use of likelihood-based model selection criteria, and likelihood ratio tests, for choosing/assessing between a Gaussian distribution and a contaminated Gaussian distribution.

\subsubsection{Convergence Criterion}
\label{subsubsec: CN: Convergence criterion}

The Aitken acceleration \citep{Aitk:OnBe:1926} is used to estimate the asymptotic maximum of the log-likelihood at each iteration of the ECM algorithm.
Based on this estimate, we can decide whether or not the algorithm has reached convergence; i.e., whether or not the log-likelihood is sufficiently close to its estimated asymptotic value. The Aitken acceleration at iteration $k+1$ is given by
\begin{equation*}
a^{\left(k+1\right)}=\frac{l^{\left(k+2\right)}-l^{\left(k+1\right)}}{l^{\left(k+1\right)}-l^{\left(k\right)}},
\end{equation*}
where $l^{\left(k\right)}$ is the observed-data log-likelihood value from iteration $k$. Then, the asymptotic estimate of the log-likelihood at iteration $k + 2$ is given by
\begin{displaymath}
l_{\infty}^{\left(k+2\right)}=l^{\left(k+1\right)}+\frac{1}{1-a^{\left(k+1\right)}}\left(l^{\left(k+2\right)}-l^{\left(k+1\right)}\right),
\end{displaymath}
cf.\ \citet{Bohn:Diet:Scha:Schl:Lind:TheD:1994}. The ECM algorithm can be considered to have converged when $l_{\infty}^{\left(k+2\right)}-l^{\left(k+1\right)}<\epsilon$. In the analysis of Section~\ref{subsec: CN: Real data analysis}, we fix $\epsilon=0.001$.

\subsection{Real data analysis}
\label{subsec: CN: Real data analysis}

The bivariate data considered here consist of the 298 daily returns of two financial indexes (DAX 30 and FTSE MIB) spanning the period from July 24th, 2012, to September 30th, 2013 (the share prices used to compute the daily returns are downloadable from \url{http://finance.yahoo.com/}). A scatter plot of the data is provided in \figurename~\ref{fig:Financial data}.
\begin{figure}[!ht]
\centering
\resizebox{0.5\textwidth}{!}{\includegraphics{Fina.eps}} %
\caption{ Scatter plot of the daily returns of two financial indexes (DAX 30 and FTSE MIB) spanning the period July 24th, 2012 to September 30th, 2013. }
\label{fig:Financial data}
\end{figure}
Mardia's test of multivariate symmetry, as implemented by the \texttt{mardia()} function of the \textbf{psych} package \citep{Reve:psyc:2014}, produces a $p$-value of $0.102$, leading us not to reject the null hypothesis at the commonly considered significance levels. In the class of symmetric bivariate distributions we focus on the classical Gaussian distribution, which we recall is nested in the contaminated Gaussian distribution. On the data at hand, these distributions can be statistically compared via the likelihood-ratio (LR) statistic
\begin{displaymath}
LR=-2\left[l_{N}\left(\widehat{\boldsymbol{\mu}},\widehat{\boldsymbol{\Sigma}}\right)-l_{CN}\left(\widehat{\boldsymbol{\mu}},\widehat{\boldsymbol{\Sigma}},\widehat{\alpha},\widehat{\eta}\right)\right],
\end{displaymath}
where the hat denotes the ML estimate of the underlying parameter, while $l_{N}\left(\cdot\right)$ and $l_{CN}\left(\cdot\right)$ are the log-likelihood functions for the Gaussian and the contaminated Gaussian distributions, respectively. Under the null of bivariate Gaussianity (versus the alternative of bivariate contaminated Gaussianity), $LR$ is asymptotically distributed as a $\chi^2$ with two degrees of freedom, corresponding to the difference in the number of free parameters of the models under the two hypotheses. The resulting $p$-value is $9.724\times 10^{-6}$, which leads to the rejection of the null, in favor of the alternative, at any reasonable significance level. \figurename~\ref{fig: CN: Financial data} shows the graphical results from the ML estimation of the contaminated Gaussian distribution; bad points are represented by black bullets and contour lines from the estimated distribution are superimposed.
\begin{figure}[!ht]
\centering
\resizebox{0.5\textwidth}{!}{\includegraphics{FinaCN.eps}} %
\caption{ Classification results from the ML fitting of the contaminated Gaussian distribution on the financial data of \figurename~\ref{fig:Financial data}. }
\label{fig: CN: Financial data}
\end{figure}
The ML estimates for $\alpha$ and $\eta$ are $\widehat{\alpha}=0.5$ and $\widehat{\eta}=3.602$, respectively; such a result indicates that we are far from being able to regard the data at hand as arising from a single Gaussian distribution.

\section{The Contaminated Gaussian Factor Analysis Model}
\label{sec:The contaminated Gaussian factor analysis model}

\subsection{The Model}

The (Gaussian) factor analysis model \citep{Spea:Thep:1904} is a well-known, and widely used, data reduction tool aiming to find latent factors that explain the variability in the data. The model \citep[see][Chapter~3]{Bart:Knot:Mous:Late:2011} assumes that the $p$-variate random vector $\boldsymbol{X}$ is modeled using a $q$-variate vector of factors $\boldsymbol{U}\sim N_q\left(\boldsymbol{0}_q,\boldsymbol{I}_q\right)$, where $q\ll p$. The model is
\begin{equation}
\boldsymbol{X} = \boldsymbol{\mu} + \boldsymbol{\Lambda} \boldsymbol{U} + \boldsymbol{e},
\label{eq:Gaussian factor model}
\end{equation}
where $\boldsymbol{\Lambda}$ is a $p\times q$ matrix of factor loadings and $\boldsymbol{e}\sim N_p\left(\boldsymbol{0}_p,\boldsymbol{\Psi}\right)$ is the error term, with $\boldsymbol{\Psi}=\text{diag}\left(\psi_1,\ldots,\psi_p\right)$. It follows from \eqref{eq:Gaussian factor model} that $\boldsymbol{X}\sim N_p\left(\boldsymbol{\mu},\boldsymbol{\Lambda}\bLambda'+\boldsymbol{\Psi}\right)$. The factor analysis model is, however, sensitive to bad points, as it adopts the Gaussian distribution for errors and latent factors. To improve its robustness for data having longer-than-Gaussian tails or bad points, \citet{McLa:Bean:BenT:Exte:2007} introduce the $t$-factor analysis model, which adopts the multivariate $t$ for the distributions of the errors and the latent factors \citep[see also][Section~5.14.4]{McLa:Kris:TheE:2007}. Although the $t$-factor analysis model robustifies the classical factor analysis model, once fitted to the data it does not allow for automatic detection of bad points. To solve this problem, recalling \eqref{eq:probability good}, we introduce the contaminated Gaussian factor analysis model. Based on \eqref{eq:Gaussian factor model}, the contaminated Gaussian factor analysis model generalizes the corresponding Gaussian factor analysis model by assuming
\begin{equation}
\begin{pmatrix} \boldsymbol{X} \\ \boldsymbol{U} \end{pmatrix}\sim CN_{p+q}\left(\boldsymbol{\mu}^*,\boldsymbol{\Sigma}^*,\alpha,\eta\right),
\label{eq:Factor: joint density of X and U}
\end{equation}
where
\begin{equation*}
\boldsymbol{\mu}^*= \begin{pmatrix} \boldsymbol{\mu} \\ \boldsymbol{0}_q \end{pmatrix}
\quad \text{and} \quad
\boldsymbol{\Sigma}^* = \begin{pmatrix} \boldsymbol{\Lambda}\bLambda'+\boldsymbol{\Psi} & \boldsymbol{\Lambda}\\ \boldsymbol{\Lambda}' & \boldsymbol{I}_q \\ \end{pmatrix}.
\end{equation*}
Using the characterization of the contaminated Gaussian distribution discussed in Section~\ref{sec:The contaminated Gaussian distribution}, the joint distribution of $\boldsymbol{X}$ and $\boldsymbol{U}$, given $W=w$, can be written
\begin{equation}
\begin{pmatrix} \boldsymbol{X} \\ \boldsymbol{U} \end{pmatrix} \bigg|w \sim N_{p+q}\left(\boldsymbol{\mu}^*,\boldsymbol{\Sigma}^*/w\right),
\label{eq:Factor: joint density of X and U given w}
\end{equation}
with $W \sim C\left(\alpha,\eta\right)$. Thus,
\begin{eqnarray*}
\boldsymbol{X}|w &\sim & N_p\left(\boldsymbol{\mu},\left(\boldsymbol{\Lambda}\bLambda'+\boldsymbol{\Psi}\right)/w\right),\\
\boldsymbol{U}|w &\sim & N_q\left(\boldsymbol{0}_q,\boldsymbol{I}_q/w\right),\\
\boldsymbol{e}|w &\sim & N_p\left(\boldsymbol{0}_p,\boldsymbol{\Psi}/w\right),
\end{eqnarray*}
so that
\begin{eqnarray*}
\boldsymbol{X} &\sim & CN_p\left(\boldsymbol{\mu},\boldsymbol{\Lambda}\bLambda'+\boldsymbol{\Psi},\alpha,\eta\right),\\
\boldsymbol{U} &\sim & CN_q\left(\boldsymbol{0}_q,\boldsymbol{I}_q,\alpha,\eta\right),\\
\boldsymbol{e} &\sim & CN_p\left(\boldsymbol{0}_p,\boldsymbol{\Psi},\alpha,\eta\right).
\end{eqnarray*}
The factors and error terms are no longer independently distributed as in the normal-based model for factor analysis; however, they are uncorrelated. To see this, note from \eqref{eq:Factor: joint density of X and U} that, conditional on $w$, $\boldsymbol{U}$ and $\boldsymbol{e}$ are uncorrelated and, hence, unconditionally uncorrelated.

\subsection{Identifiability and number of free parameters}
\label{subsec: CN factor: Identifiability and number of free parameters}

Literally speaking, the number of parameters of the contaminated Gaussian factor analysis model is $p+pq+p+2$: we have $p$ values in $\boldsymbol{\mu}$, $pq$ values in $\boldsymbol{\Lambda}$, $p$ values in $\boldsymbol{\Psi}$, one $\alpha$, and one $\eta$. However, for the sake of identifiability when $q>1$, we have to impose $q\left(q-1\right)/2$ constraints for $\boldsymbol{\Lambda}$ to be uniquely defined (cf. \citealp[][p.~241]{McLa:Peel:fini:2000} and \citealp[][p.~64]{Bart:Knot:Mous:Late:2011}); in fact, there are infinitely many choices for $\boldsymbol{\Lambda}$, because the model is still satisfied if we replace $\boldsymbol{U}$ by $\boldsymbol{H}\boldsymbol{U}$ and $\boldsymbol{\Lambda}$ by $\boldsymbol{\Lambda}\boldsymbol{H}'$, where $\boldsymbol{H}$ is an orthogonal matrix of order $q$. The number $m$ of free parameters for the model is then
\begin{equation}
p+\left[pq-\frac{1}{2}q\left(q-1\right)\right]+p+2.
\label{eq:free parameters in the CN factor model}
\end{equation}

\subsection{Maximum likelihood estimation via the AECM algorithm}
\label{subsec:Contaminated Gaussian factor analysis model: ML estimation via the AECM algorithm}

To find ML estimates for the parameters $\boldsymbol{\vartheta}=\left\{\boldsymbol{\mu},\boldsymbol{\Lambda},\boldsymbol{\Psi},\alpha,\eta\right\}$ of the contaminated Gaussian factor analysis model, we consider the application of the alternating expectation-conditional maximization (AECM) algorithm of \citet{Meng:VanD:TheE:1997}. The AECM algorithm is an extension of the ECM algorithm in which the specification of the complete data is allowed to be different on each CM-step.
To apply the AECM algorithm, we partition $\boldsymbol{\vartheta}$ as $\boldsymbol{\vartheta}=\left\{\boldsymbol{\vartheta}_1,\boldsymbol{\vartheta}_2\right\}$, where $\boldsymbol{\vartheta}_1=\left\{\boldsymbol{\mu},\alpha,\eta\right\}$ and $\boldsymbol{\vartheta}_2=\left\{\boldsymbol{\Lambda},\boldsymbol{\Psi}\right\}$. For this application of the AECM algorithm, the $\left(k+1\right)$th iteration consists of two cycles, and there is one E-step and one CM-step for each cycle. The two CM-steps correspond to the partition of $\boldsymbol{\vartheta}$ into $\boldsymbol{\vartheta}_1$ and $\boldsymbol{\vartheta}_2$.

\subsubsection{First cycle}
\label{subsubsec:Contaminated Gaussian factor analysis model: First Cycle}

The first cycle of the $\left(k+1\right)$th iteration of the AECM algorithm is practically equivalent to the $\left(k+1\right)$th iteration of the ECME algorithm for the contaminated Gaussian distribution (see Section~\ref{subsec:CN: ML estimation via the ECME algorithm}). The only difference is that we do not update $\boldsymbol{\Sigma}$, that is, $\boldsymbol{\Lambda}\bLambda'+\boldsymbol{\Psi}$; we only update $\boldsymbol{\mu}$ according to \eqref{eq:mean update: contaminated Gaussian}, and $\alpha$ and $\eta$ as described in Section~\ref{subsubsec:CM-step 2: contaminated Gaussian distribution}. At the end of the first cycle, we set $\boldsymbol{\vartheta}^{\left(k+1/2\right)}=\left\{\boldsymbol{\vartheta}_1^{\left(k+1\right)},\boldsymbol{\vartheta}_2^{\left(k\right)}\right\}$.

\subsubsection{Second cycle}
\label{subsubsec:Contaminated Gaussian factor analysis model: Second Cycle}

In the second cycle of the $\left(k+1\right)$th iteration of the AECM algorithm, we update $\boldsymbol{\vartheta}_2$ by specifying the missing data to be the factors $\boldsymbol{u}_1,\ldots,\boldsymbol{u}_n$ and the weights $w_1,\ldots,w_n$. From \eqref{eq:Factor: joint density of X and U} we have that
\begin{equation*}
\boldsymbol{X}_i | \boldsymbol{u}_i,w_i \sim N_p\left(\boldsymbol{\mu}+\boldsymbol{\Lambda}\boldsymbol{u}_i,\boldsymbol{\Psi}/w_i\right).
\label{eq:Factor: conditional density of X given u and w}
\end{equation*}
Thus, the complete data are $\left(\boldsymbol{x}_1',\ldots,\boldsymbol{x}_n',\boldsymbol{u}_1',\ldots,\boldsymbol{u}_n',w_1,\ldots,w_n\right)'$, and the complete-data likelihood can be factored as
$$
L_{c2}\left(\boldsymbol{\vartheta}_2\right)=\prod_{i=1}^n \phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}^{\left(k+1\right)}+\boldsymbol{\Lambda}\boldsymbol{u}_i,\boldsymbol{\Psi}/w_i\right) \phi\left(\boldsymbol{u}_i;\boldsymbol{0}_q,\boldsymbol{I}_q/w_i\right) p_C\left(w_i;\alpha^{\left(k+1\right)},\eta^{\left(k+1\right)}\right).
$$
The complete-data log-likelihood is
\begin{eqnarray}
l_{c2}\left(\boldsymbol{\vartheta}_2\right) &=& -\frac{n}{2}\left(p+q\right)\log\left(2\pi\right)+\frac{1}{2}\left(p+q\right)\sum_{i=1}^n\log w_i -\frac{1}{2}\sum_{i=1}^n w_i\boldsymbol{u}_i'\boldsymbol{u}_i -\frac{n}{2}\log\left|\boldsymbol{\Psi}\right| \nonumber\\
&& + \log \alpha^{\left(k+1\right)} \sum_{i=1}^n\frac{w_i-1/\eta^{\left(k+1\right)}}{1-1/\eta^{\left(k+1\right)}} + \log \left(1-\alpha^{\left(k+1\right)}\right) \sum_{i=1}^n\frac{1-w_i}{1-1/\eta^{\left(k+1\right)}}\nonumber\\
&& - \frac{1}{2} \text{tr}\left[\boldsymbol{\Psi}^{-1}\sum_{i=1}^nw_i\left(\boldsymbol{x}_i-\boldsymbol{\mu}^{\left(k+1\right)}\right)\left(\boldsymbol{x}_i-\boldsymbol{\mu}^{\left(k+1\right)}\right)'\right] \nonumber\\
&& +\sum_{i=1}^nw_i\left(\boldsymbol{x}_i-\boldsymbol{\mu}^{\left(k+1\right)}\right)'\boldsymbol{\Psi}^{-1}\boldsymbol{\Lambda}\boldsymbol{u}_i - \frac{1}{2}\text{tr}\left(\boldsymbol{\Lambda}'\boldsymbol{\Psi}^{-1}\boldsymbol{\Lambda} \sum_{i=1}^n w_i\boldsymbol{u}_i\boldsymbol{u}_i'\right) ,
\label{eq:Factor: complete-data log-likelihood}
\end{eqnarray}
where $\text{tr}\left(\cdot\right)$ denotes the trace operator. The E-step on the second cycle of the $\left(k+1\right)$th iteration requires the calculation of
$$
Q_2\left(\boldsymbol{\vartheta}_2;\boldsymbol{\vartheta}^{\left(k+1/2\right)}\right)=E_{\boldsymbol{\vartheta}^{\left(k+1/2\right)}}\left[l_{c2}\left(\boldsymbol{\vartheta}_2\right)|\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n\right].
$$
In addition, we update $w_i$ to
\begin{equation*}
w_i^{\left(k+1/2\right)}=\frac{\alpha^{\left(k+1\right)}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}^{\left(k+1\right)},\boldsymbol{\Sigma}^{\left(k \right)}\right)+\displaystyle\frac{1-\alpha^{\left(k+1\right)}}{\eta^{\left(k+1\right)}}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}^{\left(k+1\right)},\eta^{\left(k+1\right)}\boldsymbol{\Sigma}^{\left(k \right)}\right)}{\alpha^{\left(k+1\right)}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}^{\left(k+1\right)},\boldsymbol{\Sigma}^{\left(k \right)}\right)+\left(1-\alpha^{\left(k+1\right)}\right)\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}^{\left(k+1\right)},\eta^{\left(k+1\right)}\boldsymbol{\Sigma}^{\left(k \right)}\right)},
\label{eq:Factor: update for w}
\end{equation*}
where $\boldsymbol{\Sigma}^{\left(k \right)}=\boldsymbol{\Lambda}^{\left(k \right)}\boldsymbol{\Lambda}^{\left(k \right)'}+\boldsymbol{\Psi}^{\left(k \right)}$. Due to the last two rows of \eqref{eq:Factor: complete-data log-likelihood}, we also need to calculate $E_{\boldsymbol{\vartheta}^{\left(k + 1/2\right)}}\left(W_i\boldsymbol{U}_i|\boldsymbol{x}_i,w_i\right)$ and $E_{\boldsymbol{\vartheta}^{\left(k + 1/2\right)}}\left(W_i\boldsymbol{U}_i\boldsymbol{U}_i'|\boldsymbol{x}_i,w_i\right)$. From \eqref{eq:Factor: joint density of X and U given w} we obtain that
\begin{equation}
\boldsymbol{U}_i|\boldsymbol{x}_i,w_i \sim N_q\left(\boldsymbol{\gamma}'\left(\boldsymbol{x}_i -\boldsymbol{\mu}\right),\left(\boldsymbol{I}_q-\boldsymbol{\gamma}'\boldsymbol{\Lambda}\right)/w_i\right),
\label{eq:Factor: conditional density of U given x and w}
\end{equation}
where $\boldsymbol{\gamma}=\left(\boldsymbol{\Lambda}\bLambda'+\boldsymbol{\Psi}\right)^{-1}\boldsymbol{\Lambda}$.
Hence, from \eqref{eq:Factor: joint density of X and U given w} and \eqref{eq:Factor: conditional density of U given x and w} we have that
\begin{equation*}
E_{\boldsymbol{\vartheta}^{\left(k + 1/2\right)}}\left(W_i\boldsymbol{U}_i|\boldsymbol{x}_i,w_i\right)=w_i^{\left(k+1/2\right)}\boldsymbol{\gamma}^{\left(k\right)'}\left(\boldsymbol{x}_i -\boldsymbol{\mu}^{\left(k+1\right)}\right)
\label{eq:Factor: expectation of WU}
\end{equation*}
and
\begin{equation*}
E_{\boldsymbol{\vartheta}^{\left(k + 1/2\right)}}\left(W_i\boldsymbol{U}_i\boldsymbol{U}_i'|\boldsymbol{x}_i,w_i\right)= \boldsymbol{I}_q-\boldsymbol{\gamma}^{\left(k\right)'}\boldsymbol{\Lambda}^{\left(k\right)} +w_i^{\left(k+1/2\right)}\boldsymbol{\gamma}^{\left(k\right)'}\left(\boldsymbol{x}_i -\boldsymbol{\mu}^{\left(k+1\right)}\right)\left(\boldsymbol{x}_i -\boldsymbol{\mu}^{\left(k+1\right)}\right)'\boldsymbol{\gamma}^{\left(k\right)},
\label{eq:Factor: expectation of WUU}
\end{equation*}
where $\boldsymbol{\gamma}^{\left(k\right)}=\left(\boldsymbol{\Lambda}^{\left(k\right)}\boldsymbol{\Lambda}^{\left(k\right)'}+\boldsymbol{\Psi}^{\left(k\right)}\right)^{-1}\boldsymbol{\Lambda}^{\left(k\right)}$. Starting from \eqref{eq:Factor: complete-data log-likelihood}, the expected complete-data log-likelihood of the second cycle is
\begin{eqnarray*}
Q_2\left(\boldsymbol{\vartheta}_2;\boldsymbol{\vartheta}^{\left(k + 1/2\right)}\right) &=& C - \frac{n}{2}\log\left|\boldsymbol{\Psi}\right| - \frac{n}{2} \text{tr}\left(\boldsymbol{\Psi}^{-1}\boldsymbol{S}^{\left(k+1/2\right)}\right) \nonumber\\
&& +n\text{tr}\left(\boldsymbol{\Psi}^{-1}\boldsymbol{\Lambda}\boldsymbol{\gamma}^{\left(k\right)'}\boldsymbol{S}^{\left(k+1/2\right)}\right) - \frac{n}{2}\text{tr}\left(\boldsymbol{\Lambda}'\boldsymbol{\Psi}^{-1}\boldsymbol{\Lambda} \boldsymbol{R}^{\left(k+1/2\right)}\right) ,
\label{eq:Factor: expected complete-data log-likelihood}
\end{eqnarray*}
where
\begin{eqnarray*}
\boldsymbol{S}^{\left(k+1/2\right)} &=& \frac{1}{n}\sum_{i=1}^nw_i^{\left(k+1/2\right)}\left(\boldsymbol{x}_i-\boldsymbol{\mu}^{\left(k+1\right)}\right)\left(\boldsymbol{x}_i-\boldsymbol{\mu}^{\left(k+1\right)}\right)', \label{eq:Factor: S}\\
\boldsymbol{R}^{\left(k+1/2\right)} &=& \boldsymbol{I}_q-\boldsymbol{\gamma}^{\left(k\right)'}\boldsymbol{\Lambda}^{\left(k\right)} +\boldsymbol{\gamma}^{\left(k\right)'}\boldsymbol{S}^{\left(k+1/2\right)}\boldsymbol{\gamma}^{\left(k\right)}, \label{eq:Factor: R}
\end{eqnarray*}
and $C$ includes the terms that do not depend on $\boldsymbol{\vartheta}_2$. The CM-step on this second cycle of the $\left(k+1\right)$th iteration is implemented by the maximization of $Q_2\left(\boldsymbol{\vartheta}_2;\boldsymbol{\vartheta}^{\left(k + 1/2\right)}\right)$ over $\boldsymbol{\vartheta}_2$, with $\boldsymbol{\vartheta}_1$ set equal to $\boldsymbol{\vartheta}_1^{\left(k+1\right)}$. After some algebra, this yields the updated estimates
\begin{equation*}
\boldsymbol{\Lambda}^{\left(k+1\right)}=\boldsymbol{S}^{\left(k+1/2\right)}\boldsymbol{\gamma}^{\left(k\right)} \left(\boldsymbol{R}^{\left(k+1/2\right)}\right)^{-1}
\label{eq:Factor: Lambda}
\end{equation*}
and
\begin{equation*}
\boldsymbol{\Psi}^{\left(k+1\right)}=\text{diag}\left(\boldsymbol{S}^{\left(k+1/2\right)}-\boldsymbol{\Lambda}^{\left(k+1\right)}\boldsymbol{\gamma}^{\left(k\right)'}\boldsymbol{S}^{\left(k+1/2\right)}\right).
\label{eq:Factor: Psi}
\end{equation*}

\subsection{Computational details}
\label{subsec: FCN: Computational details}

In the second cycle of the $\left(k + 1\right)$th iteration of the AECM algorithm, we need to compute $\boldsymbol{\gamma}^{\left(k\right)}$, which, in turn, requires the inversion of the $p\times p$ matrix $\boldsymbol{\Lambda}^{\left(k\right)}\boldsymbol{\Lambda}^{\left(k\right)'}+\boldsymbol{\Psi}^{\left(k\right)}$. This inversion can be slow for large values of $p$. To ease it, we use the Woodbury identity \citep{Wood:Inve:1950}
\begin{equation}
\left(\boldsymbol{\Lambda}^{\left(k\right)}\boldsymbol{\Lambda}^{\left(k\right)'}+\boldsymbol{\Psi}^{\left(k\right)}\right)^{-1} = \left(\boldsymbol{\Psi}^{\left(k\right)}\right)^{-1} - \left(\boldsymbol{\Psi}^{\left(k\right)}\right)^{-1}\boldsymbol{\Lambda}^{\left(k\right)}\left[\boldsymbol{I}_q+\boldsymbol{\Lambda}^{\left(k\right)'}\left(\boldsymbol{\Psi}^{\left(k\right)}\right)^{-1}\boldsymbol{\Lambda}^{\left(k\right)}\right]^{-1}\boldsymbol{\Lambda}^{\left(k\right)'}\left(\boldsymbol{\Psi}^{\left(k\right)}\right)^{-1},
\label{eq:Woodbury identity}
\end{equation}
which requires the simpler inversions of the diagonal $p\times p$ matrix $\boldsymbol{\Psi}^{\left(k\right)}$ and of the $q\times q$ matrix $\boldsymbol{I}_q+\boldsymbol{\Lambda}^{\left(k\right)'}\left(\boldsymbol{\Psi}^{\left(k\right)}\right)^{-1}\boldsymbol{\Lambda}^{\left(k\right)}$. This leads to a significant speed-up when $q \ll p$.

Based on the idea of Section~\ref{subsubsec: CN: Initialization}, the AECM algorithm is initialized with the estimates of $\boldsymbol{\mu}$, $\boldsymbol{\Lambda}$ and $\boldsymbol{\Psi}$ provided by a Gaussian factor analysis model, along with the constraints $\alpha=\alpha^*$ (with $\alpha^*\rightarrow 1^-$) and $\eta=\eta^*$ (with $\eta^*\rightarrow 1^+$); in the analysis of Section~\ref{subsec: CGFAM: Real data analysis}, the (preliminary) Gaussian factor analysis model is estimated by the \texttt{pgmmEM()} function of the \textbf{pgmm} package for \textsf{R} \citep{McNi:Jamp:McDa:Murp:Bank:pgmm:2011}. The \texttt{pgmmEM()} function implements an AECM algorithm to obtain ML estimates. Finally, the Aitken acceleration is used as the convergence criterion (see Section~\ref{subsubsec: CN: Convergence criterion} for details).

\subsection{Real data analysis}
\label{subsec: CGFAM: Real data analysis}

We illustrate the contaminated Gaussian factor analysis model on the \texttt{state.x77} data set available in the \textbf{datasets} package for \textsf{R}. This data set is a compilation of data about the $n=50$ US states put together from the 1977 \textit{Statistical Abstract of the United States} (available for free online at \url{http://www.census.gov/compendia/statab/}), with the actual measurements mostly made a few years before. The $p=8$ variables included in the data set are:
\begin{center}
\begin{tabularx}{\linewidth}{l X}
\texttt{Population} & population estimate as of July 1, 1975; \\
\texttt{Income} & per capita income in dollars (1974); \\
\texttt{Illiteracy} & illiteracy (1970, percent of population); \\
\texttt{Life Exp} & life expectancy in years (1969--71); \\
\texttt{Murder} & murder and non-negligent manslaughter rate per 100,000 population (1976); \\
\texttt{HS Grad} & percent high-school graduates (1970); \\
\texttt{Frost} & mean number of days with minimum temperature below freezing (1931--1960) in capital or large city; \\
\texttt{Area} & land area in square miles.
\\
\end{tabularx}
\end{center}
The scatterplot matrix of the data is displayed in \figurename~\ref{fig:state.obs}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{state.obs.eps} %
\caption{ A pairs plot (scatterplot matrix) of the \texttt{state.x77} data. }
\label{fig:state.obs}
\end{figure}
To reduce the dimensionality of the data, so as to have a small number $q$ of latent factors that explain their variability, we compare the Gaussian and the contaminated Gaussian factor analysis models. With regard to the former, the \texttt{pgmmEM()} function of the \textbf{pgmm} package is run to obtain ML estimates. Two values for $q$ are considered, $q=1$ and $q=2$, and the Bayesian information criterion \citep[BIC;][]{Schw:Esti:1978}
\begin{equation*}
\text{BIC}=2l\left(\widehat{\boldsymbol{\vartheta}}\right)-m\log n,
\label{eq:BIC}
\end{equation*}
where $m$ is the overall number of free parameters, is used to select the best one. For both models, the best BIC values ($-4538.511$ for the Gaussian factor analysis model, and $-4532.987$ for the contaminated Gaussian factor analysis model) correspond to $q=1$. As the models are nested, the BIC also suggests that the selected contaminated Gaussian factor analysis model is better. Nevertheless, we prefer to have a $p$-value quantifying this choice. Following the lines of Section~\ref{subsec: CN: Real data analysis}, we perform an LR test to compare the best BIC models. The asymptotic $\chi^2$ distribution has, also in this case, two degrees of freedom, and the $p$-value is 0.001; this result leads us to reject, at the usual significance levels, the null hypothesis that a Gaussian factor analysis model works well on these data, in favor of the alternative contaminated Gaussian factor analysis model. The advantage of this choice is that not only can we reduce dimensionality in the presence of bad points, but we can also identify them. When we view the results of our analysis of the \texttt{state.x77} data in this way, we see from \figurename~\ref{fig:state.bad} that there is an anomalous point (black bullet). Note also that the ML estimates of $\alpha$ and $\eta$ are $\widehat{\alpha}=0.955$ and $\widehat{\eta}=3.510$, respectively.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.8\textwidth]{state.bad.eps} %
\caption{ \texttt{state.x77} data: scatterplot matrix with the detected bad point denoted by $\bullet$. }
\label{fig:state.bad}
\end{figure}

\section{Mixtures of Contaminated Gaussian Factor Analyzers}
\label{sec:Mixtures of contaminated Gaussian factor analyzers}

To robustify the classical mixture of Gaussian distributions against the occurrence of bad points, and also to allow for their automatic detection, \citet{Punz:McNi:Robu:2013} propose the mixture of contaminated Gaussian distributions
\begin{equation}
p\left(\boldsymbol{x};\boldsymbol{\vartheta}\right)=\sum_{g=1}^G\pi_gp_{CN}\left(\boldsymbol{x};\boldsymbol{\mu}_g,\boldsymbol{\Sigma}_g,\alpha_g,\eta_g\right)
\label{eq:mixture of multivariate contaminated Gaussian distributions}
\end{equation}
where, for the $g$th mixture component, $\pi_g$ is its mixing proportion, with $\pi_g>0$ and $\sum_{g=1}^G\pi_g=1$, while $\boldsymbol{\mu}_g$, $\boldsymbol{\Sigma}_g$, $\alpha_g$ and $\eta_g$ are defined as in \eqref{eq:contaminated Gaussian distribution}. In \eqref{eq:mixture of multivariate contaminated Gaussian distributions}, there are $p\left(p + 1\right)/2$ parameters for each $\boldsymbol{\Sigma}_g$, $g=1,\ldots,G$.
This means that as the number of components $G$ grows, the total number of parameters can quickly become very large relative to the sample size $n$, leading to overfitting. To model high-dimensional data, and to add parsimony, we consider the contaminated Gaussian factor analysis model of Section~\ref{sec:The contaminated Gaussian factor analysis model} in each mixture component; this leads to the mixture of contaminated Gaussian factor analyzers given by \eqref{eq:mixture of multivariate contaminated Gaussian distributions} but with component covariance matrices given by \begin{equation} \boldsymbol{\Sigma}_g=\boldsymbol{\Lambda}_g\boldsymbol{\Lambda}_g'+\boldsymbol{\Psi}_g. \label{eq:restriction} \end{equation} \subsection{Identifiability and number of free parameters} \label{subsec: Mixture CNFA: Identifiability} Intuitively, the identifiability of the family of mixtures of contaminated Gaussian factor analyzers requires the identifiability of the family of mixtures of contaminated Gaussian distributions, as well as the identifiability of the family of factor analysis models. Since the identifiability of the class of contaminated Gaussian distributions has been established \citep[see][]{Punz:McNi:Robu:2013}, this leaves the question of the identifiability of the family of factor analysis models; in other words, it requires the identifiability of the class of covariance structures defined in \eqref{eq:restriction}. Unfortunately, the topic of identification may itself deserve a separate research project. In this paper, we will not attempt to establish general rules for the identification of the proposed mixture models. However, based on the considerations leading to \eqref{eq:free parameters in the CN factor model}, we can say that the overall number $m$ of free parameters for the model is $$ \left(G-1\right)+ Gp + G\left[pq-\frac{1}{2}q\left(q-1\right)\right] + Gp + 2G. $$ \subsection{Maximum likelihood estimation via the AECM algorithm} \label{subsec:Mixture of contaminated Gaussian factor analyzers: ML estimation via the AECM algorithm} To find ML estimates for the parameters $\boldsymbol{\vartheta}=\left\{\pi_g,\boldsymbol{\mu}_g,\boldsymbol{\Lambda}_g,\boldsymbol{\Psi}_g,\alpha_g,\eta_g\right\}_{g=1}^G$ of the mixture of contaminated Gaussian factor analyzers model, we consider the AECM algorithm. We partition $\boldsymbol{\vartheta}$ as $\left\{\boldsymbol{\vartheta}_1,\boldsymbol{\vartheta}_2,\boldsymbol{\vartheta}_3\right\}$, where $\boldsymbol{\vartheta}_1=\left\{\pi_g,\boldsymbol{\mu}_g\right\}_{g=1}^G$, $\boldsymbol{\vartheta}_2=\left\{\alpha_g,\eta_g\right\}_{g=1}^G$, and $\boldsymbol{\vartheta}_3=\left\{\boldsymbol{\Lambda}_g,\boldsymbol{\Psi}_g\right\}_{g=1}^G$. The $\left(k+1\right)$th iteration of the algorithm consists of three cycles, and there is one E-step and one CM-step for each cycle. Concerning the specification of the incomplete data, it is useful to introduce the component-indicator vector $\boldsymbol{z}_i=\left(z_{i1},\ldots,z_{iG}\right)'$, where $z_{ig}$ is one or zero according to whether $\boldsymbol{x}_i$ belongs or does not belong to the $g$th component, with $i=1,\ldots,n$ and $g=1,\ldots,G$. 
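Before turning to the individual cycles, note that the free-parameter count above and its single-model counterpart \eqref{eq:free parameters in the CN factor model} are easily coded; the following \textsf{R} lines are a minimal sketch (the function names are ours):
\begin{verbatim}
# free parameters of one contaminated Gaussian factor analysis
# model: mu (p), Lambda (pq - q(q-1)/2 after the identifiability
# constraints), Psi (p), plus alpha and eta
m_cnfa <- function(p, q) p + (p*q - q*(q-1)/2) + p + 2

# mixture of G contaminated Gaussian factor analyzers:
# G-1 mixing proportions plus G single-model counts
m_mcnfa <- function(p, q, G) (G - 1) + G * m_cnfa(p, q)
\end{verbatim}
For instance, \texttt{m\_cnfa(8, 1)} returns 26, which is the value of $m$ entering the BIC for the contaminated Gaussian factor analysis model with $q=1$ fitted to the \texttt{state.x77} data.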
\subsubsection{First cycle} \label{subsubsec:Mixture: AECM: First Cycle} For the first cycle of the $\left(k+1\right)$th iteration of the AECM algorithm, we specify the missing data to be the component-indicator vectors $\boldsymbol{z}_1,\ldots,\boldsymbol{z}_n$ and the weights $\boldsymbol{w}_1,\ldots,\boldsymbol{w}_n$, with $\boldsymbol{w}_i=\left(w_{i1},\ldots,w_{iG}\right)'$. Thus, the complete data are $\left(\boldsymbol{x}_1',\ldots,\boldsymbol{x}_n',\boldsymbol{w}_1',\ldots,\boldsymbol{w}_n',\boldsymbol{z}_1',\ldots,\boldsymbol{z}_n'\right)'$ and the complete-data likelihood can be factored as $$ L_{c1}\left(\boldsymbol{\vartheta}_1\right)=\prod_{i=1}^n\prod_{g=1}^G \left[ \pi_g \phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g,\boldsymbol{\Sigma}_g^{\left(k\right)}/w_{ig}\right) p_C\left(w_{ig};\alpha_g^{\left(k\right)},\eta_g^{\left(k\right)}\right) \right]^{z_{ig}}. $$ Accordingly, the complete-data log-likelihood can be written as \begin{equation*} l_{c1}\left(\boldsymbol{\vartheta}_1\right)=l_{1c1}\left(\left\{\pi_g\right\}_{g=1}^G\right)+l_{2c1}\left(\left\{\boldsymbol{\mu}_g\right\}_{g=1}^G\right), \label{eq:mixture: AECM: complete-data log-likelihood} \end{equation*} where \begin{equation} l_{1c1}\left(\left\{\pi_g\right\}_{g=1}^G\right) = \sum_{g=1}^Gn_g\log\pi_g, \label{eq:mixture: AECM: complete-data log-likelihood: weights} \end{equation} and \begin{eqnarray} l_{2c1}\left(\left\{\boldsymbol{\mu}_g\right\}_{g=1}^G\right) &=& -\frac{np}{2} \log\left(2\pi\right) +\frac{p}{2}\sum_{i=1}^n\sum_{g=1}^Gz_{ig}\log w_{ig} - \frac{1}{2}\sum_{g=1}^Gn_g\log\left|\boldsymbol{\Sigma}_g^{\left(k\right)}\right| \nonumber \\ && - \frac{1}{2}\sum_{i=1}^n\sum_{g=1}^G z_{ig}w_{ig}\left(\boldsymbol{x}_i-\boldsymbol{\mu}_g\right)'\left(\boldsymbol{\Sigma}_g^{\left(k\right)}\right)^{-1}\left(\boldsymbol{x}_i-\boldsymbol{\mu}_g\right) \nonumber\\ && + \sum_{g=1}^G\log \alpha_g \sum_{i=1}^nz_{ig}\frac{w_{ig}-1/\eta_g}{1-1/\eta_g} + \sum_{g=1}^G\log \left(1-\alpha_g\right) \sum_{i=1}^nz_{ig}\frac{1-w_{ig}}{1-1/\eta_g}, \label{eq:mixture: AECM: complete-data log-likelihood: means} \end{eqnarray} with $n_g=\sum_{i=1}^nz_{ig}$. The E-step for the first cycle of the $\left(k+1\right)$th iteration requires the calculation of $$ Q_{c1}\left(\boldsymbol{\vartheta}_1;\boldsymbol{\vartheta}^{\left(k\right)}\right)=E_{\boldsymbol{\vartheta}^{\left(k \right)}}\left[l_{c1}\left(\boldsymbol{\vartheta}_1\right)|\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n\right]. $$ This E-step can be effected by first taking the expectation of $l_{c1}\left(\boldsymbol{\vartheta}_1\right)$ conditional on $\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n$ and $\boldsymbol{z}_1,\ldots,\boldsymbol{z}_n$, and then finally over the $\boldsymbol{z}_i$ given $\boldsymbol{x}_i$.
It can be seen from \eqref{eq:mixture: AECM: complete-data log-likelihood: weights} and \eqref{eq:mixture: AECM: complete-data log-likelihood: means} that in order to do this, we need to calculate $E_{\boldsymbol{\vartheta}^{\left(k \right)}}\left(Z_{ig}|\boldsymbol{x}_i\right)=z_{ig}^{\left(k\right)}$ and $E_{\boldsymbol{\vartheta}^{\left(k \right)}}\left(W_{ig}|\boldsymbol{x}_i,\boldsymbol{z}_i\right)=w_{ig}^{\left(k\right)}$, where \begin{equation*} z_{ig}^{\left(k \right)} = \frac{\pi_g^{\left(k\right)} p_{CN}\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k \right)},\boldsymbol{\Sigma}_g^{\left(k \right)},\alpha_g^{\left(k \right)},\eta_g^{\left(k \right)}\right)}{\displaystyle\sum_{j=1}^G\pi_j^{\left(k\right)} p_{CN}\left(\boldsymbol{x}_i;\boldsymbol{\mu}_j^{\left(k \right)},\boldsymbol{\Sigma}_j^{\left(k \right)},\alpha_j^{\left(k \right)},\eta_j^{\left(k \right)}\right)} \label{eq:mixture: AECM: first cycle: z update} \end{equation*} and \begin{equation*} w_{ig}^{\left(k\right)}=\frac{\alpha_g^{\left(k \right)}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k \right)},\boldsymbol{\Sigma}_g^{\left(k \right)}\right)+\displaystyle\frac{1-\alpha_g^{\left(k \right)}}{\eta_g^{\left(k \right)}}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k \right)},\eta_g^{\left(k \right)}\boldsymbol{\Sigma}_g^{\left(k \right)}\right)}{\alpha_g^{\left(k \right)}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k \right)},\boldsymbol{\Sigma}_g^{\left(k \right)}\right)+\left(1-\alpha_g^{\left(k \right)}\right)\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k \right)},\eta_g^{\left(k \right)}\boldsymbol{\Sigma}_g^{\left(k \right)}\right)}. \label{eq:mixture: AECM: first cycle: w update} \end{equation*} Using these results we have that $$ Q_{c1}\left(\boldsymbol{\vartheta}_1;\boldsymbol{\vartheta}^{\left(k\right)}\right) = Q_{1c1}\left(\left\{\pi_g\right\}_{g=1}^G;\boldsymbol{\vartheta}^{\left(k\right)}\right) + Q_{2c1}\left(\left\{\boldsymbol{\mu}_g\right\}_{g=1}^G;\boldsymbol{\vartheta}^{\left(k\right)}\right), $$ where \begin{equation*} Q_{1c1}\left(\left\{\pi_g\right\}_{g=1}^G;\boldsymbol{\vartheta}^{\left(k\right)}\right) = \sum_{g=1}^Gn_g^{\left(k\right)}\log\pi_g, \label{eq:mixture: AECM: expected complete-data log-likelihood: weights} \end{equation*} and \begin{equation*} Q_{2c1}\left(\left\{\boldsymbol{\mu}_g\right\}_{g=1}^G;\boldsymbol{\vartheta}^{\left(k\right)}\right) = C - \frac{1}{2}\sum_{i=1}^n\sum_{g=1}^G z_{ig}^{\left(k\right)}w_{ig}^{\left(k\right)}\left(\boldsymbol{x}_i-\boldsymbol{\mu}_g\right)'\left(\boldsymbol{\Sigma}_g^{\left(k\right)}\right)^{-1}\left(\boldsymbol{x}_i-\boldsymbol{\mu}_g\right), \label{eq:mixture: AECM: expected complete-data log-likelihood: means} \end{equation*} with $n_g^{\left(k\right)}=\sum_{i=1}^nz_{ig}^{\left(k\right)}$ and $C$ including the terms that do not depend on $\boldsymbol{\mu}_1,\ldots,\boldsymbol{\mu}_G$. The CM-step for the first cycle of the $\left(k+1\right)$th iteration requires the maximization of $Q_{c1}\left(\boldsymbol{\vartheta}_1;\boldsymbol{\vartheta}^{\left(k\right)}\right)$ with respect to $\boldsymbol{\vartheta}_1$. The solutions for $\pi_g^{\left(k+1\right)}$ and $\boldsymbol{\mu}_g^{\left(k+1\right)}$ exist in closed form and are \begin{equation*} \pi_g^{\left(k+1\right)}=n_g^{\left(k\right)}/n \label{eq:mixture: AECM: updates for pi} \end{equation*} and \begin{equation*} \boldsymbol{\mu}_g^{\left(k+1\right)}=\sum_{i=1}^n z_{ig}^{\left(k\right)}w_{ig}^{\left(k\right)}\boldsymbol{x}_i\Big/\sum_{i=1}^n z_{ig}^{\left(k\right)}w_{ig}^{\left(k\right)}.
\label{eq:mixture: AECM: updates for bmu} \end{equation*} At the end of this cycle, we write $\boldsymbol{\vartheta}^{\left(k+1/3\right)}=\left\{\boldsymbol{\vartheta}_1^{\left(k+1\right)},\boldsymbol{\vartheta}_2^{\left(k\right)},\boldsymbol{\vartheta}_3^{\left(k\right)}\right\}$. \subsubsection{Second cycle} \label{subsubsec:Mixture: AECM: Second Cycle} For the second cycle of the $\left(k+1\right)$th iteration of the AECM algorithm, for the updating of $\boldsymbol{\vartheta}_2$, we specify the missing data to be only $\boldsymbol{z}_1,\ldots,\boldsymbol{z}_n$. The complete-data likelihood is $$ L_{c2}\left(\boldsymbol{\vartheta}_2\right)=\prod_{i=1}^n\prod_{g=1}^G \left[ \pi_g^{\left(k+1\right)}p_{CN}\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k+1\right)},\boldsymbol{\Sigma}_g^{\left(k\right)},\alpha_g,\eta_g\right)\right]^{z_{ig}}, $$ where $\boldsymbol{\Sigma}_g^{\left(k \right)}=\boldsymbol{\Lambda}_g^{\left(k \right)}\boldsymbol{\Lambda}_g^{\left(k \right)'}+\boldsymbol{\Psi}_g^{\left(k \right)}$. Accordingly, the complete-data log-likelihood is \begin{equation} l_{c2}\left(\boldsymbol{\vartheta}_2\right) = C + \sum_{i=1}^n\sum_{g=1}^G z_{ig}\log \left[p_{CN}\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k+1\right)},\boldsymbol{\Sigma}_g^{\left(k\right)},\alpha_g,\eta_g\right)\right], \label{eq:mixture: AECM: second cycle: complete-data log-likelihood} \end{equation} where $C$ includes the terms that do not depend on $\boldsymbol{\vartheta}_2$. The E-step on the second cycle of the $\left(k+1\right)$th iteration requires the calculation of $$ Q_{c2}\left(\boldsymbol{\vartheta}_2;\boldsymbol{\vartheta}^{\left(k+1/3\right)}\right)=E_{\boldsymbol{\vartheta}^{\left(k+1/3\right)}}\left[l_{c2}\left(\boldsymbol{\vartheta}_2\right)|\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n\right]. $$ Based on \eqref{eq:mixture: AECM: second cycle: complete-data log-likelihood}, in order to do this, we need to calculate $E_{\boldsymbol{\vartheta}^{\left(k+1/3\right)}}\left(Z_{ig}|\boldsymbol{x}_i\right)=z_{ig}^{\left(k+1/3\right)}$, where \begin{equation*} z_{ig}^{\left(k+1/3\right)} = \frac{\pi_g^{\left(k+1\right)} p_{CN}\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k+1\right)},\boldsymbol{\Sigma}_g^{\left(k \right)},\alpha_g^{\left(k \right)},\eta_g^{\left(k \right)}\right)}{\displaystyle\sum_{j=1}^G\pi_j^{\left(k+1\right)} p_{CN}\left(\boldsymbol{x}_i;\boldsymbol{\mu}_j^{\left(k+1\right)},\boldsymbol{\Sigma}_j^{\left(k \right)},\alpha_j^{\left(k \right)},\eta_j^{\left(k \right)}\right)}. \label{eq:mixture: AECM: second cycle: z update} \end{equation*} Using this result we have that \begin{equation} Q_{c2}\left(\boldsymbol{\vartheta}_2;\boldsymbol{\vartheta}^{\left(k+1/3\right)}\right) = \sum_{g=1}^G Q_{g2}\left(\alpha_g,\eta_g;\boldsymbol{\vartheta}^{\left(k+1/3\right)}\right), \label{eq:mixture: AECM: second cycle: Q2} \end{equation} where \begin{equation} Q_{g2}\left(\alpha_g,\eta_g;\boldsymbol{\vartheta}^{\left(k+1/3\right)}\right)=C_g + \sum_{i=1}^n z_{ig}^{\left(k+1/3\right)}\log \left[p_{CN}\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k+1\right)},\boldsymbol{\Sigma}_g^{\left(k\right)},\alpha_g,\eta_g\right)\right], \label{eq:mixture: AECM: second cycle: Q2g} \end{equation} with $C_g$ including the terms that do not depend on $\alpha_g$ and $\eta_g$.
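Anticipating the CM-step described next (see also Section~\ref{subsec: MCGFAM: Computational details}), each $Q_{g2}$ is, in practice, maximized numerically. A minimal \textsf{R} sketch for component $g$ is given below, where \texttt{q\_g2()} is a hypothetical implementation of \eqref{eq:mixture: AECM: second cycle: Q2g} and \texttt{alpha\_star} denotes $\alpha^*$; the box bounds enforce the constraints $\alpha_g\in\left(\alpha^*,1\right)$ and $\eta_g>1$.
\begin{verbatim}
# Numerical CM-step of the second cycle for component g: maximize
# Q_g2 over (alpha_g, eta_g) under box constraints; optim() minimizes
# by default, so the objective is negated.
res <- optim(par    = c(alpha = 0.99, eta = 1.01),
             fn     = function(par) -q_g2(par[1], par[2]),
             method = "L-BFGS-B",
             lower  = c(alpha_star + 1e-6, 1 + 1e-6),
             upper  = c(1 - 1e-6, Inf))
alpha_new <- res$par[1]; eta_new <- res$par[2]
\end{verbatim}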
Maximizing \eqref{eq:mixture: AECM: second cycle: Q2} with respect to $\boldsymbol{\vartheta}_2$, under the constraints on these parameters, is equivalent to independently maximizing each of the $G$ expressions \eqref{eq:mixture: AECM: second cycle: Q2g} over $\alpha_g$ and $\eta_g$, under the constraints $\eta_g>1$ and $\alpha_g\in \left(\alpha^*,1\right)$, yielding $\alpha_g^{\left(k+1\right)}$ and $\eta_g^{\left(k+1\right)}$, $g=1,\ldots,G$. This maximization is equivalent to the numerical maximization problem discussed in Section~\ref{subsubsec:CM-step 2: contaminated Gaussian distribution} for the contaminated Gaussian distribution, with the only difference being that each observation $\boldsymbol{x}_i$ contributes to the log-likelihood with a known weight $z_{ig}^{\left(k+1/3\right)}$. At the end of this cycle, we write $\boldsymbol{\vartheta}^{\left(k+2/3\right)}=\left\{\boldsymbol{\vartheta}_1^{\left(k+1\right)},\boldsymbol{\vartheta}_2^{\left(k+1\right)},\boldsymbol{\vartheta}_3^{\left(k\right)}\right\}$. \subsubsection{Third cycle} \label{subsubsec:Mixture: AECM: Third Cycle} For the third cycle of the $\left(k+1\right)$th iteration of the AECM algorithm, for the updating of $\boldsymbol{\vartheta}_3$, we specify the missing data to be $\boldsymbol{z}_1,\ldots,\boldsymbol{z}_n$, $\boldsymbol{w}_1,\ldots,\boldsymbol{w}_n$, and $\boldsymbol{u}_1,\ldots,\boldsymbol{u}_n$. Thus, the complete data are $\left(\boldsymbol{x}_1',\ldots,\boldsymbol{x}_n',\boldsymbol{w}_1',\ldots,\boldsymbol{w}_n',\boldsymbol{z}_1',\ldots,\boldsymbol{z}_n',\boldsymbol{u}_1',\ldots,\boldsymbol{u}_n'\right)'$ and, according to the definition of the contaminated Gaussian factor analysis model given in Section~\ref{sec:The contaminated Gaussian factor analysis model}, the complete-data likelihood can be factored as $$ L_{c3}\left(\boldsymbol{\vartheta}_3\right)=\prod_{i=1}^n\prod_{g=1}^G \left[ \pi_g^{\left(k+1\right)} \phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k+1\right)}+\boldsymbol{\Lambda}_g\boldsymbol{u}_{ig},\boldsymbol{\Psi}_g/w_{ig}\right) \phi\left(\boldsymbol{u}_{ig};\boldsymbol{0}_q,\boldsymbol{I}_q/w_{ig}\right) p_C\left(w_{ig};\alpha_g^{\left(k+1\right)},\eta_g^{\left(k+1\right)}\right) \right]^{z_{ig}}.
$$ Accordingly, the complete-data log-likelihood is \begin{eqnarray} l_{c3}\left(\boldsymbol{\vartheta}_3\right)&=& \sum_{g=1}^G n_g\log \pi_g^{\left(k+1\right)} -n\frac{p+q}{2}\log\left(2\pi\right) +\frac{p+q}{2}\sum_{g=1}^G\sum_{i=1}^n z_{ig}\log w_{ig} - \frac{1}{2}\sum_{g=1}^G \sum_{i=1}^n z_{ig}w_{ig} \boldsymbol{u}_{ig}'\boldsymbol{u}_{ig} \nonumber \\ && +\sum_{g=1}^G\log \alpha_g^{\left(k+1\right)} \sum_{i=1}^nz_{ig}\frac{w_{ig}-1/\eta_g^{\left(k+1\right)}}{1-1/\eta_g^{\left(k+1\right)}} +\sum_{g=1}^G\log \left(1-\alpha_g^{\left(k+1\right)}\right) \sum_{i=1}^nz_{ig}\frac{1-w_{ig}}{1-1/\eta_g^{\left(k+1\right)}}\nonumber \\ && -\frac{1}{2}\sum_{g=1}^G n_g\log\left|\boldsymbol{\Psi}_g\right| -\frac{1}{2}\sum_{g=1}^G \sum_{i=1}^n z_{ig}w_{ig} \left(\boldsymbol{x}_i-\boldsymbol{\mu}_g^{\left(k+1\right)}\right)'\boldsymbol{\Psi}_g^{-1}\left(\boldsymbol{x}_i-\boldsymbol{\mu}_g^{\left(k+1\right)}\right)\nonumber \\ && +\sum_{g=1}^G \sum_{i=1}^n z_{ig}w_{ig}\left(\boldsymbol{x}_i-\boldsymbol{\mu}_g^{\left(k+1\right)}\right)'\boldsymbol{\Psi}_g^{-1}\boldsymbol{\Lambda}_g\boldsymbol{u}_{ig} - \frac{1}{2} \sum_{g=1}^G \sum_{i=1}^n z_{ig}w_{ig} \boldsymbol{u}_{ig}'\boldsymbol{\Lambda}_g'\boldsymbol{\Psi}_g^{-1} \boldsymbol{\Lambda}_g \boldsymbol{u}_{ig}, \label{eq:mixture: AECM: third cycle: complete-data log-likelihood} \end{eqnarray} where $n_g=\sum_{i=1}^nz_{ig}$. The E-step on the third cycle of the $\left(k+1\right)$th iteration requires the calculation of $$ Q_3\left(\boldsymbol{\vartheta}_3;\boldsymbol{\vartheta}^{\left(k+2/3\right)}\right)=E_{\boldsymbol{\vartheta}^{\left(k+2/3\right)}}\left[l_{c3}\left(\boldsymbol{\vartheta}_3\right)|\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n\right]. $$ In addition to updating $z_{ig}$ to \begin{equation*} z_{ig}^{\left(k+2/3\right)}=\frac{\pi_g^{\left(k+1\right)} p_{CN}\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k+1\right)},\boldsymbol{\Sigma}_g^{\left(k\right)},\alpha_g^{\left(k+1\right)},\eta_g^{\left(k+1\right)}\right)}{\sum_{j=1}^G\pi_j^{\left(k+1\right)} p_{CN}\left(\boldsymbol{x}_i;\boldsymbol{\mu}_j^{\left(k+1\right)},\boldsymbol{\Sigma}_j^{\left(k\right)},\alpha_j^{\left(k+1\right)},\eta_j^{\left(k+1\right)}\right)}, \label{eq:mixture: third cycle: update for z} \end{equation*} and $w_{ig}$ to \begin{equation*} w_{ig}^{\left(k+2/3\right)}=\frac{\alpha_g^{\left(k+1\right)}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k+1\right)},\boldsymbol{\Sigma}_g^{\left(k \right)}\right)+\displaystyle\frac{1-\alpha_g^{\left(k+1\right)}}{\eta_g^{\left(k+1\right)}}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k+1\right)},\eta_g^{\left(k+1\right)}\boldsymbol{\Sigma}_g^{\left(k \right)}\right)}{\alpha_g^{\left(k+1\right)}\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k+1\right)},\boldsymbol{\Sigma}_g^{\left(k \right)}\right)+\left(1-\alpha_g^{\left(k+1\right)}\right)\phi\left(\boldsymbol{x}_i;\boldsymbol{\mu}_g^{\left(k+1\right)},\eta_g^{\left(k+1\right)}\boldsymbol{\Sigma}_g^{\left(k \right)}\right)}, \label{eq:mixture: third cycle: update for w} \end{equation*} where $\boldsymbol{\Sigma}_g^{\left(k \right)}=\boldsymbol{\Lambda}_g^{\left(k \right)}\boldsymbol{\Lambda}_g^{\left(k \right)'}+\boldsymbol{\Psi}_g^{\left(k \right)}$, due to the last two rows of \eqref{eq:mixture: AECM: third cycle: complete-data log-likelihood}, we also calculate \begin{equation*} E_{\boldsymbol{\vartheta}^{\left(k+2/3\right)}}\left(Z_{ig}W_{ig}\boldsymbol{U}_{ig}|\boldsymbol{x}_i,w_{ig}\right)=w_{ig}^{\left(k+2/3\right)}\boldsymbol{\gamma}_g^{\left(k\right)'}\left(\boldsymbol{x}_i -\boldsymbol{\mu}_g^{\left(k+1\right)}\right) \label{eq:mixture: expectation of WU} \end{equation*} and \begin{equation*} E_{\boldsymbol{\vartheta}^{\left(k + 2/3\right)}}\left(Z_{ig}W_{ig}\boldsymbol{U}_{ig}\boldsymbol{U}_{ig}'|\boldsymbol{x}_i,w_{ig}\right)= \boldsymbol{I}_q-\boldsymbol{\gamma}_g^{\left(k\right)'}\boldsymbol{\Lambda}_g^{\left(k\right)} +w_{ig}^{\left(k+2/3\right)}\boldsymbol{\gamma}_g^{\left(k\right)'}\left(\boldsymbol{x}_i -\boldsymbol{\mu}_g^{\left(k+1\right)}\right)\left(\boldsymbol{x}_i -\boldsymbol{\mu}_g^{\left(k+1\right)}\right)'\boldsymbol{\gamma}_g^{\left(k\right)}, \label{eq:mixture: expectation of WUU} \end{equation*} where $\boldsymbol{\gamma}_g^{\left(k\right)}=\left(\boldsymbol{\Lambda}_g^{\left(k\right)}\boldsymbol{\Lambda}_g^{\left(k\right)'}+\boldsymbol{\Psi}_g^{\left(k\right)}\right)^{-1}\boldsymbol{\Lambda}_g^{\left(k\right)}$. Hence, starting from \eqref{eq:mixture: AECM: third cycle: complete-data log-likelihood}, the expected complete-data log-likelihood of the third cycle is \begin{eqnarray*} Q_3\left(\boldsymbol{\vartheta}_3;\boldsymbol{\vartheta}^{\left(k + 2/3\right)}\right) = \sum_{g=1}^G Q_{3g}\left(\boldsymbol{\Lambda}_g,\boldsymbol{\Psi}_g;\boldsymbol{\vartheta}^{\left(k + 2/3\right)}\right), \label{eq:mixture: third cycle: global expected complete-data log-likelihood} \end{eqnarray*} with \begin{eqnarray*} Q_{3g}\left(\boldsymbol{\Lambda}_g,\boldsymbol{\Psi}_g;\boldsymbol{\vartheta}^{\left(k + 2/3\right)}\right) &=& C_g - \frac{1}{2}n_g^{\left(k + 2/3\right)}\log\left|\boldsymbol{\Psi}_g\right| - \frac{1}{2}n_g^{\left(k + 2/3\right)} \text{tr}\left(\boldsymbol{\Psi}_g^{-1}\boldsymbol{S}_g^{\left(k+2/3\right)}\right) \nonumber\\ && +n_g^{\left(k + 2/3\right)}\text{tr}\left(\boldsymbol{\Psi}_g^{-1}\boldsymbol{\Lambda}_g\boldsymbol{\gamma}_g^{\left(k\right)'}\boldsymbol{S}_g^{\left(k+2/3\right)}\right) - \frac{1}{2}n_g^{\left(k + 2/3\right)}\text{tr}\left(\boldsymbol{\Lambda}_g'\boldsymbol{\Psi}_g^{-1}\boldsymbol{\Lambda}_g \boldsymbol{R}_g^{\left(k+2/3\right)}\right), \label{eq:mixture: third cycle: expected complete-data log-likelihood} \end{eqnarray*} where \begin{eqnarray*} \boldsymbol{S}_g^{\left(k+2/3\right)} &=& \frac{1}{n_g^{\left(k + 2/3\right)}}\sum_{i=1}^nz_{ig}^{\left(k+2/3\right)}w_{ig}^{\left(k+2/3\right)}\left(\boldsymbol{x}_i-\boldsymbol{\mu}_g^{\left(k+1\right)}\right)\left(\boldsymbol{x}_i-\boldsymbol{\mu}_g^{\left(k+1\right)}\right)', \label{eq:mixture: S}\\ \boldsymbol{R}_g^{\left(k+2/3\right)} &=& \boldsymbol{I}_q-\boldsymbol{\gamma}_g^{\left(k\right)'}\boldsymbol{\Lambda}_g^{\left(k\right)} +\boldsymbol{\gamma}_g^{\left(k\right)'}\boldsymbol{S}_g^{\left(k+2/3\right)}\boldsymbol{\gamma}_g^{\left(k\right)}, \label{eq:mixture: R} \end{eqnarray*} and $C_g$ includes the terms that do not depend on $\boldsymbol{\Lambda}_g$ and $\boldsymbol{\Psi}_g$, $g=1,\ldots,G$.
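The algebra behind the next two updates is routine matrix differentiation; we sketch it here for completeness. Setting the gradient of $Q_{3g}$ with respect to $\boldsymbol{\Lambda}_g$ equal to zero gives the stationarity condition
\begin{equation*}
n_g^{\left(k+2/3\right)}\boldsymbol{\Psi}_g^{-1}\boldsymbol{S}_g^{\left(k+2/3\right)}\boldsymbol{\gamma}_g^{\left(k\right)}-n_g^{\left(k+2/3\right)}\boldsymbol{\Psi}_g^{-1}\boldsymbol{\Lambda}_g\boldsymbol{R}_g^{\left(k+2/3\right)}=\boldsymbol{0},
\end{equation*}
which is solved by the update for $\boldsymbol{\Lambda}_g^{\left(k+1\right)}$ below; substituting this solution back into $Q_{3g}$ and differentiating with respect to the diagonal elements of $\boldsymbol{\Psi}_g$ then yields the corresponding update for $\boldsymbol{\Psi}_g^{\left(k+1\right)}$.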
Maximization of $Q_{3g}\left(\boldsymbol{\Lambda}_g,\boldsymbol{\Psi}_g;\boldsymbol{\vartheta}^{\left(k + 2/3\right)}\right)$ over $\boldsymbol{\Lambda}_g$ and $\boldsymbol{\Psi}_g$ then yields the updated estimates \begin{equation*} \boldsymbol{\Lambda}_g^{\left(k+1\right)}=\boldsymbol{S}_g^{\left(k+2/3\right)}\boldsymbol{\gamma}_g^{\left(k\right)} \left(\boldsymbol{R}_g^{\left(k+2/3\right)}\right)^{-1} \label{eq:mixture: Lambda} \end{equation*} and \begin{equation*} \boldsymbol{\Psi}_g^{\left(k+1\right)}=\text{diag}\left(\boldsymbol{S}_g^{\left(k+2/3\right)}-\boldsymbol{\Lambda}_g^{\left(k+1\right)}\boldsymbol{\gamma}_g^{\left(k\right)'}\boldsymbol{S}_g^{\left(k+2/3\right)}\right). \label{eq:mixture: Psi} \end{equation*} \subsection{Computational details} \label{subsec: MCGFAM: Computational details} Analogously to the estimation of the contaminated Gaussian distribution, the \texttt{optim()} function is used to maximize each $Q_{g2}\left(\alpha_g,\eta_g;\boldsymbol{\vartheta}^{\left(k+1/3\right)}\right)$, $g=1,\ldots,G$, and the Woodbury identity, given in \eqref{eq:Woodbury identity}, is considered to compute $\boldsymbol{\gamma}_g^{\left(k\right)}$, $g=1,\ldots,G$. Moreover, based on Section~\ref{subsec: FCN: Computational details}, the AECM algorithm is initialized with the estimates of $\pi_g$, $\boldsymbol{\mu}_g$, $\boldsymbol{\Lambda}_g$, and $\boldsymbol{\Psi}_g$ provided by a mixture (with $G$ components) of Gaussian factor analyzers, along with the constraint $\alpha_g=\alpha^*$ (with $\alpha^*\rightarrow 1^-$) and $\eta_g=\eta^*$ (with $\eta^*\rightarrow 1^+$), $g=1,\ldots,G$. In the analysis of Section~\ref{subsec: MCN: Real data: blue crab data}, the (preliminary) mixture of Gaussian factor analyzers is estimated by the \texttt{pgmmEM()} function of the \textbf{pgmm} package for \textsf{R}. The Aitken acceleration is used as the convergence criterion (see Section~\ref{subsubsec: CN: Convergence criterion} for details). \subsection{Real data analysis} \label{subsec: MCN: Real data: blue crab data} The \texttt{f.voles} data set, detailed in \citet[][\tablename~5.3.7]{Flur:Afir:1997} and available in the \textbf{Flury} package for \textsf{R} \citep{Flur:Flur:2012}, consists of measurements on female voles from two species, \textit{M.~californicus} and \textit{M.~ochrogaster}. The data refer to $n=86$ observations for which we have a binary variable \texttt{Species}, denoting the species (45 \textit{M.~ochrogaster} and 41 \textit{M.~californicus}), as well as the following $p=7$ continuous variables \citep[the names of the variables are the same as in the original analysis of this data set by][]{Airo:Hoff:Acom:1984}: \begin{center} \begin{tabularx}{\linewidth}{l X} \texttt{Age} & Age measured in days; \\ \texttt{L2.Condylo} & Condylo incisive length; \\ \texttt{L9.Inc.Foramen} & Incisive foramen length; \\ \texttt{L7.Alveolar} & Alveolar length of upper molar tooth row; \\ \texttt{B3.Zyg} & Zygomatic width; \\ \texttt{B4.Interorbital} & Interorbital width; \\ \texttt{H1.Skull} & Skull height. \\ \end{tabularx} \end{center} All of the variables related to the skull are measured in units of 0.1 mm. The scatterplot matrix of these data, with the clustering induced by \texttt{Species}, is shown in \figurename~\ref{fig:scatter f.voles}.
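For readers wishing to reproduce this display, a minimal \textsf{R} sketch follows; it assumes that the \textbf{Flury} package is installed and that \texttt{Species} is stored alongside the seven measurements, as described above.
\begin{verbatim}
# Scatterplot matrix of the f.voles data, coloured by species
# (assumes the column layout described in the text).
library(Flury)
data(f.voles)
vars <- setdiff(names(f.voles), "Species")
pairs(f.voles[, vars],
      col = as.integer(factor(f.voles$Species)), pch = 20)
\end{verbatim}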
\begin{figure}[!ht] \centering \resizebox{0.7\textwidth}{!}{\includegraphics{fvoles.eps}} \caption{ Scatterplot matrix of the \texttt{f.voles} data with clustering induced by \texttt{Species} ($\bullet$ denotes the observation perturbed for the analysis of Section~\ref{subsec: MCN: Real data: blue crab data}). } \label{fig:scatter f.voles} \end{figure} For our purposes, we assume that the data are unlabelled with respect to \texttt{Species} and that our interest is in evaluating the clustering performance, and the robustness to anomalous points, of our proposed model relative to mixtures of Gaussian factor analyzers. Following the scheme adopted by \citet{Peel:McLa:Robu:2000} on the well-known crabs data, 36 ``perturbed'' data sets are created by replacing the original value 122 of \texttt{Age} for the 81st point (highlighted by a black bullet in \figurename~\ref{fig:scatter f.voles}) with atypical values ranging from 450 to 800, in increments of 10. The two competing techniques are run for $G\in\left\{1,2,3\right\}$ and $q\in\left\{1,2,3\right\}$; the best model is selected by BIC. For the mixture of Gaussian factor analyzers model, the best values are $G=1$ and $q=2$ for all perturbations except 530, where $G=3$ and $q=2$ are selected, and 640, where $G=1$ and $q=3$ are selected. Hence, the value $G=2$ is never selected, and the obtained clustering is far from that induced by \texttt{Species}. When the number of mixture components is fixed to $G=2$, in a sort of model-based classification analysis, the best BIC model always has a single factor ($q=1$), and the classification of the 85 unperturbed observations is shown in \tablename~\ref{tab:f.voles classification mixture of factor analyzers}. \begin{table}[!ht] \caption{ Classification results for the mixture of Gaussian factor analyzers model on the perturbed variants of the \texttt{f.voles} data set. Classification is only evaluated for the unperturbed observations. } \label{tab:f.voles classification mixture of factor analyzers} \centering \begin{tabular}{l c rr} \hline && 1 & 2 \\ \hline \textit{M.~californicus} && 38 & 3 \\ \textit{M.~ochrogaster} && 1 & 43 \\ \hline \end{tabular} \end{table} As we can see, there are four misclassified observations, corresponding to a misclassification rate of 0.047. With regard to the mixture of contaminated Gaussian factor analyzers model, for all perturbed data sets, the best BIC values are $G=2$ and $q=2$, always leading to the classification results in \tablename~\ref{tab:f.voles clustering mixture of contaminated factor analyzers}. \begin{table}[!ht] \caption{ Classification results for the mixture of contaminated Gaussian factor analyzers model on the perturbed variants of the \texttt{f.voles} data set. } \label{tab:f.voles clustering mixture of contaminated factor analyzers} \centering \begin{tabular}{l c rrr} \hline && 1 & 2 & Bad \\ \hline \textit{M.~californicus} && 41 & & \\ \textit{M.~ochrogaster} && & 44 & 1 \\ \hline \end{tabular} \end{table} In contrast with the clustering results obtained for the mixture of Gaussian factor analyzers model, the true number of groups is always selected; furthermore, the selected model is robust to these perturbations, with the number of misallocated observations being zero regardless of the particular perturbed value. Interestingly, we can also note how the model is able to detect the introduced bad point (see the last column of \tablename~\ref{tab:f.voles clustering mixture of contaminated factor analyzers}).
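For concreteness, the perturbation scheme used above can be sketched in \textsf{R} as follows; here \texttt{fit\_mcgfa()} is a hypothetical placeholder for our estimation routine, which is not yet available as a package (cf.\ Section~\ref{sec:Discussion and future work}).
\begin{verbatim}
# Create the 36 perturbed data sets and fit the proposed model to
# each one; fit_mcgfa() is a hypothetical stand-in for our routine.
library(Flury)
data(f.voles)
fits <- lapply(seq(450, 800, by = 10), function(v) {
  d <- f.voles
  d$Age[81] <- v  # inject the bad point into observation 81
  fit_mcgfa(d[, setdiff(names(d), "Species")], G = 1:3, q = 1:3)
})
\end{verbatim}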
Finally, by recalling that the original value of \texttt{Age} for the 81st point was 122, it is also interesting to note that the estimated value of $\eta_g$ (in the group containing the outlier) increases almost linearly as the value of this point departs further from its true value (cf. \figurename~\ref{fig:eta hat}). \begin{figure}[!ht] \centering \resizebox{0.5\textwidth}{!}{\includegraphics{eta.eps}} \caption{ Estimated value of the inflation parameter $\eta_g$ (in the group containing the outlier) as a function of the perturbed value of \texttt{Age} for the 81st observation. } \label{fig:eta hat} \end{figure} \section{Discussion} \label{sec:Discussion and future work} In this paper, the factor analysis model has been extended to the contaminated Gaussian distribution. Methodological contributions have been contextualized in the high-dimensional setting and have involved the definition of both the contaminated Gaussian factor analysis model (cf. Section~\ref{sec:The contaminated Gaussian factor analysis model}) and the mixture of contaminated Gaussian factor analyzers model (cf. Section~\ref{sec:Mixtures of contaminated Gaussian factor analyzers}). In one sense, these models can be viewed, respectively, as generalizations of the (Gaussian) factor analysis model and of the mixture of (Gaussian) factor analyzers model that accommodate outlying observations, spurious observations, or noise, which we have collectively referred to as bad points. Although approaches for high-dimensional data such as the $t$-factor analysis model and mixtures of $t$-factor analyzers can be used for data comprising bad points, they ``assimilate'' bad points rather than separating them out. Computational contributions have concerned the detailed illustration of AECM algorithms for fitting the above models, as well as the definition of a new version, with respect to \citet{Punz:McNi:Robu:2013}, of the ECM algorithm for maximum likelihood parameter estimation of the contaminated Gaussian distribution; the new version adopts the characterization of this distribution as a member of the Gaussian scale mixtures family. When applied to real data, our models have shown their superiority with respect to their Gaussian counterparts. On the bivariate ``symmetric'' financial data of Section~\ref{subsec: CN: Real data analysis}, we have pointed out that, with respect to the contaminated Gaussian distribution, about fifty percent of the observations were bad points creating a departure from normality, while on the \texttt{state.x77} data of Section~\ref{subsec: CGFAM: Real data analysis} we have shown how the contaminated Gaussian factor analysis model can reduce dimensionality and simultaneously detect bad points. Finally, in the application to the \texttt{f.voles} data, our mixture of contaminated Gaussian factor analyzers model performed better in terms of clustering/classification and gave consistent results regardless of the extent of the perturbation (cf.\ Section~\ref{subsec: MCN: Real data: blue crab data}). Future work will focus on the development of an \textsf{R} package to facilitate dissemination of our approaches. In addition, high-dimensional contamination of non-elliptical densities will be explored and, once realized, will lead to an even more flexible modelling paradigm. \small
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Response to the Senior Editor} \comment{All the Reviewers agree that the manuscript may contain some publishable results, but a careful revision is needed. Specifically, they point out that the considered setting should be better explained and justified (why the scheduling parameter is supposed to be unknown but perfectly measurable in real time, why there are no rank conditions on the fault matrices, why the exogenous disturbance of the state is same as on the output, etc). The Reviewers also point out that the contribution as compared to the existing literature should be better explained and that the conservativeness of the proposed solution should be analysed. Overall, after my own reading of the manuscript, I agree with the Reviewers and therefore invite the Authors to revise the manuscript by following the Reviewers suggestions. Notice that the revised manuscript will be considered for publication as a letter only if all the main issues raised by the Reviewers are successfully addressed. } \response{{\color{blue}We have made a substantial revision to improve the paper according to the aspects mentioned by the senior editor and the reviewers. In particular, we have made the following major modifications: \begin{itemize} \item In order to improve the presentation of the paper and better position its contributions, we have extended and rewritten the introduction. In the revised introduction we have incorporated the literature suggested by the reviewers and discussed the algorithms against which the reviewers asked us to position our work. \item We have rewritten the contributions and extended our simulation section to support the fact that a real-time implementable diagnosis solution is a necessity for the health-monitoring of linear parameter-varying (LPV) systems. The revised simulation section also includes the average computational time needed for the calculations in each time step, indicating that our algorithm runs a factor of 100 faster than required given the sampling time of the simulation. \item We now explicitly point towards the algorithmic implementation of the analytical approximation (page 4, line 264), in which we also discuss the limitations and conservativeness of the proposed approach. \item Finally, we have thoroughly proofread the manuscript and have made changes throughout to resolve the ambiguities raised by the reviewers and to better position and justify the considered setting. \end{itemize} Details of the numerous changes to the manuscript are given in the individual replies to the reviewers.}} \printbibliography[segment=\therefsegment,heading=subbibliography] \newrefsegment \newpage \section{Reviewer 52161} \textcolor{blue}{We would like to thank the reviewer for taking the time and for providing instrumental comments. We made every effort to address all the comments. Please find our response to each comment below.} \comment{This paper proposed a filter to estimate multiple fault signals for a class of discrete-time linear parameter-varying (LPV) systems. The problem is interesting and the paper is well organized.} \response{Thank you for your positive and encouraging comments.} \comment{What is the step further done than ref [11].} \response{For~\cite{2006Nyberg} (previously ref [11], now ref [13] in the manuscript) and other nullspace-based approaches, we have briefly elaborated on the positioning of our work with respect to those references.
Specifically,~\cite{2006Nyberg} considers a continuous-time LTI model, whereas in our work we consider a discrete-time LPV model for which we solve the fault detection/estimation problem.} \comment{In Fig.(2), the filter shows that it can not compensate of the noise. How the ensure the convergence of the estimation error exist for the filter with such noise?} \response{First of all, we have made a careful adjustment to the titles of the subplots in Figure 2. Namely, we would like to emphasize that we compare results of the \textit{\textbf{novel}} parameter-varying filter (top graph) with the \textit{\textbf{baseline}} linear time-invariant filter. In these results, without considering measurement noise (i.e., the red dashed line), the novel parameter-varying filter estimates the injected fault perfectly, whereas the time-invariant filter suffers from the fact that it cannot compensate for the parameter-varying plant characteristics. When considering measurement noise, it can be seen that the noise is in fact attenuated (cf.\ the blue dotted and green dash-dotted lines in Figure 2). The noise can be attenuated further by placing the poles of $a(\mathfrak{q})$ closer to the unit circle; however, this would also introduce a larger lag of the residual w.r.t.\ the true fault. Conversely, by placing the poles of the filter closer to the origin, it is possible to increase the convergence speed of the parameter-varying observer, up to the limit case where one obtains a deadbeat observer (i.e., convergence to the true fault in $d_a$ time steps, where $d_a$ represents the degree of the filter $a^{-1}(\mathfrak{q})$). This poses a trade-off between performance and robustness which is present in every estimation problem under the influence of noise. However, we would like to emphasize (and show through Figure 2) that the additive noise does not compromise the ``average'' behavior of the residual, which corresponds to the residual in the noiseless case. We have added material to the manuscript (page 4, lines 252 to 260) in which we borrow results from the literature to show that, in the LTI case (i.e., $w$ is constant), the filter converges to the true fault. In the case of additive noise, the residual contains both information on the fault and a filtered version of the additive noise. Therefore, the residual under the influence of noise (blue dotted, green dash-dotted) follows the average behavior of the residual in the deterministic case (red dashed), as depicted in Figure 2 of the manuscript.} \printbibliography[segment=\therefsegment,heading=subbibliography] \newrefsegment \newpage \section{Reviewer 52163} \textcolor{blue}{We would like to thank the reviewer for taking the time and for the instrumental comments. We made every effort to address all the comments. Please find our response to each comment below.} \comment{The paper contribution is quite confusing. Authors claim that a contribution is related to the real-time implementation. However, only simulation results are presented. On the other hand, the estimation is not as good as other results in the literature, for example, when the problem is boarded with PI fault estimation observers or for the descriptor approach.} \response{{\color{blue}We acknowledge the reviewer's concern and have substantially revised the abstract, introduction, the statement of the contributions, and also the simulation section in order to address the comment.
We briefly summarize our argument in the following: an extension towards an LPV filter must be efficiently implementable to allow real-time use (e.g., via a tractable optimization routine). In Corollary~III.4, we propose an arbitrarily accurate approximation for the original program of the filter design whose solution is analytically available. This analytical solution allows for {\em implementable} real-time filter synthesis while using valuable practical considerations in the context of LPV systems. In the simulation section (page 5, lines 321 to 325) we have elaborated on the computational time needed to synthesise the filter at each time step and to calculate its outputs. The time needed is, on average, more than a factor of 100 smaller than the sampling time of $0.01$\,s. This shows that our contribution is an efficient algorithm suitable for real-time computational tasks. } Finally, we have adjusted the formatting of Fig. 2 to prevent the plots from being misinterpreted by the reader. Specifically, we have changed the order of the plots to make sure that we start off with the \textbf{baseline} time-invariant filter, showing that even in the noiseless situation it cannot compensate for the time-varying nature of the system. Subsequently, we transition to the \textbf{proposed} parameter-varying filter, where it can be observed that the fault can in fact be perfectly estimated (red dashed line). We also elaborate that we could in fact increase the convergence speed by placing the poles further towards the origin. However, this will inherently increase the sensitivity to measurement noise. The trade-off between noise sensitivity and filter performance is a fundamental problem faced in any publication dealing with LPV systems under the influence of noise.} \comment{I am not an English expert, but the writing requires careful revision. The article is misread, somewhat repetitive, and with some misspelled sentences.} \response{Based on your comment, we have carefully proofread the final manuscript for any remaining ambiguities and/or grammatical errors.} \comment{There are no recent works in the literature review; without a proper revision, it is impossible to see the paper's contribution. Some recent works are: \begin{enumerate} \item \text{https://doi.org/10.1002/asjc.1913} \item \text{DOI: 10.34768/amcs-2021-0017} \item \text{https://doi.org/10.1016/j.jfranklin.2019.11.053} \end{enumerate}} \response{Thank you for the suggestions of recent works. The mentioned works~\cite{DO20202099,Gomez-Penate2019339,Hamdi2021247} have been analysed and incorporated appropriately in the revised introduction.} \comment{This assumption is contradictory: "$w$ represents a scheduling parameter which is assumed to be unknown a-priori, but is measurable in realtime". The problem of "unknown" is related to the unmeasurable premise variable case; then, it is measurable or unmeasurable. It is unmeasurable; then, why do authors consider unknown $w$?. Note that this assumption is not trivial because the mathematic for measurable and unmeasurable are different. Of course, the unmeasurable case is much more interesting than the measurable. } \response{We have carefully adjusted the wording of this assumption since we understand the confusion it may have caused (page 2, lines 127 to 130).
We are aiming to make a distinction between a premise variable which has a known relationship with time (for example, a sine function) and a premise variable whose explicit relationship with time is unknown a priori, but which can be measured \textit{\textbf{up to}} the current time.} \comment{What is the meaning of $G(w_k)$ in (2)?.} \response{The matrix $G(w_k)$ selects a linear combination of the state vector at the next time step, $X(k+1)$. We have, however, noticed and corrected the double use of the symbol $G$ in the manuscript.} \comment{Why the vector f is the same for the system and the output?. 7.- I cannot see how he models (2) and the example (14) are related. According to the equations, because the fault vector affects the actuators and sensor, this should be something like [fa; fb]. However, here, only the actuator is affected. Why is sensor fault not considered?} \response{We have clarified the text following Eq. (2). Through this clarification we explain that the signals $d(k),\:f(k)$ in Eq. (2) can represent multivariate signals. We do not restrict our research results to scalar faults and/or disturbances, nor do we assume to have only sensor faults or only actuator faults. Therefore, the signals $d(k),\:f(k)$ could contain disturbances and faults which affect the state $X(k)$, disturbances/faults which could affect the output $y(k)$, or disturbances/faults which could affect both. These disturbances/faults are selected through the matrices $B_d\in\mathbb{R}^{n_X\times n_d},\:B_f\in\mathbb{R}^{n_X\times n_f},\:D_d\in\mathbb{R}^{n_y\times n_d},\:D_f\in\mathbb{R}^{n_y\times n_f}$, where $n_X$ represents the size of the state $X$, $n_y$ the size of the measured output $y$, $n_d$ the number of disturbances $d$, and $n_f$ the number of faults. We believe that this clarification in the manuscript is sufficient for maintaining the current notation, with the multivariate exogenous disturbance $d(k)$ and fault $f(k)$ acting on both the input and the output of the system, which is a notation often used in the current literature. \\It is correct that in our example, to show the effectiveness of our filter, we only consider an actuator fault. However, we wish to stress that any additional sensor fault which satisfies the condition in Fact III.2 can be estimated simultaneously with the actuator fault using our proposed method.} \comment{Authors must include the LPV and the discrete model of (14). } \response{The LPV model is provided in Eq. (14). In order to emphasize that this is in fact an LPV model, we have added the time dependency to all time-varying signals, including the scheduling parameter. Furthermore, we have added the discretization rules, which are used to transform the model in Eq. (14) to discrete time, on page 5, lines 313 to 318.} \comment{Include a plot of the weighting functions?} \response{We are not entirely sure what the reviewer meant by the term ``weighting function''. If the reviewer could clarify the remark, we are happy to provide additional results. Our best guess is that the notion of weighting function refers to the parameter $w$, i.e., $v_x$ in our simulation example. This function is chosen as a sinusoid, as is now explicitly mentioned in the manuscript (page 5, line 321). Due to lack of space, this sinusoid is not plotted in our manuscript, as we believe that an analytical time-varying description is self-explanatory.} \comment{It is quite easy to get lost with the number of corollaries.
Include an algorithm for the solution.} \response{The algorithmic implementation of the filter, i.e., the analytical equations that the reader needs to implement our filter, had already been provided in Remark III.6. We do, however, understand the confusion; we have therefore adjusted this part and renamed it ``\textbf{Algorithmic implementation}'' to emphasize to the reader that this part contains the analytical equations needed to reproduce our results.} \comment{How these results " supports real-time implementation", if authors are presenting only simulations.} \response{{\color{blue} As mentioned in response to an earlier comment, we acknowledge the reviewer's concern and have substantially revised the abstract, introduction, the statement of the contributions, and also the simulation section in order to position the contribution of this work properly. In particular, in Proposition~III.3 we propose a convex reformulation of the isolation/estimation filter at each time instance, followed by Corollary~III.4, where we offer an arbitrarily accurate approximation whose solution is analytically available. This analytical solution allows for {\em implementable} real-time filter synthesis while using valuable practical considerations in the context of LPV systems. In the simulation section (page 5, lines 321 to 325), we have also elaborated on the computational time needed to synthesise the filter at each time step and to calculate its outputs. The time needed is, on average, more than a factor of 100 smaller than the sampling time of $0.01$\,s. This indicates that our contribution is an efficient algorithm suitable for real-time computational tasks.}} \printbibliography[segment=\therefsegment,heading=subbibliography] \newrefsegment \newpage \section{Reviewer 52463} \textcolor{blue}{We would like to thank the reviewer for taking the time and for the instrumental comments. We made every effort to address all the comments. Please find our response to each comment below.} \comment{The paper propose a novel filter to estmate multiple fault signals for a class of LPV systems. The design is formulated as an optimization problem. The paper is in general well written.} \response{Thank you for your positive and encouraging comments.} \comment{I have quite surprising because the authors do not impose any rank condition on fault matrix $D_f$. It should have full column rank? If not why did not you consider $D_f = 0$. This makes the paper more challenging.} \response{We would like to refer to Eq. (8) in Fact III.2, in which we impose the following rank condition: $$\text{Rank}\left(\left[\bar{H}(w)\quad \bar{F}(w)\right]\right)>\text{Rank}\left(\bar{H}(w)\right).$$ Note here that $\bar{F}$ contains both the matrices $D_f(w_k)$ and $B_f(w_k)$ when considering a state-space notation (see Eq. (2) and the declaration of the matrices thereafter). In summary, this means that we do, in fact, impose a rank condition on both the matrices $B_f$ and $D_f$. \\\\ Regarding the question of why we did not consider $D_f=0$: we elaborate in our manuscript (page 2, lines 133 to 135) that we do not restrict ourselves to only sensor faults (i.e., faults acting on the output $y(k)$) or only actuator faults (i.e., faults acting on the state $X(k)$). Therefore, we do not require any restrictive assumptions on the values of $B_f,\:D_f$ as long as we satisfy the condition that the fault signals of interest can be isolated using the aforementioned Fact III.2.
We believe that, by omitting this assumption, we improve the generic applicability of our contributions by being able to estimate only sensor faults (i.e., all entries of $B_f$ are equal to zero), only actuator faults (i.e., all entries of $D_f$ are equal to zero), or a combination of both (the entries of $B_f,\:D_f$ are not all equal to zero, and the matrices satisfy the rank condition given in Fact III.2).} \comment{Similarly to $D_d$. Why did not you consider $D_d=0$? Why the exogenous disturbance of the state is same as on the output, I mean why do you denote both disturbance and measurement noises as $d(k)$? Can we seperate them by different terms such as $d(k)$ and $w(k)$. Comment please.} \response{We have clarified the text following Eq. (2). Through this clarification we explain that the signals $d(k),\:f(k)$ in Eq. (2) can represent multivariate signals. We do not restrict our research results to scalar faults and/or disturbances. Therefore, the signals $d(k),\:f(k)$ could contain disturbances and faults which affect the state $X(k)$, disturbances/faults which could affect the output $y(k)$, or disturbances/faults which could affect both. These disturbances/faults are selected through the matrices $B_d\in\mathbb{R}^{n_X\times n_d},\:B_f\in\mathbb{R}^{n_X\times n_f},\:D_d\in\mathbb{R}^{n_y\times n_d},\:D_f\in\mathbb{R}^{n_y\times n_f}$, where $n_X$ represents the size of the state $X$, $n_y$ the size of the measured output $y$, $n_d$ the number of disturbances $d$, and $n_f$ the number of faults. We believe that this clarification in the manuscript is sufficient for maintaining the current notation, with the multivariate exogenous disturbance $d(k)$ acting on both the input and the output of the system, which is a notation often used in the current literature.\\\\Regarding the question of why we did not consider $D_d=0$: we elaborate in our manuscript (page 2, lines 133 to 135) that we do not restrict ourselves to only sensor faults (i.e., faults acting on the output $y(k)$) or only actuator faults (i.e., faults acting on the state $X(k)$). Therefore, we should be able to consider both disturbances acting on the state as well as disturbances acting on the measurement. We believe it contributes to the general applicability of this work not to make any restrictive assumptions on the values of $D_d$.} \comment{There is an algorithm frequently used for fault detection, especially for LPV systems. It is called set-valued observer. What do you think the pros and cons between your filter and set-valued observer?} \response{A set-valued observer (SVO) solves a different problem from the one we have posed. Namely, a set-valued observer intends to invalidate or ``disqualify'' a parameter-varying nominal model, in which case a fault is detected. This method does not allow the exact estimation of faults. In fact, the output is often binary, i.e., are the current measurements compatible with the LPV model? If not, there is a fault. In our work we focus on the exact estimation of the fault rather than just a flag notifying of the presence of the fault. Given that these algorithms solve a different problem, it is not possible to discuss pros and cons between the two. For the same reason, we have not discussed this methodology, as it stands too far from our problem statement and could hence cause confusion. } \comment{The conservativeness of the filter is not discussed. It would be conservative results for recursive state estimation.
} \response{In the revised manuscript we have changed the name of ``\textbf{Remark III.6}'' to ``\textbf{Algorithmic implementation}'' to provide clarity on the location of the practically implementable algorithm. Furthermore, we have revised this part to briefly elaborate on the potential conservativeness of the filter. Namely, whenever the matrix $\bar{H}$ is not full rank, the solution to our dual problem, i.e., the analytical solution in (10), is approximated with a form of matrix regularization. Depending on the chosen parameters, this could result in a more conservative filter which may not be able to satisfy the posed conditions.} \comment{The sensitivity analysis with respect to fault has not been discussed. I am not sure that a fault of small magnitude can be estimated.} \response{The design requirements of the estimation filter are formulated in Eqs. (5a) and (5b). By satisfying (5a) we can guarantee complete insensitivity to exogenous disturbances (i.e., decoupling). By satisfying (5b), we enforce that the sensitivity to the fault (i.e., the mapping from the fault to the residual) is non-zero. Finally, by incorporating the inverse DC-gain of the filter at each time step (as shown in the algorithmic implementation), we ensure that the steady-state input-output mapping from the fault to the residual is exactly equal to 1. Therefore, in the case of the example system, the minimum size of the fault which can be estimated is limited only by the numerical precision of the platform used. All of this assumes, of course, that there is no noise acting on the system.} \comment{In the simulation part, what is the value of f? The norm of $D_f$ is large. So, not surprisingly, it could be estimated. Can you please simulate for a fault of much less magnitude.} \response{In our example we only consider an actuator fault. Therefore, all entries of $D_f$ are, for the sake of this example, equal to zero. The values of $B_f$ are shown in Eq. (14). In the example, we have chosen values which are representative of a fault in such an automotive system. It is, however, possible to estimate faults of a substantially smaller magnitude, as explained in our response to your previous comment. Please see an example simulation in Figure~\ref{fig:my_label} below, which uses the same simulation settings as in the manuscript, but with an injected fault whose magnitude is a factor of $10^{4}$ smaller than the previously injected fault. To comply with page limitations, we did not include this additional result in the paper.} \begin{figure}[H] \centering \includegraphics[width=\columnwidth]{faultLCSS_title_reduced_magnitude.eps} \caption{Simulation with reduced fault magnitude.} \label{fig:my_label} \end{figure} \comment{"This approach has also been applied for...model descriptions", I do not get this sentence. The word "albeit" seems not to be suitable here. Please check it. } \response{We have carefully revised our wording in this part of the manuscript. The updated text can be found on page 1, lines 65 to 76.} \printbibliography[segment=\therefsegment,heading=subbibliography] \newrefsegment \newpage \section{Reviewer 52471} \textcolor{blue}{We would like to thank the reviewer for taking the time and for the instrumental comments. We made every effort to address all the comments. Please find our response to each comment below.} \comment{This paper presents a filter for estimating fault signals for linear parameter-varying (LPV) systems.
The disturbances are decoupled while the fault is isolated and estimated with a filter that can be implemented in real time. The paper is generally well-written, but there are a few places where the paper could be a bit more clear, including a number of grammatical typos that appear throughout the manuscript.} \response{Thank you for your positive and encouraging comments. Based on your comment, we have carefully proofread the final manuscript for any remaining ambiguities and/or grammatical errors.} \comment{One area that could be a bit clearer is at the beginning of page 3 where the authors state "In this work, we take the problem a step further …" It could be a bit clearer that the way the authors are taking this a step further is by finding an irreducible polynomial basis fo the LPV system in contrast to the LTI system. As it is currently written, it appears at first glance that the second sentence in section III is simply repeating the content of the first sentence.} \response{We agree that the sentence in question is ambiguous and could be misinterpreted. We have carefully rewritten the text to better emphasize the incremental contribution of this work.} \comment{Regarding variable terms, it was not clear to me what the physical representation of z is when introduced in section II. In addition, there is no need to define $\bar{a}$ in Lemma III.1 since that variable is never used in the lemma. In fact, it is not used until Corollary III.5, so it would be better to define $\bar{a}$ there. Furthermore, the variable $\gamma$ is overloaded since it is used in one form immediately following equation 4 and then is used as a Lagrange multiplier in Corollary III.4. Lastly, the x-axis of the first plot in Figure 2 should be labeled. } \response{Based on your feedback, the following points have been adjusted in the manuscript: \begin{itemize} \item We have sharpened the text defining the signal $z$, showing possible examples of what it could consist of. \item We have carefully reviewed the manuscript and concluded that the definition of $\bar{a}$ is not needed to explain the theory appropriately; hence, we have omitted it. \item The variable name $\gamma$ after equation 4 has been replaced with the variable name $\mu$, so that only one definition of $\gamma$ remains in the manuscript. \item We have added an x-axis label to the top graph in Figure 2. \end{itemize}} \printbibliography[segment=\therefsegment,heading=subbibliography] \newrefsegment \end{document} \section{Introduction}{\color{black} The problem of fault diagnosis has been an extensively studied topic over the past decades. The detection and estimation of a fault can support corrective action by the system that mitigates the effect of the fault, improving the safety of the system and its users. In the literature, various categories of fault diagnosis methods are elaborated upon; see~\cite{Ding20081, Gao20153757} and the references therein. In the scope of fault {\em detection}, i.e., detecting the presence of a fault, and fault {\em estimation}, i.e., determining the exact magnitude and shape of a fault, the trade-off between fault sensitivity and the attenuation or decoupling of disturbances and uncertainties is often the most challenging aspect~\cite{varga2017solving}.
The task of {\em isolation} can be seen as a special case of detection and estimation, where all faults can be decoupled from one another using disturbance attenuation techniques, although the complexity of this problem depends strongly on the condition of fault isolability~\cite{van2020multiple}. The class of linear parameter-varying (LPV) systems is often considered in the scope of fault detection and estimation and is particularly suitable for treating non-linear systems with parameter variations as linear systems with time-varying and potentially measurable parameters. A class of solutions was defined in the literature through the use of linear matrix inequalities (LMIs) to robustly formulate the sensitivity problem in an optimization framework using Lyapunov functions~\cite{6082383,Wei2011114,CHEN20178600,Henry2012190,DO20202099,Hamdi2021247,Gomez-Penate2019339}. Therein, parameter-independent Lyapunov functions~\cite{6082383} are used in a polytopic framework, which, due to their time-independent nature, can result in conservative solutions~\cite{Wei2011114}. Other works consider the use of parameter-dependent Lyapunov functions for filter synthesis in either a polytopic framework~\cite{CHEN20178600} or in a linear fractional transformation framework~\cite{Henry2012190}. These methods can handle potentially uncertain LPV systems. However, their computational burden is often high, and they may not guarantee the decoupling of disturbances, unless assumptions are made on the frequency content through the use of a complementary disturbance observer~\cite{DO20202099} (i.e., a proportional integral (PI) observer). In~\cite{Hamdi2021247,Gomez-Penate2019339} a sliding-mode observer is proposed for continuous-time systems, for which the observer gains are synthesised through LMIs. Although these methods are well suited for parameter-varying systems, they could suffer from chattering or singularities, requiring a relaxation of the proposed solution. A different solution to the LPV fault estimation problem is the use of a geometric approach. By exploiting the known model, disturbance directions that are not of interest can be projected onto parameter-varying unobservable subspaces of the fault estimator~\cite{BOKOR2004511}. {\color{black}A nullspace approach, an application of the geometric approach, is proposed in~\cite{2006Nyberg}, which has been extended with a robust formulation for non-linear systems~\cite{Esfahani2016} and parameter-varying systems~\cite{VARGA20116697}. These approaches consider a continuous-time model setting, whereas in this work, amongst other contributions to be mentioned hereafter, we focus on a discrete-time model setting and a closed-form parameter-varying solution.} \textbf{Our contributions:} In summary, there exist many approaches to the problem of fault detection and estimation for linear parameter-varying systems. However, there is as yet no solution that guarantees the decoupling of disturbances while isolating and estimating the fault of interest in real time (i.e., a {\em practically implementable} solution in the form of a discrete filter with {\em low computational burden}).
As such, we define our contributions as follows: \begin{enumerate}[label=(\roman*)] \item \textbf{Parameter-varying filter synthesis:} We propose a novel parameter-varying polynomial decomposition for LPV dynamical systems (Lemma~\ref{lem:poly}), which paves the way for a convex reformulation of the isolation/estimation filter at each time instance (Proposition~\ref{theorem:LPVtheorem}). \item \textbf{Isolability conditions:} We offer the {\em existence} conditions of an isolation filter via a novel polynomial time-varying matrix construction (Lemma~\ref{lem:poly}). This allows for a {\em tractable} evaluation of isolability for LPV systems. \item \textbf{Analytical solution:} We further propose an arbitrarily accurate approximation for the original program of the filter design whose solution is analytically available (Corollary~\ref{cor:dualproblem}). This analytical solution allows for {\em implementable} real-time filter synthesis while accounting for practical considerations in the context of LPV systems. \end{enumerate} The LPV estimation filter is demonstrated on the lateral dynamics of an automated vehicle, a popular illustrative example for LPV fault detection/estimation techniques~\cite{Zhou2020,8269396}. Herein, the estimation challenge is to detect an offset in the steering system, while the vehicle can have a time-varying yet measurable longitudinal velocity.} The outline of this work is as follows. First, the problem formulation is provided in Section~\ref{sec:problemformulation}. In Section~\ref{sec:theorem}, the design of the LPV estimation filter is provided. Moreover, the problem is considered from a practical perspective, showing that the synthesis of such an estimation filter can be implemented by the use of generic computational tools, e.g., matrix inversion. In Section~\ref{sec:simulation}, the estimation filter is demonstrated by application to an example of the lateral dynamics of an automated vehicle. Finally, Section~\ref{sec:conclusion} draws conclusions and proposes future work. \section{Model description and preliminaries}\label{sec:problemformulation} In this section, a class of LPV systems is introduced along with some basic definitions. The model is an LPV extension of the differential-algebraic equations (DAE) class of discrete-time models introduced in~\cite{2006Nyberg} and is described as \begin{align}\label{eq:DAE} H(w_k,\mathfrak{q})[x]+L(w_k,\mathfrak{q})[z]+F(w_k,\mathfrak{q})[f]=0, \end{align} where $\mathfrak{q}$ represents the shift operator (i.e., $\mathfrak{q}[x(k)]=x(k+1)$) and $x,z,f,w$ represent discrete-time signals indexed by the discrete time counter $k$, taking values in~$\mathbb{R}^{n_{x}},\mathbb{R}^{n_{z}},\mathbb{R}^{n_{f}},\mathbb{R}^{n_{w}}$, respectively. The matrices $H(w_k,\mathfrak{q}), L(w_k,\mathfrak{q}), F(w_k,\mathfrak{q})$ are parameter-varying polynomial matrices in the variable $\mathfrak{q}$, depending on the parameter signal $w$, with $n_r$ rows and $n_x,\:n_z,\:n_f$ columns, respectively. {\color{black} Finally, $w$ represents a scheduling parameter whose explicit relationship with time is unknown a priori, but which is measurable in real time and takes values in a compact set~$\mathcal{W}\subseteq\mathbb{R}^{n_{w}}$ for all $k$.} {\color{black}The signal $z$ is assumed to be known or measurable up to the current time $k$ and consists of, e.g., the known or measurable inputs and outputs to and from the system.
The signals $x$ and $f$ are unknown and represent the state of the system and the fault, respectively, where $f$ is not restricted to any particular location (e.g., sensor or actuator fault).} \begin{Rem}[Non-measurable scheduling parameters or model uncertainty] Several suggestions exist in the literature, in the scope of geometric nullspace-based estimation filters, which can be used to make these filters suitable for {\em non-measurable} scheduling parameters $w_k$~\cite[Section 3.3]{VARGA20116697}. Note that the proposed approximation methods for unknown parameters $w_k$ are directly applicable to the results of this work. \end{Rem}The model~\eqref{eq:DAE} encompasses a large class of parameter-varying dynamical systems, an example of which is a set of LPV state-space difference equations. This example will be used in the simulation study and can be derived from~\eqref{eq:DAE} by starting from the following LPV difference equations: {\color{black}\begin{align}\label{eq:ldif} \begin{cases} G(w_k)X(k+1) = A(w_k)X(k)+B_u(w_k)u(k) + B_d(w_k)d(k)+B_f(w_k)f(k),\\ y(k) = C(w_k)X(k)+D_u(w_k)u(k) + D_d(w_k)d(k)+D_{f}(w_k)f(k) \end{cases} \end{align}} Herein, $u(k)$ represents the input signal, $d(k)$ the exogenous disturbance, $X(k)$ the internal state, $y(k)$ the measured output and $f(k)$ the fault. By defining $z:=[y;u],~x:=[X;d]$ and the parameter-varying polynomial matrices {\color{black}\begin{align*} L(w_k,\mathfrak{q}):=\begin{bmatrix}0&B_u(w_k)\\-I&D_u(w_k)\end{bmatrix},\quad F(w_k,\mathfrak{q}):=\begin{bmatrix}B_f(w_k)\\D_f(w_k)\end{bmatrix}, \quad H(w_k,\mathfrak{q}):=\begin{bmatrix}-G(w_k)\mathfrak{q}+A(w_k) & B_d(w_k)\\ C(w_k)&D_d(w_k)\end{bmatrix}, \end{align*}} in~\eqref{eq:DAE}, it can be observed that~\eqref{eq:ldif} is an example of the model description~\eqref{eq:DAE}. In the absence of a fault signal $f$, i.e., for $f=0$, all possible $z$-trajectories of the system~\eqref{eq:DAE} can be denoted as \begin{align}\label{eq:behavior1} \mathcal{M}(w)\coloneqq\{z: \mathbb{Z}\rightarrow\mathbb{R}^{n_z}|\:\exists x:\mathbb{Z}\rightarrow\mathbb{R}^{n_x}: H(w_k,\mathfrak{q})[x]+L(w_k,\mathfrak{q})[z]=0\}, \end{align} which is called the \textit{healthy} behavior of the system. For fault detection, the primary objective is to identify whether the trajectory $z$ belongs to this healthy behavior. \section{Design of parameter-varying estimation filter}\label{sec:theorem} In~\cite{2006Nyberg}, an LTI system, also known as a residual generator, is proposed via the use of an irreducible polynomial basis for the nullspace of $H(w_k,\mathfrak{q})$, denoted by $N_H(w,\mathfrak{q})$~\footnote{In the remainder of this work, by not explicitly mentioning the time index ``$k$" in $w$, we emphasize that the filter coefficients may depend on the parameter signal $w$ at multiple time instances.}. {\color{black}In this work, we take the problem a step further by finding an irreducible polynomial basis $N_H(w,\mathfrak{q})$ for the nullspace of $H(w_k,\mathfrak{q})$, i.e., the state dynamics of an LPV system.
Such a polynomial fully characterizes the healthy behavior of the system~\eqref{eq:DAE} as follows:} \begin{align} \mathcal{M}(w)\!=\!\{z:\mathbb{Z}\rightarrow\mathbb{R}^{n_z}\:|\:N_H(w,\mathfrak{q})L(w_k,\mathfrak{q})[z]\!=\!0\}.\label{eq:beh23} \end{align} For the design of an estimation filter, it suffices to introduce a linear combination $N(w_k,\mathfrak{q})=\mu N_H(w_k,\mathfrak{q})$, such that the following objectives for fault detection can be achieved: \begin{subequations}\label{eq:lpvtot} \begin{align} a^{-1}(\mathfrak{q})N(w,\mathfrak{q})H(w_k,\mathfrak{q})=&0,\quad\forall w_k\in\mathcal{W},\label{eq:lpvtota}\\ a^{-1}(\mathfrak{q})N(w,\mathfrak{q})F(w_k,\mathfrak{q})\neq& 0,\quad\forall w_k\in\mathcal{W}.\label{eq:lpvtotb} \end{align} \end{subequations} Here, the polynomial $a(\mathfrak{q})$ is intended to make the estimation filter proper. Moreover, it enables a form of noise attenuation, which is highly desirable in experimental applications. The above conditions allow us to find a filter that decouples the residual from the time-varying behavior of the system. In fulfilling the requirements of~\eqref{eq:lpvtot}, a proper LPV estimation filter of the following form can be created: \begin{align}\label{eq:resgen} r\coloneqq a^{-1}(\mathfrak{q})N(w,\mathfrak{q})L(w_k,\mathfrak{q})[z]. \end{align} Note that the degree of $a(\mathfrak{q})$ is not less than the degree of $N(w,\mathfrak{q})L(w_k,\mathfrak{q})$, that $a(\mathfrak{q})$ is stable, and that the design of such a polynomial is up to the user and can depend on various criteria (e.g., noise sensitivity). In the following lemma, a method is provided to transform the conditions~\eqref{eq:lpvtot} into simple scalar or vector equations, forming a basis for the methodology proposed in the next section. {\color{black}\begin{Lem}\label{lem:poly} Let $N(w,\mathfrak{q})$ be a feasible solution to~\eqref{eq:lpvtot}, where $H(w_k,\mathfrak{q}),\:F(w_k,\mathfrak{q})$ and $a(\mathfrak{q})$ are as in~\eqref{eq:DAE} and~\eqref{eq:lpvtot} and have the particular form \begin{align*} H(w_k,\mathfrak{q})=& \sum_{i=0}^{d_H}H_i(w_k)\mathfrak{q}^i,& F(w_k,\mathfrak{q})=& \sum_{i=0}^{d_F}F_i(w_k)\mathfrak{q}^i,\\ N(w,\mathfrak{q}) =& \sum_{i=0}^{d_N}N_i(w)\mathfrak{q}^i,& a(\mathfrak{q})=&\sum_{i=0}^{d_a}a_i\mathfrak{q}^i, \end{align*} where $d_H,\:d_F,\:d_N,\:d_a$ denote the degrees of the respective polynomials. Given any parameter signal $w$, the conditions in~\eqref{eq:lpvtot} can be equivalently rewritten as \begin{subequations}\label{eq:conditionstot} \begin{align} \bar{N}(w)\bar{H}(w)=&0,\label{eq:condition1}\\ \bar{N}(w)\bar{F}(w)\neq&0,\label{eq:condition2} \end{align} \end{subequations} where $\bar{N}(w),\:\bar{H}(w),\:\bar{F}(w)$ are defined as \begin{align*} \bar{N}(w)\coloneqq&\begin{bmatrix}N_{0}(w)& N_{1}(w)& \hdots&N_{d_{N}}(w)\end{bmatrix},\\ \bar{H}(w)\coloneqq&\begin{bmatrix}H_{0}(w_{k-d_a})&\hdots&0\\\vdots&\ddots&\vdots\\H_{d_{H}}(w_{k-d_a})&&H_{0}(w_{k-d_a+d_N})\\\vdots&\ddots&\vdots\\0&\hdots&H_{d_{H}}(w_{k-d_a+d_N})\end{bmatrix}^\intercal,\\ \bar{F}(w)\coloneqq&\begin{bmatrix}F_{0}(w_{k-d_a})&\hdots&0\\\vdots&\ddots&\vdots\\F_{d_{F}}(w_{k-d_a})&&F_{0}(w_{k-d_a+d_N})\\\vdots&\ddots&\vdots\\0&\hdots&F_{d_{F}}(w_{k-d_a+d_N})\end{bmatrix}^\intercal.
\end{align*} \end{Lem}} \begin{proof} To prove the lemma, observe that~\eqref{eq:resgen} can be rewritten as (using~\eqref{eq:DAE} and~\eqref{eq:lpvtot}): \begin{align*} r & = -a^{-1}(\mathfrak{q})N(w,\mathfrak{q})F(w_k,\mathfrak{q})[f], \quad \Rightarrow -a(\mathfrak{q})[r] = N(w,\mathfrak{q})F(w_k,\mathfrak{q})[f],\\ \Rightarrow & -\sum_{h=0}^{d_a}a_h\mathfrak{q}^h[r] = \sum_{i=0}^{d_N}N_i(w)\mathfrak{q}^i\sum_{j=0}^{d_F}F_j(w)\mathfrak{q}^j[f] = \sum_{i=0}^{d_N}\sum_{j=0}^{d_F}N_i(w)F_j(\mathfrak{q}^i[w])\mathfrak{q}^{i+j}[f]. \end{align*} Multiplying both sides by $\mathfrak{q}^{-d_a}$, which time-shifts the relation so that the present residual $r(k)$ is expressed as a function of previous faults $f$ and residuals $r$, yields \begin{align*} -\sum_{h=0}^{d_a}a_h\mathfrak{q}^{h-d_a}[r]=&\sum_{i=0}^{d_N}\sum_{j=0}^{d_F}N_i(w)F_j(\mathfrak{q}^{i-d_a}[w])\mathfrak{q}^{i-d_a+j}[f], \end{align*} for which the right-hand side can be rewritten as \begin{align*} \bar{N}(w)\bar{F}(w)\Big(\begin{bmatrix}\mathfrak{q}^{-d_a}I&\mathfrak{q}^{1-d_a}I&\hdots&\mathfrak{q}^{d_N+d_F-d_a}I\end{bmatrix}[f]\Big). \end{align*} This proves the equivalence of~\eqref{eq:condition2} and~\eqref{eq:lpvtotb}. The same line of reasoning applies for proving the equivalence of~\eqref{eq:condition1} and~\eqref{eq:lpvtota}. \end{proof} It is worth noting that the matrices $\bar{N}(w)$, $\bar{H}(w)$, and $\bar{F}(w)$ defined in Lemma~\ref{lem:poly} depend on the parameter signal $w$ through $d_a + 1$ consecutive values. That is, at time instant $k$ the filter coefficient $\bar{N}(w)$ depends on $\{w_{k-d_a}, \dots, w_{k}\} \in \mathcal W^{d_a + 1}$. We, however, refrain from explicitly denoting this dependency and simply use the notation of the entire trajectory $w$, say $\bar{N}(w)$. In this light and using the result from Lemma~\ref{lem:poly}, the conditions for fault isolability can be stated as follows. \begin{Fact}[Conditions of isolability]\label{fact:cond} Given the parameter signal $w$, there exists a feasible solution $\bar{N}(w)$ to the conditions~\eqref{eq:conditionstot} if and only if \begin{align}\label{eq:iffcond} \text{Rank}\left(\left[\bar{H}(w)\quad \bar{F}(w)\right]\right)>\text{Rank}\left(\bar{H}(w)\right). \end{align} \end{Fact} The proof is omitted as it is a straightforward adaptation of~\cite[Fact 4.4]{Esfahani2016}. Using the results from Lemma~\ref{lem:poly}, the main result for the LPV fault detection filter can now be proposed. \begin{Prop}[Parameter-varying filter synthesis]\label{theorem:LPVtheorem} Let $\bar{H}(w)$ and $\bar{F}(w)$ be the matrices defined in Lemma~\ref{lem:poly}. Then an LPV fault detection filter of the form~\eqref{eq:resgen} that fulfills~\eqref{eq:lpvtot} can be found at every time instance $k$, depending on $w_k$, by solving the following convex quadratic program (QP): \begin{subequations} \label{eq:lpvopt} \begin{align} \label{eq:lpvopt_obj} \bar{N}^*(w)\coloneqq\arg&\min\limits_{\bar{N}}-\lVert\bar{N}\bar{F}(w)\rVert_{\infty}+\lVert\bar{N}^\intercal\rVert_2^2,\\ \label{eq:lpvopt_cont} &\textnormal{s.t.}\quad \bar{N}\bar{H}(w)=0, \end{align} \end{subequations} where $\lVert\cdot\rVert_\infty$ denotes the supremum norm.
\end{Prop} \begin{proof} The term in~\eqref{eq:lpvopt_obj} related to the fault matrix $\bar{F}(w)$ ensures a maximised sensitivity to the fault, analogous to~\eqref{eq:lpvtotb}, whereas the quadratic (regularization) term related to the filter coefficient $\bar{N}$ ensures that the solution to the problem is bounded. The constraint~\eqref{eq:lpvopt_cont} ensures that the effect of unknown disturbances is decoupled from the residual, analogous to the desired filter requirement in~\eqref{eq:lpvtota}. \end{proof} Proposition~\ref{theorem:LPVtheorem} lays the groundwork for creating an estimation filter for an LPV model with measurable scheduling parameters $w$. At first glance, solving an optimization problem at each time step in order to obtain the filter coefficients may appear unattractive. However, we show in the following corollary that a tractable analytical solution can be derived for this problem. \begin{Cor}[Analytical solution]\label{cor:dualproblem} Consider the convex QP in~\eqref{eq:lpvopt}. This optimization problem admits an analytical solution, given by \begin{align}\label{eq:analyt} &\bar{N}_\gamma^*(w) = \frac{1}{2\gamma}\bar{F}_{j^*}^\intercal(w)(\gamma^{-1}I+\bar{H}(w)\bar{H}^\intercal(w))^{-1}, \\ &\text{where} \quad j^* = \arg\max\limits_{j \le d_N} \lvert\bar{N}_\gamma^{*}(w)\bar{F}_{j}(w)\rvert, \nonumber \end{align} where $\bar{F}_j(w)$ denotes the $j$-th column of the matrix $\bar{F}(w)$. Moreover, the solution~$\bar{N}_\gamma^*(w)$ in~\eqref{eq:analyt} converges to the optimal filter coefficient~\eqref{eq:lpvopt} as the parameter~$\gamma$ tends to $\infty$. For bounded values of $\gamma$,~\eqref{eq:analyt} provides an approximate solution. \end{Cor} \begin{proof} A dual program of~\eqref{eq:lpvopt} can be obtained by penalizing the equality constraint~\eqref{eq:lpvopt_cont} through a quadratic function as \begin{align}\label{eq:dual} \sup\limits_{\gamma \ge 0} g(\gamma,w) = \lim_{\gamma \to \infty}g(\gamma,w), \end{align} where $\gamma\in\mathbb{R}_+$ represents the Lagrange multiplier and $g(\gamma,w)$ represents the dual function defined as \begin{align*} g(\gamma,w) \coloneqq\inf_{\bar{N}}\gamma\lVert\bar{N}\bar{H}(w)\rVert_2^2 + \lVert\bar{N}^{\intercal}\rVert_2^2 - \lVert\bar{N}\bar{F}(w)\rVert_{\infty}. \end{align*} Note that the $\infty$-norm related to the fault sensitivity can temporarily be dropped by viewing the problem~\eqref{eq:lpvopt} and its dual problem~\eqref{eq:dual} as a set of $d_N$ different QPs; note that the matrix $\bar{F}$ has $d_N$ columns. Hence, the set of dual functions is denoted as \begin{align}\label{eq:dualfunctions} \tilde{g}(\gamma,w) = \inf_{\bar{N}}\underbrace{\gamma\lVert\bar{N}\bar{H}(w)\rVert_2^2 + \lVert\bar{N}^\intercal\rVert_2^2 - \bar{N}\bar{F}(w)}_{\mathcal{L}(\bar{N},\gamma)}. \end{align} The solution to the convex quadratic dual problem can be found by first computing the partial derivative of the above Lagrangian: \begin{align*} \frac{\partial\mathcal{L}(\bar{N}^*_\gamma(w),\gamma)}{\partial\bar{N}}\!=\!2\gamma\bar{N}^*_\gamma(w)\bar{H}(w)\bar{H}^{^\intercal}\!(w)\!+\!2\bar{N}^*_\gamma(w)\!-\!\bar{F}^{^\intercal}\!(w).
\end{align*} Setting this partial derivative to zero, we arrive at \begin{align} \bar{N}_\gamma^*(w)=\frac{1}{2\gamma}\bar{F}^\intercal(w)(\gamma^{-1}I+\bar{H}(w)\bar{H}^\intercal(w))^{-1},\label{eq:solutionopt} \end{align} which provides $d_N$ admissible solutions to the problem with dual functions $\tilde{g}(\gamma,w)$~\eqref{eq:dualfunctions}. The optimal solution is found by choosing the column of $\bar{F}$ for which the fault sensitivity of the filter is maximal, i.e., \begin{align*} &\bar{N}_\gamma^*(w) = \frac{1}{2\gamma}\bar{F}_{j^*}^\intercal(w)(\gamma^{-1}I+\bar{H}(w)\bar{H}^\intercal(w))^{-1}, \\ &\text{where} \quad j^* = \arg\max\limits_{j \le d_N} \lvert\bar{N}_\gamma^{*}(w)\bar{F}_{j}(w)\rvert, \nonumber \end{align*} which proves equation~\eqref{eq:analyt}. Substituting this solution back into the dual program~\eqref{eq:dual} yields \begin{align*} \begin{cases} \max\limits_{\gamma} -\frac{1}{4\gamma}\bar{F}_{j^*}^\intercal(w)(\gamma^{-1}I+\bar{H}(w)\bar{H}^\intercal(w))^{-1}\bar{F}_{j^*}(w),\\ \text{s.t.}\quad &\gamma\geq 0. \end{cases} \end{align*} This quadratic negative (semi-)definite problem approaches its supremum as $\gamma$ tends to infinity, concluding the proof. \end{proof} Considerations for choosing $\gamma$ are elaborated upon in the algorithmic implementation below. For the purpose of estimation, we are particularly interested in a unity zero-frequency gain. In the following corollary, it is shown how to incorporate this condition into the filter. \begin{Cor}[Zero steady-state]\label{cor:zerosteadystate} {\color{black}Consider a filter of the form~\eqref{eq:resgen}, where the numerator coefficient $\bar{N}^*_\gamma(w)$ is a solution to the program~\eqref{eq:lpvopt}, given analytically in~\eqref{eq:analyt}. The steady-state relation of the mapping $(d,f)\mapsto r$, for any disturbance signal $d$ and a constant fault $f$, is given by \begin{align*} r=-\frac{\bar{N}^*_\gamma(w)\bar{F}(w)\mathds{1}_{{d_N}\times{d_F}}}{\sum_{h=0}^{d_a}a_h}f, \end{align*} where $\mathds{1}_{{d_N}\times{d_F}}$ denotes a matrix of ones of dimensions ${d_N}\times{d_F}$. } \end{Cor} \begin{proof} Multiplying the model equation~\eqref{eq:DAE} by a filter $a^{-1}(\mathfrak{q})N(w,\mathfrak{q})$ satisfying the conditions~\eqref{eq:lpvtot} yields \begin{align*} a^{-1}(\mathfrak{q})N(w_k,\mathfrak{q})L(w_k,\mathfrak{q})[z]=&- a^{-1}(\mathfrak{q})N(w_k,\mathfrak{q})F(w_k,\mathfrak{q})[f],\\ \Rightarrow r=&- a^{-1}(\mathfrak{q})N(w_k,\mathfrak{q})F(w_k,\mathfrak{q})[f], \end{align*} where the last line is induced by~\eqref{eq:resgen}. The steady-state behavior of this filter can be found by setting $\mathfrak{q}=1$, resulting in {\color{black}\begin{align*} r=&- a^{-1}(1)N(w_k,1)F(w_k,1)[f] = -\frac{\bar{N}(w)\bar{F}(w)\mathds{1}_{{d_N}\times{d_F}}}{\sum_{h=0}^{d_a}a_h}f, \end{align*}} which provides the desired result, hence concluding the proof. \end{proof} {\color{black}Notice that when the considered system is time-invariant (i.e., $w$ is constant for all $k$), the conclusion of Corollary~\ref{cor:zerosteadystate} coincides with~\cite[Lemma 3.1, Eq. (10)]{van2020multiple}. This condition, together with the stability of $a(\mathfrak{q})$, ensures convergence of the estimation error for piecewise constant fault signals.
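To make the preceding synthesis concrete, the following minimal Python sketch assembles the stacked matrix $\bar{H}(w)$ of Lemma~\ref{lem:poly} and evaluates the analytical solution of Corollary~\ref{cor:dualproblem}. This is an illustrative sketch only: the function names, the dense matrix inverse and the default value of $\gamma$ are our own choices rather than part of the method's specification, and the block layout assumes the coefficient-matching reading of the lemma, i.e. that $\bar{N}\bar{H}=0$ encodes $\sum_i N_i H_{m-i}=0$ for every power $m$ of $\mathfrak{q}$.
\begin{verbatim}
import numpy as np

def stack_poly(coeff_fun, w_window, d_N):
    # Assemble the banded matrix of the lemma: block row i carries the
    # coefficient matrices evaluated at w_{k-d_a+i}, shifted i block
    # columns, so that Nbar @ out = 0 matches sum_i N_i H_{m-i} = 0.
    blocks = [coeff_fun(w) for w in w_window]   # len(w_window) = d_N + 1
    d = len(blocks[0]) - 1                      # degree of the polynomial
    nr, nc = blocks[0][0].shape
    out = np.zeros(((d_N + 1) * nr, (d_N + d + 1) * nc))
    for i in range(d_N + 1):
        for l in range(d + 1):
            out[i*nr:(i+1)*nr, (i+l)*nc:(i+l+1)*nc] = blocks[i][l]
    return out

def filter_coefficient(H_bar, F_bar, gamma=1e6):
    # Analytical solution of the corollary: for each column F_j of F_bar,
    # the candidate row is F_j^T (gamma^{-1} I + Hbar Hbar^T)^{-1}/(2 gamma);
    # keep the candidate with maximal fault sensitivity.
    n = H_bar.shape[0]
    M = np.linalg.inv(np.eye(n) / gamma + H_bar @ H_bar.T)
    j_star = max(range(F_bar.shape[1]),
                 key=lambda j: abs(F_bar[:, j] @ M @ F_bar[:, j]))
    return F_bar[:, j_star] @ M / (2 * gamma), j_star
\end{verbatim}
The same stacking helper can be reused for $\bar{F}(w)$ and $\bar{L}(w)$, which share the banded structure; in a practical implementation the dense inverse would typically be replaced by a linear solve.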
Let us note that when there is additive unbiased noise on, e.g., the output measurements, the average behavior of the resulting residual will still follow the residual from the deterministic case.} {\color{black} In the following, we elaborate on the algorithmic implementation of the proposed estimation filter, e.g., as an LPV minimal state-space realization, a non-trivial problem given the potential effects of dynamic dependence~\cite{toth2012}. \subsection*{Algorithmic Implementation} For the estimation filter to function according to the objectives~\eqref{eq:lpvtot} (including unity DC gain for estimation), hence preventing any effects from dynamic dependencies~\cite{toth2012}, the filter can be implemented as an LPV input-output representation as follows: \begin{align*} r(k) = a_0^{-1}\bar{a}\mathds{1}_{d_a}E(w) \begin{bmatrix}z(k-d_a)&\hdots&z(k-d_a+d_N)\end{bmatrix}^{\intercal} - a_0^{-1}\sum_{i=1}^{d_a}a_{i}r(k-i), \end{align*} where $\bar{a}\coloneqq\begin{bmatrix}a_0&\hdots&a_{d_a}\end{bmatrix}$ collects the denominator coefficients and \begin{align} E(w)=&\frac{\bar{N}(w)\bar{L}(w)}{\bar{N}(w)\bar{F}(w)\mathds{1}_{d_N\times d_F}},\label{eq:matrixop} \end{align} where the matrix $\bar{L}(w)$ is defined as \begin{align*} \bar{L}(w)\coloneqq&\begin{bmatrix}L_{0}(w_{k-d_a})&\hdots&0\\\vdots&\ddots&\vdots\\L_{d_{L}}(w_{k-d_a})&&L_{0}(w_{k-d_a+d_N})\\\vdots&\ddots&\vdots\\0&\hdots&L_{d_{L}}(w_{k-d_a+d_N})\end{bmatrix}^\intercal. \end{align*} The matrix operation~$E(w)$ in~\eqref{eq:matrixop} ensures the isolation and estimation of the fault, while the filter coefficients in $\bar{a}$ ensure causality of the operation and reduced sensitivity to noise. By substitution of the results from~\eqref{eq:analyt},~\eqref{eq:matrixop} can be rewritten as: \begin{align*} E(w)=\frac{\bar{F}^{\intercal}_{j^*}(\gamma^{-1}I+\bar{H}(w)\bar{H}^{\intercal}(w))^{-1}\bar{L}}{\bar{F}^{\intercal}_{j^*}(\gamma^{-1}I+\bar{H}(w)\bar{H}^{\intercal}(w))^{-1}\bar{F}\mathds{1}_{d_N\times d_F}}, \end{align*} from which it can be deduced that the term $\gamma^{-1}I$ solely ensures well-posedness of the involved inversion operations since, based on Corollary~\ref{cor:dualproblem}, ideally $\gamma^{-1}I$ tends to a zero matrix. The filter is well-posed and exact if and only if $\bar{H}(w)$ is of full rank. If this condition is not fulfilled, the analytical solution from Corollary~\ref{cor:dualproblem} provides a conservative solution (i.e., the filter could inherit a bias), where the Lagrange multiplier $\gamma$ needs to be chosen large enough to ensure well-posedness, while being numerically bounded for practical considerations; a brief numerical sketch of this operation is given below.} \section{Case study: automated driving}\label{sec:simulation} In this section, the proposed method for designing an LPV estimation filter is illustrated based on a fault estimation problem coming from the lateral dynamics of an automated passenger vehicle. A linear bicycle vehicle model is used as a benchmark model~\cite[Equation (1)]{Schmeitz2017}, which is controlled in {\em closed loop} by the same PD control law as proposed in~\cite{Schmeitz2017}. Within an automotive context it is undesirable to mitigate a fault in closed loop without being aware of its magnitude. In fact, in the presence of substantial faults, the vehicle is expected to transition to a safe state. This need for estimating the fault shows the applicability of our proposed problem statement in this application context.
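Before turning to the vehicle model, the matrix operation $E(w)$ above can be sketched as follows, reusing the helpers sketched after Corollary~\ref{cor:dualproblem}. As before, the helper name, the dense inverse and the interpretation of the normalization $\bar{N}(w)\bar{F}(w)\mathds{1}_{d_N\times d_F}$ as the summed fault gain are illustrative assumptions rather than a prescribed implementation.
\begin{verbatim}
import numpy as np

def E_of_w(H_bar, F_bar, L_bar, j_star, gamma=1e6):
    # The matrix operation E(w): a regularized left-nullspace direction
    # of H_bar, applied to L_bar and normalized by the summed fault gain
    # so that the fault-to-residual DC gain equals one.
    n = H_bar.shape[0]
    M = np.linalg.inv(np.eye(n) / gamma + H_bar @ H_bar.T)
    row = F_bar[:, j_star] @ M          # unnormalized filter row
    return (row @ L_bar) / np.sum(row @ F_bar)
\end{verbatim}
In this sketch, only one regularized inversion per scheduling window is required, consistent with the low computational burden targeted in the contributions.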
First, the model as depicted in Fig.~\ref{fig:simmodel} can be formulated as a set of continuous-time linear state-space equations as follows {\color{black} \begin{align} \footnotesize \setlength{\arraycolsep}{2.5pt} \medmuskip = 1mu \dot{X}(t)\!\ =&\!\! \underbrace{\begin{bmatrix}\frac{C_f+C_r}{v_x(t)m}&\frac{l_fC_f-l_rC_r}{v_x(t)m}&0&0\\ \frac{l_fC_f-l_rC_r}{v_x(t)I}&\frac{l_f^2C_f+l_r^2C_r}{v_x(t)I}&0&0\\-1&0&0&v_x(t)\\0&-1&0&0\end{bmatrix}}_{\tilde{A}(v_x)}\!\!\!X(t)\!-\!\!\!\underbrace{\begin{bmatrix}\frac{C_f}{m}\\\frac{l_fC_f}{I}\\0\\0\end{bmatrix}}_{\tilde{B}_u}\!\!u(t)\nonumber -\!\!\underbrace{\begin{bmatrix}\frac{C_f}{m}\\\frac{l_fC_f}{I}\\0\\0\end{bmatrix}}_{\tilde{B}_f}\!\!f(t)\!+\!\!\underbrace{\begin{bmatrix}g&0\\0&0\\0&0\\0&v_x(t)\end{bmatrix}}_{\tilde{B}_d(v_x)}\!\!\!\begin{bmatrix}\sin{(\phi(t))}\\\kappa(t)\end{bmatrix},\\ y(t)=& \begin{bmatrix}0&I\end{bmatrix}X(t),\nonumber \end{align} where the state $X(t)=\begin{bmatrix}\dot{\psi}(t)&v_y(t)&y_e(t)&\psi_e(t)\end{bmatrix}^\intercal$, in which $\dot{\psi}$ denotes the yaw rate of the vehicle, $v_y$ the lateral velocity of the vehicle, $y_e$ the lateral deviation from the lane center, and $\psi_e$ the heading deviation from the lane center. The assumed fault, $f$, acts as an additive fault on the input steering angle, $u$.} \begin{figure}[t] \centering \includegraphics[clip, trim=4.5cm 13.4cm 4.5cm 7.2cm, scale=0.63]{images/model_img_f.pdf} \caption{Visual representation of the bicycle model.} \label{fig:simmodel} \end{figure} Two disturbances are considered, where $\kappa$ denotes the curvature of the road and $\phi$ denotes the banking angle of the road. The parameters $C_f=1.50\cdot 10^5\:\rm N\cdot rad^{-1}$ and $C_r=1.10\cdot 10^5\:\rm N\cdot rad^{-1}$ represent the lateral cornering stiffness of the front and rear tyres, respectively, and $l_f=1.3\:\rm m$ and $l_r=1.7\:\rm m$ represent the distances from the front and rear axle to the center of gravity, respectively. It is furthermore assumed that $m=1500\:\rm kg$ represents the total mass of the vehicle, $I=2600\:\rm kg\cdot m^2$ represents the moment of inertia around the vertical axis of the vehicle and $g=9.81\:\rm m\cdot s^{-2}$ represents the gravitational acceleration. The parameter $v_x$ represents the longitudinal velocity of the vehicle and acts as the scheduling parameter $w_k$ from~\eqref{eq:DAE}. {\color{black}The discrete-time system matrices, used for filter synthesis, are found using exact discretization with a sampling time of $h=0.01\:\rm s$, i.e., $A(v_x)=e^{\tilde{A}(v_x)h}$ and $B(v_x) = \tilde{A}^{-1}(v_x)(A(v_x)-I)\tilde{B}(v_x)$ (for all matrices $\tilde{B}_u, \tilde{B}_f$ and $\tilde{B}_d(v_x)$). The relation from the state to the output remains unchanged through discretization. A compact sketch of this discretization step is given below.} {\color{black}In traffic scenarios it is realistic to assume perturbations in the longitudinal velocity. To capture this, the scheduling parameter is chosen as $v_x(t)=19+5\sin(0.1\pi t)$.} {\color{black}The simulation results are generated through Simulink on an Intel Core i7-10850H 2.7 GHz platform. The average computational time needed for evaluating the filter and its output is $8.2\cdot 10^{-5}\:\rm s$, i.e., roughly a factor 100 lower than the sampling time.} Fig.~\ref{fig:simulation} depicts the simulation results of a 500-sample-long scenario. With this simulation, the effectiveness of the LPV estimation filter, using two different sets of filtering coefficients, is shown and compared to an LTI estimation filter (as used in~\cite{van2020multiple}, generated for a velocity $v_x=19\:\rm m/s$).
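As a side note on implementation, the exact discretization step described above admits the following compact sketch; the block-exponential form is a standard equivalent of $\tilde{A}^{-1}(v_x)(A(v_x)-I)\tilde{B}(v_x)$ whenever $\tilde{A}(v_x)$ is invertible, and remains well defined otherwise (the helper name is ours, and the sketch is an implementation convenience rather than part of the derivation).
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A_ct, B_ct, h=0.01):
    # Exact (zero-order-hold) discretization: A = e^{A_ct h} and
    # B = (integral_0^h e^{A_ct s} ds) B_ct, both read off from a
    # single block matrix exponential.
    n, m = A_ct.shape[0], B_ct.shape[1]
    blk = np.zeros((n + m, n + m))
    blk[:n, :n], blk[:n, n:] = A_ct, B_ct
    blk_d = expm(blk * h)
    return blk_d[:n, :n], blk_d[:n, n:]
\end{verbatim}
The same call would be applied to each of $\tilde{B}_u$, $\tilde{B}_f$ and $\tilde{B}_d(v_x)$, re-evaluated at the current scheduling value $v_x$.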
In this simulation, the fault to be estimated is simulated as a realistic abrupt steering-wheel offset of {\color{black} $f=0.1\frac{\pi}{180}$ radians starting at time sample $k=150$.} Finally, Fig.~\ref{fig:simulation} shows the simulation results in two cases, with and without measurement noise. We introduce realistic additive white sensor noise with standard deviations of $\sigma_{\dot{\psi}}=8\cdot 10^{-4}$~rad/s, $\sigma_{y_e}=5\cdot10^{-2}$~m and $\sigma_{\psi_e}=3\cdot 10^{-3}$~rad. Note that for the noisy simulations, two different filters are created and depicted, one of which has denominator $a(\mathfrak{q})=(\mathfrak{q}+0.95)^3$. For the other filter (denoted in Fig.~\ref{fig:simulation} as "increased filtering"), $a(\mathfrak{q})=(\mathfrak{q}+0.98)^3$ is selected. Note that the design of $a(\mathfrak{q})$ can depend highly on domain-specific knowledge of the application. For example, $a(\mathfrak{q})$ can be designed to attenuate unmodeled noise or disturbances while respecting the frequency content of the fault to be estimated. \begin{figure}[t] \includegraphics[width = 0.8\columnwidth]{images/faultLCSS_title.eps} \caption{Performance of the time-invariant and the proposed parameter-varying filter in the absence and presence of measurement noise.} \label{fig:simulation} \end{figure} The results in Fig.~\ref{fig:simulation} show that the baseline LTI filter is not robust against the time-varying longitudinal velocity. Once the fault increases, the residual responds, but does not converge to the true fault. This is explained by the fact that this estimation filter is only designed to decouple unmeasured disturbances from the residual at a constant velocity. Therefore, small effects of disturbances and unmeasured states appear in the residual. In the absence of measurement noise, the LPV filter estimates the injected fault exactly. In the presence of noise, the LPV filter still outperforms the LTI filter, and the estimation accuracy is improved further by placing the roots of the denominator $a(\mathfrak{q})$, i.e., the poles of the filter, closer to the unit circle. Placing these poles closer to the origin instead increases the convergence rate at the cost of an increased sensitivity to the measurement noise. \section{Conclusion and future work}\label{sec:conclusion} In this paper, a novel synthesis method for a fault estimation filter, applicable to a class of discrete-time LPV systems, is introduced. The synthesis of such a filter is formulated as an optimization problem parametrized by the measurable scheduling parameters, for which a solution exists given a set of (easy-to-check) conditions. We further propose an approximate scheme that can be made arbitrarily precise while enjoying an analytical solution, which supports real-time implementation. The proposed method has been demonstrated on the lateral dynamics of an automated vehicle, showing that in several distinct cases the fault can be estimated. Future work includes the extension to uncertain dynamics, decoupling of non-linearities in the system dynamics and experimental validation of the proposed algorithm.
\bibliographystyle{unsrt}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In a recent paper \cite{Ni2019}, the second author considered a Riemannian manifold $M$ which admits an effective isometric action of a Lie group $G$. He studied the Killing vector field of constant length induced by a vector $X\in\mathfrak{g}=\mathrm{Lie}(G)$, and proved that the eigenvalues of $\mathrm{ad}(X):\mathfrak{g}\rightarrow\mathfrak{g}$ are all imaginary. This discovery inspired him to propose the following conjecture (see Conjecture 1 in \cite{Ni2019}). \begin{conjecture}\label{question 1} Assume that a semi-simple Lie group $G$ acts effectively and isometrically on a connected Riemannian manifold $M$, and the vector $X\in\mathfrak{g}$ defines a Killing vector field of constant length. Then $X$ is a compact vector in $\mathfrak{g}$, i.e. the subalgebra $\mathbb{R}X$ is compactly imbedded in $\mathfrak{g}$. \end{conjecture} See Section \ref{subsection-2.1} for the notions of compact vectors and compactly imbedded subalgebras. Our initial motivation is to prove Conjecture \ref{question 1}. Our approach is different from that in \cite{Ni2019}, which depends on the Riemannian structure and the constant length condition. Here we only need to assume that the vector $X\in\mathfrak{g}$ defines a bounded Killing vector field. Recall that a Killing vector field on a Riemannian manifold is called bounded if its length function with respect to the given metric is a bounded function. This condition is relatively weak. For example, any Killing vector field on a compact Riemannian manifold is bounded. The special case of Killing vector fields of constant length is intrinsically related to Clifford--Wolf translations \cite{BN2008,DX2014}. See \cite{Ni2015,WPX,XW} for some recent progress on this subject. On the other hand, curvature conditions may provide obstructions or rigidity for bounded Killing vector fields. For example, on a complete negatively curved Riemannian manifold, a bounded Killing vector field must be zero \cite{Wo1964}. On a complete non-positively curved Riemannian manifold, a bounded Killing vector field must be parallel \cite{BN2008-2}. We first prove the following theorem, solving Conjecture \ref{question 1} not only for Killing vector fields of constant length, but also for all bounded Killing vector fields. \begin{theorem}\label{theorem 1} Let $M$ be a connected Riemannian manifold on which a connected semi-simple Lie group $G$ acts effectively and isometrically. Assume $X\in\mathfrak{g}$ defines a bounded Killing vector field. Then $X$ is contained in a compact ideal of $\mathfrak{g}$. \end{theorem} As a compact ideal in the semi-simple Lie algebra $\mathfrak{g}$ generates a compact semi-simple subgroup of $G$, we see immediately from Theorem \ref{theorem 1} that $X$ is a compact vector when it is bounded, and hence $\mathrm{ad}(X):\mathfrak{g}\rightarrow\mathfrak{g}$ has only imaginary eigenvalues. It is then natural to further study this spectral property of bounded Killing vector fields when $G$ is not semi-simple. For this purpose, we take a Levi decomposition $\mathfrak{g}=\mathfrak{r}(\mathfrak{g})+\mathfrak{s}$ for $\mathfrak{g}=\mathrm{Lie}(G)$ (see Section \ref{subsection-2.1}), and then we have the decomposition $X=X_r+X_s$ accordingly.
Applying the argument for Theorem \ref{theorem 1}, some techniques from the proofs of Lemma 2.3 and Lemma 2.4 in \cite{Wo2017}, and further Lie algebraic discussion from Lemma \ref{lemma 10} and Corollary \ref{lemma 5} (see Lemma 3 in \cite{Ti1964} for a similar argument for bounded automorphisms of Lie groups), we prove the following crucial algebraic properties of the bounded Killing vector field $X$. \begin{theorem}\label{theorem 2} Let $M$ be a connected Riemannian manifold on which the connected Lie group $G$ acts effectively and isometrically. Assume that $X\in\mathfrak{g}$ defines a bounded Killing vector field, and $X=X_r+X_s$ according to the Levi decomposition $\mathfrak{g}=\mathfrak{r}(\mathfrak{g})+\mathfrak{s}$, where $\mathfrak{s}=\mathfrak{s}_c\oplus\mathfrak{s}_{nc}$. Then we have the following: \begin{enumerate} \item The vector $X_s\in\mathfrak{s}$ is contained in the compact semi-simple ideal $\mathfrak{c}_{\mathfrak{s}_c}(\mathfrak{r}(\mathfrak{g}))$ of $\mathfrak{g}$; \item The vector $X_r\in\mathfrak{r}$ is contained in the center $\mathfrak{c}(\mathfrak{n})$ of $\mathfrak{n}$. \end{enumerate} \end{theorem} Here the centralizer $\mathfrak{c}_\mathfrak{a}(\mathfrak{b})$ of a subalgebra $\mathfrak{b}\subset\mathfrak{g}$ in a subalgebra $\mathfrak{a}\subset\mathfrak{g}$ is defined as $\mathfrak{c}_\mathfrak{a}(\mathfrak{b})= \{u\in\mathfrak{a}\,|\,[u,\mathfrak{b}]=0\}$. In particular, the center $\mathfrak{c}(\mathfrak{a})$ of $\mathfrak{a}\subset\mathfrak{g}$ coincides with~$\mathfrak{c}_\mathfrak{a}(\mathfrak{a})$. \smallskip Theorem \ref{theorem 2} helps us find more algebraic properties of bounded Killing vector fields. In particular, $X=X_r+X_s$ is an abstract Jordan decomposition which is independent of the choice of the Levi subalgebra $\mathfrak{s}$, and the eigenvalues of $\mathrm{ad}(X)$ coincide with those of $\mathrm{ad}(X_s)$, which are all imaginary (see Theorem \ref{main-cor}). As a direct corollary, we have proved the following spectral property. \begin{corollary} Let $M$ be a connected Riemannian manifold on which the connected Lie group $G$ acts effectively and isometrically. Assume that $X\in\mathfrak{g}$ defines a bounded Killing vector field. Then all eigenvalues of $\mathrm{ad}(X):\mathfrak{g}\rightarrow\mathfrak{g}$ are imaginary. \end{corollary} When $M=G/H$ is a Riemannian homogeneous space on which the connected Lie group $G$ acts effectively, we can apply Theorem \ref{theorem 2} to prove the following theorem, which completely determines all bounded vectors in $\mathfrak{g}$ for $G/H$, or equivalently all bounded Killing vector fields induced by vectors in $\mathfrak{g}$ (see Section 2.3 for the notion of bounded vectors for a coset space, and Lemma \ref{lemma 1} for the equivalence). \begin{theorem}\label{theorem 3} Let $G/H$ be a Riemannian homogeneous space on which the connected Lie group $G$ acts effectively. Let $\mathfrak{r}(\mathfrak{g})$, $\mathfrak{n}(\mathfrak{g})$ and $\mathfrak{s}=\mathfrak{s}_c\oplus\mathfrak{s}_{nc}$ be the radical, the nilradical, and a Levi subalgebra, respectively. Then the space of all bounded vectors in $\mathfrak{g}$ for $G/H$ is a compact subalgebra.
Its semi-simple part coincides with the ideal $\mathfrak{c}_{\mathfrak{s}_c}(\mathfrak{r}(\mathfrak{g}))$ of~$\mathfrak{g}$, which is independent of the choice of the Levi subalgebra $\mathfrak{s}$, and its Abelian part $\mathfrak{v}$ is contained in $\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$; this Abelian part coincides with the sum of $\mathfrak{c}_{\mathfrak{c}(\mathfrak{r}(\mathfrak{g}))} (\mathfrak{s}_{nc})$ and all two-dimensional irreducible representations of $\mathrm{ad}(\mathfrak{r}(\mathfrak{g}))$ in $\mathfrak{c}_{\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))} (\mathfrak{s}_{nc})$ corresponding to nonzero imaginary weights, i.e. $\mathbb{R}$-linear functionals $\lambda:\mathfrak{r}\rightarrow\mathfrak{r}/\mathfrak{n} \rightarrow\mathbb{R}\sqrt{-1}$. \end{theorem} Theorem \ref{theorem 3} summarizes Theorem \ref{main-cor-2} and Theorem \ref{main-cor-3}. Note that $\mathfrak{c}_{\mathfrak{s}_{c}}(\mathfrak{r}(\mathfrak{g}))$ is a compact semi-simple summand in the Lie algebra direct sum decomposition of $\mathfrak{g}$, which can be easily determined. For the other summand, the Abelian factor~$\mathfrak{v}$, we propose a theoretical algorithm which explicitly describes all bounded vectors in $\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$. \smallskip Theorem \ref{theorem 3} provides a simple and self-contained proof of the following theorem. \begin{theorem}\label{theorem 4} The space of bounded vectors in $\mathfrak{g}$ for a Riemannian homogeneous space $G/H$ on which the connected Lie group $G$ acts effectively does not depend on the choice of~$H$. \end{theorem} Notice that the arguments in \cite{Ti1964} indicate that the subset of all bounded isometries in $G$ does not depend on the choice of $H$ either. So Theorem \ref{theorem 4} can also be proved by J. Tits' Theorem 1 in \cite{Ti1964}, which implies that all bounded isometries in $G$ are generated by bounded vectors in $\mathfrak{g}$. \smallskip Meanwhile, Theorem \ref{theorem 3} provides an alternative explanation of why, in some special cases, the much stronger constant length condition for Killing vector fields or the Clifford--Wolf condition for translations may be implied by the boundedness condition \cite{MMW,Wo2017}. \smallskip At the end, we remark that all lemmas, theorems and corollaries remain valid when~$M$ is a Finsler manifold. A Finsler metric on a smooth manifold is a natural generalization of a Riemannian metric, which satisfies the properties of smoothness, positivity, homogeneity of degree one and strong convexity, but not the quadratic property in general. See \cite{BCS2000} for its precise definition and more details. The proofs of all the results of this work in the Finsler context only require the following well-known fact in addition: the isometry group of a Finsler manifold is a Lie group~\cite{DH2002} with a compact isotropy subgroup at any point. \smallskip This work is organized as follows. In Section 2, we summarize some basic knowledge on Lie theory and homogeneous geometry which is necessary for later discussions. We define bounded vectors in $\mathfrak{g}$ for a smooth coset space $G/H$ and discuss their basic properties and relation to bounded Killing vector fields. In Section 3, we prove Theorem \ref{theorem 1} and Theorem \ref{theorem 2}. In Section 4, we discuss two applications of Theorem \ref{theorem 2}. One is to prove the Jordan decomposition and spectral properties for bounded Killing vector fields.
The other is to study the Lie algebra of all bounded vectors in $\mathfrak{g}$ for a Riemannian homogeneous space $G/H$, on which $G$ acts effectively. We will provide an explicit description of this compact Lie algebra and completely determine all bounded Killing vector fields for a Riemannian homogeneous space. \newpage \section{Preliminaries in Lie theory and homogeneous geometry} \subsection{Some fundamental facts in Lie theory} \label{subsection-2.1} Let $\mathfrak{g}$ be a real Lie algebra. Its {\it radical} $\mathfrak{r}(\mathfrak{g})$ and {\it nilradical} (or {\it nilpotent radical}) $\mathfrak{n}(\mathfrak{g})$ are the unique largest solvable and nilpotent ideals of $\mathfrak{g}$, respectively. By Corollary 5.4.15 in \cite{JK}, we have $$ [\mathfrak{r}(\mathfrak{g}),\mathfrak{r}(\mathfrak{g})]\subset [\mathfrak{r}(\mathfrak{g}),\mathfrak{g}] \subset\mathfrak{n}(\mathfrak{g})\subset\mathfrak{r}(\mathfrak{g}). $$ By Levi's theorem, we can find a semi-simple subalgebra $\mathfrak{s}\subset\mathfrak{g}$ which is a complement of the radical $\mathfrak{r}(\mathfrak{g})$ in $\mathfrak{g}$. We further decompose $\mathfrak{s}$ as $\mathfrak{s}=\mathfrak{s}_c\oplus\mathfrak{s}_{nc}$, where~$\mathfrak{s}_c$ and $\mathfrak{s}_{nc}$ are the compact and noncompact parts of $\mathfrak{s}$, respectively. We will call \begin{equation}\label{007} \mathfrak{g}=\mathfrak{r}(\mathfrak{g})+\mathfrak{s} \end{equation} a {\it Levi decomposition}, and the semi-simple subalgebra $\mathfrak{s}$ in (\ref{007}) a {\it Levi subalgebra}. By Malcev's Theorem (see Theorem 5.6.13 in \cite{JK}), the Levi subalgebra $\mathfrak{s}$ is unique up to $\mathrm{Ad}\bigl(\exp([\mathfrak{g},\mathfrak{r}(\mathfrak{g})])\bigr)$-actions. If $G$ is a connected Lie group with $\mathrm{Lie}(G)=\mathfrak{g}$, then $\mathfrak{r}(\mathfrak{g})$ and $\mathfrak{n}(\mathfrak{g})$ generate closed solvable and nilpotent normal subgroups of $G$, respectively. A subalgebra $\mathfrak{k}\subset\mathfrak{g}$ is called {\it compactly imbedded} if, after taking the closure, it generates a compact subgroup in the inner automorphism group $\mathrm{Inn}(\mathfrak{g})=G/Z(G)$. A vector $X\in\mathfrak{g}$ is called a {\it compact vector} if $\mathbb{R}X$ is a compactly imbedded subalgebra of $\mathfrak{g}$. Assume that $G$ is a connected Lie group with $\mathrm{Lie}(G)=\mathfrak{g}$, and $H$ the connected subgroup generated by a subalgebra $\mathfrak{h}$. Obviously, if $H$ is compact, then any subalgebra of $\mathfrak{h}$ is compactly imbedded. The converse statement is not true in general. We call a subgroup $H$ of $G$ {\it compactly imbedded} if the closure of $\mathrm{Ad}_\mathfrak{g}(H)\subset \mathrm{Aut}(\mathfrak{g})$ is compact. Any compactly imbedded subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ is contained in a {\it maximal compactly imbedded} subalgebra. A maximal compactly imbedded subalgebra can be presented as the pre-image in $\mathfrak{g}$ of the subalgebra of $\mathfrak{g}/\mathfrak{c}(\mathfrak{g})$ generating a maximal compact subgroup in $G/Z(G)$. As an immediate corollary of the conjugation theorem for maximal compact connected subgroups (see Theorem 14.1.3 in \cite{JK}), the maximal compactly imbedded subalgebra is unique up to $\mathrm{Ad}(G)$-actions. \subsection{Homogeneous metric and reductive decomposition} Let $M$ be a Riemannian homogeneous space on which the connected Lie group $G$ acts effectively and isometrically. The effectiveness implies that $G$ is a subgroup of the isometry group $I(M)$.
When $G$ is a closed subgroup of $I(M)$, the isotropy subgroup $H$ at any point is compact. When $G$ is not closed in~$I(M)$, we still have the following consequence of the discussion in \cite{MM1988}. \begin{lemma}\label{lemma -1} Let $M$ be a Riemannian homogeneous space on which the connected Lie group $G$ acts effectively and isometrically. Then the isotropy subgroup $H$ at any $x\in M$ and its Lie algebra $\mathfrak{h}$ are compactly imbedded. \end{lemma} To be more self-contained, we give a direct proof here. \begin{proof} Let $\overline{G}$ be the closure of $G$ in $I(M)$ and $\overline{H}$ be the isotropy subgroup at $x\in M$ for the $\overline{G}$-action on $M$. Then $\overline{H}$ is compact. On the other hand, the property that the $\mathrm{Ad}(G)$-action preserves $\mathfrak{g}$ passes by continuity to $\overline{G}$, i.e. $\mathfrak{g}$ is an ideal of $\overline{\mathfrak{g}}=\mathrm{Lie}(\overline{G})$. Denote by $\mathrm{Ad}_\mathfrak{g}$ the restriction of the $\mathrm{Ad}(\overline{G})$-action to $\mathfrak{g}$; then the subgroup $\mathrm{Ad}_\mathfrak{g}(H)$ of~$\mathrm{Aut}(\mathfrak{g})$ (which is contained in $\mathrm{Inn}(\mathfrak{g})$ because of the connectedness of $G$) is contained in the compact subgroup $\mathrm{Ad}_\mathfrak{g}(\overline{H})$. From this argument, we also see that both $H$ and $\mathfrak{h}$ are compactly imbedded. \end{proof} \smallskip Now we further assume $M=G/H$ is a Riemannian homogeneous space. The $H$-action on $T_o(G/H)$ at $o=eH$ is called the {\it isotropy action}. A linear direct sum decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$, where $\mathrm{Lie}(H)=\mathfrak{h}$, is called a {\it reductive decomposition} for $G/H$ if it is $\mathrm{Ad}(H)$-invariant. We can identify $T_o(G/H)$ with $\mathfrak{m}$ such that the isotropy action coincides with the $\mathrm{Ad}(H)$-action on $\mathfrak{m}$. Generally speaking, there exist many different reductive decompositions for a Riemannian homogeneous space $G/H$. A canonical one can be constructed by the following lemma, which summarizes Lemma 2 and Remark 1 in \cite{Ni2017}. \begin{lemma}\label{lemma 0} Let $G/H$ be a Riemannian homogeneous space on which $G$ acts effectively. Then we have the following: \begin{enumerate} \item The restriction of the Killing form $B_\mathfrak{g}$ of $\mathfrak{g}$ to $\mathfrak{h}$ is negative definite; \item The $B_\mathfrak{g}$-orthogonal decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ is reductive and $\mathfrak{n}(\mathfrak{g})\subset\mathfrak{m}$. \end{enumerate} \end{lemma} \subsection{Bounded vector for a coset space} For any smooth coset space $G/H$, where $H$ is a closed subgroup of $G$, $\mathrm{Lie}(G)=\mathfrak{g}$ and $\mathrm{Lie}(H)=\mathfrak{h}$, we denote by $\mathrm{pr}_{\mathfrak{g}/\mathfrak{h}}$ the natural linear projection from $\mathfrak{g}$ to $\mathfrak{g}/\mathfrak{h}$. We call a vector $X\in\mathfrak{g}$ a {\it bounded vector} for $G/H$ if \begin{equation}\label{002} f(g)=\|\mathrm{pr}_{\mathfrak{g}/\mathfrak{h}}\bigl(\mathrm{Ad}(g)X\bigr)\|, \quad\forall\, g\in G, \end{equation} is a bounded function, where $\|\cdot\|$ is any norm on $\mathfrak{g}/\mathfrak{h}$. Since $\mathfrak{g}/\mathfrak{h}$ is finite-dimensional, any two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ on it are equivalent in the sense that $$c_1\|u\|_1\leq \|u\|_2\leq c_2\|u\|_1,\quad\forall\, u\in\mathfrak{g}/\mathfrak{h},$$ where $c_1$ and $c_2$ are some positive constants.
So the boundedness of $X\in\mathfrak{g}$ for $G/H$ does not depend on the choice of the norm. When $\|\cdot\|$ is an $\mathrm{Ad}(H)$-invariant quadratic norm, which defines a $G$-invariant Riemannian metric on $G/H$, the function $f(\cdot)$ on $G$ defined in (\ref{002}) is right $H$-invariant, so it descends to $G/H$ and coincides with the length function of the Killing vector field induced by $X$. Summarizing this observation, we have the following lemma. \begin{lemma}\label{lemma 1} If $X\in\mathfrak{g}$ is a bounded vector for $G/H$, then it defines a bounded Killing vector field for any $G$-invariant Riemannian metric on $G/H$. Conversely, if $G/H$ is endowed with a $G$-invariant Riemannian metric and $X\in\mathfrak{g}$ induces a bounded Killing vector field, then $X$ is a bounded vector for $G/H$. \end{lemma} The boundedness condition may be preserved when we change the coset space. From the definition, the following lemma is obvious. \begin{lemma}\label{lemma 2} A vector $X\in\mathfrak{g}$ is bounded for the smooth coset space $G/H$ iff it is bounded for the universal covering $\widetilde{G}/\widetilde{H}$ of $G/H$, where $\widetilde{G}$ is the universal covering group of $G$, and $\widetilde{H}$ is the closed connected subgroup of $\widetilde{G}$ generated by $\mathfrak{h}=\mathrm{Lie}(H)$. \end{lemma} For any chain of subalgebras $\mathfrak{h}\subset\mathfrak{k}\subset\mathfrak{g}$, the natural linear projection $\mathrm{pr}:\mathfrak{g}/\mathfrak{h}\rightarrow \mathfrak{g}/\mathfrak{k}$ is continuous with respect to the standard topologies. So it maps bounded sets to bounded sets, with respect to any norms $\|\cdot\|_1$ and $\|\cdot\|_2$ on $\mathfrak{g}/\mathfrak{h}$ and $\mathfrak{g}/\mathfrak{k}$ respectively. Obviously $\mathrm{pr}\circ\mathrm{pr}_{\mathfrak{g}/\mathfrak{h}}= \mathrm{pr}_{\mathfrak{g}/\mathfrak{k}}$, so $$ \mathrm{pr}\bigl(\mathrm{pr}_{\mathfrak{g}/\mathfrak{h}} (\mathrm{Ad}(G)X)\bigr)=\mathrm{pr}_{\mathfrak{g}/\mathfrak{k}} (\mathrm{Ad}(G)X). $$ By these observations, it is easy to prove the following lemma. \begin{lemma}\label{lemma 3} Assume $K$ is a closed subgroup of $G$ whose Lie algebra $\mathfrak{k}$ satisfies $\mathfrak{h}\subset\mathfrak{k}\subset\mathfrak{g}$. If $X\in\mathfrak{g}$ is bounded for $G/H$, then it is bounded for $G/K$ as well. \end{lemma} To summarize, the boundedness of Lie algebra vectors for a coset space originates from, and is intrinsically related to, the boundedness of Killing vector fields for a homogeneous metric. However, it is an algebraic condition, which can be discussed more generally and does not depend on the choice or existence of homogeneous metrics. \section{Proof of Theorem \ref{theorem 1} and Theorem \ref{theorem 2}} \subsection{A key lemma for proving Theorem \ref{theorem 2}} \begin{Lem}\label{lemma 10} Let $\mathfrak{g}$ be a Lie algebra and $\mathfrak{g}=\mathfrak{r}(\mathfrak{g})+\mathfrak{s}$ be a Levi decomposition. Then we have the following Lie algebra direct sum decomposition for the centralizer $\mathfrak{c}_{\mathfrak{g}}(\mathfrak{n}(\mathfrak{g}))$ of the nilradical $\mathfrak{n}(\mathfrak{g})$ in $\mathfrak{g}$\,{\rm:} \begin{equation}\label{011} \mathfrak{c}_{\mathfrak{g}}({\mathfrak{n}(\mathfrak{g})})= \bigl(\mathfrak{c}_{\mathfrak{g}}({\mathfrak{n}(\mathfrak{g})}) \cap\mathfrak{r}(\mathfrak{g})\bigr)\oplus \bigl(\mathfrak{c}_{\mathfrak{g}} ({\mathfrak{n}(\mathfrak{g})})\cap\mathfrak{s}\bigr) =\mathfrak{c}_{\mathfrak{r}(\mathfrak{g})} (\mathfrak{n}(\mathfrak{g})) \oplus\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g})).
\end{equation} Moreover, we have the following\,{\rm:} \begin{enumerate} \item The two summands $\mathfrak{c}_{\mathfrak{r}(\mathfrak{g})} (\mathfrak{n}(\mathfrak{g})) =\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$ and $\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))= \mathfrak{c}_\mathfrak{s}({\mathfrak{r}(\mathfrak{g})})$ are Abelian and semi-simple ideals of $\mathfrak{g}$, respectively. \item The summand $\mathfrak{c}_\mathfrak{s}(\mathfrak{r}(\mathfrak{g}))$ is contained in the intersection of all Levi subalgebras, so it does not depend on the choice of the Levi subalgebra $\mathfrak{s}$. \end{enumerate} \end{Lem} \begin{proof} First, we prove that (\ref{011}) holds as a linear direct sum decomposition. Suppose conversely that this is not true; then we can find a vector $X\in\mathfrak{g}$ such that $[X,\mathfrak{n}(\mathfrak{g})]=0$ and $[X_s,\mathfrak{n}(\mathfrak{g})]\neq 0$. Denote by $\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})}(u)$, for $u\in\mathfrak{g}$, the restriction of $\mathrm{ad}(u)$ to the ideal $\mathfrak{n}(\mathfrak{g})$. Then $A:=\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})}(X_r) =-\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})}(X_s)$ is a nonzero linear endomorphism in the general linear Lie algebra $\mathfrak{gl}(\mathfrak{n}(\mathfrak{g}))=\mathrm{Lie}(\mathrm{GL}(\mathfrak{n}(\mathfrak{g})))$, where $\mathfrak{n}(\mathfrak{g})$ as well as its subspaces are viewed as real vector spaces. The map $\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})}$ is a Lie algebra homomorphism from $\mathfrak{g}$ to $\mathfrak{gl}(\mathfrak{n}(\mathfrak{g}))$. The vector $X_s$ generates a semi-simple ideal $\mathfrak{s}_1$ of $\mathfrak{s}$, which can be presented as \begin{equation}\label{004} \mathfrak{s}_1=\mathbb{R}X_s+ [\mathfrak{s},X_s]+[\mathfrak{s},[\mathfrak{s},X_s]]+ [\mathfrak{s},[\mathfrak{s},[\mathfrak{s},X_s]]]+\cdots. \end{equation} Meanwhile, $X_r$ generates a sub-representation space $\mathfrak{v}_1$ in $\mathfrak{r}(\mathfrak{g})$ for the $\mathrm{ad}(\mathfrak{s})$-actions, i.e. \begin{equation}\label{005} \mathfrak{v}_1=\mathbb{R}X_r+ [\mathfrak{s},X_r]+[\mathfrak{s},[\mathfrak{s},X_r]]+ [\mathfrak{s},[\mathfrak{s},[\mathfrak{s},X_r]]]+\cdots. \end{equation} Comparing (\ref{004}) and (\ref{005}), we see that $\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})} (\mathfrak{s}_1)$ and $\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})} (\mathfrak{v}_1)$ have the same image in $\mathfrak{gl}(\mathfrak{n}(\mathfrak{g}))$, i.e. $$\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})} (\mathfrak{s}_1)=\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})} (\mathfrak{v}_1) =\mathbb{R}A+ [\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})}(\mathfrak{s}),A]+ [\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})}(\mathfrak{s}), [\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})}(\mathfrak{s}),A]] +\cdots.$$ Denote $\mathfrak{u}_1=\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})} (\mathfrak{s}_1)$ and $\mathfrak{u}_2=\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})} (\mathfrak{r}(\mathfrak{g}))$. We have just shown that $0\neq\mathfrak{u}_1\subset\mathfrak{u}_2$. Since $\mathrm{ad}_{\mathfrak{n}(\mathfrak{g})}:\mathfrak{g}\rightarrow \mathfrak{gl}(\mathfrak{n}(\mathfrak{g}))$ is a Lie algebra homomorphism, $\mathfrak{u}_1$ is semi-simple and $\mathfrak{u}_2$ is solvable. But this is impossible, so (\ref{011}) is a linear direct sum decomposition. \smallskip Further, we prove that $\mathfrak{c}_{\mathfrak{r}(\mathfrak{g})}(\mathfrak{n}(\mathfrak{g})) =\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$ is an Abelian ideal of $\mathfrak{g}$.
The summand $\mathfrak{c}_{\mathfrak{r}(\mathfrak{g})} (\mathfrak{n}(\mathfrak{g}))$ in (\ref{011}) is an ideal of $\mathfrak{g}$ contained in the radical $\mathfrak{r}(\mathfrak{g})$. It is not hard to check that $\mathfrak{c}_{\mathfrak{r}(\mathfrak{g})} (\mathfrak{n}(\mathfrak{g})) +\mathfrak{n}(\mathfrak{g})$ is a nilpotent ideal of $\mathfrak{g}$. By the definition of the nilradical, we must have $\mathfrak{c}_{\mathfrak{r}(\mathfrak{g})} (\mathfrak{n}(\mathfrak{g})) \subset\mathfrak{n}(\mathfrak{g})$, i.e. $\mathfrak{c}_{\mathfrak{r}(\mathfrak{g})} (\mathfrak{n}(\mathfrak{g}))=\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$. So it is an Abelian ideal. Finally, we prove that $\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))= \mathfrak{c}_\mathfrak{s}(\mathfrak{r}(\mathfrak{g}))$ is a semi-simple ideal of $\mathfrak{g}$ contained in the intersection of all Levi subalgebras. Obviously $\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))$ is an ideal of $\mathfrak{s}$. It is a semi-simple Lie algebra itself, so we have $[\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g})) ,\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))] =\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))$. It commutes with $\mathfrak{r}(\mathfrak{g})$ because \begin{eqnarray*} [\mathfrak{r}(\mathfrak{g}),\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))] &\subset&[\mathfrak{r}(\mathfrak{g}),[\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g})), \mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))]] \subset[[\mathfrak{r}(\mathfrak{g}),\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))], \mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))]\\ &\subset& [\mathfrak{n}(\mathfrak{g}),\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))]=0, \end{eqnarray*} where the second inclusion follows from the Jacobi identity. So we get $\mathfrak{c}_\mathfrak{s}(\mathfrak{n}(\mathfrak{g}))= \mathfrak{c}_\mathfrak{s}(\mathfrak{r}(\mathfrak{g}))$. It is an ideal of $\mathfrak{g}$ because $ [\mathfrak{g},\mathfrak{c}_\mathfrak{s}(\mathfrak{r}(\mathfrak{g}))] =[\mathfrak{s},\mathfrak{c}_\mathfrak{s} (\mathfrak{r}(\mathfrak{g}))]\subset\mathfrak{c}_\mathfrak{s} (\mathfrak{r}(\mathfrak{g}))$. Therefore, we have $\mathfrak{c}_\mathfrak{s}(\mathfrak{r}(\mathfrak{g}))= \mathrm{Ad}(g)\mathfrak{c}_\mathfrak{s}(\mathfrak{r}(\mathfrak{g})) \subset\mathrm{Ad}(g)\mathfrak{s}$ for all $g\in G$. So by Malcev's Theorem, $\mathfrak{c}_\mathfrak{s}(\mathfrak{r}(\mathfrak{g}))$ is contained in all Levi subalgebras, and is thus independent of the choice of the Levi subalgebra. This proves all the statements and finishes the proof of the lemma. \end{proof} \smallskip By similar arguments as above, we can also establish the Lie algebra direct sum $\mathfrak{c}_\mathfrak{s}(\mathfrak{r}(\mathfrak{g}))= \mathfrak{c}_{\mathfrak{s}_{c}}(\mathfrak{r}(\mathfrak{g}))\oplus \mathfrak{c}_{\mathfrak{s}_{nc}}(\mathfrak{r}(\mathfrak{g}))$, in which each summand is a semi-simple ideal of $\mathfrak{g}$. So we get the following corollary. \begin{corollary}\label{lemma 5} Keeping all the relevant notation and assumptions, we have the following Lie algebra direct sum decomposition, $$\mathfrak{c}_\mathfrak{g}(\mathfrak{n}(\mathfrak{g})) =\mathfrak{c}_{\mathfrak{s}_{c}}(\mathfrak{r}(\mathfrak{g}))\oplus \mathfrak{c}_{\mathfrak{s}_{nc}}(\mathfrak{r}(\mathfrak{g})) \oplus\mathfrak{c}(\mathfrak{n}(\mathfrak{g})),$$ in which each summand is an ideal of $\mathfrak{g}$. \end{corollary} \subsection{Proof of Theorem \ref{theorem 1}} Fix any $x\in M$, and denote by $H$ the isotropy subgroup of $G$ at $x$.
The smooth coset space $G/H$ can be identified with an immersed submanifold in $M$. The submanifold metric on $G/H$ is $G$-invariant. The restriction of the bounded Killing vector field induced by $X\in\mathfrak{g}$ to $G\cdot x=G/H$ is still a bounded Killing vector field, induced by the same $X$. By Lemma \ref{lemma 1}, $X\in\mathfrak{g}$ is a bounded vector for $G/H$. By Lemma \ref{lemma -1}, $\mathfrak{h}$ is compactly imbedded. We can find a maximal compactly imbedded subalgebra $\mathfrak{k}$ of $\mathfrak{g}$ such that $\mathfrak{h}\subset\mathfrak{k}\subset\mathfrak{g}$. Denote by $\widetilde{G}$ the universal cover of $G$, and by $\widetilde{H}$ and~$\widetilde{K}$ the connected subgroups of $\widetilde{G}$ generated by $\mathfrak{h}$ and $\mathfrak{k}$, respectively. The subgroup~$\widetilde{K}$ is closed because it is the identity component of the pre-image in $\widetilde{G}$ of a maximal compact subgroup of $G/Z(G)$. By Lemma \ref{lemma 2} and Lemma \ref{lemma 3}, $X\in\mathfrak{g}$ is also bounded for $\widetilde{G}/\widetilde{H}$ and $\widetilde{G}/\widetilde{K}$. Since $G$ is semi-simple, we have $\widetilde{G}=\widetilde{G}_c\times\widetilde{G}_{nc}$, where $\widetilde{G}_c$ and~$\widetilde{G}_{nc}$ are the compact and non-compact parts of $\widetilde{G}$ respectively, $\widetilde{K}=\widetilde{G}_c\times\widetilde{K}_{nc}$, and $X=X_c+X_{nc}$ with $X_c\in\mathfrak{g}_c=\mathrm{Lie}(\widetilde{G}_c)$ and $X_{nc}\in\mathfrak{g}_{nc}=\mathrm{Lie}(\widetilde{G}_{nc})$ accordingly. The coset space $\widetilde{G}/\widetilde{K}=\widetilde{G}_{nc}/\widetilde{K}_{nc}$ is a symmetric space of non-compact type. The vector $X_{nc}\in\mathfrak{g}_{nc}$ defines the same Killing vector field as $X$ on $\widetilde{G}/\widetilde{K}$, so it is bounded as well. Since the Riemannian symmetric metric on $\widetilde{G}/\widetilde{K}$ has negative Ricci curvature and non-positive sectional curvature, the bounded vector $X_{nc}$ must vanish \cite{Wo1964}. So $X=X_c$ is contained in the compact ideal $\mathfrak{g}_c$ in $\mathfrak{g}$. This completes the proof of Theorem \ref{theorem 1}. \subsection{Proof of Theorem \ref{theorem 2}} The key steps are summarized in the following two claims. \smallskip {\bf Claim 1:} $X_s$ is contained in a compact ideal of $\mathfrak{s}$. The proof of Claim 1 applies a method similar to that of Theorem \ref{theorem 1}. By an argument similar to the one in the proof of Theorem \ref{theorem 1}, we can restrict our discussion to any orbit $G\cdot x=G/H$ in $M$, where the isotropy subgroup $H$ has a compactly imbedded Lie algebra. The vector $X$ indicated in Theorem \ref{theorem 2} is bounded for $G/H$. The radical $\mathfrak{r}(\mathfrak{g})$ generates a closed normal subgroup $R$ of ${G}$, and its product $HR$ with the compact subgroup ${H}$ is also a closed subgroup. By Lemma \ref{lemma 3}, $X\in\mathfrak{g}$ is bounded for $G/HR$. We can identify $G/HR$ as the orbit space of the left $R$-action on $G/H$. So $G/HR$ admits a $G$-invariant metric induced by submersion. On the other hand, the coset space $G/HR$ can be identified as $S/H_S=(G/R)/(HR/R)$, where the Lie algebra of $S=G/R$ can be identified with $\mathfrak{s}$ by Levi's Theorem, and $\mathrm{Lie}(H_S)$ is a compactly imbedded subalgebra because it is the image of the compactly imbedded $\mathfrak{h}$ in~$\mathfrak{g}/\mathfrak{r}(\mathfrak{g})$. With this identification, $X$ defines the same Killing vector field as $X_s$ on~$S/H_S$. By Lemma \ref{lemma 1}, $X_s$ is bounded for $S/H_S$.
Now we have the semi-simplicity of $S$ and the compactly imbedded property of $\mathrm{Lie}(H_S)$, so we can apply an argument similar to that of Theorem \ref{theorem 1} to prove that $X_s$ is contained in a compact ideal of $\mathfrak{s}$. This completes the proof of Claim 1. \smallskip {\bf Claim 2:} $X$ commutes with the nilradical $\mathfrak{n}(\mathfrak{g})$. To prove this claim, we still restrict our discussion to a single $G$-orbit. But we need to be careful, because effectiveness is required in the later discussion. The following lemma guarantees that suitable $G$-orbits with effective $G$-actions can be found. \begin{lemma}\label{lemma 4} Let $M$ be a connected Riemannian homogeneous space on which a connected Lie group $G$ acts effectively. Then there exists $x\in M$, such that $G$ acts effectively on $G\cdot x$. \end{lemma} \begin{proof} Denote by $\overline{G}$ the closure of $G$ in $I(M)$. Then the $\overline{G}$-action on $M$ is proper (see Proposition 3.62 in \cite{AB2015}). By the Principal Orbit Theorem (see Theorem 3.82 in \cite{AB2015}), the principal orbit type for the $\overline{G}$-action is unique up to conjugation, and the union $\mathcal{U}$ of all principal orbits is open and dense in $M$. Let $G\cdot x$ be any $G$-orbit in $\mathcal{U}$, and assume $g\in G$ acts trivially on $G\cdot x$. Because $G\cdot x$ is dense in $\overline{G}\cdot x$, $g$ acts trivially on $\overline{G}\cdot x$ as well. Now we consider any other orbit $\overline{G}\cdot y$ in $\mathcal{U}$. The point $y$ can be suitably chosen such that $x$ and $y$ have the same isotropy subgroups $\overline{G}_x=\overline{G}_y$ in $\overline{G}$. Then their isotropy subgroups in $G$ are the same because $G_x=\overline{G}_x\cap G=\overline{G}_y\cap G=G_y$. The $g$-action on $G\cdot y$ is trivial, and by continuity, that on $\overline{G}\cdot y$ is trivial as well. This argument proves that $g$ acts trivially on the dense open subset $\mathcal{U}$ in $M$, so it acts trivially on $M$. Due to the effectiveness of the $G$-action, we must have $g=e\in G$. To summarize, the $G$-action on $G\cdot x\subset\mathcal{U}$ is effective, which completes the proof of this lemma. \end{proof} \smallskip Take the orbit $G\cdot x=G/H$ indicated in Lemma \ref{lemma 4}, endowed with the invariant submanifold metric. Since $X\in\mathfrak{g}$ defines a bounded Killing vector field on the whole manifold, it also defines a bounded Killing vector field when restricted to $G\cdot x$. So by Lemma \ref{lemma 1}, $X$ is a bounded vector for $G/H$. By Lemma \ref{lemma 0}, we have the $B_\mathfrak{g}$-orthogonal reductive decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ with $\mathfrak{n}(\mathfrak{g})\subset\mathfrak{m}$. Denote by $X_\mathfrak{m}$ the $\mathfrak{m}$-component of $X$. For any $Y\in\mathfrak{n}(\mathfrak{g})$, \begin{equation}\label{001} \mathrm{pr}_\mathfrak{m}(\mathrm{Ad}(\exp(tY))X) =X_\mathfrak{m}+t\,[Y,X]+\frac{t^2}{2!}\,[Y,[Y,X]]+\cdots, \end{equation} in which all terms on the right-hand side except the first one are contained in $\mathfrak{n}(\mathfrak{g})$. Since $\mathfrak{n}(\mathfrak{g})$ is nilpotent, the right-hand side of (\ref{001}) is in fact a vector-valued polynomial in~$t$. If this polynomial has positive degree, we get $$\lim_{t\rightarrow\infty} \|\mathrm{pr}_\mathfrak{m}\bigl(\mathrm{Ad}(\exp(tY))X\bigr)\|=+\infty$$ for any norm $\|\cdot\|$ on $\mathfrak{m}$. This contradicts the boundedness of $X$ for $G/H$. So we get $[X,Y]=0$ for any $Y\in\mathfrak{n}(\mathfrak{g})$, which proves Claim 2.
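\smallskip To illustrate the mechanism behind Claim 2 in the simplest possible setting (this example is ours, added for illustration only, and is not needed for the proof), consider the three-dimensional Heisenberg algebra $\mathfrak{g}=\mathrm{span}\{e_1,e_2,e_3\}$ with the only nonzero bracket $[e_1,e_2]=e_3$, so that $\mathfrak{g}=\mathfrak{n}(\mathfrak{g})$. Taking $Y=e_1$ and $X=e_2$, the series in (\ref{001}) truncates after the linear term, since $[e_1,e_3]=0$: $$\mathrm{Ad}(\exp(tY))X=X+t\,[Y,X]=e_2+t\,e_3.$$ Its norm grows linearly in $t$ for any norm, so $e_2$ cannot be a bounded vector for a homogeneous space of the Heisenberg group; and indeed $[X,Y]=-e_3\neq 0$.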
\smallskip Finally, we finish the proof of Theorem \ref{theorem 2}. Claim 2 indicates that $X\in\mathfrak{c}_\mathfrak{g}(\mathfrak{n}(\mathfrak{g}))$. By Lemma \ref{lemma 10} or Corollary \ref{lemma 5}, we have $X_r\in\mathfrak{c}_{\mathfrak{r}(\mathfrak{g})}(\mathfrak{n}(\mathfrak{g})) =\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$ and $X_s\in \mathfrak{c}_{\mathfrak{s}}(\mathfrak{n}(\mathfrak{g}))=\mathfrak{c}_{\mathfrak{s}_c} (\mathfrak{r}(\mathfrak{g}))\oplus\mathfrak{c}_{\mathfrak{s}_{nc}} (\mathfrak{r}(\mathfrak{g}))$. Claim 1 indicates that $X_s$ is contained in the compact semi-simple ideal $\mathfrak{c}_{\mathfrak{s}_c} (\mathfrak{r}(\mathfrak{g}))$ of $\mathfrak{g}$. This finishes the proof of Theorem \ref{theorem 2}. \section{Applications of Theorem \ref{theorem 2}} \subsection{Jordan decomposition and spectral property for bounded Killing vector fields} Theorem \ref{theorem 2} and Lemma \ref{lemma 10} provide the following obvious observations for $X=X_r+X_s\in\mathfrak{g}$ which defines a bounded Killing vector field: \begin{enumerate} \item The linear endomorphism $\mathrm{ad}(X_s)\in\mathfrak{gl}(\mathfrak{g})$ is semi-simple with only imaginary eigenvalues; \item The linear endomorphism $\mathrm{ad}(X_r)\in\mathfrak{gl}(\mathfrak{g})$ is nilpotent, i.e. it has only zero eigenvalues; \item These two endomorphisms commute because $[X_r,X_s]=0$. \item By a suitable conjugation (after complexification), we can present $\mathrm{ad}(X)$, $\mathrm{ad}({X_r})$ and $\mathrm{ad}({X_s})$ as upper triangular, strictly upper triangular and diagonal matrices respectively. So $\mathrm{ad}(X)\in\mathfrak{gl}(\mathfrak{g})$ has the same eigenvalues (counting multiplicities) as $\mathrm{ad}(X_s)$. \item The centralizer $\mathfrak{c}_{\mathfrak{s}_c}(\mathfrak{r}(\mathfrak{g}))$ containing $X_s$ is a compact semi-simple ideal of $\mathfrak{g}$ contained in the intersection of all Levi subalgebras. \end{enumerate} The observations (1)--(3) imply that $\mathrm{ad}(X)=\mathrm{ad}(X_s) +\mathrm{ad}(X_r)$ is a Jordan--Chevalley decomposition, and hence $X=X_s+X_r$ is an abstract Jordan decomposition. See 4.2 and 5.4 in \cite{Hu1972} for a comprehensive discussion of these notions. The observation (4) explains why $\mathrm{ad}(X)$ has only imaginary eigenvalues, which solves our spectral problem for bounded Killing vector fields. Notice that the decomposition $\mathrm{ad}(X)=\mathrm{ad}(X_r)+\mathrm{ad}(X_s)$ is unique by the uniqueness of the Jordan--Chevalley decomposition, while the abstract Jordan decomposition may not be unique, because of the center $\mathfrak{c}(\mathfrak{g})$. However, by the observation (5), the decomposition $X=X_r+X_s$ is unique in the sense that it does not depend on the choice of the Levi subalgebra. The above observations and discussions can be summarized in the following theorem. \begin{theorem}\label{main-cor} Let $M$ be a connected Riemannian manifold on which the connected Lie group $G$ acts effectively and isometrically. Assume that $X\in\mathfrak{g}$ defines a bounded Killing vector field. Let $X$ be decomposed as $X=X_r+X_s$ according to any Levi decomposition $\mathfrak{g}=\mathfrak{r}(\mathfrak{g})+\mathfrak{s}$, then we have the following: \begin{enumerate} \item The decomposition $\mathrm{ad}(X)=\mathrm{ad}(X_r)+\mathrm{ad}(X_s)$ is the unique Jordan--Chevalley decomposition for $\mathrm{ad}(X)$ in $\mathfrak{gl}(\mathfrak{g})$; \item The decomposition $X=X_r+X_s$ is the abstract Jordan decomposition which is unique in the sense that $X_s$ is contained in all Levi subalgebras, i.e.
this decomposition does not depend on the choice of the Levi subalgebra; \item The eigenvalues of $\mathrm{ad}(X)$ coincide with those of $\mathrm{ad}(X_s)$, counting multiplicities. \end{enumerate} \end{theorem} \newpage \subsection{Bounded Killing vectors on a connected Riemannian homogeneous space} In this section, we will always assume that $M=G/H$ is a Riemannian homogeneous space on which the connected Lie group $G$ acts effectively. Applying Theorem \ref{theorem 2} and some arguments from its proof, we can completely determine all the bounded vectors for $G/H$, as follows. Let $\mathfrak{g}=\mathfrak{r}(\mathfrak{g})+\mathfrak{s}$ be a Levi decomposition, and $\mathfrak{s}=\mathfrak{s}_{c}\oplus \mathfrak{s}_{nc}$ be a Lie algebra direct sum decomposition. By Lemma \ref{lemma 0}, we have a reductive decomposition $\mathfrak{g}=\mathfrak{h}+\mathfrak{m}$ such that the nilradical $\mathfrak{n}(\mathfrak{g})$ is contained in $\mathfrak{m}$. We have mentioned that $\mathfrak{c}_{\mathfrak{s}_c} (\mathfrak{r}(\mathfrak{g}))$ is an ideal of $\mathfrak{g}$ contained in $\mathfrak{s}_c$. Denote by $\mathfrak{s}'_c$ the complementary ideal of $\mathfrak{s}_c$ such that $\mathfrak{s}_c=\mathfrak{c}_{\mathfrak{s}_c} (\mathfrak{r}(\mathfrak{g}))\oplus\mathfrak{s}'_c$. Then we have a Lie algebra direct sum decomposition \begin{equation}\label{008} \mathfrak{g}=\mathfrak{c}_{\mathfrak{s}_c} (\mathfrak{r}(\mathfrak{g}))\oplus (\mathfrak{s}'_c+\mathfrak{s}_{nc}+\mathfrak{r}(\mathfrak{g})). \end{equation} By this observation, we find the following lemma. \begin{lemma}\label{lemma 6} Keeping all assumptions and notation of this section, any vector in~$\mathfrak{c}_{\mathfrak{s}_c} (\mathfrak{r}(\mathfrak{g}))$ is bounded for $G/H$. \end{lemma} \begin{proof} The ideal $\mathfrak{c}_{\mathfrak{s}_c}(\mathfrak{r}(\mathfrak{g}))$ generates a compact semi-simple subgroup in $G$. So for any $X\in\mathfrak{c}_{\mathfrak{s}_c}(\mathfrak{r}(\mathfrak{g}))$, the orbit $$ \mathrm{Ad}(G)X=\mathrm{Ad}\bigl(\exp\mathfrak{c}_{\mathfrak{s}_c} (\mathfrak{r}(\mathfrak{g}))\bigr)X $$ (the second summand in (\ref{008}) acts trivially on $X$, since the two summands of (\ref{008}) commute) is a compact set, whose projection in $\mathfrak{g}/\mathfrak{h}$ is obviously bounded with respect to any norm. So any vector $X\in\mathfrak{c}_{\mathfrak{s}_c} (\mathfrak{r}(\mathfrak{g}))$ is bounded for $G/H$, which proves this lemma. \end{proof} \smallskip Obviously, linear combinations of bounded vectors for $G/H$ are still bounded vectors for $G/H$, i.e. the set of all bounded vectors for $G/H$ is a real linear subspace of $\mathfrak{g}$. It is preserved by the $\mathrm{Ad}(G)$-action. So it is an ideal of $\mathfrak{g}$. Applying Theorem \ref{theorem 2} and Lemma \ref{lemma 6}, we get the following immediate consequence. \begin{theorem}\label{main-cor-2} Assume $G/H$ is a Riemannian homogeneous space on which the connected Lie group $G$ acts effectively. Then the space of all bounded vectors for $G/H$ is a compact ideal of $\mathfrak{g}$. Its semi-simple part coincides with $\mathfrak{c}_{\mathfrak{s}_c}(\mathfrak{r}(\mathfrak{g}))$. Its Abelian part~$\mathfrak{v}$ is contained in $\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$. \end{theorem} Before we continue to determine all the bounded vectors, we make several remarks. For some Riemannian homogeneous spaces, bounded Killing vector fields can only be found from $\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$. For example, \begin{corollary} Let $G/H$ be a Riemannian homogeneous space which is diffeomorphic to a Euclidean space and on which the connected Lie group $G$ acts effectively.
Assume that $X\in\mathfrak{g}$ defines a bounded Killing vector field, then $X\in\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$. \end{corollary} \begin{proof} Since $G/H$ is diffeomorphic to a Euclidean space, the subgroup $H$ is a maximal compact subgroup of $G$. Assume, to the contrary, that $X$ is not contained in $\mathfrak{c} (\mathfrak{n}(\mathfrak{g}))$; then by Theorem \ref{theorem 2} or Theorem \ref{main-cor-2}, there exists a non-trivial compact semi-simple normal subgroup $H'$ of $G$. We can get $H'\subset H$ by the conjugation theorem for maximal compact subgroups (see Theorem 14.1.3 in \cite{JK}). This contradicts the effectiveness of the $G$-action. \end{proof} When $G/H$ is a geodesic orbit space (which means that every geodesic is an orbit of some one-parameter isometry group from $G$), the second author has proved that any vector in $\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$ defines a Killing vector field of constant length (see Theorem 1 in \cite{Ni2013} or Theorem 5 in \cite{Ni2019}). By Theorem \ref{main-cor-2}, this implies an equivalence between the boundedness and the constant length condition for Killing vector fields $X\in\mathfrak{n}(\mathfrak{g})$ on a~geodesic orbit space. A similar phenomenon can also be seen from Corollary 3.4 in \cite{Wo2017}, for exponential solvable Lie groups endowed with left-invariant metrics. \smallskip In a style similar to that used for defining the restrictive Clifford--Wolf homogeneity \cite{BN2009} and the $\delta$-homogeneity (which is equivalent to the notion of generalized normal homogeneity) \cite{BN2008,BN2014}, we can use bounded Killing vector fields to define the following condition for Riemannian homogeneous spaces. \begin{definition} Let $G/H$ be a Riemannian homogeneous space on which the connected Lie group $G$ acts effectively. Then it satisfies Condition {\rm(BH)} if for any $x\in G/H$ and any $v\in T_x(G/H)$, there exists a bounded vector $X\in\mathfrak{g}$ such that $X(x)=v$. \end{definition} Then Theorem \ref{main-cor-2} provides the following criterion for Condition (BH). \begin{corollary} Let $G/H$ be a Riemannian homogeneous space on which the connected Lie group $G$ acts effectively. Then it satisfies Condition {\rm(BH)} iff there exists a connected subgroup $K$ of $G$ such that its Lie algebra is compact and the $K$-action on $G/H$ is transitive. \end{corollary} \begin{proof} If $G/H$ satisfies Condition (BH), then the space of all bounded vectors for $G/H$ generates a connected quasi-compact subgroup $K$ of $G$ which acts transitively on $G/H$. Conversely, if such a quasi-compact subgroup exists, all vectors in $\mathrm{Lie}(K)$ are bounded for $G/H$. Condition (BH) is then satisfied because the exponential map from $\mathrm{Lie}(K)$ to $K$ is surjective. This completes the proof of the corollary. \end{proof} \smallskip To completely determine all bounded vectors for $G/H$, we just need to determine the subspace $\mathfrak{v}$ of all bounded vectors $X\in\mathfrak{c}(\mathfrak{n})$ for $G/H$. Obviously the $\mathrm{Ad}(G)$-action preserves $\mathfrak{v}$, which is contained in the summand $\mathfrak{m}$ of the reductive decomposition. The condition that $X\in\mathfrak{v}$, i.e. that $X\in\mathfrak{c}(\mathfrak{n})$ is bounded for $G/H$, is equivalent to the condition that $\mathrm{Ad}(G)X$ is a bounded subset of $\mathfrak{c}(\mathfrak{n})$ with respect to any norm.
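Before setting up the general machinery, a simple example (ours, added for illustration only) may be helpful. Let $\mathfrak{g}=\mathfrak{e}(2)=\mathrm{span}\{Z,e_1,e_2\}$, with the nonzero brackets $[Z,e_1]=e_2$ and $[Z,e_2]=-e_1$, be the Lie algebra of the Euclidean motion group $G=\mathrm{E}(2)$ acting effectively on the plane $G/H=\mathbb{R}^2$, $H=\mathrm{SO}(2)$. Here $\mathfrak{r}(\mathfrak{g})=\mathfrak{g}$ and $\mathfrak{n}(\mathfrak{g})=\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))=\mathrm{span}\{e_1,e_2\}$, and $$\mathrm{Ad}(\exp(tZ))e_1=\cos(t)\,e_1+\sin(t)\,e_2,$$ so every $\mathrm{Ad}(G)$-orbit in $\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$ is bounded, i.e. $\mathfrak{v}=\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))$; indeed, the translation Killing vector fields on the Euclidean plane have constant length. This is the simplest instance of the two-dimensional representations with imaginary weights that appear in Theorem \ref{main-cor-3} below, with $\mathfrak{a}=\mathfrak{g}/\mathfrak{n}(\mathfrak{g})=\mathrm{span}\{Z\}$ acting on $\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))\otimes\mathbb{C}$ with weights $\pm\sqrt{-1}$.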
The restriction of the $\mathrm{Ad}(G)$-action defines a Lie group homomorphism $\mathrm{Ad}_\mathfrak{v}$ from $G/N$ to the general linear group $\mathrm{GL}(\mathfrak{v})$, where $N$ is the closed connected normal subgroup generated by $\mathfrak{n}(\mathfrak{g})$. The tangent map of $\mathrm{Ad}_\mathfrak{v}$ at $e$ is the Lie algebra homomorphism $\mathrm{ad}_\mathfrak{v}$, defined by restricting the $\mathrm{ad}$-action to~$\mathfrak{v}$. The following key lemma helps us determine the subspace $\mathfrak{v}$. \begin{lemma}\label{lemma 7} Let $G/H$ be a Riemannian homogeneous space on which the connected Lie group $G$ acts effectively. Keep all relevant assumptions and notation. Then the image $\mathrm{Ad}_{\mathfrak{v}}(G)$ has a compact closure in $\mathrm{GL}(\mathfrak{v})$. \end{lemma} \begin{proof} Fix a quadratic norm $\|\cdot\|=\langle\cdot,\cdot\rangle^{1/2}$ on $\mathfrak{v}$ and an orthonormal basis $\{v_1,\ldots,v_k\}$ for $\mathfrak{v}$. By the boundedness of each $v_i$ for $G/H$, and the special choice of the reductive decomposition (note that $\mathrm{Ad}(g)v_i\in\mathfrak{c}(\mathfrak{n})\subset\mathfrak{m}$), we can find a positive $c_i>0$, such that $$ \|\mathrm{Ad}(g)v_i\|<c_i,\quad\forall\, g\in G,\quad\forall\, i=1,\ldots,k. $$ For any $v\in\mathfrak{v}$ with $\|v\|=1$, we can present it as $v=\sum_{i=1}^k a_iv_i$ with $\sum_{i=1}^k {a_i}^2=1$; then for any $g\in G$ we have $$ \|\mathrm{Ad}(g)v\|\leq\sum_{i=1}^k|a_i|\cdot \|\mathrm{Ad}(g)v_i\|\leq C=c_1+\cdots+c_k. $$ So we get \begin{equation}\label{009} C^{-1}\|v\|\leq \|\mathrm{Ad}(g)v\|\leq C \|v \|, \quad\forall\, g\in G,\quad\forall\, v\in\mathfrak{v}, \end{equation} where the left inequality follows by applying the right one to $g^{-1}$ and the vector $\mathrm{Ad}(g)v$. For any sequence $\mathrm{Ad}_\mathfrak{v}(g_i)$ with $g_i\in G$, we can find a subsequence $\mathrm{Ad}_\mathfrak{v}(g'_i)$ such that \linebreak $\lim\limits_{i\rightarrow\infty}\mathrm{Ad}_\mathfrak{v}(g'_i)v_j$ exists for each $j$, so $\mathrm{Ad}_\mathfrak{v}(g'_i)$ converges to an $\mathbb{R}$-linear endomorphism $A$. By continuity, the estimates (\ref{009}) for each $\mathrm{Ad}_\mathfrak{v}(g_i)$ are inherited by $A$, from which we see that $A\in \mathrm{GL}(\mathfrak{v})$. So $\mathrm{Ad}_\mathfrak{v}(G)$ has a compact closure in $\mathrm{GL}(\mathfrak{v})$, which proves this lemma. \end{proof} By Lemma \ref{lemma 7}, $\mathrm{ad}_\mathfrak{v}$ maps the reductive Lie algebra $\mathrm{Lie}(G/N)=\mathfrak{s}_c\oplus\mathfrak{s}_{nc} \oplus\mathfrak{a}$, where $\mathfrak{a}= \mathfrak{r}/\mathfrak{n}$, to a compact subalgebra. The summand $\mathfrak{s}_{nc}$ must be mapped to 0, from which we get $[\mathfrak{s}_{nc},\mathfrak{v}]=0$. Then it is easy to see that $$ \mathfrak{v}\subset \mathfrak{c}_{\mathfrak{c}(\mathfrak{n})} (\mathfrak{s}_{nc})\quad\mbox{and}\quad [\mathfrak{r}(\mathfrak{g}),\mathfrak{c}_{\mathfrak{c}(\mathfrak{n})} (\mathfrak{s}_{nc})] \subset\mathfrak{c}_{\mathfrak{c} (\mathfrak{n})}(\mathfrak{s}_{nc}). $$ Moreover, the Abelian summand $\mathfrak{a}$ in $\mathrm{Lie}(G/N)$ is mapped to a space of semi-simple matrices with imaginary eigenvalues, so $\mathfrak{v}$ can be decomposed as a sum $$\mathfrak{v}=\mathfrak{v}_1\oplus\cdots\oplus\mathfrak{v}_k$$ of irreducible representations of $\mathfrak{a}$, each of which is either one-dimensional or two-dimensional. Any one-dimensional $\mathfrak{v}_i$ must be a trivial representation of $\mathfrak{a}$, and hence $\mathfrak{v}_i\subset \mathfrak{c}_{\mathfrak{c}(\mathfrak{r}(\mathfrak{g}))} (\mathfrak{s}_{nc})$.
Any two-dimensional $\mathfrak{v}_i$ corresponds to a pair of imaginary weights in $\mathfrak{a}^*\otimes\mathbb{C}$, i.e. $\mathbb{R}$-linear functionals $\pm\lambda:\mathfrak{a}\rightarrow\mathbb{R}\sqrt{-1}$, such that the eigenvalues of $\mathrm{ad}_\mathfrak{v}(u):\mathfrak{v}_i\rightarrow\mathfrak{v}_i$ are $\pm\lambda(u)$. Conversely, we consider the sum $\mathfrak{v}'$ of the centralizer $\mathfrak{c}_{\mathfrak{c}(\mathfrak{r}(\mathfrak{g}))} (\mathfrak{s}_{nc})$ and all two-dimensional irreducible $\mathrm{ad}(\mathfrak{r}(\mathfrak{g}))$-representations in $ \mathfrak{c}_{\mathfrak{c}(\mathfrak{n}(\mathfrak{g}))} (\mathfrak{s}_{nc})$ corresponding to imaginary weights of $\mathfrak{a}=\mathfrak{r} (\mathfrak{g})/\mathfrak{n}(\mathfrak{g})$. Then $\mathfrak{v}'$ is $\mathrm{Ad}(G)$-invariant, and $\mathfrak{v}\subset\mathfrak{v}'$. Denote by $\mathrm{Ad}_{\mathfrak{v}'}$ the restriction of the $\mathrm{Ad}$-action to $\mathfrak{v}'$. The subspace $\mathfrak{v}'$ satisfies descriptions similar to those given above for $\mathfrak{v}$. The image group $\mathrm{Ad}_{\mathfrak{v}'}(R/N)$ is contained in a torus which commutes with the image $\mathrm{Ad}_{\mathfrak{v}'}(S_c)\subset\mathrm{GL}(\mathfrak{v}')$ of the compact subgroup $S_c=\exp\mathfrak{s}_c$, so $$ \mathrm{Ad}_{\mathfrak{v}'}(G)= \mathrm{Ad}_{\mathfrak{v}'}(S_c)\cdot\mathrm{Ad}_{\mathfrak{v}'} (R/N)\subset\mathrm{Ad}_{\mathfrak{v}'}(S_c)\cdot \overline{\mathrm{Ad}_{\mathfrak{v}'} (R/N)} $$ has a compact closure in~$\mathrm{GL}(\mathfrak{v}')$. This implies that all vectors $X\in\mathfrak{v}'$ are bounded for $G/H$, i.e. $\mathfrak{v}=\mathfrak{v}'$. Summarizing the above argument, we get the following theorem, which determines all bounded vectors $X\in\mathfrak{c} (\mathfrak{n}(\mathfrak{g}))$ for a Riemannian homogeneous space $G/H$. \begin{theorem}\label{main-cor-3} Let $G/H$ be a Riemannian homogeneous space on which the connected Lie group $G$ acts effectively. Keep all relevant assumptions and notation. Then the space $\mathfrak{v}$ of all bounded vectors $X\in\mathfrak{c}(\mathfrak{n})$ for $G/H$ is the sum of $\mathfrak{c}_{\mathfrak{c}(\mathfrak{r}(\mathfrak{g}))} (\mathfrak{s}_{nc})$ and all two-dimensional irreducible representations in $\mathfrak{c}_{\mathfrak{c}(\mathfrak{n})}(\mathfrak{s}_{nc})$ for the $\mathrm{ad}(\mathfrak{r}(\mathfrak{g}))$-actions which correspond to nonzero imaginary weights in $\mathfrak{a}^*\otimes\mathbb{C}$, i.e. nonzero $\mathbb{R}$-linear functionals $\lambda:\mathfrak{r}(\mathfrak{g})\rightarrow \mathfrak{r}(\mathfrak{g})/ \mathfrak{n}(\mathfrak{g})\rightarrow\mathbb{R}\sqrt{-1}$. \end{theorem} A theoretical algorithm presenting all vectors in $\mathfrak{v}$ can be given as follows.
Let us consider the complex representation for the $\mathrm{ad}(\mathfrak{r}(\mathfrak{g}))$-actions on $\mathfrak{c}_{\mathfrak{c}(\mathfrak{n})}(\mathfrak{s}_{nc})\otimes\mathbb{C}$; then we can find distinct real weights $\lambda_i\in\mathfrak{a}^* \subset\mathfrak{r}^*$, $1\leq i\leq n_1$, and non-real complex weights $a_j\pm b_j\sqrt{-1}\in\mathfrak{a}^*\otimes\mathbb{C} \subset\mathfrak{r}^*\otimes\mathbb{C}$, with $1\leq j\leq n_2$ and $b_j>0$, such that we have the direct sum decomposition $$ \mathfrak{c}_{\mathfrak{c}(\mathfrak{n})}(\mathfrak{s}_{nc})\otimes\mathbb{C}= \bigoplus_{i=1}^{n_1}\mathfrak{u}^\mathbb{C}_{\lambda_i} \,\oplus\,\bigoplus_{j=1}^{n_2}\left( \mathfrak{u}^{\mathbb{C}}_{a_j+b_j\sqrt{-1}}+ \mathfrak{u}^{\mathbb{C}}_{a_j-b_j\sqrt{-1}}\right), $$ where the complex subspace $\mathfrak{u}^{\mathbb{C}}_\alpha$ for any real weight $\alpha\in\mathfrak{a}^*$ or any complex weight $\alpha\in\mathfrak{a}^*\otimes\mathbb{C}$ is defined as $$ \mathfrak{u}^\mathbb{C}_\alpha =\left\{X\in\mathfrak{c}_{\mathfrak{c}(\mathfrak{n})} (\mathfrak{s}_{nc})\otimes\mathbb{C}\,|\, \bigl(\mathrm{ad}(u)-\alpha(u)\mathrm{Id}\bigr)^k X=0,\forall\, u\in\mathfrak{r}(\mathfrak{g}), \mbox{ for some }k>0\right\}. $$ We assume $a_j=0\in\mathfrak{a}^*$ iff $1\leq j\leq m$, where $m$ can be zero. Denote by $\mathfrak{v}^{\mathbb{C}}_{\alpha}$ the eigenspace in $\mathfrak{u}^{\mathbb{C}}_{\alpha}$ for the $\mathrm{ad}(\mathfrak{r}(\mathfrak{g}))$-actions, i.e. $$ \mathfrak{v}^\mathbb{C}_{\alpha}= \left\{X\in\mathfrak{u}^{\mathbb{C}}_{\alpha}\,|\, \mathrm{ad}(u)X=\alpha(u) X,\forall \,u\in\mathfrak{r}(\mathfrak{g})\right\}. $$ We take any basis $\left\{v_{j,1},\ldots,v_{j,k_j}\right\}$ for $\mathfrak{v}^\mathbb{C}_{b_j\sqrt{-1}}$\,; then the space $\mathfrak{v}$ of all bounded vectors in $\mathfrak{c}(\mathfrak{n})$ for $G/H$ can be presented as \begin{eqnarray*} \mathfrak{v}&=&\left(\mathfrak{v}^{\mathbb{C}}_0\cap\mathfrak{v}\right) \oplus \bigoplus_{j=1}^{m}\left(\left(\mathfrak{v}^\mathbb{C}_{b_j\sqrt{-1}}+ \mathfrak{v}^{\mathbb{C}}_{-b_j\sqrt{-1}}\right)\cap\mathfrak{v}\right)\\ &=&\mathfrak{c}_{\mathfrak{c}(\mathfrak{r}(\mathfrak{g}))} (\mathfrak{s}_{nc})\oplus \mathrm{span}^\mathbb{R} \left\{v_{j,k}+\overline{v_{j,k}}, \sqrt{-1}\bigl(v_{j,k}-\overline{v_{j,k}}\bigr),\forall\, 1\leq j\leq m, 1\leq k\leq k_j\right\}. \end{eqnarray*} \smallskip We hope that all the above results will be useful in the study of related topics. \medskip {\bf Acknowledgements.} The authors would like to sincerely thank the Chern Institute and the School of Mathematical Sciences at Nankai University for their hospitality during the preparation of this paper. They would also like to sincerely thank Joseph A. Wolf for reading this paper and for helpful discussions. The first author is supported by National Natural Science Foundation of China (No. 11821101, No. 11771331), Beijing Natural Science Foundation (No. 00719210010001, No. 1182006), Capacity Building for Sci-Tech Innovation -- Fundamental Scientific Research Funds (No. KM201910028021).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The Steenrod operations and higher cup products, $\cup_i$, are an important part of algebraic topology, and have recently been emerging as a critical tool in the theory of fermionic quantum field theories. They were invented by Steenrod \cite{SteenrodCupPaper} in the study of homotopy theory. More recently, they have made a surprising entrance in the theory of fermionic and spin TQFTs in the study of Symmetry-Protected Topological (SPT) phases of matter \cite{GuWen,GaiottoKapustin,BrumfielMorgan}. As such, it would be desirable to give them a geometric interpretation beyond their mysterious cochain formulas, in a similar way to how the regular cup product, $\cup_0$, can be interpreted as an intersection product between one cochain's dual cells and a shifted version of the other's cells. In this note, we will show that there is in fact such an interpretation as a generalized intersection product, which gives the intersection class of the cells dual to one cochain with a \textit{thickened} and shifted version of the other's cells. Similar interpretations for the Steenrod squares (the maps $\alpha \mapsto \alpha \cup_i \alpha$) as self-intersections from immersions have been shown \cite{EcclesGrant} and are related to classical formulas of Wu and Thom. However, a more general interpretation of the $\cup_i$ products has still not been demonstrated. Such an interpretation was conjectured by Thorngren in \cite{Thorngren2018Thesis}, which describes the $\cup_i$ product as an intersection arising from an $i$-parameter thickening with respect to $i$ vector fields. Such vector fields will be referred to as `Morse Flows'. We will verify the conjecture by giving an explicit construction of a set of such $n$ vector fields inside each $n$-simplex. Thickening the Poincaré dual cells with respect to the first $i$ fields and shifting with respect to the next field will show us that the cells that intersect each other with respect to these fields are exactly the pairs that appear in Steenrod's formula for $\cup_i$. In Section \ref{sec:Prelim}, we review some convenient ways to describe and parameterize the Poincaré dual cells, which we will use extensively throughout the note. While this material is standard, it would be helpful to skim through it to review our notation. In Section \ref{CupZeroSection}, we will warm up by reviewing how the intersection properties of the $\cup_0$ product's formulas can be obtained from a vector field flow. In Section \ref{sec:HigherCupProductMain}, we will start by reviewing the definitions of the higher cup formulas and the Steenrod operations. Then we'll provide some more motivation, given the $\cup_0$ product's interpretation, as to why the thickening procedure should seem adequate to describe the higher cup products. Then, we'll describe the thickening procedure, state more precisely our main proposition about the higher cup formula in Section \ref{mainPropCup_m}, and prove it in Sections \ref{mainPropCup_m}-\ref{proofOfMainPropCup_m}. The main calculation is in Section \ref{proofOfMainPropCup_m}. In principle the main content is Sections \ref{definingThickenedCells}-\ref{proofOfMainPropCup_m}, and the rest of Sections \ref{CupZeroSection}-\ref{sec:HigherCupProductMain} are there to build intuition for the construction. Throughout, we only work with $\mathbb{Z}_2$ coefficients.
After talking about the higher cup products, we will show how our interpretation can be applied to interpreting the `Grassmann integral' of Gu-Wen/Gaiotto-Kapustin \cite{GuWen,GaiottoKapustin}, which we'll call the ``GWGK Grassmann Integral'' or simply the ``GWGK Integral''. In Section \ref{backgroundPropertiesOfGuWenGrassmannIntegral}, we review some background material on $Spin$ structures and Stiefel-Whitney classes on a triangulated manifold, as well as the formal properties of the GWGK Integral we set out to reproduce. In Section \ref{geometricGuWenIn2D}, we review how the GWGK Integral can be defined geometrically in 2D with respect to the vector fields we constructed before and a loop decomposition of a $(d-1)$-cocycle on the dual 1-skeleton. And in Section \ref{geometricGuWenInHigherDimensions}, we extend this understanding to higher dimensions. The interpretation of the higher cup product finds its application in Section \ref{verifyingTheFormalPropertiesHigherDimensions}, where we demonstrate the `quadratic refinement' property of our construction. \section*{\LARGE{\underline{\textbf{Interpreting Higher Cup Products}}}} \addcontentsline{toc}{section}{\LARGE{\underline{\textbf{Interpreting Higher Cup Products}}}} \section{Preliminaries} \label{sec:Prelim} It will be helpful to review how Poincaré duality looks on the standard $n$-simplex, $\Delta^n$. Recall that \begin{equation} \Delta^n = \{ (x_0,\dots,x_n) \in \mathbb{R}^{n+1} | x_0 + \dots + x_n = 1 , x_i \ge 0 \text{ for all i} \} \end{equation} In particular, we'll review and write out explicit formulas parameterizing the cells in the dual cellulation of $\Delta^n$ and how they are mapped to their cochain partners. \subsection{Cochains} Recall that we are working with $\mathbb{Z}_2$-valued chains and cochains. If we fix $\alpha$ to be a $p$-cochain, then $\alpha$ restricted to $\Delta^n$ will manifest itself as a function from the set of size-$(p+1)$ subsets of $\{0,\dots,n\}$ to $\mathbb{Z}_2$. In other words \begin{equation} \alpha(i_0,\dots,i_p) \in \{0,1\} = \mathbb{Z}_2, \text{ where } 0 \le i_0 < \dots < i_p \le n \end{equation} Note that there are $2^{{n+1}\choose{p+1}}$ distinct p-cochains on $\Delta^n$, since there are ${{n+1}\choose{p+1}}$ choices of $\{i_0<\dots<i_p\} \subset \{0,\dots,n\}$ and two choices of the value of each $\alpha(i_0,\dots,i_p)$. The `coboundary' of a $p$-cochain $\alpha$ is a $(p+1)$-cochain $\delta \alpha$ defined by \begin{equation} \delta\alpha (i_0,\dots,i_{p+1}) = \sum_{j=0}^{p+1} \alpha(i_0,\dots,\hat{\imath}_j,\dots,i_{p+1}) \end{equation} where $\hat{\imath}_j$ refers to skipping over $i_j$ in the list. We say $\alpha$ is `closed' if $\delta \alpha = 0$ everywhere, which means, modulo 2, that $\alpha$ equals 1 on an even number of the $p$-dimensional faces of each $(p+1)$-simplex. We say $\alpha$ is `exact' if $\alpha = \delta \lambda$ for some $\lambda$. \subsection{The dual cellulation} Now, let us review how to construct the dual cellulation of $\Delta^n$. For clarity, let's first look at the case $n=2$ before writing the formulas in general dimensions. \subsubsection{Example: The 2-simplex} The 2-simplex is $\Delta^2 = \{ (x_0,x_1,x_2) \in \mathbb{R}^{3} | x_0 + x_1 + x_2 = 1 , x_i \ge 0 \text{ for all i} \}$. The 'barycentric subdivision' is generated by the intersections of the planes $\{x_0 = x_1, x_0 = x_2, x_1 = x_2\}$ with $\Delta^2$, as shown in Figure(\ref{fig:2_Simplex1}).
The Poincaré dual cells are made from a certain subset of the cells of the barycentric subdivision, indicated pictorially in Figures(\ref{fig:2_Simplex2}, \ref{fig:2_Simplex3}). \begin{figure}[h!] \centering \begin{minipage}{0.44\textwidth} \centering \includegraphics[width=\linewidth]{2_Simplex_Barycenter.png} \caption{The standard 2-simplex $\Delta^2$. The blue, dashed lines are the barycentric subdivision of $\Delta^2$, obtained from the intersections of the planes $\{x_0 = x_1, x_0 = x_2, x_1 = x_2\}$ with $\Delta^2$} \label{fig:2_Simplex1} \end{minipage} \quad \quad \quad \begin{minipage}{0.44\textwidth} \centering \includegraphics[width=\linewidth]{2_Simplex_Poincare.png} \caption{The Poincaré dual cellulation, whose 1-skeleton is in blue. We'll be able to express the dual cells in terms of the points $f_0,f_1,f_2,c$.} \label{fig:2_Simplex2} \end{minipage} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.4\linewidth]{2_Simplex_Cells.png} \caption{The cells in the Poincaré dual cellulation of $\Delta^2$} \label{fig:2_Simplex3} \end{figure} Let us now list all the cells in the Poincaré dual decomposition of $\Delta^2$. It is first helpful to define four points $c, f_0, f_1, f_2 \in \Delta^2$. Here, \begin{equation} c = \{1/3,1/3,1/3\} \end{equation} is the coordinate of the barycenter of $\Delta^2$. And, \begin{equation} \begin{split} f_0 &= \{0,1/2,1/2\} \\ f_1 &= \{1/2,0,1/2\} \\ f_2 &= \{1/2,1/2,0\} \end{split} \end{equation} are the barycenters of the boundary $1$-simplices of $\Delta^2$, respectively. We denote by $f_i$ the barycenter of the $1$-simplex opposite to the point $v_i$, where $v_i \in \{(1,0,0),(0,1,0),(0,0,1)\}$ is the vertex on the $x_i$-axis of $\Delta^2$. There is one 0-cell $P_{\{0,1,2\}}$ which consists of only the center point $c$, \begin{equation} P_{\{0,1,2\}} = \{c\} \end{equation} There are three 1-cells $P_{\{0,1\}},P_{\{0,2\}},P_{\{1,2\}}$, which consist of the intersection of $\Delta^2$ with the rays going from $c$ to $f_2, f_1, f_0$. In other words \begin{equation} \begin{split} P_{\{0,1\}}&=\Delta^2 \cap \{c + (f_2-c)t | t \ge 0\} \\ P_{\{0,2\}}&=\Delta^2 \cap \{c + (f_1-c)t | t \ge 0\} \\ P_{\{1,2\}}&=\Delta^2 \cap \{c + (f_0-c)t | t \ge 0\} \end{split} \end{equation} And, there are three 2-cells, $P_{\{0\}},P_{\{1\}},P_{\{2\}}$, which consist of the points \begin{equation} \begin{split} P_{\{0\}}=\Delta^2 \cap \{c + (f_1-c)t_1 + (f_2-c)t_2 | t_1, t_2 \ge 0\} \\ P_{\{1\}}=\Delta^2 \cap \{c + (f_0-c)t_1 + (f_2-c)t_2 | t_1, t_2 \ge 0\} \\ P_{\{2\}}=\Delta^2 \cap \{c + (f_0-c)t_1 + (f_1-c)t_2 | t_1, t_2 \ge 0\} \end{split} \end{equation} The reason we chose to name the cells this way was to make clearer the relationship between the cochains and their dual chains. The statement is that a $p$-cochain $\alpha$ is dual to the union of the cells on which $\alpha$ doesn't vanish, i.e. $\alpha$ is dual to $\bigcup \{P_{\{i_0,\dots,i_p\}} | \alpha(i_0,\dots,i_p) = 1\}$. Above, we have given an explicit parametrization of the cells $P_{I}, I \subset \{0,\dots,n\}$. But, it will also be helpful for us to express them in another way.
One can easily check that the 1-cells can be written as: \begin{equation} \begin{split} P_{\{0,1\}}&=\Delta^2 \cap \{(x_0,x_1,x_2) | x_0 = x_1 \ge x_2 \} \\ P_{\{0,2\}}&=\Delta^2 \cap \{(x_0,x_1,x_2) | x_0 = x_2 \ge x_1 \} \\ P_{\{1,2\}}&=\Delta^2 \cap \{(x_0,x_1,x_2) | x_1 = x_2 \ge x_0 \} \end{split} \end{equation} In words, $P_{\{i,j\}}$ is where the plane $x_i=x_j$ intersects $\Delta^2$, but restricted to those points where $x_i, x_j$ are greater than or equal to the remaining coordinate. And, the 2-cells can be similarly written as: \begin{equation} \begin{split} P_{\{0\}}=\Delta^2 \cap \{(x_0,x_1,x_2) | x_0 \ge x_1, x_0 \ge x_2 \} \\ P_{\{1\}}=\Delta^2 \cap \{(x_0,x_1,x_2) | x_1 \ge x_0, x_1 \ge x_2 \} \\ P_{\{2\}}=\Delta^2 \cap \{(x_0,x_1,x_2) | x_2 \ge x_0, x_2 \ge x_1 \} \end{split} \end{equation} \subsubsection{General dimensions} We can see general patterns for the dual cell decompositions in $n$ dimensions. Just as before, we can define the point $c$, which is the barycenter of $\Delta^n$, and the points $f_0,\dots,f_n$, which are the barycenters of the $(n-1)$-simplices opposite to the vertices $v_i = (0,\dots,0,1,0,\dots,0)$ on the $x_i$-axis in $\Delta^n$. Explicitly, we'll have that the coordinates of these points are \begin{equation} c = \frac{1}{n+1}(1,...,1) \end{equation} which comes from setting $x_0 = x_1 = \dots = x_n$ and $\sum_j x_j = 1$. And, we'll have \begin{equation} \begin{split} f_i &= ((f_i)_0,\dots,(f_i)_n), \text{ where} \\ (f_i)_j &= \begin{cases} 1/n, & \text{if } i \neq j \\ 0, & \text{if } i = j \\ \end{cases} \end{split} \end{equation} which comes from setting $x_0 = \dots = \hat{x}_i = \dots = x_n$ \footnote{The notation $\hat{x}_i$ refers to skipping over it in the equality} and $x_i = 0$ and $\sum_j x_j = 1$. From these points, an $(n-p)$-cell $P_{\{i_0,\dots,i_{p}\}}$ that would appear in the dual chain of a $p$-cochain $\alpha$ with $\alpha(i_0,\dots,i_{p})=1$ can be written as: \begin{equation}\label{cellParameterizations} \begin{split} &P_{\{i_0,\dots,i_{p}\}} = \Delta^n \cap \{c + \sum_{j=1}^{n-p} (f_{\hat{\imath}_j} - c)t_j | t_j \ge 0 \text{ for all } j\},\\ &\text{where } \{\hat{\imath}_1,\dots,\hat{\imath}_{n-p}\} = \{0,\dots,n\} \setminus \{i_0,\dots,i_{p}\} \end{split} \end{equation} And in parallel, we can also write \begin{equation} \label{dualCellEqns1} \begin{split} P_{\{i_0,\dots,i_{p}\}} = \Delta^n \cap \big\{(&x_0,\dots,x_n) | x_{i_0} = \dots = x_{i_p} \text{ and } x_i \ge x_{\hat{\imath}},\\ &\text{ for all } i \in \{i_0,\dots,i_{p}\}, \hat{\imath} \notin \{i_0,\dots,i_{p}\} \big\}\\ \end{split} \end{equation} which tells us that $P_{\{i_0,\dots,i_{p}\}}$ is where $\Delta^n$ intersects the plane $x_{i_0} = \dots = x_{i_p}$, restricted to the points where $x_i \ge x_{\hat{\imath}}$ for $i \in \{i_0,\dots,i_p\}$ and $\hat{\imath} \in \{\hat{\imath}_1, \dots, \hat{\imath}_{n-p}\}$. \subsection{More Notation} Such $p$-cochains $\alpha$ will be denoted as living in the set $C^p(M,\mathbb{Z}_2)$. Closed $p$-cochains live in the set $Z^p(M,\mathbb{Z}_2) \subset C^p(M,\mathbb{Z}_2)$. So $C^p(M,\mathbb{Z}_2)$ with upper-index $p$ is the set of all functions from the $p$-simplices of $M$ to $\mathbb{Z}_2$. Here, $M$ implicitly refers to a manifold equipped with its triangulation. We will refer to the same manifold equipped with its dual cellulation as $M^\vee$. Poincaré duality says that the chains in $C_{n-p}(M^\vee,\mathbb{Z}_2)$ are in bijection with $C^p(M,\mathbb{Z}_2)$.
However, we could also use the words `cochains' and `chains' to describe a related set of objects. Namely, we could also consider $C^{p}(M^\vee,\mathbb{Z}_2)$, which are functions from $p$-cells of $M^\vee$ to $\mathbb{Z}_2$. There will be a completely analogous statement of Poincaré duality that $C^{p}(M^\vee,\mathbb{Z}_2)$ is in bijection with $C_{n-p}(M,\mathbb{Z}_2)$, so that chains living on $M$ are in bijection with cochains on $M^\vee$. Throughout describing the higher cup products, we'll mostly be referring to `cochains' as being functions on a single $n$-simplex $\Delta^n$. Later on when discussing combinatorial $Spin$ structures, we'll see that representatives of Stiefel-Whitney classes naturally live in $C_{n-p}(M,\mathbb{Z}_2) = C^p(M^\vee,\mathbb{Z}_2)$. \section{Warm up: The $\cup_0$ product as intersection from a `Morse Flow'} \label{CupZeroSection} Now, as a warm up, let's review what the formula for $\cup_0$ had to do with vector field flow on the simplex $\Delta^n$. We'll use the standard notation that $\cup_0 = \cup$. Recall that for a $p$-cochain $\alpha \in C^p(X,\mathbb{Z}_2)$ and an $(n-p)$-cochain $\beta \in C^{n-p}(X,\mathbb{Z}_2)$, the value of $\alpha \cup \beta$ on an $n$-simplex $(0,\dots,n)$ is given as \begin{equation} (\alpha \cup \beta)(0,\dots,n) = \alpha(0,\dots,p) \beta(p,\dots,n) \end{equation} For a manifold $X$ with a simplicial decomposition and a branching structure, it is well known that the cup product on $H^*(X)$ is Poincaré dual to the intersection form on the associated chains, when viewed on $H_*(X)$. There is an elementary way to see directly on the cochain level why the intersection of the chains associated to $\alpha, \beta$ may take this form. This is discussed in \cite{Thorngren2018Thesis}, but it will be helpful to redo the discussion here before moving on to higher cup products. As before, it will be helpful to explicitly visualize the case of $n=2$ before moving on to higher dimensions. \subsection{Example: $\cup_0$ product in 2 dimensions} The simplest example of a nontrivial cup product is the case $n=2$, between two 1-cochains. Suppose $\alpha$ and $\beta$ are both $\mathbb{Z}_2$-valued 1-cochains. Then, the value of $\alpha \cup \beta$ on the simplex $(0,1,2)$ is \begin{equation} (\alpha \cup \beta)(0,1,2) = \alpha(0,1)\beta(1,2) \end{equation} Note that $\alpha$ and $\beta$ are both Poincaré dual to 1-chains, and the cell $P_{\{i,j\}}$ is included in the dual chain of $\alpha$ iff $\alpha(i,j)=1$. To see why the quantity $\alpha(0,1)\beta(1,2)$ plays a role in the intersection of $\alpha$ and $\beta$, we will introduce a `Morse Flow' of the chains within the simplex as follows. For some small real number $0 < \epsilon \ll 1$ and some fixed set of real numbers $b_0 < b_1 < b_2$, we will define new coordinates $\Tilde{x}_0,\Tilde{x}_1,\Tilde{x}_2$ on $\mathbb{R}^3$ as: \begin{equation} \Tilde{x}_i := x_i + \epsilon b_i, \text{ for } i=0,1,2 \end{equation} Then, in parallel to our cells $P_{I}$ defined in Eq(\ref{dualCellEqns1}), we can define a set of `shifted' cells $\Tilde{P}_{I}$ given by \begin{figure}[h!] \centering \includegraphics[width=0.5\linewidth]{2_Simplex_Flowed.png} \caption{The green lines are the 1-skeleton of the \textit{flowed} 1-cells, i.e. $\Tilde{P}_{\{0,1\}},\Tilde{P}_{\{0,2\}},\Tilde{P}_{\{1,2\}}$. The solid blue lines are the 1-skeleton of the original cells, $P_{\{0,1\}},P_{\{0,2\}},P_{\{1,2\}}$, and the dashed blue lines complete the barycentric subdivision of $\Delta^2$.
Notice that the flowed 1-cells of $\Tilde{P}$ only intersect once with the original cells $P$. More precisely, the only intersection point (the yellow star) is between $P_{\{0,1\}}$ and $\Tilde{P}_{\{1,2\}}$. This is the geometric interpretation for why the cup product $(\alpha \cup \beta)(0,1,2) = \alpha(0,1)\beta(1,2)$ represents an intersection.} \label{fig:2_Simplex_flowed} \end{figure} \begin{equation}\label{dualCellsShifted} \begin{split} \Tilde{P}_{\{i_0,\dots,i_{p}\}} = \Delta^n \cap \big\{(&x_0,\dots,x_n) | \Tilde{x}_{i_0} = \dots = \Tilde{x}_{i_p} \text{ and } \Tilde{x}_i \ge \Tilde{x}_{\hat{\imath}},\\ &\text{ for all } i \in \{i_0,\dots,i_{p}\}, \hat{\imath} \in \{0,\dots,n\} \big\}\\ \end{split} \end{equation} The Morse Flow will have this definition in every dimension. Pictorially, we can imagine the branching structure as playing the role of shifting the cells within $\Delta^2$, and creating an additional copy of them, as in Figure(\ref{fig:2_Simplex_flowed}). In that figure, we can see that each of the planes $\Tilde{x}_i=\Tilde{x}_j$ is `shifted' away from the plane $x_i=x_j$ by a transverse distance proportional to $\epsilon(b_j-b_i)$. We can readily notice that the shifted cells $\Tilde{P}$ only intersect the original cells $P$ at exactly one point. Furthermore, the only intersection point is between the cells $P_{\{0,1\}}$ and $\Tilde{P}_{\{1,2\}}$. This gives us a nice interpretation for the cup product. In other words, if we represent $\alpha$ by its representative chains on the original cells, $P$, and $\beta$ by its representatives on the $\Tilde{P}$, then we'll have that $\alpha \cup \beta (0,1,2)$ is 1 if those submanifolds intersect in $\Delta^2$ and 0 if they do not. Furthermore, for intersections of 0-cells with 2-cells, it's simple to see that the only pairs of such cells that intersect are $(P_{\{0\}},\Tilde{P}_{\{0,1,2\}})$ and $(P_{\{0,1,2\}},\Tilde{P}_{\{2\}})$. This matches up with the intuition that for $\alpha$ a 0-cochain (resp. 2-cochain) and $\beta$ a 2-cochain (resp. 0-cochain), we have $\alpha \cup \beta (0,1,2) = \alpha(0)\beta(0,1,2)$ (resp. $\alpha \cup \beta (0,1,2) = \alpha(0,1,2)\beta(2)$). Also, note that we can see a simple explanation of the `non-commutative' property of the cup product, that on the cochain level $\alpha \cup \beta \neq \beta \cup \alpha$: it's simply because the Morse Flow breaks the symmetry of which cells in $\alpha$ intersect with which cells in $\beta$. This intuition for the cup product will indeed hold for any chains in any dimension, a property which we'll state more precisely and verify in the next section. \subsection{Cup product in general dimensions} Let's state our first proposition about the $\cup_0$ product. \begin{prop} Fix $n \ge 2$. For $\epsilon$ sufficiently small and any subsets $I = \{i_0 < \dots < i_p\}$, $J =\{j_0 < \dots < j_q \}$ of $\{0,\dots,n\}$, let the cells $P_I$ and $\Tilde{P}_J$ be defined as in Eq(\ref{dualCellEqns1}) and Eq(\ref{dualCellsShifted}), respectively. Then, \begin{enumerate} \item If $i_p > j_0$, then the intersection of the cells $P_I \cap \Tilde{P}_J$ is empty. \item If $i_p = j_0$, then $\lim_{\epsilon \to 0} (P_I \cap \Tilde{P}_J) = P_{\{i_0,\dots,i_p=j_0,\dots,j_q\}}$ \item If $i_p < j_0$, then $\lim_{\epsilon \to 0} (P_I \cap \Tilde{P}_J) = P_{\{i_0,\dots,i_p,j_0,\dots,j_q\}}$ \end{enumerate} where `limit' here means the Cauchy limit of the sets. \end{prop} This is \textit{almost} the statement we want, modulo the subtlety which is Part 3 of the proposition.
However, note that if $i_p < j_0$ then the cell $P_{\{i_0,\dots,i_p,j_0,\dots,j_q\}}$ is an $(n-p-q-1)$-dimensional cell; this is one dimension lower than in the case $i_p = j_0$, where $P_{\{i_0,\dots,i_p=j_0,\dots,j_q\}}$ is an $(n-p-q)$-dimensional cell. Also note that for any finite $\epsilon >0$, the intersection of the cells, when non-empty, will be an $(n-p-q)$-dimensional manifold. So in short, this proposition tells us that in the limit of $\epsilon \to 0$, the only intersections that retain the full dimension $n-p-q$ are between the $I = \{i_0,\dots,i_p\}$ and $J=\{j_0,\dots,j_q\}$ such that $i_p = j_0$. Translated back to the cochain language, this proposition says that for our $\mathbb{Z}_2$ cochains $\alpha, \beta$, the Poincaré dual of $\alpha \cup \beta$ is the union of all intersections of the cells $\big\{P_{\{i_0,\dots,i_p\}} | \alpha(i_0,\dots,i_p)=1 \big\}$ with the cells $\big\{\Tilde{P}_{\{j_0,\dots,j_q\}} | \beta(j_0,\dots,j_q)=1 \big\}$ that survive as \textit{full, $(n-p-q)$-dimensional} cells in the limit of $\epsilon \to 0$, namely those which satisfy $i_p = j_0$. This is a direct way to see how the cup product algebra interacts with the intersection algebra. Now, we can give the proof. \begin{proof} Recall that $P_I$ and $\Tilde{P}_J$ are defined, respectively, by the relations: \begin{equation} \begin{split} P_{\{i_0,\dots,i_{p}\}} = \Delta^n \cap \big\{(&x_0,\dots,x_n) | x_{i_0} = \dots = x_{i_p} \text{ and } x_i \ge x_{\hat{\imath}},\\ &\text{ for all } i \in \{i_0,\dots,i_{p}\}, \hat{\imath} \in \{0,\dots,n\} \big\}\\ \end{split} \end{equation} and \begin{equation} \begin{split} \Tilde{P}_{\{j_0,\dots,j_q\}} = \Delta^n \cap \big\{(&\Tilde{x}_0,\dots,\Tilde{x}_n) | \Tilde{x}_{j_0} = \dots = \Tilde{x}_{j_q} \text{ and } \Tilde{x}_j \ge \Tilde{x}_{\hat{\jmath}},\\ &\text{ for all } j \in \{j_0,\dots,j_q\}, \hat{\jmath} \in \{0,\dots,n\} \big\}\\ \end{split} \end{equation} The definition of $\Tilde{P}_{\{j_0,\dots,j_q\}}$ can be rewritten as \begin{equation} \begin{split} \Tilde{P}_{\{j_0,\dots,j_q\}} = \Delta^n \cap \big\{(&x_0,\dots,x_n) | x_{j_0} = x_{j_1} + \epsilon (b_{j_1} - b_{j_0}) = \dots = x_{j_q} + \epsilon (b_{j_q} - b_{j_0}) \text{ and } x_j \ge x_{\hat{\jmath}} + \epsilon(b_{\hat{\jmath}}-b_j),\\ &\text{ for all } j \in \{j_0,\dots,j_q\}, \hat{\jmath} \in \{0,\dots,n\} \big\}\\ \end{split} \end{equation} Now, we can see why Part 1 is true. Suppose $i_p > j_0$ and $\epsilon>0$. Any point $(x_0,\dots,x_n)$ in the intersection would need to satisfy $x_{j_0} \ge x_{i_p} + \epsilon(b_{i_p}-b_{j_0}) > x_{i_p} \ge x_{j_0}$, i.e. $x_{j_0} > x_{j_0}$, which is impossible. Here, we used that $b_{i_p}-b_{j_0} > 0$ for $i_p > j_0$. So, there are no points in $P_I \cap \Tilde{P}_J$. The argument for Part 2 is similar.
It's not hard to check that the intersection $P_I \cap \Tilde{P}_J$ is defined by the equations \begin{equation} \begin{split} P_I \cap \Tilde{P}_J = \Delta^n \cap \bigg\{(&x_0,\dots,x_n) | x_{i_0} = \dots = x_{i_p} = x_{j_0} = x_{j_1} + \epsilon (b_{j_1} - b_{j_0}) = \dots = x_{j_q} + \epsilon (b_{j_q} - b_{j_0}),\\ &\text{ and } x_k \ge x_{\hat{k}} + \epsilon \Tilde{D}_{k \hat{k}} \text{ for all } k \in \{i_0,\dots,i_p=j_0,\dots,j_q\}, \hat{k} \in \{0,\dots,n\}, \\ &\text{ where } \bigg(\Tilde{D}_{k \hat{k}} := \begin{cases} 0 &\text{ if } k \notin J \\ b_{\hat{k}}-b_k &\text{ if } k \in J \setminus \{i_p=j_0\} \\ \max\{0, b_{\hat{k}}-b_k\} &\text{ if } k = i_p = j_0 \\ \end{cases} \bigg) \bigg\}\\ \end{split} \end{equation} And, in the limit $\epsilon \to 0$, this set converges precisely to $P_{\{i_0,\dots,i_p=j_0,\dots,j_q\}}$. The argument for Part 3 is again similar to both of the previous parts. Similarly to Part 1, we have the constraint $x_{j_0} \ge x_{i_p} - \epsilon(b_{j_0}-b_{i_p}) \ge x_{j_0} - \epsilon(b_{j_0}-b_{i_p})$. But, since now $i_p < j_0$, we'll have that this constraint limits $x_{j_0}$ to lie in the range $[x_{i_p} - \epsilon(b_{j_0}-b_{i_p}),x_{i_p}]$. In the limit of $\epsilon \to 0$, this will enforce $x_{i_p} = x_{j_0}$. So, in the limit of $\epsilon \to 0$, we'll have that \begin{equation} \begin{split} P_I \cap \Tilde{P}_J &\xrightarrow{\epsilon \to 0} \Delta^n \cap \big\{(x_0,\dots,x_n) | x_{i_0} = \dots = x_{i_p} = x_{j_0} = \dots = x_{j_q} \text{ and } x_k \ge x_{\hat{k}},\\ &\quad\quad\quad\quad\quad\quad \text{ for all } k \in \{i_0,\dots,i_p,j_0,\dots,j_q\}, \hat{k} \in \{0,\dots,n\} \big\}\\ &= P_{\{i_0,\dots,i_p,j_0,\dots,j_q\}} \end{split} \end{equation} \end{proof} \subsection{Comparing the vector fields on different simplices} \label{flowedSimplices} While our vector fields satisfy the desired intersection properties within each simplex, one minor issue that we should address is how the vector fields compare on the boundaries of neighboring simplices. It will not be the case that the vector fields match on two neighboring simplices. However, the branching structure will ensure that the vector fields can be smoothly glued together without causing any additional intersections between the chains (see Figure(\ref{neighboringTwoSimplices})). This is because the branching structure will make sure that the flowed cells are flowed to the same side of the original simplex. So those flows can be connected across different faces without creating any intersections beyond the ones inside the simplices themselves. So, the intersection numbers on the whole triangulated manifold will just be given by the intersections on the interiors. In the cases where the intersection classes are higher dimensional, the intersection classes themselves can also be connected to meet on the boundaries. We expect that these flows can be \textit{smoothly} connected to match on the boundaries. However, we will avoid explicitly smoothing the vector fields at the boundaries due to the technicalities that tend to be involved in such constructions. For example, in high dimensions a single piecewise-linear structure on a manifold may correspond to many smooth structures, so we would expect an explicit smoothing of these maps to depend on the particular smooth structure. However, in discussing the GWGK Integral, we'll be able to explicitly connect the vector fields in a neighborhood of the 1-skeleton.
This is because we'll do everything in local coordinates, which aren't as technical to deal with near the 1-skeleton. \begin{figure}[h!] \centering \includegraphics[width=0.5\linewidth]{Neighboring_Two_Simplices.png} \caption{The flows on different simplices may not match on the boundaries. But, the flows can be connected (the bright green line) to avoid any additional intersections between the flowed (green) cells and the original (blue) cells.} \label{neighboringTwoSimplices} \end{figure} \section{$\cup_i$ products from $(i+1)$-parameter Morse Flows} \label{sec:HigherCupProductMain} First, let us recall the definition and some properties of the higher cup products (see e.g. \cite{MosherTangora}). Given a $p$-cochain $\alpha$ and an $(n-p+i)$-cochain $\beta$, we'll have that $\alpha \cup_i \beta$ is an $n$-cochain, such that when restricted to an $n$-simplex, \begin{equation} \label{higherCupFormula} (\alpha \cup_i \beta)(0,\dots,n) = \sum_{0 \le j_0 < \dots < j_i \le n} \alpha(0 \to j_0, j_1 \to j_2, \dots) \beta(j_0 \to j_1, j_2 \to j_3,\dots) \end{equation} where we use the notation $j \to k$ to refer to $j,\dots,k$. There is a caveat in the above definition, that we just restrict to those $j_0 < \dots < j_i$ such that $\#\{0 \to j_0, j_1 \to j_2, \dots \} = p+1$ and $\#\{j_0 \to j_1, j_2 \to j_3, \dots \} = n-p+i+1$, so not all $\{j_0,\dots,j_i\}$ contribute to the sum. For example, if $\alpha$ and $\beta$ are both 2-cochains, then $\alpha \cup_1 \beta$ is a 3-cochain with $(\alpha \cup_1 \beta)(0,1,2,3) = \alpha(0,1,3)\beta(1,2,3) + \alpha(0,2,3)\beta(0,1,2)$, so only two of the ${4 \choose 2} = 6$ choices of $0 \le j_0 < j_1 \le 3$ contribute in this case. It is well-known that the $\cup_i$ products are \textit{not} cohomology operations, i.e. that $\alpha \cup_i \delta \lambda$ may not be cohomologically trivial or even closed, even though $\delta \lambda$ is exact. Despite the fact that $\cup_i$ is not a \textit{binary} cohomology operation, it does induce a \textit{unary} cohomology operation: the maps called the `Steenrod Squares', \begin{equation} \alpha \mapsto \alpha \cup_i \alpha =: Sq^{p-i}(\alpha) \end{equation} will always be closed for $\alpha$ closed and (up to a coboundary) only depend on the cohomology class of $\alpha$. The root algebraic property of the $\cup_i$ products is the formula: \begin{equation} \delta(\alpha \cup_k \beta) = \delta\alpha \cup_k \beta + \alpha \cup_k \delta\beta + \alpha \cup_{k-1} \beta + \beta \cup_{k-1} \alpha \end{equation} This implies that if $\delta \alpha = 0$, then $\delta(\alpha \cup_i \alpha) = \delta\alpha \cup_{i} \alpha + \alpha \cup_{i} \delta\alpha + 2 \alpha \cup_{i-1} \alpha = 0$, so $\alpha \cup_i \alpha$ is closed for closed $\alpha$. And, $(\alpha + \delta \beta) \cup_k (\alpha + \delta \beta) = \alpha \cup_k \alpha + \delta\big(\alpha \cup_{k+1} \delta\beta + \beta \cup_k \delta\beta + \beta \cup_{k-1} \beta \big)$, meaning $Sq^{p-k}(\alpha)$ and $Sq^{p-k}(\alpha + \delta \beta)$ are closed cocycles in the same cohomology class. An important consequence of the above equality is that for $\alpha$ a cocycle and $\lambda$ any cochain, we'll have (up to a coboundary): \begin{equation} \label{higherCupSymm} \alpha \cup_i \delta\lambda \equiv \alpha \cup_{i-1} \lambda + \lambda \cup_{i-1} \alpha \end{equation} which relates the $\cup_i$ to how the $\cup_{i-1}$ differs under switching the order of $\alpha,\lambda$ (or equivalently under reversing the branching structure).
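Since we will be manipulating Eq(\ref{higherCupFormula}) repeatedly, it may help to see it implemented concretely. The following short Python sketch is our own illustration and is not part of any construction in this note; in particular, the encoding of a cochain as a dictionary of its nonzero values on a single $n$-simplex is just a convention chosen here for experimentation. The degree caveat discussed below Eq(\ref{higherCupFormula}) is enforced automatically, since vertex tuples of the wrong size are simply absent from the dictionaries. \begin{verbatim}
from itertools import combinations

def higher_cup(alpha, beta, i, n):
    # Evaluate (alpha cup_i beta)(0,...,n) over Z_2.  A cochain is
    # encoded as a dict sending sorted vertex tuples to 1; tuples
    # absent from the dict count as 0, which automatically enforces
    # the degree restriction in Steenrod's formula.
    total = 0
    for js in combinations(range(n + 1), i + 1):
        bounds = [0] + list(js) + [n]
        even, odd = set(), set()
        for k in range(i + 2):
            # the runs 0->j_0, j_1->j_2, ... go to alpha ("even"),
            # the runs j_0->j_1, j_2->j_3, ... go to beta ("odd");
            # consecutive runs share their endpoints j_k.
            run = range(bounds[k], bounds[k + 1] + 1)
            (even if k % 2 == 0 else odd).update(run)
        total += alpha.get(tuple(sorted(even)), 0) \
               * beta.get(tuple(sorted(odd)), 0)
    return total % 2

# Check the cup_1 example above: for 2-cochains on the 3-simplex,
# (alpha cup_1 beta)(0,1,2,3) = alpha(0,1,3) beta(1,2,3)
#                             + alpha(0,2,3) beta(0,1,2)
assert higher_cup({(0, 1, 3): 1}, {(1, 2, 3): 1}, i=1, n=3) == 1
assert higher_cup({(0, 1, 2): 1}, {(1, 2, 3): 1}, i=1, n=3) == 0
# i = 0 recovers (alpha cup beta)(0,1,2) = alpha(0,1) beta(1,2):
assert higher_cup({(0, 1): 1}, {(1, 2): 1}, i=0, n=2) == 1
\end{verbatim} Nothing in the arguments below depends on this sketch; it merely provides a way to sanity-check explicit instances of Eq(\ref{higherCupFormula}), such as the $\cup_1$ example above.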
\subsection{Motivation for the $\cup_1$ product} Now, let's give a key example to motivate why `thickening' the chains should be useful in describing the higher cup products. We'll start with the simplest example of the $\cup_1$ product. Let's consider the $\cup_1$ product between a closed cochain $\alpha$ and some coboundary $\delta\lambda$. Let's consider $\alpha$ a $p$-cocycle and $\lambda$ an $(n-p)$-cochain, so that $\alpha \cup_1 \delta \lambda$ is an $n$-cochain. From Eq(\ref{higherCupSymm}), we'll have: \begin{equation} \alpha \cup_1 \delta\lambda \equiv \alpha \cup \lambda + \lambda \cup \alpha \end{equation} To think about what this term means, we should first think about what each of $\alpha \cup \lambda$ and $\lambda \cup \alpha$ means. Based on our observations in the previous section, we can see that $\alpha \cup \lambda$ measures where $\lambda$ intersects with a version of $\alpha$ shifted in the direction of the \textit{positive} Morse flow. And $\lambda \cup \alpha$ measures where $\lambda$ intersects with a copy of $\alpha$ flowed in the \textit{negative} direction. So, we see that $\int \alpha \cup_1 \delta\lambda = \int \lambda \cup \alpha + \int \alpha \cup \lambda$ measures how the intersection numbers of the chains representing $\alpha$ and $\lambda$ change with respect to the positive and negative Morse Flows.\footnote{Note that $\alpha \cup \lambda$ may not equal $\lambda \cup \alpha$ plus a coboundary, since $\lambda$ may not be closed. So $\int \alpha \cup \lambda$ may not equal $\int \lambda \cup \alpha$.} \begin{figure}[h!] \centering \begin{minipage}{0.44\textwidth} \centering \includegraphics[width=\linewidth]{Cup1LinkingPositive.png} \end{minipage} \quad \quad \quad \begin{minipage}{0.44\textwidth} \centering \includegraphics[width=\linewidth]{Cup1LinkingNegative.png} \end{minipage} \caption{The change in the linking number of $\delta\lambda$ and $\alpha$ under the Morse flow gives $\int \alpha \cup_1 \delta \lambda$. Note that by construction, $\alpha$ and $\delta \lambda$ can only intersect at the barycenters of the 3-simplices of the triangulation and the lines connecting these barycenters. Here, we chose to depict them only meeting at the barycenter of a single 3-simplex. This would happen if, e.g. $\alpha(0,1,2)=\alpha(0,1,3)=\delta\lambda(0,2,3)=\delta\lambda(1,2,3)=1$ with all other entries being zero at a 3-simplex. \\ (Left) Shifting the curve dual to $\alpha$ via the positive Morse flow, giving a shifted curve $\alpha^+$. $\alpha^+$ intersecting $\lambda$ once means that $\alpha^+$ and $\delta\lambda$ have a linking number of 1. (Right) Shifting the curve dual to $\alpha$ via the negative Morse flow, giving a shifted curve $\alpha^-$. $\alpha^-$ doesn't intersect $\lambda$, so $\alpha^-$ and $\delta\lambda$ have a linking number of 0. } \label{LinkingNumberCup1} \end{figure} In three dimensions, we can visualize this as follows. Suppose $\alpha$ and $\delta \lambda$ are 2-cocycles, so that $\lambda$ is a 1-cochain. This means that $\alpha$ and $\delta \lambda$ are dual to closed 1D curves on the dual lattice, and $\delta \lambda$ is dual to the boundary of the 2-surface that's dual to $\lambda$. Recall $\int \alpha \cup_1 \delta\lambda = \int \alpha \cup \lambda + \int \lambda \cup \alpha$ gives the difference between the intersection numbers of $\alpha$ with $\lambda$ in the positive and negative Morse flow directions.
In the case where $\alpha$ is a trivial curve and the manifold is $S^3$, we can visualize this process as how the \textit{linking number} of $\alpha$ and $\delta \lambda$ changes under the Morse flow, due to the well-known fact that the (mod 2) linking number between two curves $C_1, C_2$ is the number of times (mod 2) that $C_1$ intersects a surface that $C_2$ bounds. This is shown in Figure(\ref{LinkingNumberCup1}). While this linking number picture is a nice way to visualize the integrals of certain $\cup_1$ products in three dimensions, it is still somewhat unsatisfactory. First, the linking number is often subtle to define and may not make sense; e.g. in higher dimensions, or if the manifold or the curves themselves are topologically nontrivial, it's not always possible to define linking numbers. Next, a linking number is a global quantity that requires global data to compute, whereas the higher cup products are local quantities defined on every simplex. And, most glaringly, this picture only gives us information about cochains of the form $\alpha \cup_1 \delta\lambda$, while it'd be nice to understand it for more general pairs of cochains. \begin{figure}[h!] \centering \includegraphics[width=0.60\linewidth]{Cup1Thiccening.png} \caption{$\alpha$ and $\beta$ are represented by the blue curve and the central red curve. The thickening of $\beta$ is given in both the positive and negative directions of the Morse flow (both directions pointing away from the central red curve). $\int \alpha \cup_1 \beta$ measures the intersection of $\alpha$ with this thickening of $\beta$. Note that if $\beta$ were a trivial curve, this integral gives how the linking numbers of $\alpha$ and $\beta$ change with respect to the Morse flow.} \label{fig:Cup1Thiccening} \end{figure} Following this intuition of trying to give a `local' geometric definition, we are led to the idea of `thickening' the chains. In particular, we can note that this difference of linking numbers can also be attributed to `thickening' $\delta \lambda$ in \textit{both} directions of the Morse flow and then measuring the intersection number of $\alpha$ with this thickening of $\delta \lambda$. For example, see Figure(\ref{fig:Cup1Thiccening}). This could be anticipated from the linking number intuition, since the change in the linking number under the Morse flow only depends on the surface in a neighborhood of the second curve. So, it seems like we've found a potential geometric prescription to assign to the $\cup_1$ product. While this is in line with our intuition, we quickly run into an issue when we try to implement this on the cochain level: the intersection between the original cells and their thickenings is degenerate (i.e. the curves intersect on their boundaries). We can see this by drawing the simplest example, see Figure(\ref{fig:CochainThicceningNotWorkVsWork}), of the intersection of a 1-chain with the thickening of a 1-chain. It's not hard to convince oneself that the only intersection point between a cell of $\alpha$ and the thickened version of a different cell of $\beta$ will be the barycenter, $c$, which is at the boundary of both the cell in $\alpha$ and the thickened cell of $\beta$. And, the intersection of a cell with its own thickening will simply be itself, not anything lower-dimensional. This was basically the same issue we faced with the original cup product.
The way we dealt with this degenerate intersection before was to shift $\alpha$ along the direction of the Morse Flow, which made the intersection nondegenerate. We could again try shifting $\alpha$ along the Morse flow, but we'll quickly realize that these shifted cells of $\alpha$ will only intersect the thickened $\beta$ at its edge: simply because the thickened $\beta$ was defined with respect to the Morse flow in the first place! To resolve this ambiguity, we will need to shift $\alpha$ along a vector that's linearly independent from all the other vectors. This way, we can arrange for there to be a definite intersection point between the thickened cells of $\beta$ and the shifted cells of $\alpha$. There is one aspect of this that we should be careful about. Let's say we thickened $\beta$ along the original Morse flow vector $\Vec{v}$ by some thickness $\epsilon_1$. Then, we'll want to shift $\alpha$ along the second Morse flow vector $\Vec{w}$ by some distance $\epsilon_2 \ll \epsilon_1$. This is because once $\epsilon_2$ becomes too big compared to $\epsilon_1$, the intersection locus might change its topology, as can be seen by examining Figure(\ref{fig:CochainThicceningNotWorkVsWork}). \begin{figure}[h!] \centering \begin{minipage}{0.44\textwidth} \centering \includegraphics[width=\linewidth]{CochainThicceningNotWork.png} \end{minipage} \quad \quad \quad \begin{minipage}{0.44\textwidth} \centering \includegraphics[width=\linewidth]{CochainThicceningWork.png} \end{minipage} \caption{(Left) $\alpha$ and $\beta$ are represented by the \textit{thick} blue and red curves. The thickening of $\beta$ is given by the interior of the red rectangle. And some shifts of $\alpha$ along the Morse Flow are given by the dashed blue lines. No matter how you move $\alpha$ along the \textit{original} Morse flow, $\alpha$ and the thickening of $\beta$ will always have a degenerate intersection, at their boundaries. This means that the intersection of a chain with the thickened version of itself will be the chain itself, not a lower-dimensional version. (Right) However, if we shift $\alpha$ by a \textit{new} vector (green arrow) that's linearly independent from the original Morse flow vector (black arrows), then we can say unambiguously whether or not the shifted $\alpha$ and the thickened $\beta$ intersect, and the intersection is non-degenerate. Note that if we shift $\alpha$ by an amount comparable to how much we thickened $\beta$, then the intersection region (yellow star) will change in topology.} \label{fig:CochainThicceningNotWorkVsWork} \end{figure} \textbf{\underline{Intuition for Higher Cup Products}} From here, we can see a general pattern of thickening and shifting that we can perform to try to compare to the higher cup products. For example, after accepting this for the $\cup_1$ product, we can apply the same intuitive reasoning to \begin{equation*} \alpha \cup_2 \delta\lambda \equiv \alpha \cup_1 \lambda + \lambda \cup_1 \alpha \end{equation*} to see that $\cup_2$ could be thought of as measuring how $\alpha \cup_1 \lambda$ changes under a Morse Flow. We'll see that the $\cup_i$ product is obtained from thickening the cells of one cochain by an $i$-parameter Morse Flow and then shifting the other a small distance to make a well-defined intersection. It is often said that the higher cup products measure how much the lower cup products `fail to commute'. For example, the $\cup_1$ product gives an indication of how badly the $\cup_0$ product fails to commute on the cochain level.
Geometrically, this is saying that the $\cup_1$ product `measures' how the intersection of the cells differs between positive and negative $\epsilon$ under the Morse flow. Looking forward, we will see that the geometric way to see $\cup_1$ is to `thicken' the cells under \textit{both} the positive and negative directions of the Morse flow and measure an intersection of the original cell with the thickened cell. However, to measure such an intersection, we \textit{again} need to break the symmetry by introducing an additional vector flow, for a similar reason that we needed to break the symmetry to measure intersections in the first place. In general, the $\cup_i$ product will involve an $(i+1)$-parameter Morse Flow, where the first $i$ directions of the flow thicken the manifold and the last direction breaks the symmetry in order to be able to measure an intersection number. We note that much of this discussion was proposed by Thorngren in \cite{Thorngren2018Thesis}. The rest of this section will be devoted to setting up the algebra needed to realize this and showing that the higher cup product formulas are exactly reproduced by such a procedure. \subsection{Defining the thickened cells} \label{definingThickenedCells} Our first goal should be to write down parametric equations defining the points of the flowed cells, analogous to the ones in Eq(\ref{cellParameterizations}). To do this, we will need to define some variables $\Tilde{x}_i$ in a way analogous to what we did for the 1-parameter Morse flow. Recall that we defined $\Tilde{x}_j = x_j + \epsilon b_j$ where $b_0 < \dots < b_n$. Here, there was a single $\epsilon$ that played the role of the Morse Flow parameter, and the vector $\Vec{b} = (b_j)$ was the Morse Flow vector. For the $\cup_m$ product, we will need an $(m+1)$-parameter Morse Flow, which means that we need $(m+1)$ linearly independent vectors. Let's call these vectors $\Vec{b}_i$ for $i \in \{1, \dots, m+1\}$ and denote by $b_{ij}$ the matrix of these vectors, \begin{equation} \Vec{b}_i = (b_{i 0},\dots,b_{i n}) \text{ for } i = 1,\dots,m+1 \end{equation} We'll define quantities $\epsilon_{j}$ which play the role of the Morse Flow parameters, and we'll similarly define our shifted coordinates $\Tilde{x}_i$ as: \begin{equation} \Tilde{x}_i = x_i + \epsilon_1 b_{1 i} + \dots + \epsilon_{m+1} b_{m+1, i} \end{equation} Now, to parametrically define our thickened cells, we'll define the points $\Tilde{c}$ near the barycenter of $\Delta^n$ and the points $\Tilde{f}_j$ near the centers of the faces as follows. $\Tilde{c}$ will be defined by setting all the $\Tilde{x}_j$ coordinates equal, $\Tilde{x}_0 = \dots = \Tilde{x}_n$, analogously to how we defined the center $c$ earlier. Before writing the expression for $\Tilde{c}$, we will find it convenient to define new quantities $B_i$ as \begin{equation} B_i := \sum_{j=0}^n b_{ij} \end{equation} Then, solving the equations $x_0 + \dots + x_n = 1$ and $\Tilde{x}_0 = \dots = \Tilde{x}_n$, it's straightforward to see that: \begin{equation} \label{shiftedCenterMorse} \begin{split} \Tilde{c} &= (\Tilde{c}_0,\dots,\Tilde{c}_n), \text{ where} \\ \Tilde{c}_i &= \frac{1}{n+1}(1+B_1 \epsilon_1 + \dots + B_{m+1} \epsilon_{m+1}) - (b_{1 i}\epsilon_1 + \dots + b_{m+1,i}\epsilon_{m+1}) \end{split} \end{equation} And, we'll similarly define our points $\Tilde{f}_j$ by setting $\Tilde{x}_0 = \dots = \hat{\Tilde{x}}_j = \dots = \Tilde{x}_n$ and $x_j = 0$ and $x_0 + \dots + x_n = 1$.
Solving these equations will give us: \begin{equation} \label{shifedFacesMorse} \begin{split} \Tilde{f}_j &= ((\Tilde{f}_j)_0,\dots,(\Tilde{f}_j)_n), \text{ where} \\ (\Tilde{f}_j)_i &= \bigg(\frac{1}{n}(1+(B_1 - b_{1 j}) \epsilon_1 + \dots + (B_{m+1}-b_{m+1,j}) \epsilon_{m+1}) - (b_{1 i}\epsilon_1 + \dots + b_{m+1, i}\epsilon_{m+1})\bigg) \delta_{i \neq j}, \text{ and} \\ \delta_{i \neq j} &= \begin{cases} 1 \text{ if } i \neq j \\ 0 \text{ if } i = j \\ \end{cases} \end{split} \end{equation} We should clarify that these $\Tilde{c}$ and $\Tilde{f}_j$ are really functions of the $\epsilon_j$. Now, for some fixed set of $\epsilon_j$, we can define the shifted cells $\Tilde{P}_{\{i_0 \dots i_p\}}$ entirely analogously to the original cells in Eq(\ref{cellParameterizations}): \begin{equation} \begin{split} \Tilde{P}_{\{i_0,\dots,i_{p}\}}(\epsilon_1,\dots,\epsilon_{m+1};\varepsilon) &= \Delta^n \cap \{\Tilde{c} + \sum_{j=1}^{n-p} (\Tilde{f}_{\hat{\imath}_j} - \Tilde{c})t_j | t_j \ge 0 \text{ for all } j\},\\ &\text{where } \{\hat{\imath}_1,\dots,\hat{\imath}_{n-p}\} = \{0,\dots,n\} \setminus \{i_0,\dots,i_{p}\} \end{split} \end{equation} These $\Tilde{P}_{\{i_0 \dots i_p\}}$ are defined with respect to a fixed set of $\epsilon_j$: as of now, they are not thickened. Note that they depend on multiple parameters $\epsilon_1,\dots,\epsilon_{m+1}$ and that this definition agrees with our previous expression Eq(\ref{dualCellsShifted}). We could have defined such a parameterization earlier when talking about the $\cup_0$ product, but it wasn't needed there the way it is now. Another equivalent way to express this is\footnote{This is because the cells can be thought of as shifting by the vectors $\{\epsilon_j (\Vec{b}_j - \frac{1}{n+1} B_j \Vec{b}_0)\}$, which are the projections of $\{\epsilon_j \Vec{b}_j\}$ onto $\Delta^n$}: \begin{equation} \begin{split} \Tilde{P}_{\{i_0,\dots,i_{p}\}}(\epsilon_1,\dots,\epsilon_{m+1};\varepsilon) = \Delta^n \cap \{& c + \epsilon_1 \Vec{b}_1 + \cdots + \epsilon_{m+1} \Vec{b}_{m+1} - \frac{1}{n+1}(\epsilon_1 B_1 + \cdots + \epsilon_{m+1}B_{m+1}) \cdot \Vec{b}_0 \\ & + \sum_{j=1}^{n-p} (f_{\hat{\imath}_j} - c)t_j | t_j \ge 0 \text{ for all } j\},\\ \text{where } & \{\hat{\imath}_1,\dots,\hat{\imath}_{n-p}\} = \{0,\dots,n\} \setminus \{i_0,\dots,i_{p}\} \end{split} \end{equation} Here, $\Vec{b}_0$ refers to the vector $(1,\dots,1)$. To thicken them, we should also treat the $\epsilon_j$ as parameters to be varied. So overall, our thickened cells $\Tilde{P}^\text{thick}_{\{i_0 \dots i_p\}}(\epsilon_{m+1};\varepsilon)$ will be written as: \begin{equation} \label{thickenedCellDefinition} \Tilde{P}^\text{thick}_{\{i_0 \dots i_p\}}(\epsilon_{m+1};\varepsilon) = \bigsqcup_{\epsilon_j \in (-\varepsilon, \varepsilon) \text{ for } j \in \{1,\dots,m\}} \Tilde{P}_{\{i_0 \dots i_p\}}(\epsilon_1,\dots,\epsilon_{m+1};\varepsilon) \end{equation} Above, we only allowed the $\epsilon_1,\dots,\epsilon_m$ to vary since we're considering an $m$-parameter thickening. And, we only thickened the cells up to some small fixed number $0 < \varepsilon \ll 1$. Eventually, $\epsilon_{m+1} \ll \varepsilon$ will induce a small shift that lets us define a non-degenerate intersection. (A small numeric sanity check of the formulas for $\Tilde{c}$ and $\Tilde{f}_j$ is sketched below.)
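As a quick sanity check of Eq(\ref{shiftedCenterMorse}) and Eq(\ref{shifedFacesMorse}), the following small numeric sketch (ours, with arbitrary test data) verifies that $\Tilde{c}$ and the $\Tilde{f}_j$ satisfy their defining equations: they lie in the plane $x_0 + \dots + x_n = 1$, the shifted coordinates $\Tilde{x}_i$ are equal where required, and $x_j = 0$ on $\Tilde{f}_j$.
\begin{verbatim}
# A small numeric sanity check (ours) of the shifted center and face points.
import numpy as np

n, m = 4, 1
rng = np.random.default_rng(1)
b = rng.normal(size=(m + 1, n + 1))    # rows are the flow vectors b_1..b_{m+1}
eps = rng.normal(size=m + 1) * 1e-2    # Morse flow parameters eps_1..eps_{m+1}
B = b.sum(axis=1)                      # B_u = sum_j b_{uj}

shift = eps @ b                        # shift_i = sum_u eps_u b_{ui}
c_t = (1 + eps @ B) / (n + 1) - shift  # the shifted center \tilde{c}

# \tilde{c} lies on the plane x_0+...+x_n = 1, and its shifted coordinates
# \tilde{x}_i = x_i + sum_u eps_u b_{ui} are all equal:
assert np.isclose(c_t.sum(), 1)
assert np.allclose(c_t + shift, (c_t + shift)[0])

for j in range(n + 1):
    f_t = (1 + eps @ (B - b[:, j])) / n - shift   # \tilde{f}_j ...
    f_t[j] = 0.0                                  # ... with the delta_{i != j}
    assert np.isclose(f_t.sum(), 1)               # on the plane sum(x) = 1
    xt = f_t + shift                              # shifted coordinates equal
    assert np.allclose(np.delete(xt, j), np.delete(xt, j)[0])
\end{verbatim}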
What we really mean by the $\ll$ signs above is that we're going to be considering the cells' intersections in the following order of limits: \begin{equation} \label{thickenedCellIntersection} \lim_{\varepsilon \to 0} \lim_{\epsilon_{m+1} \to 0^+} P_{\{i_0 \dots i_p\}} \cap \Tilde{P}^\text{thick}_{\{j_0 \dots j_q\}}(\epsilon_{m+1};\varepsilon) \end{equation} \subsection{Statement of the main proposition on $\cup_m$} \label{mainPropCup_m} Now, we are in a position to write down the main proposition of this note relating the $\cup_m$ formulas to the generalized intersections. Specifically, given our $m$-parameter thickening, we want to find what the limit in Eq(\ref{thickenedCellIntersection}) equals. More specifically, if $\alpha$ is a $p$-cochain and $\beta$ is a $q$-cochain, then $\alpha \cup_m \beta$ will be a $(p+q-m)$-form, so we really care about which $(n+m-p-q)$-dimensional cells survive the limit. Recall that for the regular cup product, many pairs of $(n-p)$- and $(n-q)$-cells had intersections whose limits survived but were `lower-dimensional', of dimension $(n-p-q-1)$. Likewise, for these thickened cases, there may be many pairs of $(n-p)$-cells whose intersections with the thickened $(n-q)$-cells limit to cells of dimension less than $(n+m-p-q)$. Now, before we state the main proposition, let's look more closely at what exactly the formula for the higher cup product is saying. For general indices, it reads that for a $p$-cochain $\alpha$ and a $q$-cochain $\beta$: \begin{equation} \label{higherCupFormula2} (\alpha \cup_m \beta)(i_0,\dots,i_{p+q-m}) = \sum_{\{j_0 < \dots < j_m\} \subset \{i_0,\dots,i_{p+q-m}\}} \alpha(i_0 \to j_0, j_1 \to j_2, \dots) \beta(j_0 \to j_1, j_2 \to j_3,\dots) \end{equation} For this subsection, we will refer to $j_\gamma \to j_{\gamma +1}$ as $\{j_\gamma,\dots, j_{\gamma+1}\} \cap \{i_0,\dots,i_{p+q-m}\}$, where $\#\{i_0 \to j_0, j_1 \to j_2, \dots \} = p+1$ and $\#\{j_0 \to j_1, j_2 \to j_3, \dots \} = q+1$. Writing this statement in terms of the dual cells, this means that only pairs of cells of the form $P_{\{i_0 \to j_0, j_1 \to j_2, \dots\}}$ and $P_{\{j_0 \to j_1, j_2 \to j_3, \dots\}}$ will contribute to $(\alpha \cup_m \beta)$. Let's think about what kind of restrictions this would imply for general cells. Let's call the sets $J_1 := \{i_0 \to j_0, j_1 \to j_2, \dots\}$, $J_2 := \{j_0 \to j_1, j_2 \to j_3, \dots\}$. Note that the two sets $J_1$ and $J_2$ will always share exactly $m+1$ indices $\{j_0,\dots,j_m\}$. We'll have that any $i \in \{i_0,\dots,i_{p+q-m}\} \setminus \{j_0,\dots,j_m\}$ lying between $j_0$ and $j_m$ will be contained in some interval $j_k < i < j_{k+1}$ for some $k \in \{0, \dots, m-1\}$. The forms of the sets $J_1, J_2$ tell us that such an $i \in J_2$ iff $k$ is even, and $i \in J_1$ iff $k$ is odd. Now we are ready to state our main proposition. \begin{prop} \label{mainProp} Choose two cells $P_{K}$ and $P_{L}$ of $\Delta^n$, where $K = \{k_0 < \dots < k_p\}$ has $(p+1)$ elements and $L = \{\ell_0 < \dots < \ell_q\}$ has $(q+1)$ elements. Let's say that $K \cup L = \{i_0 < \dots < i_r\}$ has $r+1$ elements.
Then there exists a set of linearly independent vectors $\Vec{b}_i$, $i \in \{1,\dots,m+1\}$, such that, given $\Tilde{P}^\text{thick}_{L}(\epsilon_{m+1};\varepsilon)$ as defined in Eq(\ref{thickenedCellDefinition}), the following statements hold \begin{enumerate} \item If $r \neq p+q-m$, then $\lim_{\varepsilon \to 0} \lim_{\epsilon_{m+1} \to 0^+} P_{K} \cap \Tilde{P}^\text{thick}_{L}(\epsilon_{m+1};\varepsilon)$ will be empty or consist of cells whose dimensions are lower than $n+m-p-q$ \item If $r = p+q-m$, then $K$ and $L$ share $(m+1)$ elements which we'll denote $j_0 < \dots < j_m$. \begin{enumerate} \item If $K = \{i_0 \to j_0, j_1 \to j_2, \dots\}$ and $L = \{j_0 \to j_1, j_2 \to j_3, \dots\}$, then $\lim_{\varepsilon \to 0} \lim_{\epsilon_{m+1} \to 0^+} P_{K} \cap \Tilde{P}^\text{thick}_{L}(\epsilon_{m+1};\varepsilon) = P_{K \cup L}$. \item Otherwise, $\lim_{\varepsilon \to 0} \lim_{\epsilon_{m+1} \to 0^+} P_{K} \cap \Tilde{P}^\text{thick}_{L}(\epsilon_{m+1};\varepsilon)$ will be empty. \end{enumerate} \end{enumerate} Furthermore, we can choose the $\Vec{b}_i$ so that any subset of $n$ vectors chosen from the set $\{\Vec{b}_1, \dots, \Vec{b}_{m+1}, (c-f_0),\dots,(c-f_n)\}$ is linearly independent. \end{prop} We can readily verify Part 1 of the proposition. \begin{proof}[Proof of Part 1 of Proposition \ref{mainProp}] Let us first verify the case that $r > p+q-m$. Note that $\lim_{\varepsilon \to 0} \lim_{\epsilon_{m+1} \to 0^+} P_{K} \cap \Tilde{P}^\text{thick}_{L}$ should be a subset of $P_K \cap P_L = P_{K \cup L}$. This is immediate from the definition of the Cauchy limit of sets, since $\lim_{\varepsilon \to 0} \lim_{\epsilon_{m+1} \to 0^+} \Tilde{P}^\text{thick}_{L} = P_{L}$. So if $r > p+q-m$, then $P_{K \cup L}$ will be of dimension $n-r < n+m-p-q$. Now, if $r < p+q-m$, we'll show that each $P_{K} \cap \Tilde{P}^\text{thick}_{L}$ is empty for $\epsilon_{m+1} \neq 0$, so there can't be any intersection points at all. For this second case, we need the property that any subset of $n$ vectors chosen from the set $\{\Vec{b}_1, \dots, \Vec{b}_{m+1}, (c-f_0),\dots,(c-f_n)\}$ is linearly independent. Let us write $\Tilde{P}^\text{thick}_{L}(\epsilon_{m+1})$ to indicate that this set is a function of $\epsilon_{m+1}$. $P_{K}$ is a subset of an $(n-p)$-dimensional plane $Q_1$ in $\Delta^n$, and $\Tilde{P}^\text{thick}_{L}(\epsilon_{m+1})$ is a subset of an $(n+m-q)$-dimensional plane $Q_2(\epsilon_{m+1})$, with $Q_2(0)$ containing $P_L$. Since $r < p+q-m$, $Q_1$ and $Q_2(0)$ will share an $(n-r)$-dimensional subspace, with $n-r > n+m-p-q$, consisting of the points of the plane containing $P_{K \cup L}$. However, we claim that for any $\epsilon_{m+1} \neq 0$, $Q_1 \cap Q_2(\epsilon_{m+1})$ is empty, which would then imply that $P_{K} \cap \Tilde{P}^\text{thick}_{L}$ is empty. Note that $Q_1 = \{c + t_1(f_{\hat{k}_1} - c) + \dots + t_{n-p} (f_{\hat{k}_{n-p}}-c) | t_i \in \mathbb{R} \}$ and $Q_2 = \{(c + \epsilon_{m+1} \Vec{b}_{m+1}) + s_1(f_{\hat{\ell}_1} - c) + \dots + s_{n-q} (f_{\hat{\ell}_{n-q}} - c) + s_{n-q+1} \Vec{b}_1 + \dots + s_{n-q+m} \Vec{b}_m | s_i \in \mathbb{R}\}$, where $\{\hat{k}_1,\dots,\hat{k}_{n-p}\} = \{0,\dots,n\} \setminus K$ and $\{\hat{\ell}_1,\dots,\hat{\ell}_{n-q}\} = \{0,\dots,n\} \setminus L$. We'll have that of the $(n-p)+(n-q+m)$ vectors in $\{f_{\hat{k}_1} - c, \dots, f_{\hat{k}_{n-p}} - c\} \cup \{f_{\hat{\ell}_1} - c, \dots, f_{\hat{\ell}_{n-q}} - c\} \cup \{\Vec{b}_1,\dots,\Vec{b}_m\}$, $(n-r)$ of these are repeated.
So, there are $(n+m+r-p-q)$ unique vectors, which we may call $\{w_1,\dots,w_{n+m+r-p-q}\}$. Finding where $Q_1$ and $Q_2(\epsilon_{m+1})$ intersect amounts to solving the equation $$\epsilon_{m+1} \Vec{b}_{m+1} + s_1 w_1 + \dots + s_{n+m+r-p-q} w_{n+m+r-p-q} = 0$$ But, since $r < p+q-m$, we'll have $n+m+r-p-q < n$. And since $\Vec{b}_{m+1}$ is not contained in $\{w_i\}$, a solution with $\epsilon_{m+1} \neq 0$ would imply that $\{\Vec{b}_{m+1},w_1,\dots,w_{n+m+r-p-q}\}$ (a set of size $\le n$) is linearly dependent, contradicting the fact that we chose the $\Vec{b}_i$ so that any subset of $n$ of the $\{\Vec{b}_1, \dots, \Vec{b}_{m+1}, (c-f_0),\dots,(c-f_n)\}$ is linearly independent. So $Q_1 \cap Q_2 (\epsilon_{m+1}) = \emptyset$, meaning that $\lim_{\varepsilon \to 0} \lim_{\epsilon_{m+1} \to 0^+} P_{K} \cap \Tilde{P}^\text{thick}_{L}$ is empty. \end{proof} To verify Part 2 in the case where $r=p+q-m$, we need to do some more work and actually construct vectors $\Vec{b}$ with the desired properties. But let us observe that we'll only have to worry about the case when $r=p+q-m=n$, i.e. when $K \cup L = \{0,\dots,n\}$. This is because if $K \cup L = \{i_0,\dots,i_r\}$ with $\{\hat{\imath}_1,\dots,\hat{\imath}_{n-r}\} = \{0,\dots,n\} \setminus \{i_0,\dots,i_r\}$, then we can restrict to the subsimplex $\Delta' = \Delta_{\{i_0,\dots,i_r\}} = \Delta^n \cap \{(x_0,\dots,x_n) | x_{\hat{\imath}_1} = \dots = x_{\hat{\imath}_{n-r}} = 0\}$ and consider the intersection question on that subsimplex. We can similarly define the dual cells associated to $\Delta'$ and consider how the Morse Flows and thickenings act on those cells. We can analyze this by defining the center, $c'$, and the centers of the faces, $f'_j$, of $\Delta'$ and explicitly writing the cell decompositions in terms of these variables. If we do this, it will be immediate that the restriction to $\Delta'$ of the cells' intersections limits to the center $c'$ iff the intersections limit to $P_{K \cup L}$ throughout $\Delta$. This is because the shifted cells are all parallel to the original cells: if the intersection on that boundary cell is nonempty, then the intersection throughout $\Delta$ will be a shifted version of $P_{K \cup L}$ that limits to $P_{K \cup L}$; if it is empty, then the limit won't contain $c'$ and will be a lower-dimensional cell. Also note that while we will explicitly construct the fields inside each simplex, the fields won't necessarily match on the simplices' boundaries. But we expect that the observations of Section \ref{flowedSimplices} will also apply to these constructions, allowing us to define the vector fields continuously on a simplicial complex, so we wouldn't have to worry about any additional intersections that come from these boundary mismatches. \subsection{Proof of Part 2 of Main Proposition} \label{proofOfMainPropCup_m} Now, let us set up our main calculation for the case of $K \cup L = \{0,\dots,n\}$. For the rest of this section, we will use the variables $i,j$ to label the cells and $k$ to label coordinates, and they won't be related to our previous usages.
Recall that we want to find the intersection points of the cell \begin{equation} \label{mainProofCellEqn1} {P}_{\{i_0 \dots i_p\}} = \Delta^n \cap \{c + (f_{\hat{\imath}_1} - c)s_1 + \dots + (f_{\hat{\imath}_{n-p}} - c)s_{n-p} | s_1,\dots,s_{n-p} \ge 0 \} \end{equation} with the shifted cell \begin{equation} \label{mainProofCellEqn2} \begin{split} \Tilde{P}^\text{thick}_{\{j_0 \dots j_{n+m-p}\}}(\epsilon_{m+1};\varepsilon) = \Delta^n \cap \{& \epsilon_1 \Vec{b}_1 + \cdots + \epsilon_{m+1} \Vec{b}_{m+1} - \frac{1}{n+1}(\epsilon_1 B_1 + \cdots + \epsilon_{m+1}B_{m+1}) \cdot \Vec{b}_0 \\ & + c + (f_{\hat{\jmath}_1} - c)t_1 + \dots + (f_{\hat{\jmath}_{p-m}} - c)t_{p-m} | t_1,\dots,t_{p-m} \ge 0 , -\varepsilon \le \epsilon_1, \dots, \epsilon_{m} \le \varepsilon\} \end{split} \end{equation} where we define $\{\hat{\imath}_1,\dots,\hat{\imath}_{n-p}\} = \{0,\dots,n\} \setminus \{i_0,\dots,i_p\} $ and $\{\hat{\jmath}_1,\dots,\hat{\jmath}_{p-m}\} = \{0,\dots,n\} \setminus \{j_0,\dots,j_{n+m-p}\}$. So, we want to solve the equations, \begin{equation} \begin{split} c + (f_{\hat{\imath}_1} - c)s_{\hat{\imath}_1} + \dots + (f_{\hat{\imath}_{n-p}} - c)s_{\hat{\imath}_{n-p}} = & \epsilon_1 \Vec{b}_1 + \cdots + \epsilon_{m+1} \Vec{b}_{m+1} - \frac{1}{n+1}(\epsilon_1 B_1 + \cdots + \epsilon_{m+1}B_{m+1}) \cdot \Vec{b}_0 \\ & + c + (f_{\hat{\jmath}_1} - c)t_{\hat{\jmath}_1} + \dots + (f_{\hat{\jmath}_{p-m}} - c)t_{\hat{\jmath}_{p-m}}, \quad \text{or} \\ c(1-s_{\hat{\imath}_1} - \dots - s_{\hat{\imath}_{n-p}}) + f_{\hat{\imath}_1} s_{\hat{\imath}_1} + \dots + f_{\hat{\imath}_{n-p}} s_{\hat{\imath}_{n-p}} = & \epsilon_1 \Vec{b}_1 + \cdots + \epsilon_{m+1} \Vec{b}_{m+1} - \frac{1}{n+1}(\epsilon_1 B_1 + \cdots + \epsilon_{m+1}B_{m+1}) \cdot \Vec{b}_0 \\ & + c(1-t_{\hat{\jmath}_1} - \dots - t_{\hat{\jmath}_{p-m}}) + f_{\hat{\jmath}_1} t_{\hat{\jmath}_1} + \dots + f_{\hat{\jmath}_{p-m}} t_{\hat{\jmath}_{p-m}} \end{split} \end{equation} We have $n$ equations for $x_1,\dots,x_n$ (the $x_0$ equation is redundant since $x_0 + \dots + x_n = 1$). And, we have $n$ variables $\{s_{\hat{\imath}_1},\dots,s_{\hat{\imath}_{n-p}},t_{\hat{\jmath}_1},\dots,t_{\hat{\jmath}_{p-m}},\epsilon_1,\dots,\epsilon_m\}$ to solve for.
It'll be convenient to change variables, and instead solve for $\{S_{\hat{\imath}_1},\dots,S_{\hat{\imath}_{n-p}},T_{\hat{\jmath}_1},\dots,T_{\hat{\jmath}_{p-m}},A_1,\dots,A_m\}$, defined as \begin{equation} \begin{split} s_{i} &= \epsilon_{m+1} S_i, \text{ for } i \in \{\hat{\imath}_1,\dots,\hat{\imath}_{n-p}\} \\ t_{j} &= \epsilon_{m+1} T_j, \text{ for } j \in \{\hat{\jmath}_1,\dots,\hat{\jmath}_{p-m}\} \\ \epsilon_{i} &= \epsilon_{m+1} A_i, \text{ for } i \in \{1,\dots,m\} \end{split} \end{equation} When we expand out each equation for $x_k$, $k=1,\dots,n$, we get that \begin{equation} \begin{split} &\frac{1}{n}\big(S_{\hat{\imath}_1}\delta_{k \neq \hat{\imath}_1} + \dots + S_{\hat{\imath}_{n-p}}\delta_{k \neq \hat{\imath}_{n-p}} \big) - \frac{1}{n+1}\big(S_{\hat{\imath}_1} + \dots + S_{\hat{\imath}_{n-p}}\big)\\ = &\frac{1}{n+1}(B_1 A_1 + \dots + B_m A_m + B_{m+1}) - b_{1 k}A_1 - \dots - b_{m k} A_m - b_{m+1, k} \\ & +\frac{1}{n}\big(T_{\hat{\jmath}_1}\delta_{k \neq \hat{\jmath}_1} + \dots + T_{\hat{\jmath}_{p-m}}\delta_{k \neq \hat{\jmath}_{p-m}} \big) - \frac{1}{n+1}\big(T_{\hat{\jmath}_1} + \dots + T_{\hat{\jmath}_{p-m}} \big) \end{split} \end{equation} We can then multiply by $n(n+1)$ and do some rearranging to give the equations \begin{equation} \begin{split} &\big(S_{\hat{\imath}_1} + \dots + S_{\hat{\imath}_{n-p}}\big) - (n+1)S_k \delta_{k \in \{\hat{\imath}\}} \\ &= \big(T_{\hat{\jmath}_1} + \dots + T_{\hat{\jmath}_{p-m}}\big) - (n+1)T_k \delta_{k \in \{\hat{\jmath}\}} \\ &\quad - n(n+1)\big(b_{1 k} A_1 + \dots + b_{m k} A_m + b_{m+1,k} \big) + n(B_1 A_1 + \dots + B_m A_m + B_{m+1}) \end{split} \end{equation} where $\delta_{k \in \{\hat{\imath}\}}$ is $1$ if $k \in \{\hat{\imath}_1,\dots,\hat{\imath}_{n-p}\}$ and $0$ otherwise, and similarly for $\delta_{k \in \{\hat{\jmath}\}}$. We are also abusing notation above, since we only defined $S_k$ for $k \in \{\hat{\imath}_1,\dots,\hat{\imath}_{n-p}\}$ in the first place. But this is inconsequential, since the term would vanish anyway for $k \notin \{\hat{\imath}_1,\dots,\hat{\imath}_{n-p}\}$. We will find it convenient to cast these equations in a more symmetric form by a change of variables. First, let us define the sets: \begin{equation} \begin{split} \{\lambda_0,\dots,\lambda_m\} &= \{i_0,\dots,i_p\} \cap \{j_0,\dots,j_{n+m-p}\}, \text{ and } \\ \{\hat{\lambda}_1,\dots,\hat{\lambda}_{n-m}\} &= \{0,\dots,n\} \setminus \{\lambda_0,\dots,\lambda_{m}\} \end{split} \end{equation} Note that $\{\hat{\lambda}_1,\dots,\hat{\lambda}_{n-m}\} = \{\hat{\imath}_1,\dots,\hat{\imath}_{n-p}\} \sqcup \{\hat{\jmath}_1,\dots,\hat{\jmath}_{p-m}\}$. And, let us redefine the variables for $k \in \{\hat{\lambda}_1,\dots,\hat{\lambda}_{n-m}\}$, \begin{equation} Z_k = \begin{cases} S_k &\text{ if } k \in \{\hat{\imath}_1,\dots,\hat{\imath}_{n-p}\} \\ -T_k &\text{ if } k \in \{\hat{\jmath}_1,\dots,\hat{\jmath}_{p-m}\} \end{cases} \end{equation} Then, we can rewrite our equations in their final form as: \begin{equation} \label{mainProofFinalForm} \begin{split} &(Z_{\hat{\lambda}_1} + \dots + Z_{\hat{\lambda}_{n-m}}) - n(B_1 A_1 + \dots + B_m A_m + B_{m+1}) \\ &= (n+1) Z_k \delta_{k \in \{\hat{\lambda}\}} - n(n+1)\big(b_{1 k} A_1 + \dots + b_{m k} A_m + b_{m+1,k} \big) \end{split} \end{equation} where $\delta_{k \in \{\hat{\lambda}\}}$ is 1 if $k \in \{\hat{\lambda}\}$ and 0 otherwise. While these may again seem tricky to solve, some computer algebra experimentation shows that they have an elegant solution in terms of the $b_i$.
Namely, the solutions are: \begin{equation} \label{mainSolZ} \begin{split} Z_{\hat{\lambda}} &= n \frac{\det \begin{pmatrix} 1 & b_{1 \lambda_0} & \cdots & b_{m+1,\lambda_0} \\ \vdots & \vdots & & \vdots \\ 1 & b_{1 \lambda_m} & \cdots & b_{m+1,\lambda_m} \\ 1 & b_{1 \hat{\lambda}} & \cdots & b_{m+1,\hat{\lambda}} \\ \end{pmatrix} }{\det \begin{pmatrix} 1 & b_{1 \lambda_0} & \cdots & b_{m \lambda_0} \\ \vdots & \vdots & & \vdots \\ 1 & b_{1 \lambda_m} & \cdots & b_{m \lambda_m} \\ \end{pmatrix} } \\ &\text{ where } \hat{\lambda} \in \{\hat{\lambda}_1, \dots, \hat{\lambda}_{n-m} \} \end{split} \end{equation} \begin{equation}\label{mainSolA} \begin{split} A_{\ell} &= (-1)^{m-\ell+1}\frac{\det \begin{pmatrix} 1 & b_{\Tilde{\imath}_1 \lambda_0} & b_{\Tilde{\imath}_2 \lambda_0} & \cdots & b_{\Tilde{\imath}_m \lambda_0} \\ 1 & b_{\Tilde{\imath}_1 \lambda_1} & b_{\Tilde{\imath}_2 \lambda_1} & \cdots & b_{\Tilde{\imath}_m \lambda_1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & b_{\Tilde{\imath}_1 \lambda_m} & b_{\Tilde{\imath}_2 \lambda_m} & \cdots & b_{\Tilde{\imath}_m \lambda_m} \\ \end{pmatrix} }{\det \begin{pmatrix} 1 & b_{1 \lambda_0} & b_{2 \lambda_0} & \cdots & b_{m \lambda_0} \\ 1 & b_{1 \lambda_1} & b_{2 \lambda_1} & \cdots & b_{m \lambda_1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & b_{1 \lambda_m} & b_{2 \lambda_m} & \cdots & b_{m \lambda_m} \\ \end{pmatrix} }\\ &\text{ where } \ell \in \{1,\dots,m\} \text{ and } \{\Tilde{\imath}_1,\dots,\Tilde{\imath}_m\} = \{1,\dots,m+1\} \setminus \{ \ell \} \end{split} \end{equation} We prove that these formulas solve Eq(\ref{mainProofFinalForm}) in Appendix \ref{proofOfVandermonde}. But for now, let's explore their consequences. We only care about the solutions to the $Z_{\hat{\lambda}}$. Recall that since we were choosing $\epsilon_{m+1} \to 0^+$, we'll have that $\epsilon_{m+1} > 0$. In terms of our original variables, we want each of the $s_{\hat{\imath}} > 0$ and each $t_{\hat{\jmath}} > 0$. So, since $s_{\hat{\imath}} = \epsilon_{m+1} S_{\hat{\imath}}$ and $t_{\hat{\jmath}} = \epsilon_{m+1} T_{\hat{\jmath}}$, we will want to impose that each $S_{\hat{\imath}} > 0$ and each $T_{\hat{\jmath}} > 0$. This translates to saying that we want to find solutions when $Z_{\hat{\lambda}} > 0$ if $\hat{\lambda} \in \{ \hat{\imath}\}$ and $Z_{\hat{\lambda}} < 0$ if $\hat{\lambda} \in \{ \hat{\jmath} \}$. Now we pick our matrix $b_{uv}$. A nice choice to connect to the higher cup formula will be: \begin{equation} \label{higherCupSolnMatrix} b_{uv} = (\frac{1}{1+v})^{u} \end{equation} Given this choice, we'll have that $Z_{\hat{\lambda}}$ can be written in terms of the ratio of Vandermonde determinants: \begin{equation} \begin{split} Z_{\hat{\lambda}} &= n \frac{\det \begin{pmatrix} 1 & \frac{1}{1+\lambda_0} & \cdots & (\frac{1}{1+\lambda_0})^{m+1} \\ \vdots & \vdots & & \vdots \\ 1 & \frac{1}{1+\lambda_m} & \cdots & (\frac{1}{1+\lambda_m})^{m+1} \\ 1 & \frac{1}{1+\hat{\lambda}} & \cdots & (\frac{1}{1+\hat{\lambda}})^{m+1}\\ \end{pmatrix} }{\det \begin{pmatrix} 1 & \frac{1}{1+\lambda_0} & \cdots & (\frac{1}{1+\lambda_0})^m \\ \vdots & \vdots & & \vdots \\ 1 & \frac{1}{1+\lambda_m} & \cdots & (\frac{1}{1+\lambda_m})^m \\ \end{pmatrix} } \\ &= n (\frac{1}{1+\hat{\lambda}}-\frac{1}{1+\lambda_0}) \cdots (\frac{1}{1+\hat{\lambda}}-\frac{1}{1+\lambda_m}) \end{split} \end{equation} Note that each of the factors $(\frac{1}{1+\hat{\lambda}}-\frac{1}{1+\lambda_k})$ is positive iff $\lambda_k > \hat{\lambda}$. (A small numeric verification of these solution formulas, and of the resulting sign pattern, is sketched below.)
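In the spirit of the `computer algebra experimentation' mentioned above, the following sketch (ours, with arbitrary choices of $n$, $m$, and the shared indices) numerically solves the linear system Eq(\ref{mainProofFinalForm}) for the choice Eq(\ref{higherCupSolnMatrix}), and confirms both the Vandermonde product formula for the $Z_{\hat{\lambda}}$ and the sign pattern used below.
\begin{verbatim}
# Numeric check (ours): solve Eq. (mainProofFinalForm) for b_uv = (1/(1+v))^u
# and compare Z against the Vandermonde ratio n * prod_j (g(hat) - g(lam_j)).
import numpy as np

n, m = 5, 2
lam = (1, 3, 4)                     # shared indices lambda_0 < ... < lambda_m
hat = [v for v in range(n + 1) if v not in lam]
g = lambda v: 1.0 / (1 + v)
b = np.array([[g(v) ** u for v in range(n + 1)] for u in range(1, m + 2)])
B = b.sum(axis=1)

rows, rhs = [], []                  # unknowns: (Z_hat..., A_1..A_m)
for k in range(n + 1):
    row = [1.0 - (n + 1) * (k == h) for h in hat]
    row += [-n * B[l] + n * (n + 1) * b[l, k] for l in range(m)]
    rows.append(row)
    rhs.append(n * B[m] - n * (n + 1) * b[m, k])
rows, rhs = np.array(rows), np.array(rhs)
sol = np.linalg.lstsq(rows, rhs, rcond=None)[0]
assert np.allclose(rows @ sol, rhs)  # all n+1 equations are consistent

Z = dict(zip(hat, sol[:len(hat)]))
for h in hat:
    assert np.isclose(Z[h], n * np.prod([g(h) - g(l) for l in lam]))
    # Z > 0 exactly when an even number of the lambda_j lie below hat-lambda
    assert (Z[h] > 0) == (sum(l < h for l in lam) % 2 == 0)
\end{verbatim}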
In particular, if $\hat{\lambda} < \lambda_0$, then $Z_{\hat{\lambda}} > 0$. And in general, if $\lambda_k < \hat{\lambda} < \lambda_{k+1}$, then $Z_{\hat{\lambda}} < 0$ iff $k$ is even and $Z_{\hat{\lambda}} > 0$ iff $k$ is odd. But this is exactly the condition that we wanted to show to relate this to the higher cup product formula! More specifically, for valid solutions to the intersection equations where $S_{\hat{\imath}} > 0$ and $T_{\hat{\jmath}} > 0$, we'll need that $\{i_0,\dots,i_p\} = \{0 \to \lambda_0,\lambda_1 \to \lambda_2, \dots \}$ and $\{j_0,\dots,j_{n+m-p}\} = \{\lambda_0 \to \lambda_1,\lambda_2 \to \lambda_3,\dots\}$, so that each $Z_{\hat{\imath}} > 0$ and each $Z_{\hat{\jmath}} < 0$. This shows that the only cells with solutions to the intersection equations are exactly the pairs that appear in the higher cup product formulas. And, it is straightforward to check that any $n$ of the $\{\Vec{b}_1,\dots,\Vec{b}_{m+1}, c-f_0,\dots,c-f_{n}\}$ are linearly independent. We also note that there are many related choices of the $b_{uv}$ that reproduce the higher cup products. Really, choosing $$b_{uv} = g(v)^u$$ for any positive function $g$ satisfying $g(v) > g(w)$ if $v < w$ works, and the same Vandermonde argument applies. We also note that the solutions for the $A$ are certain Schur polynomials in the $g(v)$. The signs of the $A$ can thus be determined using known formulas for Schur polynomials, so we can determine on which side of the thickened cell the intersection happens. \section*{\LARGE{\underline{Interpreting the GWGK Grassmann Integral}}} \addcontentsline{toc}{section}{\LARGE{\underline{Interpreting the GWGK Grassmann Integral}}} Now, let's discuss how this geometric viewpoint of the higher cup product can be used to give a geometric interpretation, in general dimensions, of the GWGK integral as formulated in \cite{GaiottoKapustin} for triangulations of $Spin$ manifolds, and extended by Kobayashi \cite{KobayashiPin} to non-orientable $Pin^-$ manifolds. Apart from the conceptual interpretation, this will give us two practical consequences that are helpful in doing computations. First, we will be able to give equivalent expressions of the Grassmann integrals of \cite{GaiottoKapustin, KobayashiPin} without actually using Grassmann variables. Next, we will be able to formulate the Grassmann integral on any branched triangulation of a manifold, whereas in \cite{GaiottoKapustin, KobayashiPin} only the case of a barycentric subdivision was considered, which has many more cells than a typical triangulation. In two dimensions, the geometric meaning of the Grassmann integral was explained in Appendix A of \cite{GaiottoKapustin}. We also note that entirely analogous ideas of considering combinatorial $Spin$ structures in two dimensions were developed in \cite{CimasoniReshetikhin,CimasoniNonorientable} in the context of solving the dimer model, a statistical mechanics problem whose solution can be phrased in terms of Grassmann integration. \section{Background and Properties of the GWGK Grassmann integral} \label{backgroundPropertiesOfGuWenGrassmannIntegral} Let's discuss some background material and some formal properties of the Grassmann integral that we will want to reproduce. First, we'll start with some background on how geometric notions like $Spin$ and $Pin$ structures may be encoded on a triangulation. After this, we'll recall the formal properties of the Grassmann integral that we'll want to reproduce with geometric notions.
We won't give its detailed definition on a barycentrically subdivided triangulation here and instead refer the reader to \cite{GaiottoKapustin, KobayashiPin}. But we won't need its definition to proceed with our discussion. \subsection{Spin/Pin structures and $w_1, w_2$, $w_1^2$ on a triangulated manifold} Let $E \to M$ be a vector bundle over $M$ of rank $m$. We'll denote by $w_i(E) \in H^i(M,\mathbb{Z}_2)$ the $i^{th}$ Stiefel-Whitney class of $E$ over $M$. When $E$ is the tangent bundle $TM$, we'll often refer to $w_i(TM)$ as $w_i$ and as the Stiefel-Whitney classes of $M$. The way we'll choose to think about the Stiefel-Whitney classes is via their obstruction-theoretic definitions, as follows. Choose a frame of $(m-i+1)$ `generic' sections of $E$. The condition of being generic means that they are linearly independent almost everywhere, and the locus of points on $M$ where they are linearly dependent will form a closed codimension-$i$ submanifold of $M$. This locus of points will be Poincaré dual to some cohomology class $w_i(E) \in H^i(M,\mathbb{Z}_2)$. This obstruction-theoretic definition will be useful for us because it will help us make use of the vector fields we defined earlier in this note. Now, we'll want to figure out how to represent $w_1,w_2$ on a simplicial complex. We'll note first that the Poincaré duals of $w_p$ will be more naturally defined as chains living on the simplices themselves, i.e. as elements of $C_{n-p}(M,\mathbb{Z}_2) = C^p(M^\vee,\mathbb{Z}_2)$. This is in contrast to the simplicial cochains we considered previously, whose duals were naturally defined by chains living on the dual cellulation, $C_{n-p}(M^\vee,\mathbb{Z}_2) = C^p(M,\mathbb{Z}_2)$. We can see this by looking at the simplest case, $w_1$. A canonical definition of $w_1$ is that it's represented by the set of all $(n-1)$-simplices for which the branching structure gives adjacent $n$-simplices opposite local orientations. In particular, this is encoded for us by noting that if a vector field frame reverses orientations between adjacent $n$-simplices, then the orientation must reverse upon passing their shared $(n-1)$-simplex. These $(n-1)$-simplices taken together will be Poincaré dual to the cohomology class $w_1$. A manifold is orientable iff the sum of such $(n-1)$-simplices is the boundary of some collection of $n$-simplices. So while this is a canonical way to define $w_1$, for practical calculations we may typically choose a simpler representative than this canonical one, i.e. one with fewer simplices. In general, similar constructions giving formulas for chain representatives of \textit{any} Stiefel-Whitney class on a branched triangulation have been known since the 1970s, as in \cite{GoldsteinTurner}. For a barycentrically subdivided triangulation, the answer is particularly simple \cite{HalperinToledo}: \textit{every} $(n-i)$-simplex is part of $w_i$. This is one reason that the GWGK Integral in \cite{GaiottoKapustin, KobayashiPin} was more readily formulated on a barycentrically subdivided triangulation. We expect that the vector fields constructed above are closely related to these older constructions, but we have not explicitly found the relationship, and the formulas of \cite{GoldsteinTurner} may not apply directly to us. Soon, we will see a way to use our vector fields to give a canonical definition of $w_2$ on a branched triangulation. But for now, it'll be helpful to talk about $Spin$/$Pin$ structures on a triangulated manifold.
A quick review of $Spin$ and $Pin^\pm$ groups is given in Appendix \ref{spinPinAppendix}. One can generally define a $Spin$ structure on a vector bundle $E \to M$ for which $w_1(E)$ and $w_2(E)$ vanish in cohomology. A $Spin$ structure of $E \to M$ is a cochain $\eta \in C^1(M^\vee,\mathbb{Z}_2)$ with $\delta \eta = w_2(E)$. We say $\eta$ is a $Spin$ structure of $M$ if it's a $Spin$ structure of $TM$. Note that a $Spin$ structure of $M$ requires $M$ to be orientable and is only defined if its $w_2$ vanishes. Let $\det(TM)$ denote the determinant line bundle of $TM$. We can use $\det(TM)$ to characterize $Pin^{\pm}$ structures on $M$ (c.f. \cite{KirbyTaylor}). A $Pin^+$ structure may be defined on orientable or nonorientable manifolds. It has the same obstruction condition as a $Spin$ structure, and can be represented by a cochain $\eta$ s.t. $\delta \eta = w_2$. Equivalently, it can be thought of as a $Spin$ structure on $TM \oplus 3 \det(TM)$. And a $Pin^-$ structure may also be defined on orientable or nonorientable manifolds. Its obstruction condition is different from the other ones, and it is defined by a cochain $\eta$ s.t. $\delta \eta = w_2 + w_1^2$. It can also be thought of as a $Spin$ structure on $TM \oplus \det(TM)$. In general, such structures on a manifold are considered to be equivalent iff they differ by a coboundary. So, $Spin$/$Pin$ structures on $M$ are in bijection with $H^1(M,\mathbb{Z}_2)$. Another way to think about a $Spin$ structure is to consider how $E \to M$ restricts to the 1-skeleton and the 2-skeleton of $M$. A $Spin$ structure on $E$ can be thought of as a frame of $(m-1)$ linearly independent sections of $E$ over the 1-skeleton, which can be extended (generically) over the 2-skeleton of $M$, possibly becoming linearly dependent at some even number of points within each 2-cell\footnote{Note that if $E$ is orientable, then we can put a positive definite metric on $E$ which, given our $(m-1)$ linearly independent sections, defines a trivialization of $E$ over the 1-skeleton. An example where the $(m-1)$ sections must be linearly dependent at some points inside a 2-cell is the tangent bundle of $S^2$, since any generic vector field will vanish at two points on the sphere.}. This is consistent with our obstruction-theoretic definition, since it would be impossible to arrange this if every generic set of $(m-1)$ sections becomes linearly dependent a total of an odd number of times on the 2-skeleton. And, this gives us a hint of how to construct a canonical representative of $w_2(E)$. In particular, suppose we chose some trivialization of $E$ over the 1-skeleton where a generic extension becomes linearly dependent at some number of points $k$ on some 2-cell, $P$. Then, we'll have that $w_2(E)(P)=0$ if $k$ is even and $w_2(E)(P)=1$ if $k$ is odd. So this gives a chain representative of $w_2$. So if we can always construct some trivialization of $E$ over the 1-skeleton and we know how to count these degenerate points on each 2-cell, we'll have gotten our representative of $w_2(E)$. We'll see later on that we can construct such a framing canonically for $E=TM$, which will give a `canonical' chain representative of $w_2(TM)$. Although $w_1$ and $w_2$ have canonical chain representatives which can be expressed solely in terms of the branching structure, $w_1^2$ (as far as we know) does not have such an intrinsic chain-level definition.
$w_1^2$ is a `self-intersection' of the orientation-reversing wall, which can be defined by perturbing $w_1$ by a generic vector field and seeing the locus where it intersects its perturbed version. So, to define $w_1^2$, we need to specify a vector field to perturb along. The reference \cite{KobayashiPin} encodes this self-intersection in their definition of the Grassmann integral. Similarly, in our geometric construction of a $Pin^-$ structure, such a choice will be encoded in the user's choice of a trivialization of $TM \oplus \det(TM)$, which we'll see equivalently encodes this perturbing vector. So given a branching structure and this additional user choice, we can represent $w_1^2$. We'll also see how, given these framings, we can encode $Spin/Pin^-$ structures as `twists' in the background framing, which change the background framing so that it extends with even-index singularities on each $2$-cell $P$. We'll see that this can only be arranged if $w_2 + w_1^2$ is trivial. In Appendix \ref{combDefinePinPlus}, we give the construction for $Pin^+$ structures. \subsection{Formal Properties of the GWGK Grassmann integral} \label{formalPropertiesGuWen} Now, let's recall the formal properties of the GWGK Integral that we'll need to reproduce. In this section, when we denote by $M$ some manifold, we'll implicitly think of $M$ as encoding a triangulated manifold equipped with some branching structure. Formally, the GWGK Integral $\sigma(M,\alpha)$ depends on a branched triangulation, $M$, of some $n$-manifold and some closed cochain $\alpha \in Z^{n-1}(M,\mathbb{Z}_2)$. Note that elements $\alpha \in Z^{n-1}(M,\mathbb{Z}_2)$ are dual to some sum of closed loops on the dual graph. These loops are physically meant to represent worldlines of the fermions in this Euclidean setting. On an orientable manifold, $\sigma(M,\alpha)$ takes values in $\mathbb{Z}_2 = \{\pm 1\}$. On a nonorientable manifold, we'll have that $\sigma(M,\alpha)$ takes values in $\mathbb{Z}_4 = \{\pm 1, \pm i\}$, and $\sigma(M,\alpha) = \pm i$ iff $\int \alpha \cup w_1 = 1$. The definition of $\sigma(M,\alpha)$ depends on the (canonical) chain representative of $w_2$ and the (user-defined) chain representative of $w_1^2$. Given this, the main properties of $\sigma(M,\alpha)$ are: \begin{enumerate} \item Suppose $\lambda \in C^2(M,\mathbb{Z}_2)$ is Poincaré dual to an elementary 2-cell of the dual complex, so that $\delta \lambda$ is dual to the boundary of an elementary cell. Then $\sigma(M,\delta \lambda) = (-1)^{\int (w_2+w_1^2) (\lambda)} = (-1)^{\int_\lambda w_2+w_1^2}$, which is 1 if $(w_2+w_1^2)$ is zero on $\lambda$ and $-1$ if $(w_2+w_1^2)$ is nonzero on $\lambda$. \item (quadratic refinement) $\sigma(a)\sigma(b) = (-1)^{\int a \cup_{n-2} b} \sigma(a+b)$ \end{enumerate} These two properties uniquely define $\sigma$ on homologically trivial loops. For homologically nontrivial loops, it is not determined by the above properties. So, to compute the Grassmann integral for nontrivial loops: if we have the value of $\sigma$ for some loops that form a representative basis of $H_1(M,\mathbb{Z}_2)$, then we can use the quadratic refinement property to define it for any sum of closed curves on $M$. Now we should consider how $\sigma$ changes under a re-triangulation or a bordism. Suppose $M_1 \sqcup \bar{M_2} = \partial N$, so that $N$ is some triangulated bordism between $M_1$ and $M_2$. And, suppose that $\alpha \in Z^{n-1}(N,\mathbb{Z}_2)$ is a gauge field that restricts to $\alpha_{1,2}$ on $M_{1,2}$.
Then, arguments of \cite{GaiottoKapustin, KobayashiPin} show that \begin{equation} \label{guWenBordism} \sigma(M_1,\alpha_1) = \sigma(M_2,\alpha_2) (-1)^{\int_N Sq^2(\alpha) + (w_2 + w_1^2)(\alpha)} \end{equation} A special case is that if the manifold admits a $Pin^-$ structure and is $Pin^-$ null-bordant, we have the following formula (c.f. \cite{GaiottoKapustin}): \begin{equation} \sigma(M,\delta\lambda) = (-1)^{\int_M \lambda \cup_{d-3} \delta \lambda + \lambda \cup_{d-4}\lambda + (w_2+w_1^2) (\lambda)} \end{equation} Note that this formula only works if $(w_2 +w_1^2)$ is trivial on $M$, since otherwise shifting $\lambda \to \lambda + \mu$ for some $\mu$ with $\int (w_2 + w_1^2) \cup \mu = 1$ will change the integral by a factor $-1$. Now, let's comment on why the Grassmann integral is important in the context of spin-TQFT's. It is due to the fact that under a cobordism the Grassmann integral changes by a factor of $(-1)^{\int_N Sq^2(\alpha) + (w_2 + w_1^2)(\alpha)}$, which can be thought of as a `retriangulation anomaly'. We can consider coupling the theory to a $Spin$ or $Pin^-$ structure, depending on whether $w_1 = 0$. This would entail finding some cochain $\eta$ with $\delta \eta = w_2$ or $\delta \eta = w_2 + w_1^2$. Then, the combination $z_{\Pi}(M,\eta,\alpha) := \sigma(M,\alpha)(-1)^{\int \eta \cup \alpha}$ will change by a factor of $(-1)^{\int_N Sq^2(\alpha)}$ under a cobordism. So, coupling to a $Spin$ structure cancels part of this retriangulation anomaly. Note that for $M$ a 2-manifold, $Sq^2$ kills all 1-forms, so the factor $(-1)^{\int_N Sq^2(\alpha)}$ is trivial. This means that for 2-manifolds, $z_{\Pi}(M,\eta,\alpha)$ is invariant under bordisms. In fact, in \cite{GaiottoKapustin, KobayashiPin} they show that the sum of $z_{\Pi}(M,\eta,\alpha)$ over all possible loop configurations $\alpha$ can be identified precisely with the Arf invariant, or Arf-Brown-Kervaire invariant, of a $Spin$ or $Pin^-$ manifold, which exactly classifies the bordism class of a 2-manifold equipped with a $Spin$/$Pin^-$ structure. \section{Warm up: Geometric interpretation of $\sigma(M,\alpha)$ in 2D} \label{geometricGuWenIn2D} Now, let's review the geometric interpretation of the Grassmann integral in two dimensions. This was reviewed in an Appendix of \cite{GaiottoKapustin} for the case of orientable surfaces, but was also known earlier in a slightly different context in \cite{CimasoniReshetikhin,CimasoniNonorientable}. In particular, the observations and pictures drawn in \cite{CimasoniNonorientable} will be helpful in extending this understanding to the case of nonorientable manifolds, both in two and higher dimensions. \subsection{Orientable surfaces and the winding of a vector field} We will start by focusing on the story for orientable surfaces. First, we will describe a pair of linearly independent vectors along the 1-skeleton. Then, we'll give our definition of $\sigma(M,\alpha)$, which is related to how many times the vector field winds with respect to the tangent vector of the loop. Next, we'll show that our definition of $\sigma(M,\alpha)$ satisfies both formal properties that we care about. After this subsection, we'll explain how to modify the picture for the case of nonorientable surfaces. \subsubsection{Framing of $TM$ along the dual 1-skeleton and its winding along a loop} Let's describe the frame of vectors we'll use along the dual 1-skeleton. For our purposes in this section, it will suffice to describe them pictorially.
First, we can note that on an orientable surface with a branched triangulation, it is possible to consistently label the 2-simplices as either $+$ or $-$. A consistent labeling means that if two of the simplices are adjacent, then their labelings of $\pm$ will be the same iff the local orientations defined by the branching structures match. So, we choose some consistent labeling of the simplices. Given such a consistent labeling, the framing along the 1-skeleton can be described as in Figure(\ref{vectorFieldOn2Simplices}). Away from the center of the 2-simplex, we'll have one vector that runs along the 1-skeleton, and the vector `perpendicular' to it will point opposite to the arrow defining the branching structure. This vector field is related to the flow that we constructed earlier, in Fig(\ref{fig:2_Simplex_flowed}), for the 2-simplex. This is because, when we deform the vector fields in the manner depicted close to the center, one of the vectors will be pointing in the same direction as that flow. Note that if there's a globally defined orientation, these vector fields will be consistent with each other when glued together on the boundaries of the adjacent simplices. However, in the nonorientable case, there will be some inconsistencies that occur when the representative of $w_1$ doesn't vanish on the simplices' shared boundary. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{vectorFieldOn2Simplices.png} \caption{A pair of vector fields (in pink) along the 1-skeleton of a triangulation of a surface, for both positively-oriented (left) and negatively-oriented (right) 2-simplices. In this picture they're drawn `perpendicular' to each other. I.e. one vector field is parallel to the 1-skeleton away from the center and the other vector field is `perpendicular' to the 1-skeleton away from the middle. Note that in the center of the simplex, one of the vector fields is parallel to the flow vector depicted in Fig(\ref{fig:2_Simplex_flowed}). In green, we show the counterclockwise `winding angle' between these vector fields and the tangent vector of a curve that's restricted to the 1-skeleton.} \label{vectorFieldOn2Simplices} \end{figure} Also, a global orientation means that we can talk about how many times this vector field frame `winds' in a counterclockwise direction with respect to the tangent vector of the loop. In Fig(\ref{vectorFieldOn2Simplices}), we show what these winding angles would look like for orientable manifolds. This winding will be crucial in constructing $\sigma(M,\alpha)$. Note that for the nonorientable case, we will have to be more careful in defining this winding, since `clockwise' and `counterclockwise' won't make sense. It will turn out that the analog of the `winding' can be expressed by a matrix, and these matrices won't necessarily commute. \subsubsection{Definition of $\sigma(M,\alpha)$ in 2D and its formal properties} Now, let us define $\sigma(M,\alpha)$ in two dimensions and show that it satisfies the formal properties we listed in Section \ref{formalPropertiesGuWen}. Given some cocycle $\alpha \in Z^1(M,\mathbb{Z}_2)$, we can represent it by some collection of curves on the dual 1-skeleton, which we'll denote $C_1,\dots,C_k$. Since the dual 1-skeleton is a trivalent graph, this decomposition into loops is unambiguous. For each curve $C_i$, define the quantity $wind(C_i)$ as the number of times the above vector field winds with respect to the tangent vector.
Then the weight $\sigma(M,\alpha)$ will be defined as: \begin{equation} \sigma(M,\alpha) = \prod_{i=1}^k (-1)^{1+wind(C_i)} = (-1)^{\text{\# of loops}}\prod_{i=1}^k (-1)^{wind(C_i)} \end{equation} It's clear that this is well-defined, since $wind(C)$ will be the same mod 2 whether we traverse the curve forwards or backwards. Now let's see why this quantity satisfies the formal properties we care about. First, we should show that a loop $C$ surrounding an elementary dual 2-plaquette, $P$, has a sign of $-1$ if $\int_P w_2 = 1$ and a sign of $+1$ if $\int_P w_2 = 0$. So, we should show $$(-1)^{\int_P w_2} = \sigma(\alpha_C)$$ where $\alpha_C$ is the cochain representing the elementary plaquette loop $C$. The winding number definition will actually naturally (perhaps tautologically) satisfy this due to the obstruction theoretic definition of $w_2$. Suppose a vector field has winding number $wind(C)$ with respect to the tangent of a simple closed curve $C$. Then (depending on sign conventions) a generic extension of the vector field to the interior $P$ of $C$ will vanish at $(\pm 1 \pm wind(C))$ points. So, our obstruction theoretic definition tells us that: $$\int_{P} w_2 = 1 + wind(C) \quad \quad \text{(mod 2)}$$ which matches up with $\sigma(\alpha_C) = (-1)^{1+wind(C)} = (-1)^{\int_P w_2}$ for such elementary plaquette loops. Next, we should show the quadratic refinement property, i.e. we should show for cochains $\beta$ and $\beta'$ that: $$\sigma(\beta)\sigma(\beta') = (-1)^{\int_M \beta \cup \beta'} \sigma(\beta+\beta')$$ So, the Grassmann integral of the sum of two cocycles will be the product of the Grassmann integrals of each summand, times this extra $(-1)^{\text{mod 2 intersection number of } \beta, \beta'}$. The argument for this is due to Johnson \cite{Johnson}, who was studying the closely related notion of quadratic forms associated to 2D $Spin$ structures. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{intersectionsAndWindings.png} \caption{Different cases of shared segments of loops intersecting on a trivalent graph. Sums here are implicitly done modulo 2. The loop from $\beta$ is in green and the loop from $\beta'$ is in pink, the reference vector field is in red, and the intersection region that's shared by both $\beta$ and $\beta'$ is in blue. For the cases of Type I or Type II crossings, combining the loops after discarding the intersection region makes the number of loops change by 1. Type I crossings have intersection number zero and cause the winding number to change by 1 (mod 2) after resolving the intersection. Type II crossings have intersection number one and leave the winding unchanged. In general when $\beta$ and $\beta'$ share several different segments, we must choose the first resolved intersection to be either a Type I or II crossing. After the first intersection is resolved creating a combined loop, Type III and IV crossings can be resolved. Resolving a Type III or IV crossing doesn't change the number of loops, and is a two-step process where we reconnect the combined loop into two loops, which then allows us to reduce the resolution to either a Type II or I crossing, respectively.} \label{intersectionsAndWindings} \end{figure} Note that when the loops representing the cocycles never intersect, the formula is immediate, so we only need to consider what happens when loops from the different cocycles intersect each other. 
In particular, we'll want to visualize what happens to the windings when we combine the loops and discard the pieces that they both share. For these loops living on these kinds of trivalent graphs, loops intersecting will necessarily share some finite segment of edges. In general, the loops may intersect along a collection of several different segments of edges. The strategy is to resolve each intersecting segment of edges one at a time. So, we need to show that quadratic refinement holds as we resolve each intersection. We'll summarize the logic here, but refer to Fig(\ref{intersectionsAndWindings}) for a more detailed view. For the first segment of intersections that is resolved, we have the freedom to change the directions of the curves so that they are directed oppositely to each other on their shared segments. The cases we'll need to distinguish are whether the loops exit their shared line segments on the \textit{same} side of the shared segments, or on \textit{opposite} sides of the shared segments, which are labeled as Type I and Type II crossings in Fig(\ref{intersectionsAndWindings}). One can check that both cases change the number of loops by $\pm 1$. Type I crossings will contribute $0$ to the mod 2 intersection number, and Type II crossings will contribute $1$ to the mod two intersection number. And, Type I crossings change the total winding number by 1 whereas Type II crossings don't change the total winding number at all. This means that for Type I crossings, $(-1)^{\text{\# of loops}}(-1)^{\text{winding}}$ for the sum $\beta + \beta'$ is \textit{locally} the same as the $\cup$ products for $\beta$ and $\beta'$. Whereas for Type II crossings, $(-1)^{\text{\# of loops}}(-1)^{\text{winding}}$ for $\beta + \beta'$ differs \textit{locally} by a factor of $-1$ from the $\cup$ products for $\beta$ and $\beta'$. So, summing over all intersections, the quantity $\sigma(\beta + \beta') \{\sigma(\beta)\sigma(\beta')\}^{-1}$ will be $(-1)$ raised to the number of Type II crossings between $\beta$ and $\beta'$, i.e. $(-1)$ raised to the mod 2 intersection number of $\beta$ and $\beta'$. This is precisely the statement of quadratic refinement. If this segment were the only intersection region, then we're done. But now, we want to resolve the rest of the segments of intersections. Resolving the first intersection segment functioned as combining the two curves into one, and this combined curve may intersect itself in many different places. Some of these intersection regions look exactly like Type I or II crossings, for which the same logic applies as in the previous paragraph. But there's also the possibility that the combined curve's shared regions are pointing in the same direction as each other, which are the Type III and IV crossings in Fig(\ref{intersectionsAndWindings}). We can resolve these intersections in a two-step process. First, reconnect the edges, which turns the combined loop into two loops as in Fig(\ref{intersectionsAndWindings}). Then, for a Type III or IV crossing, after reversing one of these two reconnected loops we'll respectively get Type II and I crossings, which can then be resolved as such. Note that resolving these kinds of intersections ends up \textit{not} changing the number of loops, but the quadratic refinement property does hold after each such resolution. 
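The upshot of this subsection is that, on an orientable surface, $\sigma(M,\cdot)$ is a $\mathbb{Z}_2$-valued quadratic refinement of the mod 2 intersection form, and by the bordism discussion of the previous section its Gauss sum computes the Arf invariant. The following minimal sketch (in Python; purely illustrative, since the genus and the values of the form on a symplectic basis are arbitrary choices rather than data read off from a triangulation) checks quadratic refinement by brute force and recovers the Arf invariant:

\begin{verbatim}
import itertools
import numpy as np

g = 2                      # genus; arbitrary choice for illustration
n = 2 * g

# Mod 2 intersection form in a symplectic basis a_1, b_1, a_2, b_2, ...
J = np.zeros((n, n), dtype=int)
for i in range(g):
    J[2*i, 2*i + 1] = J[2*i + 1, 2*i] = 1

def dot(x, y):             # mod 2 intersection number
    return int(x @ J @ y) % 2

# A quadratic refinement is fixed by its values on the basis vectors.
# This (arbitrary, illustrative) choice has Arf invariant 1.
q_basis = [1, 1] + [0] * (n - 2)

def q(x):
    # q on a sum of basis vectors, expanded via quadratic refinement.
    s = [i for i in range(n) if x[i]]
    val = sum(q_basis[i] for i in s)
    val += sum(J[i, j] for i, j in itertools.combinations(s, 2))
    return val % 2

vecs = [np.array(v) for v in itertools.product([0, 1], repeat=n)]

# Brute-force check of quadratic refinement.
for x in vecs:
    for y in vecs:
        assert (q((x + y) % 2) - q(x) - q(y) - dot(x, y)) % 2 == 0

# Gauss sum: +2^g if Arf(q) = 0 and -2^g if Arf(q) = 1.
gauss = sum((-1) ** q(x) for x in vecs)
print("Gauss sum:", gauss, "=> Arf invariant:", 0 if gauss > 0 else 1)
\end{verbatim}

Running this prints a Gauss sum of $-4$, corresponding to an odd $Spin$ structure on a genus-2 surface; the nonorientable story below is the $\mathbb{Z}_4$-valued analog of the same computation.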
\subsection{Nonorientable surfaces and `non-commuting' windings on $Pin^-$ surfaces} Now, we will describe how to define $\sigma(M,\alpha)$ on nonorientable surfaces and see how we can connect it to the geometry of $Pin^-$ structures. This presentation is motivated by the entirely analogous ideas of \cite{CimasoniNonorientable}, who found a way to combinatorially encode the construction of \cite{KirbyTaylor} of $\mathbb{Z}_4$-valued quadratic forms on $Pin^-$ surfaces. Recall that a $Pin^-$ structure on $TM$ can be thought of as a $Spin$ structure on $TM \oplus \det(TM)$. So, we'll have that $\sigma(M,\alpha)$ will be related to some winding with respect to a trivialization of $TM \oplus \det(TM)$ over $M$'s 1-skeleton. First, we'll describe possible framings of $TM \oplus \det(TM)$ along the 1-skeleton and see how different choices of the framing can be related to different choices of the chains representing $w_1^2$. Then we'll define $\sigma(M,\alpha)$ and show how its formal properties match the ones we want. Recall that the main issue in dealing with nonorientable surfaces is that it's not possible to consistently label 2-simplices as $+$ and $-$ with neighboring simplices having the same labeling iff their orientations locally agree. To deal with this, we'll just choose some labeling of $+$ and $-$ 2-simplices, and there will be some set of 1-simplices representing $w_1$ across which the local orientations don't match the labeling. \subsubsection{Framing of $TM \oplus \det(TM)$ along the dual 1-skeleton} \label{sec:TMdetTMFraming} Since we are adding an extra direct summand of $\det(TM)$ to the tangent bundle, it will be natural for us to visualize $TM \oplus \det(TM)$ at a point via a 2D plane parallel to the surface and a `third dimension' sticking out perpendicular to the plane. We've depicted such a framing for a positively oriented simplex in Fig(\ref{framingOfTM+DetTM}). Inside a 2-simplex, the framing of $TM \oplus \det(TM)$ along the 1-skeleton looks similar to the framings in Fig(\ref{vectorFieldOn2Simplices}), except there will be an extra vector pointing in a direction `normal' to the surface, representing the framing of $\det(TM)$, in addition to the two vectors we had before, pointing in the directions along the surface. We'll refer to this vector along $\det(TM)$ as the `orientation vector'. We can give this framing an order by saying that the first (`$x$') vector is the orientation vector, the third (`$z$') vector is the one in $TM$ pointing along the 1-skeleton, and the second (`$y$') vector is the other vector along $TM$, transverse to the 1-skeleton. Similarly, we can define the same kind of framing on a negatively oriented simplex, and as long as two neighboring simplices are not separated by a representative of $w_1$, this framing can be extended in the same way as the orientable case. The fact that $TM \oplus \det(TM)$ is always orientable ensures that this `normal' direction, or `orientation vector' along the dual 1-skeleton is well-defined on the interior of a 2-simplex. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{framingOfTMPlusDetTM.png} \caption{The framing of $TM \oplus \det(TM)$ along the dual 1-skeleton of a positively oriented simplex. The directions $y,z$ represent the coordinates for $TM$ going along the surface and the direction $x$ represents the coordinate for $\det(TM)$, which is depicted as `normal' to the surface. 
A curve that goes through this simplex can have $TM \oplus \det(TM)$ framed relative to these curves in two different ways as described in the main text, with the curve's `orientation vector' starting in either the same or opposite direction as that of the 2-simplex. ``Same orientation'' means the two orientation vectors start in the same direction and ``opposite orientation'' means they start in opposite directions. As one traverses along the directed curve, the curve's framing may change with respect to the framing along the 1-skeleton. The only segments along which the framing changes are listed as (a) and (b), together with the explicit matrix by which the framing changes. Here $e^{\pm i \pi L_x}$ refers to a path in $SO(3)$ parameterized as $e^{\pm i t L_x}$ for $t:0 \to \pi$, whose endpoints for different choices of $\pm$ will be the same in $SO(3)$ but lift to different elements of $SU(2)$.} \label{framingOfTM+DetTM} \end{figure} Although these assignments can unambiguously determine framings inside each 2-simplex, there's an issue of what happens when the local orientation on $TM$ reverses, i.e. when we cross a 1-simplex where $w_1 \neq 0$. Since an orientation of $TM$ can't be defined everywhere, a trivialization of $TM \oplus \det(TM)$ requires that we rotate $TM$ and $\det(TM)$ into each other near $w_1$, where the orientation of $TM$ reverses. For each potential choice of mismatching framings, we can consider two different ways of extending them to match across $w_1$, as depicted in Fig(\ref{framingAcrossW1}). In particular, we'll only consider the possibilities of rotating into each other the `orientation vector' and the vector going along the dual 1-simplex. When doing this, our two choices to consider amount to our choice of which direction along the dual 1-simplex the orientation vector points as it traverses $w_1$. This choice can give us a choice of vector field transverse to the $w_1$ surface along the 1-skeleton as follows (see Fig(\ref{framingAcrossW1})). The orientation vector as it crosses $w_1$ will point to one side of $w_1$. On that side of $w_1$, we consider the background frame's `$z$' vector that usually points along the 1-skeleton. The direction this `$z$' vector points will determine the transverse direction. And, as depicted in Fig(\ref{w1Squared}), this choice of extensions of the framings across $w_1$ will define a representative of $w_1^2$. This is because if two adjacent dual edges have this vector pointing in opposite directions, then in between those two edges, there will be an odd number of self-intersections of $w_1$, as defined by some extension of these vectors. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{framingAcrossW1.png} \caption{The framing of $TM \oplus \det(TM)$ when crossing a representative of $w_1$. Since an orientation of $TM$ can't be defined everywhere, we need to rotate $\det(TM)$ and $TM$ into each other across $w_1$. The pairs (a,b) and (c,d) are the different choices of extension for framings that match away from $w_1$. These different choices will end up corresponding to different windings with respect to the loop, whose different possibilities are listed in the table. In addition, they correspond to different choices of vectors transverse to $w_1$ at the 1-skeleton, which correspond to shifting $w_1$ along a perturbing vector. This is depicted as the shift of the solid black line, representing $w_1$, to the dashed black line, representing its shift along the perturbing vector. 
The perturbing vector is given by the `$z$' vector along the 1-skeleton, on the side of the $w_1$ surface that the orientation vector points to as it traverses $w_1$, circled in red and pointed to by the purple dashed arrow, which points in the same direction as the orientation vector along $w_1$. } \label{framingAcrossW1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.5\linewidth]{w1Squared.png} \caption{The extension of the framing $TM \oplus \det(TM)$ across $w_1$ defines vectors along the 1-skeleton transverse to $w_1$ as described in Fig(\ref{framingAcrossW1}). Some generic extension of this vector field will allow us to define a vector field along which we can define $w_1^2$, the self-intersection of $w_1$. In particular, $w_1^2$ is nonzero on some dual 2-plaquette iff the vector field points in opposite directions along the two links on the plaquette that intersect $w_1$.} \label{w1Squared} \end{figure} \subsubsection{Winding matrices around a loop}\label{2DWindingMatrices} Now that we've defined a background framing of $TM \oplus \det(TM)$ along the 1-skeleton, we should think about how to define the winding matrices, the analog of the orientable case's winding numbers. \underline{\textbf{The relative framing around a loop}} Recall that in the orientable case, we compared the winding of the background vector field with respect to the tangent of the curve. For the nonorientable case, we'll be comparing this background framing of $TM \oplus \det(TM)$ to a certain `tangent framing' of $TM \oplus \det(TM)$ along a loop. This tangent framing along the loop is defined as follows. Pick a starting point along the curve on the interior of a 2-simplex. Then, the first vector (`$x$') will be identified with the `orientation vector' of the framing, along the $\det(TM)$ part. The tangent vector of the curve will define the third (`$z$') vector of the tangent frame. And, the orientation of $TM \oplus \det(TM)$ will determine the second (`$y$') vector \footnote{Strictly speaking, we'd need to define some positive-definite metric on $TM \oplus \det(TM)$ to do this unambiguously.}. Given the background framing of $TM \oplus \det(TM)$ and this tangent framing along some given loop $C$, we will want to compare these framings as we go along a loop. Assuming that, with respect to some positive definite metric, these frames are orthonormal, each point along the loop defines some element of $SO(3)$. So, going around the loop will mean that these relative framings define some path in $SO(3)$, i.e. a function $f_C: [0,1] \to SO(3)$. The possible changes in relative framing (i.e. changes in $f_C$) for a loop traversing inside a 2-simplex are given in (a,b) of Fig(\ref{framingOfTM+DetTM}) \footnote{We denote by $i L_x=\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}, i L_y=\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, i L_z=\begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$ the basis of the $\mathfrak{so}(3)$ Lie Algebra in its defining representation, satisfying $[i L_x,i L_y] = i L_z, [i L_y,i L_z] = i L_x, [i L_z,i L_x] = i L_y$ } for a positively oriented simplex, and can similarly be found for a negatively oriented simplex. And the path of winding going across $w_1$ is given in Fig(\ref{framingAcrossW1}). Note that the path of winding depends on the direction we traverse and also on whether the orientation vector of the background framing agrees with the orientation vector of the tangent framing. 
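As an aside, the statement in the caption of Fig(\ref{framingOfTM+DetTM}), that the endpoints of the paths $e^{\pm i t L_x}$ for $t: 0 \to \pi$ agree in $SO(3)$ but lift to different elements of $SU(2)$, can be checked numerically. Below is a minimal sketch (in Python with numpy, using closed-form rotation matrices; the sign convention for the covering map $SU(2) \to SO(3)$ is our own choice, and only internal consistency matters):

\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def so3_x(t):
    # e^{i t L_x}: rotation by angle t about the x-axis.
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def su2_lift_x(t):
    # Lift of the path e^{i t L_x} through SU(2) -> SO(3); with our
    # conventions Ad(e^{-i t X/2}) = e^{i t L_x}.
    return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * X

# The two paths e^{+i t L_x}, e^{-i t L_x} end at the same rotation ...
assert np.allclose(so3_x(np.pi), so3_x(-np.pi))
# ... but their lifts end at -iX and +iX respectively.
assert np.allclose(su2_lift_x(np.pi), -1j * X)
assert np.allclose(su2_lift_x(-np.pi), 1j * X)

# Sanity check of the covering map at a generic angle:
# R_{ab} = (1/2) tr(sigma_a U sigma_b U^dagger).
t = 0.7
U = su2_lift_x(t)
R = np.array([[0.5 * np.trace(a @ U @ b @ U.conj().T).real
               for b in (X, Y, Z)] for a in (X, Y, Z)])
assert np.allclose(R, so3_x(t))
\end{verbatim}

The last check verifies the covering map at a generic angle, which is the precise sense in which the $SU(2)$ path `lifts' the $SO(3)$ path.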
We will find it convenient to normalize $f_C$ so that $f_C(0) = \mathbbm{1}$, so that we only measure the change in framing from the start. Denote by $\alpha_C \in H^1(M,\mathbb{Z}_2)$ the cohomology class associated to $C$. We note that if $\int w_1 \cup \alpha_C = 0$, i.e. if $C$ crosses $w_1$ an even number of times, then $f_C(1)$ will be the identity, $f_C(1) = \mathbbm{1} \in SO(3)$. To see this, first note that the vectors along the 1-skeleton will be in the same relative orientation at the beginning and end of the loop. Next, note that crossing the $w_1$ surface an even number of times means that the tangent framing's orientation vector at the end of the loop will agree with the background framing's orientation vector, just like in the beginning of the loop. This means that the relative framings are the same at the beginning and end of the loop, i.e. that $f_C(1) = \mathbbm{1}$. By similar reasoning, we can note that if $\int w_1 \cup \alpha_C = 1$, i.e. $C$ crosses $w_1$ an odd number of times, then (relative to the coordinates `x,y,z') $$f_C(1) = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} \in SO(3)$$ Again, the relative orientations of the vectors along the 1-skeleton are the same at the beginning and at the end. But, the orientation vectors at the end of the loop will have a relative sign change since there's such a sign change every time we cross $w_1$. This means that, relatively between the initial and final frames, the `$x$' and `$y$' coordinates will flip sign. \underline{\textbf{The quadratic form on a $Pin^-$ surface}} Now, we should mention what exactly these relative framings have to do with a $\mathbb{Z}_4$-valued quadratic form. For motivation, let us recall the definition of the $\mathbb{Z}_4$-valued quadratic form of Kirby/Taylor \cite{KirbyTaylor}. There, given some closed, non-self-intersecting loop $C$ in the $Pin^-$ surface $M$, we can consider the restriction of $TM \oplus \det(TM)$ to $C$, which we call $\tau$. First, a $Pin^-$ structure on $M$ gives a trivialization of $\tau$. Now, denote $E$ as the total space of the bundle $\det(TM) \to M$. Another way to decompose $\tau$ is as: $$\tau = TM \oplus \det(TM)|_{C} = TC \oplus N_{C \subset M} \oplus N_{M \subset E}$$ where $TC$ is the tangent bundle of the curve $C$ and $N_{A \subset B}$ denotes the normal bundle of a submanifold $A \subset B$. Note that we can trivialize $TC$ by considering tangent vectors in the direction we traverse. The definition of the quadratic form of \cite{KirbyTaylor} involves comparing the framing induced by the $Pin^-$ structure's trivialization of $\tau$ with the framing induced by this bundle decomposition, in the same way that the orientable winding number was obtained by comparing two frames. To do this, first, we pick some framing of $\tau$ that's homotopic to the one induced by the $Pin^-$ structure and for which the third `$z$' vector lies along the curve's tangent. Then, we can choose the orientations of both framings to match each other at the starting point of the curve. Then the quadratic form $z(C)$ associated to $C$ is $$z(C) = -i^{\text{number of right half-twists (mod 4)}}$$ where `number of right half-twists (mod 4)' is the number of right-handed half-twists (mod 4) that $N_{C \subset M}$ makes traversing the loop compared to this background framing homotopic to the $Pin^-$ framing. \footnote{The (mod 4) factor comes in because different choices of framings homotopic to the $Pin^-$ one will differ by 4 in the number of right half-twists. 
This is because framings of the rank-3 bundle $\tau$ form a $\pi_1(SO(3))=\mathbb{Z}_2$ torsor, while framings of the rank-2 bundle $N_{C \subset M} \oplus N_{M \subset E}$ form a $\pi_1(SO(2))=\mathbb{Z}$ torsor. So, two framings of $N_{C \subset M} \oplus N_{M \subset E}$ that differ in $\mathbb{Z}$ by 2 will be homotopically the same framing of $\tau$ in $\mathbb{Z}_2$, and they will differ by 4 right half-twists going around.} The $-1$ in front corresponds to the $(-1)^\text{number of loops}$ factor that we had in the orientable case. It's shown in \cite{KirbyTaylor} that, given some set of disjoint loops $C_1,\dots,C_k$ on $M$ representing $\alpha \in H^1(M,\mathbb{Z}_2)$, the function \begin{equation} z(\alpha) = \prod_{i=1}^k z(C_i) \end{equation} doesn't depend on the representative curves of $\alpha$ and so is a function on cohomology classes. And they also show the quadratic refinement property holds, that $z(\beta)z(\beta') = z(\beta + \beta') (-1)^{\int \beta \cup \beta'}$ for $\beta, \beta' \in H^1(M,\mathbb{Z}_2)$. \underline{\textbf{Our analog of the quadratic form on the 1-skeleton and how to compute it}} Now, let's think about how this `number of right half-twists' is encoded in our function $f_C$. Since our function $f_C$ is a function $[0,1] \to SO(3)$ with $f_C(0) = \mathbbm{1}$, we'll be able to lift it to a unique function $\Tilde{f}_C: [0,1] \to SU(2)$ with $\Tilde{f}_C(0) = \mathbbm{1} \in SU(2)$. The homotopy class of the path $f_C$, and consequently the number of right half-twists it makes, is determined by the endpoint of its lift, i.e. by $\Tilde{f}_C(1)$. Recall we found earlier that $$f_C(1) = \mathbbm{1} \in SO(3) \quad \text{iff} \quad \int w_1 \cup \alpha_C = 0 $$ and that $$f_C(1) = \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} \in SO(3) \quad \text{iff} \quad \int w_1 \cup \alpha_C = 1$$ This implies that the endpoint of the lift $\Tilde{f}_C(1)$ can take the possible values $$\Tilde{f}_C(1) = \pm \mathbbm{1} \in SU(2) \quad \text{iff} \quad \int w_1 \cup \alpha_C = 0$$ and \footnote{We denote by $X=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, Y=\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, Z=\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$ the lifts of $L_x,L_y,L_z$ to matrices of the $\mathfrak{su}(2)$ Lie Algebra in the fundamental representation.} $$\Tilde{f}_C(1) = \pm i Z \in SU(2) \quad \text{iff} \quad \int w_1 \cup \alpha_C = 1$$ The cases of $\Tilde{f}_C(1) = \{\mathbbm{1}, iZ, -\mathbbm{1},-iZ\}$ correspond to the number of right half-twists being $\{0,1,2,3\}$ (mod 4). So, we can see that \begin{equation} \label{defNumberRightHalfTwists} \begin{pmatrix} 1 & 0 \end{pmatrix} \Tilde{f}_C(1) \begin{pmatrix} 1 \\ 0 \end{pmatrix} = i^{\text{number of right half-twists}} \end{equation} Now, one may be concerned that this definition depends on things like the starting point of the curve and the direction we traverse the curve. We can show that this is not the case as follows. First, we'll translate the windings of Figs(\ref{framingOfTM+DetTM},\ref{framingAcrossW1}), which denoted changes in $f_C$, into how they lift as corresponding changes of $\Tilde{f}_C$. We'll have to address that the winding on a part of the loop depends on the relative direction of the orientation vector. 
To do this, it's convenient to introduce a 2-component tuple of orientations: \begin{equation} \mathcal{O} = \begin{pmatrix} \mathcal{O}_\text{same} \\ \mathcal{O}_\text{opposite} \end{pmatrix} \end{equation} Each of $\mathcal{O}_\text{same},\mathcal{O}_\text{opposite}$ will be either $0$ or an element of $SU(2)$, and only one component at a time will be nonzero. $\mathcal{O}_\text{same}$ being nonzero means that the orientation vectors agree between the background and tangent framings. And $\mathcal{O}_\text{opposite}$ being nonzero means that they disagree. As we said before, at the beginning of the loop we choose the orientation vectors to agree. So the initial tuple will be: $$\mathcal{O}^\text{initial} = \begin{pmatrix} \mathbbm{1} \\ 0 \end{pmatrix} $$ And, at the end of the loop we'll have some tuple $\mathcal{O}^\text{final}$, from which we can extract $\Tilde{f}_C(1)$ as: \begin{equation} \Tilde{f}_C(1) = \mathcal{O}_\text{same}^\text{final} + \mathcal{O}_\text{opposite}^\text{final} = \begin{pmatrix} \mathbbm{1} & \mathbbm{1} \end{pmatrix} \mathcal{O}^\text{final} \end{equation} To get from $\mathcal{O}^\text{initial}$ to $\mathcal{O}^\text{final}$, there will be some sequence of matrices $\{W_1,\dots,W_k\}$ so that $\mathcal{O}^\text{final} = W_k \cdots W_1 \mathcal{O}^\text{initial}$. Each $W_j$ will be a $2 \times 2$ array of blocks, where each nonzero block is in $SU(2)$. In the cases where the part $j$ of the loop keeps the orientation vector relatively the same (i.e. parts within a 2-simplex), $W_j$ will be block-diagonal. And for the parts where the orientation vector relatively switches (i.e. going across $w_1$), $W_j$ will be block-off-diagonal. For a `$+$' simplex, we'll have: $$W_j = \begin{pmatrix} -i X & 0 \\ 0 & i X \end{pmatrix} \quad \text{if} \quad \hat{2} \to \hat{0}$$ $$W_j = \begin{pmatrix} i X & 0 \\ 0 & -i X \end{pmatrix} \quad \text{if} \quad \hat{0} \to \hat{2}$$ $$W_j = \begin{pmatrix} \mathbbm{1}& 0 \\ 0 & \mathbbm{1} \end{pmatrix} \quad \text{otherwise}$$ For a `$-$' simplex, we'll have: $$W_j = \begin{pmatrix} i X & 0 \\ 0 & -i X \end{pmatrix} \quad \text{if} \quad \hat{2} \to \hat{0}$$ $$W_j = \begin{pmatrix} -i X & 0 \\ 0 & i X \end{pmatrix} \quad \text{if} \quad \hat{0} \to \hat{2}$$ $$W_j = \begin{pmatrix} \mathbbm{1}& 0 \\ 0 & \mathbbm{1} \end{pmatrix} \quad \text{otherwise}$$ And, we'll have a couple of cases to consider when crossing $w_1$ as we depicted in Fig(\ref{framingAcrossW1}). $$W_j = \begin{pmatrix} 0 & iY \\ -iY & 0 \end{pmatrix} \quad \text{cases (a.i,b.ii,c.i,d.ii) of crossing } w_1$$ $$W_j = \begin{pmatrix} 0 & -iY \\ iY & 0 \end{pmatrix} \quad \text{cases (a.ii,b.i,c.ii,d.i) of crossing } w_1$$ Now, we can show that the number of right half-twists as defined in Eq(\ref{defNumberRightHalfTwists}) is well-defined: that it doesn't depend on the starting point of the curve and doesn't depend on the direction we go. One thing to note is that although the individual matrices $iX,iY,iZ$ don't commute with each other, all of the matrices $W_j$ will commute due to their block structure. This ensures that the tuple $\mathcal{O}^\text{final} = W_k \cdots W_1 \mathcal{O}^\text{initial}$ is independent of the starting point of the path. Another thing to note is that for a segment $j$ of the path, $W_j$ is the negative of the matrix obtained by traversing that part in the opposite direction. 
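Since the $W_j$ are completely explicit, this bookkeeping is easy to experiment with numerically. The sketch below (the sequences of segments fed to it are hypothetical toy loops, not read off from an actual triangulated surface) builds the winding matrices as $2\times 2$ arrays of $SU(2)$ blocks, verifies that they all commute, and extracts $i^{\text{number of right half-twists}}$ via Eq(\ref{defNumberRightHalfTwists}):

\begin{verbatim}
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
O2 = np.zeros((2, 2), dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

# Within-simplex segments (block-diagonal) and w_1 crossings
# (block-off-diagonal), following the formulas above.
W_plus  = np.block([[-1j * X, O2], [O2, 1j * X]])  # '+' simplex, 2-hat -> 0-hat
W_minus = np.block([[1j * X, O2], [O2, -1j * X]])  # '-' simplex, 2-hat -> 0-hat
W_w1a   = np.block([[O2, 1j * Y], [-1j * Y, O2]])  # crossing w_1, cases (a.i,...)
W_w1b   = np.block([[O2, -1j * Y], [1j * Y, O2]])  # crossing w_1, cases (a.ii,...)

Ws = (W_plus, W_minus, W_w1a, W_w1b)
assert all(np.allclose(A @ B, B @ A) for A, B in product(Ws, Ws))

def half_twists(segments):
    O = np.vstack([I2, O2])           # O^initial = (1, 0)^T, blockwise
    for W in segments:                # O^final = W_k ... W_1 O^initial
        O = W @ O
    f1 = np.hstack([I2, I2]) @ O      # f~_C(1) in SU(2)
    phase = f1[0, 0]                  # = i^{# right half-twists}
    assert np.isclose(abs(phase), 1)  # holds for consistent loop data
    return int(np.round(np.angle(phase) / (np.pi / 2))) % 4

print(half_twists([W_w1a, W_plus]))   # one w_1 crossing:  1 half-twist
print(half_twists([W_w1a, W_w1b]))    # two crossings:     2 half-twists
print(half_twists([W_w1a, W_w1a]))    # two crossings:     0 half-twists
\end{verbatim}

Note that two $w_1$ crossings of opposite type multiply to $-\mathbbm{1}_{4 \times 4}$ while two crossings of the same type multiply to $+\mathbbm{1}_{4 \times 4}$, a sign rule that will reappear in the elementary plaquette computation below.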
And, note that the total number of nontrivial matrices among $W_1,\dots,W_k$ will be even, since every part that contributes a nontrivial $W_j$ reverses the relative direction of the vector along the 1-skeleton, and this relative direction stays the same between the beginning and the end. So, reversing the path will change $\mathcal{O}^\text{final}$ by an even number of minus signs, i.e. it keeps $\mathcal{O}^\text{final}$ the same. We can also note that if we started out with the orientation vector of the tangent frame in the \textit{opposite} direction as the background frame, as opposed to the same direction, then this would also leave the number of right half-twists the same. This would amount to defining $\mathcal{O}^\text{initial} = \begin{pmatrix} 0 \\ \mathbbm{1} \end{pmatrix}$. This would give the same $\Tilde{f}_C(1)$. We can see this because starting with this $\mathcal{O}^\text{initial}$ is equivalent to starting with $\begin{pmatrix} \mathbbm{1} \\ 0 \end{pmatrix}$ and conjugating $W_k \cdots W_1$ by $\begin{pmatrix} 0 & \mathbbm{1} \\ \mathbbm{1} & 0 \end{pmatrix}$. Conjugating each $W_j$ by this matrix introduces a minus sign, and there will be an even number of minus signs that all cancel. \subsubsection{The definition of $\sigma(M,\alpha)$ and its formal properties} \label{formalPropsPinMinusSigma} Now, suppose $\alpha$ is represented by some set of curves $C_1,\dots,C_k$ on the dual 1-skeleton. Then, we'll define the function $\sigma(M,\alpha)$ in a similar way as before: \begin{equation} \sigma(M,\alpha) = (-1)^{\text{\# of loops}} \prod_{i=1}^k i^{\text{number of right half-twists for } C_i} \end{equation} By the previous discussion, this quantity is well-defined. And, we can see that $\sigma(M,\alpha) = \pm 1$ iff $\int w_1 \cup \alpha = 0$ and $\sigma(M,\alpha) = \pm i$ iff $\int w_1 \cup \alpha = 1$, which is what we wanted. \underline{\textbf{$\sigma$ for elementary plaquette loops}} The next thing we should show is that if an elementary plaquette loop $C$ surrounds the plaquette $P$, then $\sigma(M,\alpha_C) = (-1)^{\int_P w_2 + w_1^2}$. Note that our definition of $w_2$ requires a vector field on $TM$ that is nonvanishing over the entire 1-skeleton. Of the vectors $x,y,z$ of the background framing, the only one that remains in $TM$ over the entire 1-skeleton is the $y$ field, which doesn't pay attention to $w_1$. So, $w_2$ is defined via the $y$ field. Note that away from the $w_1$ surface, this computation is exactly the same as it was in the orientable case. But for $C$ that cross $w_1$, the computation is more subtle. The reason for this is that simplices that are neighboring each other across $w_1$ have labels `$+,-$' that are inconsistent with their relative local orientations. This means that the winding number definition needs to be looked at with care to define $\int_P w_2$, since the labeling of the $+$ and $-$ simplices is `wrong' as far as measuring this winding is concerned. To treat this, let's consider the sequence of matrices $W_1,\dots,W_k$ that go into constructing $\mathcal{O}^\text{final} = W_k \cdots W_1 \mathcal{O}^\text{initial}$. And, let's say that the matrices at $i_0 < j_0$ correspond to the segments where the orientation reverses. (Here, we will treat the case where $w_1$ intersects the loop twice. The case of a higher even number of intersections is similar.) 
Note that the matrices $W_{i_0}$ and $W_{j_0}$ are of the form $\pm \begin{pmatrix} 0 & iY \\ -iY & 0 \end{pmatrix}$ and the $W_{i_0+1},\dots,W_{j_0-1}$ are all of the form $\pm \begin{pmatrix} -i X & 0 \\ 0 & i X\end{pmatrix}$. An issue we need to deal with is that the $W_{i_0+1},\dots,W_{j_0-1}$ are all the \textit{negative} of what the local orientation would dictate, relative to the start of the curve. In other words, the winding part $(-1)^{\int_P w_2}$ of $\sigma(M,\alpha_C)$ would be given by the opposite of the sign $\pm \mathbbm{1}_{4 \times 4}$ of \begin{align*} &W_1 \cdots W_{i_0-1} (-W_{i_0 + 1}) \cdots (-W_{j_0 - 1}) W_{j_0 + 1} \cdots W_k \\ &= (-1)^{j_0 - i_0 - 1} W_1 \cdots W_{i_0-1} W_{i_0 + 1} \cdots W_{j_0 - 1} W_{j_0 + 1} \cdots W_k \end{align*} From here, if we can verify that the sign of $(-1)^{j_0 - i_0 - 1} W_{i_0} W_{j_0}$ is equal to $(-1)^{\int_P w_1^2}$, then we've shown what we want, that $\sigma(M,\alpha_C) = (-1)^{\int_P w_2 + w_1^2}$. This can be seen as follows, with the pictures in Fig(\ref{framingAcrossW1}) in mind. Note that $W_{i_0}$ or $W_{j_0}$ is $\begin{pmatrix} 0 & iY \\ -iY & 0 \end{pmatrix}$ if the direction the loop traverses is the same as the direction of the background orientation vector across $w_1$, and it's $\begin{pmatrix} 0 & -iY \\ iY & 0 \end{pmatrix}$ if the loop's direction is opposite that of the background orientation vector across $w_1$. This means that $W_{i_0} W_{j_0} = -\mathbbm{1}_{4 \times 4}$ if the background orientation vectors at the $i_0, j_0$ junctions near $w_1$ point to the same side of $w_1$ as each other, and $W_{i_0} W_{j_0} = \mathbbm{1}_{4 \times 4}$ if they point to opposite sides of $w_1$. Similarly, $(j_0 - i_0 - 1)$ gives the number (mod 2) of half-turns that the $y,z$ vectors make with respect to the curve on one side of $w_1$. So, $(-1)^{j_0-i_0-1}$ is related to the relative directions of the `$z$' vectors of the background framings near the $i_0, j_0$ junctions, on the same side of $w_1$. In particular, $(-1)^{j_0-i_0-1}$ is 1 if these directions are opposite, and it's $-1$ if the directions are the same. Combining these observations with the definition of the perturbing vectors and the definition of $w_1^2$ shows that $(-1)^{j_0 - i_0 - 1} W_{i_0} W_{j_0} = (-1)^{\int_P w_1^2} \mathbbm{1}_{4 \times 4}$. So, we've shown that $\sigma(M,\alpha_C) = (-1)^{\int_P w_2 + w_1^2}$. \underline{\textbf{Quadratic refinement for $\sigma$}} Fortunately, the quadratic refinement property for $\sigma$ follows from a similar analysis as in the orientable case, whose argument was depicted in Fig(\ref{intersectionsAndWindings}). As before, the problem reduces to the case of when $\beta,\beta' \in H^1(M,\mathbb{Z}_2)$ are each represented by a single loop, $C,C'$ resp., on the dual 1-skeleton. The main point of extending that logic to the nonorientable case is that we should compare what happens to the total signs of the winding matrices for each curve before and after combining them at each intersection. In particular, let $W := W_k \cdots W_1$ be the total winding matrix for $C$ and $W' := W'_{k'} \cdots W'_{1}$ be the total winding matrix for $C'$. Since all the $W_j,W'_j$ commute with each other, we should consider what the total matrix $W_\text{combined}$ is after combining them by resolving a single intersection. First, note that resolving each intersection changes the number of loops by $\pm 1$. 
Then, we should note that if $W W' = W_\text{combined}$, then the total number of half-twists stays the same (mod 4), and if $W W' = - W_\text{combined}$, then the total number of right half-twists will change by 2 (mod 4), which can be easily seen by examining the definition of the number of right half-twists in terms of the $W,W'$. So, the problem boils down to showing that $W W' = -W_\text{combined}$ for a Type I crossing and that $W W' = W_\text{combined}$ for a Type II crossing. If the shared part of the curve doesn't intersect $w_1$, this can be seen in the same way we saw it in Fig(\ref{intersectionsAndWindings}). But if a shared part \textit{does} intersect $w_1$, then we should be more careful. Let's suppose first that the shared part intersects $w_1$ exactly once. Then, the local orientations at the ends of the shared part will be opposite to each other, so the analogous argument would give an extra minus sign relative to our expected answer. But we should also consider the products of the winding matrices along the shared part. If the curve doesn't intersect $w_1$, then the winding going in one direction of the shared part exactly cancels the winding in the other direction. But since the curve intersects $w_1$ once, the winding matrices going in one direction times the winding in the other direction will actually be $-1$, which can be traced to the fact that $$-\begin{pmatrix} 0 & -iY \\ iY & 0 \end{pmatrix} \begin{pmatrix} 0 & -iY \\ iY & 0 \end{pmatrix} = -\mathbbm{1}$$ So this $-1$ from the local orientations will cancel the $-1$ from the winding matrices going in the opposite directions, which means that quadratic refinement still holds. \section{$\sigma(M,\alpha)$ in higher dimensions} \label{geometricGuWenInHigherDimensions} Now, our goal will be to use the lessons from the 2D case and see how to extend this understanding to higher dimensions, for some triangulation $M$ of a $d$-dimensional manifold and some cocycle $\alpha \in Z^{d-1}(M,\mathbb{Z}_2)$. In summary, the basic idea for $\sigma(M,\alpha)$ will remain the same, that schematically: $$\sigma(M,\alpha) = ``(-1)^{\text{\# of loops}} \prod_{\text{loops}} i^{\text{winding}}"$$ So, our goal will be to formulate what exactly we mean by these quantities and then show that they satisfy the formal properties we care about. One thing we need to decide is what exactly this `winding' factor means. In 2D, there were two tangent vectors, so we could unambiguously decide what the winding angle or the winding matrices between the `background' and `tangent' framings were going around a loop. But in higher dimensions, it's not as clear how to do this. The other issue is that we want to ensure that there is a clear definition of the `loops' on the dual 1-skeleton. It's not immediately obvious that a loop decomposition makes sense, because the dual 1-skeleton in higher dimensions is $(d+1)$-valent. For a trivalent graph it's possible to decompose any cochain into loops, but for higher-valent graphs, there's an issue that if there are four or more edges at a vertex then there are multiple ways of splitting these edges up into pairs to define the loops. It will turn out that we will have a clear way to define this winding via a certain \textit{shared framing} along the 1-skeleton. In other words, across the entire 1-skeleton there will be some fixed set of $(d-2)$ vectors on the 1-skeleton that define a `shared framing' and are shared by both the background and tangent framings. 
Then, there will be two remaining vectors that will differ between the background and tangent framings, from which we can then unambiguously give a notion of winding. This need for additional framing vectors can be anticipated from another way to think about $Spin$ structures in higher dimensions. In particular, in higher dimensions a $Spin$ structure can be thought of as assigning either a bounding or non-bounding $Spin$ structure to every \textit{framed} loop, so that the assignment changes when the framing is twisted by a unit. So, this shared framing will tell us that for any loop that passes through two edges of the dual 1-skeleton at a $d$-simplex, we can assign a winding to that segment of the loop. And, there's a way to deal with the problem of a $(d+1)$-valent dual 1-skeleton. To deal with this, one option to unambiguously resolve a loop configuration is to resolve the $(d+1)$-valent vertex into $(d-2)$ trivalent vertices. In particular, we want to make sure that our trivalent resolution will allow $\sigma$ to satisfy the quadratic refinement property $\sigma(\beta) \sigma(\beta') = (-1)^{\int_M \beta \cup_{d-2} \beta'} \sigma(\beta + \beta')$. It turns out that for our purposes, this option works. We will see that there is a certain trivalent resolution of the dual 1-skeleton that will yield this property, even though generically not all trivalent resolutions will work. It will turn out that the interpretation of the higher cup product as a thickening under vector field flows will be crucial in allowing these definitions to work. We will see that the quadratic refinement property can be readily deduced if we thicken the loops under the `shared' framing. And, we'll see that the vector used to shift the loops to resolve intersections will correspond to the `$y$' vector of the background framing. A nice feature of this will be that the constructions and arguments for the 2D case carry over directly to higher dimensions, and we can use the same geometric reasoning to deduce quadratic refinement in higher dimensions. So, we already did a large part of the work in spelling out the 2D case (apart from the trivalent resolution part). Throughout this section, we will want to label the edges of the dual 1-skeleton and the $(d-1)$-simplices that comprise the boundary. As in the last section, for $i \in \{0,\dots,d\}$, we'll use $\hat{\imath}$ to refer interchangeably to the $(d-1)$-simplex $(0 \dots \hat{\imath} \dots d)$ or its dual edge on the 1-skeleton. If the context isn't sufficient to distinguish the $(d-1)$-simplex from its dual edge, we'll refer to the $(d-1)$-simplex as $\hat{\imath}_\Delta$ and its dual edge as $\hat{\imath}_{ed}$. \subsection{The different framings} \label{theDifferentFramings} We will first need to illustrate explicitly what all the different framings are that we'll be considering. In particular, we'll want to know how to describe along the 1-skeleton the $(d-2)$ vectors that go into the shared framing, and the two additional vectors each that go into the background and tangent framings. These will be closely related to the vector fields we constructed in describing the higher cup product. Then, we'll be in a position to see explicitly how to compute the winding of these frames with respect to each other as we enter and exit the $d$-simplex along two edges of the dual 1-skeleton. 
While we were able to do this pictorially in two dimensions, in higher dimensions we'll need to think more carefully to show the analogous statements. Throughout, we'll see that some nice properties of the Vandermonde matrix will allow us to think about the windings and do the relevant computations. Since we're dealing with $\Delta^d \subset \mathbb{R}^{d+1}$, the vectors we deal with in our vector fields will have $(d+1)$ components. And the ones that can lie within $\Delta^d$ will be the ones with $(1,\dots,1)$ projected out. However, it will be convenient algebraically to think about the vectors before projecting out $(1,\dots,1)$. So, we will introduce the notation: \begin{equation} v \sim w \quad \text{if } v = w + a (1,\dots,1), \text{ for some } a \in \mathbb{R} \end{equation} We'll give most of the details of the fields' definitions inside each $d$-simplex in the main text. But, we'll relegate some other details to Appendix \ref{windingsInHigherDims}, like how to glue the vector fields at neighboring simplices and how to compute their windings with respect to each other. \subsubsection{The shared framing} Let's first discuss the shared framing that we referred to above and see how it relates to the vector fields we constructed to thicken and intersect the cells. Then, we'll talk about some properties of this framing that will be necessary for us. Recall that Eq(\ref{higherCupSolnMatrix}) represented a set of vector fields we could use to connect to the higher cup product. And for $\beta,\beta'$ being $(d-1)$-cochains, the corresponding vectors that we thicken along inside the $d$-simplex are: $$v^\text{shared}_i \sim \Vec{b}_i = (1,\frac{1}{2^i},\frac{1}{3^i}, \dots, \frac{1}{(d+1)^i}) \text{ for } i = 1,\dots,d-2$$ We'll also include the extra vector $\Vec{b}_0 = (1,\dots,1)$ for convenience. Inside each $d$-simplex, these vectors will represent the shared framing that is common to both the background and tangent framings. In particular, along each of the $(d+1)$ edges of the dual 1-skeleton, away from the boundary of the $d$-simplex these vectors will be constant. But, we'll want to modify the vectors as the points approach the boundary of the $d$-simplex, onto the $(d-1)$-simplex $\hat{\imath}_\Delta$. This is so that it will be possible to extend the framing to nearby $d$-simplices. So, as we approach $\hat{\imath}_\Delta$, we'll project all of these vectors onto the subspace of $\mathbb{R}^{d+1}$ with the $i$th component being zero (except for $\Vec{b}_0$, which always remains $(1,\dots,1)$). We do this in anticipation that we'll compare the vector fields on different simplices. Note that the vectors will remain linearly independent as we project out this component. This linear independence follows from the fact that every $k \times k$ minor of a Vandermonde matrix with distinct positive nodes is nonzero, and the $\Vec{b}_j$ with any component projected out can be thought of as a $(d-1) \times (d-2)$ submatrix of a Vandermonde matrix. Another important property is that this frame of vectors is linearly independent from the vectors tangent to any of the dual 2-cells. Let $(ij)$ denote the 2-cell dual to the $(d-2)$-simplex $(0 \dots \hat{\imath} \dots \hat{\jmath} \dots d)$. To see this, note that the vectors that span this dual 2-cell are $\{(c-f_i),(c-f_j)\}$ as defined in Section \ref{sec:Prelim}. 
Also note that: $$(c-f_i) \sim \frac{1}{n} (0,\dots,0,\underbrace{1}_{i\text{th component}},0,\dots,0)$$ So, the shared frame is linearly independent from $(ij)$ for the same reason: projecting out these $i$th and $j$th components leaves the frame linearly independent, since that's saying that a $(d-2) \times (d-2)$ minor of a Vandermonde matrix has nonzero determinant. \subsubsection{The background framing} Now, let's define the other two vectors that go into the background framing. The first additional vector that we'll add will simply be the vector $$v^\text{bkgd}_{n-1} \sim \Vec{b}_{n-1} = (1,\frac{1}{2^{n-1}}, \dots, \frac{1}{(d+1)^{n-1}})$$ And again, as we approach the boundary at the $(d-1)$-simplex $\hat{\imath}$, we project out the $i$th component of the vector. The second vector, $v^\text{bkgd}_n$, will be analogous to the earlier vector that points parallel to the dual 1-skeleton, except near the center of the $d$-simplex. To define this vector, we need to be careful about the direction it points along the 1-skeleton, either towards or away from the center. So, we need to be sure that the orientation defined by the entire frame is consistent throughout the $d$-simplex. Let's recall how we did this for the 2-simplex, as in Fig(\ref{vectorFieldOn2Simplices}), for which the prescription differed for `$+$' and `$-$' simplices. For a `$+$' simplex, this vector along the 1-skeleton pointed towards the center along the edge $\hat{1}$ and pointed away from the center for the edges $\hat{0},\hat{2}$, and oppositely for a `$-$' simplex. The reason for this can be seen by considering the induced orientations on the $(d-1)$-simplices $\hat{\imath}_\Delta$: the branching structure gives opposite orientations on the simplices labeled by $i$ even versus $i$ odd. So, away from the center, we'll have that for a `$+$' simplex, $$v^\text{bkgd}_n = (-1)^i(c - f_i) \sim \frac{1}{n}(0,\dots,0,\underbrace{(-1)^i 1}_{i\text{th component}},0,\dots,0) \quad \text{ along } \hat{\imath}$$ with opposite signs for a `$-$' simplex. Now, while we can make these definitions along $\hat{\imath}$ away from the center of a $d$-simplex, we have to be careful when approaching the centers, making sure that we can make a continuous vector field in some neighborhood of the center. The solution to this is that we should first pick some neighborhood of the center whose shape is a $d$-simplex with vertices on each edge of the dual 1-skeleton at the same, small coordinate distance from the center. As the curve goes from edges $\hat{\imath} \to \hat{\jmath}$, then $v^\text{bkgd}_n$ will look like $t (-1)^i(c - f_i) + (1-t)(-1)^j(c - f_j)$ where $t$ is some appropriate parameter of the curve between $\hat{\imath} \to \hat{\jmath}$. The important point is that as we approach the center, it will be possible to arrange that: $$v^\text{bkgd}_n \sim \Vec{b}_n = (1,\frac{1}{2^n}, \dots, \frac{1}{(d+1)^n}) \quad \text{ at the center}$$ We alluded to this previously in Fig(\ref{vectorFieldOn2Simplices}), where we demonstrated visually that a natural continuation of the vector field points in the same direction as the direction of the Morse flow at the center. Of course, we need to make sure these constructions make sense and indeed define a nondegenerate framing everywhere in a neighborhood of the 1-skeleton. We'll verify this and put the constructions on more solid footing in Appendix \ref{windingsInHigherDims}. \subsubsection{The tangent framing} We can define the tangent framing in a similar way. 
Let's consider a path $\hat{\imath} \to \hat{\jmath}$. Then the vector tangent to the curve, which we'll call $v^\text{tang}_n$, will start out as $$v^\text{tang}_n = (c - f_i) \sim \frac{1}{n}(0,\dots,0,\underbrace{1}_{i\text{th component}},0,\dots,0) \quad \text{ along } \hat{\imath}$$ and end as $$v^\text{tang}_n = (f_j - c) \sim \frac{1}{n}(0,\dots,0,\underbrace{-1}_{j\text{th component}},0,\dots,0) \quad \text{ along } \hat{\jmath}$$ And in between, we'll have that $$v^\text{tang}_n = t (c-f_i) + (1-t)(c - f_j) \quad \text{ in between } \hat{\imath}, \hat{\jmath}$$ Now, away from the boundary, we can choose the other vector, $v^\text{tang}_{n-1}$, to be $$v^\text{tang}_{n-1} \sim \pm \frac{1}{n}(0,\dots,0,\underbrace{-1}_{j\text{th component}},0,\dots,0) \quad \text{ along } \hat{\imath}$$ $$v^\text{tang}_{n-1} \sim \pm \frac{1}{n}(0,\dots,0,\underbrace{-1}_{i\text{th component}},0,\dots,0) \quad \text{ along } \hat{\jmath}$$ $$v^\text{tang}_{n-1} \sim \pm \frac{1}{n}(0,\dots,0,\underbrace{-(1-t)}_{i\text{th component}},0,\dots,0,\underbrace{-t}_{j\text{th component}},0,\dots,0) \quad \text{ in between } \hat{\imath}, \hat{\jmath}$$ The choice of $\pm$ here will depend on a couple of factors: whether the simplex is a `$+$' or `$-$' simplex, whether the `orientation vectors' agree or disagree, and the values of $i,j$. The details of this will be given in Appendix \ref{windingsInHigherDims}. \subsection{Verifying the formal properties: windings, trivalent resolution} \label{verifyingTheFormalPropertiesHigherDimensions} Given the background and tangent framings, we can ask what their windings are with respect to each other. In other words, the two framings determine a relative element of $SO(d)$ at each point, and we want to know how to determine the class of the relative winding's path in $\pi_1(SO(d)) = \mathbb{Z}_2$. \footnote{Strictly speaking, these framings give a relative framing in $GL^+(d)$ and a loop determines an element of $\pi_1(GL^+(d)) = \mathbb{Z}_2$. But, first note that we can freely choose an appropriate inner product that makes the background framing orthonormal. Then, we can orthogonalize the tangent framing with respect to this inner product. This will give the same element of $\mathbb{Z}_2$, since $GL^+(d)$ deformation retracts onto $SO(d)$ via the orthogonalization procedure.} Throughout this subsection, we'll again relegate technical details to Appendix \ref{windingsInHigherDims}. In fact, we won't need to explicitly state what the winding matrices are to describe the formal properties for now. But for reference, the windings and the trivalent resolutions are given in Fig(\ref{fig:trivalentResAndWinding}). \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{trivalentResAndWinding.png} \caption{The trivalent resolution used, and the windings on both orientations. The $\pm \pi$ can refer to the winding angle in the orientable case, but more generally refers to the winding matrices of Section \ref{2DWindingMatrices}. Note that winding between $\hat{\imath}$ and $\hat{\jmath}$ occurs iff $i \equiv j \text{ (mod 2)}$ and depends on whether or not $i < j$.} \label{fig:trivalentResAndWinding} \end{figure} The first thing we can do is verify that for some elementary plaquette loop $C$ bounding the dual 2-cell $P$, we have $(-1)^{(w_2 + w_1^2)(P)} = \sigma(M,\alpha_C)$. The reason for this is precisely the same as it was before. For the orientable case, note that the tangent framing will always have two vectors spanning the plaquette's tangent bundle along the boundary. 
This means that the winding (mod 2) of the background frame with respect to the tangent frame determines the number of singularities of a generic extension that must occur, which shows for the orientable case that $(-1)^{w_2(P)} = \sigma(M,\alpha_C)$. The same argument for the nonorientable case also applies in the same manner as earlier, and gives us the additional $w_1^2$ part. Next, we should verify the quadratic refinement part. This is the place where viewing the higher cup product as a thickening with respect to a vector field flow will be helpful in showing quadratic refinement. Again, the argument reduces to showing that $\sigma(\beta)\sigma(\beta') = \sigma(\beta + \beta')(-1)^{\int \beta \cup_{d-2} \beta'}$ for when $\beta$ and $\beta'$ are both dual to a single loop on the dual 1-skeleton. In 2D, we were able to verify this by considering the edges shared between the loops of $\beta,\beta'$. In particular, these shared segments defined several possible crossings, which we called Type I, II, III, IV, which corresponded to whether each curve starts and ends on the `same side' or `opposite side' of their shared edges and whether the loops were parallel to each other on the section we were resolving. The different crossings changed the number of loops by either $0$ or $\pm 1$, and they differed in how many times (mod 2) the curves intersected each other with respect to the background field along them. In all the cases, this allowed us to identify the change in $(-1)^\text{\# of loops} i^\text{winding}$ after resolving the intersection with the intersection number of the curves, as perturbed by the background vector field. There are two issues in trying to extend this logic to higher dimensions, and consequently two cases we need to deal with to verify the quadratic refinement property. The first is related to the issue of why we need to introduce a trivalent resolution in the first place. If there are two curves that meet at a single point at the center of a $d$-simplex, then we need a trivalent resolution of that dual 1-skeleton to unambiguously say whether the curves split up and join each other or whether they stay intact. So, these kinds of intersections are the first case. The fact that the quadratic refinement holds for this case of intersections is handled in Appendix \ref{windingsInHigherDims}. The second issue deals with when the shared edges along the curves' intersection are the original edges of the dual 1-skeleton itself. In 2D, it made sense to distinguish the types of intersections based on which `side' of the curves' shared edges the curves start and end. But in higher dimensions, this notion doesn't make sense by itself. However, we can give this notion a meaning via \textit{thickening} the curves along the shared framing. This is because thickening along these $(d-2)$ shared vectors $\{v^\text{shared}_1,\dots,v^\text{shared}_{d-2}\}$ will locally give near the curve a codimension-1 set of points for which it's possible to ask which `side' of this set a curve is on. And, the `background' vector field $v^\text{bkgd}_{n-1}$ will act as the `perturbing' vector to separate the curve from its thickening. The fact that we chose the shared frame vectors and the perturbing background vector to be the same ones used to interpret the higher cup product will allow us to interpret quadratic refinement in the same way in higher dimensions as we did in lower dimensions. 
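Before seeing how this works in pictures, we note that the nondegeneracy statements underlying this thickening, i.e. the Vandermonde-based independence claims of Section \ref{theDifferentFramings}, are straightforward to spot-check numerically. Here is a minimal sketch (the dimension $d$ is an arbitrary illustrative choice, and independence is checked modulo $(1,\dots,1)$, matching the equivalence $v \sim w$ above):

\begin{verbatim}
import itertools
import numpy as np

d = 6  # arbitrary illustrative dimension; vectors live in R^{d+1}

def b(i):
    # b_i = (1, 1/2^i, ..., 1/(d+1)^i), a row of a Vandermonde matrix.
    return np.array([1.0 / (k + 1) ** i for k in range(d + 1)])

shared = np.array([b(i) for i in range(1, d - 1)])  # b_1, ..., b_{d-2}
ones = np.ones(d + 1)

def rank_mod_ones(rows):
    # Rank of span(rows) in R^{d+1} modulo (1,...,1); assumes (1,...,1)
    # is not already in span(rows), which holds in the checks below.
    return np.linalg.matrix_rank(np.vstack([rows, ones])) - 1

# Projecting out any one coordinate keeps the shared frame independent.
for i in range(d + 1):
    proj = shared.copy()
    proj[:, i] = 0.0
    assert rank_mod_ones(proj) == d - 2

# After projecting out coordinates i and j, the shared frame stays
# independent from the directions e_i, e_j spanning the dual 2-cell (ij).
E = np.eye(d + 1)
for i, j in itertools.combinations(range(d + 1), 2):
    proj = shared.copy()
    proj[:, [i, j]] = 0.0
    assert rank_mod_ones(np.vstack([proj, E[i], E[j]])) == d

print("independence checks passed")
\end{verbatim}

With that recorded, we return to the resolution of intersections.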
As depicted in Fig(\ref{higherDimIntersectionsAndWinding}), the `sides' of the thickening on which the curves enter and exit the shared region correspond exactly to how many times (mod 2) the curve intersects the thickened region after being perturbed by some other vector. We depicted the cases of Type I and II crossings in Fig(\ref{higherDimIntersectionsAndWinding}), but Type III and IV crossings can be drawn similarly. Note that based on the vectors we chose, these intersection points on each shared segment exactly give the contribution of the segment to $\int \beta \cup_{d-2} \beta'$! We can also consider projecting out from all the vectors the components along the thickening directions, which would flatten the whole image to 2D. Then, resolving this intersection after flattening shows that the relationship after each resolution between the intersection number and the change of winding follows exactly the same pattern as in 2D, except that the intersection $\int \beta \cup \beta'$ of 2D gets replaced with $\int \beta \cup_{d-2} \beta'$ in higher dimensions. \begin{figure}[h!] \centering \begin{minipage}{0.81\textwidth} \centering \includegraphics[width=\linewidth]{typeOneIntersectionsAndWinding.png} \end{minipage} \quad \quad \quad \begin{minipage}{0.81\textwidth} \centering \includegraphics[width=\linewidth]{typeTwoIntersectionsAndWinding.png} \end{minipage} \caption{In higher dimensions, we can consider thickening one curve in the direction of the shared framing and perturbing it in the direction of the $(n-1)$th vector of the background frame (or equivalently perturbing the other curve in the \textit{opposite} direction, as shown here). Given our definitions of the shared framing and the $(n-1)$th vector of the background frame, the number of intersections between this thickening and the other perturbed curve is a contribution to $\int \beta \cup_{d-2} \beta'$, where $\beta$ is the green curve, $\beta'$ is the pink curve, and their shared region is in blue as in Fig(\ref{intersectionsAndWindings}). By projecting out all components of the vectors in the direction of the shared framing, we can reduce the comparison of the intersection numbers and windings to the same analysis we gave for the 2D case.} \label{higherDimIntersectionsAndWinding} \end{figure} \subsection{Encoding a $Spin/Pin^-$ structure} Now, we can describe how to use this construction to encode $Spin/Pin^-$ structures given the background framing and these triangulated manifolds, following the discussion in \cite{mathOverflowThread} (see also \cite{BudneyCombinatorialSpin}). Recall that a $Pin^-$ structure can be thought of as a cochain $\eta \in C^1(M^\vee,\mathbb{Z}_2) = C_{n-1}(M,\mathbb{Z}_2)$ such that $\delta \eta = w_2 + w_1^2 \in C^2(M^\vee,\mathbb{Z}_2) = C_{n-2}(M,\mathbb{Z}_2)$. We want to see how this choice of $\eta$ can be thought of geometrically. Note that $\delta \eta = w_2 + w_1^2$ means, as chains, that $\eta$ will be represented by some collection of $(d-1)$-simplices whose boundary is given by $w_2 + w_1^2$. Now, we should ask how this relates to our winding picture of a $Spin$ structure. Remember that a $Spin$ or $Pin^-$ structure can be viewed as a trivialization of $TM$ or $TM \oplus \det(TM)$ on the 1-skeleton that extends to even-index singularities on the 2-skeleton. But, the framings we constructed to talk about $\sigma(M,\alpha)$ often extend to odd-index singularities on dual 2-cells where $w_2 + w_1^2$ doesn't vanish. 
So, to `fix' this, we'll choose some collection $\eta$ of edges on the dual 1-skeleton and `twist' the two unshared background vectors by $360^\circ$ going around the circle. We want to choose the collection of edges so that every dual 2-cell with $w_2+w_1^2 = 0$ will have an even number of edges in $\eta$ and those with $w_2+w_1^2 \neq 0$ will have an odd number of edges in $\eta$, and then twist the two unshared background vectors by $360^\circ$ along each edge, as in Fig(\ref{fig:spinStructTwist}). We can also think of $\eta$ as the collection of $(d-1)$-simplices dual to the edges in the dual 1-skeleton, whose boundary sums up to a representative of $w_2 + w_1^2$. Note that this is only possible if $w_2 + w_1^2$ vanishes in cohomology. This has the effect that for a curve traversing a loop, its winding gets an additional full twist (or equivalently is multiplied by $-\mathbbm{1}$) per edge it contains in $\eta$ (or equivalently for every $(d-1)$-simplex in $\eta$ it crosses). For a cochain $\alpha$, we can write this phase as $(-1)^{\int \eta (\alpha)}$. This means that an elementary plaquette loop $C$ will have $\sigma(\alpha_C)(-1)^{\int \eta (\alpha_C)} = 1$. These extra twists ensure that the twisted framing will extend to even-index singularities on each elementary plaquette, which is equivalent to saying that it defines a $Pin^-$ structure. Two such $\eta,\eta'$ will give equivalent $Spin$/$Pin^-$ structures if $\eta + \eta'$ is represented by some homologically trivial sum of $(d-1)$-simplices. But they'll give inequivalent $Spin$/$Pin^-$ structures if $\eta + \eta'$ is homologically nontrivial in $H^1(M,\mathbb{Z}_2)$. In other words, given $\lambda \in H^1(M,\mathbb{Z}_2)$ which is represented by some closed codimension-1 collection of simplices, $\lambda \cdot \eta$ is the $Spin$ structure we get by adding a $360^\circ$ twist relative to $\eta$ every time we cross the $\lambda$ surface. Similar reasoning can be used to combinatorially encode a $Pin^+$ structure, as in Appendix \ref{combDefinePinPlus}.

\begin{figure}[h!] \centering \includegraphics[width=0.5\linewidth]{spinStructTwist.png} \caption{Twisting the two unshared vectors of the background framing with respect to each other along an edge by $360^\circ$ gives a $(d-1)$-simplex that's a part of the $Pin^-$ structure. Each such twist adds a minus sign to $(-1)^{\int \eta (\alpha_C)}$ for each crossing of $C$ with a $(d-1)$-simplex of $\eta$.} \label{fig:spinStructTwist} \end{figure}

\section{Discussion} We've constructed a set of vector field flows inside the standard simplex $\Delta^n$ that allows us to interpret the higher cup products as a generalized intersection between a cochain and a thickened, shifted version of the dual of another cochain. This generalizes the cup product, whose geometric interpretation was an intersection with respect to a shifting, but without any thickening. In particular, the Steenrod operations can then be interpreted as generalized `self-intersection' classes, with respect to how the submanifolds dual to the cochains intersect themselves upon being thickened and perturbed. This is a rephrasing of the formula of Thom \cite{ThomPlongee}: when $\alpha$ is represented by some submanifold $M'$ with normal bundle $\nu(M')$ and embedding map $f$, $$Sq^i(\alpha) = f_*(w_i(\nu(M'))).$$ So, this interpretation of the $\cup_i$ products can be thought of as extending this understanding to the cochain level and to a binary operation.
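As a quick illustration of Thom's formula (a standard special case which we spell out here; it is not an additional result of this work): for a degree-1 class $\alpha \in H^1(M,\mathbb{Z}_2)$ dual to a codimension-1 submanifold $M'$ with embedding map $f$ and normal line bundle $\nu(M')$, the formula reads $$Sq^1(\alpha) = \alpha \cup \alpha = f_*(w_1(\nu(M'))),$$ i.e. the mod 2 self-intersection of $M'$ is represented by the zero locus of a generic section of $\nu(M')$, which is precisely the kind of perturbed self-intersection described above.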
And, we found that the same vector fields were useful in defining combinatorial $Pin$ structures, defining the GWGK Grassmann Integral for $Spin$ and $Pin^-$ manifolds, and elucidating the geometric content of the `quadratic refinement' property of the GWGK Integral. We conclude with some questions and possible further directions about our constructions and how the applicability of these vector fields may be extended. \begin{enumerate} \item Can similar methods define higher cup products on other cell decompositions of manifolds? \item Can we extend this understanding to more general `mod p' Steenrod operations? \item In solving the intersection equations, we notice that there are often `lower-dimensional' cells that arise but don't contribute to the $\cup_i$ formulas. Do these have any geometric or cohomological significance? \item Can similar methods using Vandermonde and Schur determinants be used to elucidate cochain-level proofs of the Cartan relations and Ádem relations (as done recently in \cite{CochainLevelCartan,CochainLevelAdem}), or perhaps the Wu formula? \item Does the GWGK Integral $\sigma(\alpha)$ have a natural definition with respect to windings in the $Pin^+$ case? \item If we `smooth out' our vector fields near the boundaries of the simplices, can we use them to reproduce the cochain-level descriptions of $w_i$ from \cite{GoldsteinTurner}, similarly to the description of $w_2$? For example, our construction of $w_2$ is closely related to a formula for $w_2$ derived in \cite{YuAnBosonization}. It would be interesting if similar formulas existed for the other $w_i$. \item Our definition of $\sigma(\alpha)$ via a loop decomposition depended on a trivalent resolution of the dual 1-skeleton. Our choice was \textit{ad hoc}, made to reproduce the quadratic refinement formula. Is there a more principled reason for this choice? \end{enumerate} \section*{Acknowledgements} \addcontentsline{toc}{section}{Acknowledgements} We thank Maissam Barkeshli and Danny Bulmash for related discussions. We also acknowledge TA appointments and the Condensed Matter Theory Center at UMD for support.
{ "redpajama_set_name": "RedPajamaArXiv" }
\begin{abstract} \noindent To advance our understanding of Quantum Cellular Automata in problem solving through parallel and distributed computing, this research quantized the density classification problem and adopted the Quantum Particle Automata (QPA) \cite{meyer} to solve the quantized problem. In order to solve this problem, the QPA needed a unitary operator to carry out the QPA evolution and a boundary partition to make the classification decisions. We designed a Genetic Algorithm (GA) \cite{holland} to search for the unitary operators and the boundary partitions to classify the density of binary inputs with length 5. The GA was able to find more than one unitary operator that can transform the QPA in ways such that when the particle was measured, it was more likely to collapse to the basis states that were on the correct side of the boundary partition for the QPA to decide if the binary input had majority density 0 or majority density 1. We analyzed these solutions and found that the QPA evolution dynamic was driven by a particular parameter $\theta$ of the unitary operator: a small $\theta$ gave the particle a small mass and hence fast evolution, while a large $\theta$ had the opposite effect. While these results are encouraging, scaling these solutions to binary inputs of arbitrary length $n$ requires additional analysis, which we will investigate in our future work. \end{abstract} \section{Introduction} The density classification problem is an important test case for measuring the computational power of Cellular Automata (CA) \cite{packard}\cite{mch}. In the CA computational model, there is a grid of cells where the states of the cells are updated synchronously according to a local rule. The density classification problem was proposed to study one-dimensional classical CA, where each cell contains a state of 0 or 1, and the task is to identify whether the majority of the cells is 0 or 1. A solution to this problem is a local rule that converges the CA to a fixed point of all 1s if its initial configuration contains more 1s than 0s and to all 0s if its initial configuration contains more 0s than 1s. Such convergence normally takes place within \textit{M} time steps, where in general \textit{M} depends on the length of the lattice \textit{L} (which is assumed to have periodic boundary conditions, resulting in a circular grid). Over the years, various CA local rules have been proposed \cite{gkl}\cite{mch}\cite{abk}\cite{cst} to demonstrate that classical CA can achieve global synchronization through parallel local interactions. These results indicate that the CA computational model is well suited for distributed and parallel computing. Meanwhile, extensions of classical CA to Quantum CA (QCA) for distributed and parallel computing have also been investigated \cite{grossing_zeilinger}\cite{meyer}. Unlike classical information of binary 0 \emph{or} 1, quantum information resides in superpositions of 0 \emph{and} 1 simultaneously. Computation takes place on each of the superpositions following a distinct path, called ``quantum parallelism'' \cite{deutsch}. Therefore, there is a natural mapping between the parallel computation on \textit{quantum superpositions} and the parallel processing of classical \textit{CA cells}. In terms of distributed computing, Feynman and others \cite{feynman}\cite{deutsch} have shown that the quantum formalism of closed, locally interacting microscopic systems is able to perform universal computation.
To advance our understanding of QCA in problem solving through parallel and distributed computing, this research quantized the density classification problem and adopted the Quantum Particle Automata (QPA) devised by Meyer \cite{meyer} to solve this problem. The QPA is a one-dimensional CA that consists of a \emph{single particle}. Similar to the classical CA, the state of each cell in the QPA at a given time step depends on the states of the cells in some local neighborhood at the previous time step. However, the evolution of the QPA is quantum mechanical. More precisely, unlike classical CA where the state of each cell $x_i$ is a binary value of 0 or 1, the state of a QPA cell is a (complex) probability amplitude $c_i$ for the particle being in state $|x_i\rangle$ when it is measured (being in state $|x_i\rangle$ means being in position $x_i$). The state of the QPA $|\psi\rangle$ is a linear combination of all $L$ possible states, where $L$ is the length of the lattice: \begin{align*} |\psi\rangle=c_0|x_0\rangle+c_1|x_1\rangle+\cdots+c_{L-1}|x_{L-1}\rangle \end{align*} Moreover, the QPA local rule that is used to update the probability amplitude in each cell is a unitary operator. Since the transition of the QPA cells is unitary, the total probability, i.e. the sum of the norm squared of the probability amplitude at each cell, is preserved. To solve the quantized density classification problem, the QPA needs a unitary operator to carry out the QPA evolution and a boundary partition to make the classification decisions. This research applied Genetic Algorithms (GA) \cite{holland} to discover the unitary operators and the boundary partitions for the QPA to classify the density of binary inputs with length 5. GA is a population-based search algorithm that has been widely used in optimization and machine learning \cite{goldberg}. Through the simultaneous search of a population of candidate solutions, the GA was able to discover more than one unitary operator that can transform the QPA in ways such that when the particle was measured, it was more likely to collapse to the basis states on the correct side of the boundary partition for the QPA to decide if the binary input had majority 0 or majority 1. We analyzed these solutions and found that the QPA evolution dynamic was driven by a particular parameter $\theta$ of the unitary operator: a small $\theta$ gave the particle a small mass and hence fast evolution, while a large $\theta$ had the opposite effect. While these results are encouraging, scaling these solutions to binary inputs of arbitrary length $n$ requires additional analysis, which we will investigate in our future work. In addition to being the first to investigate QCA in solving the density classification problem, this research also made the following contributions: \begin{itemize} \item It devised a quantum version of the density classification problem that can be used to represent the problem for binary inputs of any length $n$. \item It demonstrated that for binary inputs of length 5, there are multiple solutions to the quantized density classification problem and that the GA methodology can find many of these solutions. \item It analyzed these solutions and showed that the QPA evolution dynamic is driven by a particular parameter $\theta$ of the unitary operator: a small $\theta$ gives the particle a small mass and hence fast evolution, while a large $\theta$ has the opposite effect. \end{itemize} The rest of the paper is organized as follows.
Section \ref{qca} explains classical CA, quantum CA and the QPA that we used to conduct our research. In Section \ref{density}, we first review the local rules proposed to solve the classical density classification problem and then quantize the problem for our QPA to solve for binary inputs of length 5. Section \ref{simulation} describes the GA system we designed to discover the unitary operator and the boundary partition solutions. The results are then presented in Section \ref{result}. In Section \ref{analysis}, we analyze the two most extreme unitary operator solutions and discuss scaling these solutions to binary inputs of arbitrary length $n$. Finally, Section \ref{conclusion} concludes this paper and outlines our future work. \section{Quantum Cellular Automata} \label{qca} In classical CA, there is a finite set of states $\Sigma$ and an infinite or finite lattice of $L$ cells, each of which is in one of the states in $\Sigma$. At each discrete time step $t$, the state of the lattice evolves according to some local rule, which transforms the state of each cell based on the states of some neighborhood cells at time step $t-1$. For example, \cite{gkl} and \cite{mch} employed a classical CA with two possible states ($\Sigma=\{0,1\}$) and $L=149$ to solve the density classification problem. We will discuss the proposed classical local rules in Section \ref{density}. CA updating is discrete in time and space. It is also space and time homogeneous, which means that at each time step the \emph{same local rule} is applied to update \emph{all cells} synchronously. When Gr\"ossing and Zeilinger \cite{grossing_zeilinger} formulated the first QCA, they found that except for the trivial case, strictly local, unitary evolution of the whole QCA is impossible. In fact, Meyer \cite{meyer} later proved that ``there is no homogeneous one-dimensional QCA that is nontrivial, local and scalar.'' Gr\"ossing and Zeilinger therefore relaxed the unitarity constraint in their QCA by allowing approximate unitarity during state updating. After the updating at each time step, the states of the cells are normalized to make the QCA evolution unitary. This extra step also introduces non-local interactions, which makes their QCA evolution non-local and the updating non-linear. An alternative QCA formulation approach is to relax the homogeneity constraint by partitioning the CA \cite{watrous}\cite{meyer}. The main idea of partitioned CA \cite{toffoli_margolus} is that the set of cells is partitioned in some periodic way where every cell belongs to exactly one block partition. At different time steps, the local rule acts on a different block partition of the lattice. Such a QCA is neither time homogeneous nor space homogeneous anymore, but periodic in time and space. However, the quantum unitarity property can be maintained by using a unitary operator to update each block partition. Since unitary operators preserve the norm squared of the probability amplitudes in each block, the evolution of the entire QCA is unitary. The QPA we used in this research is a partitioned QCA, where each block contains a pair of adjacent cells, with the pairing changing at alternating time steps (see Figure \ref{partition}). A $2\times2$ unitary matrix is applied to each pair of cells to update the states of the QPA. Hence, the QPA is 2-step translation invariant.
\begin{figure}[!htp] \centerline {\includegraphics[width=1.5in,height=1.0in]{partition.pdf}} \centering \caption{In the QPA, each block partition contains 2 adjacent cells, with the pairing changing at alternating time steps. A unitary matrix is applied to each pair of cells to update the states of the QPA.} \label{partition} \end{figure} The unitary operator used to update the QPA, $S \in U(2)$, is as follows: \begin{equation} S(\theta,\alpha,\beta,\gamma,\delta)=\begin{bmatrix} e^{i\alpha} \sin \theta & e^{i\beta} \cos \theta \\ e^{i\gamma} \cos \theta & e^{i\delta} \sin \theta \end{bmatrix} \label{unitary} \end{equation} where $(\alpha-\beta-\gamma+\delta)$ \% $2\pi \equiv \pi$; here \% is the modulo operator and $\equiv$ is the equality operator. Updating the QPA for a single time step is achieved by a series of matrix multiplications on each pair of cells: \begin{equation} \begin{bmatrix} \phi_{t+1}(x-1) \\ \phi_{t+1}(x) \end{bmatrix}=\begin{bmatrix} e^{i\alpha} \sin \theta & e^{i\beta} \cos \theta \\ e^{i\gamma} \cos \theta & e^{i\delta} \sin \theta \end{bmatrix} \begin{bmatrix} \phi_{t}(x-1) \\ \phi_{t}(x) \end{bmatrix} \label{transition} \end{equation} where $\phi_{t}(x)$ is the state of cell $x$ at time step $t$. The cell-pairing $(x-1,x)$ in Equation \ref{transition} changes at alternating time steps, with the initial $x=t\,\%\,2$. Since the lattice is a circular grid under periodic boundary conditions, at \emph{even} time steps the pairs $(x-1,x)$ are: $(L-1,0), (1,2), (3,4) \dots (L-3, L-2)$, while at \emph{odd} time steps the pairs $(x-1,x)$ are: $(0,1), (2,3), (4,5) \dots (L-2, L-1)$, where $L$ is the length of the lattice. At each time step $t$, each cell $x$ of the lattice contains a complex number, $\phi_t(x)$, which is the probability amplitude for the particle being in position $x$ when it is measured. Since $S$ is unitary, the total probability, i.e. the sum of the norm squared of the amplitude of each cell, is always 1: \begin{align*} \sum_{x=0}^{L-1}|\phi_t(x)|^2=1 \end{align*} where $L$ is the length of the lattice. \section{The Density Classification Problem}\label{density} Since the density classification problem was first formulated, various classical CA local rules have been proposed to solve it. The first one was by G\'acs, Kurdyumov and Levin (GKL) \cite{gkl}, which consists of 2 rules: \begin{itemize} \item if $\phi_{t}(x)\equiv0$, $\phi_{t+1}(x)=$ majority of $\phi_{t}(x),\phi_{t}(x+1),\phi_{t}(x+3)$; \item if $\phi_{t}(x)\equiv1$, $\phi_{t+1}(x)=$ majority of $\phi_{t}(x),\phi_{t}(x-1),\phi_{t}(x-3)$; \end{itemize} The GKL rule gives a classification accuracy of approximately 80\% when test cases are randomly generated from all possible initial CA configurations under $L=149$. Later, Mitchell, Crutchfield and Hraber \cite{mch} designed a GA while Andre, Bennett and Koza \cite{abk} applied a Genetic Programming \cite{koza} system to discover CA local rules that gave improved performance. However, under the current problem formulation, Land and Belew \cite{land_belew} proved that for a one-dimensional grid of fixed size $L$ and fixed neighborhood size $r\ge1$, there is no perfect solution that correctly classifies all possible inputs.
Capcarrere, Sipper and Tomassini \cite{cst} therefore modified the problem slightly with a different final output specification: instead of a fixed-point configuration of either all 0s (when 0 is the majority in the initial configuration) or all 1s (when 1 is the majority in the initial configuration), the final CA configuration can be a background of alternating 0s and 1s, with one or more blocks of at least two consecutive 0s (when 0 is the majority) or 1s (when 1 is the majority). With this modification, the problem can be solved perfectly using either rule 184 or rule 226 defined under Wolfram's classification \cite{wolfram}. To investigate the QPA's ability to solve this problem, we quantized the density classification problem in the following ways: \begin{itemize} \item The QPA lattice consists of $L$ cells, where each cell $x$ represents its equivalent binary input. In other words, it is the \emph{index}, not the \emph{content}, of the cell that corresponds to the binary input whose density the QPA is classifying. The binary inputs can be in binary code, for example, binary input 00011 is represented by cell 3 while binary input 01000 is represented by cell 8. Alternatively, the binary inputs can be in Gray code \cite{gray}, for example, binary input 00011 is represented by cell 2 while binary input 01000 is represented by cell 15. To classify the density of binary inputs of length $n$, the QPA has $L=2^n$. \item The state of cell $x$ at time step $t$ ($\phi_t(x)$) is a complex number, which is the probability amplitude of the particle being in position $x$ when it is measured. For example, if $\phi_t(3) = 0.2+0.3i$, the probability of the particle being in position 3 is $|0.2+0.3i|^2=0.13$. Of course, $\sum_{x=0}^{L-1}|\phi_t(x)|^2=1$. \item Initially, only the cell which represents the binary input to be classified has probability amplitude 1, while the remaining $L-1$ cells have probability amplitude 0. For example, to classify the binary input 01000 in binary code, the initial QPA has cell 8 with probability amplitude 1, while the rest of the cells have probability amplitude 0. \item After the QPA is evolved for $M$ time steps using a unitary operator $S$, the particle is measured. We assume that within the $L$-basis Hilbert space, there is a partition between the basis states that classify the density of the binary inputs as 0 ($Z_{cells}$) and those that classify the density of the binary inputs as 1 ($O_{cells}$). Hence, if the particle collapses to one of the zero-state cells ($Z_{cells}$) more often ($>50\%$) than it collapses to one of the one-state cells ($O_{cells}$), the QPA classifies the density of the binary input as 0. Otherwise, it classifies the density of the binary input as 1. We will discuss how the partition is decided in the next section. \end{itemize} This is very different from the classical version of the density classification problem, where a CA is expected to converge to all 0s or all 1s. Since the QPA is a reversible CA, different initial configurations can never converge to identical states. Moreover, the QPA evolution is unitary and hence preserves the sum of the norm squared of the lattice states at each time step. This is not the case in classical CA, where the sum of the lattice states may vary from one time step to another.
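To make the update rule and this encoding concrete, the following is a minimal sketch in Python with NumPy (our own illustration, not code from this paper or from \cite{meyer}; all function names are ours, and the standard binary-reflected Gray code is assumed) of the unitary operator of Equation \ref{unitary}, a single QPA time step of Equation \ref{transition} with the alternating pairings and periodic boundary conditions, and the initialization just described:

\begin{verbatim}
import numpy as np

def S_matrix(theta, alpha, beta, gamma):
    # delta is fixed by the constraint (alpha - beta - gamma + delta) % 2pi = pi,
    # which makes S unitary
    delta = (np.pi - alpha + beta + gamma) % (2 * np.pi)
    return np.array([
        [np.exp(1j*alpha)*np.sin(theta), np.exp(1j*beta)*np.cos(theta)],
        [np.exp(1j*gamma)*np.cos(theta), np.exp(1j*delta)*np.sin(theta)],
    ])

def qpa_step(phi, S, t):
    # apply S to each pair (x-1, x); the pairing alternates with the parity of t
    L = len(phi)
    new = np.array(phi, dtype=complex)
    for x in range(t % 2, L, 2):  # even t: (L-1,0),(1,2),...; odd t: (0,1),(2,3),...
        i, j = (x - 1) % L, x
        new[i], new[j] = S @ np.array([phi[i], phi[j]])
    return new

def gray_decode(g):
    # invert the binary-reflected Gray code; e.g. gray_decode(0b00011) == 2
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def initial_state(bits, n=5, code="binary"):
    # probability amplitude 1 at the cell representing the input, 0 elsewhere
    v = int(bits, 2)
    cell = v if code == "binary" else gray_decode(v)
    phi = np.zeros(2 ** n, dtype=complex)
    phi[cell] = 1.0
    return phi
\end{verbatim}

Iterating \texttt{qpa\_step} on \texttt{initial\_state} should reproduce evolutions of the kind plotted in Section \ref{analysis}.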
Under this quantization, the density classification problem has a general classical solution for binary inputs of any length $n$: \begin{align*} S=I=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{align*} \begin{align*} Z_{cells}=\{x: 0 \le x < 2^n; majority(x_{binary})\equiv 0\} \\ O_{cells}=\{x: 0 \le x < 2^n; majority(x_{binary})\equiv 1\} \end{align*} Since the identity operator ($I$) does not change the QPA initial configuration, by assigning all cells $x$ whose binary representation has majority 0 to $Z_{cells}$ and the others to $O_{cells}$, the particle always collapses to a member of $Z_{cells}$ with certainty when the binary input has majority 0 and to a member of $O_{cells}$ with certainty when the binary input has majority 1. But classical solutions are not interesting. We would like to know if there is a quantum solution that can manipulate the complex probability amplitudes of the QPA to distinguish binary inputs with majority 0 from those with majority 1. The next section presents a GA that we designed to search for quantum solutions. \section{Genetic Algorithms System Design}\label{simulation} The GA we implemented searches for quantum solutions $S$ that solve the quantized density classification problem for binary inputs of length 5. As shown in Equation \ref{unitary}, $S$ has 5 parameters: $\theta, \alpha,\beta, \gamma, \delta$. Since $\alpha, \beta, \gamma, \delta$ are constrained by $(\alpha-\beta-\gamma+\delta)$ \% $2\pi\equiv\pi$, we only need to know 3 of these 4 parameters to define $S$. The number of $S$ parameters that the GA needed to optimize was therefore 4: $\theta, \alpha,\beta,\gamma$. Initially, we specified $Z_{cells}$ as containing all odd cells of the lattice and $O_{cells}$ as containing all even cells of the lattice. Under this setup, the GA was not able to find an $S$ that can evolve the QPA to classify the density of all $2^5=32$ binary inputs correctly. We then made another attempt by specifying the left-half cells of the lattice as $Z_{cells}$ and the right-half cells of the lattice as $O_{cells}$, but the GA was still unable to find a correct $S$ solution. We therefore decided to let the GA find the partition $Z$. Consequently, the total number of parameters that the GA optimized became 5: $\theta, \alpha,\beta, \gamma, Z$. The partition $Z$ is represented as an integer value between 0 and $2^{32}-1$. To divide the 32 basis states into $Z_{cells}$ and $O_{cells}$, the base-10 integer value $Z$ is first converted into a base-2 binary string $Z_{binary}$. For example, for $Z=4279011723_{10}$, $Z_{binary}=11111111000011001000100110001011_{2}$. Next, the indices of all 0 bits in $Z_{binary}$ (counting from the least significant bit) become the members of $Z_{cells}$, while the indices of all 1 bits in $Z_{binary}$ become the members of $O_{cells}$. In the above example, $Z_{cells}$=$\{2,4,5,6,9,10,12,13,14,16,17,20,21,22,23\}$. When the particle is measured at time step $t$, if $\sum_{i \in Z_{cells}} |\phi_t(x_i)|^2 > 0.5$, the particle is more likely to collapse to one of the zero-state cells than to one of the one-state cells, hence the binary input is classified with majority 0. Otherwise, the binary input is classified with majority 1. In a GA, the parameters undergoing optimization are called genes, and they are packed into a linear chromosome, called an individual. As this was a first attempt at using a GA to search for quantum solutions $S$, we only investigated binary inputs of length 5 in this study.
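Continuing the sketch above (again our own illustration, using the least-significant-bit-first indexing of $Z_{binary}$ that matches the example just given), the partition decoding and the classification decision can be written as:

\begin{verbatim}
def partition_cells(Z, L=32):
    # 0-bits of Z (counted from the least significant bit) form Z_cells,
    # 1-bits form O_cells;
    # partition_cells(4279011723)[0] reproduces the Z_cells set above
    z_cells = [x for x in range(L) if not (Z >> x) & 1]
    o_cells = [x for x in range(L) if (Z >> x) & 1]
    return z_cells, o_cells

def classify(phi, z_cells):
    # majority 0 iff the particle is more likely to collapse into Z_cells
    p_zero = sum(abs(phi[x]) ** 2 for x in z_cells)
    return 0 if p_zero > 0.5 else 1
\end{verbatim}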
Hence, the fitness ($f$) of an individual ($\theta, \alpha,\beta, \gamma, Z$) is its ability to correctly classify the density of the $2^5=32$ binary inputs ($BI$). The higher $f$ is, the better $S$ and $Z$ are at solving the problem, i.e. this is a maximization problem. \begin{equation} \max f(\theta,\alpha,\beta,\gamma, Z) =\sum_{x=0}^{31} F(S(\theta,\alpha,\beta,\gamma), Z, BI_x) \label{fitness} \end{equation} Here, $F(S(\theta,\alpha,\beta,\gamma), Z, BI_x)$ is a function that returns 1 if $S$ classifies binary input $BI_x$ correctly based on the partition $Z$. Otherwise, $F$ returns 0. An individual that correctly classifies the density of all 32 binary inputs has $f=32$ and is a solution to the quantized density classification problem. Function $F$ operates in the following way. First, partition $Z$ is used to generate $Z_{cells}$ and $O_{cells}$. Next, for each binary input $BI_x$, a QPA of lattice size $L=32$ is initialized with probability amplitude 1 at cell $x$ and 0 at all other cells, i.e. $\phi_0(x)=1$ and $\phi_0(y)=0$ for all $y \ne x$. At each time step $t$, $S(\theta,\alpha,\beta,\gamma)$ is applied to each cell block from time $t-1$ to produce new probability amplitudes for time step $t$. Once completed, the particle is measured. If $\sum_{i \in Z_{cells}} |\phi_t(x_i)|^2 > 0.5$, the QPA classifies $BI_x$ with majority 0. Otherwise, it classifies $BI_x$ with majority 1. After that, the QPA classification is compared to the correct classification. If the two match, $F$ returns 1. Otherwise $F$ returns 0. The same process is repeated for all 32 binary inputs to obtain the fitness $f$. If $f$ is 32, indicating that $S$ classifies all 32 binary inputs correctly, the QPA evolution stops. Otherwise, the QPA continues to time step $t+1$. In other words, the number of time steps $M$ of the QPA is not fixed but varies depending on $S$ and $Z$. The maximum allowed time step $M$ is 2,048 in this study\footnote{In a physical implementation, when a particle is measured, it collapses to one of the possible states. Meanwhile, all superposition information is destroyed by the measurement operation. Since the objective of this research is to solve the quantized density classification problem by simulating the algorithm in a classical computer, the measurement of a particle does not destroy the amplitudes at each superposition. Once a solution ($S, M, Z$) is obtained, a physical implementation only requires one measurement (at time step $M$) to solve the problem.}. In our implementation (see Appendix A), function $F$ evolves the QPA only once, and the probability amplitudes in the 32 cells can be used to classify all 32 binary inputs. This is possible because the lattice is a grid with periodic boundary conditions. As a result, to classify each binary input, we only need to shift the grid one cell left and perform the same evaluation procedure to obtain the QPA classification. In other words, to classify binary input $BI_x$, cell number $(32-x)\%32$ is treated as cell 0 to test against $Z_{cells}$ for the QPA classification decision. The GA uses the following operators to optimize $S$ and $Z$ (a compact sketch of one generation is given after this list): \begin{itemize} \item Uniform crossover: each individual is paired with a randomly selected different individual (partner) in the population to produce one offspring. The gene values of the offspring are decided in the following way.
A random number is generated for each of the 5 genes; if the random number is less than the crossover rate ($cR$), the gene is selected from the original individual. Otherwise, it is selected from the partner individual. The smaller $cR$ is, the larger the portion of the original individual that is disrupted to produce its offspring. \item Gaussian mutation: the gene values of an individual are mutated in the following way. A random number is generated for each of the 5 genes; if the random number is less than the mutation rate ($mR$), the gene is mutated by adding a random value drawn from a Gaussian distribution with the standard deviation ($std$) specified for that gene. Otherwise, the gene value remains the same. The higher $mR$ is, the larger the portion of the individual that is disrupted with new gene values. \item Survival selection: if an individual is better than its offspring, i.e. has higher fitness $f$ according to Equation \ref{fitness}, the individual survives to the next generation. Otherwise, it is replaced by its offspring. \end{itemize} \begin{figure}[!htp] \centerline {\includegraphics[width=5in,height=4.5in]{ga.pdf}} \centering \vspace{-1.5cm} \caption{The GA workflow to discover unitary operators $S$ and partitions $Z$ for the QPA to solve the quantized density classification problem.} \label{ga} \end{figure} Figure \ref{ga} gives the workflow of the GA optimization process. Initially, a population of individuals, each containing ($\theta, \alpha, \beta, \gamma, Z$), is randomly generated. After applying function $F$ to evaluate its fitness, each individual in the population undergoes either uniform crossover or Gaussian mutation alone or both to produce one offspring. Under the current setup, $\sim80\%$ of the offspring are produced from mutation alone, $\sim20\%$ of the offspring are produced from both mutation and crossover, and only $\sim0.0002\%$ of the offspring are produced from crossover alone. Next, the offspring's fitness is evaluated. If the individual is better than its offspring, it survives to the next generation. Otherwise, the offspring is selected for the next generation. This process creates a new generation of the population containing individuals whose fitnesses are either better than or as good as those in the previous generation. After repeating this process for the specified maximum number of generations, the best individual in the last population is selected as the final solution. We experimented with different $cR$ and $mR$ values and found that imposing higher disruption on individuals to produce offspring worked better in optimizing $S$ and $Z$. This might be due to the fact that our GA uses a survival selection scheme, which technically is performing hill-climbing in the parameter space. Hill-climbing is a greedy search algorithm. When all individuals perform this greedy search, the population diversity is reduced quickly and the search suffers from premature convergence. By using a high $mR$ and a low $cR$, a large amount of diversity is introduced into the population to balance the ``greediness'' of the survival selection, and hence leads to a more effective search.
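The sketch below renders one generation of this scheme in the same Python style as before (our reading of the description above, not the authors' implementation; the exact interleaving of the operators and the out-of-range handling are simplified, and \texttt{fitness} stands for Equation \ref{fitness} evaluated via function $F$):

\begin{verbatim}
import random

def one_generation(pop, fitness, cR=0.25, mR=0.90,
                   stds=(0.05, 0.1, 0.1, 0.1, 20)):
    # pop: list of gene tuples (theta, alpha, beta, gamma, Z)
    next_pop = []
    for ind in pop:
        partner = random.choice(pop)      # crossover partner
        child = list(ind)
        for g in range(5):
            if random.random() >= cR:     # uniform crossover: keep the
                child[g] = partner[g]     # original gene with probability cR
            if random.random() < mR:      # Gaussian mutation
                child[g] += random.gauss(0.0, stds[g])
        child[4] = int(child[4]) % 2**32  # Z stays an integer in range
        # (out-of-range theta, alpha, beta, gamma would be redrawn here)
        child = tuple(child)
        # survival selection: the parent survives only if strictly fitter
        next_pop.append(ind if fitness(*ind) > fitness(*child) else child)
    return next_pop
\end{verbatim}

In a real run the parents' fitness values would of course be cached rather than recomputed, since survival selection reuses them every generation.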
\begin{table}[!htp] \centering \caption{GA system parameter values used to run the optimization.} \begin{tabular}{|c|c|c|c|} \hline {\bf parameter}&{\bf value} & {\bf parameter}&{\bf value}\\ \hline\hline pop\_size&200& max\_gen& 1,000 \\\hline xover\_rate (cR)& 25\% & mutation\_rate (mR) & 90\% \\\hline $\theta$ range & $0\sim\pi/2$ & $\theta$ std & 0.05 \\\hline $\alpha,\beta,\gamma$ range & $-\pi\sim\pi$& $\alpha,\beta,\gamma$ std & 0.1\\\hline $Z$ range & $0\sim2^{32}-1$& $Z$ std & 20 \\\hline \end{tabular} \label{para} \end{table} Table \ref{para} lists the GA parameter values used to run the optimization, together with the value ranges of $\theta, \alpha, \beta, \gamma$ and $Z$. In function $F$, these values are validated prior to being used to simulate the QPA for density classification. If any of the values is out of range, a random value within the legal range is generated to replace it. \section{Results}\label{result} We made multiple GA runs using both binary and Gray code representations of the binary inputs. It appeared that Gray code was more suitable for this problem and made it easier for the GA to discover unitary operator solutions. Among the 50 Gray code runs, 16 found a solution, while only 9 of the 50 binary code runs found a solution. This might be because under the Gray code representation, neighboring cells always represent two binary inputs that differ in only 1 digit. As a result, the unitary operator $S$ always updates the complex probability amplitudes of 2 binary inputs that differ in 1 digit. Such consistency made it easier for the unitary operator to classify the density of the given binary input correctly. \begin{table}[!htp] \centering \begin{tabular}{c|cccc} \hline run&$\theta(\degree)$&$\alpha(\degree)$&$\beta(\degree)$&$\gamma(\degree)$\\ \hline\hline 1&11.050685& -55.57717& 113.034549& -140.809934\\ 2&8.0667& -24.173737& 44.174208& -129.677467\\ 3&10.880958& -19.904022& 104.97193& -176.437282\\ 4&75.337518& 5.369795& 43.854368 &-49.357459\\ 5&7.259019& -27.299086& -62.015774& 111.243832\\ 6&13.700568& 76.91783& 111.454715& 134.389406\\ 7&23.718054& -86.090944& 127.961266& 106.217157\\ 8&25.152128& 170.66293 &-49.58397& -5.4686\\ 9&57.076225& -111.908294& 88.120218& -106.898772\\ \hline \end{tabular} \caption{The 9 unitary operator solutions found in the binary code runs. \label{binarypara}} \end{table} \begin{table}[!htp] \centering \begin{tabular}{c|cccc} \hline run&$\theta(\degree)$&$\alpha(\degree)$&$\beta(\degree)$&$\gamma(\degree)$\\ \hline\hline 1& 4.407264& 119.105462& -56.65684& 155.808914\\ 2& 22.357464 &117.90735& -9.633698& 151.621098\\ 3& 10.431628& -129.653366& 35.876261& -37.38258\\ 4& 7.303212& -36.827341& 42.439786& 13.288965 \\ 5& 12.565333& -133.764371& -116.992579& 27.211073 \\ 6& 6.568701& 167.726702& -105.341636& 125.939087 \\ 7& 27.507306 & -72.340869& -127.871776& -23.952314 \\ 8& 20.083639 &93.743705& -44.820769& -109.822316 \\ 9& 27.284116& -29.442762& -49.124343& 32.082641 \\ 10& 10.472256& -63.163587& 11.258074& -110.796705 \\ 11& 28.257343& 41.031094 &-28.261072& -83.989587 \\ 12& 36.973794& 105.346366& 168.109796& 110.596441 \\ 13& 12.472202& -24.919773& 1.682202& -178.873208 \\ 14& 29.280147& -124.541844& 120.74784& 62.525769 \\ 15& 24.339945& 148.928022& 87.470762 &-106.736046 \\ 16& 37.054707& -22.12085& -48.986319& 14.976938 \\ \hline \end{tabular} \caption{The 16 unitary operator solutions found in the Gray code runs.
\label{graypara}} \end{table} Tables \ref{binarypara} and \ref{graypara} give the unitary operator solutions ($\theta, \alpha, \beta, \gamma$) found in the binary and Gray code runs, while Tables \ref{binarystep} and \ref{graystep} present the partition $Z$ solutions and the numbers of time steps $M$. As shown, the unitary operators have a wide range of values and they evolved the QPA for different numbers of time steps $M$. They also used different $Z$ to classify the density of the binary inputs. In Tables \ref{binarystep} and \ref{graystep}, the row ``majority'' gives the correct density classification for binary input $x$; the entries are listed from left to right in cell order ($0\leq x\le 31$). Similarly, $Z_{binary}$ gives the binary string converted from the partition $Z$, which is also listed in cell order from 0 to 31. One similarity among the $Z_{binary}$ solutions is that the number of 0-bits is close to half of the total number of binary inputs. This makes sense, as half of the 32 binary inputs have majority 0 and the other half have majority 1. By allocating half of the cells to $Z_{cells}$ and the other half to $O_{cells}$, it is easier for the unitary operator to balance probability amplitudes between members of $Z_{cells}$ and $O_{cells}$ and so classify both majority-0 and majority-1 binary inputs correctly. \begin{table}[!htp] \centering \begin{tabular}{|c||c||c||c|} \hline x&\scalebox{0.71}{0-1-2-3-4-5-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25-26-27-28-29-30-31}&0-bits&\\ \hline\hline majority&0-0-0-0-0-0-0-1-0-0-0-1-0-1-1-1-0-0-0-1-0-1-1-1-0-1-1-1-1-1-1-1&16&\\ \hline run& $Z_{binary}$&&$M$\\ \hline \hline 1&1-1-0-1-0-0-0-1-1-0-0-1-0-0-0-1-0-0-1-1-0-0-0-0-1-1-1-1-1-1-1-1&15&2048\\ 2&0-0-0-1-0-0-0-0-1-0-0-1-1-1-1-1-0-1-1-1-1-1-0-0-0-0-0-1-1-1-0-1&16&1368\\ 3&0-1-1-1-0-1-0-0-0-1-1-0-0-0-0-0-0-1-1-1-1-0-1-1-0-1-1-0-0-1-0-0&17&1880\\ 4&1-0-1-0-1-1-1-0-0-1-1-1-1-1-0-0-0-1-1-0-0-0-0-0-0-1-1-0-1-1-0-0&16&2034\\ 5&1-1-1-1-0-0-0-1-0-0-0-1-1-0-0-1-1-1-1-1-1-1-0-0-1-1-0-0-0-0-0-0&16&2048\\ 6&1-1-0-1-1-0-0-1-0-0-0-0-1-1-0-1-0-0-1-1-0-0-0-1-1-0-0-0-1-1-1-1&16&1184\\ 7&0-1-0-0-0-1-1-0-0-0-0-0-0-1-1-0-1-0-1-1-0-1-1-1-1-1-0-0-0-1-1-1&16&1594\\ 8&1-1-0-1-0-0-0-1-0-0-0-0-0-0-1-1-1-1-0-0-0-1-1-0-1-0-0-1-1-1-1-1&16&1182\\ 9&0-1-0-1-1-0-0-0-0-0-0-1-0-0-0-0-1-1-1-1-1-1-0-1-1-1-0-1-1-0-0-1&16&768\\ \hline \end{tabular} \caption{The $Z_{binary}$ and the time step $M$ solutions from the binary code runs.} \label{binarystep} \end{table} \begin{table}[!htp] \centering \begin{tabular}{|c||c||c||c|} \hline x&\scalebox{0.71}{0-1-2-3-4-5-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25-26-27-28-29-30-31}&0-bits&\\ \hline\hline majority&0-0-0-0-0-1-0-0-0-1-1-1-0-1-0-0-0-1-1-1-1-1-1-1-0-1-1-1-0-1-0-0&16&\\ \hline run& $Z_{binary}$&&$M$\\ \hline \hline 1&1-0-0-1-1-0-1-1-0-0-0-1-1-1-0-1-1-0-0-0-0-1-0-0-1-0-0-1-0-1-1-1&16&2048\\ 2&0-1-0-1-1-1-0-1-1-1-1-1-1-1-0-0-0-1-1-0-1-0-0-0-1-1-0-0-0-0-0-0&16&2012\\ 3&1-0-1-1-1-0-1-1-1-1-1-1-0-0-0-0-1-1-0-1-1-0-0-0-1-0-0-0-0-0-0-1&16&811\\ 4&0-1-0-0-1-1-0-0-0-0-1-0-0-0-0-0-0-1-0-1-1-1-1-1-1-1-0-1-1-1-0-0&17&2048\\ 5&0-1-0-1-1-1-0-0-1-0-1-1-1-1-0-0-0-1-0-1-1-1-1-0-1-1-0-0-0-0-0-0&16&624\\ 6&0-0-0-1-0-1-1-1-0-0-0-1-0-0-0-0-0-0-0-1-1-1-1-1-0-1-1-1-1-0-1-0&17&2046\\ 7&1-1-0-0-1-0-0-0-0-1-1-1-1-0-0-0-0-0-0-0-1-1-0-1-1-1-0-1-1-1-0-1&16&1904\\ 8&0-1-0-0-1-0-1-1-1-1-1-0-1-1-0-1-0-1-0-1-1-1-1-1-0-0-0-0-0-0-0-0&16&1274\\ 9&0-0-0-0-0-0-0-1-1-1-1-1-0-1-1-1-1-1-1-1-0-0-0-1-0-1-1-1-0-0-0-0&16&620\\
10&0-1-1-1-1-0-1-1-0-1-0-1-0-0-0-1-1-0-1-0-0-0-0-1-0-0-1-1-0-1-1-1&15&1978\\ 11&0-0-1-1-0-0-0-1-0-0-0-1-1-1-1-1-0-0-1-1-0-1-1-1-0-0-0-1-0-1-1-0&16&1572\\ 12&1-1-1-0-0-1-0-1-1-1-0-0-0-1-0-0-1-0-0-0-0-1-0-1-0-1-0-1-1-0-1-1&16&632\\ 13&1-1-1-1-0-0-0-0-1-0-0-0-0-1-0-1-0-1-1-0-0-1-1-1-1-1-0-0-0-0-1-1&16&1750\\ 14&0-1-1-1-0-0-0-1-0-1-1-1-1-0-1-0-0-0-0-0-0-0-0-1-0-1-1-1-0-1-1-1&16&800\\ 15&1-1-1-0-0-0-0-0-0-0-0-0-0-1-1-1-0-0-1-1-0-0-1-1-1-1-1-1-0-1-0-1&16&2048\\ 16&0-1-1-0-0-1-0-1-1-1-1-1-1-0-0-1-1-1-0-0-1-1-0-0-1-1-0-0-0-0-0-0&16&684\\ \hline \end{tabular} \caption{The $Z_{binary}$ and the time step $M$ solutions from the Gray code runs.} \label{graystep} \end{table} Since all these unitary operators classify the density of the 32 binary inputs of length 5 correctly, it is natural to ask: do they produce the same QPA evolution dynamics, in terms of the probability amplitude propagation? We will analyze these unitary operator solutions through QPA simulations and answer this question in the following section. \section{Analysis and Discussion}\label{analysis} We simulated QPA evolutions and found that their dynamics were different under different unitary operator solutions. In particular, some QPA propagated the probability amplitudes very fast, while others transformed the states at a much slower pace. Further analysis shows that the major driving force of the propagation is the value of $\theta$: a small $\theta$ gives the particle a small mass and hence fast propagation, while a large $\theta$ has the opposite effect. When $\theta=0\degree$, $S$ simply interchanges the states of adjacent cells, so the probability propagation speed is 1 in lattice units. By contrast, when $\theta=\pi/2$, $S$ is the identity up to diagonal phases, which are unobservable; hence there is no flow. Meyer \cite{meyer} made similar observations in his QPA simulations under one-parameter ($\theta$) unitary operators. Since we used a more general unitary operator with 5 parameters, the QPA evolution dynamics were more complicated, given that $\alpha, \beta, \gamma, \delta$ also influence the probability amplitude propagation\footnote{By parameterizing $S$ from a $U(2)$ to a $SU(2)$, the number of independent parameters can be reduced to 4.}. We examined the QPA dynamics evolved under the unitary operators that handle binary code and under those that handle Gray code representations, and found that they had similar patterns. In what follows, we use the QPA evolved by unitary operators that handle binary code to perform our analysis. \begin{figure}[!htp] \vspace{-2cm} \begin{minipage}[t]{0.49\linewidth} \centerline{\includegraphics[width=7.8cm,height=9cm]{fig020.pdf}} \vspace{-2cm} \caption{QPA simulation with fast propagation under $S=\{\theta=7.26\degree,\alpha=-27.3\degree,\beta=-62\degree,\gamma=111.24\degree\}$; time step $M=2048$. \label{fast}} \end{minipage} \hspace{-0.0cm} \begin{minipage}[t]{0.49\linewidth} \centerline{\includegraphics[width=7.8cm,height=9cm]{fig020_64.pdf}} \vspace{-2cm} \caption{QPA simulation under the same unitary operator as the plot on the left; first 64 time steps. \label{fast64}} \end{minipage} \end{figure} Figure \ref{fast} gives the QPA simulation that has the fastest propagation flow among all solutions. The unitary operator is $S=\{\theta=7.26\degree,\alpha=-27.3\degree,\beta=-62\degree,\gamma=111.24\degree\}$. The value shown at each cell $x$ at time step $t$ is the norm squared of the probability amplitude, i.e. $|\phi_{t}(x)|^2$.
Figure \ref{fast64} shows the evolution during the first 64 time steps. Measuring the locations of the peaks of the probability distribution at successive time steps indicates that the propagation speed is slightly less than 1 in lattice units. \begin{figure}[!htp] \vspace{-4.0cm} \begin{minipage}[t]{0.49\linewidth} \centerline{\includegraphics[width=7.5cm,height=9cm]{fig018.pdf}} \vspace{-2cm} \caption{QPA simulation with slow propagation under $S=\{\theta=75.3\degree,\alpha=-5.37\degree,\beta=43.84\degree,\gamma=-49.36\degree\}$; time step $M=2034$. \label{slow}} \end{minipage} \begin{minipage}[t]{0.49\linewidth} \centerline{\includegraphics[width=7.5cm,height=9cm]{fig018_64.pdf}} \vspace{-2cm} \caption{QPA simulation under the same unitary operator as the plot on the left; first 64 time steps. \label{slow64}} \end{minipage} \end{figure} Similarly, Figure \ref{slow} gives the QPA simulation that has the slowest propagation flow among all solutions. The unitary operator is $S=\{\theta=75.3\degree,\alpha=-5.37\degree,\beta=43.84\degree,\gamma=-49.36\degree\}$. Figure \ref{slow64} shows the probability propagation during the first 64 time steps. Measuring the locations of the peaks of the probability distribution at successive time steps indicates that the propagation speed is approximately 1/4 in lattice units. When $\theta=\pi/2$ or $\theta=0\degree$, the QPA evolution dynamics are not interesting; the probability ($|\phi_{t}(x)|^2$) in each cell at the current time step is either an identical copy of the same cell or a swap with a neighboring cell from the previous time step. However, when $0<\theta<\pi/2$, the probabilities in the QPA are propagated at a wide range of speeds, depending mostly on the $\theta$ of the unitary operator. Moreover, there are many unitary operators that can propagate the probabilities in ways such that when the particle is measured, it is more likely to collapse to the basis states that are on the correct side of the boundary partition $Z$ for the QPA to decide if the binary input has majority 0 or majority 1. As mentioned in Section \ref{density}, the classical solution $S=I$ uses a $Z_{binary}$ that is the same as the \emph{majority binary string} to classify the density of the binary inputs. By contrast, the quantum solutions use $Z_{binary}$ that are very different from the \emph{majority binary string} to classify the density of the binary inputs (see Tables \ref{binarystep} and \ref{graystep}). In other words, within the 32-basis Hilbert space, there exist many partitions $Z$ between the basis states that classify the density of the binary inputs as 0 and those that classify the density of the binary inputs as 1. Many unitary operators can pair with these partitions to solve the density classification problem for binary inputs of length 5. However, in order for the unitary operator solutions to work for binary inputs of any length $n$, the paired partitions also have to be general solutions. In future work, we plan to investigate the patterns in these partition solutions and abstract them for classifying binary inputs of any length $n$. An alternative approach to investigating general solutions for the quantized density classification problem is to apply the GA to search for $S$ and $Z$ for different binary input sizes $n$. It is possible that such solutions exist, but the GA may or may not be able to find them, because the search space of $Z$ increases exponentially with the binary input length.
However, it is also possible that the number of partition $Z$ solutions increases as the length of the binary inputs increases, in which case the problem difficulty remains constant regardless of the binary input length. In either case, we can adjust GA parameters, such as $mR$ and $cR$, to improve the GA search efficiency. Another issue that we need to address with this approach is the hardware resources required to simulate the QPA evolution for large lattice sizes. These issues will be investigated in our future work. Compared to a classical CA, which requires a lattice of length $L=n$ to solve the density classification problem for binary inputs of length $n$, the QPA requires lattice length $L=2^n$ under this quantized version of the density classification problem. However, the trade-off we have gained is a larger CA evolution space (a complex Hilbert space), which admits many unitary operators that can manipulate the amplitudes and solve the problem. In this research, our GA has found some of these solutions. Moreover, it is possible that the solutions lie within certain regions of the parameter space and can be simplified to 2-parameter unitary operators. We will test this hypothesis in our future work. \section{Concluding Remarks}\label{conclusion} Our investigation of using a quantum CA to solve the density classification problem not only produced some interesting results but also posed some challenging questions. First of all, we found that the QPA devised by Meyer can be used to successfully solve a quantized version of the density classification problem for binary inputs of length 5. Secondly, we found that there is more than one quantum solution to this quantized problem and that the GA system we designed can find many of them. Thirdly, we found that these quantum solutions propagate probability amplitudes at different speeds, depending mostly on a particular parameter $\theta$ of the unitary operator. Lastly, we found that there exist many boundary partitions in the 32-basis Hilbert space that separate the basis states classifying the density of the binary inputs as 0 from those classifying the density of the binary inputs as 1. When these partitions are paired with suitable unitary operators, they are able to solve the quantized density classification problem for binary inputs of length 5. However, scaling the approach to find general solutions to the quantized density classification problem requires more work. We have identified a couple of avenues: one studies the partition $Z$ patterns and the other improves GA scaling performance. We plan to investigate both approaches in our future work. Moreover, given the number of variables in the solutions ($\theta,\alpha,\beta,\gamma,Z,M$), it is not clear to us if we can reduce the number of parameters by increasing or decreasing other parameter values. For example, can we remove $Z$ by increasing $M$? Or can we remove $\alpha$ by increasing $M$? These are open questions that we plan to answer in our future work.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Quantum field theories at finite temperature have received considerable attention since the work of Dolan and Jackiw twenty years ago \cite{dolan}. It was first observed by Kirzhnits and Linde \cite{kirz} that symmetries which are spontaneously broken at zero temperature are normally restored at high temperatures. Recently, there has been much interest in the electroweak phase transition, mainly because of its role in the generation of the baryon asymmetry of the universe \cite{cohen}. Important aspects are the order and strength of the phase transition as a function of the Higgs boson mass, and the calculation of nucleation rates. Several approaches have been used in the study of the electroweak phase transition. These include the use of three-dimensional effective theories [4,5], the $\epsilon$-expansion \cite{arnold3} and ring improved effective potentials \cite{car}. Recently, Buchm\"uller {\it et al.} have proposed an effective potential for the operator $\sigma =\phi^{\dagger}\phi$ and have applied it to the Abelian Higgs model and SU(2) [8,9]. It has been claimed that this new effective potential is gauge invariant, which is a desirable property. The problems of gauge invariance and gauge fixing dependence are important and must be taken seriously. Physical quantities such as the critical temperature must be gauge invariant. Normally, the effective potential is gauge fixing independent at the minimum, that is, for the value of the field that is a solution of the effective field equations (``on shell''). However, the potential should also be gauge invariant away from the minimum (``off shell''), since the form of the potential (e.g. the barrier height) determines the dynamics of the phase transition. In the present work we consider the chiral Abelian Higgs model with a quartic self interaction in 3+1 dimensions. This model has previously been studied by Arnold and Espinosa \cite{arnold1}, using resummation techniques. We shall use a different approach, constructing a three-dimensional effective theory for the static modes by integrating out the heavy modes. This method has been used extensively in finite temperature field theory in recent years. Applications to U(1) and SU(2) gauge theories are found in refs. [4,5], and we recommend them for further details. The decoupling of the non-static (heavy) modes from the high temperature dynamics was proposed a long time ago and has been studied in detail in various theories, e.g. QCD and QED \cite{jour}. By doing this dimensional reduction [11-13], the infrared behaviour of the static modes improves due to the induced thermal masses. Secondly, this approach induces non-linear interactions between the static modes. The dimensional reduction is carried out in section two, and it is explicitly demonstrated that one obtains the same thermal masses to leading order in $m^{2}/T^{2}$ as those found by solving the Schwinger-Dyson equation in the full four-dimensional theory \cite{arnold1}. In section three we compute the effective potential for the composite operator $\sigma =\phi^{\dagger}\phi$ to one loop order. The calculations are carried out in the $R_{\xi}$ gauge and it is shown that the effective potential is independent of the gauge parameter at one loop. Although this does not imply gauge invariance of the effective action, this independence is of course a nice feature. The potential is investigated numerically at finite temperature and is compared with the ordinary ring improved effective potential.
It is found that the symmetry is restored at high temperature via a first order phase transition. Finally, we summarize and make some comments on further developments in section four. \section{The Three-dimensional Effective Theory} {\it The Abelian Higgs model.}$\,\,$ Let us first consider the Abelian Higgs model without fermions. The Euclidean Lagrangian reads: \begin{eqnarray} {\cal L}&=&\frac{1}{4}F_{\mu\nu}F_{\mu\nu}+\frac{1}{2} (D_{\mu}\Phi)^{\dagger}(D_{\mu}\Phi)-\frac{1}{2}c^{2}\Phi^{\dagger}\Phi +\frac{\lambda}{4}(\Phi^{\dagger}\Phi)^{2}, \end{eqnarray} plus gauge fixing terms. Here $D_{\mu}=\partial_{\mu}+ieA_{\mu}$ is the covariant derivative. The corresponding action is then \begin{equation} S=\int_{0}^{\beta}d\tau\int {\cal L}\,d^{3}{\bf x}. \end{equation} At finite temperature we expand the fields as \begin{eqnarray} A_{i}({\bf x},\tau)&=&\beta^{-\frac{1}{2}}\Big [\,A_{i}({\bf x}) +\sum_{n\neq 0}a_{i,n}({\bf x})e^{2\pi in\tau/\beta}\,\Big ], \hspace{1cm} i=1,2,3\\ A_{\tau}({\bf x},\tau)&=&\beta^{-\frac{1}{2}}\Big [\,\rho ({\bf x}) +\sum_{n\neq 0}a_{\tau,n}({\bf x})e^{2\pi in\tau/\beta}\,\Big ]\\ \Phi ({\bf x},\tau)&=&\beta^{-\frac{1}{2}}\Big [\,\phi_{0}({\bf x}) +\sum_{n\neq 0}\phi_{n}({\bf x})e^{2\pi in\tau/\beta}\,\Big ]. \end{eqnarray} The calculations will be carried out in the thermal static gauge \cite{jakovac}: \begin{equation} a_{\tau,n}({\bf x})=0,\hspace{1cm} \forall \,\,n. \end{equation} We integrate over $\tau$ and exploit the orthonormality of the modes, $\int_{0}^{\beta}e^{2\pi i(n-m)\tau/\beta}\,d\tau=\beta\,\delta_{nm}$. There will now be terms in the action which involve only the static modes \begin{eqnarray} \nonumber \label{static} S^{(0)}&=&\int \Big [\,\frac{1}{4}F_{ij}F_{ij}+\frac{1}{2} (\partial_{i}\rho)^{2} +\frac{1}{2}(\partial_{i}\phi_{0})^{\dagger} (\partial_{i}\phi_{0})-\frac{1}{2}c^{2}\phi_{0}^{\dagger}\phi_{0} +\frac{\lambda T}{4}(\phi_{0}^{\dagger}\phi_{0})^{2}\\ &&+\frac{e^{2}T}{2}(A_{i}^{2}+\rho^{2})\phi_{0}^{\dagger}\phi_{0} +\frac{ieA_{i}T^{\frac{1}{2}}}{2}(\phi_{0}\partial_{i}\phi_{0}^{\dagger} -\phi_{0}^{\dagger}\partial_{i}\phi_{0})\,\Big ]\,d^{3}{\bf x}. \end{eqnarray} The terms that are quadratic in the non-static modes are \begin{eqnarray}\nonumber \label{kvad} S^{(2)}&=&\sum_{n\neq 0}\int\Big [\frac{1}{2}(\partial_{i} a_{j,n})^{\dagger}(\partial_{i}a_{j,n})-\frac{1}{2} (\partial_{i}a_{i,n})^{\dagger}(\partial_{i}a_{i,n}) +\frac{1}{2}(2\pi nT)^{2}a_{i,n}^{\dagger}a_{i,n}\\ &&+\frac{1}{2}(2\pi nT)^{2}\phi_{n}^{\dagger}\phi_{n} +\frac{1}{2}(\partial_{i}\phi_{n})^{\dagger}(\partial_{i}\phi_{n}) -\frac{1}{2}c^{2}\phi_{n}^{\dagger}\phi_{n}\,\Big ]\,d^{3}{\bf x}, \end{eqnarray} and finally there are terms representing the interactions between the static and the non-static modes. These terms generate the effective thermal masses of the zero modes $\phi_{0}$ and $\rho$: \begin{eqnarray} \label{eq:int}\nonumber S^{(2)}_{int}&=&\sum_{n\neq 0}\int\Big [\frac{1}{2}e^{2}Ta_{i,n} ^{\dagger}a_{i,n}(\phi_{0}^{\dagger}\phi_{0}) +e\rho 2\pi nT^{\frac{3}{2}}\phi_{n}^{\dagger}\phi_{n} +\frac{\lambda T}{4}4\phi_{0}^{\dagger}\phi_{0}\phi_{n}^{\dagger}\phi_{n} \\ &&+\frac{ieT^{\frac{1}{2}}}{2}a_{i,n}(\phi_{0}\partial_{i}\phi_{n}^{\dagger} +\phi_{-n}\partial_{i}\phi_{0}^{\dagger}-\phi_{0}^{\dagger}\partial_{i} \phi_{-n}-\phi_{n}^{\dagger}\partial_{i}\phi_{0}) +\frac{1}{2}e^{2}\rho^{2}T(\phi_{n}^{\dagger}\phi_{n})\Big ]\,d^{3}{\bf x}. \end{eqnarray} We remark that in eq. (\ref{eq:int}) we have set $A_{i}({\bf x})=0$, since these terms only affect the kinetic part of the effective theory (see refs. [15,16]).
The omission of these terms then corresponds to the neglect of wave function renormalization and also of some finite higher order corrections to the interaction between $\rho$ and $\phi_{0}$ and to the scalar potential of the $\rho$-field. These corrections are of order $e^{4}$ and should hence not be included, since we calculate consistently to order $e^{2}$. The fact that the spatial part of $A_{\mu}(x)$ remains massless and therefore acts as the gauge field in the dimensionally reduced theory could also have been predicted on general grounds by considering Ward identities in the high temperature limit [13,18].

Introducing two real fields via $\phi_{n}=\phi_{1,n}+i\phi_{2,n}$, we define the propagators for the fields $a_{i,n}, \phi_{1,n}$ and $\phi_{2,n}$ by: \begin{eqnarray} \Big [\,[-\nabla^{2} +(2\pi nT)^{2}]\delta_{ij}+\partial_{i} \partial_{j}\,\Big ]D_{jk,n}({\bf x},{\bf x}^{\prime})&=&\delta_{ik}\delta ({\bf x},{\bf x}^{\prime}) \\ \Big [-\nabla^{2}-c^{2}+(2\pi nT)^{2}\Big ]\Delta_{i,n} ({\bf x} ,{\bf x}^{\prime})&=&\delta ({\bf x},{\bf x}^{\prime}),\hspace{1cm}i=1,2. \end{eqnarray} We may write the effective action for the zero modes as \begin{equation} S_{eff}=S^{(0)}+S^{(2)}+\Delta S \end{equation} where \begin{equation} \label{eq:eff} \Delta S=\langle S^{(2)}_{int}\rangle -\frac{1}{2}\langle (S^{(2)}_{int})^{2}\rangle +..., \end{equation} and $S^{(2)}$ is the quadratic contribution from eq. (\ref{kvad}). The second term in eq. (\ref{eq:eff}) is necessary in order to calculate consistently to order $\lambda$ and $e^{2}$. Using the propagators we find the different contributions to the effective theory. The contribution from the heavy scalar modes to the static scalar mode is easily found: \begin{eqnarray}\nonumber \label{eq:skalar} \Delta S_{scalar}&=&\sum_{n\neq 0,\,i}\int\lambda T\phi_{0}^{\dagger} \phi_{0}\Delta_{i,n}({\bf x},{\bf x})\,d^{3}{\bf x} \\ \nonumber &=&\sum_{n\neq 0}\int \Big [2\lambda T\phi_{0}^{\dagger}\phi_{0}\int \frac{d^{3}p}{(2\pi)^{3}}\frac{1}{p^{2}-c^{2}+(2\pi nT)^{2}}\Big ] \,d^{3}{\bf x} \\ \nonumber &=&\int \Big [\frac{2\lambda T\phi_{0}^{\dagger}\phi_{0}}{2\pi^{2}} \int_{0}^{\infty}\frac{pdp}{(\exp\beta p-1)}+{\cal O}(\frac{c^{2}}{T^{2}}) \Big ]\,d^{3}{\bf x}\\ &=&\int \Big[\lambda \phi_{0}^{\dagger}\phi_{0} \frac{T^{2}}{6}+{\cal O} (\frac{c^{2}}{T^{2}})\Big ]\,d^{3}{\bf x}. \end{eqnarray} In the above equation we have dropped a divergence, which corresponds to a mass renormalization (see also ref. \cite{fend}). The corresponding Feynman diagrams are shown in fig.~\ref{fey1}a. We would also like to remark that our renormalization procedure differs from that of Jakov\'ac and Patk\'os \cite{jakovac}. This is partly due to their introduction of the auxiliary fields $\chi$ by a Hubbard-Stratonovich transformation \cite{hub}. The contributions from the vector particles are calculated in a similar way, and after some lengthy algebra we find \begin{eqnarray} \label{eq:ds} \Delta S=\int \Big[\frac{1}{2}\phi_{0}^{\dagger}\phi_{0}(4\lambda +3e^{2}) \frac{T^{2}}{12} +\frac{1}{6}\rho^{2}e^{2}T^{2}\Big ]\,d^{3}{\bf x}. \end{eqnarray} The Feynman diagrams are displayed in figs.~\ref{fey1}b - ~\ref{fey1}c and {}~\ref{fey2}a - ~\ref{fey2}b\footnote{Note that the contribution from fig.~\ref{fey1}b depends on the external momentum {\bf k}. We have made a high temperature expansion and included the dominant $T^{2}$ term. The {\bf k}$^{2}$ pieces contribute to wave function renormalization.}.
To leading order in $c^{2}/T^{2}$, our result is in accordance with that of Arnold and Espinosa \cite{arnold1}, who used the Schwinger-Dyson equations for the propagators. The result is also in agreement with that obtained by Jakov\'ac and Patk\'os \cite{jakovac} to order $\lambda$ and $e^{2}$.\\ \\ The effective theory for the static modes is now obtained: \begin{eqnarray}\nonumber \label{eff} S_{eff}&=&\int\Big [\,\frac{1}{4}F_{ij}F_{ij}+\frac{1}{2}(\partial_{i} \rho)^{2}+\frac{1}{2}m^{2}_{\rho}\rho^{2}+\frac{1}{2}(D_{i}\phi_{0}) ^{\dagger}(D_{i}\phi_{0})+\frac{1}{2}m^{2}\phi_{0}^{\dagger}\phi_{0} +\frac{\lambda T}{4}(\phi_{0}^{\dagger}\phi_{0})^{2} \\ &&+\frac{e^{2}T}{2}\rho^{2}\phi_{0}^{\dagger}\phi_{0}\,\Big ]\,d^{3}{\bf x}, \end{eqnarray} where the thermal masses are given by: \begin{eqnarray} \label{eq:thermal} m^{2}&=&-c^{2}+(4\lambda +3e^{2})\frac{T^{2}}{12}\\ m^{2}_{\rho}&=&\frac{e^{2}T^{2}}{3}. \end{eqnarray} Note that we discard $S^{(2)}$ from the effective theory since it is independent of the static modes. It only gives a temperature dependent contribution to the effective action and will not affect the critical temperature. From eq. (\ref{eff}) one observes that the zeroth component of the vector potential $A_{\mu}({\bf x},\tau)$ plays the role of an extra scalar field in this effective theory and that it is coupled non-linearly to the static mode of the scalar field $\Phi ({\bf x},\tau)$.\\ \\ {\it The chiral Abelian Higgs model.}$\,\,$ Let us next couple Dirac fermions chirally to the Abelian Higgs model: \begin{equation} {\cal L}^{\prime}={\cal L}+g\overline{\Psi}\Big [ \Phi^{\dagger} (\frac{1-\gamma_{5}}{2})+\Phi (\frac{1+\gamma_{5}}{2})\Big ]\Psi +\overline{\Psi}\Big [\gamma^{\mu}\partial_{\mu}-ie\gamma^{\mu}A_{\mu} (\frac{1-\gamma_{5}}{2})\Big ]\Psi. \end{equation} At finite temperature we expand the fermionic field as \begin{equation} \Psi ({\bf x},\tau)=\beta^{-\frac{1}{2}}\sum_{n}\psi_{n}({\bf x}) e^{\pi i(2n+1)\tau/\beta}\,\,. \end{equation} The fermions are antiperiodic in time, which implies that they do not contribute to $S^{(0)}$ in eq. (\ref{static}). We then write \begin{eqnarray} S^{(2)\prime}&=&S^{(2)}+\sum_{n}\int \overline{\psi}_{n}\gamma^{\mu} \partial_{\mu}\psi_{n}\,d^{3}{\bf x} \\ \nonumber S^{(2)\prime}_{int}&=&S^{(2)}_{int}+\sum_{n}\int \Big [gT^{\frac{1}{2}} \overline{\psi}_{n}\phi_{0}^{\dagger}(\frac{1-\gamma_{5}}{2})\psi_{n} +gT^{\frac{1}{2}}\overline{\psi}_{n}\phi_{0}(\frac{1+\gamma_{5}}{2}) \psi_{n} \\ &&-ie\rho T^{\frac{1}{2}}\overline{\psi}_{n}\gamma^{0} (\frac{1-\gamma_{5}}{2})\psi_{n}\Big ]\,d^{3}{\bf x} \end{eqnarray} where $\gamma^{\mu}\partial_{\mu}$ now means $\gamma^{i}\partial_{i} +\gamma^{0}(2n+1)\pi T$. The fermion propagators are defined by \begin{equation} (\gamma^{\mu}\partial_{\mu})S_{F,n}({\bf x},{\bf x}^{\prime}) =\delta ({\bf x},{\bf x}^{\prime}). \end{equation} We can now compute the fermion contribution to the effective action $S_{eff}$ for the static modes. The calculations are carried out consistently to order $g^{2}$, applying the same techniques as before. After some manipulations one finds that the fermion contribution to $\langle S^{(2)\prime}_{int}\rangle $ vanishes identically due to the properties of the gamma matrices.
Thus one is left with the correction: \begin{eqnarray}\nonumber \label{eq:ds2} \Delta S_{fermion}&=&\sum_{n}Tr\int\Big [ \frac{1}{2}g^{2}T\phi_{0} ^{\dagger}\phi_{0}S_{F,n}({\bf x},{\bf x}^{\prime}) \Big (\frac{1-\gamma_{5}}{2} \Big )S_{F,n}({\bf x},{\bf x}^{\prime}) \Big (\frac{1+\gamma_{5}}{2} \Big )\Big]\,d^{3}{\bf x}\, d^{3}{\bf x}^{\prime}\\ \nonumber &&+\sum_{n}Tr\int\Big [ \frac{1}{2}e^{2}T\rho^{2}S_{F,n}({\bf x}, {\bf x}^{\prime})\gamma^{0}\Big (\frac{1-\gamma_{5}}{2} \Big ) S_{F,n}({\bf x},{\bf x}^{\prime})\gamma^{0}\Big (\frac{1-\gamma_{5}}{2} \Big )\Big]\,d^{3}{\bf x}\,d^{3}{\bf x}^{\prime}\\ &=&\int\Big [\frac{1}{24}\phi_{0}^{\dagger}\phi_{0}g^{2}T^{2}+\frac{1}{12} \rho^{2}e^{2}T^{2}\Big ]\,d^{3}{\bf x}. \end{eqnarray} The diagrams are shown in figs.~\ref{fey1}d and~\ref{fey2}c. Again, we have only kept the dominant temperature contribution. Using eqs. (\ref{eq:ds}) and (\ref{eq:ds2}) one obtains the following effective masses \begin{eqnarray} \nonumber \label{eq:thermal2} m^{2}&=&-c^{2}+(4\lambda +3e^{2}+g^{2})\frac{T^{2}}{12}\\ m^{2}_{\rho}&=&\frac{e^{2}T^{2}}{2}. \end{eqnarray} Our result is again in agreement with that of Arnold and Espinosa \cite{arnold1}. \section{The One-Loop Effective Potential} {\it The one-loop effective potential.}$\,\,$ With the effective three-dimensional theory at hand, we now calculate the effective potential for the composite field $\sigma =\phi^{\dagger}\phi$ [8,9] in the one-loop approximation. The calculations for the Abelian Higgs model and the chiral Abelian Higgs model are identical; the only difference is that different thermal masses enter into the final result. (In the following we drop the subscript on the scalar field and hence write $\phi$ instead of $\phi_{0}$). To this end, we compute the free energy in the presence of a constant external source $J$: \begin{equation} e^{-\Omega W(J)}=\int {\cal D}A_{i}{\cal D}\phi^{\dagger}{\cal D}\phi{\cal D} \rho e^{-S_{eff}(A_{i},\phi^{\dagger},\phi,\rho ) -\int \phi^{\dagger} \phi J\,d^{3}{\bf x} }\,, \end{equation} where $\Omega$ is the three-dimensional volume. The composite field $\sigma$ is defined through the relation: \begin{eqnarray} \label{invert} \frac{\delta W(J)}{\delta J}&=&\sigma . \end{eqnarray} The effective potential is then obtained as a Legendre transform in the usual way: \begin{eqnarray} V(\sigma)&=&W(J)-\sigma J. \end{eqnarray} The classical potential is \begin{equation} V_{0}(\phi^{\dagger}\phi)=\frac{1}{2}(m^{2}+2J)\phi^{\dagger}\phi +\frac{1}{4}\lambda T(\phi^{\dagger}\phi)^{2}+\frac{1}{2}m^{2}_{\rho}\rho^{2} +\frac{1}{2}e^{2}T\rho^{2}\phi^{\dagger}\phi . \end{equation} The classical equations of motion then read \begin{equation} e^{2}T\rho^{2}\phi+(m^{2}+2J+\lambda T\phi^{\dagger}\phi)\phi=0,\hspace{1cm} e^{2}T\rho\phi^{\dagger}\phi +m^{2}_{\rho}\rho =0, \end{equation} and have two solutions: \begin{eqnarray} \label{sym} \overline{\rho} &=&0,\hspace{1cm} \overline{\phi}=\phi_{s}=0 \\ \label{asym} \overline{\rho} &=&0,\hspace{1cm} \overline{\phi}=\phi_{b}= [-\frac{1}{\lambda T}(m^{2}+2J)]^{\frac{1}{2}}e^{i\alpha}. \end{eqnarray} Here $\alpha$ is a phase.
The solutions (\ref{sym}) and (\ref{asym}) correspond to the global minimum of the classical action in the presence of the source $J$ for $m^{2}+2J>0$ and $m^{2}+2J<0$, respectively.\\ \\ The masses of the particles are given by the following expressions \begin{eqnarray} m^{2}_{A}&=&e^{2}T|\overline{\phi}|^{2},\hspace{3.5cm}m^{2}_{\phi} =m^{2}+2J+3\lambda T|\overline{\phi}|^{2}, \\ m^{2}_{\chi}&=&m^{2}+2J+\lambda T|\overline{\phi}|^{2},\hspace{1.5cm} m_{\rho}^{2}=\frac{e^{2}T^{2}}{2}+e^{2}T|\overline{\phi}|^{2}. \end{eqnarray} In the symmetric phase the masses are \begin{equation} m^{2}_{A}=0,\hspace{1cm}m^{2}_{\phi}=m^{2}+2J,\hspace{1cm}m^{2}_{\chi}= m^{2}+2J,\hspace{1cm}m_{\rho}^{2}=\frac{e^{2}T^{2}}{2}, \end{equation} while in the broken phase one obtains \begin{equation} m^{2}_{A}=-\frac{e^{2}}{\lambda }(m^{2}+2J),\hspace{0.8cm}m^{2}_{\phi} =-2(m^{2}+2J),\hspace{0.8cm}m^{2}_{\chi}=0,\hspace{0.8cm}m_{\rho}^{2} =\frac{e^{2}T^{2}}{2}-\frac{e^{2}}{\lambda}(m^{2}+2J). \end{equation} We shall work in the $R_{\xi}$ gauge, where the gauge fixing term is \begin{equation} {\cal L}_{GF}=\frac{1}{2\xi}(\partial_{i}A_{i}+\xi eT^{\frac{1}{2}} \overline{\phi}\phi_{2})^{2}, \end{equation} where we have used the global O(2) symmetry to make $\overline{\phi}$ purely real and $\phi_{2}$ is the imaginary part of the quantum field $\phi$. This gauge is particularly simple since the cross terms between the scalar field and the gauge field in the effective theory disappear. This makes it rather easy to calculate the one-loop correction to the classical potential: \begin{eqnarray}\nonumber W(J)&=&\frac{1}{2}\int \frac{d^{3}k}{(2\pi)^{3}}\Big[2\log (k^{2} +m^{2}_{A})+\log (k^{2}+m^{2}_{\phi}) +\log (k^{2}+m^{2}_{\chi} +\xi m_{A}^{2})\\ &&+\log (k^{2} +\xi e^{2}T\overline{\phi}^{2}) \label{1l} +\log (k^{2}+m_{\rho}^{2}) \Big ]. \end{eqnarray} The corresponding ghost contribution is found to be \cite{jakovac} \begin{equation} S_{ghost}=-\int \frac{d^{3}k}{(2\pi)^{3}}\log (k^{2}+\xi e^{2}T\overline{\phi}^{2}). \end{equation} Using dimensional regularization (see e.g. ref. \cite{ryder}) the above integrals are easily computed: \begin{equation} \int \frac{d^{3}k}{(2\pi)^{3}}\log (k^{2}+M^{2})=-\frac{1}{6\pi}M^{3}. \end{equation} The result is perfectly finite after regularization and is independent of the renormalization scale $\mu$.\\ \\ We see that the terms involving $\xi$ cancel since one of the masses, either $m_{A}^{2}$ or $m_{\chi}^{2}$, vanishes. The effective potential is thus {\it gauge parameter} independent (to one loop order), in contrast to the ordinary effective potential \cite{dolan}\footnote{Working in Lorentz gauge, where ${\cal L}_{GF}=\frac{1}{2\alpha}(\partial_{i}A_{i})^{2}$, one finds that the masses and hence the effective potential are explicitly dependent on the gauge parameter. However, this dependence disappears at the minimum of the potential, as explained in the introduction.}. Moreover, it can also be shown that eqs. (\ref{eq:sym}) and (\ref{eq:asym}) below can be obtained in Lorentz gauge. There is some confusion in the literature about gauge invariance and gauge parameter independence. We emphasize that gauge parameter independence and gauge invariance are related issues, but not equivalent. One may have gauge invariance with respect to gauge transformations of the background fields, but still have dependence on how one fixes the gauge of the quantum fields. This is the case when applying the method of mean field gauges. See ref. \cite{abbott} for details.
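As a check on the dimensional regularization formula used above, one may differentiate with respect to $M^{2}$ and use the standard massive one-loop integral in three dimensions,
\begin{equation}
\frac{\partial}{\partial M^{2}}\int \frac{d^{3}k}{(2\pi)^{3}}\log (k^{2}+M^{2})
=\int \frac{d^{3}k}{(2\pi)^{3}}\frac{1}{k^{2}+M^{2}}=-\frac{M}{4\pi};
\end{equation}
integrating back in $M^{2}$ reproduces $-M^{3}/6\pi$, the integration constant vanishing in dimensional regularization.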
\\ \\ In the symmetric phase one finds the free energy \begin{equation} \label{eq:sym} W_{s}(J)=-\frac{1}{6\pi}(m^{2}+2J)^{\frac{3}{2}}. \end{equation} In the broken phase we get \begin{eqnarray}\nonumber \label{eq:asym} W_{b}(J)&=&-\frac{1}{4\lambda T}(m^{2}+2J)^{2}-\frac{1}{6\pi} \Big [-\frac{e^{2}}{\lambda}(m^{2}+2J)\Big ]^{\frac{3}{2}} -\frac{1}{12\pi}\Big [-2(m^{2}+2J)\Big ]^{\frac{3}{2}}\\ &&-\frac{1}{12\pi}\Big [\frac{e^{2}T^{2}}{2} -\frac{e^{2}}{\lambda}(m^{2}+2J)\Big ]^{\frac{3}{2}}. \end{eqnarray} Notice that the contribution from the $\rho$ particles is independent of $J$ in the symmetric phase. The contribution to the effective potential is therefore independent of the background field and is discarded. A similar remark applies to the vector meson in the symmetric phase, since the mass vanishes.\\ \\ In the broken phase it is sufficient to use the tree level expression for the free energy in order to calculate $\sigma$ from eq. (\ref{invert}) \cite{wilf1}. It is now straightforward to derive the results \begin{equation} V_{s}(\sigma)=\frac{1}{2}m^{2}\sigma -\frac{10\pi^{2}}{3}\sigma^{3} \end{equation} and \begin{equation} V_{b}(\sigma)=\frac{1}{2}m^{2}\sigma +\frac{1}{4}\lambda T\sigma^{2} -\frac{1}{6\pi}e^{3}(\sigma T)^{\frac{3}{2}}-\frac{1}{12\pi} (2\lambda\sigma T)^{\frac{3}{2}}-\frac{1}{12\pi}(\frac{e^{2}T^{2}}{2} +e^{2}\sigma T)^{\frac{3}{2}}. \end{equation} From eq. (\ref{invert}) one finds that the symmetric phase is represented by $\sigma <0$ while the broken phase is represented by $\sigma >0$. Thus, we can write our result as \begin{equation} V(\sigma)=V_{b}(\sigma)\Theta (\sigma)+V_{s}(\sigma)\Theta (-\sigma). \end{equation} Some comments are in order. Firstly, we note that we have a contribution to the potential from the zeroth component of the gauge field. This is in turn a consequence of the non-linear interaction between the fields $\phi$ and $\rho$ that resulted from the dimensional reduction. Secondly, the non-analytic terms in the effective potential are independent of $m^{2}$. This implies that $V(\sigma)$ is also valid below the barrier temperature (in the sense that it is purely real), contrary to the ring improved effective potential (see eq. (\ref{ring}) below). \\ \\ To lowest order in the couplings [5,8] one has \begin{equation} V_{4}(\sigma)=TV_{3}(\frac{\sigma}{T}), \end{equation} where $V_{4}$ is the effective potential in the full four-dimensional theory and $V_{3}$ is the effective potential in the three-dimensional theory. This implies that \begin{equation} V_{s}(\sigma)=\frac{1}{2}m^{2}\sigma -\frac{10\pi^{2}}{3}\frac{\sigma^{3}}{T^{2}} \end{equation} and \begin{equation} V_{b}(\sigma)=\frac{1}{2}m^{2}\sigma +\frac{\lambda}{4}\sigma^{2} -\frac{T}{6\pi}(e\sigma)^{\frac{3}{2}}-\frac{T}{12\pi}(2\lambda\sigma) ^{\frac{3}{2}}-\frac{T}{12\pi}(\frac{e^{2}T^{2}}{2}+e^{2}\sigma)^{\frac{3}{2}}.
\end{equation} {\it The ring improved effective potential.}$\,\,$ The ordinary ring improved effective potential for the chiral Abelian Higgs model computed in Landau gauge is \cite{arnold1} \begin{equation} \label{ring} V_{ring}(\phi_{0})=\frac{1}{24}(4\lambda + 3e^{2}+g^{2})(T^{2}-T^{2}_{b})\phi_{0}^{2} -\frac{T}{12\pi}(2M_{T}^{3}+M_{L}^{3}+m_{1}^{3}+m_{2}^{3}) +\frac{1}{4}\lambda\phi_{0}^{4}, \end{equation} where $T_{b}$ is the barrier temperature: \begin{equation} T^{2}_{b}=\frac{12c^{2}}{4\lambda +3e^{2}+g^{2}} \end{equation} and the thermal masses are \begin{eqnarray} M_{T}^{2}&=&e^{2}\phi_{0}^{2}\\ M_{L}^{2}&=&e^{2}\phi_{0}^{2}+\frac{e^{2}T^{2}}{2}\\ m^{2}_{1}&=&-c^{2}+\lambda\phi_{0}^{2}+(4\lambda +3e^{2}+g^{2})\frac{T^{2}}{12} \\ m^{2}_{2}&=&-c^{2}+3\lambda\phi_{0}^{2}+(4\lambda +3e^{2}+g^{2}) \frac{T^{2}}{12}. \end{eqnarray} Here $\phi_{0}$ is the background field, which is always larger than or equal to zero. The symmetric phase is represented by $\phi_{0}=0$, while the broken phase is represented by $\phi_{0}>0$. Notice also that the masses $m_{1}^{2}$ and $m_{2}^{2}$ become negative below the barrier temperature, implying that the effective potential becomes complex. In fig.~\ref{comp} we have shown $V(\sigma)$ at the critical temperature for a Higgs mass of 55 GeV, and gauge and Yukawa couplings of 0.45 and 0.6, respectively. In fig.~\ref{ring2} we have shown the corresponding ring improved effective potential. Both potentials show that the symmetry is restored via a first order phase transition. This is to be expected from a renormalization group argument, namely that the renormalization group equations do not have a non-trivial fixed point \cite{gins}. We also see that the form of the potentials in the symmetric phase is qualitatively the same, although the barrier height is approximately 30$\%$ higher for $V(\sigma)$. In turn, this will affect the phase transition. Moreover, the potential in the symmetric phase increases rapidly for small values of $\sigma$. \section{Summary and Final Remarks} We have calculated a three-dimensional effective theory for the static modes in the chiral Abelian Higgs model by integrating out the heavy modes. The thermal masses are seen to be correctly reproduced. Using this three-dimensional model, a one-loop calculation for the composite operator $\phi^{\dagger}\phi$ has been performed. We have then used the obtained potential to investigate the phase transition. The potential is similar to the ring improved effective potential at the critical temperature, and the symmetry is restored via a first order phase transition. We have also noted that the effective potential is gauge parameter independent in the one-loop approximation. The questions of gauge invariance and gauge fixing dependence are important and will be the subject of further investigation, particularly in connection with the gauge invariant Vilkovisky-DeWitt effective action \cite{vilko}. This work is in progress. Finally, it would be of interest to extend the present work by doing a Hubbard-Stratonovich transformation \cite{hub}. One could then carry out a two-variable saddle point approximation for the auxiliary fields. This method has previously been applied to $\lambda\phi^{4}$, where it correctly reproduces the second order phase transition this model undergoes \cite{skalar}.\\ \\ The author would like to thank Finn Ravndal for useful comments and suggestions.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The incorporation of deep neural networks into reinforcement learning (RL) has broadened the set of problems solvable with RL. Though these techniques yield high-performing agents, the policies are encoded using thousands to millions of parameters, and the parameters interact in complex, non-linear ways. As a result, directly inspecting and verifying the resulting policies is difficult. Without a mechanism for a human operator to readily inspect the resulting policy, we cannot deploy deep RL (DRL) in environments with strict regulatory or safety constraints. Decision trees (DTs)~\cite{quinlan1986induction} are an interpretable model family commonly used to represent policies. Some benefits of DTs include that they allow formal verification of policy behavior~\cite{bastani2018viper}, counterfactual analysis~\cite{sokol2019desiderata}, and identification of relevant features. However, DRL techniques are not directly compatible with policies expressed as DTs. In contrast, some traditional RL algorithms, such as UTree~\cite{mccallum1997reinforcement}, produce DT policies but use specific internal representations that cannot be replaced with a deep neural network. An alternative approach is to approximate a DRL policy with a DT, but the resulting policy can be arbitrarily worse than the original one and the DT can be large due to unnecessary intricacies in the original policy. \begin{figure}[t!] \begin{center} \includegraphics[width=0.99\columnwidth]{fig/test_custard.png} \end{center} \caption{Method overview: we wrap a base MDP to form an IBMDP and solve using a modified RL algorithm. The solution is a decision tree policy for the base environment.} \label{fig:system} \end{figure} To address the limitations of these techniques, we propose to solve a meta-problem using RL such that the solution corresponds to a DT-format policy for the original problem. We introduce \mname{} (Constrain Underlying Solution to a Tree; Apply RL to Domain), a process that uses RL to solve the meta-problem while ensuring that the embedded solution is equivalent to a DT-format policy throughout training (overview in Figure~\ref{fig:system}). We propose a novel Markov Decision Process (MDP) formulation for the meta-problem: Iterative Bounding MDPs. We present a general procedure to make RL techniques compatible with \mname{}. \mname{} allows modern DRL techniques to be applied to the meta-problem, since the learned policy weights can be fully replaced by an equivalent DT. Thus, \mname{} maintains the interpretability advantage of a DT policy while using a non-interpretable function approximator during training. Additionally, \mname{} ensures that the DT is an exact representation of the learned behavior, not an approximation. The main contributions of this work are: (1) we introduce a novel MDP representation (IBMDPs) for learning a DT policy for a base MDP; (2) beginning with a two-agent UTree-like algorithm, we present an equivalent single-agent formulation which solves IBMDPs to produce DTs; (3) we show how to modify existing RL algorithms (policy gradient and Q-learning) to produce valid DTs for the base MDP; and (4) we empirically evaluate the performance of our approach and identify cases where it outperforms post-hoc DT fitting.
\section{Background} \subsection{Solving Markov Decision Processes} \label{sec:mdp} In the RL framework, an agent acts in an environment defined by a Markov decision process (MDP). An MDP is a five-tuple $\langle {S},{A},T,R, \gamma \rangle$, consisting of a set of states ${S}$, a set of actions ${A}$, a transition function $T$, a reward function $R$, and a discount factor $\gamma$. We focus on factored MDPs~\cite{boutilier1995exploiting}, in which each state consists of a set of feature value assignments $s = \{ f_1, ..., f_n \}$. Note that we do not require a factored reward function. An agent is tasked with finding a policy, $\pi : S \rightarrow A$, which yields the highest expected discounted future return for all states. The expected return for a state $s$ when following policy $\pi$ is $V^\pi(s) = \mathbb{E}_\pi [\sum_{t=0}^\infty \gamma^t r_t]$. Analogously, the expected return when taking action $a$ and following $\pi$ afterward is the Q-function $Q^\pi(s, a) = \mathbb{E}_\pi [r_0 + \gamma V^\pi(s')] = \mathbb{E}_\pi [r_0 + \gamma \max_{a'} Q^\pi(s',a')]$. Q-learning-based algorithms directly approximate the Q-function, such as with a neural network~\cite{mnih2013playing}, and use it to infer the optimal policy $\pi^*$. The Q-function estimate is incrementally updated to be closer to a \textit{target}, the bootstrapped estimate $r_t + \gamma \max_{a'} Q(s_{t+1},a')$. In contrast, policy gradient methods directly model and optimize the policy. Actor-critic methods~\cite{schulman2017proximal} additionally model the value function to leverage it in the policy update. They often use a critic for estimating the advantage function, $A(s,a) = Q(s,a) - V(s)$. \subsection{Decision Trees in Reinforcement Learning} Decision Trees (DTs) recursively split the input space along a specific feature based on a cutoff value, yielding axis-parallel partitions. \textit{Leaf nodes} are the final partitions; \textit{internal nodes} are the intermediate partitions. DT-like models have been used to represent the transition model~\cite{strehl2007efficient}, reward function~\cite{degris2006learning}, value function~\cite{pyeatt2001decision,tuyls2002reinforcement}, relative effect of actions~\cite{hester2010generalized}, and the policy~\cite{mccallum1997reinforcement}. We focus on DT policies (DTPs), which map each state to a leaf node representing an action. Sufficiently small DTPs are \textit{interpretable}~\cite{molnar2020interpretable}, in that people understand the mechanisms by which they work. DTs conditionally exhibit simulatability, decomposability, and algorithmic transparency~\cite{lipton2018mythos}. When a person can contemplate an entire model at once, it is \textit{simulatable}; sufficiently small DTs exhibit this property. A \textit{decomposable} model is one in which sub-parts can be intuitively explained; a DT with interpretable inputs exhibits this property. \textit{Algorithmic transparency} requires an understanding of the algorithm itself: in particular, DTs are verifiable~\cite{bastani2018viper}, which is important in safety-critical applications. \section{Related Work} \subsection{Decision Tree Policies} \label{sec:dtrl} Prior work on creating DTPs using arbitrary function approximators focuses on explaining an existing agent; a non-DT policy is learned and then is approximated using a DT.
One such method is VIPER~\cite{bastani2018viper}, which uses model compression techniques~\cite{bucilua2006model,hinton2015distilling,rusu2015policy} to distill a policy into a DT. This work adapts DAGGER~\cite{ross2011dagger} to prioritize gathering critical states, which are then used to learn a DT. MOET~\cite{vasic2019moet} extends VIPER by learning a mixture of DTPs trained on different regions of the state space. However, both VIPER and MOET approximate an expert. When the expert is poorly approximated by a DT, the resulting DTPs perform poorly. Other lines of work focus on directly learning a DTP, but they cannot use an arbitrary function approximator. UTree~\cite{mccallum1997reinforcement} and its extensions~\cite{uther1998tree,pyeatt2003reinforcement,roth2019conservative} incrementally build a DT while training an RL agent. Transition tuples (or tuple statistics) are stored within leaf nodes, and a leaf node is split when the tuples suggest that two leaf nodes would better represent the Q-function. A concurrent work~\cite{rodriguez2020optimization} uses a differentiable decision tree to represent the policy and approximates the soft-max tree with a DT after training. However, these methods require specific policy or Q-function representations, so they cannot leverage powerful function approximators like neural networks. \subsection{Interpretable Reinforcement Learning} There exist other methods to produce interpretable RL policy summarizations. One line of work produces graph-structured policies, including finite state machines~\cite{koul2018learning}, Markov chains~\cite{topin2019xdrl}, and transition graphs between landmarks~\cite{sreedharan2020tldr}. Other work produces policies in custom representations. For example,~\citet{genetic} use a genetic algorithm to create policy function trees, which have algebraic functions as internal nodes and constants and state variables as leaves. \citet{HEIN2017fuzzyrl} express a policy as a fuzzy controller, which is a set of linguistic if-then rules whose outputs are combined. These policy formats address different aspects of interpretability compared to DTPs (e.g., showing long-term behavior rather than allowing policy verification). Another line of work uses attention mechanisms~\cite{wang2018reinforcement,annasamy2019towards,tang2020neuroevolution} to determine crucial factors in individual decisions made by the agent. A similar line of work is saliency methods, which produce visual representations of important pixels~\cite{greydanus2018visualizing,yang2018learn,huber2019enhancing} or objects~\cite{katia}. However,~\citet{atrey2019exploratory} argue that saliency maps are not sufficient explanations because the conclusions drawn from their outputs are highly subjective. Other methods explain decisions made by the agent as a function of the MDP components or the training process, including the reward function~\cite{anderson2019explaining,juozapaitis2019explainable,tabrez2019explanation}, transition probabilities~\cite{cruz2019memorybasedxrl}, and causal relationships in the environment~\cite{madumal2020distal,madumal2020explainable}. These methods are orthogonal to our work; they provide different insights and can be used alongside our approach. \subsection{Hierarchical Reinforcement Learning} Our method can be viewed as a type of hierarchical decomposition, similar to that performed in hierarchical RL~\cite{dayan1993feudal,dietterich2000maxq}.
Perhaps the most well-known formulation is the options framework~\cite{precup1998theoretical,sutton1999between}, in which the problem is decomposed into a two-level hierarchy. At the bottom level are options, which are subpolicies with termination conditions that take observations of the environment as input and output actions until the termination condition is met. A policy is defined over options; using this policy, an agent chooses an option and follows it until termination. Upon termination, the policy over options is again queried, and so on. Options over an MDP define a semi-MDP~\cite{bradtke1994reinforcement,mahadevan1997self,parr1998hierarchical}. In our method, the base MDP can be viewed as this semi-MDP and the IBMDP can be viewed as the full MDP. In a sense, the policy for the information-gathering actions is the lower-level policy, and the higher-level policy selects over the information-gathering policies. \section{Approach} We present \mname{}, an approach for training an agent to produce a Decision Tree Policy using existing RL algorithms. We achieve this goal by training the agent to solve a wrapped version of the original, \textit{base} MDP. The wrapped MDP, which we name an \textit{Iterative Bounding MDP}, extends the base MDP by adding information-gathering actions and \textit{bounding} state features to indicate the gathered information. The bounding features correspond to a position within a DT traversal, and the information-gathering actions correspond to partitions performed by internal nodes within a DT. By constraining an agent's policy to be a function of the bounding state features, the learned policy is equivalent to a DT. In Section~\ref{approach_mdp_formulation}, we describe IBMDPs. In Section~\ref{approach_tree_extraction}, we describe the process for extracting a DTP from an IBMDP policy at any point during training. In Section~\ref{approach_training_procedure}, we present methods for adapting existing RL algorithms to learn an implicit DTP for the base MDP. In particular, we describe modifications to Q-learning and actor-critic algorithms. \subsection{Iterative Bounding MDPs} \label{approach_mdp_formulation} We introduce Iterative Bounding MDPs (IBMDPs), a novel MDP formulation for producing DTPs. We seek to produce a DTP by ensuring that an agent's IBMDP policy is equivalent to a DTP for the original, base MDP. To use a DTP to select an action in the base MDP, a series of internal nodes are traversed, and then the leaf node specifies the action. To allow this behavior, an IBMDP has actions that are equivalent to traversing nodes and state features that indicate the current node. The base MDP must be an MDP with a factored state representation, where each state feature has upper and lower bounds on its values. A \textit{base state} is a state from the base MDP's state space, and a \textit{wrapped state} is a state from the IBMDP's state space; other terms are defined analogously. \paragraph{State Space} A wrapped state $s_w$ consists of two parts: a base state $s_b$ and bounding features, $f_1^l$-$f_n^l$ and $f_1^h$-$f_n^h$. There exist two bounding features per base state feature, such that $f_i^l$ represents a lower bound on the base feature $f_i$'s current value, and $f_i^h$ represents an upper bound on that same base feature's current value. The bounding features reflect the outcomes of binary comparisons performed during the traversal, and the bounds are tightened with more comparisons.
A sequence of wrapped states represents a traversal through a DTP for a specific $s_b$. For simplicity, and without loss of generality, we consider $s_b$ to be normalized such that all features are in $[0,1]$. We use $s_w[c]$ to refer to a component $c$ within $s_w$. An IBMDP state and state space are: \begin{equation*} S_w = S_b \times [0,1]^{2n}, ~~ s_w = \langle s_b, f_1^l, \dots, f_n^l, f_1^h, \dots, f_n^h \rangle. \end{equation*} \paragraph{Action Space} The action space for an IBMDP $A_w$ consists of the base actions $A_b$ and an additional set of information-gathering actions $A_I$: \begin{equation*} A_w = A_b \cup A_I. \end{equation*} Base actions correspond to taking an action within the base MDP, as when reaching a leaf in a DTP. Information-gathering actions specify a base state feature and a value, which correspond to the feature and value specified by an internal node of a DTP. We present two different action space formats: a discrete set of actions and a Parameterized Action Space~\cite{masson2016paramactionrl}. In both cases, the action can be described by a tuple, $\langle c, v \rangle$, where $c$ is the chosen feature and $v$ is the value. For simplicity, we consider $v \in [0,1]$, where $0$ and $1$ respectively correspond to the current lower and upper bound on $c$. With a discrete set of IBMDP actions, each of the $n$ features can be compared to one of $p$ possible values. This results in $p \times n$ discrete actions, with $v$ values of $1/(p+1), \dots, p/(p+1)$ for each of the $n$ possible $f$. With this construction, the base actions must be discrete. In this case, the information-gathering actions are: \begin{equation*} A_I = \left\{ c_1, \dots, c_n \right\} \times \left\{ \frac{1}{p+1}, \dots, \frac{p}{p+1} \right\}. \end{equation*} In a Parameterized Action Space MDP (PASMDP), each action $a$ in a discrete set $A_d$ has $m_a$ continuous parameters. A specific action choice is specified by selecting $(a, p_1^a, \dots, p_{m_a}^{a})$. If the IBMDP is a PASMDP, then there is an action for each of the $n$ features with a single parameter ($m_a=1$), where the action specifies $c$ and the parameter specifies $v$. With this formulation, the base MDP may have a continuous, multi-dimensional action space. This is supported by adding a single $a$ with parameters corresponding to the base action choices. If $A_b$ has discrete actions, then an $a$ is added for each of them, with the corresponding $m_a$ set to zero. The information-gathering actions in the PASMDP variant are: \begin{equation*} A_I = \left\{ c_1, \dots, c_n \right\} \times [0,1]. \end{equation*} \paragraph{Transition Function} When an agent takes an information-gathering action, $\langle c, v \rangle$, the selected value $v$ is compared to the indicated feature $c$. Since $v$ is constrained to $[0,1]$ but represents values in $[c^l, c^h]$, the un-normalized $v_p$ is obtained by projecting $v_p \leftarrow v \times (c^h - c^l) + c^l$. The bounding features $c^l$ and $c^h$ are updated to reflect the new upper and lower bounds for $c$; the base features are unchanged. This process is equivalent to the behavior of an internal node in a DTP: a feature is compared to a value, and the two child nodes represent different value ranges for that feature.
Thus, for an information-gathering action $\langle c, v \rangle$, the transition function of the IBMDP, $T_w$, is deterministic, and the next state, $s_w'$, is based on $s_w$: \begin{equation*} \begin{aligned} &s_w'[s_b] = s_w[s_b], \\ &s_w'[f] = s_w[f] ~\forall f \notin \{c^l, c^h\}, \\ &\textrm{If } s_b[c] \leq v_p \textrm{: } s_w'[c^h] = \min(s_w[c^h], v_p), s_w'[c^l] = s_w[c^l], \\ &\textrm{If } s_b[c] > v_p \textrm{: } s_w'[c^l] = \max(s_w[c^l], v_p), s_w'[c^h] = s_w[c^h]. \end{aligned} \end{equation*} When a base action is taken, the base features are updated as though this action was taken in the base MDP, and the bounding features are reset to their extreme values. This is equivalent to selecting a base action in a DTP and beginning to traverse the DTP for the next base state (starting from the root node). This corresponds to a transition function of: \begin{equation*} \begin{aligned} &a \in A_b \land ((s_w'[f_i^l] = 0) \land (s_w'[f_i^h] = 1) ~\forall i \in \left\{1, \dots, n\right\} ) \\ &\Longrightarrow T_w(s_w, a, s_w') = T_b(s_w[s_b], a, s_w'[s_b]). \end{aligned} \end{equation*} \paragraph{Reward Function} The reward for a base action is the reward specified by the base MDP's reward function, $R_b$, for the base action, the original base state, and the new base state. The reward for information-gathering actions is a fixed, small penalty $\zeta$. For a sufficiently small penalty $\zeta$, the optimal solution for the IBMDP includes the optimal solution of the base MDP. The overall IBMDP reward function is: \begin{equation*} \begin{aligned} a \in A_b &\implies R(s_w, a, s_w') = R_b(s_w[s_b], a, s_w'[s_b]), \\ a \notin A_b &\implies R(s_w, \langle c, v \rangle, s_w') = \zeta. \end{aligned} \end{equation*} \paragraph{Gamma} We introduce a second discount factor, $\gamma_w$. When a base action is taken in the IBMDP, the gamma from the base MDP, $\gamma_b$, is used to compute the expected discounted future reward. Otherwise, $\gamma_w$ is used. For a $\gamma_w$ sufficiently close to $1$, the expected discounted future reward is identical for an $s_w$, if acted upon in the IBMDP, and its corresponding $s_b$, if acted upon in the base MDP. \paragraph{Remaining Components} We present the additional components required for an episodic MDP, but the framework is also applicable to non-episodic environments. A transition in the IBMDP, $(s_w, a_w, s_w')$, is terminal if $a_w \in A_b$ and $(s_w[s_b], a_w, s_w'[s_b])$ is a terminal transition in the base MDP. The distribution over starting states of the IBMDP is derived from the distribution of starting states in the base MDP. The probability of starting in state $s_w$ is $0$ if some $f_i^l \neq 0$ or $f_i^h \neq 1$; otherwise, it is equal to the probability of starting in $s_w[s_b]$ in the base MDP.
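To make the construction concrete, the following is a minimal sketch (in Python) of an IBMDP wrapper for the discrete information-gathering action set. The class and method names are ours and purely illustrative; a base environment exposing \texttt{reset()} and \texttt{step(a)}, with factored states normalized to $[0,1]^n$ and discrete base actions, is assumed.
\begin{verbatim}
import numpy as np

class IBMDPWrapper:
    """Minimal sketch of an Iterative Bounding MDP wrapper.

    Illustrative only: the base environment is assumed to expose
    reset() -> s_b and step(a) -> (s_b, reward, done), with factored
    states in [0,1]^n and discrete base actions.
    """

    def __init__(self, base_env, n_features, p=2, zeta=-0.01):
        self.base = base_env
        self.n = n_features
        self.zeta = zeta          # penalty for information gathering
        # Discrete information-gathering actions: (feature c, value v).
        self.info_actions = [(c, k / (p + 1))
                             for c in range(n_features)
                             for k in range(1, p + 1)]

    def reset(self):
        self.s_b = np.asarray(self.base.reset(), dtype=float)
        self.low = np.zeros(self.n)    # f_i^l
        self.high = np.ones(self.n)    # f_i^h
        return self._obs()

    def _obs(self):
        # Wrapped state s_w = <s_b, f^l, f^h>.
        return np.concatenate([self.s_b, self.low, self.high])

    def step(self, a):
        if isinstance(a, tuple):       # information-gathering action
            c, v = a
            # Project v in [0,1] onto the current bounds for feature c.
            v_p = v * (self.high[c] - self.low[c]) + self.low[c]
            if self.s_b[c] <= v_p:
                self.high[c] = min(self.high[c], v_p)
            else:
                self.low[c] = max(self.low[c], v_p)
            return self._obs(), self.zeta, False
        # Base action: step the base MDP and reset the bounds (new root).
        self.s_b, reward, done = self.base.step(a)
        self.s_b = np.asarray(self.s_b, dtype=float)
        self.low[:] = 0.0
        self.high[:] = 1.0
        return self._obs(), reward, done
\end{verbatim}
A policy that is restricted to the bounding half of the observation (i.e., $s_w \setminus s_b$) then corresponds to a decision tree traversal, as described next.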
\subsection{Tree Extraction} \label{approach_tree_extraction} \begin{algorithm}[t] \caption{Extract a Decision Tree Policy from an IBMDP policy $\pi$, beginning traversal from $obs$.} \label{alg_extract_tree} \begin{algorithmic}[1] \Procedure{Subtree\_From\_Policy}{$obs, \pi$} \State $a \leftarrow \pi(obs)$ \label{pseudo:act} \If{ $a \in A_b$} \Comment{Leaf if base action} \label{pseudo:act2} \State $\textbf{return } \textrm{Leaf\_Node}(\textrm{action: } a)$ \label{pseudo:leaf} \Else \State $c,v \leftarrow a$ \Comment{Splitting action is feature and value} \label{pseudo:split} \State $v_p \leftarrow v \times (obs[c^h] - obs[c^l]) + obs[c^l]$ \label{pseudo:project} \State $obs_L \leftarrow obs; \qquad obs_R \leftarrow obs$ \label{pseudo:child} \State $obs_L[c^h] \leftarrow v_p; \quad obs_R[c^l] \leftarrow v_p$ \label{pseudo:child2} \State $child_L \leftarrow \textrm{Subtree\_From\_Policy}(obs_L, \pi)$ \label{pseudo:recurse} \State $child_R \leftarrow \textrm{Subtree\_From\_Policy}(obs_R, \pi)$ \label{pseudo:recurse2} \State $\textbf{return } \textrm{Internal\_Node}(\textrm{feature: } c, \textrm{value: } v_p,$ \par \hspace{4.5em} $\textrm{children: } (child_L, child_R) )$ \label{pseudo:final} \EndIf \EndProcedure \end{algorithmic} \end{algorithm} Not all policies for the IBMDP correspond to valid DTPs; the presence of $s_b$ within each wrapped state allows access to full state information at any point during tree traversal. However, all IBMDP policies that only consider the bounding features (i.e., ignore $s_b$) correspond to a DTP. We describe the process for extracting a DTP from a policy defined over bounding observations from the environment, $\pi(s_w \setminus s_b)$. We present the training of such policies in Section~\ref{approach_training_procedure}. Algorithm~\ref{alg_extract_tree} outlines the full DTP extraction procedure. $\textsc{Subtree\_From\_Policy}$ constructs a single node based on the current observation; that node's children are constructed through recursive calls to this same function. As described in Section~\ref{approach_mdp_formulation}, the bounding features ($s_w \setminus s_b$) describe a node within a DTP, with $s_w[f_i^l] = 0 \land s_w[f_i^h] = 1 \forall i \in [1, \dots, n]$ corresponding to the root node. $\textsc{Subtree\_From\_Policy}(s_w \setminus s_b, \pi)$ for a root node $s_w$ yields the DTP for $\pi$. An action $a$ within the IBMDP corresponds to a leaf node action (when $a \in A_b$) or a DT split (when $a \notin A_b$). Lines~\ref{pseudo:act}-\ref{pseudo:act2} obtain the action for the current node and identify its type. The action taken for a leaf node defines that leaf, so Line~\ref{pseudo:leaf} constructs a leaf if $a$ is not an information gathering action. Information gathering actions consist of a feature choice $c$ and a splitting value $v$ (Line~\ref{pseudo:split}). The IBMDP constrains $v$ to be in $[0,1]$, which corresponds to decision node splitting values between $s_w[c^l]$ and $s_w[c^h]$, the current known upper and lower bounds for feature $c$. Line~\ref{pseudo:project} projects $v$ onto this range, yielding $v_p$, to which feature $c$ can be directly compared. To create the full tree, both child nodes must be explored, so the procedure considers both possibilities ($s_b[c] \leq v_p$ and $s_b[c] > v_p$). Lines~\ref{pseudo:child}-\ref{pseudo:child2} construct both possible outcomes: a tighter upper bound, $c^h \leftarrow v_p$, and a tighter lower bound, $c^l \leftarrow v_p$. 
This procedure then recursively creates the child nodes (Lines~\ref{pseudo:recurse}-\ref{pseudo:recurse2}). The final result (Line~\ref{pseudo:final}) is an internal DTP node: an incoming observation's feature is compared to a value $v_p$ ($obs[c] \leq v_p$), and traversal continues to one of the children, depending on the outcome of the comparison. \subsection{Training Procedure} \label{approach_training_procedure} If an agent solves an IBMDP without further constraints, then it can learn a policy where actions depend on $s_b$ in arbitrarily complicated ways. To ensure that the base MDP policy follows a DT structure, the IBMDP policy must be a function of only the bounding features. Effectively, if the policy is a function of $s_w \setminus s_b$, then the policy is a DTP for the base MDP. However, with a policy of the form $\pi(s_w \setminus s_b)$, the standard bootstrap estimate does not reflect expected future reward because the next observation is always the zero-information root node state. Therefore, standard RL algorithms must be modified to produce DTPs within an IBMDP. We present a set of modifications that can be applied to standard RL algorithms so the one-step bootstrap reflects a correct future reward estimate. We motivate this set of modifications by presenting a ``two agent'' division of the problem and then show the equivalent single-agent $Q$ target. We then demonstrate how a target Q-function or critic can be provided the full state ($s_w$) to facilitate learning while maintaining a DT-style policy. Finally, we present how the modifications are applied to Q-learning and actor-critic algorithms. Without loss of generality, we focus on learning a Q-function. If learning an advantage function or value function, an analogous target modification can be made. \paragraph{Two Agent Division} \begin{figure}[t!] \begin{center} \includegraphics[width=0.95\columnwidth]{fig/2agentfinal.png} \end{center} \caption{The division between the tree agent (orange circle states and arrow actions) and the leaf agent (green square states and arrow actions). Each tree traversal is an episode for the tree agent and one transition for the leaf agent.} \label{fig:2agent} \end{figure} Learning in an IBMDP can be cast as a two-agent problem: (i) a \textit{tree agent} selects which information-gathering actions to take and when to take a base action, and (ii) a \textit{leaf agent} selects a base action using the bounding features, when prompted to do so. Figure~\ref{fig:2agent} shows this division, where the leaf agent selects actions in $s_{l_1}$ and $s_{l_2}$, and the tree agent selects all other actions. With this division of the problem, the leaf agent is equivalent to the agent in UTree-style methods. The tree agent replaces the incremental tree construction used in UTree and is akin to an RL agent constructing a DT for a supervised problem~\cite{preda2007buildingdtbyrl}. The leaf agent's observed transition sequence consists of leaf nodes and its own selected actions: $s_{l_1}, a_{l_1}, r_{l_1}, s_{l_2}, a_{l_2}$. The bootstrapped Q-value estimate is: \begin{equation*} r_{l_1} + \gamma_b \max_{a' \in A_b} Q_l(s_{l_2}, a'), \end{equation*} where $r_{l_1}$ is a reward obtained from the base MDP. In this framing, the tree agent experiences a new episode when a base action is taken. The initial state is always the zero-information, root state, and the episode terminates when the agent chooses the stop splitting action, $a_{stop}$, which we add for the two agent formulation. 
When the tree agent stops splitting, the reward is the value estimated by the leaf agent, $Q_l(s_l, a_l)$. The tree agent's Q-value target is: \begin{equation*} r_{d} + \gamma_w \max_{a' \in \{a_{stop}\} \cup A_w \setminus A_b} Q_d(s_{d}', a'), \end{equation*} where $r_{d}$ is $\max_{a' \in A_b} Q_l(s_{d}, a')$ if the $a_{stop}$ action was chosen and $\zeta$ otherwise. When $a_{stop}$ is taken, $Q_d(s_{d}', a')$ is $0$ for all $a'$ since the transition is terminal for the tree agent. These two equations for target Q-values allow an IBMDP to be solved using only the partial $s_w \setminus s_b$ observations. The tree agent does not directly receive a reward signal from future base actions but uses the leaf agent's estimates to update. The leaf agent learns long-term reward estimates based on rewards from the environment. \paragraph{Merging of Agents} The target Q-value for a terminal tree agent action is $r_{d}$, which is $\max_{a \in A_b} Q_l(s, a)$. The tree agent's episode terminates if and only if $a_{stop}$ is taken. Effectively, the tree agent seeks to learn $Q_d(s, a_{stop}) = \max_{a \in A_b} Q_l(s, a)$. Rather than learning this relationship, $Q_d(s, a_{stop})$ can directly query $Q_l$, simplifying the learning task without changing the underlying problem. With this change to $Q_d(s, a_{stop})$, $Q_d$ and $Q_l$ are defined over disjoint subsets of $A_w$. A single, unified Q-function $Q$ can be learned, which is defined over all $a$ in $A_w$. This allows the target Q-values to be re-written as: \begin{equation*} \begin{aligned} a \in A_b \implies \textrm{target } &= r_{l_1} + \gamma_b \max_{a' \in A_b} Q(s_{l_2}, a'), \\ a \notin A_b \implies \textrm{target } &= \zeta + \gamma_w \max_{a' \in A_w} Q(s', a'), \end{aligned} \end{equation*} where $s'$ is the next state, regardless of type. In the former equation, $s_{l_2}$ is the next state in which a base action is taken when following the greedy policy. In the latter equation, if the $\max$ returns the Q-value for an $a \notin A_b$, the two terms correspond to the reward and expected discounted future reward. When the $\max$ returns the Q-value for an $a \in A_b$, the two terms are then the immediate reward and the reward from $a_{stop}$ in the next state, effectively removing the terminal/non-terminal distinction for the tree agent. As a result, this two-agent problem is equivalent to a single agent updating a single Q-function using two different targets, depending on the action taken. The equation for computing a target differs from the standard Q-function update equation (as applied to the IBMDP) in one way: if a base action is taken, the ``next state'' is the next state in which a base action is taken, rather than simply the next state. This single change is sufficient to learn DTPs for IBMDPs. \paragraph{Omniscient Q-function} The above merged agent formulation can be directly used to learn DTPs. However, the merged formulation requires the next leaf state, $s_{l_2}$, when a base action is taken. This state is not naturally encountered when performing off-policy exploration, so $s_{l_2}$ must be computed by repeatedly querying the Q-function with a sequence of $s_d$ tree states until the next base action is chosen. As a result, computing a single base action target Q-value requires simulating the next choice of base action, roughly doubling the computation time. As an extension of the merged agent formulation, we propose to approximate $Q(s_{l_2}, a)$ using a second Q-function, $Q_o$.
We refer to this second Q-function as an \textit{omniscient Q-function}, because its input is the full state $s_w$. $Q_o$ is used in a supporting role during training; the policy is obtained directly from $Q$. As a result, providing $Q_o$ the full state, $s_w$, does not violate the extraction process's requirement that the policy is a function of only $s_w\setminus s_b$. The omniscient Q-function is trained to approximate $Q(s_{l_2}, a)$ based on $a$ and the full state at $s_{l_2}$'s root, $s_r$. This root state is sufficient since $s_{l_2}$ is obtained from $s_r$ through a sequence of actions, each based on the previous $s_d$. Therefore, the current greedy policy corresponds to some function $F(s_r) = s_{l_2}$ for all $(s_r, s_{l_2})$ pairs. We have $Q_o$ implicitly learn this function as it aims to learn an approximation $Q_o(s_r, a) \approx Q(s_{l_2}, a)$ for all base actions. Additionally, the original merged formulation learns the Q-value at each level in the tree (for $s_{d1,1}, s_{d1,2},$ etc.) using targets computed from the next level. This leads to slow propagation of environment reward signals from the leaf nodes. In addition to using $Q_o$ for the root node, we propose to have it learn to approximate $Q_o(s_w, a) \approx Q(s, a)$ for all states and all actions. Since $Q_o$ has access to $s_b$, the rewards obtained in the leaf node, $s_{l_1}$, directly propagate through $Q_o$ to earlier levels of the tree instead of sequentially propagating upward (from leaf to root). \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fig/mnamenew.png} \caption{The method for using the omniscient Q-function, $Q_o$, for $Q$ targets. The policy is based only on $Q$, so a DTP can be extracted despite $Q_o$ being a function on the full state.} \label{fig:omnqfn} \end{figure} As shown in Figure~\ref{fig:omnqfn}, during training, we use $Q_o$ in cases where $Q(s, a)$ would be used as a target. The action choice is still based on $Q(s,a)$, but the value is obtained from $Q_o$. Both $Q_o$ and $Q$ are updated using the $Q_o$-based target. \paragraph{Modifying Standard RL Algorithms} For Q-learning-based methods, such as Dueling Deep Q-Networks (DDQN)~\cite{wang2016dueling} and Model-Free Episodic Control (MFEC)~\cite{blundell2016model}, we use the merged agent formulation target value of $Q_o(s_w, \argmax_a Q(s_w \setminus s_b, a))$ in place of $\max_a Q(s,a)$ (for both $a \in A_b$ and $a \notin A_b$); the additional Q-function, $Q_o$, is updated using the same target value. For policy gradient methods, such as Proximal Policy Optimization (PPO)~\cite{schulman2017proximal}, the Q-function is used to compute advantage values only during training. Therefore, we use only $Q_o$, not $Q$, to compute advantages. $Q_o$ is then trained using the merged agent formulation target value computations (replacing $Q(s,a)$ with $Q_o(s_w, a)$). \section{Experiments} We evaluate \mname{}'s ability to generate DTPs through solving an IBMDP using a non-interpretable function approximator during the learning process. An alternative to implicitly learning a DTP is to learn a non-tree expert policy and then find a tree that mimics the expert. We compare to VIPER, which takes this alternative approach and outperforms standard imitation learning methods. VIPER gathers samples using a DAGGER variant and weights the samples during tree training. For this evaluation, we use three environments, briefly described in Section~\ref{sec:envs}, with further details in the Appendix. 
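For concreteness, the modified target computation described in the training-procedure subsection above can be summarized in a short sketch (Python). The function and field names are ours and illustrative only; the replay tuples are assumed to record the full wrapped state, whether a base action was taken, and a done flag, and the wrapped-state layout matches the wrapper sketch given earlier.
\begin{verbatim}
import numpy as np

def bounding_part(s_w, n):
    # Return s_w \ s_b, assuming the layout [s_b, f^l, f^h]
    # of the wrapper sketch above (n base features).
    return s_w[n:]

def q_targets(batch, Q, Q_o, n, gamma_b, gamma_w, zeta):
    """Sketch of the modified target computation (illustrative names).

    Q sees only the bounding features; Q_o sees the full wrapped
    state. batch holds tuples (s_w, a, r, s_w_next, is_base, done).
    """
    targets = []
    for s_w, a, r, s_w_next, is_base, done in batch:
        if done:                       # terminal base transition
            targets.append(r)
            continue
        # Greedy action from the constrained Q; its value from Q_o.
        a_star = np.argmax(Q(bounding_part(s_w_next, n)))
        boot = Q_o(s_w_next)[a_star]
        if is_base:
            # s_w_next is the root of the next traversal; Q_o at the
            # root approximates Q at the next leaf (see text).
            targets.append(r + gamma_b * boot)
        else:
            targets.append(zeta + gamma_w * boot)
    return np.array(targets)
\end{verbatim}
Both $Q$ and $Q_o$ would then be regressed toward these targets, while action selection uses $Q$ alone, so the extracted policy remains a function of $s_w \setminus s_b$.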
\subsection{Environments} \label{sec:envs} \paragraph{CartPole} CartPole~\cite{brockman2016gym} is a commonly used domain for evaluating methods that produce DTPs, where the agent must balance a pole affixed to a cart by moving the cart back and forth. An episode terminates when the pole falls or $200$ timesteps have elapsed. We include it to provide standard benchmark results; following previous work, methods are limited to DTs of depth two. \paragraph{PrereqWorld} PrereqWorld~\cite{topin2019xdrl} is an abstraction of a production task; the agent is tasked with constructing a goal item. Creating each item may require a subset of other, prerequisite items. This environment is similar in structure to advising domains~\cite{dodson2011natural} and crafting, as in MineRL~\cite{milani2020minerl}. The items are topologically sorted based on their prerequisite relationships such that lower-numbered items require higher-numbered items. A PrereqWorld environment has $m$ item types. A state consists of $m$ binary features which indicate whether each type of item is present. There are $m$ actions: each action corresponds to an attempt to produce a specific item. Creating the goal item yields a reward of $0$ and ends the episode; other actions yield a reward of $-1$. We use a base PrereqWorld environment with $m=10$ and a fixed set of prerequisites (specified in the Appendix). We produce smaller variants by removing high-numbered items (based on the topological sort). \paragraph{PotholeWorld} We introduce the PotholeWorld domain, where the agent is tasked with traversing $50$ units along a three-lane road. The first lane gives less reward per unit traveled, but the other two lanes contain ``potholes'' which lead to a reward penalty if traversed. A state contains a single feature: the current position, in $[0,50]$. The initial state is at position $0$; an episode terminates when the position is $50$. The three actions are $[lane\_1, lane\_2, lane\_3]$, which each advance the agent a random amount drawn from $Unif(0.5,1)$. Potholes are added starting from position $0$ until position $50$, with successive distances drawn from $Unif(1,2)$. Potholes are assigned with equal probability to lane two or lane three. When an action is taken, the base reward is equal to the distance moved. This is reduced by $10\%$ if the $lane\_1$ action was taken. If a pothole is in the chosen lane along the traversed stretch (i.e., between the previous and next agent positions), then the reward is reduced by $5$. We specify the pothole positions used in our evaluation in the Appendix.
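As a concrete reference for this specification, a minimal sketch of PotholeWorld in Python follows. The class name is ours; the pothole layout is drawn at random here, whereas our evaluation uses the fixed positions given in the Appendix.
\begin{verbatim}
import random

class PotholeWorld:
    """Sketch of PotholeWorld as specified above (random layout here;
    the evaluation uses fixed pothole positions from the Appendix)."""

    def __init__(self, length=50.0, penalty=5.0, lane1_discount=0.9):
        self.length = length
        self.penalty = penalty
        self.lane1_discount = lane1_discount
        self.potholes = []             # (position, lane), lanes 2-3
        x = 0.0
        while True:
            x += random.uniform(1.0, 2.0)
            if x >= length:
                break
            self.potholes.append((x, random.choice([2, 3])))
        self.pos = 0.0

    def reset(self):
        self.pos = 0.0
        return self.pos

    def step(self, lane):              # lane in {1, 2, 3}
        start = self.pos
        self.pos = min(self.length, self.pos + random.uniform(0.5, 1.0))
        reward = self.pos - start      # base reward: distance moved
        if lane == 1:
            reward *= self.lane1_discount   # 10% reduction in lane 1
        else:
            # -5 per pothole in the traversed stretch of this lane.
            hits = sum(1 for (x, l) in self.potholes
                       if l == lane and start < x <= self.pos)
            reward -= self.penalty * hits
        return self.pos, reward, self.pos >= self.length
\end{verbatim}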
\begin{table*}[t] \centering \begin{tabular}{p{3.2cm}p{2.4cm}p{2.4cm}p{2.4cm}p{2.4cm}p{2.4cm}} \toprule \multirow{2}{1cm}{} & \multicolumn{1}{l}{CartPole} & \multicolumn{2}{l}{PrereqWorld} & \multicolumn{2}{l}{PotholeWorld} \\ & {Reward} & {Reward} & {Depth} & {Reward} & {Depth} \\ \midrule VIPER (DQN) & 200.00~~(0.00) & -4.00~~(0.00) & 5.70~~(0.62) & 46.30~~(0.39) & 6.00~~(0.61)\\ VIPER (BI) & 200.00~~(0.00) & -4.00~~(0.00) & 6.00~~(0.00) & 46.31~~(1.32) & 9.18~~(1.08) \\ \mname{} (DQN) & 198.72~~(4.74) & -4.08~~(0.34) & 4.28~~(0.67) & 46.92~~(2.14) & 5.36~~(1.41) \\ \mname{} (PPO) & 199.32~~(3.23) & -4.04~~(0.20) & 4.16~~(0.47) & 45.39~~(0.42) & 1.04~~(0.75) \\ \mname{} (MFEC) & 200.00~~(0.00) & -4.00~~(0.00) & 3.92~~(0.27) & 49.18~~(1.04) & 9.74~~(0.49) \\ \bottomrule \end{tabular} \caption{Final average reward and tree depth (Std Dev).} \label{table:mname_table} \end{table*} \normalsize \subsection{Learning with \mname{}} \label{sec:mname_learning} \begin{figure}[t] \centering \begin{subfigure}[t]{0.95\columnwidth} \centering \includegraphics[width=\columnwidth]{fig/savefile_factory_depth2.png} \label{fig:sizevsdepth} \end{subfigure} \begin{subfigure}[t]{0.95\columnwidth} \centering \includegraphics[width=\columnwidth]{fig/savefile_factory_nodes2.png} \label{fig:sizevsnodes} \end{subfigure} \caption{Tree depth and node count as the PrereqWorld environment size increases (Std Dev bars). \mname{} yields smaller trees for larger environments than VIPER.} \label{fig:res} \end{figure} To evaluate \mname{}'s ability to produce DTPs with a non-interpretable function approximator for the IBMDP, we apply the \mname{} modifications to three base methods: DDQN, PPO, and MFEC with improvements from Neural Episodic Control~\cite{pritzel2017neural}. DDQN is a Q-learning approach that uses a neural network to learn a state-value function and action-value function, which are combined to form a Q-function. PPO is a policy gradient method that uses a critic for estimating the advantage function. We use a neural network for both the actor and critic. MFEC is a Q-learning approach that uses a nearest neighbor model to estimate Q-values. The modifications from Section~\ref{approach_training_procedure} are applied to all three methods. Actions are selected based on IBMDP states without the base state ($s_w \setminus s_b$); this affects the actor for PPO and the Q-function for DDQN and MFEC. DDQN and MFEC are used with a target Q-value function, $Q_o$, when performing updates, as in Figure~\ref{fig:omnqfn}. The target function and the critic for PPO are used with full IBMDP states. We compare to VIPER using two expert types: DQN and Backward Induction (BI). In Table~\ref{table:mname_table}, we show the final average reward and tree depth for 50 trials on CartPole, PrereqWorld ($m=7$), and PotholeWorld. Optimal final average rewards would be 200, -4, and 50, respectively. \mname{} finds DTPs with high average reward for all environments and tends to find shallower DTPs than VIPER. To further evaluate depth-vs.-reward trade-offs, we use VIPER (BI) and \mname{} (MFEC), since these methods have the fewest hyperparameters and are the least computationally expensive. \subsection{Response to Environment Size} \label{sec:env_size} \mname{} discourages the learning of unnecessarily large trees through the use of two penalty terms, $\zeta$ and $\gamma_w$.
These penalties are akin to regularization of the implicit DTP: when multiple optimal DTPs exist for the base MDP, the optimal IBMDP policy corresponds to the DTP with the lowest average leaf height. In contrast, if a tree mimics an expert policy, then the resulting tree will include complex behaviors that are artifacts of the expert's intricacy. We evaluate the decrease in tree size attained by using \mname{} to directly learn a tree. We compare the tree depth and node count for DTPs found by VIPER and \mname{} on PrereqWorld. The environment size is varied through $m$, which specifies the number of states ($2^m$) and the number of actions ($m$). For a given $m$, average reward is equal for both methods. The results are shown in Figure~\ref{fig:res} (50 trials per method/$m$ pair). \mname{} produces smaller trees for $m\geq4$, and the size difference increases with the environment size. This is because an unconstrained expert can learn more complex behaviors with a larger state space, and VIPER faithfully mimics the expert policy. \subsection{Response to Tree Depth} \label{sec:tree_depth} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{fig/savefile_pothole_reward2.png} \caption{Average per-episode reward for trees of a fixed depth for PotholeWorld (Std Dev bars). \mname{} DTPs consistently achieve higher reward than VIPER's DTPs. Bar at 50 indicates maximum possible per-episode reward.} \label{fig:resdepth} \end{figure} If an application requires a DTP of fixed depth, then fitting a DT to an expert policy can yield a poor policy of that depth. This is because the expert is not learned in the context of the depth limitation; imperfectly imitating that expert can lead to low reward. \mname{} yields better policies at a given depth since it directly solves an IBMDP that can be augmented with a depth limit. An IBMDP can include affordances~\cite{khetarpal2020can}, so that information-gathering actions cannot be chosen once $n$ information-gathering actions have been taken since the most recent base action. With this modification, an RL algorithm can directly find the best DTP subject to the depth restriction. We evaluate \mname{}'s ability to find DTPs with high average reward for PotholeWorld subject to a tree depth limit. This domain is designed so the overall optimal DTP cannot be pruned to obtain the optimal DTP for a smaller depth. We present the average episode reward as a function of the depth limit in Figure~\ref{fig:resdepth} for VIPER and \mname{} (50 trials per method/depth pair). \mname{} attains higher reward by using $lane\_1$ when the DTP depth is too shallow to avoid potholes in the other lanes. In contrast, VIPER always attempts to imitate the expert and attains a low reward when the DTP poorly represents the expert policy.
Future work includes generalization of IBMDPs to encode other constraints so other explanation formats can be directly learned. \section{Acknowledgements} This material is based upon work supported by the Department of Defense (DoD) through the National Defense Science \& Engineering Graduate (NDSEG) Fellowship Program, DARPA/AFRL agreement FA87501720152, and NSF grant IIS-1850477. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Department of Defense, Defense Advanced Research Projects Agency, the Air Force Research Laboratory, or the National Science Foundation.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The yielding of glasses is important for a wide range of materials including metallic glasses, polymer glasses and soft glasses. Yielding denotes the ability of any solid to flow and deform irreversibly under applied deformation~\cite{Barnes1999}. At small applied stress and strain, the material deforms mostly elastically; at larger strain, the material starts to flow irreversibly, resulting in permanent deformation. Glasses are structurally frozen liquids with relaxation times exceeding the experimental time scale by many orders of magnitude, hence exhibiting solid-like properties~\cite{Ediger1996}. Yielding is central to many properties of glasses including time-dependent elasticity, relaxation, flow and fracture; insight into yielding should thus provide a deeper understanding of the glassy state~\cite{ColloidalGlasses}, but obtaining such insight remains challenging. The yielding of glasses raises an important fundamental question: how does the dynamically arrested state respond to the application of stress? At the glass transition, microscopic observables change rather smoothly, yet rapidly~\cite{Pusey1987,Nagel1998}, as a function of density or temperature; an important question is whether a similarly smooth transition exists upon application of stress. The response to applied stress, however, remains poorly understood, partially because structural imaging of glasses during their yielding remains difficult~\cite{Petekidis2012}. Colloidal glasses provide good models for a wide range of soft and simple molecular glasses; they exhibit dynamic arrest due to crowding at volume fractions larger than $\phi_g \sim 0.58$, the colloidal glass transition~\cite{Pusey1986,vanMegen}. Microscopically, the particles are trapped within cages formed by their nearest neighbors, allowing only for very slow structural rearrangements. These systems exhibit glass-like properties such as non-ergodicity and aging \cite{Bouchaud1992}, and upon application of sufficiently large stress, they yield and flow~\cite{YieldingCollGlass}. The yielding of colloidal glasses has been widely investigated by oscillatory rheology, in which the sample is probed with a time-dependent, oscillatory strain. Yielding is usually associated with the intersection of the strain-dependent storage and loss moduli, but the exact definition of the yield point remains a matter of debate~\cite{Bonn2006,BonnDenn}. Constitutive relations have been used to model yielding based on structural parameters~\cite{Langer2011}, and recent advanced oscillatory rheology~\cite{Rogers2011} and combined rheology and simulation~\cite{Petekidis2012} have provided some rheological insight into the yielding process. Furthermore, mode-coupling theory and simulations~\cite{Brader2010} have studied the nonlinear stress response in oscillatory shear, and shear heterogeneities in steady-state shear and creep~\cite{shearbanding_sim}. However, the direct observation and investigation of the structure during oscillatory yielding, which would give important insight into the nature of the yielding transition, remained elusive. In this paper, we elucidate the microscopic yielding of glasses by direct measurement of the structure factor during the yielding process. We apply a recently introduced combination of x-ray scattering and rheology to the oscillatory shear of a colloidal glass to monitor the structure factor as the applied strain amplitude increases.
This allows us to obtain new insight into the nature of the yielding transition: We identify a sudden symmetry change in the orientational ordering, reflecting a surprisingly abrupt transition from a solid to a liquid-like state of the glass. Using a structural order parameter we demonstrate the sharpness of the induced transition as a function of the applied mechanical field. This sharp, dynamically induced transition under applied strain appears analogous to first-order equilibrium transitions. The experiments were carried out at the beam line P10 of the synchrotron PETRA III at DESY. Simultaneous rheology and structure factor measurements were achieved by placing an adapted rheometer (Mars II, Thermo Fisher) directly into the beam path of the synchrotron~\cite{BerndLangmuir,Denisov2013}. The well-collimated synchrotron beam with wavelength $\lambda = 0.154$ nm is deflected into the vertical direction to pass through the layer of suspension perpendicular to the rheometer plates (Fig.~\ref{fig:Setup}). The suspension consists of silica particles with a diameter of $50$ nm and a polydispersity of $10\%$, suspended in water. A small amount (1~mM) of NaCl is added to screen the particle charges, resulting in a Debye screening length of $2.7$ nm and an effective particle diameter of $2r_0 = 55.4$ nm. Dense samples were prepared by diluting samples that had been centrifuged to a sediment. Measurements of the relaxation time of these samples yielded $\tau \sim 10^6 t_B$~\cite{Denisov2013}, where $t_B$ is the Brownian time, indicating that the suspension is close to the colloidal glass transition~\cite{vanMegen1998}. Estimation of the effective volume fraction from dilution of the centrifuged sediment yields a value of $\phi \sim 58\%$, consistent with this interpretation~\cite{vanMegen1998}. After loading, the samples are sealed with a small amount of low-viscosity oil to prevent evaporation and maintain sample stability over more than 4 hours. Samples were initialized by preshearing at a rate of $\dot{\gamma}=0.1$ s$^{-1}$ for 120 seconds followed by a rest time of 600 seconds; this procedure guaranteed reproducible results. We then apply oscillatory strain with frequency $f=1$ Hz and strain amplitude $\gamma_0$ increasing from $\gamma_{0min} = 10^{-4}$ to $\gamma_{0max} = 1$ (100 points on a logarithmic scale, with three oscillations averaged per point, leading to a total experiment duration of around 5 minutes). We simultaneously monitor the scattered intensity using a Pilatus detector at a distance of $D = 280$ cm operating at a frame rate of 10 Hz. The detector with pixel size $172 \times 172$ $\mu$m$^2$ covers scattering angles $\theta$ in the range $0.03-0.5^{\circ}$, allowing access to wave vectors $q = (4\pi/\lambda)\sin(\theta/2)$ in the range $qr_0 = 0.5$ to $5$. From the recorded intensity, we determine the structure factor $S(\textbf{q})$ by subtracting the solvent background and dividing by the particle form factor determined from dilute suspensions. An example of the angle-averaged structure factor is shown in Fig.~\ref{fig:Sq}, inset. We focus on the first peak of the structure factor to elucidate changes in the nearest-neighbor structure upon yielding. The applied shear introduces structural anisotropy: in an elastic material, shear leads to a well-known distortion of the structure resulting in an anisotropic intensity distribution along the first ring; this is demonstrated by the angle-resolved structure factor at small strain amplitude in Fig.~\ref{fig:Sq} (red curve).
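Schematically, this reduction from detector images to the angle-resolved first-peak profile $S_1(\alpha)$ can be written in a few lines (a sketch only, assuming the intensity, solvent background, form factor and the polar pixel coordinates $(q,\alpha)$ are available as flat arrays; all function and variable names are our own):
\begin{verbatim}
import numpy as np

def structure_factor(intensity, background, form_factor):
    # S(q) estimate on the detector grid: subtract the solvent
    # background, then divide by the dilute-suspension form factor.
    return (intensity - background) / form_factor

def angular_profile(S, q, alpha, q1, dq, n_wedges=60):
    # Average S over the first diffraction ring: radially over
    # |q - q1| < dq, angularly in wedges of width 2*pi/n_wedges
    # (n_wedges=60 corresponds to the pi/30 wedges used here).
    ring = np.abs(q - q1) < dq
    edges = np.linspace(-np.pi, np.pi, n_wedges + 1)
    idx = np.digitize(alpha[ring], edges) - 1
    return np.array([S[ring][idx == k].mean()
                     for k in range(n_wedges)])
\end{verbatim}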
The two-fold (p-wave) symmetry indicates a solid-like response; it reflects the elastic response of the material to local shear distortions~\cite{Argon_79}, consistent with our direct imaging of the strain field by confocal microscopy~\cite{chikkadi_schall11,schall2007}. The latter reveals the ubiquity of quadrupolar elastic fields known as Eshelby fields~\cite{Eshelby} associated with elementary shear transformations~\cite{picard2004,schall2007}. The normal strain component of this Eshelby field has a $p$-wave symmetry in the shear plane, which is precisely the symmetry that we observe here. We elucidate the strain dependence of this anisotropy by following the structure factor as a function of increasing strain amplitude. While the two-fold symmetry persists at small strain, at larger strain this symmetry is lost and the structure factor becomes isotropic, as shown by the blue line for $\gamma_0 = 10^0$ in Fig.~\ref{fig:Sq}. \begin{figure} \centering \subfigure {\includegraphics[width=0.6\columnwidth]{ExpSetup.eps} \begin{picture}(0,0)(0,0) \put(-155,0){(a)} \end{picture} \label{fig:Setup}} \subfigure {\includegraphics[width=0.85\columnwidth]{Sq_general2.eps} \begin{picture}(0,0)(0,0) \put(-200,0){(b)} \end{picture} \label{fig:Sq}} \caption{(Color online) \subref{fig:Setup} Schematic of the experimental setup showing the x-ray beam and detector with respect to the rheometer and the layer of sheared suspension. The rheometer is stress controlled and we use plate-plate geometry. The x-ray beam passes through the suspension at 0.78 times the disc radius; the beam diameter is smaller than 0.1 mm, much smaller than the disc radius of 18~mm. \subref{fig:Sq} Angle dependence of the first peak of the structure factor $S(q_1)$ for small ($\gamma_0=10^{-2}$, red curve) and large strain amplitudes ($\gamma_0=10^{0}$, blue curve). Here, $\alpha$ is the angle with respect to the shear direction. To calculate each value of $S(\alpha)$ we average in angular wedges of $\pi/30$ and radially over $\Delta q \sim 2w_1$. Inset: angle-averaged structure factor as a function of wave vector magnitude.} \end{figure} To highlight this symmetry change most clearly, we determine the angular correlation function of the angle-dependent structure factor $S_1(\alpha)$, \begin{multline} C(\beta)= \\ \frac{\int_0^{2\pi}(S_1(\alpha+\beta)-\langle S_1(\alpha)\rangle)(S_1(\alpha)-\langle S_1(\alpha)\rangle)d\alpha} {\int_0^{2\pi}(S_1(\alpha)-\langle S_1(\alpha)\rangle)^2 d\alpha}, \label{eq1} \end{multline} where we integrate over all angles $\alpha$ as a function of the correlation angle $\beta$. We reduce possible effects of elliptical distortion of the first ring by averaging radially over an extended range of wave vectors ($\Delta q \sim 2w_1$) around $q_1$. This allows us to follow the underlying symmetry most clearly. We illustrate its strain evolution in the inset of Fig.~\ref{fig:Cor2D_vf30}, where we represent the angular correlation function with color and follow its evolution along the vertical axis. A sudden loss of symmetry is observed at $\gamma_0^{\star}\sim 0.077$, as demonstrated by the sudden disappearance of the p-wave pattern. To investigate the sharpness of this transition, we define an order parameter that measures the degree of anisotropy. A good choice of such an order parameter is the peak value of the correlation function, $C(\beta=\pi)$, which is 1 for the ideal case of p-wave symmetry, and 0 for a complete loss of symmetry. We show this order parameter as a function of $\gamma_0$ in Fig.~\ref{fig:Cor2D_vf30} (blue line).
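On a uniform angular grid, Eq.~(\ref{eq1}) and the order parameter $C(\beta=\pi)$ amount to a normalized circular autocorrelation (a minimal sketch; array names and grid conventions are our own):
\begin{verbatim}
import numpy as np

def angular_correlation(S1):
    # Discrete form of Eq. (1): normalized circular
    # autocorrelation of S1(alpha) over the first ring.
    x = S1 - S1.mean()
    C = np.array([np.dot(np.roll(x, k), x) for k in range(len(x))])
    return C / np.dot(x, x)

def order_parameter(S1):
    # C(beta = pi): close to 1 for two-fold (p-wave) symmetry,
    # close to 0 for an isotropic ring.
    C = angular_correlation(S1)
    return C[len(C) // 2]  # beta = pi on a uniform grid over [0, 2*pi)
\end{verbatim}
For a perfectly two-fold-symmetric profile such as $S_1(\alpha)\propto\cos(2\alpha)$, this returns a value close to 1, as expected for the solid-like response.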
At $\gamma_0^{\star}$, an abrupt drop from $C(\beta=\pi,\gamma_0) \sim 0.8$ to $C(\beta=\pi,\gamma_0) \sim 0$ occurs, demonstrating a surprisingly sharp loss of orientational order and thus melting in the orientational degrees of freedom; at the same time, the mean absolute value of $S(q)$ does not change, indicating robust translational degrees of freedom. We observed a similar symmetry change in real-space imaging of a sheared colloidal glass~\cite{chikkadi_schall11}: the microscopic strain correlations changed symmetry from anisotropic solid to isotropic liquid-like. Such a symmetry change is reminiscent of first-order equilibrium transitions, which demarcate qualitative changes of a material characterized by an order parameter. The order parameter defined here, together with the excellent time resolution, allows us to demonstrate the sharpness of this transition. \begin{figure} \centering {\includegraphics[width=0.85\columnwidth]{CP2Corr1.eps} \caption{(Color online) Order parameter $C(\beta=\pi,\gamma_0)$ as a function of strain amplitude (left axis, blue). Also indicated are the elastic and viscous moduli, $G'$ and $G''$ (right axis, green and black). Inset: Color map showing the evolution of the angular correlation function $C(\beta,\gamma_0)$ with horizontal axis: correlation angle $\beta$; vertical axis: applied strain amplitude $\gamma_0$. Color indicates the value of $C(\beta,\gamma_0)$, see color bar.} \label{fig:Cor2D_vf30}} \end{figure} Concomitantly with the loss of symmetry, the amplitude of fluctuations increases. We investigate these fluctuations by calculating their time correlation via \begin{multline} F(\Delta t)= \\ \frac{1}{T}\int_0^{T}(C(t+\Delta t)-\langle C(t)\rangle)(C(t)-\langle C(t)\rangle)dt, \label{eq2} \end{multline} where $t\sim \log(\gamma_0/\gamma_{0min})$ and we correlate order parameter values $C(t)=C(\beta=\pi,t)$ as a function of delay time $\Delta t \sim \Delta\log(\gamma_0)$. Here, $T$ is the averaging time interval. For sufficiently large $T$, the time correlation should pick out the typical time scale of fluctuations, such as, for example, the underlying oscillation period, during which the colloidal glass may yield and reform \cite{Rogers2011b}. However, our data do not show such a characteristic time scale, possibly due to limited resolution. This is confirmed independently by Fourier analysis. We thus interpret these fluctuations as noise. To investigate their amplitude as a function of $\gamma_0$, we choose a short averaging period $T = 1$~s; this allows us to clearly observe a sudden increase in the noise amplitude at $\gamma_0^{\star}$, as shown in Fig.~\ref{fig:CP2fluct_vf30}. \begin{figure} \centering {\includegraphics[width=0.85\columnwidth]{CP2fluct_vf30.eps} \caption{(Color online) Time correlation of the order parameter, $F(\gamma_0)$ (left axis, blue curve) and kurtosis $\kappa(\gamma_0)$ (right axis, green curve) as a function of strain amplitude. Red dotted line indicates the Gaussian value 3 of the kurtosis. At $\gamma_0^*$, fluctuations increase, and the kurtosis changes sharply to its Gaussian value 3.} \label{fig:CP2fluct_vf30}} \end{figure} We elucidate this sudden increase of amplitude by determining the kurtosis $\kappa$. The kurtosis uses higher moments of the intensity fluctuations to investigate their Gaussian nature: a kurtosis value of 3 indicates a Gaussian distribution and thus that fluctuations are uncorrelated.
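In discrete form, this moment-based measure requires only the angular samples already used above (a sketch; it anticipates the definition of the moments $m_i$ given below, and the names are our own):
\begin{verbatim}
import numpy as np

def ring_kurtosis(S1):
    # kappa = m4 / m2**2 from the angular fluctuations of the
    # first structure-factor peak; kappa = 3 for Gaussian noise.
    x = S1 - S1.mean()
    m2 = np.mean(x**2)
    m4 = np.mean(x**4)
    return m4 / m2**2
\end{verbatim}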
We determine instantaneous values of $\kappa$ from spatial fluctuations of the structure factor along the diffraction ring using $\kappa(\gamma_0)=m_4/m_2^2$, where the $i$-th moment of the structure factor is $m_i=\frac{1}{2\pi}\int_{0}^{2\pi}{(S_1(\alpha)-\langle S_1(\alpha)\rangle)^i d\alpha}$. This allows us to follow the kurtosis as a function of applied strain. The resulting evolution of $\kappa$ is shown in Fig.~\ref{fig:CP2fluct_vf30} (green curve). A sharp increase to a value of 3 occurs precisely at $\gamma_0^{\star}$, indicating a sudden transition to Gaussian fluctuations. The evolution of $\kappa$ mirrors the evolution of the order parameter shown in Fig.~\ref{fig:Cor2D_vf30}; hence, the loss of symmetry is accompanied by instantaneous changes to Gaussian intensity distributions. The disappearance of anisotropy, the increase of the amplitude of fluctuations and their transition to Gaussian distributions suggest sudden melting of the glass in the orientational degrees of freedom. We note that the value of the strain $\gamma_0^{\star}$ is in reasonable agreement with the value 0.08 reported for the ``shear melting'' of colloidal glasses~\cite{Weitz2010}; the precise yield strain value, however, may depend weakly on the shear rate and the interactions of the particles. We thus identify a sharp, dynamically induced transition in the glass structure. To link this microscopic transition to the rheological behavior of the glass, we follow the strain-dependent storage and loss moduli, $G^\prime$ and $G^{\prime\prime}$, simultaneously with the structure factor (see Fig.~\ref{fig:Gprimes}). These moduli show the well-known strain dependence of dense suspensions: the plateau at small strain with $G^\prime > G^{\prime\prime}$ is followed by the decrease of the moduli and their intersection, indicating the non-linear regime. The point where the two moduli cross is generally associated with the yield point~\cite{BonnDenn}: the storage modulus decreases below the loss modulus, indicating a gradual loss of elasticity. Interestingly, the point where the two curves meet is close to $\gamma_0^{\star}$, allowing us to associate the sharp structural transition with the rheological yielding of the material. This is shown most clearly in Fig.~\ref{fig:Cor2D_vf30}, where the moduli have been reproduced in enlarged form (green and black lines); the exact intersection of $G^\prime$ and $G^{\prime\prime}$, however, is hard to pinpoint, and the two curves nearly coincide over some range around $\gamma_0\sim10^{-1}$. Surprisingly, our structure factor analysis provides us with a much sharper definition of the yielding point ($\gamma_0^{\star}$), which makes the structural correlation analysis a powerful tool to pinpoint the yielding of the material. We note, however, that the moduli $G^\prime$ and $G^{\prime\prime}$ as usual represent only the first-harmonic response; higher harmonics are missing in this representation, although they can be quite significant~\cite{Wilhelm,Laettinga}, and their inclusion might provide a sharper mechanical signature of yielding. \begin{figure} \centering \subfigure {\includegraphics[width=0.85\columnwidth]{GpGpp_er4_nq.eps} \label{fig:Gprimes}} \begin{picture}(0,0)(0,0) \put(-225,10){(a)} \end{picture} \subfigure {\includegraphics[width=0.85\columnwidth]{q1w1_vf30.eps} \label{fig:q1w1_vf30}} \begin{picture}(0,0)(0,0) \put(-225,10){(b)} \end{picture} \caption{(Color online) \subref{fig:Gprimes} Elastic and viscous moduli, $G^\prime$ and $G^{\prime\prime}$, as a function of oscillatory strain amplitude.
\subref{fig:q1w1_vf30} Peak position $q_1$ (top) and half-width $w_1$ (bottom) of the first peak of the structure factor as a function of strain amplitude. Red and blue curves correspond to directions along and perpendicular to the shear, respectively.} \end{figure} A further signature of yielding is observed in the position $q_1$ and width $w_1$ of the first peak, as shown in Fig.~\ref{fig:q1w1_vf30}. The initial solidity of the material is reflected in the anisotropy of particle distances; the decrease of $q_1$ along (red curve), and its increase perpendicular to the shear direction (blue curve), indicates that particle separations increase along and decrease perpendicular to the applied shear, making the particles move past each other more easily. The concomitant increase in the peak width (Fig.~\ref{fig:q1w1_vf30}, bottom) indicates that this is accompanied by slight shear-induced disordering. This anisotropy increases until $\gamma_0\sim\gamma_0^{\star}$, where it suddenly disappears: the material can no longer sustain the anisotropic structure, and changes spontaneously into an isotropic fluid-like state. We note that the anisotropy shown in Fig.~\ref{fig:q1w1_vf30} appears as a dip in the correlation function (Fig.~\ref{fig:Cor2D_vf30}) and as a peak in the kurtosis (Fig.~\ref{fig:CP2fluct_vf30}); this allows us to estimate the effect of ring distortions (both real and artificial) on the presented analysis. We conclude that while ring distortions have a visible effect, the observed sharp transition at $\gamma_0^{\star}$ cannot be explained by these continuously evolving distortions and indicates a real structural transition of the glass. This abrupt transition, characterized by the order parameter $C(\beta=\pi)$, is reminiscent of the first-order transitions accompanying conventional solid-liquid phase changes. In the case presented here the transition is dynamically induced, triggered by the application of an external stress field to the glass. By combining oscillatory rheology and time-resolved x-ray scattering, we have identified a sharp structural transition at the yielding of a glass. The structural anisotropy characteristic of a solid vanishes abruptly, and isotropic Gaussian fluctuations characteristic of a liquid appear, indicating a sharp dynamically induced transition from a solid to a liquid-like state. While the overall structural effect is small and difficult to detect by real-space techniques such as confocal microscopy, the large averaging power of x-ray scattering allows us to identify this transition clearly. The definition of angular correlation functions as order parameters allowed us to pinpoint the transition and demonstrate its sharpness. This transition, induced by application of an external shear field, thus appears akin to conventional first-order transitions. \section{Acknowledgements} The authors thank T. A. Nguyen for her support during the measurements and G. Petekidis and J. Dhont for useful discussions. We thank DESY, Petra III, for access to the x-ray beam. This work was supported by the Foundation for Fundamental Research on Matter (FOM) which is subsidized by the Netherlands Organisation for Scientific Research (NWO). P.S. acknowledges support by a Vidi fellowship from NWO.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} As is well known, multivalued stochastic differential equations (MSDEs) are widely applied to model stochastic systems in different branches of science and industry. Stability, boundedness and applications of the solutions are among the most popular research topics in the field of stochastic dynamic systems and control. These equations take the following form: \begin{equation}\label{a} \begin{cases} &{\mathord{{\rm d}}} X(t)+ A(X(t)){\mathord{{\rm d}}} t\ni f( X(t)){\mathord{{\rm d}}} t +g( X(t)){\mathord{{\rm d}}} W(t), t>t_{0}\\ &X(t_{0})=x_{0}\in \overline{D(A)}, \end{cases} \end{equation} where $A$ is a multivalued maximal monotone operator on ${\mathbb R}^{m},$ $\overline{D(A)}$ is the closure of $D(A)$ and $W$ is a standard Brownian motion. Compared with the usual stochastic differential equations (SDEs), i.e., $A=0,$ most of the difficulties for MSDEs come from the presence of the finite-variation process $\{k(t), t\in [0,T]\}$ that forms part of the solution. One only knows that $\{k(t), t\in [0,T]\}$ is a continuous process with finite total variation, and one cannot prove any further regularity. This type of equation was first studied by C\'{e}pa \cite{C1,C2}. Later, Zhang \cite{Z1} extended C\'{e}pa's results to the infinite-dimensional case. Ren et al. \cite{RXZ} studied large deviations for MSDEs, which solved the moderate deviation problem for the above equations. In particular, if $A$ is taken as some subdifferential operator, the corresponding MSDEs can be used to solve a class of stochastic differential equations with reflecting boundary conditions. For MSDEs, more applications can be found in \cite{RW, RWH, PR,EE} and references therein. Recently, Gassous et al. \cite{GRR} built a fundamental framework for the following MSDEs with generalized subgradients: \begin{align*} \begin{cases} &{\mathord{{\rm d}}} x(t)+ H(x(t))\partial \Pi(x(t)){\mathord{{\rm d}}} t\ni f(x(t)){\mathord{{\rm d}}} t + g( x(t)){\mathord{{\rm d}}} B(t), t\in [t_{0}, T],\\ & x(t)=x_{0}, t= t_{0}, \end{cases} \end{align*} where the new quantity $H(\cdot, \cdot): \Omega\times {\mathbb R}^{m}\rightarrow {\mathbb R}^{m\times m}$ acts on the set of subgradients and $\partial\Pi(\cdot)$ is a subdifferential operator. The product $H(x(t)) \partial \Pi(x(t))$ will be called, from now on, the set of oblique subgradients. The problem becomes challenging due to the presence of this new term, since the product preserves neither the monotonicity of the subdifferential operator nor the Lipschitz property of the matrix involved. As a consequence, some specific techniques have been applied to establish existence and uniqueness results for this type of equation. Later, Gassous et al. \cite{GRR1} and Maticiuc and Rotenstein \cite{MR+} investigated backward stochastic differential equations (BSDEs) with oblique subgradients. But there are few applications of MSDEs with oblique subgradients. On the other hand, many researchers are interested in studying the following equations, which are called McKean-Vlasov stochastic differential equations (MVSDEs): \begin{equation*} \begin{cases} &x(t)= x_{0}+\int^{t}_{t_{0}}f( x(s), \mu_{s}){\mathord{{\rm d}}} s +\int^{t}_{t_{0}}g( x(s), \mu_{s}){\mathord{{\rm d}}} B(s), t\in [t_{0}, \infty),\\ &\mu_{t}:= \mbox{the probability distribution of } x(t). \end{cases} \end{equation*} Obviously, the coefficients involved depend not only on the state process but also on its distribution.
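To make this measure dependence concrete, MVSDEs are commonly simulated by replacing $\mu_t$ with the empirical measure of an interacting particle system. The following Euler-type sketch (in Python; all names and coefficients are our own illustrations, and the multivalued term is omitted) shows the idea in the scalar case:
\begin{verbatim}
import numpy as np

def mv_euler(f, g, x0, T, n_steps, n_particles, seed=0):
    # Euler scheme for dX = f(X, mu) dt + g(X, mu) dW, with mu_t
    # replaced by the empirical measure of n_particles copies;
    # f and g receive the whole particle array in place of mu_t.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_particles, float(x0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_particles)
        x = x + f(x, x) * dt + g(x, x) * dW
    return x

# Example: a mean-field drift pulling each particle toward the mean.
samples = mv_euler(lambda x, mu: mu.mean() - x,
                   lambda x, mu: 0.2 * np.ones_like(x),
                   x0=1.0, T=1.0, n_steps=200, n_particles=1000)
\end{verbatim}
In the presence of the multivalued term, every such Euler step would additionally involve a Skorohod-type problem, as in the step-by-step construction used in the existence proof below.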
MVSDEs, being clearly more involved than It\^{o}'s SDEs, arise in McKean \cite{M2}, who was inspired by Kac's Programme in Kinetic Theory \cite{K}, as well as in some other areas of high interest such as the propagation of chaos phenomenon, PDEs, stability, invariant probability measures, social science, economics, engineering, etc. (see e.g. \cite{CD, MH, MMLM, Z1, GA,S, W, GA, DQ1, DQ2, P}). Motivated by the above articles, we shall study the following McKean-Vlasov multivalued stochastic differential equations with oblique subgradients (MVMSDEswOS): \begin{equation}\label{1} \begin{cases} &{\mathord{{\rm d}}} x(t)+ H(x(t), \mu_{t})\partial \Pi(x(t)){\mathord{{\rm d}}} t\ni f(x(t), \mu_{t}){\mathord{{\rm d}}} t +g( x(t), \mu_{t}){\mathord{{\rm d}}} B(t), t\in [t_{0}, T],\\ & x(t_{0})=x_{0}, \end{cases} \end{equation} where $H(\cdot, \cdot, \cdot): \Omega\times {\mathbb R}^{m} \times {\mathscr P}_{2}({\mathbb R}^{m}) \rightarrow {\mathbb R}^{m\times m},$ $\mu_{t}$ is the distribution of $x(t)$ and $x_{0}\in {\mathbb R}^{m}.$ The appearance of $H(\cdot, \cdot)$ means that $H\partial \Pi$ does not inherit the maximal monotonicity of the subdifferential operator. As a consequence, some specific techniques are mandatory when approaching the above problem. The main contributions of the paper are as follows: \begin{itemize} \item[$\bullet$]Since the coefficients involved depend not only on the state process but also on its distribution, the method in \cite{GRR} is ineffective. We use equations of Euler type and Skorohod's representation theorem to prove the existence and uniqueness result. \item[$\bullet$] We present an example to illustrate our theory. The example indicates that a class of SDEs with time-dependent constraints is equivalent to some MVMSDEswOS. \item[$\bullet$] In \cite{GRR}, Gassous et al. studied MSDEswOS, but they did not give applications for this type of equation. As far as we know, there are no works on optimal control problems for MSDEswOS. We will investigate an optimal control problem for MVMSDEswOS and establish the dynamic programming principle for the value function. \end{itemize} We close this section by outlining the organization of this article. In Section 2, we introduce some necessary notations and subdifferential operators. In Section 3, we give our main results and an example to illustrate our theory. In Section 4, we consider an optimal control problem and establish the dynamic programming principle for the value function. Furthermore, we make the following convention: the letter $C(\eta)$ with or without indices will denote different positive constants which depend only on $\eta$ and whose value may vary from one place to another. \section{Notations, Subdifferential operators } \subsection{Notations}Throughout this paper, let $(\Omega, {\mathscr F}, {\mathbb F}, P)$ be a complete probability space with filtration ${\mathbb F} := \{{\mathscr F}_{t}\}_{t\geq 0}$ satisfying the usual conditions (i.e., it is increasing and right continuous, and ${\mathscr F}_{0}$ contains all $P$-null sets), carrying a standard $d$-dimensional Brownian motion $B(t).$ For $x, y \in {\mathbb R}^{m},$ we use $|x |$ to denote the Euclidean norm of $x,$ and use $\langle x, y\rangle$ to denote the Euclidean inner product.
For $M\in {\mathbb R}^{ m\times d},$ $|M |$ represents $\sqrt{\mathrm{Tr} (MM^{\ast})}.$ Denote by ${\mathscr B}({\mathbb R}^{d})$ the Borel $\sigma$-algebra on ${\mathbb R}^{d}.$ Let ${\mathscr P}({\mathbb R}^{m})$ be the space of all probability measures, and let ${\mathscr P}_{p}({\mathbb R}^{m})$ denote the space of all probability measures defined on ${\mathscr B}({\mathbb R}^{m}) $ with finite $p$th moment: $$W_{p}(\mu):=\bigg(\bigg.\int_{{\mathbb R}^{m}}|x|^{p}\mu({\mathord{{\rm d}}} x)\bigg)\bigg.^{\frac{1}{p}}<\infty.$$ For $ \mu, \nu\in {\mathscr P}_{p}({\mathbb R}^{m})$, we define the Wasserstein distance for $p\geq 1$ as follows: $$W_{p}(\mu, \nu):=\inf_{\pi \in \Pi(\mu, \nu)}\bigg\{\bigg. \int_{{\mathbb R}^{m}\times {\mathbb R}^{m}}|x-y|^{p}\pi({\mathord{{\rm d}}} x, {\mathord{{\rm d}}} y) \bigg\}\bigg.^{\frac{1}{p}}, $$ where $\Pi(\mu, \nu)$ denotes the family of all couplings of $\mu, \nu.$ Next, we define several spaces for future use.\\ $C([t_{0},T]; {\mathbb R}^{m})$ stands for the space of all continuous functions from $[t_{0},T]$ to ${\mathbb R}^{m},$ which is endowed with the uniform norm $| \varphi|_{\mathcal{C}(t_{0}, T)} = \sup_{t_{0}\leq t \leq T}|\varphi(t)|$.\\ ${\mathbb V}_{0}$ denotes the set of all continuous functions $k: [0, T]\rightarrow {\mathbb R}^{m}$ with finite variation and $k(0)=0.$ $\updownarrow k\updownarrow^{t}_{s}$ stands for the variation of $k$ on $[s,t],$ and we write $\updownarrow k\updownarrow_{t}=\updownarrow k\updownarrow^{t}_{0}.$\\ $L^{2}(0,T; {\mathbb R}^{m}):=\bigg\{\bigg. \varphi: \varphi \,\mbox{is a square integrable stochastic process, i.e.,}\, |\varphi|_{M^{2}}:=\bigg(\bigg. {\mathbb E}\int^{T}_{0}|\varphi(s)|^{2}{\mathord{{\rm d}}} s \bigg)\bigg.^{\frac{1}{2}}<\infty\bigg\}\bigg..$\\ $B(x,R)$ represents the ball centered at $x$ with radius $R$, and $\bar{B}(x,R)$ represents the closed ball centered at $x$ with radius $R.$ \subsection{Subdifferential operators} We recall the definition of the subdifferential operator of a proper lower semicontinuous (l.s.c. for short) convex function $\Pi$ and the Moreau-Yosida approximation of $\Pi.$ \begin{defn} Assume $\Pi: {\mathbb R}^{m}\rightarrow (-\infty, +\infty]$ is a proper lower semicontinuous convex function such that $\Pi(x)\geq \Pi(0)=0, \forall x\in {\mathbb R}^{m}. $ Denote $D(\Pi)=\{x\in {\mathbb R}^{m}: \Pi(x)<\infty \}.$ The set $$\partial\Pi(x) =\{u\in {\mathbb R}^{m}: \langle u, v-x \rangle + \Pi(x)\leq \Pi(v), \forall v \in {\mathbb R}^{m} \} $$ is called the subdifferential operator of $\Pi.$ Denote its domain by $D(\partial\Pi)=\{x\in {\mathbb R}^{m}: \partial\Pi(x)\neq \emptyset \},$ and denote $Gr(\partial\Pi)=\{(a,b)\in {\mathbb R}^{2m}:a\in {\mathbb R}^{m}, b\in \partial\Pi(a)\},$ ${\mathscr A}:=\bigg\{\bigg. (x,k): x \in C([0, T], \overline{D(\partial\Pi)}), k\in {\mathbb V}_{0}, {\mathord{{\rm d}}} k(t)\in \partial \Pi(x(t)) {\mathord{{\rm d}}} t, \mbox{and}\,\,\langle x(t)-a, {\mathord{{\rm d}}} k(t)- b{\mathord{{\rm d}}} t \rangle \geq 0 \,\,\mbox{for any}\,\,(a,b)\in Gr(\partial \Pi) \bigg\}\bigg..$ \end{defn} The corresponding Moreau-Yosida approximation of $\Pi$ is defined as follows: $$\Pi_{\epsilon}(x)=\inf\left\{\frac{1}{2\epsilon}|z-x|^{2}+\Pi(z): z\in {\mathbb R}^{m}\right\}.$$ We know that $\Pi_{\epsilon}$ is a convex $C^{1}$-class function. For any $x\in {\mathbb R}^{m},$ denote $J_{\epsilon}x=x-\epsilon\nabla \Pi_{\epsilon}(x),$ where $\nabla$ is the gradient operator.
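For orientation, we record a standard one-dimensional illustration (our own example, not taken from the references above): for $m=1$ and $\Pi(x)=|x|$, a direct computation gives the Huber-type regularization $$\Pi_{\epsilon}(x)=\begin{cases} \frac{x^{2}}{2\epsilon}, & |x|\leq \epsilon,\\ |x|-\frac{\epsilon}{2}, & |x|> \epsilon, \end{cases} \qquad \nabla \Pi_{\epsilon}(x)=\begin{cases} \frac{x}{\epsilon}, & |x|\leq \epsilon,\\ \mathrm{sgn}(x), & |x|> \epsilon, \end{cases}$$ so that $J_{\epsilon}x=0$ for $|x|\leq \epsilon$ and $J_{\epsilon}x=x-\epsilon\,\mathrm{sgn}(x)$ otherwise; $\nabla\Pi_{\epsilon}$ is single-valued and $\frac{1}{\epsilon}$-Lipschitz, while $\partial\Pi(0)=[-1,1]$ is multivalued.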
Then we have $\Pi_{\epsilon}(x)=\frac{1}{2\epsilon}|x-J_{\epsilon}x|^{2} +\Pi(J_{\epsilon}x).$ We present some useful properties of the above approximation tools (for more details, see, e.g., \cite{MR+}). \begin{itemize} \item[(a)] $\Pi_{\epsilon}(x)=\frac{\epsilon}{2}|\nabla \Pi_{\epsilon}(x)|^{2}+\Pi(J_{\epsilon}x).$ \item[(b)] $\nabla \Pi_{\epsilon}(x)\in \partial \Pi(J_{\epsilon}x).$ \item[(c)] $|\nabla \Pi_{\epsilon}(x)- \nabla \Pi_{\epsilon}(y) |\leq \frac{1}{\epsilon}|x-y|.$ \item[(d)] $\langle \nabla\Pi_{\epsilon}(x)- \nabla \Pi_{\epsilon}(y), x-y \rangle \geq 0.$ \item[(e)] $\langle \nabla\Pi_{\epsilon}(x)- \nabla \Pi_{\epsilon'}(y), x-y \rangle \geq -(\epsilon +\epsilon')\langle \nabla\Pi_{\epsilon}(x), \nabla \Pi_{\epsilon'}(y) \rangle.$ \item[(f)] $0=\Pi_{\epsilon}(0)\leq \Pi_{\epsilon}(x), J_{\epsilon}(0)= \nabla\Pi_{\epsilon}(0)=0.$ \item [(g)]$\frac{\epsilon}{2}|\nabla\Pi_{\epsilon}(x)|^{2}\leq \Pi_{\epsilon}(x)\leq \langle \nabla\Pi_{\epsilon}(x), x \rangle, \forall x\in {\mathbb R}^{m}.$ \end{itemize} \begin{lem}\label{ab} Suppose that $Int(D(\partial\Pi))\neq \emptyset,$ where $Int(D(\partial \Pi))$ denotes the interior of $D(\partial \Pi).$ Then for any $a \in Int(D(\partial \Pi)), $ there exist constants $\lambda_{1}> 0,\lambda_{2}\geq 0,\lambda_{3}\geq 0$ such that for any $(x, k)\in {\mathscr A} $ and $0\leq s\leq t\leq T,$ $$\int^{t}_{s}\langle x(r)- a, {\mathord{{\rm d}}} k(r)\rangle \geq \lambda_{1}\updownarrow k\updownarrow^{t}_{s} - \lambda_{2}\int^{t}_{s}|x(r)-a|{\mathord{{\rm d}}} r- \lambda_{3}(t-s). $$ \end{lem} \subsection{Strong and weak solutions of Eq.\eqref{1}} \begin{defn} A pair of continuous processes $(x,k)$ is called a strong solution of \eqref{1} if \begin{itemize} \item[\rm{(i)}] $P(x(t_0)=x_{0})=1;$ \item[\rm{(ii)}] $x(t)$ is ${\mathscr F}_{t}-$adapted. \item[\rm{(iii)}] $(x, k)\in {\mathscr A}, a.s., P.$ \item[\rm{(iv)}] $\int^{T}_{t_0}(|f(x(t), \mu_{t})|+|g(x(t), \mu_{t})|^{2}){\mathord{{\rm d}}} t< +\infty, a.s.,P.$ \item[\rm{(v)}] For $y\in C([t_0, +\infty); {\mathbb R}^{m})$ and $t_{0}\leq s \leq t < \infty$, it holds that \begin{align*} \int_{s}^{t}\langle y(r)-x(r), {\mathord{{\rm d}}} k(r)\rangle +\int_{s}^{t} \Pi(x(r)){\mathord{{\rm d}}} r \leq \int_{s}^{t}\Pi (y(r)){\mathord{{\rm d}}} r. \end{align*} \item[\rm{(vi)}] $(x,k)$ satisfies the following equation $$x(t)+ \int^{t}_{t_{0}}H(x(s), \mu_{s}){\mathord{{\rm d}}} k(s)= x_{0}+\int^{t}_{t_0}f( x(s), \mu_{s}){\mathord{{\rm d}}} s +\int^{t}_{t_0}g( x(s), \mu_{s}){\mathord{{\rm d}}} B(s), t\in [t_0, T].$$ \end{itemize} \end{defn} \begin{defn} We say that Eq.\eqref{1} admits a weak solution if there exists a filtered probability space $(\tilde{\Omega}, \tilde{{\mathscr F}}, \{\tilde{{\mathscr F}}_{t}\}_{t\in [t_0,T]}, \tilde{P})$ carrying a standard Brownian motion $\tilde{B}(t)$ as well as a pair of continuous processes $(\tilde{x},\tilde{k})$ defined on $(\tilde{\Omega}, \tilde{{\mathscr F}}, \{\tilde{{\mathscr F}}_{t}\}_{t\in [t_0,T]}, \tilde{P})$ such that \begin{itemize} \item[\rm{(i)}] $\tilde{P}(\tilde{x}(t_0)=x_{0})=1;$ \item[\rm{(ii)}] $\tilde{x}(t)$ is $\tilde{{\mathscr F}}_{t}-$adapted.
\item[\rm{(iii)}] $(\tilde{x}, \tilde{k})\in {\mathscr A}, a.s., \tilde{P}.$ \item[\rm{(iv)}] $\int^{T}_{t_0}(|f(\tilde{x}(t),\mu_{t})|+|g(\tilde{x}(t), \mu_{t})|^{2}){\mathord{{\rm d}}} t< +\infty, a.s.,\tilde{P}.$ \item[\rm{(v)}] For $y\in C([t_0, +\infty); {\mathbb R}^{m})$ and $t_{0}\leq s \leq t < \infty$, it holds that \begin{align*} \int^{t}_{s}\langle y(r)-\tilde{x}(r), {\mathord{{\rm d}}} \tilde{k}(r)\rangle +\int^{t}_{s} \Pi(\tilde{x}(r)){\mathord{{\rm d}}} r \leq \int^{t}_{s}\Pi (y(r)){\mathord{{\rm d}}} r, \end{align*} \item[\rm{(vi)}] $(\tilde{x},\tilde{ k})$ satisfies the following equation $$\tilde{x}(t)+ \int^{t}_{t_{0}}H(\tilde{x}(s), \mu_{s}){\mathord{{\rm d}}} \tilde{ k}(s)= x_{0}+\int^{t}_{t_0}f( \tilde{x}(s), \mu_{s}){\mathord{{\rm d}}} s +\int^{t}_{t_0}g(\tilde{x}(s), \mu_{s}){\mathord{{\rm d}}} \tilde{B}(s), t\in [t_0, T].$$ \end{itemize} \end{defn} \begin{rem} We make the following convention: for a pair of processes $(x, k)$ satisfying ${\mathord{{\rm d}}} k(t)\in \partial \Pi(x(t)) {\mathord{{\rm d}}} t $, we denote by $U(t) $ a process such that ${\mathord{{\rm d}}} k(t)= U(t) {\mathord{{\rm d}}} t.$ \end{rem} \section{Main Results} Before giving our main results for Eq.\eqref{1}, we assume for the sake of simplicity that $f(0, \delta_{0} )=g(0,\delta_{0})=0,$ where $\delta_{x}$ stands for the Dirac measure at $x.$ Now, we make the following assumptions: \begin{itemize} \item[(A1)]The coefficients $f, g$ satisfy, for some positive constant $L>0$ and all $x,y \in {\mathbb R}^{m}, \mu, \nu \in {\mathscr P}_{2}({\mathbb R}^{m}),$ $$|f(x,\mu)-f(y,\nu)|+|g(x,\mu)-g(y,\nu)| \leq L(|x-y|+W_{2}(\mu, \nu)). $$ Furthermore, from the above assumption, one has $$|f(x,\mu)|\leq L(|x|+W_{2}(\mu, \delta_{0})), |g(x,\mu)|\leq L(|x|+W_{2}(\mu, \delta_{0})). $$ \item[(A2)] $H(\cdot, \cdot)=(a_{ij}(\cdot, \cdot))_{m\times m}: {\mathbb R}^{m}\times {\mathscr P}_{2}({\mathbb R}^{m})\rightarrow {\mathbb R}^{m\times m}$ is a continuous mapping and for any $(x, \mu)\in {\mathbb R}^{m}\times {\mathscr P}_{2}({\mathbb R}^{m}),$ $H(x, \mu)$ is an invertible symmetric matrix. Moreover, there exist two positive constants $a_{H}, b_{H}$ such that \begin{itemize} \item[(i)] $a_{H}|u|^{2}\leq \langle H(x, \mu)u ,u\rangle\leq b_{H}|u|^{2}, \forall u\in {\mathbb R}^{m}.$ \item[(ii)]For all $x,y \in {\mathbb R}^{m}, \mu, \nu \in {\mathscr P}_{2}({\mathbb R}^{m}),$ $$|H(x,\mu)-H(y,\nu)| +|H^{-1}(x,\mu)-H^{-1}(y,\nu)|\leq L(|x-y|+W_{2}(\mu, \nu)), $$ where $|H(x,\mu)|:=\bigg(\bigg.\sum^{m}_{i, j=1} |a_{ij}(x,\mu)|^{2} \bigg)\bigg.^{\frac{1}{2}}$ and $H^{-1}(x,\mu)$ is the inverse of the matrix $H(x,\mu).$ \end{itemize} \end{itemize} \begin{lem} \label{L2L} Let $I$ be an arbitrary set of indexes. For each $i\in I,$ suppose that $(\Omega^{i}, {\mathscr F}^{i}, P^{i},$ $\{{\mathscr F}_{t}^{i}\}_{t\geq 0}, B^{i}, x^{i}, k^{i} )$ is a weak solution of the equation \begin{equation}\label{i+++} \begin{cases} & {\mathord{{\rm d}}} x^{i}(t)+ H_{i}\partial \Pi(x^{i}(t)){\mathord{{\rm d}}} t\\ &\quad \quad \quad \ni f^{i}(x^{i}(t), \mu^{i}_{t}){\mathord{{\rm d}}} t + g^{i}( x^{i}(t), \mu^{i}_{t}){\mathord{{\rm d}}} B^{i}(t), t\in [t_{0}, T],\\ & x^{i}(t_{0})=x^{i}_{0}\in {\mathbb R}^{m}.
\end{cases} \end{equation} where $f^{i}, g^{i}, H_{i}$ satisfy $(\mathrm{A}1)-(\mathrm{A}2)$ and $H_{i}$ is independent of $x, \mu.$ If $\sup_{i}{\mathbb E}[\sup_{t_{0}\leq t \leq T}|x^{i}(t)|^{2}]< \infty,$ then $(x^{i}, k^{i})_{i\in I}$ is tight in $C([t_{0}, T]; {\mathbb R}^{m})\times C([t_{0}, T]; {\mathbb R}^{m}).$ \end{lem} \begin{proof} Set $\hat{x}^{i}(t)= H_{i}^{-\frac{1}{2}}x^{i}(t).$ From \eqref{i+++}, we know that $\hat{x}^{i}(t)$ satisfies the following equation: \begin{equation}\label{i++} \begin{cases} & {\mathord{{\rm d}}} \hat{x}^{i}(t)+ H_{i}^{\frac{1}{2}}\partial \Pi(x^{i}(t)){\mathord{{\rm d}}} t\\ &\quad \quad \quad \ni H_{i}^{-\frac{1}{2}}f^{i}(x^{i}(t), \mu^{i}_{t}){\mathord{{\rm d}}} t + H_{i}^{-\frac{1}{2}}g^{i}( x^{i}(t), \mu^{i}_{t}){\mathord{{\rm d}}} B^{i}(t), t\in [t_{0}, T],\\ & \hat{x}^{i}(t_{0})=H_{i}^{-\frac{1}{2}}x^{i}_{0}. \end{cases} \end{equation} When we apply It\^{o}'s formula to $|\hat{x}^{i}(t)|^{2}$ for the above equation, the factor $H_{i}^{\frac{1}{2}}$ disappears, since $\langle \hat{x}^{i}(t), H_{i}^{\frac{1}{2}}u\rangle=\langle x^{i}(t), u\rangle.$ Then, we can use the maximal monotonicity of the subdifferential operator $\partial \Pi$. Using a similar method to that of \cite[Theorem 3.2]{Z}, we can get the desired result. \end{proof} \begin{thm}\label{3.2} Let $f,g$ satisfy the following linear growth conditions: $$|f(x,\mu)|\leq L(|x|+W_{2}(\mu, \delta_{0})), |g(x,\mu)|\leq L(|x|+W_{2}(\mu, \delta_{0})),$$ and assume $(\mathrm{A2})$ holds. Then Eq.\eqref{1} has a weak solution. \end{thm} \begin{proof} For fixed $n,$ take $t_{n}= 2^{-n}\lfloor 2^{n}t\rfloor,$ where $\lfloor z\rfloor$ denotes the integer part of a real number $z.$ In addition, denote $\mu_{t_{0}}=\delta_{x_{0}}.$ Consider the following approximation equation for $n\geq 2$: \begin{equation}\label{h} \begin{cases} &{\mathord{{\rm d}}} x^{n}(t)+ H(x^{n-1}(t_{n}), \mu^{n-1}_{t_{n}})\partial \Pi(x^{n}(t)){\mathord{{\rm d}}} t\\ &\quad \quad \quad \ni f(x^{n-1}(t_{n}), \mu^{n-1}_{t_{n}}){\mathord{{\rm d}}} t +g( x^{n-1}(t_{n}), \mu^{n-1}_{t_{n}}){\mathord{{\rm d}}} B(t), t\in [t_{0}, T],\\ & x^{n}(t_{0})=x_{0}. \end{cases} \end{equation} By solving a deterministic Skorohod problem (see \cite{C2} for more details), we can obtain the solution of this equation step by step. Thus, there exists $(x^{n}, k^{n})$ such that \begin{equation}\label{i} \begin{cases} & x^{n}(t)+ \int^{t}_{t_{0}}H(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} k^{n}(s)\\ &\quad \quad \quad = x_{0}+\int^{t}_{t_{0}}f(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} s +\int^{t}_{t_{0}} g( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} B(s), t\in [t_{0}, T],\\ & x^{n}(t_{0})=x_{0}. \end{cases} \end{equation} For given $a \in Int (D(\partial \Pi)),$ set $\hat{x}^{n}(t)=x^{n}(t)-a, \tilde{x}^{n}(t) =H^{-\frac{1}{2}}(x^{n-1}(t_{n}), \mu^{n-1}_{t_{n}})\hat{x}^{n}(t).$ Then, $\tilde{x}^{n}$ satisfies the following equation: \begin{align}\label{a10+} \tilde{x}^{n}(t)&= \tilde{x}^{n}(t_{0})+\int^{t}_{t_{0}}H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}}\hat{x}^{n}(s)\nonumber\\ & =\tilde{x}^{n}(t_{0})+\int^{t}_{t_{0}}H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})f( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} s - \int^{t}_{t_{0}}H^{\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} k^{n}(s)\nonumber\\ &+ \int^{t}_{t_{0}}H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})g( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} B(s).
\end{align} Using It\^{o}'s formula, we have \begin{align}\label{j} &|\tilde{x}^{n}(t)|^{2}= |\tilde{x}^{n}(t_{0})|^{2} + 2\int^{t}_{t_0}\langle \tilde{x}^{n}(s), H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})f( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}) \rangle{\mathord{{\rm d}}} s\nonumber\\ & +\int^{t}_{t_0}|H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})g( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})|^{2}{\mathord{{\rm d}}} s\nonumber\\ & + 2\int^{t}_{t_0}\langle \tilde{x}^{n}(s), H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})g( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} B(s)\rangle \nonumber\\ &- 2\int^{t}_{t_{0}}\langle x^{n}(s)-a, {\mathord{{\rm d}}} k^{n}(s)\rangle\nonumber\\ &=: |\tilde{x}^{n}(t_0)|^{2} + \sum^{4}_{i=1}I_{i}. \end{align} First, we estimate $I_{1}$: \begin{align*} I_{1}=2\int^{t}_{t_0}&\langle \tilde{x}^{n}(s), H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})f( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}) \rangle{\mathord{{\rm d}}} s\\ &\leq 2\int^{t}_{t_0}|\tilde{x}^{n}(s)|^{2}{\mathord{{\rm d}}} s+ a_{H}^{-1}\int^{t}_{t_0} | f( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})|^{2}{\mathord{{\rm d}}} s\\ & \leq (2+8a_{H}^{-1}b_{H}L^{2})\int^{t}_{t_0}\sup_{1\leq i\leq n}\sup_{t_{0}\leq r\leq s}|\tilde{x}^{i}(r)|^{2}{\mathord{{\rm d}}} s+ 8L^{2}a^{2}a_{H}^{-1}T. \end{align*} Similarly, for $I_{2},$ we have \begin{align*} I_{2}=\int^{t}_{t_0}&|H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})g( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})|^{2}{\mathord{{\rm d}}} s\\ &\leq 8a_{H}^{-1}b_{H}L^{2}\int^{t}_{t_0}\sup_{1\leq i\leq n}\sup_{t_{0}\leq r\leq s}|\tilde{x}^{i}(r)|^{2}{\mathord{{\rm d}}} s+ 8L^{2}a^{2}a_{H}^{-1}T. \end{align*} Moreover, from the Burkholder-Davis-Gundy (BDG) inequality, for any $l>0,$ we have \begin{align*} I_{3}={\mathbb E}&\sup_{t_0\leq r\leq t}\bigg|\bigg.\int^{r}_{t_{0}}\langle \tilde{x}^{n}(s), H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})g( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} B(s)\rangle\bigg|\bigg.\\ & \leq 32{\mathbb E}\bigg(\bigg.\int^{t}_{t_{0}}|\tilde{x}^{n}(s)|^{2} | H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})g( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})|^{2} {\mathord{{\rm d}}} s \bigg)\bigg.^{\frac{1}{2}}\\ & \leq 32{\mathbb E}\bigg[\bigg. \sup_{t_0\leq s\leq t}|\tilde{x}^{n}(s)|\bigg(\bigg.\int^{t}_{t_{0}} |H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})g( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})|^{2} {\mathord{{\rm d}}} s \bigg)\bigg.^{\frac{1}{2}}\bigg]\bigg.\\ & \leq \frac{1}{l}{\mathbb E}[\sup_{t_0\leq s\leq t}|\tilde{x}^{n}(s)|^{2}] +32l {\mathbb E}\int^{t}_{t_{0}}|H^{-\frac{1}{2}}(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})g( x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}})|^{2}{\mathord{{\rm d}}} s\\ &\leq \frac{32}{l}{\mathbb E}[\sup_{1\leq i\leq n}\sup_{t_{0}\leq s\leq t}|\tilde{x}^{i}(s)|^{2}] + 256 l L^{2}a_{H}^{-1}b_{H} {\mathbb E}\int^{t}_{t_{0}}\sup_{1\leq i\leq n}\sup_{t_{0}\leq r\leq s}|\tilde{x}^{i}(r)|^{2}{\mathord{{\rm d}}} s+ 8L^{2}a^{2}a_{H}^{-1}T. \end{align*} Finally, we calculate $I_{4}.$ By Lemma \ref{ab}, we have \begin{align*} - 2\int^{t}_{t_{0}}\langle x^{n}(s)-a, {\mathord{{\rm d}}} k^{n}(s)\rangle&\leq 2\lambda_{2}\int^{t}_{t_{0}}|x^{n}(s)-a|{\mathord{{\rm d}}} s+ 2\lambda_{3}T\\ & \leq 2\lambda_{2}b_{H}T\int^{t}_{t_{0}}\sup_{1\leq i\leq n}\sup_{t_{0}\leq r\leq s}|\tilde{x}^{i}(r)|^{2}{\mathord{{\rm d}}} s+ 2\lambda_{3}T.
\end{align*} Take $l= 64.$ Combining the above calculations, we have \begin{align*} {\mathbb E}[\sup_{1\leq i\leq n}\sup_{t_{0}\leq s\leq t}|\tilde{x}^{i}(s)|^{2}]&\leq (2^{18}a_{H}^{-1}b_{H}L^{2}+2^{5}a_{H}^{-1}b_{H}L^{2}+2\lambda_{2}b_{H}T+2 ){\mathbb E}\int^{t}_{t_{0}}\sup_{1\leq i\leq n}\sup_{t_{0}\leq r\leq s}|\tilde{x}^{i}(r)|^{2}{\mathord{{\rm d}}} s\\ &+ 8L^{2}a^{2}a_{H}^{-1}T+2\lambda_{3}T+|\tilde{x}^{n}(t_{0})|^{2}. \end{align*} By Gronwall's inequality, we have $$\sup_{ n\geq 0}{\mathbb E}[\sup_{t_{0}\leq s\leq t}|\tilde{x}^{n}(s)|^{2}]\leq C(a_{H},b_{H},T, \lambda_{2},\lambda_{3} ).$$ Then, $$\sup_{ n\geq 0}{\mathbb E}[\sup_{t_{0}\leq s\leq t}|x^{n}(s)|^{2}]\leq C(a_{H},b_{H},T, \lambda_{2},\lambda_{3} ).$$ Furthermore, from Lemma \ref{ab} and \eqref{j}, we derive $$\sup_{ n\geq 0}{\mathbb E}[\sup_{t_{0}\leq s\leq t}|k^{n}(s)|^{2}]\leq C(a_{H},b_{H},T, \lambda_{2},\lambda_{3} ).$$ Using the above estimates, by Lemma \ref{L2L}, $(x^{n}, k^{n})_{n\in {\mathbb N}}$ is tight in $C([t_{0}, T]; {\mathbb R}^{2m}).$ By the Prohorov theorem, there exists a subsequence, which we still denote by $(x^{n}, k^{n}, \updownarrow k^{n}\updownarrow, B),$ such that, as $n\rightarrow \infty,$ $$(x^{n}, k^{n}, \updownarrow k^{n}\updownarrow, B)\Rightarrow (x, k, \updownarrow k\updownarrow, B).$$ By the Skorohod representation theorem, we can choose a probability space $(\check{\Omega}, \check{{\mathscr F}}, \check{P})$ and quadruples $(\check{x}^{n}, \check{k}^{n}, \check{V}^{n}, \check{B}^{n})$ and $(\check{x}, \check{k}, \check{V}, \check{B})$ defined on $(\check{\Omega}, \check{{\mathscr F}}, \check{P}),$ having the same laws as $(x^{n}, k^{n}, \updownarrow k^{n}\updownarrow, B)$ and $(x, k, \updownarrow k \updownarrow, B),$ respectively, such that, in $C([t_{0}, T]; {\mathbb R}^{2m+1+d}),$ $$(\check{x}^{n}, \check{k}^{n}, \check{V}^{n}, \check{B}^{n})\rightarrow (\check{x}, \check{k}, \check{V}, \check{B}), \quad \check{P}\mbox{-a.s., as}\,\, n\rightarrow \infty. $$ Since $(x^{n}, k^{n}, \updownarrow k^{n} \updownarrow, B)\Rightarrow (\check{x}, \check{k}, \check{V}, \check{B}),$ by Proposition 16 in \cite{GRR} we have that, for all $t_{0}\leq s<t,$ \begin{align} \label{k1} &\check{x}(t_{0})=x_{0}, \check{k}(t_{0})=0, |\check{k}|_{v}(t)-|\check{k}|_{v}(s)\leq \check{V}(t)-\check{V}(s), 0=\check{V}(t_{0})\leq\check{V}(s)\leq \check{V}(t), \check{P}-a.e. \end{align} Furthermore, since for all $t_{0}\leq s<t,$ \begin{align*} \int^{t}_{s}\Pi(x^{n}(r)){\mathord{{\rm d}}} r \leq \int^{t}_{s}\Pi(y(r)){\mathord{{\rm d}}} r - \int^{t}_{s}\langle y(r)-x^{n}(r), {\mathord{{\rm d}}} k^{n}(r) \rangle, \end{align*} by \cite[Proposition 16]{GRR} one can see that \begin{align}\label{k2} \int^{t}_{s}\Pi(\check{x}(r)){\mathord{{\rm d}}} r \leq \int^{t}_{s}\Pi(y(r)){\mathord{{\rm d}}} r - \int^{t}_{s}\langle y(r)-\check{x}(r), {\mathord{{\rm d}}} \check{k}(r) \rangle. \end{align} Combining \eqref{k1} and \eqref{k2}, we infer $${\mathord{{\rm d}}} \check{k}(r)\in \partial \Pi (\check{x}(r))({\mathord{{\rm d}}} r). $$ Using Lebesgue's dominated convergence theorem, we derive \begin{align*} \check{M}^{n}(\cdot)&=x_{0}+\int^{\cdot}_{t_{0}}f(\check{x}^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} s +\int^{\cdot}_{t_{0}}g( \check{x}^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} \check{B}^{n}(s)\\ & \rightarrow \check{M}(\cdot)=x_{0}+\int^{\cdot}_{t_{0}}f(\check{x}(s), \mu_{s}){\mathord{{\rm d}}} s +\int^{\cdot}_{t_{0}}g( \check{x}(s), \mu_{s}){\mathord{{\rm d}}} \check{B}(s)\,\, \mbox{in}\,\, L^{2}(t_{0}, T; {\mathbb R}^{m}).
\end{align*} By Proposition 17 in \cite{GRR}, it follows that $$\mathcal{L}(\check{x}^{n}, \check{k}^{n}, \check{B}^{n},\check{M}^{n} )=\mathcal{L}(x^{n}, k^{n}, B,M^{n} )\,\, \mbox{in}\,\, C([t_{0}, T]; {\mathbb R}^{3m+d}), $$ where $\mathcal{L}(\cdot)$ denotes the probability law of the random variables. Since $$ x^{n}(t)+ \int^{t}_{t_{0}}H(x^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} k^{n}(s)-M^{n}(t)=0, $$ we have $$ \check{x}^{n}(t)+ \int^{t}_{t_{0}}H(\check{x}^{n-1}(s_{n}), \mu^{n-1}_{s_{n}}){\mathord{{\rm d}}} \check{k}^{n}(s)-\check{M}^{n}(t)=0. $$ Letting $n\rightarrow \infty, $ one has $$ \check{x}(t)+ \int^{t}_{t_{0}}H(\check{x}(s), \mu_{s}){\mathord{{\rm d}}} \check{k}(s)-\check{M}(t)=0. $$ This means \begin{align*} \check{x}(t)+ \int^{t}_{t_{0}}H(\check{x}(s), \mu_{s}){\mathord{{\rm d}}} \check{k}(s)=x_{0}+ \int^{t}_{t_{0}}f(\check{x}(s), \mu_{s}){\mathord{{\rm d}}} s + \int^{t}_{t_{0}}g( \check{x}(s), \mu_{s}){\mathord{{\rm d}}} \check{B}(s). \end{align*} Thus, $((\check{\Omega}, \check{{\mathscr F}}, \check{P}), {\mathscr F}_{t}^{\bar{B},\bar{X}},\check{x}(t), \check{k}(t), \check{B}(t) )_{t\geq t_{0}}$ is a weak solution. The proof is complete. \end{proof} \begin{thm} Let $(\mathrm{A}1)-(\mathrm{A}2)$ hold and assume $H$ is independent of $\mu.$ Then Eq.\eqref{1} has a unique strong solution. \end{thm} \begin{proof} Based on Theorem \ref{3.2}, it suffices to prove pathwise uniqueness. Assume that $(x, k)$ and $(x', k')$ are two solutions of \eqref{1}. Let $Q(t):= H^{-1}(x(t))+H^{-1}(x'(t)). $ Then, we have $${\mathord{{\rm d}}} Q^{\frac{1}{2}}(t)= {\mathord{{\rm d}}} N(t)+ \sum^{d}_{i=1}\beta_{i}(t){\mathord{{\rm d}}} B_{i}(t), $$ where $N$ is an ${\mathbb R}^{m\times m}$-valued bounded variation continuous stochastic process with $N(t_{0})=0,$ and $\beta_{i}, i=1,2,\cdots, d,$ are ${\mathbb R}^{m\times m}$-valued stochastic processes such that ${\mathbb E}\int^{T}_{t_{0}}|\beta_{i}(t)|^{2}{\mathord{{\rm d}}} t< \infty.$ Set $\hat{x}(t)=Q^{\frac{1}{2}}(t)(x(t)-x'(t)).$ Then, $\hat{x}(t)$ satisfies the following equation: \begin{align*} {\mathord{{\rm d}}} \hat{x}(t)&=[{\mathord{{\rm d}}} Q^{\frac{1}{2}}(t)](x(t)-x'(t))+ Q^{\frac{1}{2}}(t){\mathord{{\rm d}}} (x(t)-x'(t))+ \sum^{d}_{i=1}\beta_{i}(t)(g(x(t), \mu_{t})-g(x'(t), \mu'_{t}))e_{i}{\mathord{{\rm d}}} t\\ &=: {\mathord{{\rm d}}} F(t)+ G(t){\mathord{{\rm d}}} B(t), \end{align*} where \begin{align*} {\mathord{{\rm d}}} F(t)&=[{\mathord{{\rm d}}} N(t)]Q^{-\frac{1}{2}}(t)\hat{x}(t)- Q^{\frac{1}{2}}(t)(H(x(t)){\mathord{{\rm d}}} k(t)-H(x'(t)){\mathord{{\rm d}}} k'(t) )\\ &+Q^{\frac{1}{2}}(t)[f(x(t), \mu_{t})-f(x'(t), \mu'_{t})]{\mathord{{\rm d}}} t+ \sum^{d}_{i=1}\beta_{i}(t)(g(x(t), \mu_{t})-g(x'(t), \mu'_{t}))e_{i}{\mathord{{\rm d}}} t, \end{align*} and \begin{align*} G(t)&=\Gamma(t)+ Q^{\frac{1}{2}}(t)(g(x(t), \mu_{t})-g(x'(t), \mu'_{t})), \end{align*} where $\Gamma(t)$ is an ${\mathbb R}^{m\times d}$ matrix with columns $\beta_{1}(t)(x(t)-x'(t)), \cdots, \beta_{d}(t)(x(t)-x'(t)).$\\ Using the properties of the matrices $H, H^{-1}$ and the following relation \begin{align*} Q(t)(H(x'(t)){\mathord{{\rm d}}} k'(t)-H(x(t)){\mathord{{\rm d}}} k(t) ) &= (H^{-1}(x(t))-H^{-1}(x'(t)))[H(x'(t)){\mathord{{\rm d}}} k'(t)\\ &+H(x(t)){\mathord{{\rm d}}} k(t)]+ 2({\mathord{{\rm d}}} k'(t)-{\mathord{{\rm d}}} k(t)), \end{align*} we have \begin{align*} \langle &\hat{x}(t), -Q^{\frac{1}{2}}(t)(H(x(t)){\mathord{{\rm d}}} k(t)-H(x'(t)){\mathord{{\rm d}}} k'(t) ) \rangle\\ &=\langle x(t)-x'(t), (H^{-1}(x(t))-
H^{-1}(x'(t)))(H(x(t)){\mathord{{\rm d}}} k(t)+H(x'(t)){\mathord{{\rm d}}} k'(t)) \rangle\\ &\quad - 2\langle x(t)-x'(t), {\mathord{{\rm d}}} k(t)- {\mathord{{\rm d}}} k'(t) \rangle\\ & \leq C(L, b_{H})|x(t)-x'(t)|^{2}({\mathord{{\rm d}}} \updownarrow k\updownarrow_t+ {\mathord{{\rm d}}} \updownarrow k'\updownarrow_t), \end{align*} where we used the monotonicity of $\partial\Pi$ for the second term. Thus, $$\langle \hat{x}(t), {\mathord{{\rm d}}} F(t) \rangle + \frac{1}{2}|G(t)|^{2}{\mathord{{\rm d}}} t \leq |\hat{x}(t)|^{2}{\mathord{{\rm d}}} V(t), $$ where $${\mathord{{\rm d}}} V(t)= C(L, b_{H})(1+ {\mathord{{\rm d}}} \updownarrow N\updownarrow_{t}+ {\mathord{{\rm d}}} \updownarrow k\updownarrow_{t}+ {\mathord{{\rm d}}} \updownarrow k'\updownarrow_{t})+ C(L, b_{H})\sum^{d}_{i=1}|\beta_{i}(t)|^{2}{\mathord{{\rm d}}} t. $$ By \cite[Proposition 14]{GRR}, it follows that $${\mathbb E}\frac{e^{-2V(t)}|\hat{x}(t)|^{2}}{1+2e^{-2V(t)}|\hat{x}(t)|}\leq {\mathbb E}\frac{e^{-2V(0)}|\hat{x}(0)|^{2}}{1+2e^{-2V(0)}|\hat{x}(0)|}=0. $$ Then, $$Q^{\frac{1}{2}}(t)(x(t)-x'(t))=0\quad \mbox{for all}\,\, t\geq t_{0}. $$ Consequently, $$x(t)=x'(t)\quad \mbox{for all}\,\, t\geq t_{0}.$$ Then, the uniqueness holds. The proof is complete. \end{proof} The following examples illustrate the existence and uniqueness theory. \begin{exa} {\rm Assume that ${\mathscr O}$ is a closed convex subset of ${\mathbb R}^{2}$ and that $I_{{\mathscr O}}$ is the indicator function (in the sense of convex analysis) of ${\mathscr O}$, i.e., \begin{align*} I_{{\mathscr O}}(x)=\begin{cases} &+\infty, x \notin {\mathscr O},\\ &0, x\in {\mathscr O}.\\ \end{cases} \end{align*} Then, the subdifferential operator of $I_{{\mathscr O}}$ is given by \begin{align*} \partial I_{{\mathscr O}}(x)=\begin{cases} &\emptyset, x \notin {\mathscr O},\\ &\{0\}, x\in \mbox{Int} ({\mathscr O}),\\ & \Lambda_{x}, x\in \partial {\mathscr O}, \end{cases} \end{align*} where $\Lambda_{x}$ is the exterior normal cone at $x$ and $\mbox{Int} ({\mathscr O})$ is the interior of ${\mathscr O}.$ For any $(x, \mu)\in {\mathbb R}^{2}\times {\mathscr P}_{2}({\mathbb R}^{2}), $ set $$ H(x, \mu)=\left ( \begin{array}{ccc} \sin x_{1} +5+ \cos (W_{2}(\mu, \delta_{0})) & 0 \\ 0 & e^{\cos x_{2}}+4 + (W_{2}(\mu, \delta_{0})\wedge 1)\\ \end{array} \right), $$ $$f(x, \mu)=\sqrt{|x|^{2}+5}+W_{2}(\mu, \delta_{0}), g(x, \mu)=e^{1\wedge |x|}+\sin (W_{2}(\mu, \delta_{0})).$$ Obviously, $H(\cdot, \cdot), f(\cdot, \cdot), g(\cdot, \cdot)$ satisfy $\mathrm{(A1)}$ and $\mathrm{(A2)}.$ Then, the following equation has a unique weak solution: \begin{align*} \begin{cases} &{\mathord{{\rm d}}} x(t)+ H(x(t), \mu_{t})\partial I_{{\mathscr O}}(x(t)){\mathord{{\rm d}}} t\ni f(x(t), \mu_{t}){\mathord{{\rm d}}} t +g( x(t), \mu_{t}){\mathord{{\rm d}}} B(t), t\in [t_{0}, T],\\ & x(t_{0})=x_{0}. \end{cases} \end{align*} Moreover, if we take $$ H(x)=\left ( \begin{array}{ccc} \sin x_{1} +5 & 0 \\ 0 & e^{x_{1}\wedge 1}+4 + \cos x_{2}\\ \end{array} \right), $$ then the following equation has a unique strong solution: \begin{align*} \begin{cases} &{\mathord{{\rm d}}} x(t)+ H(x(t))\partial I_{{\mathscr O}}(x(t)){\mathord{{\rm d}}} t\ni f(x(t), \mu_{t}){\mathord{{\rm d}}} t +g( x(t), \mu_{t}){\mathord{{\rm d}}} B(t), t\in [t_{0}, T],\\ & x(t_{0})=x_{0}. \end{cases} \end{align*} } \end{exa} \begin{exa} {\rm In this example, we shall show that some MVMSDEs with time-dependent constraints can be transferred to suitable MVMSDEswOS.
More precisely, consider the following MVMSDEs with time-dependent constraints: \begin{equation}\label{37} \begin{cases} &{\mathord{{\rm d}}} x(t)+\partial I_{H(t)\Xi}(x(t)){\mathord{{\rm d}}} t\ni f(x(t), \mu_{t}\circ H(t)){\mathord{{\rm d}}} t +g( x(t), \mu_{t}\circ H(t)){\mathord{{\rm d}}} B(t), t\in [t_{0}, T],\\ & x(t_{0})=x_{0}, \end{cases} \end{equation} where $\mu_{t}$ is the distribution of $x(t),$ $\Xi\subset {\mathbb R}^{m}$ is a convex set, and $H: [0, T] \rightarrow {\mathbb R}^{m\times m}$ is a deterministic, continuously differentiable, time-dependent matrix satisfying $(\mathrm{A2})$. Assume the coefficients $f,g$ satisfy condition $(\mathrm{A1}).$ Furthermore, for any ${\mathscr O}\in {\mathscr B}({\mathbb R}^{m}),$ we set $H(t)({\mathscr O}):=\{H(t)a\,|\, a\in {\mathscr O} \}$ and $\mu\circ H(t)({\mathscr O}):=\mu(H(t)({\mathscr O})).$ Since $\partial I_{H(t)\Xi}(y)= H^{-1}(t)\partial I_{\Xi}(H^{-1}(t)y), \forall y \in D(\partial I_{{H(t)\Xi}}),$ we can easily verify that $x$ is a solution of Eq.\eqref{37} if and only if $\bar{x}=H^{-1}x$ is a solution of the following MVMSDEswOS: \begin{equation}\label{37+} \begin{cases} &{\mathord{{\rm d}}} \bar{x}(t)+(H^{-1}(t))^{2}\partial I_{\Xi}(\bar{x}(t)){\mathord{{\rm d}}} t\ni \bar{f}(\bar{x}(t), \bar{\mu}_{t}){\mathord{{\rm d}}} t +\bar{g}( \bar{x}(t), \bar{\mu}_{t}){\mathord{{\rm d}}} B(t), t\in [t_{0}, T],\\ & \bar{x}(t_{0})=H^{-1}(t_{0})x_{0}, \end{cases} \end{equation} where \begin{align*} &\bar{f}(\bar{x}(t), \bar{\mu}_{t})=H^{-1}(t)(f(H(t)\bar{x}(t), \bar{\mu}_{t})-H'(t)\bar{x}(t)),\\ &\bar{g}(\bar{x}(t), \bar{\mu}_{t})=H^{-1}(t)g(H(t)\bar{x}(t), \bar{\mu}_{t}). \end{align*} All assumptions in $(\mathrm{A}1)$ and $(\mathrm{A}2)$ are satisfied by the coefficients of equation \eqref{37+}. Consequently, by the existence and uniqueness theorem above, Eq.\eqref{37+} admits a unique solution. Thus, Eq.\eqref{37} has a solution. } \end{exa} \section{Stochastic principle of optimality} In this section, we investigate optimal control for Eq.\eqref{40} below. The aim is to show that the value function satisfies the dynamic programming principle (DPP for short). Let $\mathbf{U}$ be a separable metric space. For a control process $u(\cdot,\cdot): [0,T]\times \Omega \rightarrow \mathbf{U},$ we consider the following stochastic controlled system: \begin{align}\label{40} \begin{cases} &{\mathord{{\rm d}}} x(t)+ H(t)\partial \Pi(x(t)){\mathord{{\rm d}}} t\ni f(x(t), \mu_{t}, u(t)){\mathord{{\rm d}}} t +g( x(t), \mu_{t}, u(t)){\mathord{{\rm d}}} B(t), t\in [s, T],\\ & x(s)=x_{0}, \end{cases} \end{align} where $$f(\cdot, \cdot, \cdot ): {\mathbb R}^{m} \times {\mathscr P}_{2}({\mathbb R}^{m})\times \mathbf{U} \rightarrow {\mathbb R}^{m}, \quad g(\cdot, \cdot, \cdot ): {\mathbb R}^{m} \times {\mathscr P}_{2}({\mathbb R}^{m})\times \mathbf{U}\rightarrow {\mathbb R}^{m\times d}.$$ For the sake of simplicity, we assume $f(0, \delta_{0},u )=g(0,\delta_{0},u)=0$ and make the following assumptions: \begin{itemize} \item[(A3)]The coefficients $f$ and $g $ satisfy, for some positive constant $L>0$ and all $x,y \in {\mathbb R}^{m}, \mu, \nu \in {\mathscr P}_{2}({\mathbb R}^{m}), u\in \mathbf{U},$ \begin{align*} |&f(x,\mu,u)-f(y,\nu,u)|+|g(x,\mu,u)-g(y,\nu,u)| \leq L(|x-y|+W_{2}(\mu, \nu)).
\end{align*} From $(\mathrm{A3})$ and $f(0,\delta_{0},u)=g(0,\delta_{0},u)=0$, one sees that $$|f(x,\mu,u)|\leq L(|x|+W_{2}(\mu, \delta_{0})), \quad |g(x,\mu,u)|\leq L(|x|+W_{2}(\mu, \delta_{0})).$$ \end{itemize} Define the cost functional as follows: \begin{align}\label{41} J(s,x_{0};u)={\mathbb E}\bigg[\bigg.\int^{T}_{s}b(x^{s,x_{0},u}(t), u(t)){\mathord{{\rm d}}} t +\alpha(x^{s,x_{0},u}(T))\bigg]\bigg., \end{align} where $x^{s,x_{0},u}$ is the solution of Eq.\eqref{40} associated with $s, x_{0}, u$ and $b: {\mathbb R}^{m}\times \mathbf{U}\rightarrow {\mathbb R}, \alpha: {\mathbb R}^{m}\rightarrow {\mathbb R}$ satisfy the following conditions: \begin{itemize} \item[$(\mathrm{A4})$] For some positive constant $L>0$ and all $x,y \in {\mathbb R}^{m}, u\in \mathbf{U},$ \begin{align*} |b(x,u)-b(y,u)|+|\alpha(x)-\alpha(y)| \leq L|x-y|. \end{align*} For the sake of simplicity, we assume $b(0, u )=\alpha(0)=0.$ This implies $$ |b(x,u)|\leq L|x|, \quad |\alpha(x)|\leq L|x|. $$ \end{itemize} We define the associated value function as the infimum of the cost over all $u\in {\mathscr U}[s,T]:$ \begin{align}\label{42} V(s,x_{0}):=\inf_{u\in {\mathscr U}[s,T]}J(s,x_{0};u), \quad (s, x_{0})\in [0,T)\times {\mathbb R}^{m}, \end{align} where ${\mathscr U}[s, T]$ denotes the set of all processes $u(\cdot ,\cdot): \Omega \times [s, T]\rightarrow \mathbf{U}$ satisfying $${\mathbb E}\bigg[\bigg.\int^{T}_{s}|b(x^{s,x_{0},u}(r), u(r) )|{\mathord{{\rm d}}} r+ |\alpha(x^{s,x_{0},u}(T) )|\bigg]\bigg.<\infty.$$ \begin{defn} \label{d1+} If for any $(s,x_{0})\in [0,T)\times {\mathbb R}^{m}$ and any stopping time $\tau$ with values in $[s,T],$ it holds that \begin{align}\label{d1++} V(s,x_{0})=\inf_{u\in {\mathscr U}[s,T]}{\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{s,x_{0},u}(t), u(t)){\mathord{{\rm d}}} t +V(\tau, x^{s,x_{0},u}(\tau))\bigg]\bigg., \end{align} then we say that the value function $V$ satisfies the DPP. \end{defn} In order to prove that the value function for Eq.~\eqref{40} fulfills the DPP, we consider the following penalized equation: \begin{align}\label{43} \begin{cases} & x^{\epsilon, s,x_{0},u}(t)+ \int^{t}_{s}H(r)\nabla \Pi_{\epsilon}(x^{\epsilon, s,x_{0},u}(r)){\mathord{{\rm d}}} r =x_{0}+\int^{t}_{s}f(x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r)){\mathord{{\rm d}}} r \\ &\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad\quad \quad+\int^{t}_{s}g( x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r)){\mathord{{\rm d}}} B(r), t\in [s, T],\\ & x^{\epsilon, s, x_{0}, u}(s)=x_{0}, \end{cases} \end{align} where $\mu^{\epsilon}_{r}$ denotes the distribution of $x^{\epsilon, s,x_{0},u}(r).$ For any $ (s,x_{0})\in [0,T)\times {\mathbb R}^{m},$ the penalized value function is defined as follows: \begin{align}\label{44} V_{\epsilon}(s,x_{0})=\inf_{u\in {\mathscr U}[s,T]}{\mathbb E}\bigg[\bigg.\int^{T}_{s}b(x^{\epsilon, s,x_{0},u}(t), u(t)){\mathord{{\rm d}}} t +\alpha(x^{\epsilon, s,x_{0},u}(T))\bigg]\bigg., \end{align} and set \begin{align}\label{41b} J_{\epsilon}(s,x_{0};u)={\mathbb E}\bigg[\bigg.\int^{T}_{s}b(x^{\epsilon, s,x_{0},u}(t), u(t)){\mathord{{\rm d}}} t +\alpha(x^{\epsilon, s,x_{0},u}(T))\bigg]\bigg., \end{align} so that $V_{\epsilon}(s,x_{0})=\inf_{u\in {\mathscr U}[s,T]}J_{\epsilon}(s,x_{0};u).$ \begin{lem} \label{le4.1} Assume $(\mathrm{A2})-(\mathrm{A4})$. Let $(x^{s, x_{0}, u}, k^{s, x_{0}, u})$ and $(x^{s', x_{0}', u}, k^{s', x_{0}', u})$ be the solutions of Eq.\eqref{40} corresponding to the initial data $(s, x_{0})$ and $(s', x_{0}'),$ respectively.
Then, we have the following estimates: \begin{align}\label{45} {\mathbb E}[\sup_{s\leq r \leq T}|x^{s,x_{0},u}(r)|^{2}] \leq C(a_{H},b_{H},x_{0}, L,T), \end{align} and \begin{align}\label{46} {\mathbb E}&[\sup_{s\vee s' \leq r \leq T}|x^{s,x_{0},u}(r)-x^{s',x_{0}',u}(r)|^{2}] \nonumber\\ & \leq C(a_{H},b_{H},x_{0}', L,T){\mathbb E}[|x_{0}-x_{0}'|^{2}] + C(a_{H},b_{H},x_{0}', L,T)|s-s'|. \end{align} \end{lem} \begin{proof} Assume that $U^{s,x_{0},u}, U^{s',x_{0}',u}$ are two processes such that $${\mathord{{\rm d}}} k^{s, x_{0}, u}(t)= U^{s,x_{0},u}(t) {\mathord{{\rm d}}} t, \quad {\mathord{{\rm d}}} k^{s', x_{0}', u}(t)= U^{s',x_{0}',u}(t) {\mathord{{\rm d}}} t.$$ Using It\^{o}'s formula, \eqref{45} can be easily derived. We only prove \eqref{46}. Assume that $s\geq s'.$ For any $t\in [s, T],$ we have \begin{align}\label{47} x^{s,x_{0},u}(t)+ \int^{t}_{s}H(r)U^{s,x_{0},u}(r){\mathord{{\rm d}}} r&= x^{s,x_{0},u}(s)+\int^{t}_{s}f(x^{ s,x_{0},u}(r), \mu_{r}, u(r)){\mathord{{\rm d}}} r\nonumber \\ &\quad+\int^{t}_{s}g( x^{ s,x_{0},u}(r), \mu_{r}, u(r)){\mathord{{\rm d}}} B(r), \end{align} and \begin{align}\label{48} x^{s',x'_{0},u}(t)+ \int^{t}_{s}H(r)U^{s',x'_{0},u}(r){\mathord{{\rm d}}} r&= x^{s',x'_{0},u}(s)+\int^{t}_{s}f(x^{ s',x_{0}',u}(r), \mu'_{r}, u(r)){\mathord{{\rm d}}} r\nonumber \\ &\quad+\int^{t}_{s}g( x^{ s',x'_{0},u}(r), \mu'_{r}, u(r)){\mathord{{\rm d}}} B(r), \end{align} where $\mu_{r}$ and $\mu'_{r}$ denote the distributions of $x^{s,x_{0},u}(r)$ and $x^{s',x'_{0},u}(r),$ respectively. Set $N(s)=-\frac{{\mathord{{\rm d}}} }{{\mathord{{\rm d}}} s}(H^{-\frac{1}{2}}(s))=\frac{1}{2}H^{-\frac{3}{2}}(s)\frac{{\mathord{{\rm d}}} H}{{\mathord{{\rm d}}} s}(s).$ By $(\mathrm{A2}),$ we have \begin{align}\label{49} |N(s)|\leq \frac{1}{2}|H^{-\frac{3}{2}}(s)|\bigg|\bigg.\frac{{\mathord{{\rm d}}} H}{{\mathord{{\rm d}}} s}(s) \bigg|\bigg.\leq \frac{1}{2}a_{H}^{-\frac{3}{2}}M, \end{align} where $M:=\sup_{0\leq s\leq T}\big|\big.\frac{{\mathord{{\rm d}}} H}{{\mathord{{\rm d}}} s}(s) \big|\big..$ Set $\hat{x}(t)=x^{s,x_{0},u}(t)-x^{s',x_{0}',u}(t)$ and $\tilde{x}(t) =H^{-\frac{1}{2}}(t)\hat{x}(t).$ Then, $ \tilde{x}(\cdot)$ satisfies the following equation: \begin{align}\label{50} \tilde{x}(t)&= \tilde{x} (s)+\int^{t}_{s}\hat{x}(r){\mathord{{\rm d}}} H^{-\frac{1}{2}}(r)+ \int^{t}_{s}H^{-\frac{1}{2}}(r){\mathord{{\rm d}}}\hat{x}(r)\nonumber\\ & =\tilde{x} (s)+\int^{t}_{s}R(r){\mathord{{\rm d}}} r + \int^{t}_{s}H^{-\frac{1}{2}}(r)(g( x^{ s,x_{0},u}(r), \mu_{r}, u(r))\nonumber\\ &\quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad-g( x^{ s',x'_{0},u}(r), \mu'_{r}, u(r))){\mathord{{\rm d}}} B(r), \end{align} where \begin{align*} R(r)&:=-\hat{x}(r)N(r)-H^{\frac{1}{2}}(r)[U^{s,x_{0},u}(r) - U^{s',x_{0}',u}(r)]\\ & \quad+H^{-\frac{1}{2}}(r)[f(x^{ s,x_{0},u}(r), \mu_{r}, u(r))-f(x^{ s',x_{0}',u}(r), \mu'_{r}, u(r))].
\end{align*} Applying It\^{o}'s formula, we have \begin{align}\label{51} | \tilde{x}(t)|^{2} & =| \tilde{x}(s)|^{2}+2\int^{t}_{s}\langle \tilde{x}(r), R(r)\rangle{\mathord{{\rm d}}} r\nonumber\\ &\quad + \int^{t}_{s} | H^{-\frac{1}{2}}(r)(g( x^{ s,x_{0},u}(r), \mu_{r}, u(r))\nonumber\\ &\quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad-g( x^{ s',x'_{0},u}(r), \mu'_{r}, u(r)))|^{2}{\mathord{{\rm d}}} r\nonumber\\ &\quad + 2\int^{t}_{s}\langle \tilde{x}(r), H^{-\frac{1}{2}}(r)(g( x^{ s,x_{0},u}(r), \mu_{r}, u(r))\nonumber\\ &\quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad-g( x^{ s',x'_{0},u}(r), \mu'_{r}, u(r)))\rangle{\mathord{{\rm d}}} B(r)\nonumber\\ &\leq | \tilde{x}(s)|^{2}+(1+b_{H}^{\frac{1}{2}}a_{H}^{-\frac{3}{2}}M+4L^{2}a_{H}^{-1}b_{H})\int^{t}_{s}\sup_{s\leq u\leq r}| \tilde{x}(u)|^{2}{\mathord{{\rm d}}} r\nonumber\\ &\quad + 4L^{2}a_{H}^{-1}\int^{t}_{s}W^{2}_{2}(\mu_{r}, \mu'_{r}){\mathord{{\rm d}}} r\nonumber\\ &\quad + 2\int^{t}_{s}\langle \tilde{x}(r), H^{-\frac{1}{2}}(r)(g( x^{ s,x_{0},u}(r), \mu_{r}, u(r))\nonumber\\ &\quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad-g( x^{ s',x'_{0},u}(r), \mu'_{r}, u(r)))\rangle{\mathord{{\rm d}}} B(r). \end{align} From BDG's inequality, for any $l>0,$ we have \begin{align*} {\mathbb E}&\sup_{s\leq \theta\leq t}\bigg|\bigg.\int^{\theta}_{s}\langle \tilde{x}(r), H^{-\frac{1}{2}}(r)(g( x^{ s,x_{0},u}(r), \mu_{r}, u(r))\nonumber\\ &\quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad-g( x^{ s',x'_{0},u}(r), \mu'_{r}, u(r)))\rangle {\mathord{{\rm d}}} B(r)\bigg|\bigg.\\ &\leq 32{\mathbb E}\bigg(\bigg.\int^{t}_{s}| \tilde{x}(r) |^{2}|H^{-\frac{1}{2}}(r)(g( x^{ s,x_{0},u}(r), \mu_{r}, u(r))\nonumber\\ &\quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad-g( x^{ s',x'_{0},u}(r), \mu'_{r}, u(r)))|^{2} {\mathord{{\rm d}}} r \bigg)\bigg.^{\frac{1}{2}}\\ & \leq 32{\mathbb E}\bigg[\bigg. \sup_{s\leq r\leq t}| \tilde{x}(r)|\bigg(\bigg.\int^{t}_{s} |H^{-\frac{1}{2}}(r)(g( x^{ s,x_{0},u}(r), \mu_{r}, u(r))\nonumber\\ &\quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad-g( x^{ s',x'_{0},u}(r), \mu'_{r}, u(r)))|^{2} {\mathord{{\rm d}}} r \bigg)\bigg.^{\frac{1}{2}}\bigg]\bigg.\\ &\leq \frac{16}{l}{\mathbb E}[\sup_{s\leq r\leq t}| \tilde{x}(r)|^{2}] +16l\, {\mathbb E}\int^{t}_{s}|H^{-\frac{1}{2}}(r)(g( x^{ s,x_{0},u}(r), \mu_{r}, u(r))\nonumber\\ &\quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad-g( x^{ s',x'_{0},u}(r), \mu'_{r}, u(r)))|^{2}{\mathord{{\rm d}}} r\\ &\leq \frac{16}{l}{\mathbb E}[\sup_{s\leq r\leq t}| \tilde{x}(r)|^{2}] +64\,l\, L^{2}a_{H}^{-1}b_{H}{\mathbb E}\int^{t}_{s}| \tilde{x}(r)|^{2}{\mathord{{\rm d}}} r. \end{align*} Taking $l=64$ (so that the supremum term can be absorbed into the left-hand side), this together with \eqref{51} implies \begin{align}\label{52} &{\mathbb E}[\sup_{s\leq r\leq t }| \tilde{x}(r)|^{2}]\nonumber\\ &\leq 2{\mathbb E}[| \tilde{x}(s)|^{2}] +C_{1}\int^{t}_{s}{\mathbb E}[\sup_{s\leq u\leq r }| \tilde{x}(u)|^{2}]{\mathord{{\rm d}}} r,
\end{align} where $C_{1}=C(a_{H},b_{H}, L, M)$ is a constant depending only on $a_{H}$, $b_{H}$, $L$ and $M$; here we also used $W_{2}^{2}(\mu_{r}, \mu'_{r})\leq {\mathbb E}|\hat{x}(r)|^{2}\leq b_{H}{\mathbb E}[\sup_{s\leq u\leq r}|\tilde{x}(u)|^{2}].$ Furthermore, \begin{align}\label{53} {\mathbb E}[| \tilde{x}(s)|^{2}] &\leq 3a_{H}^{-1}|x_{0}-x'_{0}|^{2}+ 3a_{H}^{-1}{\mathbb E}\bigg|\bigg.\int^{s}_{s'}f'(r){\mathord{{\rm d}}} r \bigg|\bigg.^{2} +3a_{H}^{-1}{\mathbb E}\int^{s}_{s'}|g'(r)|^{2}{\mathord{{\rm d}}} r \nonumber\\ &\leq 3a_{H}^{-1}|x_{0}-x'_{0}|^{2}+ 3a_{H}^{-1}(s-s'){\mathbb E}\int^{s}_{s'}|f'(r)|^{2}{\mathord{{\rm d}}} r +3a_{H}^{-1}{\mathbb E}\int^{s}_{s'}|g'(r)|^{2}{\mathord{{\rm d}}} r\nonumber\\ &\leq 3a_{H}^{-1}|x_{0}-x'_{0}|^{2} + C(a_{H},b_{H},x'_{0}, L,T)(s-s'), \end{align} where $f'(r):= f(x^{s',x'_{0},u}(r), \mu'_{r}, u(r))$ and $g'(r):= g(x^{s',x'_{0},u}(r), \mu'_{r}, u(r)).$ Combining \eqref{52} and \eqref{53}, we have \begin{align}\label{54} {\mathbb E}[\sup_{s\leq r\leq t }| \tilde{x}(r)|^{2}] &\leq 6a_{H}^{-1}|x_{0}-x'_{0}|^{2} + C(a_{H},b_{H},x'_{0}, L,T)(s-s')\nonumber\\ &\quad+C_{1}\int^{t}_{s}{\mathbb E}[\sup_{s\leq u\leq r }| \tilde{x}(u)|^{2}]{\mathord{{\rm d}}} r. \end{align} Gronwall's inequality leads to $$ {\mathbb E}[\sup_{s\leq t\leq T }| \tilde{x}(t)|^{2}]\leq C(a_{H},b_{H},x'_{0}, L,T)|x_{0}-x'_{0}|^{2} + C(a_{H},b_{H},x'_{0}, L,T)|s-s'|. $$ Since $|\hat{x}(t)|\leq b_{H}^{\frac{1}{2}}|\tilde{x}(t)|,$ the desired estimate \eqref{46} follows. \end{proof} We can use techniques similar to those used in Lemma \ref{le4.1} to give a priori estimates for the solution $x^{\epsilon, s, x_{0},u}.$ \begin{lem} \label{le4.2} Assume $(\mathrm{A2})-(\mathrm{A4})$. Let $x^{\epsilon, s, x_{0},u}$ be the solution of Eq.\eqref{43} corresponding to the initial data $(s, x_{0}).$ Then, we have the following estimate: \begin{align}\label{45l} {\mathbb E}[\sup_{s\leq r \leq T}|x^{\epsilon, s, x_{0},u}(r)|^{2}]+{\mathbb E}\int^{T}_{s}| \nabla\Pi_{\epsilon}(x^{\epsilon, s, x_{0},u}(r))|^{2}{\mathord{{\rm d}}} r \leq C(a_{H},b_{H},x_{0}, L,T). \end{align} \end{lem} The following result shows that $x^{\epsilon, s,x_{0},u}$ is a Cauchy family in $L^{2}(s, T; {\mathbb R}^{m})$ as $\epsilon \rightarrow 0$. \begin{thm} \label{L4L} Assume $(\mathrm{A}2)-(\mathrm{A}4).$ Then, we have the following estimate: \begin{align}\label{+8+} {\mathbb E}[\sup_{s\leq t \leq T}|x^{\epsilon, s,x_{0},u}(t)-x^{\epsilon', s,x_{0},u}(t)|^{2}]\leq C(a_{H},b_{H},x_{0}, L,T)(\epsilon+\epsilon').
\end{align} \end{thm} \begin{proof} As in the proof of Lemma \ref{le4.1}, set $N(s)=-\frac{{\mathord{{\rm d}}} }{{\mathord{{\rm d}}} s}(H^{-\frac{1}{2}}(s))=\frac{1}{2}H^{-\frac{3}{2}}(s)\frac{{\mathord{{\rm d}}} H}{{\mathord{{\rm d}}} s}(s),$ so that, by $(\mathrm{A2}),$ \begin{align}\label{9} |N(s)|\leq \frac{1}{2}|H^{-\frac{3}{2}}(s)|\bigg|\bigg.\frac{{\mathord{{\rm d}}} H}{{\mathord{{\rm d}}} s}(s) \bigg|\bigg.\leq \frac{1}{2}a_{H}^{-\frac{3}{2}}M, \end{align} where $M:=\sup_{0\leq s\leq T}\big|\big.\frac{{\mathord{{\rm d}}} H}{{\mathord{{\rm d}}} s}(s) \big|\big..$ Set $\hat{x}^{\epsilon, \epsilon'}(t)=x^{\epsilon, s,x_{0},u}(t)-x^{\epsilon', s,x_{0},u}(t)$ and $\tilde{x}^{\epsilon, \epsilon'}(t) =H^{-\frac{1}{2}}(t)\hat{x}^{\epsilon, \epsilon'}(t).$ Since $\hat{x}^{\epsilon, \epsilon'}(s)=0,$ the process $\tilde{x}^{\epsilon, \epsilon'}(\cdot)$ satisfies the following equation: \begin{align}\label{10} \tilde{x}^{\epsilon, \epsilon'}(t)&=\int^{t}_{s}\hat{x}^{\epsilon, \epsilon'}(r){\mathord{{\rm d}}} H^{-\frac{1}{2}}(r)+ \int^{t}_{s}H^{-\frac{1}{2}}(r){\mathord{{\rm d}}}\hat{x}^{\epsilon, \epsilon'}(r)\nonumber\\ & =\int^{t}_{s}Q(r){\mathord{{\rm d}}} r + \int^{t}_{s}H^{-\frac{1}{2}}(r)(g( x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r))-g( x^{\epsilon', s,x_{0},u}(r), \mu^{\epsilon'}_{r}, u(r))){\mathord{{\rm d}}} B(r), \end{align} where \begin{align*} Q(r)&:=-\hat{x}^{\epsilon, \epsilon'}(r)N(r)-H^{\frac{1}{2}}(r)[\nabla \Pi_{\epsilon}(x^{\epsilon, s,x_{0},u}(r))- \nabla \Pi_{\epsilon'}(x^{\epsilon', s,x_{0},u}(r))]\\ &\quad+H^{-\frac{1}{2}}(r)[f( x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r))-f( x^{\epsilon', s,x_{0},u}(r), \mu^{\epsilon'}_{r}, u(r))]. \end{align*} Applying It\^{o}'s formula and the property $(e)$ of $\nabla \Pi_{\epsilon}(\cdot),$ we have \begin{align}\label{11} |\tilde{x}^{\epsilon, \epsilon'}(t)|^{2} &=2\int^{t}_{s}\langle\tilde{x}^{\epsilon, \epsilon'}(r), Q(r)\rangle{\mathord{{\rm d}}} r\nonumber\\ &\quad + \int^{t}_{s} | H^{-\frac{1}{2}}(r)(g( x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r))-g( x^{\epsilon', s,x_{0},u}(r), \mu^{\epsilon'}_{r}, u(r)))|^{2}{\mathord{{\rm d}}} r\nonumber\\ &\quad + 2\int^{t}_{s}\langle \tilde{x}^{\epsilon, \epsilon'}(r), H^{-\frac{1}{2}}(r)(g( x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r))-g( x^{\epsilon', s,x_{0},u}(r), \mu^{\epsilon'}_{r}, u(r)))\rangle{\mathord{{\rm d}}} B(r)\nonumber\\ &\leq C(a_{H},b_{H},x_{0}, L,T)\int^{t}_{s}\sup_{s\leq v\leq r }|\tilde{x}^{\epsilon, \epsilon'}(v)|^{2}{\mathord{{\rm d}}} r\nonumber\\ &\quad +(\epsilon+\epsilon')\int^{t}_{s}|\nabla \Pi_{\epsilon}(x^{\epsilon, s,x_{0},u}(r))|^{2}{\mathord{{\rm d}}} r+ (\epsilon+\epsilon')\int^{t}_{s}|\nabla \Pi_{\epsilon'}(x^{\epsilon', s,x_{0},u}(r))|^{2}{\mathord{{\rm d}}} r\nonumber\\ &\quad + 2\int^{t}_{s}\langle \tilde{x}^{\epsilon, \epsilon'}(r), H^{-\frac{1}{2}}(r)(g( x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r))-g( x^{\epsilon', s,x_{0},u}(r), \mu^{\epsilon'}_{r}, u(r)))\rangle{\mathord{{\rm d}}} B(r). \end{align} From BDG's inequality, for any $l>0,$ we have \begin{align*} {\mathbb E}&\sup_{s\leq \theta\leq t}\bigg|\bigg.\int^{\theta}_{s}\langle \tilde{x}^{\epsilon, \epsilon'}(r), H^{-\frac{1}{2}}(r)(g( x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r))-g( x^{\epsilon', s,x_{0},u}(r), \mu^{\epsilon'}_{r}, u(r)))\rangle {\mathord{{\rm d}}} B(r)\bigg|\bigg.\\ & \leq {\mathbb E}\bigg(\bigg.\int^{t}_{s}|\tilde{x}^{\epsilon, \epsilon'}(r) |^{2}|H^{-\frac{1}{2}}(r)(g( x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r))-g( x^{\epsilon', s,x_{0},u}(r), \mu^{\epsilon'}_{r}, u(r)))|^{2} {\mathord{{\rm d}}} r \bigg)\bigg.^{\frac{1}{2}}\\ & \leq {\mathbb E}\bigg[\bigg.
\sup_{s\leq r\leq t}|\tilde{x}^{\epsilon, \epsilon'}(r)|\bigg(\bigg.\int^{t}_{s} |H^{-\frac{1}{2}}(r)(g( x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r))-g( x^{\epsilon', s,x_{0},u}(r), \mu^{\epsilon'}_{r}, u(r)))|^{2} {\mathord{{\rm d}}} r \bigg)\bigg.^{\frac{1}{2}}\bigg]\bigg.\\ & \leq \frac{1}{l}{\mathbb E}[\sup_{s\leq r\leq t}|\tilde{x}^{\epsilon, \epsilon'}(r)|^{2}] +l\, {\mathbb E}\int^{t}_{s}|H^{-\frac{1}{2}}(r)(g( x^{\epsilon, s,x_{0},u}(r), \mu^{\epsilon}_{r}, u(r))-g( x^{\epsilon', s,x_{0},u}(r), \mu^{\epsilon'}_{r}, u(r)))|^{2}{\mathord{{\rm d}}} r\\ &\leq \frac{1}{l}{\mathbb E}[\sup_{s\leq r\leq t}|\tilde{x}^{\epsilon, \epsilon'}(r)|^{2}] +l\, C(a_{H},b_{H},x_{0}, L,T) {\mathbb E}\int^{t}_{s}| \tilde{x}^{\epsilon, \epsilon'}(r)|^{2}{\mathord{{\rm d}}} r. \end{align*} Taking $l=4$ (so that the supremum term, which enters \eqref{11} with the factor $2$, can be absorbed into the left-hand side), this together with \eqref{11} implies \begin{align*} &{\mathbb E}[\sup_{s\leq t\leq T }|\tilde{x}^{\epsilon, \epsilon'}(t)|^{2}]\\ &\leq C(a_{H},b_{H},x_{0}, L,T) \int^{T}_{s}{\mathbb E}[\sup_{s\leq r\leq t }|\tilde{x}^{\epsilon, \epsilon'}(r)|^{2}]{\mathord{{\rm d}}} t\nonumber\\ &\quad +2(\epsilon+\epsilon'){\mathbb E}\int^{T}_{s}|\nabla \Pi_{\epsilon}(x^{\epsilon, s,x_{0},u}(r))|^{2}{\mathord{{\rm d}}} r+ 2(\epsilon+\epsilon'){\mathbb E}\int^{T}_{s}|\nabla \Pi_{\epsilon'}(x^{\epsilon', s,x_{0},u}(r))|^{2}{\mathord{{\rm d}}} r\nonumber\\ &\leq C(a_{H},b_{H},x_{0}, L,T)\int^{T}_{s}{\mathbb E}[\sup_{s\leq r\leq t }|\tilde{x}^{\epsilon, \epsilon'}(r)|^{2}]{\mathord{{\rm d}}} t +(\epsilon+\epsilon')C(a_{H},b_{H},x_{0}, L,T), \end{align*} where the last step uses Lemma \ref{le4.2}. Gronwall's inequality leads to $$ {\mathbb E}[\sup_{s\leq t\leq T }|\tilde{x}^{\epsilon, \epsilon'}(t)|^{2}]\leq C(a_{H},b_{H},x_{0}, L,T)(\epsilon+\epsilon'). $$ Since $|\hat{x}^{\epsilon,\epsilon'}(t)|\leq b_{H}^{\frac{1}{2}}|\tilde{x}^{\epsilon,\epsilon'}(t)|,$ the estimate \eqref{+8+} follows. The proof is therefore complete. \end{proof} \begin{lem}\label{l4.4} Let $(\mathrm{A}2)-(\mathrm{A}4)$ hold. Then, along a subsequence of $\epsilon,$ which we still denote by $\epsilon, $ $$\lim_{\epsilon \rightarrow 0}{\mathbb E}\int^{T}_{s}|x^{\epsilon, s,x_{0},u}(r)- x^{ s,x_{0},u}(r)|^{2}{\mathord{{\rm d}}} r=0. $$ \end{lem} \begin{proof} By Theorem \ref{L4L}, we know that there exists a stochastic process $\check{x}^{ s,x_{0},u}\in L^{2}(s, T; {\mathbb R}^{m})$ such that $\lim_{\epsilon\rightarrow 0}x^{\epsilon, s,x_{0},u}=\check{x}^{ s,x_{0},u} $ in $L^{2}(s, T; {\mathbb R}^{m}).$ Moreover, by Lemma \ref{le4.2}, there exists a stochastic process $\check{U}^{ s,x_{0},u}\in L^{2}(s, T; {\mathbb R}^{m})$ such that, along a further subsequence, $\nabla\Pi_{\epsilon}(x^{\epsilon, s,x_{0},u}(\cdot))\rightarrow \check{U}^{ s,x_{0},u}(\cdot) $ weakly in $L^{2}(s, T; {\mathbb R}^{m})$ as $\epsilon\rightarrow 0. $ As a consequence, for every $t\in [s,T],$ $$ \lim_{\epsilon\rightarrow 0}\int^{t}_{s}H(r)\nabla\Pi_{\epsilon}(x^{\epsilon, s,x_{0},u}(r)){\mathord{{\rm d}}} r =\int^{t}_{s}H(r)\check{U}^{ s,x_{0},u}(r){\mathord{{\rm d}}} r \quad \mbox{weakly}.
$$ Therefore, we can pass to the limit in the approximating equation \eqref{43} and obtain \begin{align*} \begin{cases} &\check{x}^{ s,x_{0},u}(t)+ \int^{t}_{s}H(r)\check{U}^{ s,x_{0},u}(r){\mathord{{\rm d}}} r= x_{0}+\int^{t}_{s}f(\check{x}^{ s,x_{0},u}(r), \check{\mu}^{ s,x_{0},u}_{r}, u(r)){\mathord{{\rm d}}} r\\ &\quad \quad \quad \quad \quad \quad\quad \quad \quad\quad \quad \quad+\int^{t}_{s}g( \check{x}^{ s,x_{0},u}(r), \check{\mu}^{ s,x_{0},u}_{r}, u(r)){\mathord{{\rm d}}} B(r), t\in [s, T],\\ & \check{x}^{ s,x_{0},u}(s)=x_{0}, \end{cases} \end{align*} where $\check{\mu}^{ s,x_{0},u}_{t}$ is the distribution of $\check{x}^{ s,x_{0},u}(t).$ By the property $(b)$ of $\nabla \Pi_{\epsilon}(\cdot),$ we have $$\nabla \Pi_{\epsilon}(x^{\epsilon, s,x_{0},u}(t))\in \partial \Pi(J_{\epsilon}(x^{\epsilon, s,x_{0},u}(t))).$$ Thus, \begin{align}\label{90} {\mathbb E}&\int^{T}_{s}\langle\nabla \Pi_{\epsilon}(x^{\epsilon, s,x_{0},u}(r)), \nu(r) -J_{\epsilon}(x^{\epsilon, s,x_{0},u}(r)) \rangle {\mathord{{\rm d}}} r + {\mathbb E}\int^{T}_{s} \Pi(J_{\epsilon}(x^{\epsilon, s,x_{0},u}(r))) {\mathord{{\rm d}}} r\nonumber\\ &\leq {\mathbb E}\int^{T}_{s} \Pi(\nu(r)) {\mathord{{\rm d}}} r \end{align} for any square integrable stochastic process $\nu.$ From the definition of $J_{\epsilon}(\cdot)$ and Lemma \ref{le4.2}, we know that \begin{align*} &\limsup_{\epsilon\rightarrow 0}{\mathbb E}\int^{T}_{s}|J_{\epsilon}(x^{\epsilon, s,x_{0},u}(r))- \check{x}^{ s,x_{0},u}(r)|^{2}{\mathord{{\rm d}}} r\\ &\leq 2\limsup_{\epsilon \rightarrow 0}\epsilon^{2}{\mathbb E}\int^{T}_{s}|\nabla\Pi_{\epsilon}(x^{\epsilon, s,x_{0},u}(r))|^{2}{\mathord{{\rm d}}} r+2\limsup_{\epsilon \rightarrow 0}{\mathbb E}\int^{T}_{s}|x^{\epsilon, s,x_{0},u}(r)-\check{x}^{ s,x_{0},u}(r)|^{2}{\mathord{{\rm d}}} r\\ &=0. \end{align*} By taking $\liminf_{\epsilon\rightarrow 0}$ on both sides of the inequality \eqref{90}, we obtain $\check{U}^{ s,x_{0},u}(r)\in \partial \Pi(\check{x}^{ s,x_{0},u}(r))$ for a.e. $r\in [s,T].$ Thus, $\check{x}^{ s,x_{0},u}$ is a solution of Eq.\eqref{40} corresponding to the initial data $(s, x_{0}).$ By the uniqueness, we know that $\check{x}^{ s,x_{0},u}(t)= x^{ s,x_{0},u}(t)$ a.e., which completes the proof. \end{proof} \begin{lem}\label{h+} Assume $(\mathrm{A2})-(\mathrm{A4}).$ The following estimates hold: \begin{align} &|V(s, x_{0})| \leq C(a_{H},b_{H},x_{0}, L,T),\label{b+}\\ &|V(s, x_{0})- V(s', x'_{0})| \nonumber\\ &\leq C(a_{H},b_{H},x'_{0}, L,T)|x_{0}-x'_{0}|+ C(a_{H},b_{H},x'_{0}, L,T)|s-s'|^{\frac{1}{2}}.\label{b++} \end{align} \end{lem} \begin{proof} By $(\mathrm{A4})$ and \eqref{45}, we have, uniformly in $u\in {\mathscr U}[s,T],$ \begin{align*} |J(s, x_{0};u)|&=\bigg|\bigg. {\mathbb E}\bigg[\bigg. \int^{T}_{s}b(x^{s, x_{0}, u}(t), u(t)){\mathord{{\rm d}}} t + \alpha(x^{s, x_{0}, u}(T)) \bigg]\bigg. \bigg|\bigg.\\ & \leq C(T,L){\mathbb E}[1+\sup_{s\leq t\leq T}|x^{s, x_{0}, u}(t)|]\\ &\leq C(a_{H},b_{H},x_{0}, L,T), \end{align*} which gives \eqref{b+}. Furthermore, for the sake of simplicity, assume $s\geq s'.$ Then, we have \begin{align}\label{c++} |&J(s, x_{0};u)- J(s', x'_{0};u)|=\bigg|\bigg. {\mathbb E}\bigg[\bigg. \int^{T}_{s}b(x^{s, x_{0}, u}(t), u(t)){\mathord{{\rm d}}} t + \alpha(x^{s, x_{0}, u}(T)) \bigg]\bigg.\nonumber\\ &\quad - {\mathbb E}\bigg[\bigg. \int^{T}_{s'}b(x^{s', x'_{0}, u}(t), u(t)){\mathord{{\rm d}}} t + \alpha( x^{s', x'_{0}, u}(T)) \bigg]\bigg. \bigg|\bigg.\nonumber\\ & \leq {\mathbb E}\bigg|\bigg.\int^{T}_{s}\big(b(x^{s, x_{0}, u}(t), u(t))-b(x^{s', x'_{0}, u}(t), u(t))\big){\mathord{{\rm d}}} t\bigg|\bigg. + {\mathbb E}\int^{s}_{s'}|b(x^{s', x'_{0}, u}(t), u(t))|{\mathord{{\rm d}}} t
\nonumber\\ &\quad + {\mathbb E}[|\alpha( x^{s, x_{0}, u}(T))- \alpha( x^{s', x'_{0}, u}(T)) |]\nonumber\\ & \leq C( L, T){\mathbb E}[\sup_{s\leq t\leq T}|x^{s, x_{0}, u}(t)- x^{s', x'_{0}, u}(t)|]\nonumber\\ &\quad + C( L, T)(s-s')\Big(1+{\mathbb E}[\sup_{s'\leq t\leq T}| x^{s', x'_{0}, u}(t)|]\Big)\nonumber\\ & \leq C(a_{H},b_{H},x'_{0}, L,T)|x_{0}-x'_{0}|+ C(a_{H},b_{H},x'_{0}, L,T)|s-s'|^{\frac{1}{2}}, \end{align} where the last step uses Lemma \ref{le4.1} together with Jensen's inequality. Since $V(s,x_{0})=\inf_{u\in {\mathscr U}[s,T]}J(s,x_{0};u),$ the estimate \eqref{b++} follows. \end{proof} The following lemma, concerning the DPP for the penalized equation \eqref{43}, is a straightforward consequence of Theorem 4.2 in \cite{LC}. \begin{lem} Assume $(\mathrm{A2})-(\mathrm{A4})$. Let $x^{\epsilon, s,x_{0},u}$ be the solution of \eqref{43}. Then, for any $ (s,x_{0})\in [0,T)\times {\mathbb R}^{m},$ it holds that \begin{align}\label{55} V_{\epsilon}(s,x_{0})=\inf_{u\in {\mathscr U}[s,T]}{\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{\epsilon, s,x_{0},u}(t), u(t)){\mathord{{\rm d}}} t +V_{\epsilon}(\tau, x^{\epsilon, s,x_{0},u}(\tau))\bigg]\bigg., \end{align} for every stopping time $\tau\in [s, T].$ \end{lem} Now, we are going to prove that Eq.\eqref{40} satisfies the DPP. We first estimate the difference $V_{\epsilon}-V.$ \begin{prop} Assume $(\mathrm{A2})-(\mathrm{A4})$. Then it holds that \begin{align}\label{56} &|V_{\epsilon}(s,x_{0})-V(s,x_{0})|\leq C(a_{H},b_{H},x_{0}, L,T)\epsilon^{\frac{1}{2}}. \end{align} \end{prop} \begin{proof} By Lemma \ref{l4.4}, letting $\epsilon'\rightarrow 0$ in \eqref{+8+}, we deduce that $${\mathbb E}[\sup_{s\leq r\leq T}|x^{\epsilon, s,x_{0},u}(r)- x^{ s,x_{0},u}(r)|^{2}]\leq C(a_{H},b_{H},x_{0}, L,T)\epsilon.$$ With the same argument as in the proof of Lemma \ref{h+}, we obtain \begin{align*} |J_{\epsilon}(s,x_{0};u)-J(s,x_{0};u)|\leq C(a_{H},b_{H},x_{0}, L,T)\epsilon^{\frac{1}{2}}. \end{align*} Consequently, we obtain $$|V_{\epsilon}(s,x_{0})-V(s,x_{0})|\leq \sup_{u\in {\mathscr U}[s, T]}|J_{\epsilon}(s,x_{0};u)-J(s,x_{0};u)| \leq C(a_{H},b_{H},x_{0}, L,T)\epsilon^{\frac{1}{2}}. $$ The proof is complete. \end{proof} From the above proposition, the following result immediately holds. \begin{lem} $V_{\epsilon}(\cdot, \cdot)$ converges uniformly on compact subsets of $[0,T)\times \overline{D(\Pi)}$ to the value function $V(\cdot, \cdot).$ \end{lem} Now, we state our main result. \begin{thm} Under the assumptions $(\mathrm{A2})-(\mathrm{A4}),$ the value function $V$ satisfies the DPP \eqref{d1++}.
\end{thm} \begin{proof} Taking any $(s,x_{0})\in [0,T)\times \overline{D(\Pi)},$ we have, for any $\epsilon >0, u\in {\mathscr U}[s, T], \tau \in [s, T],$ and $R>0,$ \begin{align}\label{33+} &{\mathbb E}[|V_{\epsilon}(\tau,x^{\epsilon, s,x_{0},u}(\tau))-V(\tau,x^{ s,x_{0},u}(\tau))|]\nonumber\\ &\leq{\mathbb E}[|V_{\epsilon}(\tau,x^{\epsilon, s,x_{0},u}(\tau))-V_{\epsilon}(\tau,x^{ s,x_{0},u}(\tau))|]\nonumber\\ &\quad +{\mathbb E}[|V_{\epsilon}(\tau,x^{ s,x_{0},u}(\tau))-V(\tau,x^{ s,x_{0},u}(\tau))|(1_{A_{1}}+1_{A_{2}})]\nonumber\\ & \leq {\mathbb E}[|V_{\epsilon}(\tau,x^{\epsilon, s,x_{0},u}(\tau))-V_{\epsilon}(\tau,x^{ s,x_{0},u}(\tau))|] +\sup_{(t,y)\in B}|V_{\epsilon}(t,y)-V(t,y)|\nonumber\\ &\quad +{\mathbb E}[|V_{\epsilon}(\tau,x^{ s,x_{0},u}(\tau))-V(\tau,x^{ s,x_{0},u}(\tau))|1_{A_{2}}], \end{align} where $$A_{1}:= \{\omega: |x^{ s,x_{0},u}(\tau)|\leq R \}, \quad A_{2}:= \{\omega: |x^{ s,x_{0},u}(\tau)|> R \}, $$ and $$B:= [0, T]\times (\overline{D(\Pi)}\cap \overline{B(0, R)}).$$ Using the Lipschitz continuity of $V_{\epsilon}$ in the space variable (which follows as in the proof of Lemma \ref{h+}, uniformly in $\epsilon$) and the estimate ${\mathbb E}[\sup_{s\leq r\leq T}|x^{\epsilon, s,x_{0},u}(r)- x^{ s,x_{0},u}(r)|^{2}]\leq C\epsilon$ from the proof of \eqref{56}, we have \begin{align}\label{57} {\mathbb E}[|V_{\epsilon}(\tau, x^{ \epsilon, s,x_{0},u}(\tau))- V_{\epsilon}(\tau, x^{ s,x_{0},u}(\tau))|]\leq C(a_{H},b_{H},x_{0}, L,T)\epsilon^{\frac{1}{2}}. \end{align} Now, we look at the term ${\mathbb E}[|V_{\epsilon}(\tau,x^{ s,x_{0},u}(\tau))-V(\tau,x^{ s,x_{0},u}(\tau))|1_{A_{2}}].$ By Chebyshev's inequality and the definition of $A_{2},$ we have \begin{align}\label{58} {\mathbb E}&[|V_{\epsilon}(\tau,x^{ s,x_{0},u}(\tau))-V(\tau,x^{ s,x_{0},u}(\tau))|1_{A_{2}}]\leq ({\mathbb E}[|V_{\epsilon}(\tau,x^{ s,x_{0},u}(\tau))-V(\tau,x^{ s,x_{0},u}(\tau))|^{2}])^{\frac{1}{2}}({\mathbb E}[1_{A_{2}}])^{\frac{1}{2}}\nonumber\\ &\leq \sqrt{2}({\mathbb E}[|V_{\epsilon}(\tau,x^{ s,x_{0},u}(\tau))|^{2}]+{\mathbb E}[|V(\tau,x^{ s,x_{0},u}(\tau))|^{2}])^{\frac{1}{2}}\frac{({\mathbb E}| x^{ s,x_{0},u}(\tau)|^{2})^{\frac{1}{2}}}{R}\nonumber\\ &\leq \frac{C(a_{H},b_{H},x_{0}, L,T)}{R}. \end{align} Combining \eqref{33+}, \eqref{57} and \eqref{58}, we have \begin{align}\label{59} &{\mathbb E}[|V_{\epsilon}(\tau,x^{\epsilon, s,x_{0},u}(\tau))-V(\tau,x^{ s,x_{0},u}(\tau))|]\nonumber\\ &\leq {\mathbb E}[|V_{\epsilon}(\tau,x^{\epsilon, s,x_{0},u}(\tau))-V_{\epsilon}(\tau,x^{ s,x_{0},u}(\tau))|] +\sup_{(t,y)\in B}|V_{\epsilon}(t,y)-V(t,y)|\nonumber\\ &\quad+\frac{C(a_{H},b_{H},x_{0}, L,T)}{R}\nonumber\\ &\leq C(a_{H},b_{H},x_{0}, L,T)\epsilon^{\frac{1}{2}}+\sup_{(t,y)\in B}|V_{\epsilon}(t,y)-V(t,y)|\nonumber\\ &\quad+\frac{C(a_{H},b_{H},x_{0}, L,T)}{R}.
\end{align} In addition, by \eqref{55}, \eqref{56} and \eqref{59}, \begin{align*} V(s, x_{0})&\leq V_{\epsilon}(s, x_{0})+|V_{\epsilon}(s, x_{0})-V(s, x_{0})|\nonumber\\ & \leq {\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{\epsilon, s,x_{0},u}(r), u(r)){\mathord{{\rm d}}} r +V_{\epsilon}(\tau, x^{\epsilon, s,x_{0},u}(\tau))\bigg]\bigg.+|V_{\epsilon}(s, x_{0})-V(s, x_{0})|\nonumber\\ & \leq {\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{ s,x_{0},u}(r), u(r)){\mathord{{\rm d}}} r +V(\tau, x^{ s,x_{0},u}(\tau))\bigg]\bigg.+|V_{\epsilon}(s, x_{0})-V(s,x_{0})|\nonumber\\ &\quad +{\mathbb E}\bigg[\bigg.\bigg|\int^{\tau}_{s}\big(b(x^{\epsilon, s,x_{0},u}(r), u(r)) -b(x^{ s,x_{0},u}(r), u(r))\big){\mathord{{\rm d}}} r\bigg|\bigg]\bigg.\nonumber\\ & \quad +{\mathbb E}[|V_{\epsilon}(\tau, x^{\epsilon, s,x_{0},u}(\tau))-V(\tau, x^{ s,x_{0},u}(\tau))|]\nonumber\\ & \leq {\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{ s,x_{0},u}(r), u(r)){\mathord{{\rm d}}} r +V(\tau, x^{ s,x_{0},u}(\tau))\bigg]\bigg.\nonumber\\ &\quad + C(a_{H},b_{H},x_{0}, L,T)\epsilon^{\frac{1}{2}}+\sup_{(t,y)\in B}|V_{\epsilon}(t,y)-V(t,y)|\nonumber\\ &\quad+\frac{C(a_{H},b_{H},x_{0}, L,T)}{R}. \end{align*} Passing to the limit as $\epsilon\rightarrow 0$ and $R\rightarrow \infty,$ and recalling that $V_{\epsilon}\rightarrow V$ uniformly on $B,$ we obtain \begin{align*} V(s, x_{0}) \leq {\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{ s,x_{0},u}(r), u(r)){\mathord{{\rm d}}} r +V(\tau, x^{ s,x_{0},u}(\tau))\bigg]\bigg.. \end{align*} Since $u\in {\mathscr U}[s,T]$ is arbitrary, it follows that $V(s,x_{0})\leq \inf_{u\in {\mathscr U}[s,T]}{\mathbb E}[\int^{\tau}_{s}b(x^{ s,x_{0},u}(r), u(r)){\mathord{{\rm d}}} r +V(\tau, x^{ s,x_{0},u}(\tau))].$ Conversely, for any $\delta> 0,$ since $V_{\epsilon}$ satisfies the DPP \eqref{55}, there exists $u_{\delta}\in {\mathscr U}[s,T]$ such that $$V_{\epsilon}(s,x_{0})+\frac{\delta}{2}\geq {\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{\epsilon, s,x_{0},u_{\delta}}(r), u_{\delta}(r)){\mathord{{\rm d}}} r +V_{\epsilon}(\tau, x^{\epsilon, s,x_{0},u_{\delta}}(\tau))\bigg]\bigg.. $$ Using the above inequality, we have \begin{align*} &V(s,x_{0})+\delta\geq V_{\epsilon}(s,x_{0})+\delta-|V_{\epsilon}(s,x_{0})-V(s,x_{0})|\\ &\geq {\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{\epsilon, s,x_{0},u_{\delta}}(r), u_{\delta}(r)){\mathord{{\rm d}}} r +V_{\epsilon}(\tau, x^{\epsilon, s,x_{0},u_{\delta}}(\tau))\bigg]\bigg.+\frac{\delta}{2}-|V_{\epsilon}(s,x_{0})-V(s,x_{0})|\nonumber\\ &\geq {\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{ s,x_{0},u_{\delta}}(r), u_{\delta}(r)){\mathord{{\rm d}}} r +V(\tau, x^{ s,x_{0},u_{\delta}}(\tau))\bigg]\bigg.+\frac{\delta}{2}-|V_{\epsilon}(s, x_{0})-V(s, x_{0})|\nonumber\\ &\quad -{\mathbb E}\bigg[\bigg.\bigg|\int^{\tau}_{s}\big(b(x^{\epsilon, s,x_{0},u_{\delta}}(r), u_{\delta}(r)) -b(x^{ s,x_{0},u_{\delta}}(r), u_{\delta}(r))\big){\mathord{{\rm d}}} r\bigg|\bigg]\bigg.\nonumber\\ &\quad -{\mathbb E}[|V_{\epsilon}(\tau, x^{\epsilon, s,x_{0},u_{\delta}}(\tau))-V(\tau, x^{ s,x_{0},u_{\delta}}(\tau))|]\\ &\geq {\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{ s,x_{0},u_{\delta}}(r), u_{\delta}(r)){\mathord{{\rm d}}} r +V(\tau, x^{ s,x_{0},u_{\delta}}(\tau))\bigg]\bigg.+\frac{\delta}{2}- C(a_{H},b_{H},x_{0}, L,T)\epsilon^{\frac{1}{2}}\nonumber\\ &\quad-\sup_{(t,y)\in B}|V_{\epsilon}(t,y)-V(t,y)|-\frac{C(a_{H},b_{H},x_{0}, L,T)}{R}\nonumber\\ &\geq {\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{ s,x_{0},u_{\delta}}(r), u_{\delta}(r)){\mathord{{\rm d}}} r +V(\tau, x^{ s,x_{0},u_{\delta}}(\tau))\bigg]\bigg.\geq \inf_{u\in {\mathscr U}[s,T]}{\mathbb E}\bigg[\bigg.\int^{\tau}_{s}b(x^{ s,x_{0},u}(r), u(r)){\mathord{{\rm d}}} r +V(\tau, x^{ s,x_{0},u}(\tau))\bigg]\bigg., \end{align*} where the second-to-last inequality follows by letting $\epsilon\rightarrow 0$ and $R\rightarrow \infty.$ Since $\delta>0$ is arbitrary, the reverse inequality also holds, and hence \eqref{d1++} is established. The proof is complete.
\end{proof} \section*{Funding} This research was supported by the National Natural Science Foundation of China (Grant nos. 61876192, 11626236) and the Fundamental Research Funds for the Central Universities of South-Central University for Nationalities (Grant nos. CZY15017, KTZ20051, CZT20020). \section*{Availability of data and materials} \begin{center} Not applicable. \end{center} \section*{Competing interests} \begin{center} The authors declare that they have no competing interests. \end{center} \section*{Authors' contributions} All authors conceived of the study and participated in its design and coordination. All authors read and approved the final manuscript.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The right-hand tail of the firm size distribution closely resembles a power law.\footnote{In other words, for some measure of firm size $S$, there are positive constants $C$ and $\alpha$ such that $\mathbbm P\{S > s\} \approx C s^{-\alpha}$ for large $s$. A well known reference is \cite{axtell2001zipf}. The power law finding has been replicated in many studies. See, for example, \cite{gaffeo2003size}, who treats the G7 economies, as well as \cite{cirillo2009upper}, \cite{kang2011changes} and \cite{zhang2009zipf}, who use Italian, Korean and Chinese data respectively.} This property is highly significant for a range of aggregate outcomes. For example, \cite{carvalho2019large} used a calibrated model of industry dynamics to show that idiosyncratic firm-level shocks generate substantial aggregate volatility when the firm size distribution has the power law property (also referred to as a Pareto tail or fat tail). Aggregate volatility is generated because, with such heavy tails, the volatility contained in averages over independent shocks remains substantial even when the number of draws is large. Their work builds on \cite{nirei2006threshold} and \cite{gabaix2011granular}, who previously showed that idiosyncratic firm-level volatility can account for a substantial fraction of aggregate volatility when shocks have heavy tails. As well as volatility, the properties of the right tail of the firm size distribution also affect both the wealth distribution and the income distribution, and hence the concentration of income and wealth, due to the high concentration of firm ownership and entrepreneurial equity.\footnote{See, for example, \cite{benhabib2018skewed}. The impact of capital income on income and wealth dispersion has risen in recent years, as documented and analyzed in \cite{kacperczyk2018investor}.} The income and wealth distributions in turn affect many other economic phenomena, including the composition of aggregate demand and the growth rate of aggregate productivity.\footnote{The literature on the connection between the distribution of income and wealth and growth rates is extensive. A recent example combining theory and empirics is \cite{halter2014inequality}.} One related issue concerning heavy tails is that, for such distributions, averages and nonparametric statistics provide relatively little guidance on likely future outcomes, even when sample sizes are large.\footnote{As one illustrative example, if observations are {\sc iid} draws from the Cauchy distribution, then the sample mean is likewise Cauchy distributed. In particular, the amount of information that it contains about the underlying distribution shows \emph{no} increase with sample size. \cite{adler1998practical} discuss statistical methods in the presence of heavy tails.} Thus, data arising from such distributions must be supplemented by theory in order to make quantitative statements with any reasonable degree of accuracy. In other words, to accurately predict the time path of the aggregate variables discussed above under different policy settings or other economic scenarios, it is necessary to construct models that contain within them a prediction for the right-hand tail of the firm size distribution. Occam's razor recommends reproducing the tail of the firm size distribution from the simplest model possible, conditional on realistic assumptions and internal consistency.
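The footnoted point about Cauchy sample means is easy to see in a short simulation. The following minimal Python sketch (an illustration only, independent of the model developed below) compares sample means of standard Cauchy and standard normal draws as the sample size grows.

\begin{verbatim}
# Illustration: sample means of heavy-tailed draws carry little information.
# The sample mean of standard Cauchy draws is itself standard Cauchy, so it
# fluctuates at order one; the normal sample mean shrinks like n**(-1/2).
import numpy as np

rng = np.random.default_rng(1234)
for n in (10**2, 10**4, 10**6):
    cauchy_mean = rng.standard_cauchy(n).mean()
    normal_mean = rng.standard_normal(n).mean()
    print(f"n={n:>8}  Cauchy mean={cauchy_mean:+9.3f}  "
          f"normal mean={normal_mean:+9.5f}")
\end{verbatim}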
A natural candidate for this purpose is the entry-exit model of \cite{hopenhayn1992entry}, which forms one of the cornerstones of quantitative economics and has been successful in replicating several other key aspects of the firm size distribution. This leads to the question posed in the present study: Does the benchmark \cite{hopenhayn1992entry} model reproduce the Pareto tail (i.e., power law) of the distribution as an equilibrium prediction under realistic assumptions on firm-level productivity growth? The issue of what constitutes ``realistic'' firm-level productivity growth is critical here, since the exogenously specified dynamics for idiosyncratic firm-level productivity play a key role in shaping aggregate outcomes in the entry--exit model. A commonly used baseline for modeling firm growth is Gibrat's law, which postulates that growth is stationary, with mean and volatility of the growth rate both invariant to firm size. However, this baseline is not itself realistic, as a long series of empirical studies has shown that firm growth deviates from Gibrat's law in systematic ways. For example, small firms typically grow faster than large firms.\footnote{Some of the many studies showing faster firm growth for small firms are \cite{evans1987tests}, \cite{evans1987relationship} and \cite{hall1987relationship}.} A second deviation is that the growth rate of small firms is more volatile than that of large firms.\footnote{Studies showing this relationship include \cite{hymer1962firm} and \cite{dunne1989growth}. More references with similar findings and an overview of the literature can be found in \cite{santarelli2006gibrat}.} Any realistic specification of firm dynamics must admit these departures from Gibrat's law. This paper takes a theoretical approach to the problems raised above. We analyze a setting that admits a broad range of specifications of firm-level dynamics, including those with the systematic departures from Gibrat's law. For example, small firms can grow faster than large firms and their growth rates can exhibit greater volatility. In this setting we prove that when any of these firm-level dynamics are inserted into the standard \cite{hopenhayn1992entry} model, the endogenous firm size distribution generated by entry and exit exhibits a Pareto tail. Moreover, this result does not depend on the shape of the entrants' distribution, beyond a simple moment condition, or on the demand side of the market. In this sense, the Pareto tail becomes a highly robust prediction of the standard entry-exit model once the state space is allowed to be unbounded. We also show that the tail index, which determines the amount of mass in the right tail of the distribution and has been the source of much empirical discussion (see, e.g., \cite{axtell2001zipf} or \cite{gabaix2016power}), depends only on the law of motion for incumbents. As such, it is invariant to the productivity distribution for new entrants, the profit functions of firms, and the structure of demand. For example, corporate tax policy has no impact on the tail index of the firm size distribution (unless it affects the growth rate of firm productivity). The results described above are valid whenever the deviation between incumbents' firm-level growth dynamics and Gibrat's law is not infinitely large, in the sense of expected absolute value. Although this restriction is surprisingly weak, it tends to bind more for large firms than for small ones, since large firms have greater weight in the integral that determines expected value.
This restriction is consistent with the data, since large firms tend to conform more to Gibrat's law than do small ones (see, e.g., \cite{evans1987relationship}, \cite{evans1987tests} or \cite{becchetti2002determinants}). It also provides intuition behind our results, since large firms matter more for the properties of the tail of the firm size distribution. The machinery employed to prove these results draws on \cite{goldie1991implicit}, which uses implicit renewal theory to analyze Pareto tails of a range of time-invariant probability laws. In addition to the contribution described above, the extension of the standard \cite{hopenhayn1992entry} model to an unbounded state space provided in this paper has independent value, since, as noted above, appropriate theoretical structure is necessary for quantitative analysis in the presence of heavy tails. This extension to the model is nontrivial in three ways. First, the lifetime profits of firms are potentially unbounded, requiring a modified approach to the firm decision problem. Second, the time invariance condition for the equilibrium measure of firms concerns stationary distributions of Markov transitions that are possibly transient, due to the unboundedness of productivity. Third, aggregate output is potentially infinite, since integration across productivity states is over an unbounded set. We tackle the unbounded entry-exit model using a combination of weighted supremum norms for the firm decision problem and Kac's Theorem plus a drift condition to handle productivity dynamics. Through this combination, we provide an exact necessary and sufficient condition for existence and uniqueness of a stationary recursive equilibrium in the unbounded setting, as well as a decomposition of the equilibrium firm size distribution and sample path interpretation via Pitman's occupation measure. The latter connects the cross-sectional mass of firms in a given region of the distribution with the occupation times of individual firms. The proof of existence of a stationary recursive equilibrium is constructive, so quantitative tractability of the entry-exit model is preserved.\footnote{The drift condition that we use to handle productivity dynamics has some independent interest. Drift conditions are a well-known technique for controlling Markov processes on unbounded state spaces (see, e.g., \cite{meyn2012markov}). The idea is to obtain a Lyapunov function (sometimes called a Foster--Lyapunov function in the Markov setting) on the state space, which is defined by two properties: One is that the function becomes large as the state diverges. The second is that the value assigned to the state by the Lyapunov function under the Markov process in question tends to decrease if the state variable begins to diverge. The main difficulty with the approach is finding a suitable Lyapunov function. The innovation introduced below is to use firm output itself as the Lyapunov function.} This paper builds on previous studies that have linked random firm-level growth within an industry to Pareto tails in the cross-sectional distribution of firm size. Early examples include \cite{champernowne1953model} and \cite{simon1955class}, who showed that Pareto tails in stationary distributions can arise if time series follow Gibrat's law along with a reflecting lower barrier. Since then it has been well understood that Gibrat's law can generate Pareto tails for the firm size distribution in models where firm dynamics are exogenously specified.
Surveys can be found in \cite{gabaix2008power} and \cite{gabaix2016power}. \cite{cordoba2008generalized} points out that Gibrat's law is not supported by the data on firm growth and considers a generalization where volatility can depend on firm size. He then shows that Pareto tails still arise in a discrete state setting under such dynamics. This paper strengthens his result in two ways. First, the firm size distribution is endogenously determined as the equilibrium outcome of an entry-exit model, allowing us to consider how regulations, policies and demand impact on the distribution. Second, we allow other departures from Gibrat's law supported by the data, such as dependence of the mean growth rate on firm size. More recently, \cite{carvalho2019large} produce a Pareto tail in a model of endogenous entry and exit whenever Gibrat's law holds for incumbent dynamics and new entrants draw their productivity from a Pareto distribution. While we make no contribution to their discussion of aggregate fluctuations, this paper enhances their power law finding by allowing for a more plausible range of firm-level growth specifications, while at the same time showing that the key results are invariant to the productivity distribution of new entrants. We also work with an unbounded state space, which allows direct modeling of the Pareto tail of the firm size distribution. There are several studies not previously mentioned that generate Pareto tails for the firm size distribution using a number of alternative mechanisms. A classic example is \cite{lucas1978size}, which connects heterogeneity in managerial talent to a Pareto law. More recent examples include \cite{luttmer2011mechanics}, \cite{acemoglu2015innovation} and \cite{cao2018firm}. This paper does not attempt to distinguish between these models based on observed outcomes or question their significance. Rather, the objective of this paper is to determine whether or not Pareto-tailed distributions are a robust prediction of the benchmark entry-exit model under realistic firm-level dynamics, as well as to remove tail truncation from that model to improve quantitative analysis.\footnote{Also tangentially related to our study, at least on a technical level, is \cite{benhabib2015wealth}, which studies a nonlinear process associated with optimal household savings that approximates a Kesten process when income is large. This is somewhat analogous to our treatment of the firm size distribution, in that we allow nonlinear firm-level dynamics that approximate Gibrat's law. However, the topic and underlying methodology are substantially different.} The remainder of the paper is structured as follows. Section~\ref{s:ee} sets out the model. Section~\ref{s:sre} shows existence of a unique stationary recursive equilibrium when the state space is unbounded. Section~\ref{s:ht} investigates heavy tails and Section~\ref{s:c} concludes. Long proofs are deferred to the appendix. \section{Entry and Exit} \label{s:ee} The structure of the model follows \cite{hopenhayn1992entry}. There is a single good produced by a continuum of firms, consisting at each point in time of a mixture of new entrants and incumbents. The good is sold at price $p$ and the demand is given by $D(p)$. \begin{assumption} The demand function $D$ is continuous and strictly decreasing with $D(0) = \infty$ and $\lim_{p\to \infty} D(p) = 0$. \end{assumption} Firms facing output price $p$ and having firm-specific productivity $\phi$ generate profits $\pi(\phi, p)$ and produce output $q(\phi, p)$.
(We take $q$ and $\pi$ as given but provide examples below where they are derived from profit maximization problems.) Profits are negative on the boundary due to fixed costs, as in \cite{hopenhayn1992entry}. In particular, \begin{assumption} \label{a:pq} Both $\pi$ and $q$ are continuous and strictly increasing on $\mathbbm R^2_+$. The function $q$ is nonnegative while $\pi$ satisfies $\pi(\phi, p) < 0$ if either $\phi = 0$ or $p = 0$. \end{assumption} Productivity of each incumbent firm updates according to the idiosyncratic Markov state process $\Gamma(\phi, \diff \phi')$, where $\Gamma$ is a transition probability kernel on $\mathbbm R_+$. The outside option for firms is zero and the value $v(\phi, p)$ of an incumbent satisfies \begin{equation} \label{eq:vf} v(\phi, p) = \pi(\phi, p) + \beta \max \left\{ 0, \int v(\phi', p) \Gamma(\phi, \diff \phi') \right\}, \end{equation} where $\beta = 1/(1+r)$ for some fixed $r > 0$. Here and below, integrals are over $\mathbbm R_+$. \begin{assumption} \label{a:pgamma} The productivity kernel $\Gamma$ is monotone increasing. In addition, % \begin{enumerate} \item For each $a > 0$ and $\phi \geq 0$, there is an $n \in \mathbbm N$ such that $\Gamma^n(\phi, [0, a)) > 0$. \item For each $p > 0$, there exists a $\phi \geq 0$ such that $\int \pi(\phi', p) \Gamma(\phi, \diff \phi') \geq 0$. \end{enumerate} % \end{assumption} The symbol $\Gamma^n$ denotes the $n$-step transition kernel. The monotonicity assumption means that $\Gamma(\phi, [0, a])$ is decreasing in $\phi$ for all $a \geq 0$. Condition (a) is analogous to the recurrence condition in \cite{hopenhayn1992entry}. Condition (b) ensures that not all incumbents exit every period. New entrants draw productivity independently from a fixed probability distribution $\gamma$ and enter the market if $\int v( \phi', p) \, \gamma(\diff \phi') \geq c_e$, where $c_e > 0$ is a fixed cost of entry. \begin{assumption} \label{a:aper} The distribution $\gamma$ satisfies $\int q(\phi, p) \gamma(\diff \phi) < \infty$ and puts positive mass on the interval $[0, a]$ for all $a > 0$. \end{assumption} The first condition in Assumption~\ref{a:aper} is a regularity condition that helps to ensure finite output. The second condition is convenient because it leads to aperiodicity of the endogenous productivity process. \begin{assumption} \label{a:newent} There exists a $p > 0$ such that $\int \pi(\phi, p) \, \gamma(\diff \phi) \geq c_e$. \end{assumption} Assumption~\ref{a:newent} ensures that entry occurs when the price is sufficiently large. It is relatively trivial because, for price-taking firms, revenue is proportional to price. For realistic industry dynamics, we also need a nonzero rate of exit. We implement this by assuming that, when a firm's current productivity is sufficiently low, its expected lifetime profits are negative: \begin{assumption} \label{a:npro} The profit function is such that $\sum_{t \geq 0} \beta^t \int \pi(\phi', p) \Gamma^t (0, \diff \phi') \leq 0$. \end{assumption} One (admittedly simplistic) example of a setting where Assumption~\ref{a:npro} holds is when firm growth follows Gibrat's law, so that $\Gamma$ is represented by the recursion $\phi_{t+1} = A_{t+1} \phi_t$ for some positive {\sc iid} sequence $\{A_t\}$. Then $\phi_0=0$ implies $\phi_t=0$ for all $t$, and hence $\Gamma^t (0, \diff \phi')$ is a point mass at zero. Hence the integral in Assumption~\ref{a:npro} evaluates to $\pi(0, p)$ for each $t$, which is negative by Assumption~\ref{a:pq}.
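In this case the sum in Assumption~\ref{a:npro} can be evaluated in closed form, completing the computation just sketched:
\begin{equation*}
    \sum_{t \geq 0} \beta^t \int \pi(\phi', p) \, \Gamma^t(0, \diff \phi')
    = \sum_{t \geq 0} \beta^t \, \pi(0, p)
    = \frac{\pi(0, p)}{1 - \beta} < 0,
\end{equation*}
where the final inequality uses $\pi(0, p) < 0$ from Assumption~\ref{a:pq} together with $\beta \in (0, 1)$.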
Since productivity is unbounded and profits can be arbitrarily large, we also need a condition on the primitives to ensure that $v$ is finite. In stating it, we consider the productivity process $\{\phi_t\}$ defined by \begin{equation} \label{eq:inff} \phi_0 \sim \gamma \text{ and } \phi_{t+1} \sim \Gamma(\phi_t, \diff \phi') \text{ when } t \geq 0. \end{equation} \begin{assumption} \label{a:fc} There is a $\delta \in (\beta, 1)$ with $\sum_{t \geq 0} \, \delta^t \, \mathbbm E \, \pi (\phi_t, p) < \infty$ at all $p \geq 0$. \end{assumption} While slightly stricter than a direct bound on lifetime profits, Assumption~\ref{a:fc} has the benefit of yielding a contraction result for the Bellman operator corresponding to the Bellman equation \eqref{eq:vf}. Since we are working in a setting where profits can be arbitrarily large, the value function is unbounded, so the contraction in question must be with respect to a \emph{weighted} supremum norm. To construct this norm, we take $\delta$ as in Assumption~\ref{a:fc} and let \begin{equation} \label{eq:dkappa} \kappa(\phi, p) := \sum_{t \geq 0} \delta^t \mathbbm E_\phi \hat \pi (\phi_t, p) \; \text{ with } \hat \pi := \pi + b. \end{equation} Here $b$ is a constant chosen such that $\pi + b \geq 1$. The function $\kappa$ is constructed so that it dominates the value function and satisfies $1 \leq \kappa < \infty$ at all points in the state space.\footnote{To be more precise, $\phi \mapsto \kappa(\phi, p)$ is finite $\gamma$-almost everywhere by Assumption~\ref{a:fc}. If $\gamma$ is supported on all of $\mathbbm R_+$, then, since the function in question is monotone, this implies that $\kappa$ is finite everywhere. If not, then we tighten the assumptions above by requiring that $\kappa(\phi, p)$ is finite everywhere.} For each scalar-valued $f$ on $\mathbbm R_+^2$, let $\|f \|_\kappa := \sup |f/\kappa|$. This is the $\kappa$-weighted supremum norm. If it is finite for $f$ then we say that $f$ is $\kappa$-bounded. Let \begin{equation*} \mathscr C := \text{all continuous, increasing and $\kappa$-bounded functions on $\mathbbm R_+^2$}. \end{equation*} Under the distance $d(v, w) := \| w - v \|_\kappa$, the set $\mathscr C$ is a complete metric space.\footnote{Completeness of the set of continuous $\kappa$-bounded functions under $d$ is proved in many places, including \cite{hernandez2012further}, \S7.2. Our claim of completeness of $(\mathscr C, d)$ follows from the fact that the limit of a sequence of increasing functions in $(\mathscr C, d)$ is also increasing.} \begin{assumption} \label{a:gcm} If $u$ is in $\mathscr C$, then $(\phi, p) \mapsto \int u(\phi', p) \Gamma(\phi, \diff \phi')$ is continuous. \end{assumption} Assumption~\ref{a:gcm} is a version of the continuity property imposed by \cite{hopenhayn1992entry}, modified slightly to accommodate the fact that $\Gamma$ acts on unbounded functions. \section{Stationary Recursive Equilibrium} \label{s:sre} Now we turn to existence, uniqueness and computation of stationary recursive equilibria for the industry. All the assumptions of the previous section are in force. \subsection{Existence and Uniqueness} We begin our analysis with the firm decision problem: \begin{lemma} \label{l:tcm} The Bellman operator $T \colon \mathscr C \to \mathscr C$ defined at $v \in \mathscr C$ by % \begin{equation} \label{eq:bellop} (Tv)(\phi, p) = \pi(\phi, p) + \beta \max \left\{ 0, \int v(\phi', p) \Gamma(\phi, \diff \phi') \right\} \end{equation} % is a contraction on $(\mathscr C, d)$.
Its unique fixed point $v^*$ is strictly increasing and strictly negative if either $\phi = 0$ or $p = 0$. \end{lemma} Now let $\bar \phi$ be the function on $\mathbbm R_+$ defined by \begin{equation} \label{eq:barphi} \bar \phi(p) := \min \left\{ \phi \geq 0 \;\; \big| \; \int v^*(\phi', p) \, \Gamma(\phi, \diff \phi') \geq 0 \right\}. \end{equation} With the convention that incumbents who are indifferent remain rather than exit, an incumbent with productivity $\phi$ exits if and only if $\phi < \bar \phi (p)$. In \eqref{eq:barphi} we take the usual convention that $\min \emptyset = \infty$. \begin{lemma} \label{l:barphi} The optimal exit threshold function $\bar \phi$ satisfies $\bar \phi(0) = \infty$ and is finite, strictly positive and decreasing on $(0, \infty)$. \end{lemma} The equilibrium \emph{entry condition} is $\int v^*( \phi, p) \gamma(\diff \phi) \leq c_e$ and, whenever the mass of entrants $M$ is strictly positive, \begin{equation} \label{eq:entc} \int v^*( \phi, p) \, \gamma(\diff \phi) = c_e. \end{equation} \begin{lemma} \label{l:pphis} There exists a unique price $p^* > 0$ such that \eqref{eq:entc} holds. \end{lemma} For stationarity, the measure $\mu$ of current firm productivity levels must satisfy the invariance condition \begin{equation} \mu (B) = \int \Gamma(\phi, B) \, \mathbbm 1\{\phi \geq \bar \phi(p)\} \, \mu(\diff \phi) + M \, \gamma(B) \text{ for all } B \in \mathscr B, \end{equation} where $\mathscr B$ is all Borel sets on $\mathbbm R_+$ and $\bar \phi$ is as in \eqref{eq:barphi}. Stationarity also requires balanced entry and exit, which means that $M = \mu \{ \phi < \bar \phi(p) \}$. In particular, a \emph{stationary recursive equilibrium} for this model is a positive constant $p$, a finite Borel measure $\mu$ on $\mathbbm R_+$ and an $M \geq 0$ such that when $p$ is the output price, $\mu$ is the distribution of firms, and $M$ is the mass of firms that enter in each period, the equilibrium entry condition holds, the time-invariance condition \eqref{eq:dlom} is valid, there is balanced entry and exit, and the goods market clears: \begin{equation} \label{eq:goods} \int q( \phi, p) \mu(\diff \phi) = D(p). \end{equation} We can now state our main existence and uniqueness result. In stating it, we take $\{\phi_t\}$ to be as in \eqref{eq:inff} and set $\tau(p)$ to be the lifespan of a firm with this productivity path when the output price is $p$: \begin{equation} \tau(p) := \inf \setntn{t \geq 1}{\phi_t < \bar \phi(p)}. \end{equation} \emph{Expected lifetime firm output} at $p$ is \begin{equation*} \ell(p) := \mathbbm E \sum_{t=1}^{\tau(p)} q(\phi_t, p). \end{equation*} Let $p^*$ be the unique price consistent with the entry condition (see Lemma~\ref{l:pphis}). The next theorem characterizes equilibrium for the entry-exit model set out in Section~\ref{s:ee}. \begin{theorem} \label{t:bk1} The following two statements are logically equivalent: % \begin{enumerate} \item Expected lifetime output for firms is finite. \item A stationary recursive equilibrium exists. \end{enumerate} % If either (and hence both) of these statements is true, then the equilibrium is unique, the equilibrium mass of entrants $M^*$ is strictly positive, and the distribution of productivity $\mu^*$ and $M^*$ satisfy % \begin{equation} \label{eq:kdec} \mu^* (B) = M^* \cdot \mathbbm E \sum_{t = 1}^{\tau(p^*)} \mathbbm 1\{\phi_t \in B\} \quad \text{for all } B \in \mathscr B. \end{equation} % \end{theorem} Condition (a) in Theorem~\ref{t:bk1} requires $\ell(p^*) < \infty$.
While $p^*$ is not a primitive, a simple sufficient condition involving only primitives is given in Section~\ref{ss:ascond}. The decomposition \eqref{eq:kdec} says that the mass of firms in set $B$ is proportional to the expected number of times that a firm's productivity visits $B$ over its lifespan. It ties the cross-sectional distribution of productivity to dynamics at the level of the firm. The decomposition is obtained by a combination of Kac's Theorem and the Pitman occupation formula. The proof of Theorem~\ref{t:bk1} is constructive and the basic approach and its numerical implementation are discussed in Section~\ref{ss:comp} below.\footnote{The traditional entry-exit model, with productivity bounded above by some constant $B$, is a special case of Theorem~\ref{t:bk1}. To see this, observe that $q \leq K$ for some constant $K$ in such a setting, so $\ell(p^*) \leq K \mathbbm E \tau(p^*)$. The expected firm lifespan $\mathbbm E \tau(p)$ is finite at every $p > 0$, since, by Assumption~\ref{a:pgamma}, there exists an integer $n$ such that $\epsilon := \Gamma^n (B, [0, \bar \phi(p))) > 0$, and hence $\{\phi_t\}$ falls below $\bar \phi(p)$ with independent probability at least $\epsilon$ every $n$ periods. This means that $\mathbbm E \tau(p) = \sum_{m \in \mathbbm N} \mathbbm P\{\tau(p) \geq m\} \leq \sum_{m \in \mathbbm N} (1-\epsilon)^{\lfloor m/n \rfloor}$, which is finite. Hence $\ell(p^*)$ is finite, and a stationary recursive equilibrium exists.} \subsection{A Sufficient Condition} \label{ss:ascond} We now give a simple sufficient condition for $\ell(p^*) < \infty$, which in turn implies all of the conclusions of Theorem~\ref{t:bk1}. \begin{assumption} \label{a:dc} For each $p > 0$, there exist $\lambda \in (0, 1)$ and $L < \infty$ such that % \begin{equation} \label{eq:dc} \int q(\phi', p) \Gamma(\phi, \diff \phi') \leq \lambda q(\phi, p) + L \quad \text{ for all } \phi \geq 0. \end{equation} % \end{assumption} Assumption~\ref{a:dc} says that output growth for incumbents is expected to be negative whenever current output is sufficiently large.\footnote{To see this, we can write $q(\phi_t, p)$ as $Q_t$ and express \eqref{eq:dc} as $\ln( \mathbbm E_t Q_{t+1} / Q_t)\leq \ln (\lambda + L / Q_t)$. When $Q_t$ is sufficiently large, the right-hand side is negative.} In the literature on Markov processes, the bound in \eqref{eq:dc} is sometimes called a Foster--Lyapunov drift condition. In the present case, output $q$ is adopted as the Lyapunov function. \begin{proposition} \label{p:geoerg} If, in addition to the conditions in Section~\ref{s:ee}, Assumption~\ref{a:dc} holds, then expected lifetime output for each firm is finite. \end{proposition} The intuition behind Proposition~\ref{p:geoerg} is as follows. When Assumption~\ref{a:dc} is in force, incumbents with sufficiently large output tend to see output fall in the next period. Output is a strictly increasing function of $\phi$, so falling output means falling productivity. From this one can construct a finite interval such that, for any given incumbent, productivity returns to this interval infinitely often. At each such occasion, the recurrence condition in Assumption~\ref{a:pgamma} yields an independent $\epsilon$ probability of exiting. Eventually the firm exits and lifetime output remains finite.\footnote{Even though lifetime output is finite along every sample path, this does not necessarily imply that the \emph{expectation} of lifetime output $\ell(p^*)$ is finite.
Hence there are some subtleties involved in the proof of Proposition~\ref{p:geoerg}. The reason that output is used as the Lyapunov function is that we need this expectation to be finite. The appendix gives details.}
\begin{example} \label{eg:gll} Suppose that incumbent productivity grows according to
%
\begin{equation} \phi_{t+1} = A_{t+1} \phi_t + Y_{t+1} \quad \text{for some {\sc iid} sequence } \{A_t, Y_t\}, \end{equation}
%
that production is linear in $\phi$ and that all factors of production are constant, so that $q(\phi, p) = e \phi$ for some $e > 0$. Regarding the drift condition \eqref{eq:dc}, we have
%
\begin{equation*} \int q(\phi', p) \Gamma(\phi, \diff \phi') = e \, \mathbbm E A_{t+1} \, \phi + e \, \mathbbm E Y_{t+1} = \mathbbm E A_{t+1} \, q(\phi, p) + e \, \mathbbm E Y_{t+1}. \end{equation*}
%
Assumption~\ref{a:dc} is therefore satisfied whenever $\mathbbm E A_t < 1$ and $\mathbbm E Y_t < \infty$. \end{example}
\begin{example} \label{eg:glcd} Suppose instead that production is Cobb--Douglas, with output $\phi n^\theta$ under labor input $n$ and parameter $\theta \in (0, 1)$. With profits given by $p \phi n^\theta - c - w n$ for some $c, w > 0$, output at the optimal labor input is
%
\begin{equation*} q(\phi, p) = \phi^\eta m(p) \quad \text{where } \eta := \frac{1}{1-\theta} \; \text{ and } \; m(p) := \left( \frac{p \theta}{w} \right)^{\theta/(1-\theta)}. \end{equation*}
%
If productivity growth follows $\phi_{t+1} = A_{t+1} \phi_t$, then the right-hand side of the drift condition \eqref{eq:dc} becomes
%
\begin{equation*} \int q(\phi', p) \Gamma(\phi, \diff \phi') = \mathbbm E (A_{t+1} \phi)^\eta \, m(p) = \mathbbm E A_{t+1}^\eta \, q(\phi, p). \end{equation*}
%
Thus, Assumption~\ref{a:dc} is valid whenever $\mathbbm E[ A_t^{1/(1-\theta)} ] < 1$. This is a joint restriction on the rate of incumbent firm growth and the Cobb--Douglas production parameter $\theta$. \end{example}
\subsection{Computing the Solution} \label{ss:comp}
When proving Theorem~\ref{t:bk1}, our first step is to insert balanced entry and exit into the time-invariance condition \eqref{eq:tinv}, yielding
\begin{equation} \label{eq:dlom} \mu (B) = \int \Pi_p(\phi, B) \mu (\diff \phi) \quad \text{for all } B \in \mathscr B, \end{equation}
where $\Pi_p$ is the transition kernel on $\mathbbm R_+$ defined by
\begin{equation} \label{eq:upm} \Pi_p(\phi, B) = \Gamma(\phi, B) \mathbbm 1\{\phi \geq \bar \phi(p) \} + \mathbbm 1\{\phi < \bar \phi(p) \} \gamma(B). \end{equation}
Then we proceed as follows:
\begin{enumerate}
\item[(S1)] Take the price $p^*$ dictated by the entry condition, as in Lemma~\ref{l:pphis}.
\item[(S2)] Take $\mu$ to be the unique stationary Borel probability measure for $\Pi_{p^*}$.
\item[(S3)] Rescale by setting $s := D(p^*)/ \int q( \phi, p^*) \mu(\diff \phi)$ and then $\mu^* := s \, \mu$.
\end{enumerate}
Rescaling in (S3) is implemented so that the goods market clears. As shown in the proof, uniqueness in (S2) always holds because $\Pi_p$ is irreducible for all $p > 0$. The two challenging parts of the proof of Theorem~\ref{t:bk1} are existence of $\mu$ and positivity of $s$. In both cases our solution draws on Kac's Theorem. The proof in the appendix gives details. In terms of computational methods, the value $p^*$ in (S1) can be obtained once we solve for $v^*$. The latter is a fixed point of a contraction map, as shown in Lemma~\ref{l:tcm}. This provides the basis of a globally convergent method of computation. The situation for $\mu$ in (S2) is similar.
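To make steps (S1)--(S3) concrete, the following sketch shows one possible numerical implementation on a discretized productivity grid. It is only schematic: the profit function \texttt{pi}, the transition matrix \texttt{P} (a discretization of $\Gamma$), the entrant distribution \texttt{gamma\_pmf}, the discount factor \texttt{rho} and the entry cost are placeholder primitives, and the Bellman operator written in \texttt{solve\_v} is the standard Hopenhayn-style operator, which we assume corresponds to the contraction in Lemma~\ref{l:tcm}.
\begin{verbatim}
import numpy as np

grid = np.linspace(0.0, 10.0, 200)   # discretized productivity levels
rho, c_e = 0.95, 1.0                 # placeholder discount factor, entry cost

def solve_v(p, P, pi, tol=1e-8):
    """Value function iteration for the (assumed) Bellman operator
    (Tv)(phi) = pi(phi, p) + rho * max{0, E[v(phi') | phi]},
    where exiting yields zero."""
    v = np.zeros(len(grid))
    while True:
        v_new = pi(grid, p) + rho * np.maximum(P @ v, 0.0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new

def entry_gap(p, P, pi, gamma_pmf):
    # left-hand side minus right-hand side of the entry condition (eq:entc)
    return gamma_pmf @ solve_v(p, P, pi) - c_e

def find_p_star(P, pi, gamma_pmf, lo=1e-3, hi=10.0, n_iter=60):
    """(S1): bisect on the entry condition, assuming the gap is increasing
    in p, consistent with the uniqueness asserted in Lemma l:pphis."""
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if entry_gap(mid, P, pi, gamma_pmf) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# (S2): on the grid, mu is the left eigenvector (eigenvalue 1) of the
# discretized kernel Pi_{p*}, or the long-run frequencies of a simulated
# chain.  (S3): rescale mu by s = D(p*) / sum(q(grid, p*) * mu) so that
# the goods market clears.
\end{verbatim}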
It is shown in the appendix that the endogenous productivity process is aperiodic and $\gamma$-irreducible. (See \cite{meyn2012markov} for definitions and notation related to Markov processes.) The condition $\ell(p^*) < \infty$ then implies that the same process is Harris recurrent and ergodic, opening avenues for computing $\mu$ through either simulation or successive approximations. Stronger statements are true when Assumption~\ref{a:dc} holds. We show in the proof of Proposition~\ref{p:geoerg} that when Assumption~\ref{a:dc} is in force, the transition kernel $\Pi_p$ is $V$-uniformly ergodic \cite[Chapter~16]{meyn2012markov} for all $p > 0$, implying that the marginal distributions generated by $\Pi_{p}$ converge to the unique stationary distribution at a geometric rate and yielding a range of sample path properties.\footnote{There is a caveat, however. As we show in the next section, the productivity distribution $\mu$ and hence the firm size distribution $\mu^*$ have very heavy tails under realistic firm-level growth dynamics. This adds a level of complication to numerics. Suitable methods for handling fat tails numerically have been proposed by \cite{gouin2019pareto} in the context of the wealth distribution, and similar ideas should be applicable here.}
\section{Pareto Tails} \label{s:ht}
Next we turn to the tail properties of the equilibrium distribution of firms identified by Theorem~\ref{t:bk1}. To be certain that this distribution exists, we impose the conditions of Proposition~\ref{p:geoerg}. While we focus on productivity when analyzing firm size, heavy tails in productivity are typically mirrored or accentuated in profit-maximizing output.\footnote{For example, in the Cobb--Douglas case studied in Example~\ref{eg:glcd}, profit-maximizing output is convex in productivity.} It is convenient to introduce a function $G$ and an {\sc iid} sequence $\{W_t\}$ such that
\begin{equation} \label{eq:defgw} \phi_{t+1} = G(\phi_t, W_{t+1}) \end{equation}
obeys the incumbent dynamics embodied in the Markov kernel $\Gamma$.\footnote{In other words, $\mathbbm P\{ G(\phi, W_{t+1}) \in B \} = \Gamma(\phi, B)$ for all $\phi \geq 0, \; B \in \mathscr B$.} Such a representation can always be constructed (see, e.g., \cite{bhattacharya2007random}). Let $X$ be a random variable with distribution $\mu$, where $\mu$ is the unique probability measure identified in step (S2) of Section~\ref{ss:comp}. The firm size distribution\footnote{In referring to this distribution, we ignore the distinction between the probability distribution $\mu$, from which $X$ is drawn, and the equilibrium distribution $\mu^*$ identified in step (S3) of Section~\ref{ss:comp}, since one is a rescaled version of the other and hence the tail properties are unchanged.} has a Pareto tail with tail index $\alpha > 0$ if there exists a $C > 0$ with
\begin{equation} \label{eq:dpt} \lim_{x \to \infty} x^{\alpha} \, \mathbbm P \{X > x \} = C. \end{equation}
In other words, the distribution is such that $\mathbbm P \{X > x \}$ goes to zero like $x^{-\alpha}$. To investigate when $X$ has this property, we impose the following restriction on the law of motion for incumbent firms. In stating it, we take $W$ as a random variable with the same distribution as each $W_t$.
\begin{assumption} \label{a:gc} There exist an $\alpha > 0$ and an independent random variable $A$ with continuous distribution function such that $\mathbbm E A^{\alpha} = 1$, the moments $\mathbbm E A^{\alpha+1}$ and $\int z^\alpha \gamma(\diff z)$ are both finite, and
%
\begin{equation} \label{eq:gc} \mathbbm E \left| G(X, W)^\alpha - (A X)^\alpha \right| < \infty. \end{equation}
%
\end{assumption}
Condition \eqref{eq:gc} bounds the deviation between the law of motion \eqref{eq:defgw} for incumbent productivity and Gibrat's law, under which productivity updates via $\phi_{t+1} = A_{t+1} \phi_t$. The existence of a positive $\alpha$ such that $\mathbbm E A^{\alpha} = 1$ requires that $A$ puts at least some probability mass above 1. In terms of Gibrat's law $\phi_{t+1} = A_{t+1} \phi_t$, this corresponds to the natural assumption that incumbent firms grow with positive probability.
\begin{theorem} \label{t:bk2} If Assumption~\ref{a:gc} holds for some $\alpha > 0$, then the endogenous stationary distribution for firm productivity is Pareto-tailed, with tail index equal to $\alpha$. \end{theorem}
While Assumption~\ref{a:gc} involves $X$, which is endogenous, we can obtain it from various sufficient conditions that involve only primitives. For example, suppose there exist independent nonnegative random variables $A$ and $Y$ such that
\begin{enumerate}
\item[(P1)] $Y$ has finite moments of all orders,
\item[(P2)] $A$ satisfies the conditions in Assumption~\ref{a:gc} for some $\alpha \in (0, 2)$, and
\item[(P3)] the bound $|G(\phi, W) - A \phi| \leq Y$ holds for all $\phi \geq 0$.
\end{enumerate}
We also assume that the first moment of $\gamma$ is finite, although this is almost always implied by Assumption~\ref{a:aper} (see, e.g., Examples~\ref{eg:gll}--\ref{eg:glcd}). Condition (P3) provides a connection between incumbent dynamics and Gibrat's law. Note that the dynamics in $G$ can be nonlinear and, since $Y$ is allowed to be unbounded, infinitely large deviations from Gibrat's law are permitted. One simple specification satisfying (P3) is when $G(\phi, W) = A \phi + Y$, which already replicates some empirically relevant properties (e.g., small firms exhibit more volatile and faster growth rates than large ones). Conditions (P1)--(P3) only restrict incumbent dynamics (encapsulated by $\Gamma$ in the notation of Sections~\ref{s:ee}--\ref{s:sre}). Since, in Theorem~\ref{t:bk2}, the tail index is determined by $\alpha$, these dynamics are the only primitive that influences the index of the Pareto tail. The range of values for $\alpha$ in (P2) covers standard estimates (see, e.g., \cite{gabaix2016power}). To show that (P1)--(P3) imply the conditions of Assumption~\ref{a:gc}, we proceed as follows. As $A$ satisfies the conditions of Assumption~\ref{a:gc}, we only need to check that \eqref{eq:gc} holds. In doing so, we will make use of the elementary bound
\begin{equation} \label{eq:eb} |x^\alpha - y^\alpha| \leq \begin{cases} |x - y|^\alpha & \quad \text{if } 0 < \alpha \leq 1; \\ \alpha |x - y| \max\{x, y\}^{\alpha-1} & \quad \text{if } 1 < \alpha \end{cases} \end{equation}
for nonnegative $x,y$. In the case $0 < \alpha \leq 1$, we therefore have, by (P3),
\begin{equation*} \left| G(X, W)^\alpha - (A X)^\alpha \right| \leq \left| G(X, W) - (A X) \right|^\alpha \leq Y^\alpha . \end{equation*}
But $Y$ has finite moments of all orders by (P1), so the bound in \eqref{eq:gc} holds. Next consider the case $1 < \alpha < 2$.
Using \eqref{eq:eb} again, we have
\begin{equation*} \left| G(X, W)^\alpha - (A X)^\alpha \right| \leq \alpha \left| G(X, W) - (A X) \right| \max\{G(X, W), A X\}^{\alpha - 1}. \end{equation*}
In view of (P3) above and the identity $2\max\{x, y\} = |x-y| + x + y$, we obtain
\begin{equation*} \left| G(X, W)^\alpha - (A X)^\alpha \right| \leq \alpha Y \left[ Y + G(X, W) + (A X) \right]^{\alpha - 1}. \end{equation*}
Setting $a:= 1/(\alpha - 1)$, so that $Y = (Y^a)^{\alpha - 1}$, and using Jensen's inequality combined with the fact that $\alpha < 2$ (which makes $t \mapsto t^{\alpha - 1}$ concave) now yields
\begin{equation*} \mathbbm E \left| G(X, W)^\alpha - (A X)^\alpha \right| \leq \alpha \left[ \mathbbm E Y^{a+1} + \mathbbm E Y^a G(X, W) + \mathbbm E Y^a (A X) \right]^{\alpha - 1}. \end{equation*}
We need to bound the three expectations on the right-hand side. In doing so we use Lemma~\ref{l:fmom} in the appendix, which shows that $\mathbbm E X < \infty$ when $1 < \alpha < 2$. The first expectation is finite by (P1). The third is finite by (P1), independence of $Y$, $A$ and $X$, and finiteness of $\mathbbm E A$ and $\mathbbm E X$.\footnote{Note that $\mathbbm E A^\alpha = 1$ and, in the present case, we have $1 < \alpha < 2$, so finiteness of $\mathbbm E A$ is assured.} For the second, since $Y$ is independent of $X$ and $W$, finiteness of the expectation reduces to finiteness of $\mathbbm E G(X, W)$. We have
\begin{equation*} G(X,W) = G(X,W) \mathbbm 1\{X < \bar \phi(p^*)\} + G(X,W) \mathbbm 1\{X \geq \bar \phi(p^*)\}. \end{equation*}
Taking expectations and observing that, given $X \geq \bar \phi(p^*)$, the random variable $G(X,W)$ has distribution $\Pi_{p^*} (X, \diff \phi')$, we have
\begin{equation*} \mathbbm E G(X, W) \leq \int z \gamma(\diff z) + \int \int \phi' \, \Pi_{p^*} (\phi, \diff \phi') \mu(\diff \phi) = \int z \gamma(\diff z) + \int \phi \mu(\diff \phi). \end{equation*}
The equality on the right is due to stationarity of $\mu$ under the endogenous law of motion for firm productivity. Since $\int z \gamma(\diff z)$ is finite by assumption and $\int \phi \mu(\diff \phi) = \mathbbm E X$, which is finite as stated above, we conclude that under (P1)--(P3), the conditions of Theorem~\ref{t:bk2} are satisfied.
\section{Conclusion} \label{s:c}
In this paper we investigated properties of the firm size distribution in the entry-exit model of \cite{hopenhayn1992entry}. We removed the upper bound on firm productivity, imposed previously in order to simplify the analysis, allowing us to consider more realistic representations of firm growth. We found that the standard entry-exit model provides a direct and robust theory of the power law in firm size under realistic firm-level growth dynamics. Substantial and empirically relevant deviations from Gibrat's law were accommodated without altering these results. The methodology developed in this paper can potentially be applied to other settings where a power law is observed. For example, the wealth distribution is Pareto tailed, while the rate of return on wealth (and hence the growth rate of wealth) has been found to vary with the level of wealth in systematic ways (see, e.g., \cite{fagereng2016heterogeneity}). Similarly, the distribution of city sizes also exhibits a Pareto tail. At the same time, Gibrat's law fails in this setting too (see, e.g., \cite{cordoba2008generalized}). Such topics are left to future work.
\section{Introduction}
This paper explores relativistic heavy ion collisions at RHIC and the medium produced in these collisions using hadronic observables. Being most abundantly produced, hadrons define the bulk medium behavior, which is governed by soft, non-perturbative particle production. Analysis of identified hadron spectra and yield ratios allows determination of the kinetic and chemical properties of the system. Hydrodynamics models~\cite{hydro} have been successful in reproducing identified hadron spectral shapes and their characteristic mass dependence at low $p_{T}$ as well as the azimuthal anisotropy of particle emission. Notably, in order to match the data the models require rapid equilibration of the produced matter and a QGP equation of state. The particle abundances also point to an equilibrated system and are well described by statistical thermal models~\cite{thermal}. The chemical freeze-out at $T_{ch} \approx 170$ MeV is suggestive, as it is at the phase boundary of the transition between hadron gas and QGP, as predicted by lattice QCD calculations~\cite{Lqcd}. Above $p_{T}\approx 2$ GeV/$c$, hard-scattering processes become increasingly important. After the hard-scattering, a colored object (the hard-scattered quark or gluon) traverses the medium produced in the collision and interacts strongly. As a result, it loses energy via induced gluon radiation. This phenomenon, known as jet-quenching, manifests itself as a suppression of the yields of high-$p_{T}$ hadrons compared to the production in $pp$ collisions, and as a weakening of the back-to-back angular correlations between the jet fragments. The yield suppression is measured in terms of the nuclear modification factor $R_{AA} = \mathrm{Yield}_{AA}/(N_{coll} \cdot \mathrm{Yield}_{pp})$, where the number of binary nucleon-nucleon collisions, $N_{coll}$, is introduced to account for the nuclear geometry. In this paper, we use the ratio $R_{CP}$, which is obtained from the $N_{coll}$-scaled central-to-peripheral spectra and carries similar information. Jet quenching was discovered at RHIC both in suppressed hadron production at high-$p_{T}$~\cite{ppg003} ($R_{AA}<1$) and in vanishing back-to-back jet correlations~\cite{starb2b}. Another discovery, unpredicted by theory, is a large enhancement in the production of baryons and anti-baryons at intermediate $p_{T} \approx$ 2--5 GeV/$c$~\cite{ppg006,ppg015}, compared to expectations from jet fragmentation. This is in contrast to the suppression of $\pi^{0}$~\cite{ppg014}. In central $Au+Au$ collisions the ratio $\overline{p}/\pi$ is of order unity, a factor of 3 above the ratio measured in peripheral reactions or in $pp$ collisions. In this region of $p_{T}$, fragmentation dominates the particle production in $pp$ collisions. Fragmentation is expected to be independent of the colliding system; hence the large baryon fraction observed at RHIC comes as a surprise. At RHIC, the medium influences the dynamics of hadronization, resulting in enhanced baryon production, but the exact mechanism is not yet completely understood. This paper reviews the latest experimental results relevant to this subject.
\section{Radial flow at intermediate $p_{T}$.}
The most common conjecture that is invoked to explain the large $\overline{p}/\pi$ ratios observed by PHENIX~\cite{ppg015} is the strong radial flow that boosts the momentum spectra of heavier particles to high $p_{T}$.
In this scenario, the soft
\begin{figure}[h] \begin{center} \includegraphics[width=0.8\linewidth]{RadialFlow.eps} \caption{ Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$, and $p,\overline p$ and hydrodynamical fit results for 0--10\% central Au+Au collisions at $\snn = $ 200 GeV~\cite{ppg016}. The $p_{T}$ ranges for the fit are indicated by the solid lines, while the dashed lines show the extrapolated predictions for each particle species. The $\phi$-meson spectrum, not included in the fit, is compared to the model's prediction.} \label{fig:radflow} \end{center} \end{figure}
processes dominate the production of (anti)protons at 2--4.5 GeV/$c$, while the pions are primarily produced by fragmentation of hard-scattered partons. In Fig.~\ref{fig:radflow} we compare the 0--10\% central spectra of $\pi^{\pm}$, $K^{\pm}$, $p$, and $\overline p$ to a hydrodynamics model~\cite{Schnedermann93} that has been fitted to the data. The free parameters in the model are the kinetic freeze-out temperature $T_{fo}$, the transverse flow velocity $\beta_{T}$ and the absolute normalization. The line drawn through the $\phi$-meson spectrum is the model's prediction obtained after fitting all other particle species. We see that: 1) hydrodynamics gives a good description of the $p$ and $\overline p$ spectral shapes up to $\approx 3$ GeV/$c$, and 2) the $\phi$-meson spectrum can be described by the same parameter set as the protons. For lighter particles, the deviation from hydrodynamics happens at lower $p_{T}$. These results may lead to the conclusion that the enhanced $\overline{p}/\pi$ ratio is a mass effect and the intermediate-$p_{T}$ (anti)protons are primarily produced in soft processes. We now examine the scaling of the yields in different centrality classes. We expect that for soft production the yields will scale as the number of nucleons participating in the collision, while for hard processes the scaling is with $N_{coll}$. In Fig.~\ref{fig:scaling} the $p_{T}$ distributions for (anti)protons and $\phi$ are each scaled down by their respective $N_{coll}$. To isolate mass effects from baryon/meson effects we compare a heavy meson to the protons. The result is rather surprising. At intermediate $p_{T}$ the $p+\overline{p}$ yields scale with $N_{coll}$ as expected for hard processes.
\begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{scaling_p_phi.eps} \caption{ $N_{coll}$-scaled transverse momentum spectra of $(p+\overline {p})/2$ and $\phi$-mesons for three different centrality classes. High-$p_{T}$ baryon yields scale with $N_{coll}$, while no scaling is observed for $\phi$-mesons~\cite{ppg016}.} \label{fig:scaling} \end{center} \end{figure}
The $\phi$ yields do not scale. Although the shape of the $p+\overline{p}$ and $\phi$ spectra is the same and is well reproduced by hydrodynamics, the absolute yields for $\phi$ grow more slowly with centrality. When the central and peripheral yields are used to evaluate the nuclear modification factor (Fig.~\ref{fig:phircp}), the (anti)protons show no suppression ($R_{CP} \approx 1$), while the $\phi$ are suppressed similarly to $\pi^{0}$. This result rules out radial flow (and mass) as the sole factor responsible for the baryon enhancement. The similarity in the centrality dependence of $\phi$ and $\pi$ production suggests an effect related to the number of constituent quarks rather than the mass.
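For reference, the construction of $R_{CP}$ from the measured spectra amounts to the following simple computation; the yield arrays and $N_{coll}$ values in this sketch are illustrative placeholders rather than measured PHENIX data.
\begin{verbatim}
import numpy as np

def r_cp(yield_central, ncoll_central, yield_peripheral, ncoll_peripheral):
    """R_CP = (Yield_central / N_coll^central) /
              (Yield_peripheral / N_coll^peripheral), bin-by-bin in pT."""
    central = np.asarray(yield_central) / ncoll_central
    peripheral = np.asarray(yield_peripheral) / ncoll_peripheral
    return central / peripheral

# R_CP ~ 1 indicates N_coll (hard-process) scaling, as observed for p + pbar;
# R_CP < 1 indicates suppression, as observed for the phi-meson and pi0.
\end{verbatim}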
The STAR experiment also observed a clear baryon/meson distinction in $R_{CP}$ of $K^{*}$, $K^{0}_{s}$, $\Lambda$, and $\Xi$~\cite{starLambda}.
\begin{figure}[b] \begin{center} \includegraphics[width=0.9\textwidth]{Rcp.eps} \caption{The nuclear modification factor, $R_{CP}$, of $\phi$, $p+\overline{p}$ and $\pi^{0}$ measured by PHENIX in Au+Au collisions at $\snn = $ 200 GeV~\cite{qmproc,ppg016}.} \label{fig:phircp} \end{center} \end{figure}
\section{Recombination and empirical scaling of elliptic flow.}
Recently, several quark recombination models~\cite{recoDuke,recoOregon,recoTAMU} have been proposed to resolve the RHIC baryon puzzle. In the dense medium produced in central $Au+Au$ collisions, recombination of quarks becomes a natural hadronization mechanism. When produced from the same underlying thermal quark distribution, baryons get pushed to higher $p_{T}$ than mesons due to the addition of quark momenta. At intermediate $p_{T}$ recombination wins over fragmentation for baryons, while mesons are still dominated by fragmentation. After fitting the inclusive hadron spectra to extract the thermal component, the models are able to reproduce a large amount of data on identified particle spectra, particle ratios and nuclear modification factors. The most spectacular success of the recombination models comes from the comparison with the data on elliptic flow. At low $p_{T}$, hydrodynamics describes both the magnitude and the mass dependence of $v_{2}$. However, at $p_{T} > 2$ GeV/$c$ the mass ordering of $v_{2}$ changes, namely $v_{2}(p) > v_{2}(\pi)$~\cite{ppg022} and $v_{2}(\Lambda) > v_{2}(K_{s})$~\cite{starLambda}. In addition, the size of the signal is too big to be explained by asymmetric jet absorption~\cite{Molnar03}. The recombination models solve the problem by assigning the elliptic flow signal to the quarks, instead of the hadrons. Then the baryon/meson split in $v_{2}$ is naturally explained. It has been demonstrated empirically that the flow per quark is universal. Recent results from the STAR experiment~\cite{Castillo} that include the measurement of multi-strange baryons are shown in Fig.~\ref{Fig:V2PtXiOm}. A clear baryon/meson difference is observed in the data at $p_{T} > 2$ GeV/$c$. The results from a typical hydrodynamic model calculation~\cite{Pasi01} are shown with a band. After re-scaling both axes in Fig.~\ref{Fig:V2PtXiOm} to represent the quark flow, the data fall onto a universal curve, as demonstrated in Fig.~\ref{Fig:QuarkCoal}.
\begin{figure} \begin{minipage}[t]{0.48\textwidth} \begin{center} \includegraphics[width=0.99\textwidth]{qm04XiOmV2VsPt.eps} \caption{$v_2(p_{T})$ for $\Xi^-$+$\overline{\Xi}^+$, $\Omega^-$+$\overline{\Omega}^+$, $\mathrm{K}^{0}_{S}$, and $\Lambda$+$\overline{\Lambda}$ for minimum bias $Au+Au$ collisions~\cite{Castillo}. The curves show the results from hydrodynamics calculations.} \label{Fig:V2PtXiOm} \end{center} \end{minipage}\hfill \begin{minipage}[t]{0.48\textwidth} \begin{center} \includegraphics[width=0.99\textwidth]{qm04V2nVsPtn.eps} \caption{$v_{2}/n$ as a function of $p_{T}/n$ for $\mathrm{K}^{0}_{S}$, $\Lambda$+$\overline{\Lambda}$~\cite{starLambda} and $\Xi^-$+$\overline{\Xi}^+$, where $n$ is the number of constituent quarks for each particle. (Figure taken from~\cite{Castillo}.)} \label{Fig:QuarkCoal} \end{center} \end{minipage}\hfill \end{figure}
\section{Jet correlations with leading baryons or mesons}
The recombination models resolve most of the baryon/meson effects observed in the data.
However, from spectra, particle ratios and elliptic flow it is difficult to infer whether the recombining quarks come from the thermal bath (soft processes) or from hard-scattering. To unravel the nature of the baryon enhancement and to test the recombination approach, the PHENIX experiment examined the two-particle angular correlations with an identified meson or baryon trigger particle~\cite{ppg033}. The momentum of the triggers was chosen to be in the range of $p_{T}$ where baryon/meson differences are observed ($2.5 < p_T < 4$ GeV/$c$). For both types of trigger, clear jet-like angular correlations were observed both on the same side of the trigger and at $180^{\circ}$. This result shows that both mesons and baryons have a significant hard-scattering component at intermediate $p_{T}$, although the $\overline{p}/\pi$ ratios are dramatically different from fragmentation in the vacuum. In order to quantify the similarity between baryon- and meson-triggered correlations, the yield of associated particles is integrated at the near-side and away-side peaks. The results for different centrality classes and colliding systems are shown in Fig.~\ref{fig:jet_yield}. There is an increase in partner yields in mid-central Au+Au compared to the d+Au and p+p collisions. In Au+Au collisions, the near-side yield per {\it meson} trigger remains constant as a function of centrality, whereas the near-side yield per {\it baryon} trigger decreases in the most central collisions, as expected if a fraction of the baryons were produced by soft processes such as recombination of thermal quarks. The dashed line in Fig.~\ref{fig:jet_yield} represents an upper limit to the centrality dependence of the jet partner yield from thermal recombination.
\begin{figure} \begin{center} \includegraphics[width=0.99\linewidth]{ppg033_jet_yield.eps} \caption{Yield per trigger for associated charged hadrons between $1.7 < p_T < 2.5$ GeV/$c$ for the near- (top) and away- (bottom) side jets~\cite{ppg033}. The dashed line (top) represents an upper limit of the centrality dependence of the near-side partner yield from thermal recombination.} \end{center} \label{fig:jet_yield} \end{figure}
The data clearly disagree with both the centrality dependence and the absolute yields of this estimation, indicating that the baryon excess has the same jet-like origin as the mesons, except perhaps in the highest centrality bin. The bottom panel of Fig.~\ref{fig:jet_yield} shows the conditional yield of partners on the away side. It drops equally for both trigger baryons and mesons going from p+p and d+Au to central Au+Au, in agreement with the observed disappearance \cite{starb2b} and/or broadening of the dijet azimuthal correlations. It further supports the conclusion that the baryons originate from the same jet-like mechanism as mesons. The description of the data in the pQCD framework would require an in-medium modification of the jet fragmentation functions. For recombination models, the experimental results imply that shower and thermal partons have to be treated on an equal basis~\cite{recoOregon}.
\section{Summary}
We reviewed the results on hadron production and flow in relativistic heavy ion collisions at RHIC. The production mechanisms at low-$p_{T}$ and high-$p_{T}$ are relatively well understood in terms of soft and hard processes, respectively. The intermediate-$p_{T}$ region ($2 < p_{T} < 5$ GeV/$c$) is marked by a number of puzzling experimental observations, most notably the baryon excess over the expectation from vacuum fragmentation functions.
By comparing spectra and centrality scaling of (anti)protons and $\phi$-mesons, we established that the excess of anti-protons with respect to pions is not due to the larger mass of the anti-proton, but is related to the number of constituent quarks. The recombination models receive a beautiful confirmation from the empirical scaling relation of the elliptic flow results. Jet correlations with trigger baryons or mesons show a similar hard-scattering component in both. This observation is also in line with the $N_{coll}$ scaling observed in the yields of protons and anti-protons. However, it implies that protons originate from partons that experience little or no energy loss, while pions come from partons that have suffered large energy loss. This result is conceptually difficult, unless baryons and mesons have very different formation times, and thus the original partons have different times to interact with the medium. Recombination models which combine hard-scattered partons with thermal ones give the most likely explanation of the experimental results as a whole. The baryon excess is clearly an effect of the medium produced in $Au+Au$ collisions and may, through the comparison with recombination models, give evidence for its partonic nature.
\section{Introduction} \label{sec:intro}
\begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figs/fig_title.pdf} \vspace{-0.25in} \caption{We propose MoFaNeRF, a parametric model that can synthesize free-view images by fitting to a single image or generating from a random code. The synthesized face is \emph{morphable}: it can be rigged to a certain expression and edited to a certain shape and appearance.} \label{fig:title} \vspace{-0.2in} \end{figure}
Modeling the 3D face is a key problem for solving face-related vision tasks such as 3D face reconstruction, reenactment, parsing, and digital humans. The 3D morphable model (3DMM)\cite{blanz1999morphable}, a parametric model that transforms the shape and texture of faces into a vector space representation, has long been the key solution to this problem. 3DMMs are powerful in representing various shapes and appearances, but require a sophisticated rendering pipeline to produce photo-realistic images. Besides, 3DMMs struggle to model non-Lambertian objects like pupils and beards. Recently, the neural radiance field (NeRF)\cite{mildenhall2020nerf} was proposed to represent the shapes and appearances of a static scene using an implicit function, which shows superiority in the task of photo-realistic free-view synthesis. The most recent progress shows that modified NeRFs can model a dynamic face\cite{gafni2021dynamic,wang2021learning,park2021hypernerf,park2020deformable} or generate diversified 3D-aware images\cite{schwarz2020graf,chan2021pi,gu2021stylenerf}. However, there is still no method that enables NeRF with the abilities of single-view fitting, controllable generation, face rigging and editing at the same time. In summary, conventional 3DMMs are powerful in representing large-scale editable 3D faces but lack the ability of photo-realistic rendering, while NeRFs are the opposite. To combine the best of 3DMM and NeRF, we aim at creating a facial parametric model based on the neural radiance field that has powerful representation ability as well as excellent free-view rendering performance. However, achieving such a goal is non-trivial. The challenges come from two aspects: firstly, how to memorize and parse a very large-scale face database using a neural radiance field; secondly, how to effectively disentangle the parameters (e.g. shape, appearance, expression), which is important to support valuable applications like face rigging and editing. To address these challenges, we propose the Morphable Facial NeRF (MoFaNeRF) that maps free-view images into a vector space of coded facial identity, expression, and appearance using a neural radiance field. Our model is trained on two large-scale 3D face datasets, FaceScape\cite{yang2020facescape,zhu2021facescape} and HeadSpace\cite{Dai2019}, separately. FaceScape contains $359$ available faces with $20$ expressions each, and HeadSpace contains $1004$ faces in the neutral expression. The training strategy is elaborately designed to disentangle the shape, appearance and expression in the parametric space. The identity-specific modulation and texture encoder are proposed to maximize the representation ability of the neural network. Compared to traditional 3DMMs, MoFaNeRF shows superiority in synthesizing photo-realistic images even for pupils, mouth, and beards, which cannot be modeled well by 3D mesh models.
Furthermore, we also propose methods to use our model for image-based fitting, random face generation, face rigging, face editing, and view extrapolation. Our contributions can be summarized as follows:
\begin{itemize}
\item To the best of our knowledge, we propose the first parametric model that maps free-view facial images into a vector space using a neural radiance field and is free from the traditional 3D morphable model.
\item The neural network for parametric mapping is elaborately designed to maximize the solution space to represent diverse identities and expressions. The disentangled parameters of shape, appearance and expression can be interpolated to achieve a continuous and morphable facial synthesis.
\item We present the use of our model in multiple applications, including image-based fitting, view extrapolation, face editing, and face rigging. Our model achieves competitive performance compared to state-of-the-art methods.
\end{itemize}
\section{Related Work} \label{sec:related}
As our work is a parametric model based on a neural radiance field, we review the related work on 3D morphable models and neural radiance fields respectively.
\noindent\textbf{3D Morphable Model.} A 3DMM is a statistical model which transforms the shape and texture of faces into a vector space representation\cite{blanz1999morphable}. By optimizing and editing the parameters, 3DMMs can be used in multiple applications like 3D face reconstruction\cite{xiao2022detailed}, alignment\cite{jourabloo2016large}, animation\cite{zhang2021flow}, etc. We recommend referring to the recent survey\cite{egger20203d} for a comprehensive review of 3DMMs. To build a 3DMM, traditional approaches first capture a large number of 3D facial meshes, then align them into a uniform topology representation, and finally process them with a principal component analysis (PCA) algorithm\cite{yang2020facescape,zhu2021facescape,vlasic2005face,cao2013facewarehouse,li2017learning,jiang2019disentangled}. The parameters of the 3DMM can be further disentangled into multiple dimensions like identity, expression, appearance, and pose. In recent years, several works tried to enhance the representation power of 3DMMs by using a non-linear mapping\cite{bagautdinov2018modeling,tewari2018self,tran2019learning,tran2018nonlinear,cheng2019meshgan,tran2019towards}, which is more powerful in representing detailed shape and appearance than the traditional linear mapping. However, they still suffer from the mesh representation, which makes it hard to model the fine geometry of pupils, eyelashes and hair. Besides, traditional 3DMMs require sophisticated rendering pipelines to render photo-realistic images. By contrast, our model doesn't explicitly generate shape but directly synthesizes photo-realistic free-view images even for pupils, the inner mouth and beards. Very recently, Yenamandra~{\emph{et al.}}~\cite{yenamandra2021i3dmm} proposed to build the 3DMM with an implicit function representing facial shape and appearance. They used a neural network to learn a signed distance field (SDF) of 64 faces, which can model the whole head with hair. Our model is also formulated as an implicit function, but is very different from an SDF: an SDF still models shape, while our method focuses on view synthesis and relaxes the constraints on shape, outperforming the SDF-based model in rendering performance by a large margin.
\noindent\textbf{Neural Radiance Field.} NeRF\cite{mildenhall2020nerf} was proposed to model an object or scene with impressive performance in free-view synthesis.
NeRF synthesizes novel views by optimizing an underlying continuous volumetric scene function that is learned from multi-view images. As the original NeRF is designed only for a static scene, many efforts have been devoted to reconstructing deformable objects. Aiming at the human face, many methods\cite{gafni2021dynamic,wang2021learning,park2021hypernerf} modeled the motion of a single human head with a designed conditional neural radiance field, extending NeRF to handle dynamic scenes from monocular or multi-view videos. Aiming at the human body, several methods have been proposed that introduce a human parametric model (e.g. SMPL)\cite{noguchi2021neural,chen2021animatable,liu2021neural,peng2021neural} or a skeleton\cite{peng2021animatable} as a prior to build a NeRF for the human body. For a wide range of dynamic scenarios, Park {\emph{et al.}}~\cite{park2020deformable} proposed to augment NeRF by optimizing an additional continuous volumetric deformation field, while Pumarola {\emph{et al.}}~\cite{pumarola2021d} optimized an underlying deformable volumetric function. Another group of works \cite{schwarz2020graf,chan2021pi,gu2021stylenerf} turned NeRF into a generative model that is trained or conditioned on certain priors, which achieves 3D-aware image synthesis from a collection of unposed 2D images. To reduce the number of images required for training, many works~\cite{yu2021pixelnerf,wang2021ibrnet,raj2021pixel,Gao20arxiv_pNeRF} trained the model across multiple scenes to learn a scene prior, which achieved reasonable novel view synthesis from a sparse set of views. Different from previous NeRFs, our method is the first parametric model for a facial neural radiance field trained on a large-scale multi-view face dataset. Our model supports multiple applications including random face generation, image-based fitting and facial editing, which are unavailable in previous NeRFs.
\section{Morphable Facial NeRF} \label{sec:method}
\begin{figure}[t] \centering \includegraphics[width=1.0\linewidth, trim=0 0.2in 0 0]{figs/fig_network.pdf} \caption{MoFaNeRF takes appearance code $\alpha$, shape code $\beta$, expression code $\epsilon$, position code $\textbf{x}$ and view direction $\textbf{d}$ as input, synthesizing a coarse result which is then refined by a RefineNet. As shown in the right bottom corner, MoFaNeRF can be used for generation (synthesizing free-view images given parameters) or fitting (optimizing for parameters given a single image).} \label{fig:network} \vspace{-0.2cm} \end{figure}
Morphable facial NeRF is a parametric model that maps free-view facial portraits into a continuous morphable parametric space, which is formulated as:
\begin{equation} \mathcal{M}: (\mathbf{x,d,\beta,\alpha,\epsilon}) \to \{\mathbf{c,\sigma}\}, \end{equation}
where $\mathbf{x}$ is the 3D position of a sample point; $\mathbf{d}$ is the viewing direction consisting of pitch and yaw angles; $\beta, \alpha, \epsilon$ are the parameters denoting facial shape, appearance, and expression respectively; $\mathbf{c}$ and $\sigma$ are the RGB color and the density used to represent the neural radiance field. In the following, we explain $\mathbf{x, d, c,\sigma}$, which are inherited from NeRF, in Section~\ref{sec:nerf}, then introduce $\beta, \alpha, \epsilon$ in Section~\ref{sec:param}. The network design is illustrated in Section~\ref{sec:network} and the training details are explained in Section~\ref{sec:train}.
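As a structural illustration, the mapping $\mathcal{M}$ can be sketched as a conditioned MLP in the PyTorch-style pseudo-implementation below. The layer widths, input dimensions and conditioning scheme shown here are illustrative placeholders; the actual architecture, including ISM, TEM and RefineNet, is described in Section~\ref{sec:network} and the supplementary.
\begin{verbatim}
import torch
import torch.nn as nn

class MoFaNeRFSketch(nn.Module):
    """Minimal sketch of M: (x, d, beta, alpha, eps) -> (c, sigma).
    All dimensions below are illustrative, not the real configuration."""
    def __init__(self, d_x=63, d_d=27, d_beta=50, d_alpha=50, d_eps=20, w=256):
        super().__init__()
        # density branch: conditioned on position, shape and expression
        self.trunk = nn.Sequential(
            nn.Linear(d_x + d_beta + d_eps, w), nn.ReLU(),
            nn.Linear(w, w), nn.ReLU())
        self.sigma = nn.Linear(w, 1)
        # color branch: additionally sees the view direction and appearance
        self.color = nn.Sequential(
            nn.Linear(w + d_d + d_alpha, w // 2), nn.ReLU(),
            nn.Linear(w // 2, 3), nn.Sigmoid())

    def forward(self, x, d, beta, alpha, eps):
        h = self.trunk(torch.cat([x, beta, eps], dim=-1))
        c = self.color(torch.cat([h, d, alpha], dim=-1))
        return c, self.sigma(h)
\end{verbatim}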
\subsection{Neural Radiance Field} \label{sec:nerf}
As defined in NeRF\cite{mildenhall2020nerf}, the radiance field is represented as volumetric density $\sigma$ and color $\mathbf{c}=(R,G,B)$. An MLP is used to predict $\sigma$ and $\mathbf{c}$ from a 3D point $\mathbf{x}=(x,y,z)$ and viewing direction $\mathbf{d}=(\theta,\phi)$. Positional encoding is introduced to transform the continuous inputs $\mathbf{x}$ and $\mathbf{d}$ into a high-dimensional space, which is also used in our model. The field of $\sigma$ and $\mathbf{c}$ can be rendered to images using a differentiable volume rendering module. For a pixel in the posed image, a ray $\mathbf{r}$ is cast through the neural volume field from the ray origin $\mathbf{o}$ along the ray direction $\mathbf{d}$ according to the camera parameters, which is formulated as $\mathbf{r}(z)=\mathbf{o}+z\mathbf{d}$. By sampling points along this ray and accumulating the sampled densities $\sigma(\cdot)$ and RGB values $\mathbf{c}(\cdot)$ computed by the MLP, the final output color $\mathbf{C(r)}$ of this pixel can be evaluated by:
\begin{equation} \mathbf{C(r)}\!=\!\int_{z_{n}}^{z_{f}} T(z) \sigma( \mathbf{r}(z)) \mathbf{c}(\mathbf{r}(z),\mathbf{d})dz\,, \:\textup{where}\:T(z)\!=\!\exp\!\left(-\int_{z_{n}}^{z}\sigma (\mathbf{r}(s))\mathrm{ds}\right) . \end{equation}
$T(z)$ is defined as the accumulated transmittance along the ray from $z_n$ to $z$, where $z_n$ and $z_f$ are the near and far bounds. Through the rendered color, a photometric loss can be applied to supervise the training of the MLP.
\subsection{Parametric Mapping} \label{sec:param}
Our model is conditioned on parameters representing the identity and the facial expression $\epsilon$, where the identity is further divided into shape $\beta$ and appearance $\alpha$. Initially, we considered integrating $\beta$ and $\alpha$ into a single identity code; however, we found it hard for an MLP to memorize the huge amount of appearance information. Therefore, we propose to decouple the identity into shape and appearance. These parameters need to be disentangled to support valuable applications like face rigging and editing.
\noindent\textbf{Shape parameter $\beta$} represents the 3D shape of the face that is only related to the identity of the subject, like the geometry and position of the nose, eyes, mouth and overall face. A straightforward idea is to use one-hot encoding to parameterize $\beta$, but we find it suffers from redundant parameters, because the similarity among a large number of faces is repeatedly expressed in a one-hot code. Instead, we adopt the identity parameters of the bilinear model of FaceScape\cite{yang2020facescape} as the shape parameter; these are the PCA factors of the 3D mesh for each subject. The numerical variation of the identity parameter reflects the similarity between face shapes, which makes the solution space of facial shapes more efficient.
\noindent\textbf{Appearance parameter $\alpha$} reflects photometric features like the colors of skin, lips, and pupils. Some fine-grained features are also reflected by the appearance parameters, such as beards and eyelashes. Considering that the UV texture provided by the FaceScape dataset is the ideal carrier to convey the appearance in a spatially aligned UV space, we propose to encode the UV texture maps into $\alpha$ for training. The texture encoding module (TEM), a CNN-based encoder network, is proposed to transfer the coded appearance information into the MLP.
TEM is only used in the training phase, and we find it significantly improves the quality of synthesized images. We consider the reason to be that the appearance details are well disentangled from shape and spatially aligned, which relieves the MLP of the burden of memorizing appearances.
\noindent\textbf{Expression parameter $\epsilon$} corresponds to the motions caused by facial expressions. Previous methods\cite{park2021hypernerf,Tretschk20arxiv_NR-NeRF} try to model the dynamic face by adding a warping vector to the position code $\textbf{x}$, namely a deformable volume. However, our experiments show that the deformable volume doesn't work in our task, where too many subjects are involved in a single model. More importantly, our training data are not videos but images with $20$ discrete expressions, which makes it even harder to learn a continuous warping field. By contrast, we find that directly concatenating the expression parameters with the position code, as in \cite{li2021dynerf,gafni2021dynamic}, causes fewer artifacts, and our identity-specific modulation (detailed in Section~\ref{sec:network}) further enhances the representation ability for expressions. We are surprised to find that an MLP without a warping module can still synthesize continuous and plausible interpolations of large-scale motions. We believe this is an inherent advantage of the neural radiance field over 2D-based synthesis methods.
\subsection{Network Design} \label{sec:network}
As shown in Figure~\ref{fig:network}, the backbone of MoFaNeRF mainly consists of MLPs, the identity-specific modulation (ISM) module and the texture encoding module (TEM). These networks transform the parameters $\alpha, \beta, \epsilon$, position code $\textbf{x}$ and viewing direction $\textbf{d}$ into the color $\textbf{c}$ and density $\sigma$. The final images are then synthesized from $\textbf{c}$ and $\sigma$ through volume rendering. Considering that the appearance code $\alpha$ is only related to the color $c$, it is only fed into the color decoder. The expression code $\epsilon$ is concatenated to the position code after the identity-specific modulation, as it mainly reflects the motions that are intuitively modulated by shape $\beta$. The RefineNet takes the coarse image predicted by MoFaNeRF as input and synthesizes a refined face. The results presented in this paper are the refined results by default. The additional texture encoding module (TEM) is used only in the training phase and consists of $7$ convolution layers and $5$ fully connected layers. The detailed parameters of our network are shown in the supplementary. To represent a large-scale multi-view face database, the capacity of the network needs to be improved by increasing the number of layers in the MLP and the number of nodes in each hidden layer. The generated images indeed improve after enlarging the model size, but are still blurry and contain artifacts for expressions with large motions. To further improve the performance, we present the identity-specific modulation and RefineNet.
Motivated by AdaIN\cite{karras2019style,karras2020analyzing}, we consider the unique expression of individuals as a modulation relationship between $\beta$ and $\epsilon$, which can be formulated as:
\begin{equation} \epsilon'=M_s(\beta)\cdot\epsilon+M_b(\beta), \end{equation}
where $\epsilon'$ is the updated expression code, and $M_s$ and $M_b$ are shallow MLPs that transform $\beta$ into identity-specific codes to adjust $\epsilon$. Both $M_s$ and $M_b$ output tensors with the same length as $\epsilon$. Our experiments show that ISM improves the representation ability of the network, especially for various expressions.
\noindent\textbf{RefineNet.} We propose to take advantage of generative adversarial networks to further improve the synthesis of facial details. We use Pix2PixHD\cite{wang2018high} as the backbone of RefineNet, which refines the results of MoFaNeRF with a GAN loss\cite{goodfellow2014generative} and a perceptual loss\cite{johnson2016perceptual}. The input of RefineNet is the coarse image rendered by MoFaNeRF, and the output is a refined image with high-frequency details. We find that RefineNet significantly improves details and realism with little impact on identity consistency. The influence of RefineNet on identity consistency is validated in Section~\ref{sec:disen_eval}.
\subsection{Training} \label{sec:train}
\noindent\textbf{Data preparation.} We use $7180$ models released by FaceScape\cite{yang2020facescape} and $1004$ models released by HeadSpace\cite{Dai2019} to train two models respectively. In FaceScape, the models are captured from $359$ different subjects with $20$ expressions each. For FaceScape, we randomly select $300$ subjects ($6000$ scans) as training data, leaving $59$ subjects ($1180$ scans) for testing. For HeadSpace, we randomly select $904$ subjects as training data, leaving $100$ subjects for testing. As HeadSpace only contains a single expression for each subject, the expression input of the network trained on HeadSpace data is removed. All these models are aligned in a canonical space, and the area below the shoulder is removed. We render $120$ images in different views for each subject. The details of the rendering setting are shown in the supplementary.
\noindent\textbf{Landmark-based sampling.} In the training phase, the ray-sampling distribution is modified beyond uniform sampling to make the network focus on the facial region. Specifically, we detect $64$ 2D key-points of the mouth, nose, eyes, and eyebrows, and the inverse-projected rays are sampled around each key-point according to a Gaussian distribution. The standard deviation of the Gaussian distribution is set to $0.025$ of the image size in all our experiments. The uniform sampling and the landmark-based sampling are combined at a ratio of 2:3.
\noindent\textbf{Loss function.} The loss function to train MoFaNeRF is formulated as:
\begin{equation} L=\sum\limits _{\mathbf{r}\in \mathcal{R}} \left [ \left \| \hat{C}_c(\mathbf{r}) - C(\mathbf{r}) \right \|^2_2 + \left \| \hat{C}_f(\mathbf{r}) - C(\mathbf{r}) \right \|^2_2 \right], \end{equation}
where $\mathcal{R}$ is the set of rays in each batch, $C(\mathbf{r})$ is the ground-truth color, and $\hat{C}_c(\mathbf{r})$ and $\hat{C}_f(\mathbf{r})$ are the colors predicted by the coarse volume and the fine volume along ray $\mathbf{r}$ respectively.
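The per-batch objective and the landmark-centred sampling can be sketched as follows; the tensor shapes, the landmark source and the batch assembly are illustrative placeholders.
\begin{verbatim}
import torch

def nerf_loss(c_coarse, c_fine, c_gt):
    """Squared color error of the coarse and fine predictions over a ray
    batch (the loss L above, averaged rather than summed over rays)."""
    return ((c_coarse - c_gt) ** 2).sum(-1).mean() + \
           ((c_fine - c_gt) ** 2).sum(-1).mean()

def landmark_ray_pixels(landmarks_2d, n_rays, img_size, std=0.025):
    """Sample pixel coordinates around detected 2D key-points with a
    Gaussian of standard deviation 0.025 * image size; landmarks_2d is
    an (L, 2) tensor of key-point locations."""
    idx = torch.randint(len(landmarks_2d), (n_rays,))
    return landmarks_2d[idx] + torch.randn(n_rays, 2) * std * img_size

# Roughly 2/5 of the rays in a batch are drawn uniformly over the image and
# 3/5 from landmark_ray_pixels, matching the 2:3 ratio described above.
\end{verbatim}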
It is worth noting that the expression and appearance parameters are updated according to the back-propagated gradients during training, while the shape parameters remain unchanged. We first train the network of MoFaNeRF, then keep the model fixed and train the RefineNet. The RefineNet is trained with the loss function following Pix2PixHD\cite{wang2018high}, which is the combination of a GAN loss\cite{goodfellow2014generative} and a perceptual loss\cite{dosovitskiy2016generating,gatys2016image,johnson2016perceptual}. The implementation details can be found in the supplementary material.
\begin{figure}[tb] \centering \includegraphics[width=1\linewidth, trim=0 0.0cm 0 0]{figs/fig_morph.pdf} \caption{Our model is able to synthesize diverse appearances, shapes and expressions, while these three dimensions are well disentangled. The interpolation in the parametric space shows that the face can morph smoothly between two parameters.} \label{fig:morph} \vspace{-0.1cm} \end{figure}
\subsection{Application} \label{sec:app}
In addition to directly generating faces from a certain or random vector, MoFaNeRF can also be used in image-based fitting, face rigging and editing.
\begin{figure}[t] \centering \includegraphics[width=1.0\linewidth,trim=0 0in 0 0]{figs/fig_fit_pipeline.pdf} \vspace{-0.3in} \caption{The pipeline for fitting our model to a single image.} \label{fig:fit_pipeline} \vspace{-0.1in} \end{figure}
\noindent\textbf{Image-based fitting.} As shown in Figure~\ref{fig:fit_pipeline}, we propose to fit our model to an input image. Firstly, we normalize the image to the canonical image with an affine transformation. Specifically, we first extract the 2D landmarks $L_t$ from the target image with \cite{kazemi2014one}, then align $L_t$ to the predefined 3D landmarks $L_c$ of the canonical face by solving:
\begin{equation} \mathbf{d}, \mathbf{s} = \arg\min_{\mathbf{d}, \mathbf{s}} \left \| \Pi (L_c, \mathbf{d}) \cdot \mathbf{s} - L_t \right \|_2, \end{equation}
where $\mathbf{d}$ is the view direction and $\mathbf{s}$ is the scale. $\Pi(L_c, \mathbf{d})$ is the function that projects 3D points to the 2D image plane according to the view direction $\mathbf{d}$. The scale $\mathbf{s}$ is applied to the target image, and $\mathbf{d}$ is used in the fitting and remains constant. Then we use EHANet~\cite{luo2020ehanet,CelebAMask-HQ} to segment the background out, and normalize the lighting with the relighting method~\cite{zhou2019deep}. In practice, we find it important to eliminate the influence of lighting because our model cannot model complex lighting well. After the pre-processing, we can optimize for $\beta, \alpha, \epsilon$ through the network. Specifically, $\beta$ and $\alpha$ are randomly initialized from a Gaussian distribution, and $\epsilon$ is initialized with the learned value from the training. Then we freeze the pre-trained network weights and optimize $\alpha, \beta, \epsilon$ through the network by minimizing only the MSE loss between the predicted colors and the target colors. Only points around landmarks are sampled during fitting.
\noindent\textbf{Face rigging and editing.} The generated or fitted face can be rigged by interpolating in the expression dimension with a controllable view-point. The expression vector can be obtained by fitting to a video or set manually. Currently, we only use the basic $20$ expressions provided by FaceScape to generate simple expression-changing animations. By improving the rigging of the face to higher dimensions\cite{li2010example}, our model has the potential to perform more complex expressions.
The rigged results are shown in Figure~\ref{fig:title}, Figure~\ref{fig:morph} and the supplementary materials. The generated or fitted face can be edited by manipulating the shape and appearance codes. As explained in Section~\ref{sec:param}, the shape code refers to the shape of the face, i.e. the geometry and position of the nose, eyes, and mouth, while the appearance code refers to the color of skin, lips and pupils, and fine-grained features like beards and eyelashes. These features can be transferred from face A to face B by simply replacing the shape or appearance code, as shown in Figure~\ref{fig:title}. Our model supports manual editing by painting the texture map and then using TEM to generate the appearance code for generation. However, we find that only large-scale features of the edited content in the texture map take effect, like skin color and beard, while small-scale features like moles are not transferred to the synthesized face. We also demonstrate that the face can morph smoothly by interpolating in the vector space, as shown in Figure~\ref{fig:morph}.
\section{Experiment} \label{sec:exp}
\begin{table}[t] \centering \caption{Quantitative evaluation of representation ability.} \begin{tabular}{@{}lccc@{}} \toprule Model & PSNR(dB)$\uparrow$ & SSIM$^*$$\uparrow$ & LPIPS$^*$$\downarrow$ \\ \midrule FaceScape\cite{yang2020facescape} & 27.96$\pm$1.34 & 0.932$\pm$0.012 & 0.069$\pm$0.009 \\ FaceScape{$^*$}\cite{yang2020facescape} & 27.07$\pm$1.46 & 0.933$\pm$0.011 & 0.080$\pm$0.014 \\ i3DMM\cite{yenamandra2021i3dmm} & 24.45$\pm$1.58 & 0.904$\pm$0.014 & 0.112$\pm$0.015 \\ MoFaNeRF & \textbf{31.49$\pm$1.75} & \textbf{0.951$\pm$0.010} & 0.061$\pm$0.011 \\ MoFaNeRF-fine & 30.17$\pm$1.71 & 0.935$\pm$0.013 & \textbf{0.034$\pm$0.007}\\ \bottomrule \end{tabular} \label{tab:represent} \end{table}
\begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figs/fig_represent.pdf} \vspace{-0.1in} \caption{Visual comparison of representation ability. FaceScape$^*$ is the smaller version with a model size comparable to our model ($\approx$120M).} \vspace{-0.1in} \label{fig:represent} \end{figure}
We first compare our model with previous parametric models in representation ability, then show the effectiveness of the parameter disentanglement and the network design in the ablation study. Finally, we evaluate the performance of MoFaNeRF in single-view image-based fitting, view extrapolation, and face manipulation.
\subsection{Comparison of Representation Ability}
We compare the representation ability of our MoFaNeRF with two state-of-the-art facial parametric models: the FaceScape bilinear model\cite{yang2020facescape} and i3DMM\cite{yenamandra2021i3dmm}. FaceScape is a traditional 3DMM that applies PCA to 3D triangle meshes, while i3DMM is a learning-based 3DMM that represents shape via an SDF. Both models are trained on the FaceScape dataset as described in Section~\ref{sec:train}. The default number of parameters for FaceScape is very large ($\approx$630M), so to be fair, we also generated a model with a number of parameters similar to our model ($\approx$120M), labeled as FaceScape$^*$. PSNR\cite{PSNRvsSSIM}, SSIM\cite{wang2004image} and LPIPS\cite{zhang2018unreasonable} are used to measure the objective, structural, and perceptual similarity respectively. Higher similarity between the generated face image and the ground truth indicates better representation ability.
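These metrics can be computed with standard open-source implementations, as in the sketch below using \texttt{skimage} and the \texttt{lpips} package; the exact evaluation protocol (crops, masks and color space) is assumed rather than reproduced here.
\begin{verbatim}
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net='alex')   # perceptual metric of Zhang et al.

def evaluate(pred, gt):
    """pred, gt: HxWx3 float numpy arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    to_t = lambda im: torch.from_numpy(im).permute(2, 0, 1)[None].float() * 2 - 1
    with torch.no_grad():
        lp = lpips_fn(to_t(pred), to_t(gt)).item()   # LPIPS expects [-1, 1]
    return psnr, ssim, lp
\end{verbatim}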
From the visual comparison in Figure~\ref{fig:represent}, we can see that the FaceScape bilinear model doesn't model pupils and the inner mouth, as it is hard to capture accurate 3D geometry for these regions. The rendered texture is blurry due to the misalignment in the registration phase and the limited representation ability of the linear PCA model. i3DMM is able to synthesize the complete head, but the rendering result is also blurry. We observed that the performance of i3DMM trained on our training set has degraded to some extent, and we think it is because our data amount is much larger than theirs ($10$ times larger), which makes the task more challenging. By contrast, our model yields the clearest rendering result, which is also verified in the quantitative comparison shown in Table~\ref{tab:represent}. The refinement improves LPIPS but decreases PSNR and SSIM; we believe this is because the GAN loss and perceptual loss focus on hallucinating plausible details but are less faithful to the original image.
\begin{table}[t] \centering \caption{Validation of identity consistency.} \begin{tabular}{@{}lccc@{}} \toprule Setting & before RefineNet & after RefineNet & ground-truth \\ \midrule changing view & 0.687$\pm$0.027 & 0.707$\pm$0.028 & 0.569$\pm$0.048 \\ changing exp, view & 0.703$\pm$0.023 & 0.720$\pm$0.025 & 0.633$\pm$0.029\\ \bottomrule \end{tabular} \label{tab:ablation_view} \vspace{0.1cm} \end{table}
\subsection{Disentanglement Evaluation} \label{sec:disen_eval}
We show the synthesis results of different parameters on the right side of Figure~\ref{fig:morph} to demonstrate that shape, appearance and expression are well disentangled, and show the interpolation of different attributes on the left side of Figure~\ref{fig:morph} to demonstrate that the face can morph continuously. We further validate identity consistency among different views and different expressions. The distance in facial identity feature space (DFID) defined in FaceNet~\cite{schroff2015facenet} is used to measure how well the identity is preserved. Following the standard in FaceNet, two facial images with DFID $\le 1.1$ are judged to be the same person. We use a subset of our training set for this experiment, containing $10$ subjects with $10$ expressions each. We evaluate DFID between the ground truth and the fitted face rendered in $50$ views with heading angles in $0$--$90^{\circ}$. As reported in Table~\ref{tab:ablation_view}, the DFID scores after RefineNet are slightly increased, but are still comparable to the DFID scores of the ground-truth images. The DFID scores for both the changing-view setting and the changing-expression-and-view setting are much lower than $1.1$, which demonstrates that RefineNet doesn't cause severe identity inconsistency across rendering view-points.
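A possible implementation of DFID, using the open-source \texttt{facenet-pytorch} reimplementation of FaceNet, is sketched below; whether its embedding network matches the one used for Table~\ref{tab:ablation_view} is an assumption, and the squared-distance convention with the $1.1$ threshold follows the FaceNet paper.
\begin{verbatim}
import torch
from facenet_pytorch import InceptionResnetV1

embedder = InceptionResnetV1(pretrained='vggface2').eval()

def dfid(face_a, face_b):
    """face_a, face_b: aligned 1x3x160x160 face crops, normalized as the
    embedder expects; returns the squared L2 distance between the
    unit-norm identity embeddings."""
    with torch.no_grad():
        ea, eb = embedder(face_a), embedder(face_b)
    return (ea - eb).pow(2).sum(dim=-1).item()   # same identity if <= 1.1
\end{verbatim}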
\begin{table}[t] \centering \caption{Ablation study.} \begin{tabular}{@{}lccc@{}} \toprule Label & PSNR(dB)$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$ \\ \midrule (a.1) One-hot expression code $\epsilon$ &25.59$\pm$2.25 &0.888$\pm$0.025 &0.184$\pm$0.039 \\ (a.2) PCA expression code $\epsilon$ &25.79$\pm$2.25 &0.886$\pm$0.025 &0.187$\pm$0.039 \\ (a.3) One-hot shape code $\beta$ &26.27$\pm$2.25 &0.895$\pm$0.024 &0.174$\pm$0.039 \\ (a.4) Learnable shape code $\beta$ &25.24$\pm$2.13 &0.883$\pm$0.024 &0.200$\pm$0.041 \\ (a.5) w/o appearance code $\alpha$ \& TEM &25.73$\pm$2.03 &0.889$\pm$0.025 &0.184$\pm$0.039 \\ (b.1) Deformation &24.22$\pm$2.33 & 0.863$\pm$0.027 & 0.231$\pm$0.041 \\ (b.2) Modulation &25.69$\pm$2.22 &0.886$\pm$0.025 &0.187$\pm$0.039 \\ (b.3) Hybrid &24.42$\pm$2.13 & 0.857$\pm$0.027 & 0.241$\pm$0.039 \\ (c) Uniform sampling &25.67$\pm$2.09 &0.888$\pm$0.024 &0.185$\pm$0.037 \\ (d) Ours with no changes &\textbf{26.57$\pm$2.08} &\textbf{0.897$\pm$0.025} &\textbf{0.166$\pm$0.037} \\ \bottomrule \end{tabular} \label{tab:ablation} \vspace{-0.2cm} \end{table} \begin{figure}[ht] \centering \includegraphics[width=1\linewidth, trim=0.1in 0.2in 0.0in 0]{figs/fig_ablation.pdf} \caption{Visual comparison of the ablation study.} \label{fig:ablation} \vspace{-0.5cm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth, trim=0 0.05cm 0 0]{figs/fig_fit.pdf} \caption{Fitting results of MoFaNeRF to a single-view image. The testing images are from the FaceScape testing set and in-the-wild images. The comparison with single-view face reconstruction methods and failure cases is shown in the supplementary material.} \vspace{-0.1in} \label{fig:fit} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figs/fig_rotate.pdf} \vspace{-0.3in} \caption{Fitting and facial rotation results compared with previous methods.} \vspace{-0.2in} \label{fig:compare_gan} \end{figure} \subsection{Ablation Study} \label{sec:ablation} We provide ablation studies on the coding strategy, the morphable NeRF architecture, and the sampling strategy: \noindent$\bullet$ (a) Ablation study on different coding strategies: one-hot code, PCA code, and learnable code. The PCA code consists of the PCA weights generated by the bilinear models\cite{cao2013facewarehouse,yang2020facescape}; the learnable code is initialized from a normal distribution and optimized during the training of our MoFaNeRF model. Our method adopts a learnable expression code and a PCA shape code, so the other two choices for the expression and shape codes are compared. Since the appearance cannot be encoded as a one-hot or PCA code, we only compare with and without the TEM module and the appearance code in the ablation study. The coding strategy and the TEM module are described in Section~\ref{sec:param} and Section~\ref{sec:network}, respectively. \noindent$\bullet$ (b) Ablation study on the morphable NeRF architecture. Previous NeRF variants that support morphable or dynamic objects can be divided into three distinct categories: deformation-based approaches\cite{Tretschk20arxiv_NR-NeRF,park2020deformable,Pumarola20arxiv_D_NeRF}, modulation-based approaches\cite{li2021dynerf,xian2021videoNerf,Li20arxiv_nsff}, and a hybrid of both\cite{park2021hypernerf}. All of these methods were only tested on a single object or a small collection, and our ablation study aims to verify their representation ability on a large-scale face dataset.
We select NR-NeRF\cite{Tretschk20arxiv_NR-NeRF}, Dy-NeRF\cite{li2021dynerf} and Hyper-NeRF\cite{park2021hypernerf} to represent the deformation-based, modulation-based, and hybrid architectures, respectively. \noindent$\bullet$ (c) Ablation study to verify our sampling strategy (Section~\ref{sec:train}). We replace our landmark-based sampling strategy with a uniform sampling strategy in the training phase. \noindent$\bullet$ (d) Ours with no changes. We reconstruct $300$ images of the first $15$ subjects in our training set for evaluation, with random view directions and expressions. The results are reported quantitatively in Table~\ref{tab:ablation} and qualitatively in Figure~\ref{fig:ablation}. \textbf{Discussion.} As reported in Table~\ref{tab:ablation}, comparing (d) to (a.3) and (a.4), we find the PCA code most suitable for encoding shape, as it reflects the shape similarity in the parameter space. Comparing (d) to (a.1) and (a.2), we can see that the learnable code is most suitable for encoding expressions. We think the reason is that there are only $20$ expression categories, which is quite easy for the network to memorize and parse, while the PCA code does not help for so few categories. By comparing items (b.1)--(b.3), we can see that the modulation-based method (Dy-NeRF) shows better representation ability in modeling large-scale morphable faces, which explains why our final model is based on a modulation-based structure. By comparing (b.2) and (d), the positive effect of the ISM module explained in Section~\ref{sec:network} is verified. By comparing (c) and (d), we can see that our sampling method further boosts the performance. Comparing (a.5), (b.2) and (c) to (d), we can see that our proposed TEM, ISM, and landmark-based sampling all have positive effects on the model's representation ability and yield more faithful results in the visual comparison. \subsection{Application Results} \label{sec:Application Result} \noindent\textbf{Image-based fitting.} The fitting results on the testing set and on in-the-wild images are shown in Figure~\ref{fig:fit}. More results, a comparison with single-view reconstruction methods, and failure cases can be found in the supplementary material. \noindent\textbf{Facial rotation.} We fit our model to a single image and rotate the fitted face by rendering it from a side view, as shown in Figure~\ref{fig:compare_gan}. The facial rotation results are compared with Nitzan~{\emph{et al.}}~\cite{Nitzan2020FaceID}, Pi-GAN\cite{piGAN2021} and HeadNeRF\cite{hong2022headnerf}. We can see that our method synthesizes a plausible result even at a large angle (close to $\pm90^\circ$) while the facial appearance and shape are maintained. Nitzan~{\emph{et al.}} and Pi-GAN are GAN-based networks, while HeadNeRF is a parametric NeRF trained with the help of a traditional 3DMM. The results of all three methods contain obvious artifacts when the face is rotated to a large angle. \noindent\textbf{Face rigging and editing.} As shown in Figure~\ref{fig:title} and Figure~\ref{fig:morph}, after the model is fitted or generated, we can rig the face by driving the expression code $\epsilon$, and edit the face by changing the shape code $\beta$ and the appearance code $\alpha$. Please see our results in the video and supplementary materials. \section{Conclusion} \label{sec:con} In this paper, we propose MoFaNeRF, the first facial parametric model based on a neural radiance field.
Different from previous NeRF variants that focus on a single object or a small collection of objects, our model disentangles the shape, appearance, and expression of human faces to make the face morphable in a large-scale solution space. MoFaNeRF can be used in multiple applications and achieves competitive performance compared to SOTA methods. \noindent\textbf{Limitation.} Our model does not explicitly generate 3D shapes and focuses on free-view rendering performance. This prevents it from being directly used in traditional blendshape-based driving and rendering pipelines. Besides, the single-view fitting of MoFaNeRF only works well under relatively diffuse lighting, while the performance degrades under extreme lighting conditions. In future work, we believe that introducing an illumination model into MoFaNeRF will improve generalization and further boost performance. \noindent\textbf{Acknowledgement.} This work was supported by NSFC grants 62025108 and 62001213, and the Tencent Rhino-Bird Joint Research Program. We thank Dr. Yao Yao for his valuable suggestions and Dr. Yuanxun Lu for proofreading the paper. \section{Supplementary Materials} \subsection{Overview} The supplementary material contains a video available at \url{https://neverstopzyy.github.io/mofanerf} and additional descriptions. The video shows a brief overview of our method and animations of rigging and editing. The additional content contains more results of image-based fitting (Section~\ref{sec:more_fitting}), failure cases (Section~\ref{sec:fitting_failure}), a comparison with previous reconstruction-based view synthesis (Section~\ref{sec:fit_compare}), randomly generated faces (Section~\ref{sec:rand_gen}), the network architecture (Section~\ref{sec:net_param}), details about the training (Section~\ref{sec:train_detail}), and the source of the face images used (Section~\ref{sec:data_source}). \subsection{Animation of Rigging and Editing} After the face is fitted or generated, it can be rigged by driving the expression code $\epsilon$, and edited by changing the shape code $\beta$ and the appearance code $\alpha$. The animations of the rigging and editing results are shown in the supplementary video (Parts 3 and 4). We can see that the face can be driven by an expression code extracted from an RGB video, and that the face can morph smoothly in the dimensions of shape, appearance and expression. \subsection{More Results of Image-Based Fitting} \label{sec:more_fitting} We show more faces fitted to a single image in Figure~\ref{fig:more_fit1} and Figure~\ref{fig:more_fit2}, which extend Figure 7 of the main paper. \begin{figure*} \centering \includegraphics[width=1.0\linewidth]{figs/fig_more_fit.pdf} \vspace{-0.2in} \caption{More fitting results of MoFaNeRF to a single-view image, based on the FaceScape model. } \label{fig:more_fit1} \vspace{-0.1in} \end{figure*} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figs/fig_failure.pdf} \vspace{-0.3in} \caption{Failure cases of fitting MoFaNeRF to a single image.} \label{fig:failure} \vspace{-0.1in} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figs/fig_more_fit_headspace.pdf} \vspace{-0.2in} \caption{More fitting results of MoFaNeRF to a single-view image, based on the HeadSpace model. } \label{fig:more_fit2} \vspace{-0.1in} \end{figure*} \subsection{Failure Cases in Fitting} \label{sec:fitting_failure} We show some failure cases of the image-based fitting in Figure~\ref{fig:failure}.
The first column on the left shows that the fitting results are poor under extreme lighting. As our model is trained on images with relatively diffuse lighting, large areas of shadow interfere with the fitting. Lighting models may be introduced in future work to improve the generalization ability under complex lighting conditions. The second column from the left shows that the fitting may fail for faces with large occluded regions. Fitting MoFaNeRF to an occluded face remains a challenging open problem. The third column from the left shows that the fitting results degrade for faces that differ considerably from the FaceScape dataset in shape (top) or skin color (bottom). The generalization of image-based fitting still needs to be improved. \begin{figure*} \centering \includegraphics[width=1.0\linewidth]{figs/fig_single_compare.pdf} \vspace{-0.1in} \caption{We compare our fitting and rendering results with SOTA single-view reconstruction (SVR) methods. In the red circles, we can see that the inaccurately predicted shape of the nose leads to artifacts in the side view. 3DDFAv2, FaceScape-fit, and DECA commonly contain artifacts on the cheeks due to the misalignment of the predicted shape and the source image. Besides, all four methods cannot align the ears well, so no texture is assigned to the ears. } \label{fig:vs_single} \vspace{-0.1in} \end{figure*} \subsection{Fitting vs. Single-View Reconstruction} \label{sec:fit_compare} As shown in Figure~\ref{fig:vs_single}, we compare our method with four state-of-the-art Single-View Reconstruction (SVR) methods\cite{yang2020facescape,shang2020self,guo2020towards,feng2021learning} in rendering performance. These methods take a single-view image as input and predict a textured mesh. The images are rendered from the predicted meshes in the frontal view and in $\pm60^{\circ}$ side views. Please note that FaceScape-fit\cite{yang2020facescape}, 3DDFAv2\cite{guo2020towards} and DECA\cite{feng2021learning} reconstruct the full head; however, their textures come from the source image and only facial textures are assigned. Therefore, we only render the textured regions for these three methods. We can see that the inaccurately predicted shapes of FaceScape-fit\cite{yang2020facescape}, 3DDFA-v2\cite{guo2020towards} and DECA\cite{feng2021learning} lead to artifacts in the side views, as shown in the red dotted circles. Besides, these methods commonly exhibit spurious scratches on the cheeks due to the misalignment of the predicted shape and the source image. Though MGCNet\cite{shang2020self} does not suffer from the scratch problem, its texture tends to be a mean texture with less detail. We can also observe that in some cases the shape of the nose is unfaithful. Besides, all four methods cannot align the ears well, so no texture is assigned to the ears. By contrast, our rendering results contain fewer artifacts and are more plausible in the side views. The ears are also rendered by our method, which makes the side-view rendering complete. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figs/fig_more_gen.pdf} \vspace{-0.3in} \caption{More randomly generated results from our model. We visualize them in three views with yaw angles of $-60^{\circ}$, $0^{\circ}$ and $60^{\circ}$ and a pitch angle of $0^{\circ}$. } \label{fig:more_gen} \vspace{-0.1in} \end{figure*} \subsection{Results of Random Generation} \label{sec:rand_gen} Some randomly generated faces are shown in Figure~\ref{fig:more_gen}.
We can see that our model covers a wide range of facial shapes, appearances and expressions. \subsection{Disentanglement between Appearance and Geometry} \label{sec:disen_app_and_geo} In addition to Fig.~3 of the main paper, we provide a supplementary experiment in Fig.~\ref{fig:disen_app_and_geo}, in which we extract the shape from the radiance field and report the Chamfer distance to quantitatively measure shape consistency. The figure and the quantitative results show that the extracted shape changes only when the shape code is changed, which indicates an effective disentanglement of shape from appearance and expression. \begin{figure}[h] \centering \includegraphics[width=1.0\linewidth]{figs/fig_disen_app_geo.pdf} \vspace{-0.3in} \caption{Extracted shapes for verifying disentanglement.} \label{fig:disen_app_and_geo} \vspace{-0.1in} \end{figure} \subsection{Parameters of the Network} \label{sec:net_param} The parameters of our network are shown in Figure~\ref{fig:network_par}. The boxes represent fully connected layers, where the numbers denote the numbers of neurons. The circles with a C inside represent the concatenation of tensors. The number following a parameter denotes its dimension. $\alpha$, $\beta$, $\epsilon$ are the parameters of appearance, shape and expression. $\gamma(\textbf{x})$ and $\gamma(\textbf{d})$ denote the positional encodings of the position $\textbf{x}$ and the viewing direction $\textbf{d}$. $\sigma$ and $\textbf{c}$ are the density and color that form the radiance field. The parameters of the TEM module are shown in Table~\ref{tab:tem}. \begin{figure}[t] \centering \includegraphics[width=0.75\linewidth, scale=0.8]{figs/fig_network_par.pdf} \vspace{-0.2in} \caption{The detailed parameters of the network in MoFaNeRF. The numbers in brackets indicate the lengths of the tensors.} \label{fig:network_par} \end{figure} \begin{table}[t] \centering\caption{The detailed parameters of the TEM in MoFaNeRF. All convolution layers and linear layers are followed by a Leaky ReLU\cite{maas2013rectifier} with negative slopes of 0.2 and 0.1 respectively, except for the layers ``mu'' and ``logstd''. `Repara.' means the reparameterization method\cite{kingma2013auto,wang2021learning} used to produce a latent code from the distribution $\mathcal{N}(\mathbf{\mu},\mathbf{\sigma})$. $k$: kernel size ($k \times k$). $s$: stride in both horizontal and vertical directions. $p$: padding size ($p \times p$). $c$: number of output channels. $d$: output spatial dimension ($d \times d$). `Conv': convolution layer. `Linear': fully connected layer. `Flatten': flatten layer.} \begin{tabular}{cccccc}\toprule Name & Type & Input & $(k,s,p)$ & $c$ & $d$ \\ \midrule conv1 & Conv & textureMap & (4,2,1) & 32 & 256 \\ conv2 & Conv & conv1 & (4,2,1) & 32 & 128 \\ conv3 & Conv & conv2 & (4,2,1) & 32 & 64 \\ conv4 & Conv & conv3 & (4,2,1) & 32 & 32 \\ conv5 & Conv & conv4 & (4,2,1) & 64 & 16 \\ conv6 & Conv & conv5 & (4,2,1) & 128 & 8 \\ conv7 & Conv & conv6 & (4,2,1) & 256 & 4 \\ flat0 & Flatten & conv7 &\ \,--$^*$ & 4096 & 1 \\ line1 & Linear & flat0 &-- & 512 & 1 \\ mu & Linear & line1 &-- & 256 & 1 \\ logstd & Linear & line1 &-- & 256 & 1 \\ para. & Repara. & (mu,logstd) &-- & 256 & 1 \\ line2 & Linear & para. &-- & 256 & 1 \\ line3 & Linear & line2 &-- & 256 & 1 \\ app.
& Linear & line3 &-- & 256 & 1 \\\bottomrule \end{tabular} \leftline{ \small{* `--' means the entry is not applicable.}} \label{tab:tem} \end{table} \subsection{Details of Training and Data} \label{sec:train_detail} \noindent\textbf{Implementation details.} Our model is implemented in PyTorch~\cite{PyTorch}. In our experiments, $1024$ rays are sampled per iteration, each with $64$ sampled points in the coarse volume and an additional $64$ in the fine volume. The strategy of hierarchical volume sampling is used to optimize the coarse and fine volumes simultaneously, as defined in NeRF\cite{mildenhall2020nerf}. The resolution of the images rendered by MoFaNeRF is $256\times256$; the RefineNet takes the image rescaled to $512\times512$ as input and synthesizes the final image at $512\times512$. We use the Adam optimizer\cite{kingma2015adam} with an initial learning rate of $5\times{10^{-4}}$, decayed exponentially to $2\times{10^{-5}}$, and with $\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=10^{-7}$. Our model is trained for roughly $400k$ iterations, which takes about 2 days on dual NVIDIA RTX 3090 GPUs. \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figs/fig_rend_view.pdf} \vspace{-0.3in} \caption{Multi-view images generated from the FaceScape model in $120$ views, with $6$ pitch angles in $[-30^\circ,+45^\circ]$ and $20$ yaw angles in $[-90^\circ,+90^\circ]$. } \label{fig:rend_view1} \vspace{-0.1in} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1.0\linewidth]{figs/fig_rend_view_headspace.pdf} \vspace{-0.3in} \caption{Multi-view images generated from the HeadSpace model in $120$ views, with $6$ pitch angles in $[-30^\circ,+45^\circ]$ and $20$ yaw angles in $[-90^\circ,+90^\circ]$. } \label{fig:rend_view2} \vspace{-0.1in} \end{figure*} \noindent\textbf{Details about data preparation.} Taking the face orientation as the reference direction, we evenly select $6$ pitch angles from $-30^\circ$ to $+45^\circ$ and $20$ yaw angles from $-90^\circ$ to $+90^\circ$ for rendering from $120$ viewpoints. Samples of the $120$ views are shown in Figure~\ref{fig:rend_view1} and Figure~\ref{fig:rend_view2}. We use all the training data to train the network of MoFaNeRF, and randomly select $24,000$ rendering results to train the RefineNet. Initially, we planned to use the raw scanned multi-view images released by FaceScape~\cite{yang2020facescape,zhu2021facescape}; however, we found that the camera locations are not uniform across all $7180$ tuples of images. We contacted the authors of FaceScape and learned that the reason is that the capturing took place at two sites, where the camera setups and parameters were changed several times. Therefore, we use the multi-view images to color the raw scanned models, and then render them from the viewpoints described above to obtain high-fidelity multi-view images with uniform camera parameters. \subsection{Face Image Source} \label{sec:data_source} To avoid portrait-rights infringement, we use synthesized `in-the-wild' face images for testing our model. The source images in Figures~{\color{red}1, 4, 9, 10} are synthesized by StyleGANv2\cite{karras2020analyzing}, which is released under the Nvidia source code license, so the use of these virtual portraits does not raise infringement issues.
We have signed the license agreement with the authors of FaceScape~\cite{yang2020facescape,zhu2021facescape} to obtain permission to use the dataset for non-commercial research purposes, and permission to publish subjects $12$, $17$, $40$, $49$, $57$, $92$, $97$, $168$, $211$, $212$, $215$, $234$, $260$, $271$ and $326$ in Figures~{\color{red}2, 3, 5, 6, 7, 8} of this paper.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In the present paper, we study the existence and uniqueness of solutions to the following initial-value problem: \begin{align*} \tag{KS+N} \label{eq:KSN} \begin{dcases} \begin{aligned} i\displaystyle\frac{\partial \bm{\psi}}{\partial t}&=\widehat{H}[\mathbf{x},\rho]\bm{\psi},\\ \displaystyle\frac{\textrm{d}^2\mathbf{x}}{\textrm{d}t^2}&= \mathbf{a}[\rho](\mathbf{x}),&\\ \bm{\psi}(0)&=\bm{\psi}^0,\quad \mathbf{x}(0)=\mathbf{x}^0,\quad \dot{\mathbf{x}}(0)=\mathbf{v}^0, \end{aligned} \end{dcases} \end{align*} for given $\bm{\psi}^0\in\left(H^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N$, $\mathbf{v}^0\in\left(\mathbb{R}^3\right)^M$ and $\mathbf{x}^0\in\left(\mathbb{R}^3\right)^M$ with $\mathbf{x}^0_k\neq \mathbf{x}^0_\ell$ for $k\neq \ell$. Here, we use the short-hand notation $\bm{\psi}=\left(\psi_1,\ldots,\psi_N\right)$, with \begin{align*} \psi_j=\psi_j(\mathbf{r},t),\qquad \mathbf{r}\in\mathbb{R}^3,t\geq 0,\qquad j\in\{1,\ldots,N\}, \end{align*} and $\mathbf{x}=\left(\mathbf{x}_1,\ldots,\mathbf{x}_M\right)\in\left(\mathbb{R}^3\right)^M$, with \begin{align*} \mathbf{x}_k=\mathbf{x}_k(t),\qquad t\geq 0,\qquad k\in\{1,\ldots,M\}. \end{align*} Throughout the paper, we consider elements $\bm{\psi}(t)\in\left(H^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N$ for $t\geq 0$, and set \begin{align*} \rho:=\sum_{j=1}^N\left|\psi_j\right|^2. \end{align*} In particular, we study the dynamics of these elements driven by the Hamiltonian operator $\widehat{H}[\mathbf{x},\rho]:\left(H^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N\longrightarrow\left(L^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N$, defined for any given element $\mathbf{x}\in\left(\mathbb{R}^3\right)^M$, in Hartree atomic units, as \begin{align} \left(\widehat{H}[\mathbf{x},\rho]\bm{\psi}\right)_j:=-\frac{1}{2}\Delta\psi_j-\sum_{k=1}^M\frac{z_k}{\left|\cdot-\textbf{x}_k\right|}\psi_j+\left(\frac{1}{\left|\,\cdot\,\right|}\star\rho\right)\psi_j + \lambda\rho^{q-1}\psi_j,\qquad j\in\{1,\ldots,N\}, \label{eq:KSHam} \end{align} with $\lambda\in\mathbb{R}$, $q>1$ and $\bm{z}=\left(z_1,\ldots,z_M\right)\in\mathbb{N}^M$. The interested reader may be referred to \cite{Perdew2003DensityCentury} for the physical interpretation. Further, we set $\bm{m}=\left(m_1,\ldots,m_M\right)\in\left(\mathbb{R}^+\right)^M$. The dynamics of the elements $\mathbf{x}(t)\in\left(\mathbb{R}^3\right)^M$ is driven by the acceleration function $\mathbf{a}=\mathbf{a}^1+\mathbf{a}^2$, where the components are defined as \begin{align*} \mathbf{a}^1_k[\rho]\left(\mathbf{x}\right)&:=\frac{z_k}{m_k}\int_{\mathbb{R}^3}\frac{\mathbf{r}-\mathbf{x}_k}{\left|\mathbf{r}-\mathbf{x}_k\right|^3}\rho(\mathbf{r})\mathrm{d}^3\mathbf{r},\\ \mathbf{a}^2_k(\mathbf{x})&:=\frac{z_k}{2m_k}\sum_{\subalign{\ell&=1,\\\ell&\neq k}}^Mz_\ell\frac{\mathbf{x}_k-\mathbf{x}_\ell}{\left|\mathbf{x}_k-\mathbf{x}_\ell\right|^3}. \end{align*} The main result of our paper is the following. \begin{theorem} \label{thm:shorttimeexistence} Let $q\geq 7/2$ and $\lambda\in\mathbb{R}$. Then, there exists $\tau>0$ such that the system $\mrm{\eqref{eq:KSN}}${} has a unique solution $(\bm{\psi},\mathbf{x})\in X(\tau),$ where \begin{align*} X(\tau ):=\left[C^1\left([0,\tau];\left(L^2(\mathbb R^3;\mathbb{C})\right)^N\right)\cap C^0\left([0,\tau];\left(H^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N\right)\right]\times C^2\left([0,\tau];\left(\mathbb{R}^{3}\right)^{M}\right).
\end{align*} \end{theorem} Typically, Cauchy problems such as \eqref{eq:KSN} describe the non-adiabatic dynamics of molecular, spin-unpolarised systems involving an even number of $N\in 2\mathbb{N}$ electrons and $M\in\mathbb{N}$ nuclei with masses $\bm{m}$ and charges $\bm{z}$. The interested reader can consult e.g. \cite{Hohenberg1964InhomogeneousGas,Lieb1983DensitySystems,Anantharaman2009ExistenceChemistry,Perdew2003DensityCentury,Cohen2012ChallengesTheory,Kohn1965Self-ConsistentEffects,Ullrich2012Time-DependentApplications}, which form a sample of the extensive body of literature on both physical and mathematical aspects of the so-called Density-Functional Theory (DFT), which comprises the framework of the Time-Dependent Kohn--Sham (TDKS) equations, given in the first line of \eqref{eq:KSN}. We stress that the Cauchy problem associated with these equations, namely \begin{align*} \tag{KS}\label{eq:KS} \begin{dcases} \begin{aligned} i\dot{\bm{\psi}}&=\widehat{H}[\mathbf{x},\rho]\bm{\psi},\\ \bm{\psi}(0)&=\bm{\psi}^0, \end{aligned} \end{dcases} \end{align*} describes the electronic evolution in terms of the single-particle wave functions $\psi_j$, $j\in\{1,\ldots,N\}$, known in the physical literature as the Kohn--Sham (KS) orbitals. The TDKS equations have been extensively considered as an approximation to the time-dependent Schr\"odinger equation, which reduces the electronic dynamics to a single-particle description based on the KS density function $\rho$. For the reader's convenience, we briefly recall the physical interpretation of each potential in the KS Hamiltonian $\widehat{H}$ from \eqref{eq:KSHam}, which we can write as \begin{align*} \widehat{H}[\mathbf{x},\rho]=-\frac{1}{2}\Delta+v[\mathbf{x}]+v_\textsc{hxc}[\rho],\qquad v_{\textsc{hxc}}:=v_\textsc{h}+v_{\textsc{x}}+v_{\textsc{c}}. \end{align*} The electrostatic potential \begin{align*} v[\mathbf{x}](\textbf{r})&:=-\sum_{k=1}^M\frac{z_k}{\left|\textbf{r}-\textbf{x}_k\right|} \end{align*} is an external potential, generated by the nuclei, which corresponds to the Coulombic nucleus-electron interactions. The Hartree potential \begin{align*} v_\textsc{h}[\rho]:=\left|\cdot\right|^{-1}\star\rho \end{align*} corresponds to the Coulombic internal electronic interactions. The remaining term, the exchange-correlation potential $v_\textsc{x}+v_\textsc{c}$, is not explicitly known: in the Local Density Approximation (LDA), introduced by Kohn \&{} Sham in \cite{Kohn1965Self-ConsistentEffects}, the exchange potential $v_\textsc{x}$ is derived from the homogeneous electron gas approximation \cite{Parr1989Density-FunctionalMolecules}. In this paper, we study a generalisation of this exchange potential to the form \begin{align*} v_{\textsc{x}}[\rho]:=\lambda\rho^{q-1}, \end{align*} for generic parameters $\lambda\in\mathbb{R}$, $q>1$. In the present paper, we set the so-called correlation potential \begin{align*} v_{\textsc{c}}\equiv 0, \end{align*} and write accordingly $v_{\textsc{hxc}}=v_{\textsc{hx}}$. The interested reader may consult e.g. \cite{Jerome2015TimeSolutions,Anantharaman2009ExistenceChemistry} and references therein, where the case $v_{\textsc{c}}\not\equiv 0$ is considered. 
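For the reader's orientation, we recall (as a standard fact from the LDA literature \cite{Kohn1965Self-ConsistentEffects,Parr1989Density-FunctionalMolecules}, not a result of the present paper) that the homogeneous electron gas yields, in Hartree atomic units, the Dirac--Slater exchange, which corresponds in the notation above to the parameter choice
\begin{align*}
q=\frac{4}{3},\qquad \lambda=-\left(\frac{3}{\pi}\right)^{1/3},\qquad v_{\textsc{x}}[\rho]=-\left(\frac{3}{\pi}\right)^{1/3}\rho^{1/3}.
\end{align*}
Note that this physically motivated exponent lies outside the range $q\geq 7/2$ covered by Theorem \ref{thm:shorttimeexistence}; we return to this point in the list of related questions below.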
The coupling of \eqref{eq:KS} with the Cauchy problem associated with the system of equations in the second line of \eqref{eq:KSN}, \begin{align*} \tag{N}\label{eq:N} &\begin{dcases} \begin{aligned} \ddot{\mathbf{x}}&= \mathbf{a}[\rho](\mathbf{x}),&\\ \mathbf{x}(0)&=\mathbf{x}^0,\quad \dot{\mathbf{x}}(0)=\mathbf{v}^0, \end{aligned} \end{dcases} \end{align*} describing the nuclear dynamics, reflects the so-called mean-field Ehrenfest dynamics approach, which is a non-adiabatic mixed quantum-classical dynamics method: see e.g. \cite{Tully1998MixedDynamics}, \cite[\S 2.3]{Micha2007QuantumSystems}, \cite[\S V.]{Agostini2013MixedProcesses} and \cite[\S 2.1]{Crespo-Otero2018RecentDynamics}. Following Newton's law of motion, for all $k\in\{1,\ldots,M\}$ it holds that \begin{align*} m_k\mathbf{a}_k[\rho]\left(\mathbf{x}\right)=-\nabla_{\textbf{x}_k}W[\rho](\mathbf{x}), \end{align*} where \begin{align*} W[\rho](\mathbf{x}):=\left\langle v[\mathbf{x}],\rho\right\rangle_{L^2\left(\mathbb{R}^3;\mathbb{C}\right)}+\frac{1}{2}\sum_{\subalign{k&,\ell=1,\\&k\neq \ell}}^M\frac{z_k z_\ell}{\left|\mathbf{x}_k-\mathbf{x}_\ell\right|} \end{align*} describes the interaction of the electrons with the external potential and the Coulombic internal nuclear interactions. The total energy $E$ associated with the system \eqref{eq:KSN} is given by \begin{align*} E[\mathbf{x},\rho]:=T[\mathbf{x},\rho]+W[\rho](\mathbf{x})+U[\rho]+E_\textsc{x}[\rho], \end{align*} where \begin{align*} T[\mathbf{x},\rho]:=\frac{1}{2}\left[\sum_{k=1}^Mm_k\left|\dot{\mathbf{x}}_k\right|^2+\sum_{j=1}^N\left\|\nabla\psi_j\right\|^2_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}\right] \end{align*} contains the kinetic energies of the systems (\ref{eq:N}, resp. \ref{eq:KS}). This is a functional of the density, since the external potential $v$ and hence the KS orbitals can be seen as functionals of the density: see \cite{Perdew2003DensityCentury}. The other terms are potential energies: \begin{align*} U[\rho]:=\frac{1}{2}\int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{\rho(\mathbf{r})\rho\left(\mathbf{r}'\right)}{\left|\mathbf{r}-\mathbf{r}'\right|}\mathrm{d}^3\mathbf{r}\mathrm{d}^3\mathbf{r}' \end{align*} is the Hartree electrostatic self-repulsion of the KS electron density, and \begin{align*} E_\textsc{x}[\rho]:=\frac{\lambda}{q}\left\|\rho\right\|^q_{L^q\left(\mathbb{R}^3;\mathbb{R}\right)} \end{align*} denotes the exchange energy, whose functional derivative yields the exchange potential $v_\textsc{x}$. It is standard to see that the total energy $E$ as well as $\left\|\rho\right\|_{L^1\left(\mathbb{R}^3;\mathbb{R}\right)}$ are conserved quantities.\newline It is worth pointing out that well-posedness results are relevant in relation to the stability and validity of the various computational methods which are used to approximate solutions. Roughly speaking, numerical solutions may exhibit unexpected behaviour, which may be due to the ill-posedness of the equations. For an overview of mathematical methods used in quantum chemistry, we refer the reader to \cite{Cances2005MathematiquesIntroduction}.\newline It is impossible to do justice to the extensive literature on mathematical and physical aspects related to similar models and systems, in both time-dependent and time-independent settings: see \cite{Lieb1983DensitySystems,Perdew1981Self-interactionSystems,Perdew2003DensityCentury,LeBris2005FromJourney,Ullrich2012Time-DependentApplications} and the references therein. 
We therefore restrict ourselves to a brief review of the contributions we believe are the closest to our work. Canc\`es \&{} Le Bris \cite{Cances1999OnDynamics} have considered similar electronic evolution equations coupled with classical nuclear dynamics consistent with the mean-field Ehrenfest approach. More precisely, they studied a system involving the Hartree--Fock equations: \begin{align*} \tag{HF+N}\label{eq:HFN} \begin{dcases} \begin{aligned} i\dot{\bm{\phi}}&=\widehat{H}^{\textsc{hf}}[\mathbf{x},\bm{\phi}]\bm{\phi},\\ \ddot{\mathbf{x}}&=\mathbf{a}[\rho](\mathbf{x}),\\ \bm{\phi}(0)&=\bm{\phi}^0,\quad \mathbf{x}(0)=\mathbf{x}^0,\quad\dot{\mathbf{x}}(0)=\mathbf{v}^0, \end{aligned} \end{dcases} \end{align*} where $\rho=\displaystyle\sum_{j=1}^N\left|\phi_j\right|^2$, the Hartree--Fock Hamiltonian is defined as \begin{align*} \widehat{H}^{\textsc{hf}}[\mathbf{x},\bm{\phi}]:=-\frac{1}{2}\Delta+v[\mathbf{x}]+v_\textsc{h}[\rho]+v_{\textsc{x}}^{\textsc{hf}}[\bm{\phi}], \end{align*} and \begin{align*} \left(v_{\textsc{x}}^{\textsc{hf}}[\bm{\phi}]\bm{\phi}\right)_j&:=-\sum_{i=1}^N\left(\overline{\phi_i}\phi_j\star\left|\cdot\right|^{-1}\right)\phi_i,\qquad j\in\{1,\ldots,N\} \end{align*} is known as the Hartree--Fock exchange potential. Here, $\phi_j$, $j\in\{1,\ldots,N\}$, denote single-particle wave functions. In \cite{Cances1999OnDynamics}, the result of global-in-time existence and uniqueness of solutions to \eqref{eq:HFN} in $\left(H^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N$ is mainly based on the celebrated result by Yajima \cite{Yajima1987ExistenceEquations} on the existence of propagators associated with linear, time-dependent Hamiltonians. The proof in \cite{Cances1999OnDynamics} consists of two main steps: a fixed-point argument to show existence of short-time solutions, based on Lipschitz estimates in $\left(H^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N,$ and a Gr\"onwall-type argument which relies on energy conservation, conservation of the $\left(L^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N$ norm of $\bm{\phi}$, and linear estimates of the solutions $\bm{\phi}$ in the $\left(H^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N$ norm. To the best of our knowledge, since the paper by Canc\`es \&{} Le Bris \cite{Cances1999OnDynamics}, only a few contributions deal with the coupling of a system describing electronic evolution with nuclear dynamics: this is the case, for instance, for \cite{Baudouin2005ExistenceDynamics}, where other existence and regularity questions have been studied. Considerable attention has also been devoted to Schr\"odinger--Poisson-type equations, which include the Hartree--Fock and the TDKS equations: see for instance \cite{Mauser2001TheEquation,Catto2013ExistencePrinciple,Bokanowski2003OnSystem,Zagatti1992TheEquations,Chadam1975GlobalEquations,Castella1997L2Effects,Anantharaman2009ExistenceChemistry,Jerome2015TimeSolutions}. We also mention \cite{Sprengel2017AEquations}, where existence, uniqueness, and regularity questions are investigated for TDKS equations set on bounded space domains, in relation to control problems.
None of the contributions listed above considers the nuclear and the electronic dynamics coupled with each other as described by our system.\\ The paper is organised as follows.\\ In \S\ref{sec:preplemmas}, we recall the relevant results from Yajima \cite{Yajima1987ExistenceEquations} on the construction and properties of a family of propagators \begin{align*} U(t,s):\left(H^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N\longrightarrow\left(L^2\left(\mathbb{R}^3;\mathbb{C}\right)\right)^N,\qquad t,s\in[0,\Theta], \end{align*} associated with the linear parts of the KS Hamiltonians $\widehat{H}\left[\mathbf{x}(t),\rho\right]$ for $t\in[0,\Theta]$, with $0<\Theta<\infty$, and some useful results from Canc\`es \& Le Bris \cite{Cances1999OnDynamics} on bounds for the operator norms of these propagators. In the same section, in view of a Duhamel-type argument developed in later sections, we also state and prove some useful Lipschitz estimates on the non-linear mapping \begin{align*} \bm{\psi}\longmapsto v_{\textsc{hx}}\left[\sum_{j=1}^N\left|\psi_j\right|^2\right]\bm{\psi}. \end{align*} The restriction $q\geq 7/2$ arises from these estimates.\\ In \S{}\ref{sec:locex}, we prove existence of a solution $(\bm{\psi},\mathbf{x})$ to \eqref{eq:KSN} on an interval $[0,\tau]$ in $X(\tau)$, for $q\geq 7/2$ and some $\tau>0$. To this end, we first define bounded regions $\mathcal{B}_{\mathrm{el}}\left(\tau \right)$ and $\mathcal{B}_{\mathrm{nuc}}\left(\tau\right)$, designed to seek solutions to (\ref{eq:KS}, resp. \ref{eq:N}) on $[0,\tau]$, and the mappings \begin{align*} \mathcal{N}:\mathcal{B}_{\mathrm{el}}\left(\tau\right)\longrightarrow \mathcal{B}_{\mathrm{nuc}}\left(\tau\right)\cap C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right),\qquad \mathcal{E}:\mathcal{B}_{\mathrm{nuc}}\left(\tau\right)\longrightarrow\mathcal{B}_{\mathrm{el}}\left(\tau\right), \end{align*} which connect these solutions. Next, we prove that for some $\tau>0$ and any fixed $\bm{\psi}\in\mathcal{B}_{\mrm{el}}(\tau) $, the Cauchy problem \eqref{eq:N} has a unique solution $\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) \cap C^2\left([0,\tau];\overline{B_\delta\left(\mathbf{x}^0\right)}\right)$, and the mapping $\mathcal{N}[\bm{\psi}]=\mathbf{x}$ is bounded and continuous. We construct these solutions as fixed points of the mapping \begin{align*} \left(\mathcal{T}\left[\mathbf{x}\right]\right)(t)=\mathbf{x}^0+\mathbf{v}^0t+\int_0^t(t-\sigma)\mathbf{a}(\sigma,\mathbf{x}(\sigma))\mathrm{d} \sigma. \end{align*} Here, we drop the dependence on $\rho$ in $\mathbf{a}$, since $\bm{\psi}$ is a given fixed element in $\mathcal{B}_{\mrm{el}}(\tau) $. Further, we prove that for $q\geq 7/2$, some $\tau>0$ and any fixed $\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) $, the Cauchy problem \eqref{eq:KS} has a unique solution $\bm{\psi}\in\mathcal{B}_{\mrm{el}}(\tau) $, and the mapping $\mathcal{E}[\mathbf{x}]=\bm{\psi}$ is bounded and continuous. Again, solutions are constructed as fixed points of the mapping \begin{align*} \left(\mathcal{F}[\bm{\psi}]\right)(t)=U(t,0)\bm{\psi}^0-i\int_0^tU(t,\sigma)v_{\textsc{hx}}[\rho]\bm{\psi}(\sigma)\mathrm{d}\sigma. \end{align*} Using results from \S{}\ref{sec:preplemmas} and Yajima \cite{Yajima1987ExistenceEquations}, we show that fixed points of this mapping are strong solutions to \eqref{eq:KS}. After this, we prove that for $q\geq 7/2$ and some $\tau>0$, the Cauchy problem \eqref{eq:KSN} has a coupled solution $\left(\bm{\psi},\mathbf{x}\right)$ in $X(\tau)$.
To this end, we construct the composite mapping \begin{align*} \mathcal{K}:\mathcal{B}_{\mrm{nuc}}(\tau) \longrightarrow \mathcal{B}_{\mrm{nuc}}(\tau) ,\qquad \mathcal{K}=\mathcal{I}\circ \mathcal{N}\circ\mathcal{E}, \end{align*} using the injection $\mathcal{I}:\mathcal{B}_{\mrm{nuc}}(\tau) \cap C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)\varlonghookrightarrow\mathcal{B}_{\mrm{nuc}}(\tau) $, and we apply a Schauder-type argument to $\mathcal{K}$, combining the previous results and the classical Arzel\`a--Ascoli compactness principle.\\ \S\ref{sec:locun} is devoted to uniqueness. \subsection*{Related questions} We list below some questions which we believe are natural to explore in future projects. \begin{itemize} \item We note that Theorem \ref{thm:shorttimeexistence} can be generalised to LDA-type non-linearities which are sufficiently smooth at the origin $\rho=0$, and enjoy $H^2$-Lipschitz estimates like those obtained in the present paper. This is the case, for instance, for $\lambda_1 \rho^{q_1-1}-\lambda_2 \rho^{q_2-1}$ with $q_1,q_2\geq7/2$ and $\lambda_1,\lambda_2>0,$ which share a structure similar to the non-linearities involved in various well-known models in quantum mechanics, such as the Thomas--Fermi--Dirac--Von Weizs\"acker model \cite{Lieb1983DensitySystems}. For this particular example, working in the same functional setting, it would be interesting to explore, for certain ranges of exponents, the occurrence of either finite-time blow-up in the norm of the solutions or the existence of maximal solutions defined for all $t\geq0$: see \cite{Cazenave1998AnEquations,Cazenave2003SemilinearEquations}. \item It would be interesting to identify a functional setting, and possibly a different proof (the most natural setting would certainly be $H^1$), in order to capture the physically remarkable exponent $q=4/3,$ which remains uncovered in the present work. We wonder if a suitable regularisation `at the origin' of the LDA term for $q=4/3$ would allow one to cover this case as a result of a limit process. \end{itemize} \section{Notation} Throughout the paper, we make use of the modified Vinogradov notation: \begin{itemize} \item We use \begin{align*} X\lesssim{} Y \end{align*} for $X,Y\in\mathbb{R}$ to denote $\left|X\right|\leq CY$, where $0<C<\infty$. \item We use \begin{align*} X\lesssim_{\beta_1,\beta_2,\ldots} Y \end{align*} for $X,Y\in\mathbb{R}$ to denote dependence on parameters $\beta_1,\beta_2,\ldots$: so, $\left|X\right|\leq C_{\beta_1,\beta_2,\ldots}Y$ with $0<C_{\beta_1,\beta_2,\ldots}<\infty$. Note that \begin{align*} X\lesssim_{\beta_1,\ldots} Y\lesssim_{\beta_2,\ldots} Z \Longrightarrow X\lesssim_{\beta_1,\beta_2,\ldots} Z \end{align*} for $X,Y,Z\in\mathbb{R}$. \end{itemize} Further, we make use of the following normed spaces: \begin{itemize} \item We use $\mathbf{y}\in\left(\mathbb{C}^n\right)^m$ to denote $\mathbf{y}=\left(\mathbf{y}_1,\ldots,\mathbf{y}_m\right)$, $\mathbf{y}_k\in\mathbb{C}^n$, with \begin{align*} \left|\mathbf{y}\right|^2=\sum_{k=1}^m\left|\mathbf{y}_k\right|^2. \end{align*} \item We use the short-hand notation $L^p(\mathbb{R}^3;\mathbb{C})=L^p$, $p\in[1,\infty]$, for Lebesgue spaces, with \begin{itemize} \item the short-hand $\langle\,\cdot,\cdot\,\rangle$ for the inner product on $L^2$, and \item the norm \begin{align*} \left\|\bm{\phi}\right\|_{\left(L^p\right)^m}:=\left\|\,\left|\bm{\phi}\right|\,\right\|_{L^p} \end{align*} on $\left(L^p\right)^m$.
\end{itemize} \item We use the short-hand notation $L^{p,r}(\mathbb{R}^3;\mathbb{C})=L^{p,r}$, $p\in[1,\infty)$, $r\in[1,\infty]$, for Lorentz spaces, with \begin{itemize} \item the radial decreasing rearrangement \begin{align*} \phi^*(t):=\inf\left\{A>0\middle\rvert \left|\left\{\mathbf{y}\in\mathbb{R}^3\middle|\left|\phi(\mathbf{y})\right|>A\right\}\right|\leq t\right\}, \end{align*} \item the quasinorm \begin{align*} \left\|\phi\right\|_{L^{p,r}}^r:=\int_0^\infty\left|t^{1/p}\phi^*(t)\right|^r t^{-1}\mathrm{d}t \end{align*} on $L^{p,r}$, $r<\infty$, \item the quasinorm \begin{align*} \left\|\phi\right\|_{L^{p,r}}:=\sup_{t>0}\left\{\left|t^{1/p}\phi^*(t)\right|\right\} \end{align*} on the weak Lebesgue space $L^{p,r}$, $r=\infty$, and \item the quasinorm \begin{align*} \left\|\bm{\phi}\right\|_{\left(L^{p,r}\right)^m}:=\sum_{j=1}^m\left\|\phi_j\right\|_{L^{p,r}} \end{align*} on $\left(L^{p,r}\right)^m$. \end{itemize} \item We use the short-hand notation $W^{k,p}(\mathbb{R}^3;\mathbb{C})=W^{k,p}$ and $W^{k,2}=H^k$, $k\in\mathbb{N}$, $p\in[1,\infty]$, for classical Sobolev spaces, with \begin{itemize} \item the norm \begin{align*} \left\|\phi\right\|_{H^2}^2:=\left\|\phi\right\|^2_{L^2}+\left\|\Delta \phi\right\|^2_{L^2} \end{align*} on $H^2$, and \item the norm \begin{align*} \left\|\bm{\phi}\right\|^2_{\left(H^{2}\right)^m}:=\sum_{j=1}^m\left\|\phi_j\right\|^2_{H^2} \end{align*} on $\left(H^2\right)^m$. \end{itemize} \end{itemize} Further, we make use of the following inequalities: \begin{itemize} \item H\"older's inequality on Lorentz spaces: \begin{itemize} \item Let $\phi\in L^{p_1,q_1}$, $\chi\in L^{p_2,q_2}$, $p_1,p_2\in(0,\infty)$, $q_1,q_2\in(0,\infty]$. \item Then \begin{align*} \|\phi\chi\|_{L^{r,s}}\lesssim_{p_1,p_2,q_1,q_2}\|\phi\|_{L^{p_1,q_1}}\|\chi\|_{L^{p_2,q_2}} \end{align*} with $r^{-1}:=p_1^{-1}+p_2^{-1}$, $s^{-1}:=q_1^{-1}+q_2^{-1}$. \cite{ONeil1963ConvolutionSpaces} \end{itemize} \item Young's convolution inequality on Lorentz spaces: \begin{itemize} \item Let $\phi\in L^{p_1,q_1},\chi\in L^{p_2,q_2}$, $p_1^{-1}+p_2^{-1}>1$. \item Then \begin{align*} \left\|\phi\star \chi\right\|_{L^{r,s}}\lesssim_{r} \left\|\phi\right\|_{L^{p_1,q_1}}\left\|\chi\right\|_{L^{p_2,q_2}} \end{align*} with $r^{-1}:=p_1^{-1}+p_2^{-1}-1$ and $s\in[1,\infty]$ such that $q_1^{-1}+q_2^{-1}\geq s^{-1}$. \cite[Thm. 2.10.1]{Ziemer1989WeaklyVariation} \end{itemize} \item Hardy's inequality: \begin{align*} \left\rVert \left|\mathbf{r}-\cdot\right|^{-1}\phi\right\rVert_{L^2}\lesssim\left\rVert\nabla\phi\right\rVert_{L^2(\mathbb{R}^3;\mathbb{C}^3)} \end{align*} for all $\phi\in H^1$ and $\mathbf{r}\in\mathbb{R}^3$. \cite{Hardy1952Inequalities} \end{itemize} \section{Proof of Thm. \ref{thm:shorttimeexistence}} \label{sec:cauchy} \subsection{Preparatory results} \label{sec:preplemmas} We start with a result that generalises \cite[Lemma 3]{Cances1999OnDynamics}, on bounds for the following functions.\\\\ We define $\mathbf{f}_k^{ij}:\left(\mathbb{R}^3\right)^M\longrightarrow\mathbb{C}^3$ (a.e.) for $k\in\{1,\ldots,M\}$ and $i,j\in\{1,\ldots,N\}$ as \begin{align*} \mathbf{f}_k^{ij}:=\nabla_{\mathbf{x}_k}\left\langle\psi_i,\left(v[\mathbf{x}]\bm{\psi}\right)_j\right\rangle, \end{align*} namely, \begin{align*} \mathbf{f}_k^{ij}\left(\mathbf{x}\right)&=-z_k\left\langle\psi_i,\frac{\cdot-\mathbf{x}_k}{\left|\cdot-\mathbf{x}_k\right|^{3}}\psi_j\right\rangle.
\end{align*} Because of this, each of these functions effectively depends only on $\mathbf{x}_k$, and can therefore be regarded as a function on $\mathbb{R}^3$. Also, we can write \begin{align*} \mathbf{a}^1_k=-\frac{1}{2m_k}\sum_{j=1}^N\mathbf{f}^{jj}_k. \end{align*} \begin{remark} The bounds in the following Lemma are only with respect to spatial dependence: therefore, all quantities depend implicitly on time. \end{remark} \begin{lemma} \label{lem:forces} For all functions $\psi_i,\psi_j\in H^2$, $i,j\in\{1,\ldots,N\}$, the following estimates hold: \begin{alignat}{3} \label{eq:forcebounds1} \left\|\mathbf{f}_k^{ij}\right\|_{L^{\infty}(\mathbb{R}^3;\mathbb{C}^3)}&\lesssim_{z_k} \|\nabla \psi_i\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}\|\nabla \psi_j\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}\qquad &&\text{for all $\psi_i,\psi_j\in H^1$, and}\\ \left\|D\mathbf{f}_k^{ij}\right\|_{L^{\infty}(\mathbb{R}^3;\mathbb{C}^{3\times 3})}&\lesssim_{z_k}\|\psi_i\|_{H^2}\|\psi_j\|_{H^2}&&\text{for all $\psi_i,\psi_j\in H^2$.}\label{eq:forcebounds2} \end{alignat} In addition, we have that the functions $\mathbf{f}_k^{ij}\in W^{1,\infty}(\mathbb{R}^3;\mathbb{C}^3)\cap C^1(\mathbb{R}^3;\mathbb{C}^3)$ for all $k\in\{1,\ldots,M\}$. \end{lemma} \begin{proof} Define the sesquilinear map $G$ on $H^2$ as \begin{align} G\left[\phi_1,\phi_2\right]:=\left(\overline{\phi_1}\,\phi_2\right)\star\left|\cdot\right|^{-1}. \label{eq:mappingG} \end{align} For $f,g\in W^{1,1}$ and $i\in\{1,2,3\}$, we know that $\partial_i(f\star g)=(\partial_if)\star g=f\star(\partial_i g)$, so \begin{align*} \partial_iG\left[\phi_1,\phi_2\right]&=\left(\overline{\phi_1} \phi_2\right)\star \left(r_i\left|\mathbf{r}\right|^{-3}\right),\\ \partial_{ij}G\left[\phi_1,\phi_2\right]&=\left[\left(\partial_{j}\overline{\phi_1}\right)\phi_2+\overline{\phi_1}\left(\partial_{j}\phi_2\right)\right]\star \left(r_i\left|\mathbf{r}\right|^{-3}\right).
\end{align*} This gives the following estimates for $i,j\in\{1,2,3\}$ and all $\mathbf{r}\in\mathbb{R}^3$, where we use the Cauchy--Schwarz and Hardy's inequalities: \begin{align} \left|G\left[\phi_1,\phi_2\right](\mathbf{r})\right|&=\left|\left\langle\phi_1,\left|\cdot-\mathbf{r}\right|^{-1}\phi_2\right\rangle\right|\nonumber\\ &\lesssim{} \rVert\phi_1\rVert_{L^2}\left\rVert \nabla\phi_2\right\rVert_{L^2(\mathbb{R}^3;\mathbb{C}^3)},\label{eq:Gestimate1}\\ \left|\partial_i G\left[\phi_1,\phi_2\right](\mathbf{r})\right|&\leq \left\langle \left|\cdot-\mathbf{r}\right|^{-1}\left|\phi_1\right|,\left|\cdot-\mathbf{r}\right|^{-1}\left|\phi_2\right|\right\rangle\nonumber\\ &\lesssim{} \rVert\nabla\phi_1\rVert_{L^2(\mathbb{R}^3;\mathbb{C}^3)}\left\rVert \nabla\phi_2\right\rVert_{L^2(\mathbb{R}^3;\mathbb{C}^3)},\label{eq:Gestimate2}\\ \left|\partial_{ij} G\left[\phi_1,\phi_2\right](\mathbf{r})\right|&\leq \left\langle \left|\cdot-\mathbf{r}\right|^{-1}\left|\partial_j\phi_1\right|,\left|\cdot-\mathbf{r}\right|^{-1}\left|\phi_2\right|\right\rangle+\left\langle\left|\cdot-\mathbf{r}\right|^{-1}\left|\phi_1\right|,\left|\cdot-\mathbf{r}\right|^{-1}\left|\partial_j\phi_2\right|\right\rangle\nonumber\\ &\lesssim{}\rVert\nabla\partial_j\phi_1\rVert_{L^2(\mathbb{R}^3;\mathbb{C}^3)}\left\rVert\nabla\phi_2\right\rVert_{L^2(\mathbb{R}^3;\mathbb{C}^3)}\nonumber\\ &\qquad +\left\rVert \nabla\phi_1\right\rVert_{L^2(\mathbb{R}^3;\mathbb{C}^3)}\rVert\nabla\partial_j\phi_2\rVert_{L^2(\mathbb{R}^3;\mathbb{C}^3)}\nonumber\\ &\lesssim{}\rVert\phi_1\rVert_{H^2}\left\rVert\phi_2\right\rVert_{H^2},\label{eq:Gestimate3} \end{align} which shows that for all $\phi_1,\phi_2\in H^2$, $G\left[\phi_1,\phi_2\right]\in W^{2,\infty}$. Using \begin{align*} \mathbf{f}^{ij}_k=-z_k\nabla G\left[\psi_i,\psi_j\right], \end{align*} we get \begin{align*} \left\|\mathbf{f}^{ij}_k\right\|_{L^{\infty}\left(\mathbb{R}^3;\mathbb{C}^3\right)}&\lesssim_{z_k}\rVert\nabla\psi_i\rVert_{L^2(\mathbb{R}^3;\mathbb{C}^3)}\left\rVert\nabla\psi_j\right\rVert_{L^2(\mathbb{R}^3;\mathbb{C}^3)},\\ \left\|D\mathbf{f}^{ij}_k\right\|_{L^{\infty}\left(\mathbb{R}^3;\mathbb{C}^{3\times 3}\right)}&=\max_{\mathbf{r}\in\mathbb{R}^3}\left\{\left\| D^2G[\psi_i,\psi_j](\mathbf{r})\right\|_{\mathrm{F}\left(\mathbb{C}^{3\times 3}\right)}\right\}\\ &\lesssim_{z_k}\rVert\psi_i\rVert_{H^2}\left\rVert \psi_j\right\rVert_{H^2}, \end{align*} where $\|\cdot\|_{\mathrm{F}(\mathbb{C}^{m\times n})}$ denotes the Frobenius norm of an $(m\times n)$ matrix. This shows $\mathbf{f}^{ij}_k\in W^{1,\infty}(\mathbb{R}^3;\mathbb{C}^3)$; showing $\mathbf{f}^{ij}_k\in C^1(\mathbb{R}^3;\mathbb{C}^3)$ amounts to showing $G\left[\psi_i,\psi_j\right]\in C^2$. We see that $G\left[\psi_i,\psi_j\right]$ solves the Poisson equation \begin{align} \Delta G\left[\psi_i,\psi_j\right]=-4\pi\overline{\psi_i}\psi_j.\label{eq:deltaG} \end{align} Since $\psi_i,\psi_j\in H^2$, the Sobolev embedding gives $\overline{\psi_i}\psi_j\in C^{0,\alpha}_{\mathrm{loc}}$; standard elliptic regularity results then ensure $G\left[\psi_i,\psi_j\right]\in C^{2,\alpha}_{\mathrm{loc}}\subset C^2$.
\end{proof} The following results are on the existence of the propagator for the linear parts of the KS-type Hamiltonians $\widehat{H}\left[\mathbf{x}(t),\rho\right]$ for $t\in[0,\Theta]$, with $0<\Theta<\infty$, for a given nuclear configuration $\mathbf{x}\in C^1\left([0,\Theta];\left(\mathbb{R}^3\right)^M\right)$.\\\\ For given $\mathbf{x}\in C^1\left([0,\Theta];\left(\mathbb{R}^3\right)^M\right)$, for some $0<\Theta<\infty$, we consider the family of linear time-dependent Hamiltonians $\left\{\widehat{\mathfrak{H}}(t),t\in[0,\Theta]\right\}\subset \mathcal{L}\left(\left(H^2\right)^N;\left(L^2\right)^N\right)$, with \begin{align} \label{eq:linHam} \widehat{\mathfrak{H}}(t):=-\frac{1}{2}\Delta + \mathfrak{v}(t), \end{align} where the external potential $\mathfrak{v}$ is given by \begin{align} \mathfrak{v}(t,\cdot)&:=v[\mathbf{x}(t)].\label{eq:calV} \end{align} Note that the Hamiltonians $\widehat{\mathfrak{H}}(t)$ correspond to the linear part of the KS-type Hamiltonians $\widehat{H}\left[\mathbf{x}(t),\rho\right]$, and that they are self-adjoint on $\left(L^2\right)^N$, since the Laplacian is, and $\mathfrak{v}$ is real-valued. We also notice that they depend on the time evolution of the nuclear configuration $\mathbf{x}$. Associated with this family of Hamiltonians are the corresponding Cauchy problems \begin{align*} \begin{dcases} \begin{aligned} i\dot{\bm{\psi}}&= \widehat{\mathfrak{H}}(t)\bm{\psi},\\ \bm{\psi}(s)&=\bm{\psi}^0, \end{aligned} \end{dcases} \end{align*} on $[0,\Theta]$, for some $s\in[0,\Theta]$. For $s=0$, this can be considered the linear part of \eqref{eq:KS}. We also consider the equivalent integral equation \begin{align} \label{eq:IE} \bm{\psi}(t)=U_0(t-s)\bm{\psi}^0-i \int_s^tU_0(t-\sigma)\mathfrak{v}(\sigma)\bm{\psi}(\sigma)\mathrm{d} \sigma, \end{align} where \begin{align*} U_0(t):=\exp\left(\frac{it\Delta}{2}\right) \end{align*} denotes the free propagator (i.e., the propagator for the free particle), which is an evolution operator on $\left(H^2\right)^N$. Now, we formulate the following Lemma in the spirit of \cite[Lemma 4]{Cances1999OnDynamics}, based on the idea of \cite[Cor. 1.2. (1)--(2)--(4), Thm. 1.1. (2) \& Thm. 1.3. (5)--(6)]{Yajima1987ExistenceEquations}. \begin{lemma} \label{lem:evolutionoperators} For the family of Hamiltonians $\left\{\widehat{\mathfrak{H}}(t),t\in[0,\Theta]\right\}$, there exists a unique family of linear evolution operators \begin{align*} U(t,s):\left(H^2\right)^N\longrightarrow\left(L^2\right)^N,\qquad (t,s)\in[0,\Theta]^2, \end{align*} such that \begin{align*} \bm{\psi}=U(\cdot,s)\bm{\psi}^0 \end{align*} solves \eqref{eq:IE} for all $\bm{\psi}^0\in\left(H^2\right)^N$, with \begin{align*} \left\|\bm{\psi}(t)\right\|_{\left(L^{2}\right)^N}=\left\|\bm{\psi}^0\right\|_{\left(L^{2}\right)^N} \end{align*} for all $t\in[0,\Theta]$. Moreover, this family enjoys the following properties: \begin{enumerate}[(i)] \item $U(t,s)U(s,r)=U(t,r)$ for all $(t,s,r)\in[0,\Theta]^3$. \item $U(t,t)=\mathrm{Id}$ for all $t\in[0,\Theta]$. \item $U(t,s)$ is a unitary operator on $\left(L^2\right)^N$ for all $(t,s)\in[0,\Theta]^2$: \begin{align*} \left\|U(t,s)\bm{\psi}\right\|_{\left(L^{2}\right)^N}=\left\|\bm{\psi}\right\|_{\left(L^{2}\right)^N}. \end{align*} \item For all $\bm{\phi}\in\left(L^2\right)^N$, $\left((t,s)\longmapsto U(t,s)\bm{\phi}\right): [0,\Theta]^2\longrightarrow \left(L^2\right)^N$ is a continuous mapping. \item $U(t,s)\in \mathcal{L}\left(\left(H^2\right)^N\right)$ for all $(t,s)\in[0,\Theta]^2$.
\item For all $\bm{\phi}\in\left(H^2\right)^N$, $\left((t,s)\longmapsto U(t,s)\bm{\phi}\right): [0,\Theta]^2\longrightarrow \left(H^2\right)^N$ is a continuous mapping. \item For all $\bm{\phi}\in\left(H^2\right)^N$, the mapping $\left((t,s)\longmapsto U(t,s)\bm{\phi}\right)\in C^1\left([0,\Theta]^2;\left(L^2\right)^N\right)$, and the following equations hold in $\left(L^2\right)^N$: \begin{align*} i\frac{\partial}{\partial t}\left(U(t,s)\bm{\phi}\right)&=\widehat{\mathfrak{H}}(t)U(t,s)\bm{\phi},\\ i\frac{\partial}{\partial s}\left(U(t,s)\bm{\phi}\right)&=-U(t,s)\widehat{\mathfrak{H}}(s)\bm{\phi}. \end{align*} \item For all $\gamma>0$, there is a constant $B_{\Theta,\gamma}$ of the form \begin{align} B_{\Theta,\gamma}=A_\gamma^{1+C_\gamma\Theta},\qquad A_\gamma>1,\quad C_\gamma>2,\label{eq:Bconst} \end{align} such that if \begin{align*} \left\|\dot{\mathbf{x}}\right\|_{C^0\left([0,\Theta];\left(\mathbb{R}^3\right)^M\right)}\leq \gamma, \end{align*} we have for all $(t,s)\in[0,\Theta]^2$ \begin{align*} \left\|U(t,s)\right\|_{\mathcal{L}\left(\left(H^{2}\right)^N\right)}\leq B_{\Theta,\gamma}. \end{align*} \end{enumerate} \end{lemma} \begin{proof} Since the linear Hamiltonians $\widehat{\mathfrak{H}}(t)$ do not depend on an electronic configuration $\bm{\psi}$, and act on every element $\psi_j,j\in\{1,\ldots,N\}$ independently, the result for general $N$ readily follows from the case $N=1$, taking into account the product topology. Properties (i)--(vii) have been proven for the case $N=1$ in \cite[Cor. 1.2. (1)--(2)--(4), Thm. 1.1. (2) \& Thm. 1.3. (5)--(6)]{Yajima1987ExistenceEquations}; property (viii) has been proven for the case $N=M=1$ in \cite[Lemma 4]{Cances1999OnDynamics}, which in turn follows the results in \cite{Yajima1987ExistenceEquations}. We therefore justify (viii) for the case $N=1$, $M>1$, which needs only some additional changes. For all $a\in[0,\Theta]$, we define, with $p\in[2,3)$ and $p_1=2p(p+1)^{-1}$, the following norms for $\mathfrak{v}$ and its derivative $\dot{\mathfrak{v}}$: \begin{align} \left\|\mathfrak{v}\right\|_{\widetilde{\mathcal{M}}}&:=\inf_{\subalign{&\,\,\,\mathfrak{v}_1,\mathfrak{v}_2:\\ \mathfrak{v}=&\mathfrak{v}_1+\mathfrak{v}_2\text{ a.e.}}}\left\{\left\|\mathfrak{v}_1\right\|_{L^{\infty}\left([-a,a];L^{p}\right)}+\left\|\mathfrak{v}_2\right\|_{L^{\infty}\left([-a,a];L^{\infty}\right)}\right\},\label{eq:Mnorm}\\ \left\|\dot{\mathfrak{v}}\right\|_{\mathcal{N}} &:=\inf_{\subalign{&\,\,\,\mathfrak{w}_1,\mathfrak{w}_2:\\ \dot{\mathfrak{v}}=&\mathfrak{w}_1+\mathfrak{w}_2\text{ a.e.}}}\left\{\left\|\mathfrak{w}_1\right\|_{L^{\infty}\left([-a,a];L^{p_1}\right)}+\left\|\mathfrak{w}_2\right\|_{L^{\infty}\left([-a,a];L^{\infty}\right)}\right\}.\label{eq:Nnorm} \end{align} Note that \eqref{eq:Mnorm} is a bounded quantity, independent of $\mathbf{x}$. Using the chain rule, we can bound \eqref{eq:Nnorm} by \begin{align*} \gamma\times \inf_{\subalign{&w_1,w_2:\\ \nabla_\mathbf{x}v[\mathbf{x}]&=w_1+w_2\text{ a.e.}}}\left\{\left\|w_1\right\|_{L^{p_1}\left(\mathbb{R}^3;\left(\mathbb{R}^3\right)^M\right)}+\left\|w_2\right\|_{L^{\infty}\left(\mathbb{R}^3;\left(\mathbb{R}^3\right)^M\right)}\right\}. \end{align*} Now, we can conclude the proof as carried out in \cite{Cances1999OnDynamics}.
\end{proof} In the following, we study the non-linear mapping from $\left(H^1\right)^N$ to $\left(L^2\right)^N$ given by $\bm{\psi}\longmapsto v_\textsc{hx}[\rho]\bm{\psi}=\left(v_\textsc{h}[\rho]+v_{\textsc{x}}[\rho]\right)\bm{\psi}$, with the non-local, convolution term $v_\textsc{h}[\rho]\bm{\psi}$ and the local, LDA term $v_{\textsc{x}}[\rho]\bm{\psi}$. We want to obtain Lipschitz estimates for each of these terms. Note that in the following Lemmata, we also regularly drop the temporal and spatial variables as arguments of the KS orbitals. \begin{lemma}[Lipschitz estimates on the Hartree term] \label{lem:lipschitzconvolution} For all $\bm{\psi},\bm{\psi}'\in (H^1)^N$, where $\rho':=\left|\bm{\psi}'\right|^2$, we have \begin{align*} &\left\|v_\textsc{h}[\rho]\bm{\psi}-v_\textsc{h}\left[\rho'\right]\bm{\psi}'\right\|_{(L^2)^N}\lesssim{} \sqrt{N}\left\|\bm{\psi}-\bm{\psi}'\right\|_{\left(L^{2}\right)^N}\times\tag{A}\label{eq:A}\\ \nonumber&\qquad\left[\sum_{j=1}^N\left(\left\|\nabla\psi_j\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}+ \left\|\nabla\psi'_j\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}\right) \left\|\bm{\psi}'\right\|_{\left(L^{2}\right)^N}+ \sum_{i=1}^N\left\|\psi_i\right\|_{L^2}\left\|\nabla\psi_i\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}\right]. \end{align*} Additionally, for all $\bm{\psi},\bm{\psi}'\in \left(H^2\right)^N$ we have \begin{align*} \left\|v_\textsc{h}[\rho]\bm{\psi}\right\|_{(H^2)^N} &\lesssim{} \sqrt{N} \sum_{j=1}^N\left\|\psi_j\right\|_{H^1}^2 \left\|\bm{\psi}\right\|_{(H^2)^N},\tag{B}\label{eq:B}\\ \left\|v_\textsc{h}[\rho]\bm{\psi}-v_\textsc{h}\left[\rho'\right]\bm{\psi}'\right\|_{(H^2)^N} &\lesssim{}\sqrt{N}\left\|\bm{\psi}-\bm{\psi}'\right\|_{\left(H^{2}\right)^N}\times \nonumber\\ &\qquad\sum_{j=1}^N\left[\left(\left\|\psi_j\right\|_{H^1}+ \left\|\psi_j'\right\|_{H^1}\right) \left\|\bm{\psi}'\right\|_{\left(H^{2}\right)^N}+\left\|\psi_j\right\|^2_{H^1}\right]. \tag{C}\label{eq:C} \end{align*} \end{lemma} \begin{proof} \textit{Part \ref{eq:A}.}\\ By adding and subtracting the term $\left(\left|\psi_i\right|^2\star\left|\cdot\right|^{-1}\right)\psi_j'$, we can write for all $j\in\{1,\ldots,N\}$ \begin{align*} &\left\|\left(v_\textsc{h}[\rho]\bm{\psi}-v_\textsc{h}\left[\rho'\right]\bm{\psi}'\right)_j\right\|_{L^2}\leq\\ &\qquad\sum_{i=1}^N \left[\underbrace{\left\|\left(\left|\psi_i\right|^2\star\left|\cdot\right|^{-1}\right)\left(\psi_j-\psi_j'\right)\right\|_{L^2}}_{=:\,\mathrm{(I)}}+ \underbrace{\left\|\left(\left(\left|\psi_i\right|^2-\left|\psi_i'\right|^2\right)\star\left|\cdot\right|^{-1}\right)\psi_j'\right\|_{L^2}}_{=:\,\mathrm{(II)}} \right]. 
\end{align*} Using the Cauchy--Schwarz inequality in (\ref{eq:starA1},\ref{eq:starA3}), Hardy's inequality in (\ref{eq:starA2},\ref{eq:starA4}), and the reverse triangle inequality in \eqref{eq:starA4}, we have \begin{align*} \mathrm{(I)}&\leq \left\|\left|\psi_i\right|^2\star\left|\cdot\right|^{-1}\right\|_{L^{\infty}}\left\|\psi_j-\psi_j'\right\|_{L^2}\\ &\leq \underset{\mathbf{r}\in\mathbb{R}^3}{\mathrm{esssup}}\left\{\left|\left\langle \left|\psi_i\right|,\left|\cdot-\mathbf{r}\right|^{-1}\left|\psi_i\right|\right\rangle\right|\right\}\left\|\bm{\psi}-\bm{\psi}'\right\|_{\left(L^2\right)^N}\\ &\leq \underset{\mathbf{r}\in\mathbb{R}^3}{\mathrm{esssup}}\left\{\left\|\psi_i\right\|_{L^2}\left\|\left|\cdot-\mathbf{r}\right|^{-1}\psi_i\right\|_{L^2}\right\}\left\|\bm{\psi}-\bm{\psi}'\right\|_{\left(L^2\right)^N}\tag{$*$}\label{eq:starA1}\\ &\lesssim{}\left\|\psi_i\right\|_{L^2}\left\|\nabla\psi_i\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}\left\|\bm{\psi}-\bm{\psi}'\right\|_{\left(L^2\right)^N}\tag{$*2$}\label{eq:starA2}, \end{align*} and \begin{align*} \mathrm{(II)}&\leq\left\|\left(\left|\psi_i\right|^2-\left|\psi_i'\right|^2\right)\star\left|\cdot\right|^{-1}\right\|_{L^{\infty}}\left\|\psi_j'\right\|_{L^2}\\ &\leq \underset{\mathbf{r}\in\mathbb{R}^3}{\mathrm{esssup}}\left\{\left|\left\langle \left|\psi_i\right|-\left|\psi_i'\right|,\left|\cdot-\mathbf{r}\right|^{-1}\left(\left|\psi_i\right|+\left|\psi_i'\right|\right)\right\rangle\right|\right\}\left\|\bm{\psi}'\right\|_{\left(L^2\right)^N}\\ &\leq \underset{\mathbf{r}\in\mathbb{R}^3}{\mathrm{esssup}}\left\{ \left\| \left|\psi_i\right|-\left|\psi_i'\right|\right\|_{L^2} \left(\left\|\left|\cdot-\mathbf{r}\right|^{-1}\psi_i\right\|_{L^2}+ \left\|\left|\cdot-\mathbf{r}\right|^{-1}\psi_i'\right\|_{L^2}\right)\right\}\left\|\bm{\psi}'\right\|_{\left(L^2\right)^N}\tag{$*3$}\label{eq:starA3}\\ &\lesssim{} \left(\left\|\nabla\psi_i\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}+ \left\|\nabla\psi_i'\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)} \right)\left\|\bm{\psi}'\right\|_{\left(L^2\right)^N}\left\|\psi_i-\psi_i'\right\|_{L^2}\tag{$*4$}\label{eq:starA4}\\ &\lesssim{}\left(\left\|\nabla\psi_i\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}+ \left\|\nabla\psi_i'\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)} \right)\left\|\bm{\psi}'\right\|_{\left(L^2\right)^N}\left\|\bm{\psi}-\bm{\psi}'\right\|_{\left(L^2\right)^N}. \end{align*} Combining these bounds and using the norm on $\left(L^2\right)^N$, \eqref{eq:A} follows.\\ \\ \textit{Part \ref{eq:B}.}\\ Adopting the map $G$ from \eqref{eq:mappingG}, we can write \begin{align*} v_\textsc{h}[\rho]\bm{\psi}=\sum_{i=1}^NG\left[\psi_i,\psi_i\right]\bm{\psi}. \end{align*} We obtain, using the product rule for the Laplacian and \eqref{eq:deltaG}, for all $\phi_1,\phi_2,\phi_3\in H^2$ \begin{align*} \Delta \left(G\left[\phi_1,\phi_2\right]\phi_3\right)=G\left[\phi_1,\phi_2\right]\Delta\phi_3+2\nabla\left(G\left[\phi_1,\phi_2\right]\right)\cdot\nabla\phi_3-4\pi\overline{\phi_1}\phi_2\phi_3. 
\end{align*} We have the following estimates, using (\ref{eq:Gestimate1},\ref{eq:Gestimate2}): \begin{align*} \left\|G\left[\phi_1,\phi_2\right]\phi_3\right\|_{L^2}&\leq \left\|G\left[\phi_1,\phi_2\right]\right\|_{L^{\infty}}\left\|\phi_3\right\|_{L^2}\\ &\lesssim{}\left\|\phi_1\right\|_{H^1}\left\|\phi_2\right\|_{H^1}\left\|\phi_3\right\|_{H^1},\\ \left\|G\left[\phi_1,\phi_2\right]\Delta\phi_3\right\|_{L^2}&\leq\left\|G\left[\phi_1,\phi_2\right]\right\|_{L^{\infty}}\left\|\Delta\phi_3\right\|_{L^2}\\ &\lesssim{} \left\|\phi_1\right\|_{H^1}\left\|\phi_2\right\|_{H^1}\left\|\phi_3\right\|_{H^2},\\ \left\|\nabla\left(G\left[\phi_1,\phi_2\right]\right)\cdot\nabla\phi_3\right\|_{L^2}&\leq \left\|G\left[\phi_1,\phi_2\right]\right\|_{W^{1,\infty}}\left\|\nabla\phi_3\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}\\ &\lesssim{} \left\|\phi_1\right\|_{H^1}\left\|\phi_2\right\|_{H^1}\left\|\phi_3\right\|_{H^1}. \end{align*} Furthermore, using H\"older's inequality and Sobolev's embedding theorem, we have \begin{align*} \left\|\overline{\phi_1}\phi_2\phi_3\right\|_{L^2}&\leq \left\|\phi_1\right\|_{L^6}\left\|\phi_2\right\|_{L^6}\left\|\phi_3\right\|_{L^6}\\ &\lesssim{} \left\|\phi_1\right\|_{H^1}\left\|\phi_2\right\|_{H^1}\left\|\phi_3\right\|_{H^1}. \end{align*} This gives for all $j\in\{1,\ldots,N\}$ \begin{align*} \left\|\left(v_\textsc{h}[\rho]\bm{\psi}\right)_j\right\|_{H^2}&\leq \sum_{i=1}^N\left\|G\left[\psi_i,\psi_i\right]\psi_j\right\|_{H^2}\\ &\lesssim{}\sum_{i=1}^N\left\|\psi_i\right\|_{H^1}^2\left\|\bm{\psi}\right\|_{(H^2)^N}. \end{align*} Using this and the norm on $\left(H^2\right)^N$, \eqref{eq:B} follows.\\\\ \textit{Part \ref{eq:C}.}\\ Following the first splitting in Part \ref{eq:A}, we bound for all $j\in\{1,\ldots,N\}$ \begin{align*} &\left\|\Delta\left(v_\textsc{h}[\rho]\bm{\psi}-v_\textsc{h}\left[\rho'\right]\bm{\psi}'\right)_j\right\|_{L^2}\leq\\ &\qquad\sum_{i=1}^N\left\{\underbrace{\left\|\Delta\left[G\left[\psi_i,\psi_i\right]\left(\psi_j-\psi_j'\right)\right]\right\|_{L^2}}_{=:\,\mathrm{(I)}} +\underbrace{\left\|\Delta\left[G\left[\left|\psi_i\right|+\left|\psi_i'\right|,\left|\psi_i\right|-\left|\psi_i'\right|\right]\psi_j'\right]\right\|_{L^2}}_{=:\,\mathrm{(II)}}\right\}. \end{align*} Using the estimates in Part \ref{eq:B}, we can bound (I) and (II) using $(\phi_1,\phi_2,\phi_3)=(\psi_i,\psi_i,\psi_j-\psi_j')$ for (I) and $(\phi_1,\phi_2,\phi_3)=\left(\left|\psi_i\right|+\left|\psi_i'\right|,\left|\psi_i\right|-\left|\psi_i'\right|,\psi_j'\right)$ for (II). This way, we can bound, using the reverse triangle inequality in the form $\left|\left|\psi_i\right|-\left|\psi_i'\right|\right|\leq \left|\psi_i-\psi_i'\right|$ and the norm on $\left(L^2\right)^N$, \begin{align*} &\left\|\Delta\left(v_\textsc{h}[\rho]\bm{\psi}-v_\textsc{h}\left[\rho'\right]\bm{\psi}'\right)\right\|_{\left(L^2\right)^N}\lesssim{} \\ &\qquad \sqrt{N}\sum_{j=1}^N\left[ \left\|\psi_j\right\|^2_{H^1} +\left( \left\|\psi_j\right\|_{H^1} + \left\|\psi'_j\right\|_{H^1} \right)\left\|\bm{\psi}'\right\|_{\left(H^2\right)^N} \right]\left\|\bm{\psi}-\bm{\psi}'\right\|_{\left(H^2\right)^N}. \end{align*} Since the right-hand side of estimate \eqref{eq:A} is also bounded by the above expression, \eqref{eq:C} directly follows. 
\end{proof} \begin{remark}\label{rem:ell2} Since we deal with $\mathbb{C}^N$-valued functions $\bm{\psi}$ on $\mathbb{R}^3$ with gradients $\nabla\bm{\psi}=\left(\nabla\psi_1,\ldots,\nabla\psi_N\right)\in\left(\mathbb{C}^{3}\right)^N$ and $\mathbb{R}$-valued functions of the density $f=f(\rho)$ on $\mathbb{R}^3$ with $\mathbb{R}^3$-valued gradients $\nabla f$, we get that the inner product of these gradients gives $\nabla\bm{\psi}\cdot\nabla f=\left(\nabla\psi_1\cdot\nabla f,\ldots,\nabla\psi_N\cdot\nabla f\right):\mathbb{R}^3\longrightarrow\mathbb{C}^N$. Now, we bound these inner products in $\mathbb{C}^N$, in order to find a suitable norm: by the Cauchy--Schwarz inequality, we have \begin{align*} \left|\nabla\bm{\psi}\cdot\nabla f\right|^2 &=\sum_{j=1}^N\left|\nabla\psi_j\cdot\nabla f\right|^2\\ &\leq\left|\nabla f\right|^2\sum_{j=1}^N\left|\nabla\psi_j\right|^2, \end{align*} so as a norm for the gradient $\nabla \bm{\psi}$, we can take \begin{align*} \left\|\nabla \bm{\psi}\right\|_{*}^2:=\sum_{j=1}^N\left|\nabla\psi_j\right|^2. \end{align*} Using this norm and the Cauchy--Schwarz inequality, we can get the following bound for $\mathbb{C}^N$-valued functions $\bm{\psi},\bm{\psi}'$: \begin{align} \label{eq:ell2bound} \left|\bm{\psi}\cdot\nabla\bm{\psi}'\right|^2 &= \left|\sum_{j=1}^N\psi_{j}\nabla\psi_{j}'\right|^2\\ &\leq \sum_{j=1}^N\left|\psi_{j}\right|^2\sum_{i=1}^3\sum_{k=1}^N\left|\partial_i\psi'_{k}\right|^2\\ &= \left|\bm{\psi}\right|^2\left\|\nabla\bm{\psi}'\right\|_{*}^2. \end{align} \end{remark} \begin{lemma}[Mean-value estimates for the density] \label{lem:MVE} For $\alpha\geq1/2$, we have, with $\rho':=\left|\bm{\psi}'\right|^2$, \begin{align} \left|\rho^\alpha-\rho'^\alpha\right|\lesssim_{\alpha} \left(\left\|\rho\right\|^{\alpha-1/2}_{L^{\infty}}+\left\|\rho'\right\|^{\alpha-1/2}_{L^{\infty}}\right)\left|\bm{\psi}-\bm{\psi}'\right|.\label{eq:MVE} \end{align} \end{lemma} \begin{proof} We have, by the fundamental theorem of calculus, \begin{align*} \left|\rho^\alpha-\rho'^\alpha\right|&=\left|\left|\bm{\psi}\right|^{2\alpha}-\left|\bm{\psi}'\right|^{2\alpha}\right|\\ &=\left|\int_0^1\frac{\mathrm{d}}{\mathrm{d}t}\left[\left|\bm{\psi}'+t\left(\bm{\psi}-\bm{\psi}'\right)\right|^{2\alpha}\right]\mathrm{d}t\right|\\ &\lesssim_{\alpha} \left(\left|\bm{\psi}\right|+\left|\bm{\psi}'\right|\right)^{2\alpha-1}\left|\bm{\psi}-\bm{\psi}'\right|\\ &\lesssim_{\alpha}\left(\left|\bm{\psi}\right|^{2\alpha-1}+\left|\bm{\psi}'\right|^{2\alpha-1}\right)\left|\bm{\psi}-\bm{\psi}'\right|\\ &= \left(\rho^{\alpha-1/2}+\rho'^{\alpha-1/2}\right)\left|\bm{\psi}-\bm{\psi}'\right|, \end{align*} and \eqref{eq:MVE} readily follows. \end{proof} \begin{lemma}[Mean-value estimates for the density gradient] \label{lem:MVEnablarho} For $\beta\geq3/2$, we have, with $\rho':=\left|\bm{\psi}'\right|^2$, \begin{align} \left|\nabla\left(\rho^{\beta}\right)-\nabla\left(\rho'^{\beta}\right)\right| \lesssim_{\beta}\,&\left(Q_1\left\|\nabla\bm{\psi}\right\|_{*}+Q_2\left\|\nabla\bm{\psi}'\right\|_{*}\right)\left|\bm{\psi}-\bm{\psi}'\right|+Q_3\left\|\nabla\bm{\psi}-\nabla\bm{\psi}'\right\|_{*},\label{eq:MVEnablarho} \end{align} where \begin{align*} \begin{dcases} \begin{aligned} Q_1&=\rho^{\beta-1},\\ Q_2&=\rho'^{1/2}\left(\rho^{\beta-3/2}+\rho'^{\beta-3/2}\right),\\ Q_3&=\rho^{\beta-1}\rho'^{1/2}. 
\end{aligned} \end{dcases} \end{align*} \end{lemma} \begin{proof} We have, using $\nabla\rho=\nabla\bm{\psi}\cdot \overline{\bm{\psi}}+\bm{\psi}\cdot\nabla\left(\overline{\bm{\psi}}\right)$ and \eqref{eq:ell2bound} for the pair $(\bm{\psi},\overline{\bm{\psi}})$, \begin{align} \label{eq:nablarho} \left|\nabla\rho\right|\lesssim \rho^{1/2}\left\|\nabla\bm{\psi}\right\|_{*}. \end{align} Using \begin{align*} \left|\nabla\left(\rho^{\beta}\right)\right|\lesssim_{\beta}\rho^{\beta-1}\left|\nabla\rho\right| \end{align*} and adding and subtracting the term $\rho^{\beta-1}\nabla\rho'$, we get for all $\beta\geq 1$ \begin{align*} \left|\nabla\left(\rho^{\beta}\right)-\nabla\left(\rho'^{\beta}\right)\right|\lesssim_{\beta}\rho^{\beta-1}\left|\nabla\rho-\nabla\rho'\right|+\left|\rho^{\beta-1}-\rho'^{\beta-1}\right|\left|\nabla\rho'\right|. \end{align*} By adding and subtracting $\bm{\psi}'\cdot\nabla\left(\overline{\bm{\psi}}\right)$ and $\overline{\bm{\psi}'}\cdot\nabla\bm{\psi}$, and using \eqref{eq:ell2bound} for the pairs $\left(\bm{\psi}-\bm{\psi}',\overline{\bm{\psi}}\right)$, $\left(\bm{\psi}',\overline{\bm{\psi}-\bm{\psi}'}\right)$, $\left(\overline{\bm{\psi}-\bm{\psi}'},\bm{\psi}\right)$ and $\left(\overline{\bm{\psi}'},\bm{\psi}-\bm{\psi}'\right)$, we get \begin{align} \nonumber \left|\nabla\rho-\nabla\rho'\right|&= \left|\bm{\psi}\cdot\nabla\left(\overline{\bm{\psi}}\right)-\bm{\psi}'\cdot\nabla\left(\overline{\bm{\psi}'}\right)+\overline{\bm{\psi}}\cdot\nabla\bm{\psi}-\overline{\bm{\psi}'}\cdot\nabla\left(\bm{\psi}'\right)\right|\\ \nonumber &\leq\left|\left(\bm{\psi}-\bm{\psi}'\right)\cdot\nabla\left(\overline{\bm{\psi}}\right)\right|+ \left|\bm{\psi}'\cdot\nabla\left(\overline{\bm{\psi}-\bm{\psi}'}\right)\right|\\ &\qquad \nonumber+\left|\left(\overline{\bm{\psi}-\bm{\psi}'}\right)\cdot\nabla\left(\bm{\psi}\right)\right|+\left|\overline{\bm{\psi}'}\cdot\nabla\left(\bm{\psi}-\bm{\psi}'\right)\right|\\ &\lesssim{}\left\|\nabla\bm{\psi}\right\|_{*}\left|\bm{\psi}-\bm{\psi}'\right|+\rho'^{1/2}\left\|\nabla\bm{\psi}-\nabla\bm{\psi}'\right\|_{*}. \label{eq:nablarhodiff} \end{align} Using \eqref{eq:MVE} with $\alpha=\beta-1\geq 1/2$ and \eqref{eq:nablarho} for $\rho'$, we get \begin{align*} \left|\rho^{\beta-1}-\rho'^{\beta-1}\right|\left|\nabla\rho'\right|\lesssim_{\beta}\left(\rho^{\beta-3/2}+\rho'^{\beta-3/2}\right)\rho'^{1/2}\left\|\nabla\bm{\psi}'\right\|_{*}\left|\bm{\psi}-\bm{\psi}'\right|. \end{align*} These estimates altogether give the result in \eqref{eq:MVEnablarho}. \end{proof} \begin{lemma}[Lipschitz estimates on the local non-linearity] \label{lem:lipschitzlda} Let $q\in[1,\infty)$, and $\lambda\in\mathbb{R}$. 
For any fixed $p\in[1,\infty]$ and for all $\bm{\psi},\bm{\psi}'\in \left(H^2\right)^N\cap \left(L^p\right)^N,$ we have, with $\rho':=\left|\bm{\psi}'\right|^2$, \begin{align*} \left\|v_{\textsc{x}}[\rho]\bm{\psi}-v_{\textsc{x}}\left[\rho'\right]\bm{\psi}'\right\|_{(L^p)^N} \lesssim_{q,\lambda} \sum_{j=1}^N\left[\left\|\psi_j\right\|_{H^2}^{2(q-1)}+\left\|\psi_j'\right\|_{H^2}^{2(q-1)}\right] \left\|\bm{\psi}-\bm{\psi}'\right\|_{\left(L^p\right)^N}.\tag{D}\label{eq:D} \end{align*} Moreover, for all $q\geq 7/2$ and any $\lambda\in\mathbb{R}$, it holds that \begin{align*} \left\|v_{\textsc{x}}[\rho]\bm{\psi}-v_{\textsc{x}}\left[\rho'\right]\bm{\psi}'\right\|_{(H^2)^N} \leq \mathcal{L}_{q,\lambda}\left(\max\left\{\left\|\bm{\psi}\right\|_{(H^2)^N},\left\|\bm{\psi}'\right\|_{(H^2)^N}\right\}\right) \left\|\bm{\psi}-\bm{\psi}'\right\|_{(H^2)^N},\tag{E}\label{eq:E} \end{align*} where $\mathcal{L}_{q,\lambda}:\mathbb{R}_0^+\longrightarrow\mathbb{R}_0^+$ is a non-decreasing function, which vanishes only at the origin. \end{lemma} \begin{proof} \textit{Part \ref{eq:D}.}\\ By the fundamental theorem of calculus, we can write \begin{align*} \left|v_{\textsc{x}}[\rho]\bm{\psi}-v_{\textsc{x}}\left[\rho'\right]\bm{\psi}'\right|&=|\lambda|\left|\left|\bm{\psi}\right|^{2(q-1)}\bm{\psi}-\left|\bm{\psi}'\right|^{2(q-1)}\bm{\psi}'\right|\\ &\lesssim_\lambda\left|\int_0^1\frac{\mathrm{d}}{\mathrm{d}t}\left[\left|\bm{\psi}'+t\left(\bm{\psi}-\bm{\psi}'\right)\right|^{2(q-1)}\left(\bm{\psi}'+t\left(\bm{\psi}-\bm{\psi}'\right)\right)\right]\mathrm{d} t\right|\\ &\lesssim_{q}\left|\bm{\psi}-\bm{\psi}'\right|\int_0^1\left|\bm{\psi}'+t\left(\bm{\psi}-\bm{\psi}'\right)\right|^{2(q-1)}\mathrm{d} t\\ &\leq\left|\bm{\psi}-\bm{\psi}'\right|\Big(\left|\bm{\psi}'\right|+\left|\bm{\psi}\right|\Big)^{2(q-1)}\\ &\lesssim_{q}\left(\rho^{q-1} +\rho'^{q-1}\right)\left|\bm{\psi}-\bm{\psi}'\right|. \end{align*} Since $H^2$ is embedded into $L^\infty$, we have \begin{align} \label{eq:rhoinfty} \left\|\rho\right\|_{L^\infty}^{\alpha}\lesssim_\alpha \sum_{j=1}^N\left\|\psi_j\right\|_{H^2}^{2\alpha}, \end{align} for all $\alpha>0$. Taking $\alpha=q-1>0$ and combining the results, \eqref{eq:D} follows.\\\\ \textit{Part \ref{eq:E}.}\\ Taking $p=2$ in \eqref{eq:D}, it only remains to estimate the $\left(L^2\right)^N$ norm of $\Delta\left(v_{\textsc{x}}[\rho]\bm{\psi}-v_{\textsc{x}}[\rho']\bm{\psi}'\right)$ in order to obtain the $\left(H^2\right)^N$ norm estimate. Using the product rule for the Laplacian in $\mathbb{R}^3$, we get \begin{align} \nonumber &\Delta\left(v_{\textsc{x}}[\rho]\bm{\psi}-v_{\textsc{x}}[\rho']\bm{\psi}'\right)=\lambda\biggl\{\underbrace{\rho^{q-1}\Delta\bm{\psi}-\rho'^{q-1}\Delta\bm{\psi}'}_{=:\,\,\text{(I)}}\\ &\quad+\underbrace{2\left[\nabla\left(\rho^{q-1}\right)\cdot\nabla\bm{\psi}-\nabla\left(\rho'^{q-1}\right)\cdot\nabla\bm{\psi}'\right]}_{=:\,\,\text{(II)}} +\underbrace{\Delta\left(\rho^{q-1}\right)\bm{\psi}-\Delta\left(\rho'^{q-1}\right)\bm{\psi}'}_{=:\,\,\text{(III)}}\biggr\},\label{eq:laplacianP2} \end{align} which is in $\mathbb{C}^N$, as discussed in Remark \ref{rem:ell2}. 
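To fix ideas, we record (as a purely illustrative specialisation of ours, not needed for the argument) how \eqref{eq:laplacianP2} reads for the admissible integer exponent $q=4$, for which $v_{\textsc{x}}[\rho]\bm{\psi}=\lambda\rho^{3}\bm{\psi}$: using $\nabla\left(\rho^{3}\right)=3\rho^{2}\nabla\rho$ and $\Delta\left(\rho^{3}\right)=3\rho^{2}\Delta\rho+6\rho\left|\nabla\rho\right|^{2}$, one obtains \begin{align*} \Delta\left(v_{\textsc{x}}[\rho]\bm{\psi}-v_{\textsc{x}}\left[\rho'\right]\bm{\psi}'\right)={}&\lambda\Big\{\rho^{3}\Delta\bm{\psi}-\rho'^{3}\Delta\bm{\psi}'+6\left(\rho^{2}\nabla\rho\cdot\nabla\bm{\psi}-\rho'^{2}\nabla\rho'\cdot\nabla\bm{\psi}'\right)\\ &+\left(3\rho^{2}\Delta\rho+6\rho\left|\nabla\rho\right|^{2}\right)\bm{\psi}-\left(3\rho'^{2}\Delta\rho'+6\rho'\left|\nabla\rho'\right|^{2}\right)\bm{\psi}'\Big\}, \end{align*} whose three groups of terms correspond to (I), (II) and (III), respectively.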
We discuss the terms one by one.\\\\ \noindent\textit{Part \ref{eq:E}(I).}\\ By adding and subtracting the term $\rho^{q-1}\Delta\bm{\psi}'$ and using \eqref{eq:MVE} with $\alpha=q-1>1$, we have \begin{align} \nonumber \left|\mathrm{(I)}\right|&\leq \left|\rho^{q-1}\right|\left|\Delta\bm{\psi}-\Delta\bm{\psi}'\right|+\left|\rho^{q-1}-\rho'^{q-1}\right|\left|\Delta\bm{\psi}'\right|\\ &\lesssim_q A_1\left|\Delta\bm{\psi}'\right|\left|\bm{\psi}-\bm{\psi}'\right| +A_2\left|\Delta\bm{\psi}-\Delta\bm{\psi}'\right|,\label{eq:i} \end{align} where \begin{align*} \begin{dcases} \begin{aligned} A_1&=\left\|\rho\right\|_{L^{\infty}}^{q-3/2}+\left\|\rho'\right\|_{L^{\infty}}^{q-3/2},\\ A_2&=\left\|\rho\right\|_{L^{\infty}}^{q-1}. \end{aligned} \end{dcases} \end{align*} \textit{Part \ref{eq:E}(II).}\\ By adding and subtracting the term $\nabla\left(\rho^{q-1}\right)\cdot\nabla\bm{\psi}'$, we can write \begin{align*} &\left|\nabla\left(\rho^{q-1}\right)\cdot\nabla\bm{\psi}-\nabla\left(\rho'^{q-1}\right)\cdot\nabla\bm{\psi}'\right| \leq\\ &\quad\left|\nabla\left(\rho^{q-1}\right)\right|\left\|\nabla\bm{\psi}-\nabla\bm{\psi}'\right\|_{*}+\left\|\nabla\bm{\psi}'\right\|_{*}\left|\nabla\left(\rho^{q-1}\right)-\nabla\left(\rho'^{q-1}\right)\right|. \end{align*} Using \eqref{eq:nablarho}, we get \begin{align*} \left|\nabla\left(\rho^{q-1}\right)\right|&=(q-1)\left|\rho^{q-2}\right|\left|\nabla\rho\right|\\ &\lesssim (q-1)\left\|\rho\right\|_{L^{\infty}}^{q-3/2}\left\|\nabla\bm{\psi}\right\|_{*}. \end{align*} Using this and \eqref{eq:MVEnablarho} for $\beta=q-1>2$, we have \begin{align} \nonumber \left|\mathrm{(II)}\right|&\lesssim_q\left(B_1\left\|\nabla\bm{\psi}\right\|_{*}\left\|\nabla\bm{\psi}'\right\|_{*}+ B_2\left\|\nabla\bm{\psi}'\right\|_{*}^2 \right)\left|\bm{\psi}-\bm{\psi}'\right|\\ &\quad+\left(B_3\left\|\nabla\bm{\psi}\right\|_{*}+B_4\left\|\nabla\bm{\psi}'\right\|_{*}\right)\left\|\nabla\bm{\psi}-\nabla\bm{\psi}'\right\|_{*}, \label{eq:ii} \end{align} where \begin{align*} \begin{dcases} \begin{aligned} B_1&=\left\|\rho\right\|_{L^{\infty}}^{q-2},\\ B_2&=\left\|\rho'\right\|_{L^{\infty}}^{1/2}\left(\left\|\rho\right\|_{L^{\infty}}^{q-5/2}+\left\|\rho'\right\|_{L^{\infty}}^{q-5/2}\right),\\ B_3&=\left\|\rho\right\|_{L^{\infty}}^{q-3/2},\\ B_4&=\left\|\rho\right\|_{L^{\infty}}^{q-2}\left\|\rho'\right\|_{L^{\infty}}^{1/2}. \end{aligned} \end{dcases} \end{align*} \textit{Part \ref{eq:E}(III).}\\ By adding and subtracting the term $\Delta\left(\rho^{q-1}\right)\bm{\psi}'$, we can write \begin{align*} \left|\mathrm{(III)}\right|\leq\underbrace{\left|\Delta\left(\rho^{q-1}\right)\right|}_{=:\,\mathrm{(a)}}\left|\bm{\psi}-\bm{\psi}'\right|+\underbrace{\left|\Delta\left(\rho^{q-1}\right)-\Delta\left(\rho'^{q-1}\right)\right|}_{=:\,\mathrm{(b)}}\left\|\rho'\right\|^{1/2}_{L^{\infty}}. 
\end{align*} We have, using $\Delta\rho=\overline{\bm{\psi}}\cdot\Delta\bm{\psi}+\overline{\Delta\bm{\psi}}\cdot\bm{\psi}+2\left\|\nabla\bm{\psi}\right\|_{*}^2$ and the Cauchy--Schwarz inequality, \begin{align} \label{eq:Deltarho} \left|\Delta\rho\right|\lesssim{} \left\|\rho\right\|^{1/2}_{L^{\infty}}\left|\Delta\bm{\psi}\right|+\left\|\nabla\bm{\psi}\right\|_{*}^2. \end{align} Using this and \eqref{eq:nablarho}, we get \begin{align*} \mathrm{(a)}&\lesssim_q (q-2)\left|\rho\right|^{q-3}\left|\nabla\rho\right|^2+\left|\rho\right|^{q-2}\left|\Delta\rho\right|\\ &\lesssim{} \left\|\rho\right\|_{L^{\infty}}^{q-2} \left[ \left(2q-3\right) \left\|\nabla\bm{\psi}\right\|_{*}^2 + \left\|\rho\right\|_{L^{\infty}}^{1/2} \left|\Delta\bm{\psi}\right| \right]. \end{align*} By similar reasoning, we get, by adding and subtracting the terms $\rho^{q-3}\left|\nabla\rho'\right|^2$ and $\rho^{q-2}\Delta\rho'$, \begin{align*} \mathrm{(b)}&\lesssim_q(q-2)\biggl(\left|\rho\right|^{q-3}\underbrace{\left|\left|\nabla\rho\right|^2-\left|\nabla\rho'\right|^2\right|}_{=:\,\mathrm{(i)}}+\underbrace{\left|\rho^{q-3}-\rho'^{q-3}\right|\left|\nabla\rho'\right|^2}_{=:\,\mathrm{(ii)}}\biggr)\\ &\quad + \left|\rho\right|^{q-2}\underbrace{\left|\Delta\rho-\Delta\rho'\right|}_{=:\,\mathrm{(iii)}}+\underbrace{\left|\rho^{q-2}-\rho'^{q-2}\right|\left|\Delta\rho'\right|}_{=:\,\mathrm{(iv)}}. \end{align*} Using \eqref{eq:nablarho} for $\rho$ and $\rho'$ and \eqref{eq:nablarhodiff}, we get \begin{align*} \mathrm{(i)}&\leq \left( \left|\nabla\rho\right|+\left|\nabla\rho'\right| \right) \left|\nabla\rho-\nabla\rho'\right|\\ &\lesssim{}\left(\left\|\rho\right\|^{1/2}_{L^{\infty}}\left\|\nabla\bm{\psi}\right\|_{*}+\left\|\rho'\right\|^{1/2}_{L^{\infty}}\left\|\nabla\bm{\psi}'\right\|_{*}\right)\left(\left|\bm{\psi}-\bm{\psi}'\right|\left\|\nabla\bm{\psi}\right\|_{*}+ \left\|\rho'\right\|^{1/2}_{L^{\infty}}\left\|\nabla\bm{\psi}-\nabla\bm{\psi}'\right\|_{*}\right). \end{align*} Further, using \eqref{eq:MVE} with $\alpha=q-3\geq1/2$ and \eqref{eq:nablarho} for $\rho'$, we have \begin{align*} \mathrm{(ii)}\lesssim_q \left(\left\|\rho\right\|^{q-7/2}_{L^{\infty}}+\left\|\rho'\right\|^{q-7/2}_{L^{\infty}}\right)\left\|\rho'\right\|_{L^{\infty}}\left\|\nabla\bm{\psi}'\right\|_{*}^2\left|\bm{\psi}-\bm{\psi}'\right|. \end{align*} In addition, by adding and subtracting the terms $\overline{\bm{\psi}'}\cdot\Delta\bm{\psi}$ and $\overline{\Delta\bm{\psi}}\cdot\bm{\psi}'$, using the reverse triangle and Cauchy--Schwarz inequalities, we have \begin{align*} \mathrm{(iii)}&=\biggl|2\left(\left\|\nabla\bm{\psi}\right\|_{*}^2-\left\|\nabla\bm{\psi}'\right\|_{*}^2\right)+\left(\overline{\bm{\psi}}-\overline{\bm{\psi}'}\right)\cdot\Delta\bm{\psi}+\left(\Delta\bm{\psi}-\Delta\bm{\psi}'\right)\cdot\overline{\bm{\psi}'}\\ &\quad+\left(\bm{\psi}-\bm{\psi}'\right)\cdot\overline{\Delta\bm{\psi}}+\left(\overline{\Delta\bm{\psi}}-\overline{\Delta\bm{\psi}'}\right)\cdot\bm{\psi}'\biggr|\\ &\lesssim{}\left(\left\|\nabla\bm{\psi}\right\|_{*}+\left\|\nabla\bm{\psi}'\right\|_{*}\right)\left\|\nabla\bm{\psi}-\nabla\bm{\psi}'\right\|_{*}+\left|\Delta\bm{\psi}\right|\left|\bm{\psi}-\bm{\psi}'\right|+\left\|\rho'\right\|_{L^{\infty}}^{1/2}\left|\Delta\bm{\psi}-\Delta\bm{\psi}'\right|. 
\end{align*} Furthermore, using \eqref{eq:MVE} with $\alpha=q-2>1$ and \eqref{eq:Deltarho} for $\rho'$, we obtain \begin{align*} \mathrm{(iv)}\lesssim_{q}\left(\left\|\rho\right\|^{q-5/2}_{L^{\infty}}+\left\|\rho'\right\|^{q-5/2}_{L^{\infty}}\right)\left(\left\|\rho'\right\|_{L^{\infty}}^{1/2}\left|\Delta\bm{\psi}'\right|+\left\|\nabla\bm{\psi}'\right\|_{*}^2\right)\left|\bm{\psi}-\bm{\psi}'\right|. \end{align*} Altogether, we get \begin{align} \nonumber \left|\mathrm{(III)}\right|&\lesssim_{q} \biggl(C_1\left\|\nabla\bm{\psi}\right\|_{*}^2+C_2\left\|\nabla\bm{\psi}\right\|_{*}\left\|\nabla\bm{\psi}'\right\|_{*}+C_3\left\|\nabla\bm{\psi}'\right\|_{*}^2+C_4\left|\Delta\bm{\psi}\right|+C_5\left|\Delta\bm{\psi}'\right| \biggr)\\ &\qquad \times \left|\bm{\psi}-\bm{\psi}'\right|\nonumber\\ &\quad+\left(C_6\left\|\nabla\bm{\psi}\right\|_{*}+C_7\left\|\nabla\bm{\psi}'\right\|_{*}\right)\left\|\nabla\bm{\psi}-\nabla\bm{\psi}'\right\|_{*}+C_8\left|\Delta\bm{\psi}-\Delta\bm{\psi}'\right|, \label{eq:iii} \end{align} where \begin{align*} \begin{dcases} \begin{aligned} C_1&=\left\|\rho\right\|_{L^{\infty}}^{q-5/2}\left(\left\|\rho\right\|_{L^{\infty}}^{1/2}+\left\|\rho'\right\|_{L^{\infty}}^{1/2}\right),\\ C_2&=\left\|\rho\right\|_{L^{\infty}}^{q-3}\left\|\rho'\right\|_{L^{\infty}},\\ C_3&=\left\|\rho'\right\|_{L^{\infty}}\left[\left\|\rho\right\|_{L^{\infty}}^{q-7/2}\left(1+\left\|\rho\right\|_{L^{\infty}}\right)+\left\|\rho'\right\|_{L^{\infty}}^{q-7/2}\left(1+\left\|\rho'\right\|_{L^{\infty}}\right)\right],\\ C_4&=\left\|\rho\right\|_{L^{\infty}}^{q-3/2}\left(\left\|\rho\right\|_{L^{\infty}}^{1/2}\left\|\rho'\right\|_{L^{\infty}}^{1/2}+1\right),\\ C_5&=\left\|\rho'\right\|_{L^{\infty}}\left(\left\|\rho\right\|_{L^{\infty}}^{q-5/2}+\left\|\rho'\right\|_{L^{\infty}}^{q-5/2}\right),\\ C_6&=\left\|\rho\right\|_{L^{\infty}}^{q-5/2}\left\|\rho'\right\|_{L^{\infty}}^{1/2}\left(\left\|\rho\right\|_{L^{\infty}}^{1/2}+\left\|\rho'\right\|_{L^{\infty}}^{1/2}\right),\\ C_7&=\left\|\rho\right\|_{L^{\infty}}^{q-3}\left\|\rho'\right\|_{L^{\infty}}^{1/2}\left(\left\|\rho\right\|_{L^{\infty}}+\left\|\rho'\right\|_{L^{\infty}}\right),\\ C_8&=\left\|\rho\right\|_{L^{\infty}}^{q-2}\left\|\rho'\right\|_{L^{\infty}}. \end{aligned} \end{dcases} \end{align*} \textit{Conclusion to Part \ref{eq:E}.}\\ The function $\mathcal{L}_{q,\lambda}$ can be split into terms $\mathcal{L}_{q,\lambda}=\mathcal{L}_0+\mathcal{L}_{\mathrm{I}}+\mathcal{L}_{\mathrm{II}}+\mathcal{L}_{\mathrm{III}}$. As discussed at the start of Part \ref{eq:E}, $\mathcal{L}_0$ is the contribution of Part \ref{eq:D} for $p=2$. The other terms stem from (I), (II) and (III) in \eqref{eq:laplacianP2}, and are obtained by taking the $L^2$ norm in \eqref{eq:i}, \eqref{eq:ii} and \eqref{eq:iii}, respectively. Here, we only discuss the $\mathcal{L}_{\mathrm{III}}$ term; all scalars $C_i$ can be bounded using \eqref{eq:rhoinfty}. The same embedding of $H^2$ into $L^\infty$ is also used for the factors $\left|\bm{\psi}-\bm{\psi}'\right|$ in the $C_1,\ldots,C_5$ terms. For the $C_1$ and $C_3$ terms, we additionally use the $L^4$ integrability of the gradient terms $\left|\nabla\psi_j\right|$, together with the embedding of $H^1$ into $L^4$; after an application of Young's product inequality, the $C_2$ term is handled along the same lines. For the $C_4$ and $C_5$ terms, we use the $L^2$ integrability of the Laplacian term $\left|\Delta\bm{\psi}\right|$; a representative computation is displayed below. For the $C_6$ and $C_7$ terms, we use the Cauchy--Schwarz inequality, and the $C_8$ term follows by definition of the $\left(H^2\right)^N$ norm. 
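For instance, for the $C_4$ contribution one computes (a representative estimate of ours, spelled out for illustration) \begin{align*} \left\|C_4\left|\Delta\bm{\psi}\right|\left|\bm{\psi}-\bm{\psi}'\right|\right\|_{L^2}&\leq C_4\left\|\left|\bm{\psi}-\bm{\psi}'\right|\right\|_{L^{\infty}}\left\|\left|\Delta\bm{\psi}\right|\right\|_{L^2}\\ &\lesssim{} C_4\left\|\bm{\psi}-\bm{\psi}'\right\|_{\left(H^2\right)^N}\left\|\bm{\psi}\right\|_{\left(H^2\right)^N}, \end{align*} after which the scalar $C_4$ itself is bounded through \eqref{eq:rhoinfty}.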
The terms $\mathcal{L}_{\mathrm{I}}$ and $\mathcal{L}_{\mathrm{II}}$ can be handled fully analogously. \end{proof} An immediate consequence of Lemmata \ref{lem:lipschitzconvolution} and \ref{lem:lipschitzlda} is the following result, concerning Lipschitz estimates for the non-linear mapping $\bm{\psi}\longmapsto v_{\textsc{hx}}[\rho]\bm{\psi}$ for feasible electronic configurations $\bm{\psi}\in\mathcal{B}_{\mrm{el}}(\tau) $. \begin{lemma}[Lipschitz estimates for the non-linearity] \label{lem:lipschitzP} For $q\geq 7/2$ and any $\lambda\in\mathbb{R}$, there exists a non-decreasing function $\mathscr{L}_{q,\lambda}:\mathbb{R}_0^+\longrightarrow \mathbb{R}_0^+$, which vanishes only at the origin, such that for all $\bm{\psi},\bm{\psi}'\in\mathcal{B}_{\mrm{el}}(\tau) $, with $\rho':=\left|\bm{\psi}'\right|^2$, \begin{align*} \left\|v_{\textsc{hx}}[\rho]\bm{\psi}-v_{\textsc{hx}}[\rho']\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}&\leq\mathscr{L}_{q,\lambda}\left(\alpha+\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}\right)\\ &\qquad\times \left\|\bm{\psi}-\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)},\tag{F}\label{eq:F}\\ \left\|v_{\textsc{hx}}[\rho]\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}&\leq \left(\alpha+\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}\right)\mathscr{L}_{q,\lambda}\left(\alpha+\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}\right).\tag{G}\label{eq:G} \end{align*} \end{lemma} \begin{proof} This is primarily based on the previous Lemmata: combining the estimates from Lemmata \ref{lem:lipschitzconvolution} \eqref{eq:C} and \ref{lem:lipschitzlda} \eqref{eq:E}, we have that the mapping $\bm{\psi}\longmapsto v_{\textsc{hx}}[\rho]\bm{\psi}$ is locally Lipschitz-continuous in $\left(H^2\right)^N$. Consequently, for all $\bm{\psi},\bm{\psi}'\in C^0\left([0,\tau];\left(H^2\right)^N\right)$ we can write \begin{align} \nonumber \label{eq:LipP} \left\|v_{\textsc{hx}}[\rho]\bm{\psi}-v_{\textsc{hx}}[\rho']\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}\leq\,&\mathscr{L}_{q,\lambda}\left(\max\left\{\left\|\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)},\left\|\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}\right\}\right)\\ &\times\left\|\bm{\psi}-\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}, \end{align} where $\mathscr{L}_{q,\lambda}$ can be constructed from the previously established bounds. Since this function is non-decreasing, \eqref{eq:F} follows from \eqref{eq:LipP} by the definition of $\mathcal{B}_{\mrm{el}}(\tau) $. Similarly, \eqref{eq:G} follows from \eqref{eq:LipP} by first setting $\bm{\psi}'\equiv \mathbf{0}$. \end{proof} \subsection{Local-in-time existence} \label{sec:locex} As stated before, we devote this section to proving a local-in-time existence result for \eqref{eq:KSN}. To this end, we fix some arbitrary time $0<T<\infty$, and we let $\tau\leq T$. We define \begin{align*} \gamma :=2\left|\mathbf{v}^0\right|+1. 
\end{align*} Using this bound $\gamma$ and putting $\Theta=\tau$, we can make use of Lemma \ref{lem:evolutionoperators} (viii) and construct the $\mathcal{L}\left(\left(H^2\right)^N\right)$ bound $B_{\tau,\gamma}$ on the propagators $U(t,s)$, $(t,s)\in[0,\tau]^2$, associated with the family of Hamiltonians $\left\{\widehat{\mathfrak{H}}(t),t\in[0,\tau]\right\}$ for a given $\mathbf{x}\in C^1\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)$ for which \begin{align*} \left\|\dot{\mathbf{x}}\right\|_{C^0\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)}\leq \gamma. \end{align*} Using this bound $B_{\tau,\gamma}$, we can define the radii \begin{align*} \alpha &:=2B_{\tau,\gamma}\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N},\\ \delta &:=\frac{1}{4}\min_{\subalign{k,\ell\in&\{1,\ldots,M\}\\&\,k\neq \ell}}\left\{\left|\mathbf{x}^{0}_k-\mathbf{x}^{0}_\ell\right|\right\}, \end{align*} for the balls centred around the initial configurations $\bm{\psi}^0\in\left(H^2\right)^N$ and $\mathbf{x}^0\in\left(\mathbb{R}^3\right)^M$, with $\mathbf{x}^0_k\neq \mathbf{x}^0_\ell$ for $k\neq \ell$: \begin{align*} B_\alpha\left(\bm{\psi}^0\right)&:=\left\{\bm{\psi}\in\left(H^2\right)^N\middle\rvert\left\|\bm{\psi}-\bm{\psi}^0\right\|_{\left(H^2\right)^N}<\alpha\right\},\\ B_\delta\left(\mathbf{x}^0\right)&:=\left\{\mathbf{x}\in\left(\mathbb{R}^3\right)^M\middle\rvert\left|\mathbf{x}-\mathbf{x}^0\right|<\delta\right\}. \end{align*} We obtain the closed balls by taking the closure: \begin{align*} \overline{B_\alpha\left(\bm{\psi}^0\right)}&=\left\{\bm{\psi}\in\left(H^2\right)^N\middle\rvert\left\|\bm{\psi}-\bm{\psi}^0\right\|_{\left(H^2\right)^N}\leq \alpha\right\},\\ \overline{B_\delta(\mathbf{x}^0)}&=\left\{\mathbf{x}\in\left(\mathbb{R}^3\right)^M\middle\rvert\left|\mathbf{x}-\mathbf{x}^0\right|\leq \delta\right\}. \end{align*} We define the electronic and the nuclear feasible regions for the short-time interval $[0,\tau]$ as follows: \begin{align*} \mathcal{B}_{\mathrm{el}}\left(\tau \right)&:=\left\{\bm{\psi} \in C^1\left([0,\tau];\left(L^2\right)^N\right)\cap C^0\left([0,\tau];\overline{B_{\alpha}\left(\bm{\psi}^0\right)}\right)\,\,\middle\rvert \,\,\bm{\psi}(0)=\bm{\psi}^0\right\},\\ \mathcal{B}_{\mathrm{nuc}}\left(\tau \right) &:=\left\{\mathbf{x}\in C^1\left([0,\tau];\overline{B_{\delta}\left(\mathbf{x}^0\right)}\right) \,\middle\rvert\,\mathbf{x}(0)=\mathbf{x}^0,\dot{\mathbf{x}}(0)=\textbf{v}^0,\left\|\dot{\mathbf{x}}\right\|_{C^0\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)}\leq \gamma\right\}. \end{align*} These regions are equipped with the $C^0\left([0,\tau];\left(L^2\right)^N\right)$, resp. the $C^1\left([0,\tau];\left(\mathbb{R}^{3}\right)^{M}\right)$ norms, and are designed to contain short-time solutions $\bm{\psi}$ to \eqref{eq:KS}, resp. $\mathbf{x}$ to \eqref{eq:N} on the short-time interval $[0,\tau]$, which we call feasible electronic, resp. nuclear configurations.\\\\ We can now define the following mappings, which connect feasible nuclear and electronic configurations: \begin{alignat*}{3} \mathcal{N}&:\mathcal{B}_{\mathrm{el}}\left(\tau\right)&&\longrightarrow \mathcal{B}_{\mathrm{nuc}}\left(\tau\right)\cap C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right), \\ \mathcal{E}&:\mathcal{B}_{\mathrm{nuc}}\left(\tau\right)&&\longrightarrow\mathcal{B}_{\mathrm{el}}\left(\tau\right). 
\end{alignat*} \begin{lemma} \label{lem:KStoCL} We let $A_\gamma,C_\gamma$ be defined as in Lemma \ref{lem:evolutionoperators} (viii), and recall that $\alpha$ is of the form \begin{align*} \alpha(\tau)=2A_\gamma^{1+C_\gamma\tau}\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}. \end{align*} There exist constants $\mathcal{C}_1,\mathcal{C}_2>0$ with $\mathcal{C}_i=\mathcal{C}_i\left(\bm{z},\bm{m},\delta,\mathbf{x}^0\right)$, $i\in\{1,2\}$, such that for $\tau>0$ satisfying \begin{align*} \max\left\{\tau,\tau^2\right\} \left[\left|\mathbf{v}^0\right|+ \mathcal{C}_1\left(\alpha(\tau)+\left\|\bm{\psi}^0\right\|_{\left(H^{2}\right)^N}\right)^2 +\mathcal{C}_2 \right]<\min\left\{1,\delta\right\}, \tag{A1}\label{eq:A1} \end{align*} the system \eqref{eq:N} has for given $\bm{\psi}\in\mathcal{B}_{\mrm{el}}(\tau) $ a unique short-time solution $\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) \cap C^2\left([0,\tau];\overline{B_\delta\left(\mathbf{x}^0\right)}\right)$, and the mapping $\mathcal{N}[\bm{\psi}]=\mathbf{x}$ is bounded and continuous. \end{lemma} \begin{proof} \textit{Part 1: Existence and uniqueness of $\mathbf{x}$ in $C^2\left([0,\tau];\overline{B_\delta\left(\mathbf{x}^0\right)}\right)$.}\\ Here, we can write the right-hand side of \eqref{eq:N} without parameters, i.e. as $\mathbf{a}=\mathbf{a}(t,\mathbf{x})$: since $\bm{\psi}$ is given, so is $\rho$. Note that we now write $t$ explicitly as a variable of the $\mathbf{a}^1_k$ terms, and that it does not appear in the $\mathbf{a}^2_k$ terms.\\\\ We define the compact set \begin{align*} K:=[0,\tau]\times \overline{B_\delta(\mathbf{x}^0)}, \end{align*} which is a subset of the domain of $\mathbf{a}$. Note that by the definition of $\delta$ and the reverse triangle inequality, we have for all $\mathbf{x}\in \overline{B_{\delta}\left(\mathbf{x}^0\right)}$ and $k,\ell\in\{1,\ldots,M\}$ with $k\neq \ell$, \begin{align} \left|\mathbf{x}_k-\mathbf{x}_\ell\right|& \geq\,\,\min_{\subalign{m,n&\in\{1,\ldots,M\}\\&\,m\neq n}}\left\{\left|\mathbf{x}^0_m-\mathbf{x}^0_n\right|\right\}-2\left|\mathbf{x}-\mathbf{x}^{0}\right| \nonumber\\ &\geq 2\delta,\label{eq:delta1}\\ \left|\mathbf{x}_k\right|&\nonumber \leq\left|\mathbf{x}\right|\\ &\leq \left|\mathbf{x}^0\right|+\left|\mathbf{x}-\mathbf{x}^0\right|\nonumber\\ &\leq \left|\mathbf{x}^0\right|+\delta. \label{eq:delta2} \end{align} First, we prove $\mathbf{a}$ is continuous in $(t,\mathbf{x})$ on $K$, via continuity of $\mathbf{a}^1_k$ and $\mathbf{a}^2_k$ for all $k\in\{1,\ldots,M\}$. To this end, we consider a sequence $\left((t_n,\mathbf{x}_n)\right)_{n\in\mathbb{N}}\subset K$ with $(t_n,\mathbf{x}_n)\xrightarrow{n\longrightarrow\infty}(t^*,\mathbf{x})\in K$. 
For the functions $\mathbf{a}^1_k$, using the Cauchy--Schwarz and Hardy inequalities, we obtain for all $n$ \begin{align*} \left|\mathbf{a}^1_k\left(t_n,\mathbf{x}_{n}\right)-\mathbf{a}^1_k\left(t^*,\mathbf{x}_{n}\right)\right| &\lesssim_{z_k,m_k} \sum_{j=1}^N\left\langle\left|\mathbf{x}_{nk}-\cdot\,\right|^{-2},\left|\left|\psi_j(t_n,\cdot)\right|^2-\left|\psi_j(t^*,\cdot)\right|^2\right|\right\rangle\\ &\lesssim{} \sum_{j=1}^N\biggl\langle\left|\mathbf{x}_{nk}-\cdot\,\right|^{-1}\max_{t\in[0,\tau]}\left\{\left|\psi_j(t,\cdot)\right|\right\},\left|\mathbf{x}_{nk}-\cdot\,\right|^{-1}\left|\psi_j(t_n,\cdot)-\psi_j(t^*,\cdot)\right|\biggr\rangle\\ &\lesssim{}\sum_{j=1}^N\left\|\nabla\psi_j\right\|_{L^{\infty}\left([0,\tau];L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)\right)}\left\|\nabla\psi_j(t_n,\cdot)-\nabla\psi_j(t^*,\cdot)\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}\\ &\xrightarrow{n\longrightarrow\infty}0, \end{align*} as $\bm{\psi}\in C^0\left([0,\tau];\left(H^2\right)^N\right)$. Using this and Lemma \ref{lem:forces}, by which $\mathbf{a}^1_k(t^*,\cdot)\in C^0\left(\left(\mathbb{R}^3\right)^M;\mathbb{C}^3\right)$, we have for all $n$ \begin{align*} \left|\mathbf{a}^1_k(t_n,\mathbf{x}_{n})-\mathbf{a}^1_k(t^*, \mathbf{x})\right| &\leq \left|\mathbf{a}^1_k(t_n,\mathbf{x}_{n}) -\mathbf{a}^1_k(t^*,\mathbf{x}_{n})\right| + \left|\mathbf{a}^1_k(t^*,\mathbf{x}_{n})-\mathbf{a}^1_k(t^*,\mathbf{x})\right|\\ &\xrightarrow{n\longrightarrow\infty}0. \end{align*} The functions $\mathbf{a}^2_k$ are not explicitly time-dependent and are sums of smooth functions on $\overline{B_\delta\left(\mathbf{x}^0\right)}$; they are therefore continuous on $K$ as well. Also, by Lemma \ref{lem:forces}, $\mathbf{a}^1_k(t,\cdot)$ is bounded in $(t,\mathbf{x})$ on $K$ by \begin{align} \nonumber \left\|\mathbf{a}^1_k\right\|_{C^0\left([0,\tau],W^{1,\infty}\left(\overline{B_\delta\left(\mathbf{x}^0\right)};\mathbb{C}^3\right)\right)}&\lesssim_{z_k,m_k}\left\|\bm{\psi}\right\|^2_{C^0\left([0,\tau];\left(H^2\right)^N\right)} \\ &\leq\left(\alpha+\left\|\bm{\psi}^0\right\|_{\left(H^{2}\right)^N}\right)^2, \label{eq:ak1} \end{align} since $\bm{\psi}\in\mathcal{B}_{\mrm{el}}(\tau) $. The functions $\mathbf{a}^2_k$ are bounded on $\overline{B_\delta\left(\mathbf{x}^0\right)}$, using \eqref{eq:delta1}, by \begin{align} \nonumber\left\|\mathbf{a}^2_k\right\|_{L^{\infty}\left(\overline{B_\delta\left(\mathbf{x}^0\right)};\mathbb{C}^3\right)} &\lesssim_{z_k,m_k} \sum_{\subalign{m&=1,\\m&\neq k}}^Mz_m\left\||\mathbf{x}_k-\mathbf{x}_m|^{-2}\right\|_{L^{\infty}\left(\overline{B_\delta\left(\mathbf{x}^0\right)};\mathbb{C}\right)}\\ &\lesssim{}\delta^{-2}\sum_{\subalign{m&=1,\\m&\neq k}}^Mz_m.\label{eq:ak2} \end{align} This gives us as an upper bound \begin{align} \left\|\mathbf{a}\right\|_{C^0\left(K;\left(\mathbb{C}^{3}\right)^M\right)} \leq \mathcal{M}&:= C_1\left(\alpha+\left\|\bm{\psi}^0\right\|_{\left(H^{2}\right)^N}\right)^2+C_2 \label{eq:Mstar} \end{align} for some constants $C_1=C_1(\bm{z},\bm{m})>0$ and $C_2=C_2(\bm{z},\bm{m},\delta)>0$. Now, we prove local Lipschitz-continuity of $\mathbf{a}$ in $\mathbf{x}$ on $K$, i.e. uniformly in $t\in[0,\tau]$. Similarly to the continuity proof, $\mathbf{a}$ is locally Lipschitz-continuous if $\mathbf{a}^1_k$ and $\mathbf{a}^2_k$ are, for all $k\in\{1,\ldots,M\}$. 
By Lemma \ref{lem:forces}, the $\mathbf{a}^1_k$ terms $\mathbf{a}^1_k(t,\cdot)$ are uniformly Lipschitz-continuous for all $t\in[0,\tau]$, since \begin{align*} m_k^{-1}\sum_{j=1}^N\left\|D\mathbf{f}^{jj}_k(t,\cdot)\right\|_{L^{\infty}(\mathbb{R}^3;\mathbb{C}^{3\times 3})}\lesssim_{z_k,m_k}\left(\alpha+\left\|\bm{\psi}^0\right\|_{\left(H^{2}\right)^N}\right)^2, \end{align*} by definition of $\mathcal{B}_{\mrm{el}}(\tau) $. For the $\mathbf{a}^2_k$ terms, we first note that for all $\mathbf{x},\mathbf{x}'\in \overline{B_{\delta}\left(\mathbf{x}^0\right)}$, using \eqref{eq:delta1}, \begin{align*} \left|\frac{\mathbf{x}_{k}-\mathbf{x}_{m}}{\left|\mathbf{x}_{k}-\mathbf{x}_{m}\right|^3} -\frac{\mathbf{x}_{k}'-\mathbf{x}_{m}'}{\left|\mathbf{x}'_{k}-\mathbf{x}'_{m}\right|^3}\right|\leq \frac{1}{2\delta^3}\left|\mathbf{x}-\mathbf{x}'\right|+\underbrace{\left|\frac{\mathbf{x}_{k}-\mathbf{x}_{m}}{\left|\mathbf{x}'_{k}-\mathbf{x}'_{m}\right|^3}-\frac{\mathbf{x}'_k-\mathbf{x}'_m}{\left|\mathbf{x}_{k}-\mathbf{x}_{m}\right|^3}\right|}_{=:\,\,(\Xi)}. \end{align*} Using \eqref{eq:delta1} in \eqref{eq:starB1}, \eqref{eq:delta2} in \eqref{eq:starB2}, and the mean value theorem in \eqref{eq:starB3}, we estimate \begin{align*} (\Xi)&=\left| \frac{\mathbf{x}_{k}-\mathbf{x}'_{k}}{\left|\mathbf{x}'_{k}-\mathbf{x}'_{m}\right|^3} + \frac{\mathbf{x}'_{m}-\mathbf{x}_{m}}{\left|\mathbf{x}_{k}-\mathbf{x}_{m}\right|^3} + \left(\mathbf{x}'_k-\mathbf{x}_{m}\right) \left( \left|\mathbf{x}'_{k}-\mathbf{x}'_{m}\right|^{-3} -\left|\mathbf{x}_{k}-\mathbf{x}_{m}\right|^{-3} \right) \right| \\ &\leq \frac{1}{4\delta^3}\left|\mathbf{x}-\mathbf{x}'\right|+\frac{\left|\mathbf{x}'_k\right|+\left|\mathbf{x}_{m}\right|}{\left|\mathbf{x}_{k}-\mathbf{x}_{m}\right|^{3}\left|\mathbf{x}'_{k}-\mathbf{x}'_{m}\right|^{3}} \left|\left|\mathbf{x}_{k}-\mathbf{x}_{m}\right|^{3}-\left|\mathbf{x}'_{k}-\mathbf{x}'_{m}\right|^{3}\right|\tag{$*$}\label{eq:starB1}\\ &\leq\frac{1}{4\delta^3}\left|\mathbf{x}-\mathbf{x}'\right|+2^{-5}\delta^{-6}\left(\left|\mathbf{x}^0\right|+\delta\right) \left|\left|\mathbf{x}_{k}-\mathbf{x}_{m}\right|^{3}-\left|\mathbf{x}'_{k}-\mathbf{x}'_{m}\right|^{3}\right|\tag{$*2$}\label{eq:starB2}\\ &\lesssim\left[\delta^{-3}+\delta^{-6}\left(\left|\mathbf{x}^0\right|+\delta\right)^3\right]\left|\mathbf{x}-\mathbf{x}'\right|.\tag{$*3$}\label{eq:starB3} \end{align*} Altogether, we get for the $\mathbf{a}_k^2$ term \begin{align*} &\sum_{k=1}^M\left|\mathbf{a}^2_k\left(\mathbf{x}\right)-\mathbf{a}^2_k\left(\mathbf{x}'\right)\right| \lesssim_{\bm{z},\bm{m},\delta,\mathbf{x}^0} \left|\mathbf{x}-\mathbf{x}'\right|. \end{align*} This yields the following uniform Lipschitz constant for $\mathbf{a}$ on $K$ with respect to the $\left(\mathbb{R}^3\right)^M$ norm: \begin{align} L:=C_1\left(\alpha+\left\|\bm{\psi}^0\right\|_{\left(H^{2}\right)^N}\right)^2+C_2' \label{eq:L} \end{align} with $C_1$ from \eqref{eq:Mstar} and some constant $C_2'=C_2'\left(\bm{z},\bm{m},\delta,\mathbf{x}^0\right)$. Now, we define $\mathcal{T}$ as the following mapping on the complete metric space $C^0\left([0,\tau];\overline{B_\delta\left(\mathbf{x}^0\right)}\right)$, equipped with the $C^0\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)$ norm: \begin{align} \label{eq:mappingT} \left(\mathcal{T}\left[\mathbf{x}\right]\right)(t):=\mathbf{x}^0+\mathbf{v}^0t+\int_0^t(t-\sigma)\mathbf{a}(\sigma,\mathbf{x}(\sigma))\mathrm{d} \sigma. \end{align} Note that we have proven that $\mathbf{a}$ is continuous on $K$. 
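Before continuing, we remark that the map \eqref{eq:mappingT} just defined is constructive: successive approximation starting from the constant curve $\mathbf{x}\equiv\mathbf{x}^0$ converges geometrically to the fixed point once the contraction constant $L\tau^2/2$ is below one (cf. \eqref{eq:A1b} below). The following minimal numerical sketch of this iteration is ours and purely illustrative: the smooth, globally Lipschitz toy acceleration field stands in for the Coulomb forces collected in $\mathbf{a}$, and all names in it are hypothetical.
\begin{verbatim}
import numpy as np

# Picard iteration for x(t) = x0 + v0*t + int_0^t (t - s) a(s, x(s)) ds,
# the integral form of x'' = a(t, x) underlying the map T in (eq:mappingT).

def toy_a(t, x):
    # toy stand-in for a: smooth, bounded, globally Lipschitz in x
    return -np.sin(x) + 0.1 * np.cos(t)

def picard_step(a, x_curve, t, x0, v0):
    # evaluate (T[x])(t_i) on the grid by quadrature of (t_i - s) a(s, x(s))
    out = np.empty_like(x_curve)
    for i, ti in enumerate(t):
        integrand = (ti - t[: i + 1]) * a(t[: i + 1], x_curve[: i + 1])
        out[i] = x0 + v0 * ti + np.trapz(integrand, t[: i + 1])
    return out

tau, n = 0.5, 200                  # short time horizon and grid size
t = np.linspace(0.0, tau, n)
x0, v0 = 1.0, 0.2                  # initial position and velocity
x = np.full(n, x0)                 # initial guess: the constant curve x^0

for k in range(25):
    x_new = picard_step(toy_a, x, t, x0, v0)
    err = np.max(np.abs(x_new - x))
    x = x_new
    if err < 1e-12:                # increments shrink geometrically
        break
print(f"converged after {k + 1} iterations, last increment {err:.2e}")
\end{verbatim}
In this one-dimensional toy setting ($M=1$, scalar positions), the number of iterations needed is governed by the ratio $L\tau^2/2$, in line with the smallness condition \eqref{eq:A1}.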
Moreover, we have proven that $\mathbf{a}$ is bounded on $K$. Because of this, we have for all $\mathbf{x}\in C^0\left([0,\tau];\overline{B_\delta\left(\mathbf{x}^0\right)}\right)$ \begin{align*} \left\|\mathcal{T}\left[\mathbf{x}\right]-\mathbf{x}^0\right\|_{C^0\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)}\leq \left|\mathbf{v}^0\right|\tau+\frac{\mathcal{M}\tau^2}{2}. \end{align*} Note that \eqref{eq:A1} implies \begin{align} \left|\mathbf{v}^0\right|\tau+ \left[\mathcal{C}_1\left(\alpha(\tau)+\left\|\bm{\psi}^0\right\|_{\left(H^{2}\right)^N}\right)^2 +\mathcal{C}_2 \right]\tau^2\leq\delta, \tag{A1a}\label{eq:A1a} \end{align} with $2\mathcal{C}_i\geq C_i$, $i\in\{1,2\}$, from \eqref{eq:Mstar}. Therefore, $\mathcal{T}$ maps $C^0\left([0,\tau];\overline{B_\delta\left(\mathbf{x}^0\right)}\right)$ into itself. Besides this, we have proven that on $K$, the integrand $\mathbf{a}$ is Lipschitz-continuous in $\mathbf{x}$ uniformly with respect to $t$. Hence, for all $\mathbf{x},\mathbf{x}'\in C^0([0,\tau];\overline{B_{\delta}\left(\mathbf{x}^0\right)})$, we have \begin{align*} \left\|\mathcal{T}\left[\mathbf{x}\right]-\mathcal{T}\left[\mathbf{x}'\right]\right\|_{C^0\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)}&\leq \max_{t\in[0,\tau]}\left\{\int_0^t(t-s)\left|\mathbf{a}\left(s,\mathbf{x}(s)\right)-\mathbf{a}\left(s,\mathbf{x}'(s)\right)\right|\mathrm{d} s\right\}\\ &\leq\frac{L\tau^2}{2}\left\|\mathbf{x}-\mathbf{x}'\right\|_{C^0\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)}. \end{align*} Note that \eqref{eq:A1} also implies \begin{align*} \left[ \mathcal{C}_1\left(\alpha(\tau)+\left\|\bm{\psi}^0\right\|_{\left(H^{2}\right)^N}\right)^2 +\mathcal{C}_2 \right]\tau^2 < 1, \tag{A1b}\label{eq:A1b} \end{align*} with $2\mathcal{C}_1\geq C_1$ from \eqref{eq:Mstar} and $2\mathcal{C}_2\geq \max\left\{C_2,C_2'\right\}$, with $C_2$ from \eqref{eq:Mstar} and $C_2'$ from \eqref{eq:L}. Therefore, $\mathcal{T}$ is a contraction on $C^0\left([0,\tau];\overline{B_\delta\left(\mathbf{x}^0\right)}\right)$ in the $C^0\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)$ norm. Hence, we are in a position to apply the classical contraction mapping theorem, by which it follows that $\mathcal{T}$ has a unique fixed point in $C^0([0,\tau];\overline{B_{\delta}\left(\mathbf{x}^0\right)})$. Because of this, and since the integrand $\mathbf{a}$ is continuous on $K$, \eqref{eq:N} has a unique short-time solution in $C^2\left([0,\tau];\overline{B_{\delta}\left(\mathbf{x}^0\right)}\right)$.\\\\ \textit{Part 2: Localisation of $\mathbf{x}$ in $\mathcal{B}_{\mrm{nuc}}(\tau) $.}\\ Integrating the ODE in \eqref{eq:N} and using \eqref{eq:Mstar}, we get \begin{align*} \left\|\dot{\mathbf{x}}\right\|_{C^0\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)} &\leq \left|\mathbf{v}^0\right|+\tau\mathcal{M}. \end{align*} Note that \eqref{eq:A1} implies \begin{align*} \tau \left[\mathcal{C}_1\left(\alpha(\tau)+\left\|\bm{\psi}^0\right\|_{\left(H^{2}\right)^N}\right)^2 +\mathcal{C}_2 \right]\leq 1+\left|\mathbf{v}^0\right|, \tag{A1c}\label{eq:A1c} \end{align*} with $\mathcal{C}_1\geq C_1$ and $\mathcal{C}_2\geq \max\left\{C_2,C_2'/2\right\}$, with $C_1,C_2$ from \eqref{eq:Mstar} and $C_2'$ from \eqref{eq:L}. Therefore, $\mathbf{x}\in \mathcal{B}_{\mrm{nuc}}(\tau) \cap C^2\left([0,\tau];\overline{B_\delta\left(\mathbf{x}^0\right)}\right)$.\\\\ \textit{Part 3: Boundedness and continuity of $\mathcal{N}$.}\\ We have that $\mathcal{N}\left[\bm{\psi}\right]=\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) $, so $\mathbf{x}(t)\in \overline{B_\delta\left(\mathbf{x}^0\right)}$ for all $t\in[0,\tau]$. 
Also, we have that $\mathbf{x}$ solves \eqref{eq:N} on $[0,\tau]$, so $\ddot{\mathbf{x}}(t)=\mathbf{a}(t,\mathbf{x}(t))$ for all $t\in[0,\tau]$. Combining this with the results of Parts 1 and 2, we can prove the mapping $\mathcal{N}$ is bounded with respect to the $C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)$ norm: \begin{align*} \left\|\mathbf{x}\right\|_{C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)} &\le \left|\mathbf{x}^0\right|+\delta+\gamma+\mathcal{M}. \end{align*} In order to prove continuity of $\mathcal{N}$ in the $C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)$ norm, we consider a sequence $(\bm{\psi }_n)_{n\in\mathbb{N}}\subset\mathcal{B}_{\mrm{el}}(\tau) $ such that $\bm{\psi }_n\xrightarrow{n\longrightarrow\infty}\bm{\psi }\in\mathcal{B}_{\mrm{el}}(\tau) $. Similarly to $\mathbf{x}=\mathcal{N}\left[\bm{\psi }\right]$, we define $\mathbf{x}_n:=\mathcal{N}\left[\bm{\psi }_n\right]$ and $\rho_n:=\left|\bm{\psi}_n\right|^2$: then, \begin{align*} \left|\left(\ddot{\mathbf{x}}_{n}-\ddot{\mathbf{x}}\right)(t)\right|\leq\underbrace{\sum_{k=1}^M\left|\mathbf{a}^1_k\left[\rho_n\right](t,\mathbf{x}_{n})-\mathbf{a}^1_k\left[\rho\right](t,\mathbf{x})\right|}_{=:\,\mathrm{(I)}}+ \underbrace{\sum_{k=1}^M\left|\mathbf{a}^2_k\left(\mathbf{x}_{n}\right)-\mathbf{a}^2_k\left(\mathbf{x}\right)\right|}_{=:\,\mathrm{(II)}} \end{align*} for all $t\in[0,\tau]$. We can bound \begin{align*} &\mathrm{(I)}\lesssim_{\bm{z},\bm{m}}\\ &\underbrace{\sum_{j=1}^N\left|\left<\psi_{j}(t,\cdot),\mathbf{R}(\cdot-\mathbf{x}_k)\left(\psi_{nj}(t,\cdot)-\psi_j(t,\cdot)\right)\right>+\left<\psi_{nj}(t,\cdot)-\psi_j(t,\cdot),\mathbf{R}\left(\cdot-\mathbf{x}_{nk}\right)\psi_{nj}(t,\cdot)\right>\right|}_{=:\,\mathrm{(Ia)}}\\ &\qquad +\underbrace{\sum_{j=1}^N\left|\left<\psi _{j}(t,\cdot),\mathbf{R}\left(\cdot-\mathbf{x}_{nk}\right)\psi_{nj}(t,\cdot)\right>-\left<\psi_{j}(t,\cdot),\mathbf{R}\left(\cdot-\mathbf{x}_k\right)\psi_{nj}(t,\cdot)\right>\right|}_{=:\mathrm{(Ib)}}, \end{align*} where $\mathbf{R}(\mathbf{r}):=\mathbf{r}\left|\mathbf{r}\right|^{-3}$. Arguing as in \cite[p. 980]{Cances1999OnDynamics}, (Ia) can be bounded by \begin{align*} \beta_{n}&:=\sum_{j=1}^N \sup_{(t,\mathbf{r})\in[0,\tau]\times\mathbb{R}^3}\left\{ \left\langle \left|\cdot-\mathbf{r}\right|^{-1}\left|\psi_j(t,\cdot)+\psi_{nj}(t,\cdot)\right|, \left|\cdot-\mathbf{r}\right|^{-1}\left|\psi_{nj}(t,\cdot)-\psi_j(t,\cdot)\right|\right\rangle \right\}\\ &\xrightarrow{n\longrightarrow\infty}0, \end{align*} as $\bm{\psi }_n\xrightarrow{n\longrightarrow\infty}\bm{\psi}$ in $C^0\left([0,\tau];\left(L^2\right)^N\right)$. We also have \begin{align*} \mathrm{(Ib)}&\lesssim{}\sum_{j=1}^N\left\|\nabla G\left[\psi_j,\psi_{nj}\right]\left(\mathbf{x}_{nk}\right)- \nabla G\left[\psi_j,\psi_{nj}\right]\left(\mathbf{x}_k\right)\right\|_{C^0\left([0,\tau];\mathbb{C}^3\right)}\\ &\leq L_{1,n}\left|\mathbf{x}_{n}-\mathbf{x}\right|, \end{align*} where we adopted the map $G$ from \eqref{eq:mappingG}, and used that the functions $\nabla G\left[\psi_j,\psi_{nj}\right]$ are uniformly Lipschitz-continuous in $\mathbf{x}$, uniformly in $t\in[0,\tau]$. The term (II) is Lipschitz-continuous in $\mathbf{x}$ as well, with some Lipschitz constant $L_{2,n}\leq L$ from \eqref{eq:L}. For all $n$, $L_{1,n}$ and $L_{2,n}$ are uniformly bounded, since all $\bm{\psi}_n$ and $\bm{\psi}$ are taken from the uniformly bounded set $\mathcal{B}_{\mrm{el}}(\tau) $. 
Altogether, we can write the uniform bound \begin{align}\left|\ddot{\mathbf{x}}_{n}-\ddot{\mathbf{x}}\right|\leq \left(L_{1,n}+L_{2,n}\right)\left|\mathbf{x}_{n}-\mathbf{x}\right|+\beta_n. \label{eq:continuityC2} \end{align} With some abuse of notation we regard $\left(\mathbf{x}_{n}-\mathbf{x},\dot{\mathbf{x}}_{n}-\dot{\mathbf{x}}\right)$ as an element in $\mathbb{R}^{6M},$ and set $y_{n}:=\left|\left(\mathbf{x}_{n}-\mathbf{x},\dot{\mathbf{x}}_{n}-\dot{\mathbf{x}}\right)\right|^2.$ By the Cauchy--Schwarz and Young's product inequalities, we obtain \begin{align*} \dot{y}_{n}&\leq 2\left|\dot{\mathbf{x}}_{n}-\dot{\mathbf{x}}\right|\left(\left|\mathbf{x}_n-\mathbf{x}\right|+\left|\ddot{\mathbf{x}}_{n}-\ddot{\mathbf{x}}\right|\right)\\ &\leq 2\left(1+L_{1,n}+L_{2,n}\right)\left|\dot{\mathbf{x}}_{n}-\dot{\mathbf{x}}\right|\left|\mathbf{x}_{n}-\mathbf{x}\right|+2\beta_n\left|\dot{\mathbf{x}}_{n}-\dot{\mathbf{x}}\right|\\&\leq \left(2+L_{1,n}+L_{2,n}\right)y_{n}+\beta_n^2\\ &\leq C_\tau y_n+\beta_n^2, \end{align*} for some uniform bound $C_\tau>0$. Since $\mathcal{N}$ maps to $\mathcal{B}_{\mrm{nuc}}(\tau) $, we have for all $n$ that $\left(\mathbf{x}_{n}(0),\dot{\mathbf{x}}_{n}(0)\right)=\left(\mathbf{x}(0),\dot{\mathbf{x}}(0)\right)=(\mathbf{x}^0,\mathbf{v}^0)$, so $y_{n}(0)=0$. Hence, by Gr\"onwall's inequality, we can deduce that \begin{align*} \max_{t\in[0,\tau]}\left\{y_{n}(t)\right\}&\lesssim_\tau\beta_n^2\\ &\xrightarrow{n\longrightarrow \infty}0. \end{align*} Because of these results, \eqref{eq:continuityC2} gives continuity in the $C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)$ norm. This concludes the proof. \end{proof} \begin{lemma} \label{lem:CLtoKS} We let $A_\gamma,C_\gamma$ be defined as in Lemma \ref{lem:evolutionoperators} (viii), and recall that $B_{\tau,\gamma}$ and $\alpha$ are of the form \begin{align*} B_{\tau,\gamma}=A_\gamma^{1+C_\gamma\tau},\qquad \alpha(\tau)=2A_\gamma^{1+C_\gamma\tau}\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}. \end{align*} Let $q\geq 7/2$, $\lambda\in\mathbb{R}$. Then, for $\tau>0$ satisfying \begin{align*} \tau B_{\tau,\gamma}\frac{2B_{\tau,\gamma}+1}{B_{\tau,\gamma}-1}\mathscr{L}_{q,\lambda}\left(\alpha(\tau)+\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}\right)&<1,\tag{A2}\label{eq:A2} \end{align*} the system \eqref{eq:KS} has for given $\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) $ a unique short-time solution $\bm{\psi}$ in $\mathcal{B}_{\mrm{el}}(\tau) $. 
\end{lemma} \begin{proof} This proof is based on Lemma \ref{lem:evolutionoperators}, which ensures the existence and the $\mathcal{L}\left((H^2)^N\right)$ bounds of the propagator $U(t,s)$ for the family of linear Hamiltonians $\left\{\widehat{\mathfrak{H}}(t),t\in[0,\tau]\right\}$ from \eqref{eq:linHam}, and on Lemma \ref{lem:lipschitzP}, which ensures the non-linear mapping $\bm{\psi}\longmapsto v_{\textsc{hx}}[\rho]\bm{\psi}$ is locally Lipschitz in $\left(H^2\right)^N$.\\\\ We define $\mathcal{F}$ as the following mapping on the complete metric space $C^0\left([0,\tau];\overline{B_{\alpha}\left(\bm{\psi}^0\right)}\right)$, equipped with the $C^0\left([0,\tau];\left(H^2\right)^N\right)$ norm: \begin{align*} \left(\mathcal{F}[\bm{\psi}]\right)(t):=U(t,0)\bm{\psi}^0-i\int_0^tU(t,\sigma)v_{\textsc{hx}}[\rho]\bm{\psi}(\sigma)\mathrm{d}\sigma. \end{align*} Note that we obtain for all $\bm{\psi}\in C^0\left([0,\tau];\overline{B_{\alpha}\left(\bm{\psi}^0\right)}\right)$, using Lemma \ref{lem:evolutionoperators} (ii), \begin{align} \nonumber\mathcal{F}[\bm{\psi}](0)&=U(0,0)\bm{\psi}^0\\ &=\bm{\psi}^0,\label{eq:fpsi0} \end{align} and \begin{align*} \left\|\mathcal{F}[\bm{\psi}]-\bm{\psi}^0\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)} &=\left\|\left[U(\cdot,0)-\mathrm{Id}\right]\bm{\psi}^0-i\int_0^\cdot U(\cdot,\sigma)v_{\textsc{hx}}[\rho]\bm{\psi}(\sigma)\mathrm{d}\sigma\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}\\ &\leq B_{\tau,\gamma}\left(\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}+\tau \left\|v_{\textsc{hx}}[\rho]\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}\right)\\ &\qquad +\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}\tag{$*$}\label{eq:starC1}\\ &\leq\biggr[1+B_{\tau,\gamma}+\tau B_{\tau,\gamma}\left(2B_{\tau,\gamma}+1\right)\mathscr{L}_{q,\lambda}\left(\alpha+\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}\right) \biggr]\\&\qquad\times \left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N},\tag{$*2$}\label{eq:starC2}\\ &\leq 2B_{\tau,\gamma}\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}\tag{$*3$}\label{eq:starC3a}\\ &=\alpha, \end{align*} where we used Lemma \ref{lem:evolutionoperators} (viii) in \eqref{eq:starC1}, Lemma \ref{lem:lipschitzP} \eqref{eq:G} in \eqref{eq:starC2}, and \eqref{eq:A2} in \eqref{eq:starC3a}. This means that $\mathcal{F}$ maps the complete metric space $C^0\left([0,\tau];\overline{B_{\alpha}\left(\bm{\psi}^0\right)}\right)$ into itself. Besides this, we have for all $\bm{\psi},\bm{\psi}'\in C^0\left([0,\tau];\overline{B_{\alpha}\left(\bm{\psi}^0\right)}\right)$ \begin{align*} \left\|\mathcal{F}[\bm{\psi}]-\mathcal{F}[\bm{\psi}']\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)} &=\left\|\int_0^\cdot U(\cdot,\sigma)\left(v_{\textsc{hx}}[\rho]\bm{\psi}(\sigma)-v_{\textsc{hx}}[\rho']\bm{\psi}'(\sigma)\right)\mathrm{d}\sigma\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}\\ &\leq\tau B_{\tau,\gamma}\left\|v_{\textsc{hx}}[\rho]\bm{\psi}-v_{\textsc{hx}}\left[\rho'\right]\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}\tag{$*$}\label{eq:starD1}\\ &\leq \tau B_{\tau,\gamma}\mathscr{L}_{q,\lambda}\left(\alpha+\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}\right)\\ &\qquad \times \left\|\bm{\psi}-\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)},\tag{$*2$}\label{eq:starD2} \end{align*} where we used Lemma \ref{lem:evolutionoperators} (viii) in \eqref{eq:starD1} and Lemma \ref{lem:lipschitzP} \eqref{eq:F} in \eqref{eq:starD2}. 
Note that \eqref{eq:A2} implies \begin{align} \tau B_{\tau,\gamma}\mathscr{L}_{q,\lambda}\left(\alpha(\tau)+\left\|\bm{\psi}^0\right\|_{\left(H^2\right)^N}\right) <1,\tag{A2a}\label{eq:A2a} \end{align} since $B_{\tau,\gamma}>1$. Therefore, we have that $\mathcal{F}$ is a contraction on $C^0\left([0,\tau];\overline{B_{\alpha}\left(\bm{\psi}^0\right)}\right)$ in the $C^0\left([0,\tau];\left(H^2\right)^N\right)$ norm. Now, we can apply the classical contraction mapping theorem, by which it follows that $\mathcal{F}$ has a unique fixed point in $C^0\left([0,\tau];\overline{B_{\alpha}\left(\bm{\psi}^0\right)}\right)$. It remains to prove that this fixed point, simply denoted by $\bm{\psi}$, is also of class $C^1\left([0,\tau];\left(L^2\right)^N\right)$; then, it solves \eqref{eq:KS} strongly on $[0,\tau]$. To this end, we consider the following identity, which holds for all $0\leq t<t'\leq \tau$: \begin{align*} i\frac{\bm{\psi}(t')-\bm{\psi}(t)}{t'-t}&= i\frac{U(t',0)-U(t,0)}{t'-t}\bm{\psi}^0+\int_0^t\frac{U(t',\sigma)-U(t,\sigma)}{t'-t}v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(\sigma)\mathrm{d}\sigma \\ &\qquad +\int_t^{t'}\frac{U(t',\sigma)}{t'-t}v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(\sigma)\mathrm{d}\sigma\\ &=:\mathrm{(R)}, \end{align*} and we show that \begin{align*} \left\|\mathrm{(R)}-\widehat{H}\left[\mathbf{x}(t),\rho\right]\bm{\psi}(t)\right\|_{\left(L^2\right)^N}\xrightarrow{t'\longrightarrow t}0. \end{align*} This will imply that $ \bm{\psi}(\cdot)$ is differentiable as a mapping $[0,\tau]\longrightarrow \left(L^2\right)^N$ with a derivative $\dot{\bm{\psi}}(\cdot)$ such that \begin{align*} i\dot{\bm{\psi}}(t)=\widehat{H}\left[\mathbf{x}(t),\rho\right]\bm{\psi}(t). \end{align*} Note in particular that for the given $\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) $, $\widehat{H}\left[\mathbf{x}(\cdot),\rho\right]\bm{\psi}(\cdot)$ is a continuous mapping $[0,\tau]\longrightarrow \left(L^2\right)^N$, which will imply that $ \bm{\psi}\in C^1\left([0,\tau];\left(L^2\right)^N\right)$, so that the PDE is satisfied in the strong sense. We have \begin{align*} &\left\|\mathrm{(R)}-\widehat{H}\left[\mathbf{x}(t),\rho\right]\bm{\psi}(t)\right\|_{\left(L^2\right)^N}\\ &\quad \leq \underbrace{\left\| i\frac{U(t',0)-U(t,0)}{t'-t}\bm{\psi}^0+ \int_0^t\frac{U(t',\sigma)-U(t,\sigma)}{t'-t}v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(\sigma)\mathrm{d}\sigma - \widehat{\mathfrak{H}}(t)\bm{\psi}(t) \right\|_{\left(L^2\right)^N}}_{=:\,\mathrm{(I)}} \\ &\quad+ \underbrace{\left\| \int_t^{t'}\frac{U(t',\sigma)}{t'-t}v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(\sigma)\mathrm{d}\sigma - v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(t) \right\|_{\left(L^2\right)^N}}_{=:\,\mathrm{(II)}}. 
\end{align*} In the limit, we get \begin{align*} \lim_{t'\longrightarrow t}\left\{\mathrm{(I)}\right\}&=\left\|i\frac{\partial}{\partial t}\left[U(t,0)\bm{\psi}^0\right] +\int_0^t \frac{\partial}{\partial t}\left[U(t,\sigma)v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(\sigma)\right]\mathrm{d}\sigma- \widehat{\mathfrak{H}}(t)\bm{\psi}(t) \right\|_{\left(L^2\right)^N} \\ &= \left\| \widehat{\mathfrak{H}}(t)\left[U(t,0)\bm{\psi}^0\right] + \int_0^t -i \widehat{\mathfrak{H}}(t)\left[ U(t,\sigma)v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(\sigma) \right] \mathrm{d}\sigma - \widehat{\mathfrak{H}}(t)\bm{\psi}(t) \right\|_{\left(L^2\right)^N} \tag{$*$}\label{eq:starE1} \\ &= \left\| \widehat{\mathfrak{H}}(t)\left[\mathcal{F}\left[\bm{\psi}\right](t)-\bm{\psi}(t)\right] \right\|_{\left(L^2\right)^N}\tag{$*2$}\label{eq:starE2} \\&=0,\tag{$*3$}\label{eq:starC3} \end{align*} where we used Lemma \ref{lem:evolutionoperators} (vii) (see also \cite[Thm. 1.3. (6)]{Yajima1987ExistenceEquations}) in \eqref{eq:starE1}, the linearity of the Hamiltonians $\widehat{\mathfrak{H}}(t)$ in \eqref{eq:starE2}, and $\bm{\psi}$ being a fixed point of $\mathcal{F}$ in \eqref{eq:starC3}. Further, we have \begin{align*} \mathrm{(II)}&\leq \underbrace{\left\| \frac{1}{t'-t}\int_t^{t'}U(t,\sigma)v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(\sigma)\mathrm{d}\sigma - v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(t) \right\|_{\left(L^2\right)^N}}_{=:\,\mathrm{(a)}}\\ &\quad+\underbrace{\frac{1}{t'-t}\left\| \int_t^{t'}\left[U(t',\sigma)-U(t,\sigma)\right]v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(\sigma)\mathrm{d}\sigma \right\|_{\left(L^2\right)^N}}_{=:\,\mathrm{(b)}} . \end{align*} In the limit, (a) goes to zero, because of the fundamental theorem of calculus for Bochner integrals and Lemma \ref{lem:evolutionoperators} (ii). Furthermore, we have \begin{align*} \lim_{t'\longrightarrow t}\left\{\mathrm{(b)}\right\}&\leq\lim_{t'\longrightarrow t}\left\{\frac{1}{t'-t} \int_t^{t'}\left\|\left[U(t',\sigma)-U(t,\sigma)\right]v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(\sigma)\right\|_{\left(L^2\right)^N}\mathrm{d}\sigma\right\}\\ &\leq\lim_{t'\longrightarrow t}\left\{\left\|\left[U(t',\cdot)-U(t,\cdot)\right]v_{\textsc{hx}}\left[\rho\right]\bm{\psi}\right\|_{C^0\left([0,T];\left(L^2\right)^N\right)}\right\}\\ &=0,\tag{$*$}\label{eq:starF1} \end{align*} where we used the uniform continuity of $U(t,s)v_{\textsc{hx}}\left[\rho\right]\bm{\psi}(s)$ on $[0,T]^2$ together with Lemma \ref{lem:evolutionoperators} (iv) in \eqref{eq:starF1}. Since $\bm{\psi}$ is a fixed point of $\mathcal{F}$, it moreover satisfies the initial condition $\bm{\psi}(0)=\mathcal{F}[\bm{\psi}](0)=\bm{\psi}^0$ (see \eqref{eq:fpsi0}), so $\bm{\psi}$ is a strong solution to \eqref{eq:KS} on $[0,\tau]$. Now, we show uniqueness of the short-time solution $\bm{\psi}$ to \eqref{eq:KS} in the class $C^1\left([0,\tau];\left(L^2\right)^N\right)\cap C^0\left([0,\tau];\overline{B_{\alpha}\left(\bm{\psi}^0\right)}\right)$: the classical contraction mapping theorem also provides uniqueness, but only in the class $C^0\left([0,\tau];\overline{B_{\alpha}\left(\bm{\psi}^0\right)}\right)$, so we prove uniqueness in the space $C^1\left([0,\tau];\left(L^2\right)^N\right)$ separately. To this end, we let $\bm{\psi}$ and $\bm{\psi}'$ denote two short-time solutions to \eqref{eq:KS} in $C^1\left([0,\tau];\left(L^2\right)^N\right)$. First, we have $\left(\bm{\psi}-\bm{\psi}'\right)(0)=\bm{\psi}^0-\bm{\psi}^0=0$.
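For later use, both in the next computation and in the proof of Thm. \ref{thm:shorttimeexistence} below, we recall the integral form of Gr\"onwall's Lemma that we employ (a standard statement, included only for the reader's convenience): if $g\in C^0\left([0,\tau];\mathbb{R}_0^+\right)$ satisfies \begin{align*} g(t)\leq g_0+C\int_0^t g(\sigma)\mathrm{d}\sigma \qquad\text{for all } t\in[0,\tau], \end{align*} with constants $g_0\geq 0$ and $C>0$, then $g(t)\leq g_0e^{Ct}$ on $[0,\tau]$; in particular, $g_0=0$ forces $g\equiv 0$.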
Moreover, we can write for all $j\in\{1,\ldots,N\}$, using the PDE in \eqref{eq:KS}, \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}\left(\left\|\psi_{j}-\psi_{j}'\right\|^2\right)&= \frac{\mathrm{d}}{\mathrm{d} t}\left(\left\langle \psi_{j}-\psi_{j}',\psi_{j}-\psi_{j}'\right\rangle\right)\nonumber \\ &= \left\langle \dot{\psi}_{j}-\dot{\psi}_{j}',\psi_{j}-\psi_{j}'\right\rangle + \overline{\left\langle \dot{\psi}_{j}-\dot{\psi}_{j}',\psi_{j}-\psi_{j}'\right\rangle}\nonumber\\ &=\mathrm{(I)}+\mathrm{(II)}, \end{align*} where we have, using that the linear Hamiltonians $\widehat{\mathfrak{H}}(t)$ are self-adjoint on $\left(L^2\right)^N$, \begin{align*} \mathrm{(I)}&= -i\left( \left\langle\left(\widehat{\mathfrak{H}}(t)\left(\bm{\psi}-\bm{\psi}'\right)\right)_j,\psi_{j}-\psi_{j}'\right\rangle + \left\langle\psi_j-\psi_j',\left(\widehat{\mathfrak{H}}(t)\left(\bm{\psi}-\bm{\psi}'\right)\right)_j\right\rangle \right)\\ &=0, \end{align*} and, with $\rho':=\left|\bm{\psi}'\right|^2$, \begin{align*} \mathrm{(II)}&=i\left[ \overline{\left\langle\left(v_{\textsc{hx}}\left[\rho\right]\bm{\psi}-v_{\textsc{hx}}\left[\rho'\right]\bm{\psi}'\right)_j,\psi_j-\psi_j'\right\rangle}\right.\left.- \left\langle\left(v_{\textsc{hx}}\left[\rho\right]\bm{\psi}-v_{\textsc{hx}}\left[\rho'\right]\bm{\psi}'\right)_j,\psi_j-\psi_j'\right\rangle\right] \\ &=2\mathfrak{I}\left(\left\langle\left(v_{\textsc{hx}}\left[\rho\right]\bm{\psi}-v_{\textsc{hx}}\left[\rho'\right]\bm{\psi}'\right)_j,\psi_j-\psi_j'\right\rangle\right). \end{align*} Using this, we get \begin{align*} \frac{\mathrm{d}}{\mathrm{d} t}\left(\left\|\bm{\psi}-\bm{\psi}'\right\|^2_{\left(L^2\right)^N}\right) &= \sum_{j=1}^N\frac{\mathrm{d}}{\mathrm{d} t}\left(\left\|\psi_j-\psi_j'\right\|^2\right)\nonumber\\ &= 2\mathfrak{I}\left(\left\langle v_{\textsc{hx}}\left[\rho\right]\bm{\psi}-v_{\textsc{hx}}\left[\rho'\right]\bm{\psi}',\bm{\psi}-\bm{\psi}'\right\rangle_{\left(L^2\right)^N}\right)\nonumber\\ &\leq C\left\|\bm{\psi}-\bm{\psi}'\right\|^2_{\left(L^{2}\right)^N}, \end{align*} where $C=C\left(\left\|\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^{2}\right)^N\right)},\left\|\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^{2}\right)^N\right)},\tau,q,\lambda,N\right)>0$ stems from the use of the Cauchy--Schwarz inequality and from combining Lemmata \ref{lem:lipschitzconvolution} \eqref{eq:A} and \ref{lem:lipschitzlda} \eqref{eq:D}. Now, by Gr\"onwall's Lemma we get that $\bm{\psi}=\bm{\psi}'$. This concludes the proof. \end{proof} \begin{lemma} \label{lem:mappingE} Let $q\geq7/2$ and $\lambda\in\mathbb{R}$. Further, let $\tau>0$ satisfy \eqref{eq:A2}. For given $\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) $, let $\bm{\psi}\in\mathcal{B}_{\mrm{el}}(\tau) $ denote the unique short-time solution to \eqref{eq:KS}. Then, the mapping $\mathcal{E}:\mathbf{x}\longmapsto\bm{\psi}$ is bounded and continuous. \end{lemma} \begin{proof} Since $\mathcal{B}_{\mrm{el}}(\tau) $ is a bounded subset of $C^0\left([0,\tau];\left(L^2\right)^N\right)$, the mapping $\mathcal{E}$ is bounded in the $C^0\left([0,\tau];\left(L^2\right)^N\right)$ norm. In order to prove continuity of $\mathcal{E}$ in the $C^0\left([0,\tau];\left(L^2\right)^N\right)$ norm, we consider a sequence $(\mathbf{x}_n)_{n\in\mathbb{N}}\subset\mathcal{B}_{\mrm{nuc}}(\tau) $ such that $\mathbf{x}_n\xrightarrow{n\longrightarrow\infty}\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) $.
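In the computation below, the elementary identity $|u|^2-|v|^2=\mathfrak{R}\left[\overline{\left(u-v\right)}\left(u+v\right)\right]$ will be used; for completeness, here is the one-line verification: for $u,v\in\mathbb{C}$, \begin{align*} \overline{\left(u-v\right)}\left(u+v\right)=\left|u\right|^2-\left|v\right|^2+\overline{u}v-\overline{v}u, \end{align*} and $\overline{u}v-\overline{v}u=2i\,\mathfrak{I}\left(\overline{u}v\right)$ is purely imaginary, so taking real parts gives the identity.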
Similarly to $\bm{\psi}=\mathcal{E}\left[\mathbf{x}\right]$, we define $\bm{\psi}_n:=\mathcal{E}\left[\mathbf{x}_n\right]$ with $\rho_n:=\left|\bm{\psi}_n\right|^2$. Then, we can write \begin{alignat*}{3} &\begin{dcases} \begin{aligned} \displaystyle i\frac{\partial}{\partial t}\left(\bm{\psi}_n-\bm{\psi}\right)&=\widehat{H}\left[\mathbf{x},\rho\right]\left(\bm{\psi}_n-\bm{\psi}\right)+\bm{\zeta}_n,\\ \left(\bm{\psi}_n-\bm{\psi}\right)(0)&=0, \end{aligned} \end{dcases}\hspace*{-10cm}&&\\ &\bm{\zeta}_n&&:=\bm{\zeta}_n^1+\bm{\zeta}_n^2+\bm{\zeta}_n^3,\\ &\bm{\zeta}_n^1&&:= v\left[\mathbf{x}_n\right]\bm{\psi}_n-v\left[\mathbf{x}\right]\bm{\psi}-v\left[\mathbf{x}\right]\left(\bm{\psi}_n-\bm{\psi}\right)\\ & &&\,\,= \left(v\left[\mathbf{x}_n\right]-v\left[\mathbf{x}\right]\right)\bm{\psi}_n,\\ &\bm{\zeta}_n^2&&:=v_\textsc{h}\left[\rho_n\right]\bm{\psi}_n-v_\textsc{h}\left[\rho\right]\bm{\psi}-v_\textsc{h}\left[\rho\right]\left(\bm{\psi}_n-\bm{\psi}\right)\\ & &&\,\,= \sum_{j=1}^N \left\{\mathfrak{R}\left[\overline{\left(\psi_{nj}-\psi_j\right)}\left(\psi_{nj}+\psi_j\right)\right]\star \left|\cdot\right|^{-1}\right\}\bm{\psi}_n\tag{$*$}\label{eq:starG1},\\ &\bm{\zeta}_n^3&&:=v_{\textsc{x}}\left[\rho_n\right]\bm{\psi}_n-v_{\textsc{x}}\left[\rho\right]\bm{\psi}-v_{\textsc{x}}\left[\rho\right]\left(\bm{\psi}_n-\bm{\psi}\right)\\ & &&\,\,= \lambda\left[\left(\rho_n\right)^{q-1}-\rho^{q-1}\right]\bm{\psi}_n, \end{alignat*} where we used $|u|^2-|v|^2=\mathfrak{R}\left[\overline{\left(u-v\right)}\left(u+v\right)\right]$ in \eqref{eq:starG1}. We denote by $\left\{\widehat{H}\left[\mathbf{x}(t),\rho\right],t\in[0,T]\right\}$ the family of KS Hamiltonians for the given $\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) $. Note that since $\bm{\psi}$ and thus $\rho$ are fixed now, these Hamiltonians are acting linearly on $\bm{\psi}_n-\bm{\psi}$, and can thus be written, similarly to \eqref{eq:linHam}, as \begin{align*} \widehat{H}(t)=-\frac{1}{2}\Delta + \mathfrak{v}(t)+v_{\textsc{hx}}\left[\rho\right] \end{align*} with $\mathfrak{v}$ from \eqref{eq:calV}. We have that the linear potential $\mathfrak{v}(t)+v_{\textsc{hx}}\left[\rho\right]$ satisfies \cite[Ass. (A.1)]{Yajima1987ExistenceEquations}; hence, there exists a family of evolution operators $\left\{\mathfrak{U}(t,s),(t,s)\in[0,T]^2\right\}$, associated with this family of Hamiltonians, satisfying properties (i)--(iv) of our Lemma \ref{lem:evolutionoperators}.\footnote{Note that these four properties are a consequence of \cite[Thm. 1.1]{Yajima1987ExistenceEquations}, which requires only the Assumption (A.1), stated above the mentioned Theorem.} In what now follows, we argue as in \cite[Lemma 4.1.1.]{Cazenave1998AnEquations}. For fixed $t\in (0,T]$, we consider the mapping \begin{align*} u(\sigma):=\mathfrak{U}(t,\sigma)\left(\bm{\psi}_n-\bm{\psi}\right)(\sigma) \end{align*} for $\sigma\in[0,t]$. Let $\sigma\in[0,t]$ and $h\in(0,t-\sigma)$.
Then, we have \begin{align*} \frac{u(\sigma+h)-u(\sigma)}{h} &=\frac{1}{h} \left[ \mathfrak{U}(t,\sigma+h) \left(\bm{\psi}_n-\bm{\psi}\right)(\sigma+h) - \mathfrak{U}(t,\sigma)\left(\bm{\psi}_n-\bm{\psi}\right)(\sigma) \right]\\ &= \mathfrak{U}(t,\sigma+h) \frac{1}{h} \left[ \left(\bm{\psi}_n-\bm{\psi}\right) (\sigma+h) - \mathfrak{U}(\sigma+h,\sigma) \left( \bm{\psi}_n-\bm{\psi} \right) (\sigma) \right]\tag{$*$}\label{eq:starH1}\\ &= \mathfrak{U}(t,\sigma+h) \biggl[ \frac{\left(\bm{\psi}_n-\bm{\psi}\right) (\sigma+h) - \left(\bm{\psi}_n-\bm{\psi}\right) (\sigma) }{h} \\&\qquad\qquad\qquad\quad- \frac{\mathfrak{U}(\sigma+h,\sigma) -\mathrm{Id}}{h} \left( \bm{\psi}_n-\bm{\psi} \right) (\sigma) \biggr]\\ &\xrightarrow{h\downarrow 0}\mathfrak{U}(t,\sigma) \biggl[ \frac{\partial}{\partial t}\left(\bm{\psi}_n-\bm{\psi}\right) (\sigma) +i \widehat{H}[\mathbf{x}(\sigma),\rho] \left( \bm{\psi}_n-\bm{\psi} \right) (\sigma) \biggr]\tag{$*2$}\label{eq:starG2}\\ &=-i \mathfrak{U}(t,\sigma)\bm{\zeta}_n(\sigma), \end{align*} where we used Lemma \ref{lem:evolutionoperators} (i) in \eqref{eq:starH1} and \cite[Cor. 1.2. (4)]{Yajima1987ExistenceEquations} in \eqref{eq:starG2}. Due to Lemma \ref{lem:evolutionoperators} (iv), we know that for fixed $t\in[0,T]$, $\mathfrak{U}(t,\cdot)\bm{\zeta}_n\in C^0\left([0,t];\left(L^2\right)^N\right)$; hence, we have that $u\in C^1\left([0,t);\left(L^2\right)^N\right)$, with for all $\sigma\in[0,t)$ \begin{align*} u'(\sigma)=-i \mathfrak{U}(t,\sigma)\bm{\zeta}_n(\sigma). \end{align*} Integrating this expression in $\sigma$ from $0$ to $t'<t$, we obtain \begin{align*} u(t')-u(0)&= \mathfrak{U}\left(t,t'\right)\left(\bm{\psi}_n-\bm{\psi}\right)\left(t'\right)\\ &=-i\int_0^{t'}\mathfrak{U}(t,\sigma)\bm{\zeta}_n(\sigma)\mathrm{d} \sigma. \end{align*} In the limit $t'\longrightarrow t$, using Lemma \ref{lem:evolutionoperators} (ii), it follows that the corresponding integral representation holds for all $t\in[0,T]$: \begin{align*} \left(\bm{\psi}_n-\bm{\psi}\right)(t)=-i\int_0^t\mathfrak{U}(t,\sigma)\bm{\zeta}_n(\sigma)\mathrm{d} \sigma. \end{align*} Using Lemma \ref{lem:evolutionoperators} (iii), we can bound for all $n\in\mathbb{N}$ and $t\in[0,\tau]$ \begin{align*} &\left\|\left(\bm{\psi}_n-\bm{\psi}\right)(t)\right\|_{\left(L^2\right)^N}\lesssim{} \sum_{j\in\{1,2,3\}}\int_0^t\left\|\bm{\zeta}_n^j(\sigma)\right\|_{\left(L^2\right)^N}\mathrm{d}\sigma. \end{align*} So, now we deduce $\left(L^2\right)^N$ estimates on $\bm{\zeta}_n^j(\sigma)$ for $j\in\{1,2,3\}$ for all $\sigma\in(0,t)$, using that $\bm{\psi}_n$ and $\bm{\psi}\in\mathcal{B}_{\mrm{el}}(\tau) $, which makes them uniformly bounded with respect to $n$ in $C^0\left([0,\tau];\left(H^2\right)^N\right)$. For $j=1$, we note that this makes them uniformly bounded with respect to $n$ in $L^\infty\left([0,\tau];\left(L^2\right)^N\right)$ and $L^\infty\left([0,\tau];\left(L^\infty\right)^N\right)$ too, using the embedding of $H^2$ into $L^\infty$. Also, we use that for all $n$, $V[\mathbf{x}_n(\cdot)]=V_n^1+V_n^2$ and $V[\mathbf{x}(\cdot)]=V^1+V^2$ belong to the space $C^0\left([0,T];L^{2}\right)+C^0\left( [0,T];L^{\infty}\right)$, and their difference goes to zero in this space; see also \eqref{eq:Mnorm}.
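In the estimates below, the embedding of $H^2$ into $L^\infty$ is used in the following quantitative form (the standard Sobolev embedding in three space dimensions, recorded here for convenience): \begin{align*} \left\|\phi\right\|_{L^\infty}\lesssim\left\|\phi\right\|_{H^2} \qquad\text{for all }\phi\in H^2\left(\mathbb{R}^3\right). \end{align*} In particular, since $\bm{\psi}_n,\bm{\psi}\in\mathcal{B}_{\mrm{el}}(\tau) $, the densities $\rho_n$ and $\rho$ are bounded in $L^\infty\left([0,\tau];L^\infty\right)$ uniformly in $n$.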
Together, this gives for all $0< \sigma< t\leq \tau \leq T$ \begin{align*} \left\|\bm{\zeta}_n^1(\sigma)\right\|_{\left(L^2\right)^N} &\leq \left\|\left(V_n^1(\sigma)-V^1(\sigma)\right)\bm{\psi}_n(\sigma)\right\|_{\left(L^2\right)^N} +\left\|\left(V_n^2(\sigma)-V^2(\sigma)\right)\bm{\psi}_n(\sigma)\right\|_{\left(L^2\right)^N}\\ &\leq \left\|V_n^1-V^1\right\|_{L^{\infty}\left([0,T];L^{2}\right)}\left\|\bm{\psi}_n\right\|_{L^\infty\left([0,\tau];\left(L^\infty\right)^N\right)} \\&\quad+ \left\|V_n^2-V^2\right\|_{L^{\infty}\left([0,T];L^{\infty}\right)}\left\|\bm{\psi}_n\right\|_{L^\infty\left([0,\tau];\left(L^2\right)^N\right)}\\ &\leq C\left(\left\|V_n^1-V^1\right\|_{L^{\infty}\left([0,T];L^{2}\right)} + \left\|V_n^2-V^2\right\|_{L^{\infty}\left([0,T];L^{\infty}\right)}\right)\\ &=:C_{1,n}\xrightarrow{n\longrightarrow\infty}0 \end{align*} for some $C=C\left(\alpha,\bm{\psi}^0\right)>0$. For $j=2$, we adopt the map $G$ from \eqref{eq:mappingG}. This gives for all $\sigma\in(0,t)$ \begin{align*} \left\|\bm{\zeta}_n^2(\sigma)\right\|_{\left(L^2\right)^N} &\leq \sum_{j=1}^N \left\|G\left[\psi_{nj}(\sigma)-\psi_j(\sigma),\psi_{nj}(\sigma)+\psi_j(\sigma)\right]\right\|_{L^\infty}\left\|\bm{\psi}_n(\sigma)\right\|_{\left(L^2\right)^N}\\ &\lesssim{} \sum_{j=1}^N \left\|\psi_{nj}(\sigma)-\psi_j(\sigma)\right\|_{L^2}\left\|\psi_{nj}(\sigma)+\psi_j(\sigma)\right\|_{H^2}\left\|\bm{\psi}_n\right\|_{L^\infty\left([0,\tau];\left(L^2\right)^N\right)}\\ &\leq C_2\left\|\bm{\psi}_{n}(\sigma)-\bm{\psi}(\sigma)\right\|_{\left(L^2\right)^N} \end{align*} for some $C_2=C_2\left(\alpha,\bm{\psi}^0\right)>0$. For $j=3$, we use \eqref{eq:MVE} and \eqref{eq:rhoinfty}, which gives for all $\sigma\in(0,t)$ \begin{align*} \left\|\bm{\zeta}_n^3(\sigma)\right\|_{\left(L^2\right)^N} &\lesssim_{q,\lambda}\left(\left\|\rho_n(\sigma)\right\|_{L^\infty}^{q-3/2}+\left\|\rho(\sigma)\right\|_{L^\infty}^{q-3/2}\right)\left\|\rho_n(\sigma)\right\|_{L^\infty}^{1/2}\left\|\bm{\psi}_n(\sigma)-\bm{\psi}(\sigma)\right\|_{\left(L^2\right)^N}\\ &\leq C_3\left\|\bm{\psi}_n(\sigma)-\bm{\psi}(\sigma)\right\|_{\left(L^2\right)^N}, \end{align*} for some $C_3=C_3\left(q,\alpha,\bm{\psi}^0\right)>0$. Combining these three estimates, we can write for all $t\in [0,\tau]$ \begin{align*} \left\|\left(\bm{\psi}_n-\bm{\psi}\right)(t)\right\|_{\left(L^2\right)^N}\leq C_{1,n}\tau+ \left(C_2+C_3\right)\int_0^t \left\|\left[\bm{\psi}_n-\bm{\psi}\right](\sigma)\right\|_{\left(L^2\right)^N}\mathrm{d}\sigma. \end{align*} Now, we can apply Gr\"onwall's Lemma, and conclude that for all $t\in[0,\tau]$ \begin{align*} \left\|\left(\bm{\psi}_n-\bm{\psi}\right)(t)\right\|_{\left(L^2\right)^N}\leq C_{1,n}\tau e^{\left(C_2+C_3\right)t}. \end{align*} As a consequence, we have \begin{align*} \left\|\bm{\psi}_n-\bm{\psi}\right\|_{C^0\left([0,\tau];\left(L^2\right)^N\right)}\leq C_{1,n}\tau e^{\left(C_2+C_3\right)\tau}\xrightarrow{n\longrightarrow\infty} 0. \end{align*} This proves continuity of the mapping $\mathcal{E}$ in the $C^0\left([0,\tau];\left(L^2\right)^N\right)$ norm. \end{proof} \begin{lemma} \label{lem:shorttimeexistence} Let $q\geq 7/2$ and $\lambda\in\mathbb{R}$. Further, let $\tau>0$ satisfy (\ref{eq:A1},\ref{eq:A2}). Then, the system $\mrm{\eqref{eq:KSN}}${} has a solution $\left(\bm{\psi},\mathbf{x}\right)\in X(\tau)$.
\end{lemma} \begin{proof} We define the injection \begin{align*} \mathcal{I}:\mathcal{B}_{\mrm{nuc}}(\tau) \cap C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)&\varlonghookrightarrow\mathcal{B}_{\mrm{nuc}}(\tau) ,\\ \mathcal{I}[\mathbf{x}]&:=\mathbf{x}, \end{align*} and the mapping \begin{align*} \mathcal{K}:\mathcal{B}_{\mrm{nuc}}(\tau) &\longrightarrow \mathcal{B}_{\mrm{nuc}}(\tau) ,\\ \mathcal{K}&:=\mathcal{I}\circ \mathcal{N}\circ\mathcal{E}. \end{align*} Now, we have by Lemmata \ref{lem:KStoCL} and \ref{lem:CLtoKS} that for given $\mathbf{y}\in\mathcal{B}_{\mrm{nuc}}(\tau) $, with $\mathbf{z}:=\mathcal{K}\left[\mathbf{y}\right]$, the pair $\left(\bm{\phi},\mathbf{z}\right)\in \mathcal{B}_{\mrm{el}}(\tau) \times\mathcal{B}_{\mrm{nuc}}(\tau) $, with $\varrho:=\left|\bm{\phi}\right|^2$, satisfies \begin{align*} \begin{dcases} \begin{aligned} i\dot{\bm{\phi}} &= \widehat{H}\left[\mathbf{y},\varrho\right] \bm{\phi},\\ \ddot{\mathbf{z}} &=\mathbf{a}\left[\mathbf{z},\varrho\right],\\ \bm{\phi}(0)&=\bm{\psi}^0,\quad\mathbf{z}(0)=\mathbf{x}^0,\quad\dot{\mathbf{z}}(0)=\mathbf{v}^0 \end{aligned} \end{dcases} \end{align*} on $[0,\tau]$. From Lemmata \ref{lem:KStoCL}, \ref{lem:CLtoKS} and \ref{lem:mappingE}, we know that $\mathcal{E}$ and $\mathcal{N}$ are continuous and bounded. Since this is with respect to the $C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)$ norm in the range of $\mathcal{N}$, we have that $\mathcal{B}_{\mrm{nuc}}(\tau) \cap C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)$ is a compact subset of $\mathcal{B}_{\mrm{nuc}}(\tau) $, which altogether makes $\mathcal{I}$ a continuous and compact injection. Also, this enables us to apply the Arzel\`a--Ascoli theorem on $\mathcal{B}_{\mrm{nuc}}(\tau) \cap C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)\subset \mathcal{B}_{\mrm{nuc}}(\tau) \cap C^{1,\alpha}\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)$ for some $\alpha\in(0,1]$, which makes $\mathcal{K}$ a compact mapping. Furthermore, the domain and range of $\mathcal{K}$, the space $\mathcal{B}_{\mrm{nuc}}(\tau) $, is convex, closed, and non-empty. This altogether enables us to apply the Schauder fixed-point theorem, by which $\mathcal{K}$ has a fixed point $\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) $. Since this is also a fixed point for the mapping $\mathcal{E}\circ\mathcal{N}$, $\mathbf{x}\in\mathcal{B}_{\mrm{nuc}}(\tau) \cap C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)$. Altogether, $\left(\bm{\psi},\mathbf{x}\right)$ with $\bm{\psi}:=\mathcal{E}\left[\mathbf{x}\right]$ is a solution to $\mrm{\eqref{eq:KSN}}${} in the space $\mathcal{B}_{\mrm{el}}(\tau) \times\left(\mathcal{B}_{\mrm{nuc}}(\tau) \cap C^2\left([0,\tau];\left(\mathbb{R}^3\right)^M\right)\right)\subset X(\tau)$. \end{proof} \subsection{Local-in-time uniqueness} \label{sec:locun} \begin{lemma} \label{lem:coupledsolution} Let $q\geq 7/2$ and $\lambda\in\mathbb{R}$. Further, let $\tau>0$ satisfy (\ref{eq:A1},\ref{eq:A2}). Let $(\mathbf{x},\bm{\psi}),(\mathbf{x}',\bm{\psi}')\in X(\tau)$ denote two solutions to $\mrm{\eqref{eq:KSN}}${}.
Then, we have for all $t\in[0,\tau]$ \begin{align} \left|\left(\ddot{\mathbf{x}}-\ddot{\mathbf{x}}'\right)(t)\right| &\leq C\left[\left|\left(\mathbf{x}-\mathbf{x}'\right)(t)\right|+\left\|\left(\bm{\psi}-\bm{\psi}'\right)(t)\right\|_{\left(L^{3,\infty}\right)^N}\right],\tag{a}\label{eq:a}\\ \left\|\left(\bm{\psi}-\bm{\psi}'\right)(t)\right\|_{\left(L^{3,\infty}\right)^N} &\leq C\int_0^t\frac{1}{\sqrt{t-\sigma}}\left[\left|\left(\mathbf{x}-\mathbf{x}'\right)(\sigma)\right|+\left\|\left(\bm{\psi}-\bm{\psi}'\right)(\sigma)\right\|_{\left(L^{3,\infty}\right)^N}\right]\mathrm{d}\sigma,\tag{b}\label{eq:b} \end{align} where $C=C\left(\left\|\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)},\left\|\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}\right)$. \end{lemma} \begin{proof} \textit{Part \ref{eq:a}.}\\ In this part, we use for short-hand notation the function $\mathbf{R}:\mathbb{R}^3\longrightarrow \mathbb{R}^3$ (a.e.), $\mathbf{r}\longmapsto\mathbf{r}\left|\mathbf{r}\right|^{-3}$.\\\\We have for all $t\in[0,\tau]$ and $k\in\{1,\ldots,M\}$ \begin{align*} \left|\left(\ddot{\mathbf{x}}_{k}-\ddot{\mathbf{x}}_{k}'\right)(t)\right|&\leq \left|\mathbf{a}_k^1\left[\rho(t)\right]\left(\mathbf{x}(t)\right)-\mathbf{a}_k^1\left[\rho'(t)\right]\left(\mathbf{x}'(t)\right)\right|+ \left|\mathbf{a}_k^2\left(\mathbf{x}(t)\right)-\mathbf{a}_k^2\left(\mathbf{x}'(t)\right)\right|\\ &\leq \underbrace{\left|\mathbf{a}_k^1\left[\rho(t)\right]\left(\mathbf{x}(t)\right) -\mathbf{a}_k^1\left[\rho(t)\right]\left(\mathbf{x}'(t)\right)\right|}_{=:\,\,\text{(I)}}+ \underbrace{\left|\mathbf{a}_k^1\left[\rho(t)\right]\left(\mathbf{x}'(t)\right) -\mathbf{a}_k^1\left[\rho'(t)\right]\left(\mathbf{x}'(t)\right)\right|}_{=:\,\,\text{(II)}}\\ &\quad+ \underbrace{\left|\mathbf{a}_k^2\left(\mathbf{x}(t)\right) -\mathbf{a}_k^2\left(\mathbf{x}'(t)\right)\right|}_{=:\,\,\text{(III)}} . \end{align*} By Lemma \ref{lem:forces} on the force functions, $\mathbf{a}_k^1\left[\rho\right]$ are uniformly Lipschitz-continuous in the nuclear variable for all $t\in[0,\tau]$, $k\in\{1,\ldots,M\}$, by which we can bound $\text{(I)}$, using \eqref{eq:forcebounds2}, as \begin{align*} \text{(I)}&\lesssim_{z_k,m_k}\sum_{j=1}^N\left|\left\langle\psi_j(t),\mathbf{R}\left(\cdot-\mathbf{x}_{k}(t)\right)\psi_j(t)\right\rangle - \left\langle\psi_j(t),\mathbf{R}\left(\cdot-\mathbf{x}_{k}'(t)\right)\psi_j(t)\right\rangle \right|\\ &\leq C_{\mathrm{I}}\left|\left(\mathbf{x}_{k}-\mathbf{x}_{k}'\right)(t)\right|\\ &\leq C_{\mathrm{I}}\left|\left(\mathbf{x}-\mathbf{x}'\right)(t)\right| \end{align*} for some $C_{\mathrm{I}}=C_{\mathrm{I}}\left(\left\|\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}\right)>0$. 
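The estimate of (II) below relies on Hardy's inequality, which we recall in the form used here (on $\mathbb{R}^3$, with the classical constant; stated for the reader's convenience): \begin{align*} \left\|\left|\cdot-\mathbf{y}\right|^{-1}\phi\right\| \leq 2\left\|\nabla\phi\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)} \qquad\text{for all }\phi\in H^1\left(\mathbb{R}^3\right),\,\mathbf{y}\in\mathbb{R}^3, \end{align*} applied with $\mathbf{y}=\mathbf{x}_{k}(t)$ and $\mathbf{y}=\mathbf{x}_{k}'(t)$.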
Also, we can bound \begin{align*} \text{(II)}&\lesssim_{z_k,m_k}\sum_{j=1}^N\biggl|\left\langle\psi_j(t),\mathbf{R}\left(\cdot-\mathbf{x}_{k}(t)\right)\left(\psi_j-\psi_j'\right)(t)\right\rangle+ \left\langle\left(\psi_j-\psi_j'\right)(t),\mathbf{R}\left(\cdot-\mathbf{x}_{k}'(t)\right)\psi_j'(t)\right\rangle \biggr|\\ &\lesssim{}\sum_{j=1}^N\biggl[\left|\left\langle\left|\cdot-\mathbf{x}_{k}(t)\right|^{-1}\psi_j(t),\left|\cdot-\mathbf{x}_{k}(t)\right|^{-1}\left(\psi_j-\psi_j'\right)(t)\right\rangle\right|\\ &\qquad\qquad+ \left|\left\langle\left|\cdot-\mathbf{x}_{k}'(t)\right|^{-1}\left(\psi_j-\psi_j'\right)(t),\left|\cdot-\mathbf{x}_{k}'(t)\right|^{-1}\psi_j'(t)\right\rangle\right|\biggr] \tag{$*$}\label{eq:starI1}\\ &\lesssim{}\sum_{j=1}^N\biggl[\left\|\left|\cdot-\mathbf{x}_{k}(t)\right|^{-1}\psi_j(t)\right\|\left\|\left|\cdot-\mathbf{x}_{k}(t)\right|^{-1}\left(\psi_j-\psi_j'\right)(t)\right\|\\ &\qquad\qquad+ \left\|\left|\cdot-\mathbf{x}_{k}'(t)\right|^{-1}\psi_j'(t)\right\| \left\|\left|\cdot-\mathbf{x}_{k}'(t)\right|^{-1}\left(\psi_j-\psi_j'\right)(t)\right\|\biggr]\tag{$*2$}\label{eq:starH2} \\ &\lesssim{}\sum_{j=1}^N\left[\left\|\nabla\psi_j(t)\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}+ \left\|\nabla\psi_j'(t)\right\|_{L^2\left(\mathbb{R}^3;\mathbb{C}^3\right)}\right]\left\|\left[\left|\cdot\right|^{-2}\star\left|\psi_j-\psi_j'\right|^2(t,\cdot)\right]\right\|_{L^\infty}^{1/2}\tag{$*3$}\label{eq:starD3}\\ &\lesssim{}\left\|\left|\cdot\right|^{-2}\right\|_{L^{3/2,\infty}}^{1/2}\sum_{j=1}^N\left(\left\|\psi_j(t)\right\|_{H^2}+\left\|\psi_j'(t)\right\|_{H^2}\right)\sum_{i=1}^N\left\|\left|\psi_{i}-\psi_{i}'\right|^2(t)\right\|_{L^{3/2,\infty}}^{1/2}\tag{$*4$}\label{eq:starB4} \\ &\leq C_{\mathrm{II}} \left\|\left(\bm{\psi}-\bm{\psi}'\right)(t)\right\|_{\left(L^{3,\infty}\right)^N}\tag{$*5$}\label{eq:starA5} \end{align*} for some $C_{\mathrm{II}}=C_{\mathrm{II}}\left(\left\|\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)},\left\|\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)}\right)>0$. Here, we used that $\left|\mathbf{R}\right|=\left|\cdot\right|^{-2}$ in \eqref{eq:starI1}, the Cauchy--Schwarz and Hardy's inequalities in (\ref{eq:starH2},\ref{eq:starD3}), \cite[Thm. 2.10.2.]{Ziemer1989WeaklyVariation} in \eqref{eq:starB4}, and $\left\|\cdot\right\|^{-n/s}_{\mathbb{R}^n}\in L^{s,\infty}$ with $(n,s)=(3,3/2)$ and $\left\|\phi^2\right\|_{L^{3/2,\infty}}=\left\|\phi\right\|^2_{L^{3,\infty}}$ (since $\left(\phi^2\right)^*=\left(\phi^*\right)^2$) in \eqref{eq:starA5}. Since $\mathbf{x},\mathbf{x}'\in\mathcal{B}_{\mrm{nuc}}(\tau) $, we can bound (III) similarly to Part 1 of the Proof of Lemma \ref{lem:KStoCL}: \begin{align*} \text{(III)}&\lesssim_{\bm{z},m_k}\sum_{\subalign{&m=1,\\&m\neq k}}^M\left|\mathbf{R}\left(\left(\mathbf{x}_{k}-\mathbf{x}_{m}\right)(t)\right)-\mathbf{R}\left(\left(\mathbf{x}_{k}'-\mathbf{x}_{m}'\right)(t)\right)\right|\\ &\lesssim_{\delta,\mathbf{x}^0}\left|\left(\mathbf{x}-\mathbf{x}'\right)(t)\right|.
\end{align*} Since these results hold for all $k\in\{1,\ldots,M\}$, \eqref{eq:a} follows.\\\\ \textit{Part \ref{eq:b}.}\\Similarly to the proof of Lemma \ref{lem:CLtoKS}, we can write \begin{align*} &\begin{dcases} \begin{aligned} \displaystyle i\frac{\partial}{\partial t}\left(\bm{\psi}-\bm{\psi}'\right)&=-\displaystyle\frac{1}{2}\Delta\left(\bm{\psi}-\bm{\psi}'\right)+ v\left[\mathbf{x}\right]\left(\bm{\psi}-\bm{\psi}'\right) +v_{\textsc{hx}}\left[\rho\right]\left(\bm{\psi}-\bm{\psi}'\right) +\widetilde{\bm{\zeta}},\\ \left(\bm{\psi}-\bm{\psi}'\right)(0)&=0, \end{aligned} \end{dcases} \end{align*} where $\widetilde{\bm{\zeta}}:=\widetilde{\bm{\zeta}}^1+\widetilde{\bm{\zeta}}^2+\widetilde{\bm{\zeta}}^3$ and, for $j\in\{1,2,3\}$, $\widetilde{\bm{\zeta}}^j$ denotes $\bm{\zeta}_n^j$ with $\left(\mathbf{x}_n,\bm{\psi}_n\right)$ replaced by $\left(\mathbf{x}',\bm{\psi}'\right)$. As the operator $-\Delta/2$ generates the free propagator, we can write the equivalent integral equation for all $t\in[0,\tau]$ \begin{align*} \left(\bm{\psi}-\bm{\psi}'\right)(t)=-i\int_0^tU_0(t-\sigma)\left\{v\left[\mathbf{x}(\sigma)\right]\left(\bm{\psi}-\bm{\psi}'\right)(\sigma) +v_{\textsc{hx}}\left[\rho\right]\left(\bm{\psi}-\bm{\psi}'\right)(\sigma) +\widetilde{\bm{\zeta}}(\sigma)\right\}\mathrm{d}\sigma. \end{align*} In \cite[Lemma 6]{Cances1999OnDynamics}, the following result on the free propagator is proven: for all $\sigma\in(0,\tau]$ and $\phi\in L^{3/2,\infty}$, we have \begin{align*} \left\|U_0(\sigma)\phi\right\|_{L^{3,\infty}}\lesssim{} \frac{1}{\sqrt{\sigma}}\left\|\phi\right\|_{L^{3/2,\infty}}. \end{align*} Using this result and the quasi-triangular inequality, we can bound for all $t\in[0,\tau]$, $j\in\{1,\ldots,N\}$, \begin{align*} \left\|\left(\psi_j-\psi_j'\right)(t)\right\|_{L^{3,\infty}}&\lesssim{}\int_0^t\frac{1}{\sqrt{t-\sigma}}\biggr[\left\|\left(v\left[\mathbf{x}(\sigma)\right]\left(\bm{\psi}-\bm{\psi}'\right)\right)_j(\sigma)\right\|_{L^{3/2,\infty}}\\ &\quad+ \left\|\left(v_\textsc{h}\left[\rho\right]\left(\bm{\psi}-\bm{\psi}'\right)\right)_j(\sigma)\right\|_{L^{3/2,\infty}}+\left\|\left(v_\textsc{x}\left[\rho\right]\left(\bm{\psi}-\bm{\psi}'\right)\right)_j(\sigma)\right\|_{L^{3/2,\infty}} \\ &\quad +\sum_{\ell\in\{1,2,3\}}\left\|\left(\widetilde{\bm{\zeta}}^\ell(\sigma)\right)_j\right\|_{L^{3/2,\infty}}\biggr]\mathrm{d}\sigma. \end{align*} Since $\left\|\cdot\right\|^{-n/s}_{\mathbb{R}^n}\in L^{s,\infty}$ with $(n,s)=(3,3)$ here, we arrive at the following estimates for all $\sigma\in(0,t)$, $j\in\{1,\ldots,N\}$. Using the quasi-triangular inequality and H\"older's inequality on $L^{3/2,\infty}$, we have \begin{align*} \left\|\left(v\left[\mathbf{x}(\sigma)\right]\left(\bm{\psi}-\bm{\psi}'\right)\right)_j(\sigma)\right\|_{L^{3/2,\infty}}&\lesssim_{\bm{z}}\sum_{\ell=1}^M\left\|\left|\cdot-\mathbf{x}_{\ell}(\sigma)\right|^{-1}\right\|_{L^{3,\infty}}\left\|\left(\psi_j-\psi_j'\right)(\sigma)\right\|_{L^{3,\infty}}\\ &\lesssim{}\left\|\left|\cdot\right|^{-1}\right\|_{L^{3,\infty}}\left\|\left(\psi_j-\psi_j'\right)(\sigma)\right\|_{L^{3,\infty}}\\ &\lesssim{}\left\|\left(\psi_j-\psi_j'\right)(\sigma)\right\|_{L^{3,\infty}}.
\end{align*} Further, we get \begin{align*} \left\|\left(v_\textsc{h}\left[\rho\right]\left(\bm{\psi}-\bm{\psi}'\right)\right)_j(\sigma)\right\|_{L^{3/2,\infty}}&\lesssim{}\left\|\rho(\sigma,\cdot)\star\left|\cdot\right|^{-1}\right\|_{L^{3,\infty}} \left\|\left(\psi_j-\psi_j'\right)(\sigma)\right\|_{L^{3,\infty}}\tag{$*$}\label{eq:starJ1}\\ &\lesssim{}\left\|\rho(\sigma)\right\|_{L^{1}}\left\|\left|\cdot\right|^{-1}\right\|_{L^{3,\infty}} \left\|\left(\psi_j-\psi_j'\right)(\sigma)\right\|_{L^{3,\infty}}\tag{$*2$}\label{eq:starI2}\\ &\leq C_\textsc{h} \left\|\left(\psi_j-\psi_j'\right)(\sigma)\right\|_{L^{3,\infty}} \end{align*} for some $C_\textsc{h}=C_\textsc{h}\left(\left\|\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^{2}\right)^N\right)}\right)>0$. Here, we used H\"older's inequality on $L^{3/2,\infty}$ in \eqref{eq:starJ1} and Young's convolution inequality on $L^{3,\infty}$ in \eqref{eq:starI2}. Furthermore, we have \begin{align*} \left\|\left(v_{\textsc{x}}\left[\rho\right]\left[\bm{\psi}-\bm{\psi}'\right]\right)_j(\sigma)\right\|_{L^{3/2,\infty}} &\lesssim_{\lambda}\left\|\left[\rho(\sigma)\right]^{q-1}\right\|_{L^{3,\infty}}\left\|\left(\psi_j-\psi_j'\right)(\sigma)\right\|_{L^{3,\infty}}\tag{$*$}\label{eq:starK1}\\ &\lesssim{} \left\|\left[\rho(\sigma)\right]^{q-1}\right\|_{L^{3}}\left\|\left(\psi_j-\psi_j'\right)(\sigma)\right\|_{L^{3,\infty}}\tag{$*2$}\label{eq:starJ2}\\ & \leq C_\textsc{x} \left\|\left(\psi_j-\psi_j'\right)(\sigma)\right\|_{L^{3,\infty}} \tag{$*3$}\label{eq:starE3} \end{align*} for some $C_\textsc{x}=C_\textsc{x}\left(\left\|\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)},q\right)>0$. Here, we used H\"older's inequality on $L^{3/2,\infty}$ in \eqref{eq:starK1} and \cite[Prop. 4.2.]{Bennett1988TheTheorems} in \eqref{eq:starJ2}. In \eqref{eq:starE3}, we used the quasi-triangular inequality, Sobolev's inequality with interpolation, and the embedding of $H^2$ into $L^\infty$, by which, with $\theta:=6(q-1)>6$, we have \begin{align*} \left\|\left[\rho(\sigma)\right]^{q-1}\right\|_{L^{3}}^3&\lesssim_q \sum_{i=1}^N\left\|\left[\psi_{i}(\sigma)\right]^{(\theta-6)+6}\right\|_{L^1}\\ &\lesssim{} \sum_{i=1}^N\left\|\psi_{i}(\sigma)\right\|_{L^\infty}^{\theta-6}\left\|\psi_{i}(\sigma)\right\|_{L^6}^{6}\\ &\lesssim{} \sum_{i=1}^N\left\|\psi_{i}\right\|_{C^0\left([0,\tau];H^2\right)}^{\theta}. 
\end{align*} Further, we have \begin{align*} \left\|\left(\widetilde{\bm{\zeta}}^1(\sigma)\right)_j\right\|_{L^{3/2,\infty}} & \lesssim_{\bm{z} }\sum_{\ell=1}^M \left\| \left( \left| \cdot-\mathbf{x}_{\ell}(\sigma) \right|^{-1}- \left| \cdot-\mathbf{x}_{\ell}'(\sigma) \right|^{-1} \right)\psi_j'\left(\sigma,\cdot\right)\right\|_{L^{3/2,\infty}}\tag{$*$}\label{eq:starL1}\\ & =\sum_{\ell=1}^M \left\| \left( \left| \cdot-\left(\mathbf{x}_{\ell} -\mathbf{x}_{\ell}'\right)(\sigma) \right|^{-1}- \left| \cdot \right|^{-1} \right)\psi_j'\left(\sigma,\cdot+\mathbf{x}_{\ell}'\right)\right\|_{L^{3/2,\infty}} \tag{$*2$}\label{eq:starK2}\\ &\lesssim{} \left\|\psi_j'\left(\sigma\right) \right\|_{L^{\infty}} \sum_{\ell=1}^M \left\| \left| \cdot \right|^{-1} \left| \cdot-\left(\mathbf{x}_{\ell} -\mathbf{x}_{\ell}'\right)(\sigma) \right|^{-1}\right\|_{L^{3/2,\infty}}\left|\left(\mathbf{x}_{\ell} -\mathbf{x}_{\ell}'\right)(\sigma)\right|\tag{$*3$}\label{eq:starZ3}\\ &\lesssim{} \left\|\psi_j'\right\|_{C^0\left([0,\tau];H^2\right)} \left\| \left| \cdot \right|^{-1} \right\|_{L^{3,\infty}} \sum_{\ell=1}^M \left\| \left| \cdot-\left(\mathbf{x}_{\ell} -\mathbf{x}_{\ell}'\right)(\sigma) \right|^{-1} \right\|_{L^{3,\infty}} \\ &\qquad\times \left|\left(\mathbf{x}_{\ell} -\mathbf{x}_{\ell}'\right)(\sigma)\right|\tag{$*4$}\label{eq:starC4}\\ &\lesssim_{M} \left\|\psi_j'\right\|_{C^0\left([0,\tau];H^2\right)} \left\| \left| \cdot \right|^{-1} \right\|^2_{L^{3,\infty}} \left|\left(\mathbf{x} -\mathbf{x}'\right)(\sigma)\right|\tag{$*5$}\label{eq:starB5}\\ &\lesssim{} \left\|\psi_j'\right\|_{C^0\left([0,\tau];H^2\right)} \left|\left(\mathbf{x} -\mathbf{x}'\right)(\sigma)\right|, \end{align*} where we used the quasi-triangular inequality in \eqref{eq:starL1}, shift invariance of the weak Lebesgue norms in (\ref{eq:starK2},\ref{eq:starB5}), the reverse triangle inequality for $\left|\cdot\right|-\left|\cdot-\left(\mathbf{x}_{\ell} -\mathbf{x}_{\ell}'\right)(\sigma)\right|\leq \left|\left(\mathbf{x}_{\ell} -\mathbf{x}_{\ell}'\right)(\sigma)\right|$ in \eqref{eq:starZ3}, and H\"older's inequality on $L^{3/2,\infty}$ and the embedding of $H^2$ into $L^\infty$ in \eqref{eq:starC4}.
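The next two estimates repeatedly use H\"older's and Young's inequalities in Lorentz spaces; we recall them, up to constants, in the form going back to O'Neil (see also \cite{Bennett1988TheTheorems}; stated here only for the reader's convenience): for $1<r,r_1,r_2<\infty$, \begin{align*} \left\|fg\right\|_{L^{r,s}}&\lesssim\left\|f\right\|_{L^{r_1,s_1}}\left\|g\right\|_{L^{r_2,s_2}}, &&\tfrac{1}{r}=\tfrac{1}{r_1}+\tfrac{1}{r_2},\quad\tfrac{1}{s}\leq\tfrac{1}{s_1}+\tfrac{1}{s_2},\\ \left\|f\star g\right\|_{L^{r,s}}&\lesssim\left\|f\right\|_{L^{r_1,s_1}}\left\|g\right\|_{L^{r_2,s_2}}, &&\tfrac{1}{r}+1=\tfrac{1}{r_1}+\tfrac{1}{r_2},\quad\tfrac{1}{s}\leq\tfrac{1}{s_1}+\tfrac{1}{s_2}. \end{align*}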
In addition, we have \begin{align*} \left\|\left(\widetilde{\bm{\zeta}}^2(\sigma)\right)_j\right\|_{L^{3/2,\infty}} &\lesssim{}\sum_{i=1}^N \left\|\left\{\left[\overline{\left(\psi_{i}-\psi_{i}'\right)(\sigma)}\left(\psi_{i}+\psi_{i}'\right)(\sigma)\right]\star \left|\cdot\right|^{-1}\right\}\psi_j'(\sigma) \right\|_{L^{3/2,\infty}}\tag{$*$}\label{eq:starM1}\\ &\lesssim{}\sum_{i=1}^N \left\|\left[\overline{\left(\psi_{i}-\psi_{i}'\right)(\sigma)}\left(\psi_{i}+\psi_{i}'\right)(\sigma)\right]\star \left|\cdot\right|^{-1} \right\|_{L^{6,\infty}} \left\|\psi_j'(\sigma) \right\|_{L^{2,\infty}}\tag{$*2$}\label{eq:starL2}\\ &\lesssim{}\sum_{i=1}^N \left\|\left[\overline{\left(\psi_{i}-\psi_{i}'\right)(\sigma)}\left(\psi_{i}+\psi_{i}'\right)(\sigma)\right]\star \left|\cdot\right|^{-1} \right\|_{L^{6,2}} \left\|\psi_j'(\sigma) \right\|_{L^{2}}\tag{$*3$}\label{eq:starF3}\\ &\lesssim{} \left\|\psi_j'\right\|_{C^0\left([0,\tau];H^{2}\right)}\sum_{i=1}^N \left\| \overline{\left(\psi_{i}-\psi_{i}'\right)(\sigma)}\left(\psi_{i}+\psi_{i}'\right)(\sigma) \right\|_{L^{6/5,2}}\left\|\left|\cdot\right|^{-1} \right\|_{L^{3,\infty}}\tag{$*4$}\label{eq:starD4}\\ &\lesssim{} \left\|\psi_j'\right\|_{C^0\left([0,\tau];H^{2}\right)} \sum_{i=1}^N \left[\left\|\psi_{i}(\sigma)\right\|_{L^{2}} +\left\|\psi_{i}'(\sigma) \right\|_{L^{2}} \right]\left\| \left(\psi_{i}-\psi_{i}'\right)(\sigma) \right\|_{L^{3,\infty}}\tag{$*5$}\label{eq:starC5} \\ & \leq C_2 \left\| \left(\bm{\psi}-\bm{\psi}'\right)(\sigma) \right\|_{\left(L^{3,\infty}\right)^N} \end{align*} for some $C_2=C_2\left(\left\|\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^{2}\right)^N\right)}, \left\|\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^{2}\right)^N\right)}\right)>0$. Here, we used the quasi-triangular inequality in \eqref{eq:starM1}, H\"older's inequality on $L^{3/2,\infty}$ in \eqref{eq:starL2}, \cite[Prop. 4.2.]{Bennett1988TheTheorems} in \eqref{eq:starF3}, Young's convolution inequality on $L^{6,2}$ in \eqref{eq:starD4}, and H\"older's inequality on $L^{6/5,2}$ in \eqref{eq:starC5}. Additionally, we have \begin{align*} \left\|\left(\widetilde{\bm{\zeta}}^3(\sigma)\right)_j\right\|_{L^{3/2,\infty}}&\lesssim_{\lambda,q}\left\| \left[\rho(\sigma)\right]^{q-3/2}+\left[\rho'(\sigma)\right]^{q-3/2} \right\|_{L^{\infty}} \left\|\psi_j'(\sigma)\left|\bm{\psi}(\sigma)-\bm{\psi}'(\sigma)\right| \right\|_{L^{3/2,\infty}}\tag{$*$}\label{eq:starN1}\\ &\lesssim{} \left[\left\|\rho(\sigma)\right\|^{q-3/2}_{L^\infty}+\left\|\rho'(\sigma)\right\|_{L^\infty}^{q-3/2}\right] \left\|\psi_j'(\sigma)\right\|_{L^{3,\infty}}\left\|\left|\left(\bm{\psi}-\bm{\psi}'\right)(\sigma)\right| \right\|_{L^{3,\infty}}\tag{$*2$}\label{eq:starM2}\\ &\lesssim_q \left[ \sum_{k=1}^N\left\|\psi_{k}(\sigma)\right\|_{H^2}^{2q-3} + \sum_{\ell=1}^N\left\|\psi_{\ell}'(\sigma)\right\|_{H^2}^{2q-3} \right] \\&\qquad\times\left\|\psi_j'(\sigma)\right\|_{L^{3}}\sum_{i=1}^N\left\|\left(\psi_{i}-\psi_{i}'\right)(\sigma) \right\|_{L^{3,\infty}}\tag{$*3$}\label{eq:starG3}\\ &\leq C_3\left\|\left(\bm{\psi}-\bm{\psi}'\right)(\sigma) \right\|_{\left(L^{3,\infty}\right)^N}\tag{$*4$}\label{eq:starE4} \end{align*} for some $C_3=C_3\left(\left\|\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)},\left\|\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)},q\right)>0$. Here, we used \eqref{eq:MVE} in \eqref{eq:starN1}; H\"older's inequality on $L^{3/2,\infty}$ in \eqref{eq:starM2}; \eqref{eq:rhoinfty}, \cite[Prop.
4.2.]{Bennett1988TheTheorems} and the quasi-triangular inequality in \eqref{eq:starG3}; and Sobolev's embedding theorem with interpolation in \eqref{eq:starE4}. Since all of these estimates hold for all $\sigma\in(0,t)$, $j\in\{1,\ldots,N\}$, \eqref{eq:b} follows. \end{proof} \begin{proof}[Proof of Thm. \ref{thm:shorttimeexistence}.] Let $\tau$ satisfy (\ref{eq:A1},\ref{eq:A2}). Existence of a solution in $X(\tau)$ to $\mrm{\eqref{eq:KSN}}${} is proven in Lemma \ref{lem:shorttimeexistence}. Uniqueness of this solution follows from Lemma \ref{lem:coupledsolution}. For two solutions $\left(\mathbf{x},\bm{\psi}\right),\left(\mathbf{x}',\bm{\psi}'\right)\in X(\tau)$ and $p>2$, let us define the function $h\in C^0\left([0,\tau];\mathbb{R}_0^+\right)$ by \begin{align*} h(t):=\left[\left|\left(\mathbf{x}-\mathbf{x}'\right)(t)\right|+\left\|\left(\bm{\psi}-\bm{\psi}'\right)(t)\right\|_{\left(L^{3,\infty}\right)^N}\right]^p. \end{align*} Since $\mathbf{x}$ and $\mathbf{x}'$ both solve \eqref{eq:N} on $[0,\tau]$ and thus are fixed points of the mapping $\mathcal{T}$ in \eqref{eq:mappingT}, we can write for all $t\in[0,\tau]$: \begin{align*} \left|\left(\mathbf{x}-\mathbf{x}'\right)(t)\right|&\leq \left|\int_0^t(t-\sigma)\left(\ddot{\mathbf{x}}-\ddot{\mathbf{x}}'\right)(\sigma)\mathrm{d}\sigma\right|\\ &\leq \int_0^t(t-\sigma)\left|\left(\ddot{\mathbf{x}}-\ddot{\mathbf{x}}'\right)(\sigma)\right|\mathrm{d}\sigma. \end{align*} Now, using this in combination with Lemma \ref{lem:coupledsolution} in \eqref{eq:starO1}, H\"older's inequality in \eqref{eq:starN2}, and the fact that, since $p>2$, its H\"older conjugate satisfies $p'<2$, which ensures that the $L^{p'}\left([0,t];\mathbb{R}\right)$ norm of $t-\cdot+(t-\cdot)^{-1/2}$ is finite in \eqref{eq:starH3}, we can bound for all $t\in[0,\tau]$ \begin{align*} h(t)&\lesssim_{p}C\left\{\int_0^t\left(t-\sigma+\frac{1}{\sqrt{t-\sigma}}\right)\left[\left|\left(\mathbf{x}-\mathbf{x}'\right)(\sigma)\right|+\left\|\left(\bm{\psi}-\bm{\psi}'\right)(\sigma)\right\|_{\left(L^{3,\infty}\right)^N}\right]\mathrm{d}\sigma\right\}^p\tag{$*$}\label{eq:starO1}\\ &\lesssim{} C\left\|\left(t-\cdot + \frac{1}{\sqrt{t-\cdot}}\right)h^{1/p}\right\|^p_{L^1\left([0,t];\mathbb{R}\right)} \\ &\lesssim{} C\left\|t-\cdot + \frac{1}{\sqrt{t-\cdot}}\right\|^p_{L^{p'}\left([0,t];\mathbb{R}\right)} \left\|h^{1/p}\right\|^p_{L^p\left([0,t];\mathbb{R}\right)}\tag{$*2$}\label{eq:starN2}\\ &\lesssim_{ \tau } C\int_0^t h(\sigma)\mathrm{d}\sigma, \tag{$*3$}\label{eq:starH3} \end{align*} where $C=C\left(\left\|\bm{\psi}\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)},\left\|\bm{\psi}'\right\|_{C^0\left([0,\tau];\left(H^2\right)^N\right)} \right)$ is from Lemma \ref{lem:coupledsolution}. Now, using Gr\"onwall's Lemma, we obtain $h\leq 0$ on $[0,\tau]$. Since $h\geq 0$ too by definition, and $h(0)=0$ since $\mathbf{x}(0)=\mathbf{x}'(0)=\mathbf{x}^0$ and $\bm{\psi}(0)=\bm{\psi}'(0)=\bm{\psi}^0$, we get $h\equiv 0$, by which $\left(\mathbf{x},\bm{\psi}\right)=\left(\mathbf{x}',\bm{\psi}'\right)$. This completes the proof. \end{proof} \section{Acknowledgements} B.B., O.\c{C}. and W.S. acknowledge funding by the Innovational Research Incentives Scheme Vidi of the Netherlands Organisation for Scientific Research (NWO) with project number 723.016.002. C.M. was partially funded by The CHERISH Digital Economy Centre - Collaboration and Knowledge Exchange Support ref 69M. C.M.
would like to thank the members of the Department of Mathematics and Computer Science and of the Institute for Complex Molecular Systems - Eindhoven University of Technology, for the warm hospitality. \section{CRediT author statement} Wouter Scharpach: \textit{Conceptualisation, formal analysis, writing original draft, review \&{} editing, project administration}; Carlo Mercuri: \textit{Conceptualisation, methodology, formal analysis, writing original draft, supervision, funding acquisition}; Bj\"orn Baumeier: \textit{Conceptualisation, review, supervision, funding acquisition}; Mark Peletier: \textit{Conceptualisation, methodology}; Georg Prokert: \textit{Conceptualisation, review (in part)}; Onur \c{C}aylak: \textit{Conceptualisation}. \section{References} \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In a standard scenario, big bang nucleosynthesis (BBN) can produce only light elements, up to $^{7}$Li, and all heavier elements have been synthesized in stars. However, many phase transitions in the early universe could have imprinted their traces in a non-standard way. For example, some baryogenesis models\cite{Dolgov:1992pu} predict very high baryon density islands in an ordinary low density background. In the previous paper\cite{Matsuura:2005rb}, we studied heavy element production in inhomogeneous BBN from this point of view. \footnote{For previous works on inhomogeneous big bang nucleosynthesis, see~\cite{IBBN1, IBBN2}. Heavy element production is also mentioned in~\cite{jedam}.} However, we limited ourselves to the heavy element abundances and did not discuss the light element abundances or their consistency with observations. This is because we assumed that the high baryon density regions are very local and do not affect the global light element abundances. In \cite{Rauscher:2006mv}, Rauscher pointed out that in order not to contradict observations, the high baryon density regions must be very small and hence cannot affect the present heavy element abundances. In this paper, we show that there is a parameter region in which heavy elements can be produced in amounts large enough to affect observations while keeping the light element abundances consistent with observations. We consider that the disagreement between Rauscher's opinion and ours comes from two points. One is that we are looking at parameter regions in which neutrons in the high baryon density regions do not diffuse so much as to spoil standard BBN. We would like to emphasize this point. The other is that the relevant quantity is not the spatial size of the high baryon density regions but the amount of baryons they contain. We will discuss the following issues: In section \ref{LEA}, we discuss the light element abundances in the homogeneous high baryon density region and after mixing the high and the low baryon density regions. In section \ref{HEAVY}, we study the heavy element (Ru, Mo) abundances at high and averaged baryon densities and show that heavy elements can be produced in sufficient amounts without contradicting light element observations. In section \ref{DIFF}, we briefly comment on the diffusion scale of the high baryon density regions. \section{Light element abundance} \label{LEA} \subsection{Homogeneous BBN} We calculate homogeneous BBN with various values of $\eta$ (the baryon-to-photon ratio). In Tables \ref{light-table} and \ref{light-normal}, we show the numerical results for the mass fraction and the number fraction of each light element for $\eta=10^{-3}$ and $3.162 \times 10^{-10}$.
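For reference, the two columns in the tables are related through the mass number $A_i$; with $y_i$ the mass fraction, our reading (consistent with the H and $^{4}$He entries at $\eta=10^{-3}$) is \begin{equation*} \left(\text{number fraction}\right)_i=\frac{y_i/A_i}{\sum_j y_j/A_j}, \qquad\text{e.g.}\quad \frac{0.5814/1}{0.5814/1+0.4185/4}\simeq 0.8475\quad\text{for H.} \end{equation*}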
\begin{center} \begin{table}[htbp] \begin{tabular}{ccc}\hline \multicolumn{3}{c}{$\eta=10^{-3}$} \\ \hline name & mass fraction & number fraction \\ \hline H & $ 5.814 \times 10^{-1} $ & $ 8.475 \times 10^{-1}$ \\ $^{4}$He & $ 4.185 \times 10^{-1}$ & $ 1.525 \times 10^{-1}$ \\ $^{3}$He & $ 4.842 \times 10^{-13} $ & $ 1.614 \times 10^{-13}$ \\ $^{7}$Li+$^{7}$Be & $ 1.559\times 10^{-12}$ & $ 2.227 \times 10^{-13}$ \\ D & $ 1.577 \times 10^{-22}$ & $ 7.883\times 10^{-23}$ \\ \hline \end{tabular} \caption{The mass and the number fractions of light elements for homogeneous BBN with $\eta=10^{-3}$} \label{light-table} \end{table} \end{center} \vspace{0.7cm} \begin{center} \begin{table}[htbp] \begin{tabular}{ccc}\hline \multicolumn{3}{c}{$\eta=3.162 \times 10^{-10}$} \\ \hline name & mass fraction & number fraction \\ \hline H & $ 7.58 \times 10^{-1}$ & $ 9.26 \times 10^{-1}$ \\ $^{4}$He & $ 2.419 \times 10^{-1}$ & $ 7.39 \times 10^{-2}$ \\ $^{3}$He & $ 4.299 \times 10^{-5}$ &$ 1.433 \times 10^{-5}$ \\ $^{7}$Li + $^{7}$Be & $ 8.239 \times 10^{-10}$ & $ 1.177 \times 10^{-10}$ \\ D & $ 1.345 \times 10^{-4}$ &$ 6.723 \times 10^{-5}$ \\ \hline \end{tabular} \caption{The mass and the number fractions of light elements for homogeneous BBN with $\eta=3.162 \times 10^{-10}$} \label{light-normal} \end{table} \end{center} \vspace{0.7cm} As the baryon density becomes higher, more protons and neutrons are bound to form $^{4}$He. At $\eta=10^{-3}$, most of the final $^{7}\rm{Li}$ comes from $^{7}\rm{Be}$, which decays to $^{7}\rm{Li}$ after BBN. Details on light element production for various $\eta$ can also be found in \cite{Wagoner:1966pv}. In this paper we concentrate mainly on the case in which the high baryon density region has $\eta=10^{-3}$. Among the cases with $\eta \geq 10^{-3}$, we expect the abundance profile for $\eta=10^{-3}$ to differ most from that of standard BBN, because most of the light element abundances change monotonically with respect to $\eta$; hence, if this case does not contradict observations, the other cases would also be consistent. Briefly, the amount of $\rm{H}$ decreases and that of $^{4}\rm{He}$ increases monotonically as $\eta$ becomes larger. The number fraction of $\rm{D}$ is less than $10^{-20}$ for $\eta$ greater than $10^{-7}$. For $^{3}\rm{He}$, the number fraction drops drastically around $\eta =10^{-4}$ down to ${\cal O}(10^{-13})$, and for $^{7}\rm{Li}$, the number fraction increases up to $\eta=10 ^{-6}$ and decreases drastically for larger values of $\eta$. In the following sections, we will see that this non-standard setup does not strongly contradict the observations. For simplicity we ignore the diffusion effect before and during BBN, and assume that after BBN both the high and the low baryon density regions are completely mixed. Detailed analyses, such as the case in which the high baryon density region is not completely mixed, or calculations taking diffusion effects into account, are left for future work. \subsection{Parameters and Basic equations} In this section, we summarize the relations among the parameters. Notation: $n$, $n ^{H}$, $n ^{L}$ are the averaged, high, and low baryon number densities. $f ^{H}$, $f ^{L}$ are the volume fractions of the high and the low baryon density regions. $y_{i} $, $y_{i} ^{H}$, $y_{i} ^{L}$ are the mass fractions of each element ($i$) in the averaged-, high- and low-density regions. The basic relations are \begin{eqnarray} f ^{H}+f ^{L}&=&1 \label{vol} \\ f ^{H}n ^{H}+f ^{L}n ^{L}&=&n \label{density} \\ y_{i} ^{H}f ^{H}n ^{H}+y_{i} ^{L}f ^{L}n ^{L}&=&y_{i}n.
\label{spec} \end{eqnarray} Under the assumption that the temperature of the universe is homogeneous, the above equations can be written as \begin{eqnarray} f ^{H}\eta ^{H}+f ^{L}\eta ^{L}&=&\eta\label{denseta} \\ y_{i} ^{H}f ^{H}\eta ^{H}+y_{i} ^{L}f ^{L}\eta ^{L}&=&y_{i}\eta\label{speceta} \end{eqnarray} where $\eta =\frac{n}{n_{\gamma}}$ and $\eta ^{H,L} =\frac{n ^{H,L}}{n_{\gamma}}$. Conventional parameters for inhomogeneous BBN are $\eta$, $f$ and the density ratio $R=\frac{n^{H} }{n^{L} }$. Here we use a different combination of parameters. The relevant quantities for the abundance analysis are the products $f^{H,L} \times \eta ^{H,L} $ and the values $\eta ^{H,L}$: $f^{H,L}\times\eta ^{H,L}$ determines the amount of baryons coming from the high- and low-density regions, while $\eta ^{H,L}$ determines the mass fraction of each species of nuclei. For convenience, we write the ratio of the baryon number contribution from the high density region as $a$, i.e., $f ^{H}\eta ^{H} : f ^{L}\eta ^{L}=a:(1-a) $. There are 5 parameters ($n^{H,L}$, $n$ and $f^{H,L}$) and 2 constraints (Eq.(\ref{vol}) and Eq.(\ref{density})). We calculate the light element abundances for various values of $\eta ^{H,L}$. $\eta$ can also take any value, but in order not to contradict observational constraints, we choose $\eta$ from $3.162 \times 10^{-10}$ to $10^{-9}$. $a$ is then determined by Eq.(\ref{denseta}). The aim of the analysis in this section is not to find parameter regions which precisely agree with the observed light element abundances and the value of $\eta$ from the CMB; our model is too simple to determine such constraints on the parameters. For example, we completely ignore the diffusion effect before and during BBN. Instead we check that the analysis in our previous paper is physically reasonable. \subsection{Theoretical predictions and observations of light elements} We consider the case of $\eta ^{H}=10^{-3}$ and $\eta ^{L}=3.162\times 10^{-10}$. The mass fractions of $\rm{H}$ and $^{3}\rm{He}$ in the high density region are $0.5814$ and $4.842\times 10^{-13}$, respectively, while those in the low density region are $0.758$ and $4.299\times 10^{-5}$. From Eq.(\ref{speceta}), we have \begin{eqnarray} f ^{H}\eta ^{H}y_{^3\rm{He}} ^{H}+f ^{L}\eta ^{L}y_{^3\rm{He}} ^{L}&=&\eta y_{^3\rm{He}} \\ 4.842\times 10^{-13}\times a+4.299\times 10^{-5}\times(1-a)&=& y_{^3\rm{He}} \end{eqnarray} \begin{eqnarray} f ^{H}\eta ^{H}y_{\rm{H}} ^{H}+f ^{L}\eta ^{L}y_{\rm{H}} ^{L}&=&\eta y_{\rm{H}} \\ 0.5814\times a+0.758\times(1-a)&=& y_{\rm{H}}. \end{eqnarray} We can calculate the averaged value of the abundance ratio of $^3$He to H as \begin{equation} (\frac{^3\rm{He}}{\rm{H}})=\frac{1}{3}\frac{4.842\times 10^{-13}\times a+4.299\times 10^{-5}\times(1-a)} {0.5814\times a+0.758\times(1-a)}, \end{equation} where $a$ is related to $\eta$ as \begin{eqnarray} a&=&\frac{\eta ^{H}}{\eta}\frac{\eta - \eta ^{L}}{\eta ^{H}-\eta ^{L}}\\ &=& \frac{10^{-3}}{\eta}\frac{\eta - 3.162\times10^{-10}}{10^{-3}-3.162\times10^{-10}}\\ &\sim&\frac{\eta - 3.162\times10^{-10}}{\eta}. \end{eqnarray} Here $a$ varies from 0 to 0.9 for reasonable values of $\eta$, i.e., $3.162 \times 10^{-10}-10^{-9}$. Similarly, for $\eta ^{H}=10^{-3}$ the number fractions are \begin{equation} (\frac{\rm{D}}{\rm{H}})=\frac{1}{2}\frac{1.577\times 10^{-22}\times a+1.345\times 10^{-4}\times (1-a)}{0.5814\times a+0.758\times(1-a)} \end{equation} \begin{equation} (\frac{^{7}\rm{Li}}{\rm{H}})=\frac{1}{7}\frac{1.559\times 10^{-12}\times a +8.239\times 10^{-10}\times (1-a)} {0.5814\times a+0.758\times(1-a)}.
\end{equation} Figs. \ref{deps}, \ref{heeps} and \ref{lieps} show the averaged abundance ratios (D/H), ($^{3}$He/H) and ($^{7}$Li/H), respectively. \begin{figure}[htbp] \includegraphics[width=9cm,clip]{ratio-D.eps} \caption{Averaged ratio of D to H, (D/H), vs $\eta$} \label{deps} \end{figure} \begin{figure}[htbp] \includegraphics[width=9cm,clip]{ratio-He3.eps} \caption{Same as Fig.\ref{deps} but for ($^3$He/H)} \label{heeps} \end{figure} \begin{figure}[htbp] \includegraphics[width=9cm,clip]{ratio-Li7.eps} \caption{Same as Fig.\ref{deps} but for ($^{7}$Li/H)} \label{lieps} \end{figure} We can see that around $\eta \sim 5\times 10^{-10}-10^{-9}$ the light element abundances are of the same order as the observed values \cite{Fields:2004cb,Kirkman:2003uv,O'Meara:2000dh,Kirkman:1999zu,Linsky:2003ia,Ryan:1999jq,Bonifacio:2002yx,Pinsonneault:2001ub}: \begin{eqnarray} (\frac{\rm{D}}{\rm{H}})_{obs}=(1.5-6.7) \times 10^{-5} \label{obs1}\\ (\frac{^{7}\rm{Li}}{\rm{H}})_{obs}=(0.59-4.1)\times 10^{-10}. \label{obs2} \end{eqnarray} We do not discuss diffusion in detail here, but the above results suggest, at least, that our analysis is not beside the point. \section{Theoretical predictions and observations of heavy elements ($^{92,94}$\rm{Mo}, $^{96,98}$\rm{Ru})} \label{HEAVY} The same analysis can be applied to heavy elements such as $^{92}\rm{Mo}$, $^{94}\rm{Mo}$, $^{96}\rm{Ru}$ and $^{98}\rm{Ru}$. We are interested in these elements because these p-nuclei are underproduced in many models of supernova nucleosynthesis. We will see that some amount of these heavy elements can be synthesized in BBN. \begin{center} \begin{table}[htbp] \begin{tabular}{cc}\hline \multicolumn{2}{c}{$\eta=10^{-3}$} \\ \hline name & mass fraction \\ \hline H & $ 5.814 \times 10^{-1} $ \\ $^{4}$He & $4.185\times10^{-1}$ \\ $^{92}$Mo & $1.835\times 10^{-5}$ \\ $^{94}$Mo & $4.1145\times 10^{-6}$ \\ $^{96}$Ru & $1.0789 \times 10^{-5}$ \\ $^{98}$Ru & $1.0362\times 10^{-5}$ \\ \hline \end{tabular} \caption{The mass fractions of nuclei for homogeneous BBN with $\eta=10^{-3}$} \label{heavy} \end{table} \end{center} \vspace{0.7cm} From Table \ref{heavy}, we can derive the expected values of these elements: \begin{equation} (\frac{^{92}\rm{Mo}}{\rm{H}})=\frac{1}{92}\frac{1.835\times 10^{-5}\times a}{0.5814\times a+0.758\times(1-a)} \end{equation} \begin{equation} (\frac{^{94}\rm{Mo}}{\rm{H}})=\frac{1}{94}\frac{4.1145\times 10^{-6}\times a}{0.5814\times a+0.758\times(1-a)} \end{equation} \begin{equation} (\frac{^{96}\rm{Ru}}{\rm{H}})=\frac{1}{96}\frac{1.0789 \times 10^{-5}\times a}{0.5814\times a+0.758\times(1-a)} \end{equation} \begin{equation} (\frac{^{98}\rm{Ru}}{\rm{H}})=\frac{1}{98}\frac{1.0362\times 10^{-5}\times a}{0.5814\times a+0.758\times(1-a)}. \end{equation} \vspace{1cm} We plot the expected values of these quantities in Fig.\ref{combps}. \begin{figure}[htbp] \includegraphics[width=9cm,clip]{ratio-MoRu.eps} \caption{($^{92}$\rm{Mo/H}), ($^{94}$\rm{Mo/H}), ($^{96}$\rm{Ru/H}) and ($^{98}$\rm{Ru/H}) vs $\eta$. Red, green, blue and pink lines represent the ratios ($^{92}$\rm{Mo/H}), ($^{94}$\rm{Mo/H}), ($^{96}$\rm{Ru/H}) and ($^{98}$\rm{Ru/H}), respectively.} \label{combps} \end{figure} These values should be compared with the solar abundances (Table \ref{solar})\cite{Anders:1989zg}.
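As an illustration of the scales involved (a direct evaluation of the $^{92}$Mo formula above at $a=0.5$, using only the numbers already given): \begin{equation*} (\frac{^{92}\rm{Mo}}{\rm{H}}) =\frac{1}{92}\,\frac{1.835\times 10^{-5}\times 0.5}{0.5814\times 0.5+0.758\times 0.5} \simeq\frac{1}{92}\,\frac{9.18\times 10^{-6}}{0.670} \simeq 1.5\times 10^{-7}, \end{equation*} which is about two orders of magnitude above the corresponding solar value listed in Table \ref{solar} below.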
\begin{center} \begin{table}[htbp] \begin{tabular}{ccc}\hline name & number fraction & ratio to H \\ \hline H & $7.057280 \times 10^{-1}$ & 1 \\ $^{92}$Mo & $8.796560 \times 10^{-10}$ & $1.2465 \times 10^{-9} $ \\ $^{94}$Mo &$ 5.611420 \times 10^{-10}$ & $ 7.9512\times 10^{-10} $ \\ $^{96}$Ru & $2.501160 \times 10^{-10}$ & $ 3.5441\times 10^{-10} $ \\ $^{98}$Ru &$ 8.676150 \times 10^{-11} $& $ 1.2294\times 10^{-10} $ \\ \hline \end{tabular} \caption{The abundances of $^{92,94}$\rm{Mo} and $^{96,98}$\rm{Ru} in the solar system\cite{Anders:1989zg}} \label{solar} \end{table} \end{center} Comparing these observational values with Fig.\ref{combps}, it is clear that the heavy elements produced in BBN can affect the solar heavy element abundances; some of them are even overproduced. This is not a problem for the previous work \cite{Matsuura:2005rb}, however, because there we assumed that the high density regions are very small and do not disturb standard BBN. The analysis here suggests that even if we assume the density fluctuations are completely mixed, the heavy elements can contribute appreciably to the solar abundances. \section{Diffusion during BBN} \label{DIFF} In the previous analysis, we assumed that the diffusion effect can be ignored during BBN and that the high and low baryon density regions are completely mixed after BBN. In this section, we determine the scale of the high baryon density islands for which the diffusion effect during BBN is small enough that our assumption is valid. We do not discuss the diffusion after BBN here. A detailed analysis of the comoving diffusion distances of baryons, neutrons and protons is given in \cite{Applegate:1987hm}. According to Fig.1 in \cite{Applegate:1987hm}, in order to safely ignore the diffusion effect, the high baryon density islands must be much larger than $10^{5}$ cm at $T=0.1$ MeV ($1.1\times 10^{9}$ K). Notice that $T \propto \frac{1}{A}$, where $A$ is the scale factor, so a scale $d$ today corresponds to $d/(4.0\times10^8)$ at the BBN epoch. The present galaxy scale is $\mathcal{O}(10^{20})$ cm, which corresponds to $\mathcal{O}(10^{12})$ cm $\gg 10^{5}$ cm at the BBN epoch. \begin{center} \begin{table}[htbp] \begin{tabular}{cc}\hline \multicolumn{2}{c}{temperature and scale} \\ \hline temperature & scale \\ \hline $1.1\times10^{9}$ K (BBN) & $d$ \\ 3000 K (decoupling) & $3.7\times 10^{6}\times d$ \\ 2.725 K (now) & $4.0\times 10^{8}\times d$ \\ \hline \end{tabular} \caption{Relation between temperature and scale} \end{table} \end{center} The maximum angular resolution of the CMB is $l_{max} \sim 2000$, and the size of the universe is $\sim 5000$ Mpc. In order not to contradict CMB observations, the scale of the baryon density fluctuations must therefore be less than $\sim 16$ Mpc today, which corresponds to $10^{17}$ cm at BBN. Since the density fluctuation size in Dolgov and Silk's model\cite{Dolgov:1992pu} is a free parameter, the above brief estimate suggests that we can take the island size large enough to ignore the diffusion effect without contradicting observations, i.e., in the reasonable range of $10^{5}$ cm--$10^{17}$ cm at the BBN epoch. We can choose the distances between the high density islands so that we obtain a suitable value of $f$. \section{Summary} In this paper, we studied the relation between heavy element production in high baryon density regions during BBN and the light element observations. By averaging the light element abundances over the high and the low density regions, we showed that it is possible to produce a relevant amount of heavy elements without contradicting observations.
However, we should stress that in this paper we restricted ourselves to parameter regions where the neutrons in the high baryon density regions do not destroy standard BBN, so our setup is different from conventional inhomogeneous BBN studies. We also studied the size of the density fluctuations and showed that there is a parameter region in which the neutron diffusion is negligible and which lies well below the scales probed by CMB observations. It is worthwhile to investigate further how the produced heavy elements can be related to detailed observations. \section{Acknowledgements} We thank R.H. Cyburt, R. Allahverdi and R. Nakamura for useful discussions. This research was supported in part by Grants-in-Aid for Scientific Research provided by the Ministry of Education, Science and Culture of Japan through Research Grants No.~S14102004 and No.~14079202. S.M.'s work was supported in part by JSPS (Japan Society for the Promotion of Science).
\section{Introduction} First ideas of using a two-scale formulation to describe fluid flow in fractured porous media came up around 1960, e.g.\ in the work of Barenblatt, Zheltov and Kochina \cite{BZK}. They reflect very well the exceptional nature of the geometric conditions: the material possesses one natural porous structure, and a second one is added by the dense system of cracks. A mathematical derivation of the model was later given in \cite{ADH}. In the sequel, weak formulations of the problem have been studied intensively, either by homogenization theory \cite{Hornung1, Hornung2} or with the help of monotone operators \cite{Antontsev, CookShow, HorShow}. Stationary solutions and the elliptic problem are treated, for example, in \cite{ShoWa3}. Reaction terms and evolving pore geometry have been considered in several papers \cite{FrieKna, FrieTza, Meier1, MeiBo, Peter1, PeterBoehm}. Although the model in these studies has a form similar to the one in the present paper, the considered length scales are different. We consider the matched microstructure model (MM) as it was formulated by Showalter and Walkington in \cite{ShoWa}. Assume we are given a \textit{macroscopic} domain $\Omega\subset \mathbb{R}^n$, and for each $x\in \Omega$ a \textit{cell} domain $\Omega_x\subset \mathbb{R}^n$. These cell domains stand for the porous blocks, while $\Omega$ contains, in a homogenized sense, the fissure system. The model (MM) consists of two parts. The macro model for a function $u$ that represents the density of the fluid on the domain $\Omega$: \begin{align*} \frac{\partial}{\partial t} u(t,x)-\Delta_x u(t,x) &= f(t,u)+ q(U)(t,x), && x\in \Omega, t\in (0,T],\\ u(t,x)&=0, && x\in \Gamma, t\in (0,T],\\ u(t=0)&=u_0. \end{align*} The micro model for the function $U$ that models the density in all blocks $\Omega_x$: \begin{align*} \frac{\partial}{\partial t} U(t,x,z)-\Delta_z U(t,x,z)&=0, &&x\in \Omega, z\in \Omega_x, t\in (0,T],\\ U(t,x,z)&=u(x), &&x\in \Omega, z\in \Gamma_x, t\in (0,T],\\ U(t=0)&=U_0. \end{align*} The coupling between the macro and the micro scale is reflected by two terms. Firstly, the boundary condition in the cells $\Omega_x$, \begin{align} U(x)=u(x) \qquad \text{on } \partial \Omega_x, \text{ for all } x\in \Omega\label{match} \end{align} models the matching of the densities on the material interface. For this reason the model was introduced in \cite{ShoWa} as the matched microstructure model. Secondly, the term \begin{equation}\label{qU} q(U)(t,x) = -\int_{\Gamma_x} \frac{\partial U(t,x,s)}{\partial \nu} \, ds = -\frac{\partial}{\partial t} \int_{\Omega_x} U(t,x,z) \, dz \end{equation} represents the amount of fluid that is exchanged between the two structures. It acts as a source or sink term in the macroscopic system. Our interpretation of this model is based on the derivation of the coupled equations for the case of uniform cells at each point $x$ of the considered domain $\Omega$. On this basis we will first present our restrictions on the geometry and the definition of suitable Banach spaces. Then we reformulate the problem as an abstract semilinear initial value problem on the product space $ L_p(\Omega)\times L_p(\Omega, L_p( \Omega_x))$. To this end we introduce an operator $\textbf{A}$ that includes the highest order derivatives and the first coupling condition. In Theorem \ref{allgemein} we prove that $-\textbf{A}$ generates an analytic semigroup, which finally implies well-posedness of the matched microstructure problem.
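To make the coupling structure concrete, the following minimal Python sketch discretises a one-dimensional caricature of (MM): the macro domain is $(0,1)$, each macro grid point carries one cell $(-1,1)$, and the matching condition and the exchange term $q(U)$ are imposed as above in an explicit Euler scheme. All grid parameters are illustrative choices of ours; the scheme plays no role in the analysis below. \begin{verbatim}
# 1-D caricature of the matched microstructure model (MM): macro density
# u on (0,1), one micro cell (-1,1) per macro grid point, coupled by the
# matching condition U = u on the cell boundary and the exchange term
# q(U) = -int_{Gamma_x} dU/dnu.  Explicit Euler; all sizes illustrative.
import numpy as np

nx, nz, dt, T = 50, 40, 1e-5, 0.01
x = np.linspace(0.0, 1.0, nx); dx = x[1] - x[0]
z = np.linspace(-1.0, 1.0, nz); dz = z[1] - z[0]

u = np.sin(np.pi * x)          # macro initial datum, u = 0 on Gamma
U = np.zeros((nx, nz))         # cells start empty

t = 0.0
while t < T:
    U[:, 0] = U[:, -1] = u     # matching condition U|_{Gamma_x} = u
    # outward normal derivatives at the two cell endpoints
    q = -((U[:, -1] - U[:, -2]) / dz + (U[:, 0] - U[:, 1]) / dz)
    lap_u = np.zeros(nx)
    lap_u[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2
    lap_U = (U[:, 2:] - 2.0*U[:, 1:-1] + U[:, :-2]) / dz**2
    u[1:-1] += dt * (lap_u[1:-1] + q[1:-1])   # macro step, Dirichlet ends
    U[:, 1:-1] += dt * lap_U                  # micro step in every cell
    t += dt

print(u.max(), U.max())        # fluid has seeped from the fissures into the cells
\end{verbatim}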
A further part of this work is concerned with the long-time behaviour of the solutions. We show that for Dirichlet boundary conditions the solution decays to zero at an exponential rate. In the Neumann case we prove mass conservation. Finally we consider a special two-dimensional geometry and include a nonlinear boundary condition on $\partial \Omega$. A detailed derivation of this is given in \cite{Doktorarbeit}. We prove well-posedness for weak solutions, which can be improved using the concept of Banach scales. The approach is founded on work of Escher \cite{Escher89} and Amann \cite{Amann88, Amann_Multi, Amann}. Our methods are quite flexible and provide the high regularity of strong solutions. The outline of this paper is the following. First we present some basic lemmas for a uniform geometry. The largest section is then devoted to the variation of the cell domain and the transcription of the problem (MM) into an abstract initial value problem. It contains the main result, namely that $-\textbf{A}$ generates an analytic semigroup. Further, the spectrum of $\textbf{A}$ is investigated for the case $p=2$. The last chapter contains the model which includes nonlinear boundary conditions. With the help of a retraction from the boundary we transform the equations into an abstract semilinear problem, which leads to well-posedness of the original system. \section{Some Aspects for Uniform Cells} Let $\Omega\subset \mathbb{R}^n$ be a bounded domain with smooth boundary $\Gamma:= \partial \Omega$ and let $(\Omega, \mathcal{A}, \mu)$ be a measure space. Let $X,Y$ be Banach spaces. If $A$ is a closed linear operator from $X$ to $Y$ we denote by $D(A) = (dom(A), \|\cdot\|_A)$ the domain of definition of $A$ equipped with the graph norm. Further, we write $\hat{U}$ if we mean a representative in $\mathcal{L}_p$ of a given function $U\in L_p$. With $[\cdot]$ we indicate the equivalence class again. The shifted operators will always be denoted with bold letters. \begin{lem}\label{boundedShift} Assume that $(x\mapsto A(x)) \in C(\overline{\Omega}, \mathcal{L}(X,Y))$. Let \begin{align*} dom(\textbf{A}) &= L_p(\Omega, X),\\ \textbf{A} U &= \left[A(x)\hat{U}(x)\right], && \text{for } U\in L_p(\Omega, X),\, \hat{U} \in U. \end{align*} Then $\textbf{A}$ is a well defined, bounded linear operator from $L_p(\Omega, X)$ to $L_p(\Omega, Y)$. If further $A(x)=A$ is independent of $x$ and $A$ is a retraction, then $\textbf{A}$ is a retraction as well. \end{lem} \begin{proof} The continuity of $(x\mapsto A(x))$ assures that $\textbf{A}$ is well defined. Further, we easily get $\textbf{A} U\in L_p(\Omega, Y)$, and $\textbf{A}$ is bounded because \begin{align*} \|\textbf{A} U\|^p_{L_p(\Omega,Y)} =\int_\Omega \|A(x) \hat{U}(x)\|^p_Y \, d\mu(x) \leq \max_{x\in \overline{\Omega}} \|A(x)\|^p_{\L(X,Y)} \cdot\|U\|^p_{L_p(\Omega, X)}. \end{align*} Now assume that $A(x)=A$ is a retraction and let $R \in \L(Y,X)$ be a continuous right inverse of $A$, so that $ A \circ R = id_Y.$ Let $V\in L_p(\Omega,Y)$ and let $\hat{V}\in \L_p(\Omega,Y)$ be a representative of $V$. As before we define $$\textbf{R}\in \L(L_p(\Omega, Y),L_p(\Omega, X)),\qquad \textbf{R}V = [R\hat{V}(x)].$$ Then a short calculation shows that this is a continuous right inverse for $\textbf{A}$. \end{proof} \begin{lem}\label{Lemma2} For $x\in \Omega$, let $A(x) \in \mathcal{A}(X,Y)$ be a closed linear operator.
Assume that there is $A_0\in \mathcal{A}(X,Y)$ such that $dom(A(x)) = dom(A_0),\text{ for all }x\in \Omega.$ Furthermore, let $(x\mapsto A(x)) \in C(\overline{\Omega}, \L(D(A_0),Y))$. Then the operator \begin{align*} dom(\textbf{A}) &= L_p(\Omega, dom(A_0)),\\ \textbf{A} U &= \left[A(x)\hat{U}(x)\right], && \text{for } U\in dom(\textbf{A}),\, \hat{U} \in U, \end{align*} is a well defined, closed linear operator from $L_p(\Omega, X)$ to $L_p(\Omega, Y)$. If further $dom(A_0)$ is dense in $X$, then $\textbf{A}$ is densely defined. \end{lem} \begin{proof} The first statement follows from Lemma \ref{boundedShift} and a short calculation. The density assertion follows from the proof of Theorem 4.3.6 in \cite{Amann}. \end{proof} Now we turn to sectorial operators. Let $\omega \in \mathbb{R}, \theta \in (0, \pi)$. We set $$S_{\theta,\omega} = \{\lambda \in \mathbb{C}; \lambda \neq \omega, |arg(\lambda-\omega)| < \theta\}.$$ \begin{lem}\label{Lemma3} Let the assumptions of the previous lemma be fulfilled with $X=Y$. Assume further that there exist constants $\omega \in \mathbb{R}, \theta \in (0, \pi)$, $M\geq 1$ such that for every $x\in \Omega$, $S_{\theta,\omega} \subset \rho(-A(x))$ and \begin{align*} \|(\lambda +A(x))^{-1}\|_{\L(X)} &\leq \frac{M}{|\lambda-\omega|} && \text{for } \lambda \in S_{\theta,\omega}. \end{align*} Then $\textbf{A}$ is sectorial in $L_p(\Omega, X)$. \end{lem} \begin{proof} Let $\lambda \in S_{\theta,\omega}$. With Lemma \ref{boundedShift} we define $\textbf{R}_\lambda\in \L(L_p(\Omega, X))$ by $$\textbf{R}_\lambda U := [(\lambda + A(x))^{-1} \hat{U}(x)]$$ for $U\in L_p(\Omega, X)$. One easily checks that this is the inverse of $\lambda+ \textbf{A}$. Thus $\lambda \in \rho(-\textbf{A})$. Furthermore, it holds that \begin{align*} \|(\lambda+\textbf{A})^{-1}U\|_{L_p(\Omega,X)}^p&\leq \int_\Omega \|(\lambda + A(x))^{-1} \|^p_{\L(X)} \|U(x)\|_X^p \, d\mu(x)\\ &\leq \left(\frac{M}{|\lambda-\omega|}\right)^p \int_\Omega \|U(x)\|_X^p \, d\mu(x). \end{align*} So we conclude that $\textbf{A}$ is sectorial. \end{proof} \section{The Semilinear Problem} \subsection{Geometry}\label{1.1} The main idea of this part is to relate the cells' shapes to one standard cell, the unit ball $B= B(0,1) \subset \mathbb{R}^n$. Let $S= \partial B$ be its boundary. We assume that there are two mappings $\Psi, \Phi$ with \begin{align*} \Psi : \Omega \times B &\to \mathbb{R}^n,\\ \Phi: \Omega\times B &\to \mathbb{R}^n\times\mathbb{R}^n,\\ (x,y) &\mapsto (x, \Psi(x,y)). \end{align*} Now a cell at a point $x\in \Omega$ is the image of $B$ at $x$, i.e. $\Omega_x:= \Psi(x,B).$ We set $$Q:= \bigcup_{x\in \Omega} \{x\} \times \Omega_x.$$ Then $ Q= \Phi(\Omega\times B)$. To ensure that $\Omega_x$ is a bounded smooth domain as well, we impose some properties on $\Phi$ and $\Psi$: \begin{align} \Phi&\in \Lip(\Omega\times B, Q),\label{cond1}\\ \Phi^{-1} &\in \Lip( Q, \Omega\times B), \label{cond2}\\ \Phi(x, \cdot) &\in \diff(\overline{B}, \overline{\Omega}_x), && \textrm{for all }x\in \Omega,\label{cond3}\\ &\sup_{x\in \Omega, |\alpha|\leq 2} \left\{\|\partial^\alpha_y \Phi(x)\|_p, \|\partial^\alpha_z \Phi^{-1}(x)\|_p\right\} < \infty.\label{cond4} \end{align} Here $\|\cdot\|_p$ denotes the usual $L_p$-norm. The set $\diff(\overline{B}, \overline{\Omega}_x)$ denotes all $C^\infty$-diffeomorphisms from $\overline{B}$ to $\overline{\Omega}_x$ such that the restriction to the boundary $S$ gives a diffeomorphism onto $\Gamma_x$. It follows from the assumptions that $Q$ is measurable.
Further, for every $x\in \Omega$ the set $\Omega_x$ is a bounded domain with smooth boundary $\Gamma_x:= \partial \Omega_x$. Note that the special construction of $\Phi$ implies that it is injective. Thus we will be able to work with the trace operator on $B$ and transfer it to $\Omega_x$. The conditions ensure that the following maps are well defined isomorphisms. Given $2\leq p<\infty$, we define pull back and push forward operators \begin{align*} \Phi_* &: L_p(\Omega\times B)\to L_p(Q): U \mapsto U \circ \Phi^{-1},\\ \Phi^* &: L_p(Q)\to L_p(\Omega \times B): V \mapsto V\circ \Phi. \end{align*} The following definition of a function space is based on Bochner's integration theory. In \cite{Adams} (3.34, 3.35) it is proven for Sobolev--Slobodetski spaces that under these diffeomorphisms $W_p^s(\overline{B})$ is mapped onto $W_p^s(\overline{\Omega}_x)$ for $0\leq s\leq 2$. For $s=0$ we identify $W_p^0(\overline{B}) =L_p(B)$. With the space $L_p(\Omega, W_p^s(B))$ defined in the sense of Bochner, we set \begin{align} L_p(\Omega, W_p^s(\Omega_x)) := \Phi_* (L_p(\Omega, W^s_p(B))).\label{spaces} \end{align} We can prove that, equipped with the induced norm \begin{align*} \|f\|_{x,s}&:= \|\Phi^* f\|_{L_p(\Omega, W_p^s(B))}, &&f\in L_p(\Omega, W_p^s(\Omega_x)), \end{align*} this is a Banach space.\footnote{Note that hypothesis \eqref{cond4} ensures that different $\Phi$'s within the class \eqref{cond1} to \eqref{cond4} lead to equivalent norms.} For the formulation of the boundary conditions on the cells we need suitable trace operators. Our assumptions ensure that we can restrict $\Phi^*, \Phi_*$ to $L_p(\Omega\times S)$. We use the same notation for the pull back and push forward as on $\Omega\times B$. Let $s\geq 0$. We define \begin{align*} L_p(\Omega, W_p^s(\Gamma_x))&:= \Phi_* \left(L_p(\Omega, W_p^s(S))\right)\\ \|U\|_{L_p(\Omega, W^s_p(\Gamma_x))} &= \|\Phi^* U\| _{L_p(\Omega, W^s_p(S))}, && U\in L_p(\Omega, W^s_p(\Gamma_x)). \end{align*} As before, this is a Banach space. From Lemma \ref{boundedShift} we deduce that the shifted trace \begin{align*} \tr_S: L_p(\Omega, W_p^1(B)) \to L_p(\Omega, W_p^{1-\frac1p} (S)): \tr_S U = [tr_S\hat{U}], \end{align*} is a well defined linear operator. The last trace in the brackets is the usual trace on $B$. Next we transport this operator to $Q$. We set \begin{align*} \tr &:L_p(\Omega, W_p^1(\Omega_x)) \to L_p(\Omega, W_p^{1-\frac1p}(\Gamma_x)), \, \tr:= \Phi_* \tr_S \Phi^*. \end{align*} The continuity of $\Phi_*, \Phi^*, \tr_S$ ensures that $\tr$ is a continuous operator. In particular, $\tr U=0$ implies $\tr_S ( \Phi^* U)=0$. From Lemma \ref{boundedShift} we conclude that $\tr_S$ is a retraction. There exists a continuous right inverse $R_S$ of $\tr_S$ that maps constant functions on the boundary to constant functions on $B$. We define \begin{align*} R&:= \Phi_* R_S \Phi^*. \end{align*} Then this is a continuous right inverse to $\tr$. Let $u\in L_p(\Omega)$. We identify $$R u = R(u \cdot 1_S)\in L_p(\Omega, W_p^2(\Omega_x)).$$ With $\Delta_z$ we denote the Laplace operator in the coordinates $z\in \Omega_x$. Similarly, we write $\Delta_y$ and $\Delta_x$ for the Laplacians acting on functions over $B$ and $\Omega$, respectively. The definitions above ensure that \begin{align} \Delta_z Ru(x) =0, \qquad \text{for a.e. } x\in \Omega.\label{null} \end{align} This will be helpful in later calculations. Another definition of function spaces for the matched microstructure problem can be found in \cite{MeiBo}.
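For a concrete example within the class \eqref{cond1}--\eqref{cond4}, consider radially scaled cells $\Psi(x,y) = r(x)\,y$ with a smooth function $r$ bounded away from $0$ and $\infty$ (a model case of ours, not a restriction of the theory). The following sympy sketch verifies symbolically, for $n=2$ and $r$ frozen at a point, that the induced metric $g_{ij}$ introduced in the next subsection reduces to $r^2\delta_{ij}$, so that the transported operator is simply $-r^{-2}\Delta_y$. \begin{verbatim}
# Model case Psi(x,y) = r(x)*y: cells are balls of radius r(x).
# Symbolic check (n = 2, r frozen at a point, r > 0 assumed) that
# g_ij = r^2 delta_ij and the transported operator is -r^{-2} Delta_y.
import sympy as sp

r = sp.symbols('r', positive=True)
y1, y2 = sp.symbols('y1 y2')
Psi = sp.Matrix([r*y1, r*y2])

J = Psi.jacobian([y1, y2])
g = sp.simplify(J.T * J)            # g_ij = (d_i Psi | d_j Psi)
assert g == r**2 * sp.eye(2)

sqrt_g = sp.sqrt(g.det())           # = r^2  (= r^n for n = 2)
g_inv = g.inv()                     # = r^{-2} * identity

v = sp.Function('v')(y1, y2)
ys = (y1, y2)
Av = -(1/sqrt_g) * sum(
    sp.diff(sqrt_g * g_inv[i, j] * sp.diff(v, ys[j]), ys[i])
    for i in range(2) for j in range(2))
laplace_v = sp.diff(v, y1, 2) + sp.diff(v, y2, 2)
assert sp.simplify(Av + laplace_v / r**2) == 0
\end{verbatim}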
\subsection{Operators} \label{TransOp} To use existing results for strongly elliptic operators, we first consider some auxiliary operators. Let $A_1$ be the Dirichlet-Laplace operator on $\Omega$, \begin{align*} dom(A_1)&= W_p^2(\Omega)\cap W_p^{1,0}(\Omega),\\ A_1 u&=-\Delta_x u, && \text{for } u\in dom(A_1). \end{align*} It is well known that $A_1$ is sectorial. For each $x\in \Omega$ we define a Riemannian metric $g(x)$ on the unit ball $B$. We write \begin{align*} g_{ij}(x) &:=(\partial_{z_i} \Phi(x)|\partial_{z_j} \Phi(x)),\\ \sqrt{|g(x) |}&:= \sqrt{\det{g_{ij}(x)}},\\ g^{ij}(x) &:= (g_{ij}(x))^{-1}. \end{align*} Then the regularity assumptions on $\Phi$ imply that this metric is well defined and that there exist constants $C_1, C_2>0$ such that $C_1 |\xi|^2 \leq \sum_{i,j} g^{ij}(x) \xi_i \xi_j \leq C_2 |\xi|^2$ for all $\xi\in\mathbb{R}^n$. Let $U\in L_p(\Omega, W_p^1(\Omega_x))$, $V= \Phi^* U$. We set \begin{align} q(U)(x) := -\int_S \sum_{i,j}\sqrt{|g(x)|}\,g^{ij}(x)\, \partial_{y_i} \hat{V}(x) \cdot \nu_j \, ds. \end{align} Here $\nu=(\nu_1, \dots , \nu_n)$ denotes the outer normal vector on $B$. Then $q(U)$ is a function in $ L_p(\Omega)$. Using the transformation rule for integrals, one sees that this definition is consistent with \eqref{qU}. We define the operator $\textbf{A}_2$ in the transformed setting by \begin{align} dom(\textbf{A}_2) &= \{U \in L_p(\Omega, W_p^2(\Omega_x)); \tr U=0\},\\ \textbf{A}_2U &= \Phi_* [\mathcal{A}_x \hat{V}(x)]. \end{align} The brackets $[\cdot]$ again indicate taking the equivalence class, and $\hat{V}$ is a representative of $V$. Given $x\in \Omega$, the operator $\mathcal{A}_x$ acts in the following way on $v\in W_p^2(B)$: $$\mathcal{A}_x v = -\frac{1}{\sqrt{|g(x)|}} \sum_{i,j}\partial_{y_i}\left(\sqrt{|g(x)|} g^{ij}(x) \partial_{y_j}\right) v.$$ Note that $\mathcal{A}_x$ is the Laplace--Beltrami operator with respect to the Riemannian metric $g$. It holds: \begin{lem}\label{Lemma6} The operator $\textbf{A}_2$ is well defined. \end{lem} \begin{proof} The coefficients of $\mathcal{A}_x$ depend continuously on $x$. Moreover, the domain of definition is independent of $x\in \Omega$. Since $\Phi$ is defined up to the boundary of $\Omega$, the definition can be extended to its closure. So the assertion follows from Lemma \ref{Lemma2} and the properties of $\Phi_*$. \end{proof} The following lemma collects some properties of the defined operators. Let $R(\lambda, A) = (\lambda+A)^{-1}$ denote the resolvent operator of $-A$ for $\lambda\in \rho(-A)$. \begin{lem} Assume that for any $x\in \Omega$, $\Phi_x:= \Phi(x,\cdot)$ is orientation preserving. Further assume that the Riemannian metric $g^{ij}$ induced from $\Phi$ is well defined. For each cell we define the transformation $B_x$ of the Dirichlet-Laplace operator, \begin{align*} dom(B_x) &= W_p^2(B)\cap W_p^{1,0}(B),\\ B_x v &= \mathcal{A}_x v, && \textrm{ for } \ v\in dom(B_x). \end{align*} It holds: \renewcommand{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item The operators $B_x$ are strongly elliptic in $L_p(B)$. \item Given $x\in \Omega$, the operator $B_x$ is sectorial.
In addition, there exists a sector $$S_{\theta, \omega} = \{\lambda\in \mathbb{C}; \lambda\neq \omega, |arg (\lambda-\omega)| < \theta\},$$ and a constant $M_2>0$, both independent of $x$, such that \begin{align} \rho(-B_x) & \supseteq S_{\theta, \omega},\\ \| R(\lambda, B_x)\|_{\L(L_p(B))} &\leq \frac{M_2}{|\lambda-\omega|}, && \text{for all } \lambda \in S_{\theta,\omega}.\label{sect} \end{align} \item The operator $\textbf{B}$ in $L_p(\Omega \times B)$, given by \begin{align*} dom(\textbf{B}) &= L_p(\Omega, W_p^2(B)\cap W_p^{1,0}(B) ),\\ \textbf{B} V &= [ B_x \hat{V}(x) ], && V\in dom(\textbf{B} ),\, \hat{V} \in V, \end{align*} is well defined and sectorial. \item Set $\tilde{f} := \Phi^* f\in L_p(\Omega\times B)$. If the function $V\in L_p(\Omega, W_p^2(B)\cap W_p^{1,0}(B))$ is a solution of $\textbf{B} V=\tilde{f}$, then $U:= \Phi_* V$ fulfills $$ -\Delta_z U(x,\cdot) = f(x, \cdot), \ \ \text{ for a.e. }x\in \Omega.$$ Moreover, $$U(x,z) = 0, \ \ \text{ for a.e. }x\in \Omega, z\in \Gamma_x.$$ \end{enumerate} \end{lem} \begin{proof} The first part follows from the fact that strong ellipticity is preserved under transformations of coordinates. Part b) is a consequence of the definition of $\Phi$ and \cite{Luna}, Theorem 3.1.3. With the help of Theorem 9.14 in \cite{GilbTru} we conclude that the sector is independent of $x$. Now the rest follows by definition and Lemma \ref{Lemma6}. \end{proof} A more detailed proof can be found in \cite{Doktorarbeit}. Now we are ready to treat the coupled problem. Given $u\in L_p(\Omega)$, we set $$D_0(u):= \left\{ U\in L_p\left(\Omega, W_p^2(\Omega_x)\right); \tr U=u\right\}.$$ This is a closed affine subspace of $L_p(\Omega, W_p^2(\Omega_x))$. So we can define the operator $\textbf{A}$ by \begin{align*} dom(\textbf{A})&=\bigcup_{u\in W_p^2(\Omega)\cap W_p^{1,0}(\Omega)} \{u\} \times D_0(u),\\ \textbf{A} (u,U)&=\left(-\Delta_x u, [\Phi_* \mathcal{A}_x \Phi^* \hat{U}(x)] \right),&& \text{for } (u,U)\in dom(\textbf{A}). \end{align*} Observe that the operator contains the matching condition \eqref{match}. The exchange term $q(U)$ will appear as a term on the right hand side of the abstract problem \eqref{P}. Let $(f,g) \in L_p(\Omega)\times L_p(\Omega,L_p(\Omega_x))$, $\lambda \in S_{\theta,\omega}$. We consider the system \begin{align} (\lambda + \textbf{A}) (u,U) = (f,g), \qquad \text{ for } (u,U)\in dom(\textbf{A}).\label{problem} \end{align} This formally corresponds to \begin{align} \lambda u-\Delta_x u&=f,&& u\in W_p^2(\Omega)\cap W_p^{1,0}(\Omega),\label{7}\\ \lambda U-\Delta_z U &=g,&& U\in D_0(u).\label{8} \end{align} \begin{prop}\label{Prop8} The operator $-\textbf{A}$ is the generator of an analytic semigroup on the space $L_p(\Omega)\times L_p(\Omega,L_p(\Omega_x))$. \end{prop} \begin{proof} Let $\omega_i, \theta_i$ be such that $S_{\theta_1,\omega_1} \subset \rho(-A_1)$, $S_{\theta_2,\omega_2} \subset \rho(-\textbf{A}_2)$. Set \begin{align*} \omega &= \max\{\omega_1, \omega_2\},&& \theta = \min \{\theta_1,\theta_2\}. \end{align*} Then $S_{\theta, \omega} \subset \rho(-A_1)\cap \rho(-\textbf{A}_2)$. Take $\lambda \in S_{\theta,\omega}$. Without restriction we may suppose $\omega=0$. Since $A_1$ is sectorial, the function $u=R(\lambda, A_1)f$ solves \eqref{7}. Furthermore, there is $M_1\geq 1$ such that $$|\lambda| \|u\| \leq \||\lambda| R(\lambda, A_1) f\|\leq M_1 \|f\|.$$ For $U \in D_0(u)$ it holds that $$ U-Ru\in L_p(\Omega,W_p^2(\Omega_x))\cap \ker \tr=dom(\textbf{A}_2).$$ Here $R$ is the extension operator defined in Section~\ref{1.1}.
So \eqref{null} implies that \eqref{8} is equivalent to \begin{align} \lambda(U-Ru)+ \textbf{A}_2(U-Ru)=g-\lambda Ru.\label{9} \end{align} Since $\textbf{B}$ is sectorial, \eqref{9} has the unique solution $$U=\Phi_* R(\lambda, \textbf{B})\Phi^* (g-\lambda Ru)+Ru.$$ So we have shown that (\ref{problem}) has a unique solution for $\lambda\in S_{\theta,\omega}$. Hence we conclude that $\lambda \in \rho(-\textbf{A})$. To shorten the notation, we write $X_0 = L_p(\Omega,L_p(\Omega_x))$. It holds that \begin{align*} |\lambda| \| U\| &\leq |\lambda| \|\Phi_* R(\lambda,\textbf{B})\Phi^* g\|_{X_0} + |\lambda | \|\Phi_* R(\lambda,\textbf{B})\Phi^*\lambda R\ R(\lambda, A_1)f\|_{X_0} \\ &\qquad +|\lambda| \|R\ R(\lambda, A_1)f\|_{X_0}\\ &\leq M_2\|\Phi_*\| \|\Phi^*\| \|g\|_{X_0} +M_2\|\Phi_*\| \|\Phi^*\|\|R\| M_1\|f\|_{L_p(\Omega)} + M_1 \|R\| \|f\|_{L_p( \Omega)}. \end{align*} With this we estimate the norm of the resolvent of $-\textbf{A}$: \begin{align*} |\lambda| \|R(\lambda,\textbf{A})\|&=\sup\{ |\lambda| \|U\| + |\lambda|\|u\| ;u=R(\lambda, A_1)f,\\ & \qquad \qquad U=R(\lambda, \textbf{A}_2)(g-\lambda Ru)+Ru, \|f\|+\|g\|\leq 1\},\\ &\leq M_2\|\Phi_*\| \|\Phi^*\| + M_1 M_2\|\Phi_*\| \|\Phi^*\| \|R\| + M_1\|R\| + M_1=: M. \end{align*} Hence the sector is contained in the resolvent set $\rho(-\textbf{A})$ and the inequality above holds for a constant $M\geq 1 $ independent of $\lambda$. Thus $\textbf{A}$ is sectorial and $-\textbf{A}$ generates a holomorphic semigroup. \end{proof} We are now prepared to write the matched microstructure problem as an abstract evolution equation. Set $w=(u,U)$, $w_0=(u_0,U_0)$. We look for $w$ satisfying \begin{align}\label{P} \begin{cases} \partial_t w + \textbf{A} w =f(w),& t\in(0,T)\\ w(0)= w_0.& \end{cases}\end{align} To solve this semilinear problem we take $\frac12<\Theta<1$. Our goal then is to show that $$f: (0,T) \times [Y_0, D(\textbf{A})]_\Theta \to Y_0:=L_p(\Omega)\times L_p(\Omega,L_p(\Omega_x)) $$ is locally H\"older continuous in $t$ and locally Lipschitz continuous in $w$. For the initial value we will require that $w_0\in [Y_0,D(\textbf{A})]_\Theta$. Then results from Amann \cite{NLQ} imply local existence and uniqueness. \subsection{Interpolation and Existence Results}\label{3.3} Let $X_0, X_1$ be two Banach spaces that form an interpolation couple. Let $0\leq\Theta\leq 1$. We denote by $[X_0,X_1]_\Theta$ the complex interpolation space of order $\Theta$. Let $\tr_{\Gamma}: W_p^1(\Omega) \to W_p^{1-\frac1p}(\Gamma)$ be the trace operator on $\Omega$. R. Seeley showed in \cite{Seeley} that \begin{align}\label{15} \left[L_p(\Omega), W_p^2(\Omega) \cap \ker \tr_{\partial \Omega} \right]_\Theta = \begin{cases} W_p^{2\Theta}(\Omega), & \text{if } 2\Theta < \frac1p,\\ W_p^{2\Theta}(\Omega)\cap \ker \tr_{\partial \Omega}, & \text{if } 2\Theta > \frac1p. \end{cases} \end{align} He actually gives a proof for any normal boundary system (defined in the sense of \cite{Seeley}, \textsection 3). To determine $[Y_0,D(\textbf{A})]_\Theta$ we start with the case of uniform spherical cells $B$. Let $0<\Theta<1$. We set \begin{align*} X_0 &= L_p(\Omega) \times L_p(\Omega, L_p(B)),\\ X_1 &= \left( W_p^2(\Omega) \cap \ker \tr_\Gamma\right) \times L_p(\Omega, W_p^2(B)\cap \ker \tr_S). \end{align*} Then $\{X_0,X_1\}$ is an interpolation couple. Due to Proposition I.2.3.3 in \cite{Amann} it suffices to interpolate both factors separately. Let $2\Theta > \frac1p$.
We deduce from \eqref{15} that $$ \left[ L_p(\Omega), W_p^2(\Omega)\cap \ker \tr_\Gamma\right]_\Theta = \ker \tr_\Gamma \cap W_p^{2\Theta} (\Omega).$$ Further, the results in \cite{Calderon} and \eqref{15} show that \begin{align*} \left[L_p(\Omega, L_p(B)), L_p(\Omega, W_p^2(B) \cap \ker \tr_S)\right]_\Theta &= L_p(\Omega, \left[ L_p(B), W_p^2(B)\cap \ker \tr_S\right]_\Theta ),\\ &= L_p(\Omega, W_p^{2\Theta}(B)) \cap \ker \tr_S. \end{align*} The map $(I, \Phi_*): (u,U) \mapsto (u, \Phi_* U)$ is an isomorphism. It maps \begin{align*} X_0 = L_p(\Omega) \times L_p(\Omega, L_p(B)) \to L_p(\Omega)\times L_p(\Omega, L_p(\Omega_x))=: Y_0 \end{align*} as well as $$ X_1 \to (W_p^2(\Omega)\cap \ker \tr_\Gamma) \times L_p(\Omega, W_p^2(\Omega_x)) \cap \ker \tr =: Y_1.$$ So $\{Y_0,Y_1\}$ is an interpolation couple and Proposition I 2.3.2 from \cite{Amann} implies \begin{align*} \left[ Y_0,Y_1\right]_\Theta &= (I, \Phi_*) \left[X_0,X_1\right]_\Theta= \left(W_p^{2\Theta}(\Omega)\cap \ker \tr_\Gamma\right) \times \left(L_p(\Omega, W_p^{2\Theta}(\Omega_x)) \cap \ker \tr\right) \end{align*} for $2\Theta > \frac1p$. Finally, we define the isomorphism \begin{align*} J: Y_0 &\to Y_0: (u,U) \mapsto (u, U + Ru). \end{align*} Here $R$ is the retraction of the lifted trace. Then $J$ maps $Y_1$ onto $D(\textbf{A})$. Clearly, $T: Y_0\to Y_0 : (u,U) \mapsto (u, U- Ru)$ is the inverse of $J$. So $J$ fulfills the conditions of Proposition I 2.3.2 from \cite{Amann} for the interpolation couples $\{Y_0,Y_1\}$ and $\{Y_0, D(\textbf{A})\}$. Hence it maps $ \left[Y_0,Y_1\right]_\Theta$ onto $\left[Y_0, D(\textbf{A})\right]_\Theta.$ So for $2 \Theta > \frac1p$ we get explicitly \begin{align*} \left[Y_0, D(\textbf{A})\right]_\Theta &= J\left( \left[Y_0,Y_1\right]_\Theta\right)= \bigcup_{\begin{subarray}{l}u\in W_p^{2\Theta} (\Omega)\\ \quad \cap \ker \tr_\Gamma\end{subarray} } \{u\} \times \{U\in L_p(\Omega, W_p^{2\Theta}(\Omega_x)); \tr U = u\}. \end{align*} If $2 \Theta < \frac1p$, the boundary conditions drop in both scales. Hence we conclude $$ \left[Y_0, D(\textbf{A})\right]_\Theta = W_p^{2\Theta}(\Omega)\times L_p(\Omega, W_p^{2\Theta}(\Omega_x)).$$ A similar analysis can be done for real interpolation functors. In particular, \begin{align} (Y_0, D(\textbf{A}))_{1-\frac1p,p}& = \bigcup_{\begin{subarray}{l}u\in W_p^{2-\frac2p} (\Omega)\\ \quad \cap \ker \tr_\Gamma\end{subarray} } \{u\} \times \{U\in L_p(\Omega, W_p^{2-\frac2p}(\Omega_x)); \tr U = u\}. \end{align} Let $0<\Theta < 1$ and write $X^\Theta = [Y_0, D(\textbf{A})]_\Theta.$ We consider functions \begin{equation}\label{rechts} \underline{f}= (f,g): [0,\infty) \times X^\Theta \to Y_0. \end{equation} \begin{thm}\label{allgemein} Let $\underline{f} = (f,g)$ be as in \eqref{rechts}. Assume that $$\underline{f}\in C^{1-}([0,\infty) \times X^\Theta, Y_0),$$ i.e.\ that $\underline{f}$ is locally Lipschitz continuous, for some $0<\Theta<1$. Then for any $(u_0,U_0) \in X^\Theta$ there exists $T=T(u_0, U_0, \Theta)>0$ such that (\ref{P}) has a unique strong solution $w=(u,U)$ on $(0,T)$ which satisfies the initial conditions \begin{align*} u(t=0)&= u_0 \qquad\text{ and }\qquad U(t=0)=U_0. \end{align*} In particular, \begin{align*} w\in C^1\left([0,T), Y_0\right)\cap C\left([0,T), X^\Theta\right). \end{align*} \end{thm} \begin{proof} With the above considerations, Theorem 12.1 and Remark 12.2(b) from \cite{NLQ} can be applied to the abstract equation (\ref{P}). The regularity results are proved in \cite{Amann84}.
\end{proof} In order to state our main result on the matched microstructure model, let $$X_1^\Theta := W_p^{2\Theta} (\Omega)\cap \ker \tr_{\partial \Omega}, \qquad \qquad \Theta > \frac{1}{2p},$$ denote the first component of the interpolation space $X^\Theta$. \begin{cor} Let $$f\in C^{1-}\left([0,\infty)\times X_1^{1/2+1/2p}, L_p(\Omega)\right)$$ be given. Then for $(u_0, U_0)\in X^{1/2+1/2p}$ there exists $T>0$ such that the matched microstructure problem (MM) has a unique strong solution on $(0,T)$. \end{cor} \begin{proof} Let $\Theta=\frac12+\frac{1}{2p}$. Let $U\in L_p(\Omega, W_p^{2 \Theta}(\Omega_x))$, $V= \Phi^* U$. Since $q(\cdot)$ is time independent, it remains to show the Lipschitz continuity in $U$. There is $C>0$ such that \begin{align*} & \|q(U)\|^p_{L_p(\Omega)}\\ &\leq \int_\Omega \left| \int_S \sum_{i,j}\sqrt{|g(x,s)|} g^{ij}(x,s) \partial_{y_i} \hat{V}(x,s) \nu_j \, ds \right|^p \, dx\\ &\leq c^p_p\max_{\tiny{\begin{array}{c} (x,y)\in \overline{\Omega}\times \overline{B},\\ i,j \end{array}}} \left\{\sqrt{|g|(x,y)}\left|g^{ij}(x,y)\right|\right\}^p \int_\Omega \sum_{k=1}^n \int_S |\partial_{y_k} \hat{V}(x,s)|^p \,ds \, dx \\ &\leq C \int_\Omega \|\hat{V}(x)\|^p_{W_p^{2\Theta}(B)}\, dx = C\|U \|^p_{L_p(\Omega, W_p^{2\Theta} (\Omega_x))}. \end{align*} The constant $c_p$ is the embedding constant of $L_p(S)$ into $L_1(S)$. Together with the linearity of $q$, this shows that $q$ is locally Lipschitz continuous in $$W_p^{1+\frac1p}(\Omega)\times L_p(\Omega, W_p^{1+\frac1p}(\Omega_x)) \supset X^\Theta.$$ Setting finally $\underline{f}(u,U) := (f(u)+q(U),0)$, the assertion follows from Theorem \ref{allgemein}. \end{proof} \subsection{Exponential Decay under Dirichlet Boundary Conditions; the Neumann Case}\label{qualitativ} Now we assume that there are no external sources in the system, i.e. $f=0$. We will show that the corresponding solutions decay exponentially fast to zero. First we investigate the spectrum of $\textbf{A}$. Let $p=2$. The space $L_2(\Omega) \times L_2(\Omega, L_2(B))$ is a Hilbert space. This implies that $Y_0$ is a Hilbert space with the inner product $$((u,U), (w,W))_{Y_0} = (u,w)_{L_2(\Omega)} + (\Phi^* U, \Phi^* W)_{L_2(\Omega\times B)}$$ for $(u,U), (w,W) \in Y_0$. We introduce an extended operator $\textbf{A}_q$ by \begin{align*} dom(\textbf{A}_q) &= dom (\textbf{A}),\\ \textbf{A}_q (u,U) & = \textbf{A}(u,U) - (q(U),0), && \text{for all } (u,U)\in dom(\textbf{A}_q). \end{align*} We investigate the spectrum of $\textbf{A}_q$. It is convenient to introduce a weighted space $Y_g$. Set \begin{align*} Y_g&= L_2(\Omega)\times L_2(\Omega, L_2(\Omega_x, \sqrt{|g|})),\\ \|(u,U)\|^2_{Y_g} &= \|u\|^2_{L_2(\Omega)} + \|\Phi^* U \|^2_{L_2( \Omega, L_2(B,\sqrt{|g|}))}. \end{align*} This is a well defined Hilbert space with respect to the inner product \begin{align*} ((u,U), (w,W))_{Y_g} &= \int_\Omega uw + \int_{\Omega\times B} \sqrt{|g|}\,\Phi^* U\, \Phi^* W,&&\text{for }(u,U), (w,W) \in Y_g. \end{align*} \begin{lem} The operator $\textbf{A}_q$ is self adjoint in $Y_g$. \end{lem} \begin{proof} Take $(u,U), (w,W) \in dom(\textbf{A}_q)$. Let $V= \Phi^* U$, $Z=\Phi^* W$.
Then $$((u,U), (w,W))_{Y_g} = (u,w)_{L_2(\Omega)} + (V,Z)_{L_2(\Omega, L_2(B, \sqrt{|g|}))}$$ and \begin{align} &(\textbf{A}_q(u,U),(w,W))_{Y_g} = \int_\Omega \left(-\Delta_x u(x) w(x) - q(U)(x) w(x)\right) \, dx\nonumber \\ &\qquad - \int_\Omega \int_B \frac{1}{\sqrt{|g|}} \sum_{i,j} \partial_{y_i} \left(g^{ij} \sqrt{|g|} \partial_{y_j} V(x,y)\right) Z(x,y) \sqrt{|g|} \, dy \, dx.\label{Beweis} \end{align} Manipulating the last integral in (\ref{Beweis}) by integration by parts, together with the boundary conditions on $\Omega$ and $B$, shows that \begin{align*} & -\int_\Omega \int_B \frac{1}{\sqrt{|g|}} \sum_{i,j} \partial_{y_i} \left(g^{ij} \sqrt{|g|} \partial_{y_j} V(x,y)\right) Z(x,y) \sqrt{|g|} \, dy \, dx\\ &= -\int_\Omega \int_B V(x,y) \sum_{i,j}\partial_{y_j}\left( g^{ij} \sqrt{|g|}\partial_{y_i} Z(x,y)\right) \, dy \, dx + \int_\Omega q(U) w - \int_\Omega u\ q(W). \end{align*} This implies that $\textbf{A}_q$ is symmetric. From the theory of elliptic operators and the representation of $\textbf{A}$ we can conclude that $\textbf{A}$ is invertible; in particular we have that $im \textbf{A}= Y$. Our next step is to show that this holds for $\textbf{A}_q$ as well. Let $(v,V)\in Y$. We show that there exists $(z,Z)\in dom(\textbf{A}_q)$ with $\textbf{A}_q(z,Z)=(v,V)$. First, we know that there is $(u,U) \in dom(\textbf{A})$ such that $\textbf{A} (u,U)= (v,V)$. Then $$\textbf{A}_q(u,U) = (v-q(U),V).$$ Clearly $(q(U),0)\in Y$. Again there are functions $(w,W)\in dom( \textbf{A})$ with $\textbf{A}(w,W) = (q(U),0)$. This implies that $W(x) =const.$ for a.e. $x\in \Omega$. Thus $q(W)=0$. Then from the linearity of $\textbf{A}_q$ it follows that \begin{align*} \textbf{A}_q\{ (w,W)+(u,U)\} =(q(U),0) + (v-q(U), V) = (v,V). \end{align*} Thus $\textbf{A}_q$ is symmetric and $im(\textbf{A}_q)= Y$. It follows from $$ \ker (\textbf{A}_q^*) = im (\textbf{A}_q)^\perp = \{0\}$$ that the adjoint operator is injective. Thus $\textbf{A}_q \subset \textbf{A}_q^*$ implies the assertion. \end{proof} \begin{lem} There exists a constant $\sigma>0$ such that $$ (-\textbf{A}_q(u,U), (u,U))_{Y_g} \leq-\sigma ((u,U),(u,U))_{Y_g},\, \text{ for all }(u,U)\in dom(\textbf{A}_q).$$ \end{lem} \begin{proof} Let $(u,U)\in dom(\textbf{A}_q)$. Set $V= \Phi^* U.$ We make use of equivalent norms in $W_2^1$, of Sobolev embedding results, and of the fact that the metric $g^{ij}$ is bounded from below. Let $C>0$ denote an appropriate constant. It holds that \begin{align*} & (-\textbf{A}_q(u,U), (u,U))_{Y_g}\\&= -\int_\Omega |\nabla_x u|^2 + \int_\Omega q(U) u - \int_\Omega \int_B \sum_{i,j} g^{ij} \sqrt{|g|} (\partial_{y_i}V) (\partial_{y_j} V)\\ &\qquad + \int_\Omega \underbrace{\int_S \sum_{i,j} g^{ij} \sqrt{|g|} \partial_{y_i}V\cdot \nu_j}_{= -q(U)} \overbrace{V(x)}^{= u(x)}\\ &\quad\leq -C\left(\|u\|^2_{L_2(\Omega)} + \|U\|^2_{L_2(\Omega,L_2(\Omega_x))}\right)= -\sigma\left((u,U),(u,U)\right). \end{align*} Hence we have obtained a bound for the numerical range of $-\textbf{A}_q$ in the weighted space. \end{proof} The spectrum of a self adjoint operator is contained in the closure of its numerical range (see \cite{Kato}, Section V, \textsection 3). Hence the spectrum of $-\textbf{A}_q$ lies entirely to the left of $-\sigma$. Since the weighted norm and the usual norm on $Y$ are equivalent, we also get a spectral bound for $-\textbf{A}_q$ in the unweighted space. In particular, the right half plane is contained in the resolvent set of $-\textbf{A}_q$. We set \begin{align*} Q: D(\textbf{A}^{\frac12}) \to Y_0 : (u,U) \mapsto (-q(U),0).
\end{align*} Then $Q\in \mathcal{L}(Y_{\frac12},Y_0)$. Obviously $$-\textbf{A}_q = -\textbf{A} + Q$$ and the conditions of Proposition 2.4.1 in \cite{Luna} are satisfied. So $\textbf{A}_q$ is sectorial. The matched microstructure problem is equivalent to \begin{align}\label{MMPq} \begin{cases} \partial_t(u,U) + \textbf{A}_q (u,U) =0, \qquad t\in (0,T),\\ (u,U)(0)= (u_0,U_0). \end{cases} \end{align} \begin{prop}\label{Prop_decay} Let $(u,U)$ be a solution of (\ref{MMPq}). Then $(u,U) \to (0,0)$ exponentially fast. \end{prop} \begin{proof} This follows from the fact that for analytic semigroups the growth bound and the spectral bound coincide, as shown, e.g., in \cite{EngNa}, Corollary 3.12. \end{proof} Proposition \ref{Prop_decay} also allows us to apply the principle of linearized stability to the semilinear version of (MM), provided $f$ is of class $C^1$, cf. \cite{Luna}. We can also treat a modified model with no-flux or Neumann boundary conditions on $\Omega$. Thus we want $\partial_\nu u = 0 \text{ on }\Gamma.$ We set \begin{align*} dom(A_1^N) &= \left\{u\in W_p^2(\Omega); \partial_\nu u =0\text{ on }\Gamma\right\},\\ A_1^N u &= -\Delta_x u, &&\text{for all } u\in dom(A_1^N). \end{align*} The boundary conditions in the cells are not changed. Hornung and J\"ager also derived this model in \cite{HorJa}. Then $-A_1^N$ is the generator of a strongly continuous, analytic semigroup in $L_p(\Omega)$ for $1<p<\infty$. We set \begin{align*} dom(\textbf{A}^N) &= \bigcup_{u\in dom(A_1^N)} \{u\} \times D_0(u),\\ \textbf{A}^N (u,U) &= (A_1^N u, [\Phi_* \mathcal{A}_x \hat{V}(x)]), && \text{for } (u,U) \in dom (\textbf{A}^N), V= \Phi^* U. \end{align*} The modified model can be formulated as the evolution equation \begin{align}\label{MMPn} \begin{cases} \partial_t(u,U) + \textbf{A}^N (u,U) =(q(U),0),\qquad t\in (0,T),\\ (u,U)(0)= (u_0,U_0). \end{cases} \end{align} The changes in the operator occur only on the macroscopic scale. Thus they can be treated with well known results for elliptic operators on bounded domains, and the same considerations as for $\textbf{A}$ apply to $\textbf{A}^N$. Existence and uniqueness can be proved similarly as in Section \ref{3.3}. Nevertheless, the qualitative behaviour is different. \begin{prop} Let $(u_0,U_0)\in W_p^1(\Omega)\times L_p(\Omega, W_p^1(\Omega_x))$ and let $(u,U)$ be the solution to the matched microstructure problem with Neumann boundary conditions (\ref{MMPn}) on some time interval $[0,T]$. Then the total mass $$ S(u,U) := \int_\Omega u + \int_\Omega \int_B \sqrt{|g|} \Phi^* U, \qquad t\in (0,T], $$ is conserved. \end{prop} \begin{proof} This follows from a straightforward calculation. \end{proof} \section{Nonlinear Boundary Conditions}\label{Gravity} The following version of the matched microstructure problem brings gravity into the scheme. Let us restrict ourselves to uniform cells. Assume we are given a periodic and differentiable function $f : (0,2\pi) \to (0,\infty)$. We consider the fixed domain $$\Omega_f= \left\{(x,y) \in \mathbb{S }^1\times \mathbb{R}; 0<y<f(x)\right\},$$ shown in Figure \ref{Omega_f}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\textwidth]{fig1.eps} \caption{The periodic domain $\Omega_f$}\label{Omega_f} \end{center} \end{figure} The gravitational force points in the $-y$ direction. The almost cylindrical domain $\Omega_f$ can be treated with the same methods as before. For a related study on the torus see \cite{EsPro}. Let $h: \Omega_f\times (0,T) \to \mathbb{R}$ describe the sources and sinks in the macro system.
A solution of the matched microstructure model with gravity is a pair of functions $(u,U)$ that satisfies \begin{align*} (\text{P})\quad \left\{ \begin{array}{r cl l} \partial_t u -\Delta_x u &=& h + q(U), & \text{on } \Omega_f, t\in (0,T),\\ \partial_2 u &=& -u^2, & \text{on } \Gamma_0, t\in(0,T),\\ u&=& \rho_0,& \text{on } \Gamma_f, t\in (0,T),\\ \partial_t U - \Delta_y U &=& 0,& \text{in } \Omega_f\times B, t\in (0,T),\\ U&=&u, & \text{on }\Omega_f\times S, t\in (0,T),\\ (u,U)(0) &=& (u_0,U_0), &\text{on }\Omega_f \times (\Omega_f\times B). \end{array}\right. \end{align*} Let $v= u - \rho_0 \cdot \mathds{1}_{\Omega_f}$, $ V= U - \rho_0 \mathds{1}_{\Omega_f\times B}$. By definition (\ref{qU}) it holds that $q(U) =q(V+\rho_0) = q(V)$. So $(v,V)$ solves \begin{align*} (\text{P'})\quad \left\{ \begin{array}{r cl l} \partial_t v -\Delta_x v &=& h + q(V), & \text{on } \Omega_f, t\in (0,T),\\ \partial_2 v &=& -(v+\rho_0)^2, & \text{on } \Gamma_0, t\in(0,T),\\ v&=& 0,& \text{on } \Gamma_f, t\in (0,T),\\ \partial_t V - \Delta_y V &=& 0,& \text{in } \Omega_f\times B, t\in (0,T),\\ V&=&v, & \text{on }\Omega_f\times S, t\in (0,T),\\ (v,V)(0) &=& (u_0-\rho_0,U_0-\rho_0), &\text{on }\Omega_f \times (\Omega_f\times B). \end{array}\right. \end{align*} To treat the nonlinear boundary condition on the macroscopic scale we use a weak formulation. As before, we define operators $A_1$ and $\textbf{A}_q$ with linear zero boundary conditions ($\partial_2 v = 0$ on $\Gamma_0$, $v=0$ on $\Gamma_f$). The mixed Dirichlet-Neumann conditions on $\Omega_f$ do not affect the properties of the operators. In particular, $\textbf{A}_q$ is self-adjoint and $-\textbf{A}_q$ is the generator of an analytic semigroup in the Hilbert space $$Y_0= L_2(\Omega_f) \times L_2(\Omega_f\times B).$$ The operator $A_1$ is a well-known realisation of the Laplace operator, and it is invertible. The method presented here is due to Amann \cite{Amann88} and Escher \cite{Escher89}. The main idea is to move the nonlinearity from the boundary to the right hand side $h$. Therefore we need to construct an appropriate inverse operator from the trace space on $\Gamma_0$ to the domain $\Omega_f$. The resulting semilinear evolution equation can be treated as before. For $u\in L_2(\Omega_f)$ we set $$D_0^s(u) = \left\{ U\in L_2(\Omega_f, W_2^{2s}(B)), \tr_S U = u\right\}, \qquad \frac14<s<\infty.$$ In the rest of this section we drop the index $f$ of $\Omega_f$. Let $\tr_0, \tr_f$ denote the trace operators onto $\Gamma_0$ and $\Gamma_f$. By the boundary operator on $\Omega$ we mean the operator $\mathcal{B}$ with $\mathcal{B} u = \tr_0 \partial_\nu u + \tr_f u.$ We define \begin{align} Y_s = \begin{cases} \left\{ (u,U), u\in W_2^{2s}(\Omega), U\in D_0^s(u), \mathcal{B}u=0 \right\}, & \text{for } \frac32 < 2s\leq \infty,\\ \left\{ (u,U), u\in W_2^{2s}(\Omega), U\in D_0^s(u), \tr_f u=0\right\}, & \text{for } \frac12 < 2s \leq \frac32,\\ W_2^{2s}(\Omega)\times L_2(\Omega, W_2^{2s}(B)), & \text{for } 0\leq2s\leq \frac12. \end{cases}\label{spaces1} \end{align} To construct a suitable retraction, we first restrict ourselves to the macro scale. Let $\mathcal{A}_1= - \Delta_x $. Considered as an unbounded operator in $L_2(\Omega)$ it is closable. Together with $\mathcal{B}$ it fits into the scheme of \cite{Escher89}, Chapter 3. We will use the same notation. Let $\overline{\mathcal{A}}_1$ be the closure of $\mathcal{A}_1$. Then $W_2^2(\Omega)\stackrel{\mathrm{d}}\hookrightarrow D(\overline{\mathcal{A}}_1)$.
In addition we set $\mathcal{C} u = \tr_0 u+ \tr_f \partial_\nu u,$ and \begin{align*} \partial W_2^{2s} = W_2^{2s-\frac32}(\Gamma_0) \times W_2^{2s-\frac12}(\Gamma_f),&&\partial_1 W_2^{2s} = W_2^{2s-\frac12}(\Gamma_0) \times W_2^{2s-\frac32}(\Gamma_f), \end{align*} for $0\leq s\leq 1$. Combined together, the map $(\mathcal{B},\mathcal{C})\in \L(W_2^2(\Omega), \partial W_2^2\times \partial_1 W^2_2)$ is a retraction. So we can apply Theorem 4.1 from \cite{Amann88}: \begin{prop} There exists a unique extension $\left(\overline{\mathcal{B}}, \overline{\mathcal{C}}\right)\in \L(D\left(\overline{\mathcal{A}}_1\right), \partial W_2^0\times \partial_1 W^0_2)$ of $(\mathcal{B},\mathcal{C})$ such that for $u\in D(\overline{\mathcal{A}}_1)$, $v\in W_2^2(\Omega)$ the generalized Green's formula $$\langle v, \overline{\mathcal{A}}_1 u \rangle_{Y_0} + \langle \mathcal{C}v, \overline{\mathcal{B}}u\rangle_{\partial W_2^0} = \langle \mathcal{A}_1 v, u\rangle_{Y_0} + \langle \mathcal{B}v, \overline{\mathcal{C}}u\rangle_{\partial_1 W_2^0}$$ is valid. \end{prop} Then from interpolation theory (Proposition I 2.3.2 in \cite{Amann}) and well known a priori estimates for $\mathcal{A}_1$, it follows that $$\left(\overline{\mathcal{A}}_1, \overline{\mathcal{B}}\right) \in Isom((D(\overline{\mathcal{A}}_1),W_2^2(\Omega))_\theta, L_2(\Omega)\times \partial W_2^{2\theta}), \qquad \theta\in [0,1].$$ Therefore we can define the right inverse $$R_\theta = \left(\overline{\mathcal{A}}_1, \overline{\mathcal{B}}\right)^{-1}| \{0\} \times \partial W_2^{2\theta}, \qquad \theta \in [0,1].$$ Then $R_\theta \in \L(\partial W_2^{2\theta}, W_2^{2\theta}(\Omega))$. We now add the microscopic scale. By $\mathcal{Y}^s$ we mean \begin{align*} \mathcal{Y}^s = \begin{cases} \{(u,U)\in W_2^{2s}(\Omega)\times L_2(\Omega, W_2^{2s}(B)); U\in D_0^{s}(u)\}, & \frac12 < 2s\leq 2,\\ W_2^{2s}(\Omega)\times L_2(\Omega, W_2^{2s}(B)), & 0\leq 2s \leq \frac12. \end{cases} \end{align*} So if $s< \frac34$ the two sets $Y_s$ and $ \mathcal{Y}^s$ coincide. Define $\textbf{R}_\theta \in \L(\partial W_2^{2\theta},\mathcal{Y}^{2\theta})$ by $$\textbf{R}_\theta u= (R_\theta u, R_\theta u \cdot \mathds{1}_B), \qquad u\in \partial W_2^{2\theta}.$$ We set $$\partial_0 W_2^{2\theta} = \{u\in \partial W_2^{2\theta}; \tr_f u =0\}.$$ Obviously this is a closed linear subspace of $\partial W_2^{2\theta}$. It can be identified with the space $W_2^{2\theta-\frac32}(\Gamma_0)$. So it holds that $$\textbf{R}_\theta( \partial_0 W_2^{2\theta}) \subset Y_{2\theta},\qquad \text{if }2\theta \leq \frac32.$$ For the formulation of the abstract evolution problem we use the scale of interpolation and extrapolation spaces $\{(Y_\alpha, (\textbf{A}_q)_\alpha ), \alpha \in \mathbb{R}\}$ as defined by Amann \cite{Amann88}. For $0\leq\alpha \leq 1$ this corresponds to the interpolation spaces in Section \ref{3.3} and Definition (\ref{spaces1}). We set as in \cite{Escher89} \begin{align*} \mathbb{A} = (\textbf{A}_q)_{-\frac12}, \qquad H = Y_{-\frac12}, \qquad D=Y_{\frac12} = D(\mathbb{A}).
\end{align*} Then duality theory tells us that $D= H'$ and the duality pairings satisfy $$\langle \vec{u},\vec{v}\rangle_H = \langle \vec{u},\vec{v} \rangle_{Y_0} = (\vec{u}|\vec{v})_{Y_0}, \quad \text{ for } \vec{u}\in D, \vec{v}\in Y_0.$$ Let $a: D\times D \to \mathbb{R}$ be the coercive bilinear form $$a(\vec{u},\vec{v}) = \int_\Omega \nabla_x u \cdot \nabla_x v \, dx + \int_{\Omega\times B} \nabla_z U \cdot \nabla_z V \, d(x,z), \quad \vec{u}, \vec{v} \in D.$$ It will usually be clear from the context whether we refer to the function $u$ living on $\Omega$ or to the pair $\vec{u}=(u,U)$. For $\vec{u},\vec{v}\in D$ we get \begin{align*} \langle \vec{v}, \mathbb{A} \vec{u}\rangle_H &= \int_\Omega \nabla_x v \cdot \nabla_x u - \int_\Omega v q(U) + \int_{\Omega\times B } \nabla_z V \cdot \nabla_z U - \int_{\Omega\times S} V \nabla_z U \cdot \nu\\ & \qquad- \int_{\Gamma_0} v \partial_\nu u - \int_{\Gamma_f} v \nabla_x u \cdot \nu = a(\vec{u},\vec{v}). \end{align*} The possible approximation of $u$ by functions in $Y_1$ and the continuity of the left- and right-hand sides justify this formal calculation. To treat the nonlinear boundary condition we define the map $G: D \to L_2(\Gamma_0)$, $$G(u) = -\tr_0(u+\rho_0)^2.$$ We have to show that $(h,G)$ satisfies assumption (3.6) of \cite{Escher89}. For $h$ we just assume that $h\in Y_0$. For $G$ the properties are summarized in the following lemma. \begin{lem} \label{Lemma22} $$G\in C^1(D, W_2^{2\beta+\frac12}(\Gamma_0))$$ for any fixed $\beta \in (-\frac12,-\frac14)$, and the Lipschitz continuity is uniform on bounded sets. \end{lem} \begin{proof} Fix $\beta\in \left(-\frac12, -\frac14\right)$. Let $\vec{u}\in D$. Then also $u+ \rho_0 \in W_2^1(\Omega)$. From \cite{Amann_Multi}, Theorem 4.1, and the fact that the Besov space $B_{22}^s(\Omega)$ coincides with $W_2^s(\Omega)$, we know that the multiplication $ W_2^1( \Omega) \cdot W_2^1(\Omega) \to W_2^{1-\epsilon}(\Omega)$ is continuous for $0<\epsilon < 1$. We conclude that for fixed $\epsilon < \frac12$ we have $$(u+\rho_0)^2 \in W_2^{1-\epsilon}(\Omega) \qquad \text{and} \qquad \tr_0(u+\rho_0)^2 \in W_2^{\frac12-\epsilon}(\Gamma_0) .$$ Then by Sobolev embedding it holds that $ W_2^{\frac12-\epsilon}(\Gamma_0) \stackrel{\mathrm{d}}\hookrightarrow L_2(\Gamma_0) \stackrel{\mathrm{d}}\hookrightarrow W_2^{2\beta+ \frac12}(\Gamma_0).$ The second inclusion follows from the definition of $W_2^{-s}$ as a dual space for $s>0$. So finally $$-\tr_0 (u+\rho_0)^2 \in W_2^{2\beta +\frac12}(\Gamma_0).$$ The Fr\'echet derivative of $G$ is the linear operator $\partial G(u)v = -2\tr_0 (u+\rho_0) v.$ Thus $G\in C^1(D, W_2^{2\beta+\frac12}(\Gamma_0))$. It remains to show that the map is uniformly Lipschitz continuous on bounded sets. Let $W\subset D$ be bounded. Take $\vec{u},\vec{v}\in W$. Then \begin{align*} \tr_0 (u+\rho_0)^2 -\tr_0 (v+\rho_0)^2 = \tr_0 u^2 -\tr_0 v^2 + 2\rho_0 (\tr_0 u-\tr_0 v). \end{align*} Clearly the last term is uniformly Lipschitz on $W$. Further $W_2^1(\Omega) \hookrightarrow C(\overline{\Omega})$. So a bounded set in $D$ is bounded in $C(\overline{\Omega})$. Thus there exists a constant $c_1>0$ such that $\|u\|_\infty \leq c_1$ for all $\vec{u}\in W$. It follows from this and Sobolev embeddings that \begin{align*} \| \tr_0 u^2 -\tr_0 v^2\|_{W_2^{2\beta+\frac12} (\Gamma_0)}& \leq C \|\tr_0(u^2-v^2)\|_{L_2(\Gamma_0)}\\ & \leq L \|u-v\|_{W_2^1(\Omega)}. \end{align*} The last constant $L$ is independent of $\vec{u},\vec{v}\in W$. This completes the proof.
\end{proof} Now we define the right hand side in order to write (P') as an abstract evolution equation. Let $\textbf{R} :=\textbf{R}_{\frac12}$. Then it holds for $u \in W_2^{\frac12}(\Gamma_0)$ and $ \vec{v}\in D$ that \begin{align} \langle \vec{v}, \mathbb{A} \textbf{R} u\rangle_H = \langle v, u\rangle_{W_2^{\frac12}(\Gamma_0)}=: \langle v,u\rangle_{\Gamma_0},\label{Rand} \end{align} in the sense of traces. We set \begin{align*} F(\vec{u}) = (h,0)+ \mathbb{A} \textbf{R} G(\vec{u}), \qquad \vec{u}\in D. \end{align*} Note that the second component of $F(\vec{u})$ vanishes since $R_{\frac12}u\cdot \mathds{1}_B$ is constant on each cell. By assumption $h\in Y_0 \hookrightarrow Y_\beta$. It was shown in \cite{Escher89}, p.~301, that under these circumstances $F$ is well defined, and the previous lemma ensures that $$F\in C^1(D, Y_\beta) \text{ is uniformly Lipschitz continuous on bounded sets.}$$ \begin{prop}\label{Prop23} For each $\vec{u}_0\in D$ there is a unique maximal solution $\vec{u}(\cdot,\vec{u}_0)\in C([0,T_1), D)$ of the semilinear Cauchy problem \begin{align} \dot{\vec{u}} + \mathbb{A}\vec{u} = F(\vec{u}), \qquad \vec{u}(0)=\vec{u}_0\label{abstract} \end{align} with $0<T_1\leq \infty$. In addition, for any $\epsilon \in (0,\frac14)$ it holds that $$\vec{u}\in C((0,T_1), Y_{\epsilon + \frac12} )\cap C^1((0,T_1), Y_{\epsilon -\frac12}).$$ \end{prop} \begin{proof} Take $\beta = -\frac12 +\epsilon$. Then the assertions follow from \cite{Amann88}, Sect.~12, and the previous lemma. \end{proof} \renewcommand{\phi}{\varphi} By a \textit{weak solution} of (P') we mean a function $\vec{u} \in C^1([0,T_1), D)$ such that the initial condition $\vec{u}(0)= \vec{u}_0-\rho_0$ is satisfied, and $$ -\int_0^T \langle \dot{\phi},\vec{u}\rangle_H + a(\phi,\vec{u}) \, dt = \int_0^T \left(\langle \phi, (h,0)\rangle_H +\int_{\Gamma_0} \phi G(\vec{u})\right)\, dt + \langle \phi(0), \vec{u}_0-\rho_0\rangle_H $$ for all $0<T<T_1$, $\phi \in C([0,T], D)\cap C^1([0,T],H)$ with $\phi(T)=0$. So the above considerations show: \begin{cor} For each $\vec{u}_0\in Y_\frac12$ there exists a unique maximal weak solution of (P'). \end{cor} \begin{proof} This follows from the representation of $\mathbb{A}$ and (\ref{Rand}). \end{proof} \textbf{Remarks:}\\ (i) The construction in \cite{Escher89} allows one to consider a more general $h\in C^1(D,Y_\beta)$. This means that a full semilinear version of (P') can be treated.\\ (ii) Abstract results on evolution equations in interpolation-extrapolation scales ensure that the solution satisfies $$(u,U) \in C((0,T_1),Y_1)\cap C^1((0,T_1),Y_0).$$ So the system (P') is satisfied pointwise in time.
\section{Introduction} The discovery of neutrino oscillations, which shows that neutrinos have non-zero masses, has opened the door to physics beyond the Standard Model~(SM). The oscillation experiments so far have provided rather precise values of the mass squared differences and mixing angles of the active neutrinos~\cite{Gonzalez-Garcia:2014bfa}. There are, however, still unknown properties of the active neutrinos, {\it i.e.}, the ordering and the absolute values of the neutrino masses, the violation of CP symmetry in the leptonic sector, and the Dirac or Majorana nature of neutrinos. In addition, we do not know whether additional particles associated with the origin of neutrino masses exist. A heavy neutrino is a well-motivated particle in models of neutrino masses. One of the most attractive examples is the model with the canonical seesaw mechanism~\cite{seesaw}, where right-handed neutrinos are introduced with Majorana masses. In this case the mass eigenstates are three active neutrinos and heavy neutrinos, and both are Majorana particles. Usually, heavy neutrinos are considered to be much heavier than $m_W$ and even close to the unification scale $\sim 10^{16}$~GeV. Such heavy particles are attractive since they can also account for the baryon asymmetry of the universe (BAU) via leptogenesis~\cite{Fukugita:1986hr}. On the other hand, heavy neutrinos with masses below $m_W$ are also attractive. Even in this case the seesaw mechanism is still effective, provided the Yukawa coupling constants of neutrinos are suppressed. Furthermore, the BAU can be explained by a different mechanism~\cite{Akhmedov:1998qx,Asaka:2005pn}. Heavy neutrinos with masses of $\sim 100$~MeV are interesting for supernova explosions~\cite{Fuller:2009zz}. If the mass is around the keV scale, the heavy neutrino can be a candidate for the dark matter~\cite{Dodelson:1993je}. Further, it may explain the origin of pulsar velocities~\cite{pulsar}. (See, for example, Ref.~\cite{Kusenko:2009up} for the astrophysics of heavy neutrinos.) Therefore, heavy neutrinos which are lighter than the electroweak scale are also well-motivated particles beyond the SM. Interestingly, such particles can be tested in terrestrial experiments~\cite{terrestrial}. If neutrinos are Majorana particles, the lepton number of the SM Lagrangian is broken. In this case there appear various phenomena which are absent in the SM. The contribution from a heavy Majorana neutrino can be significant, depending on its mass and mixing. The well-known example is the neutrinoless double beta decay $(Z,A) \to (Z+2,A) + 2 e^-$. See, for example, a recent review~\cite{Pas:2015eia} and references therein. When the mass is of the order of 0.1--1~GeV, the contribution from a heavy Majorana neutrino can be significant enough to alter the rate predicted solely from active neutrinos. The LNV process $e^- e^- \to W^- W^-$ (called the inverse neutrinoless double beta decay~\cite{Rizzo:1982kn}) is another interesting possibility to test the Majorana property of a heavy neutrino. Various aspects of this process have been investigated so far~\cite{i0nbb}. It is a good target for future lepton colliders such as the International Linear Collider (ILC)~\cite{Baer:2013cma} and the Compact Linear Collider (CLIC)~\cite{Accomando:2004sz}.
Another example is the rare decay of a meson, $M^+ \to \ell^+ \ell'{}^+ M'{}^-$, where $M$ and $M'$ are mesons and $\ell$ and $\ell'$ are charged leptons with the same charge~\cite{Ng:1978ij,Abad:1984gh,Littenberg:1991ek,Dib:2000wm,Ali:2001gsa,Atre:2005eb,Cvetic:2010rw,Canetti:2014dka,Milanes:2016rzr,Cvetic:2016fbv}. See the current experimental limits on these processes in Refs.~\cite{terrestrial,Agashe:2014kda}. A heavy Majorana neutrino with an appropriate mass gives a sizable contribution to these processes, and upper bounds on its mixing are obtained from the experimental data. In this paper we discuss the LNV decay of $B$ mesons induced by a heavy Majorana neutrino with a GeV-scale mass. In particular, we study the testability of the mode $B^+ \to \mu^+ \mu^+ \pi^-$ at future experiments. The expected limits on the mixing of the heavy neutrino from Belle II~\cite{SuperKEKB} and from the $e^+ e^-$ collisions on the $Z$-pole at the future circular collider (FCC-ee)~\cite{FCCee} will be presented. \section{Heavy Majorana neutrino} We consider a heavy Majorana neutrino $N$ with a GeV-scale mass $M_N$ which mixes with the ordinary left-handed neutrinos $\nu_{L \alpha}$ ($\alpha = e, \mu, \tau$) as \begin{align} \nu_{L \alpha} = U_{\alpha i} \, \nu_i + \Theta_{\alpha } \, N \,, \end{align} where $U_{\alpha i}$ is the PMNS mixing matrix of the active neutrinos $\nu_i$ ($i=1,2,3$). In this case $N$ takes part in the weak gauge interactions, suppressed by the mixing $\Theta_\alpha$. Here we discuss only one heavy neutrino for simplicity, but the extension to the case with more heavy neutrinos is straightforward by replacing $\Theta_\alpha N$ with $\sum_I \Theta_{\alpha I} N_I$. If heavy neutrinos provide the tiny neutrino masses through the seesaw mechanism, their masses and mixings must satisfy a certain relation in order to explain the experimental results of the neutrino oscillations. However, to keep the argument general, we do not specify the origin of $N$ and consider $M_N$ and $\Theta_\alpha$ as free parameters in this analysis. It is possible to test the heavy neutrino $N$ directly in various experiments because of the smallness of its mass. Since no signal of this particle has been found, upper bounds on the mixing $|\Theta_\alpha|$ are imposed by various experiments, depending on its mass~\cite{terrestrial}. It is therefore important, as a first step, to search for it in future experiments. Furthermore, not only the discovery but also a detailed study is crucial to reveal the properties of $N$. In the present analysis we consider the experimental test of the LNV that would establish the Majorana property of $N$. In particular, we focus on the LNV decay of the $B$ meson as a concrete example \footnote{In this analysis we discuss only the decay into two muons, but the extension to the decays into like-sign leptons with other flavors is straightforward.} \begin{align} \label{eq:LNVB} B^+ \to \mu^+ \, N \to \mu^+ \, \mu^+ \, \pi^- \,, \end{align} which is mediated by the on-shell $N$ as shown in Fig.~\ref{Fig:Bdec}. \begin{figure}[t] \centerline{ \includegraphics[width=10cm]{b_decay.eps} } \caption{ LNV decay process of the charged $B$ meson. } \label{Fig:Bdec} \end{figure} Notice that there is also the charge-conjugate process, which is implicit from now on. For kinematical reasons we restrict ourselves to the mass region \begin{align} m_B - m_\mu > M_N > m_{\pi} + m_\mu \,.
\end{align} In the process (\ref{eq:LNVB}) the production rate of $N$ is proportional to $|\Theta_\mu|^2$ and the decay rate is also proportional to $|\Theta_\mu|^2$, so the LNV signal is induced as a $|\Theta_\mu|^4$ effect. This process has been discussed as an interesting target for the Belle and LHCb experiments~\cite{terrestrial,Cvetic:2010rw,Canetti:2014dka,Milanes:2016rzr,Cvetic:2016fbv}. The recent results of the search for $B^+ \to \mu^+ \mu^+ \pi^-$ were obtained by Belle~\cite{Liventsev:2013zz} and LHCb~\cite{Aaij:2014aba}. (See also Ref.~\cite{Shuve:2016muy} for the revision of the LHCb limit.) They presented upper bounds on the mixing $|\Theta_\mu|^2$ as shown in Fig.~\ref{Fig:UB}. In the same figure we also present various constraints on the heavy neutrino, taken from Ref.~\cite{terrestrial}. It is found that these bounds on $|\Theta_\mu|^2$ are weaker than other constraints on the heavy neutrino which are applicable to both the Dirac and Majorana cases. The future prospects of the LHCb search for the LNV decays of $B$ and $B_c$ mesons including (\ref{eq:LNVB}) have been discussed in Ref.~\cite{Milanes:2016rzr}. The sensitivity to the mixing using the mode $B_c^+ \to \mu^+ \mu^+ \pi^-$ at LHC run 3, which is better than that of (\ref{eq:LNVB}), is also shown in Fig.~\ref{Fig:UB}. In the present analysis, we therefore investigate the search for the process (\ref{eq:LNVB}) at Belle II and FCC-ee. \begin{figure}[ht] \centerline{ \includegraphics[width=17cm]{FIG_UB_THmusq.eps} } \caption{ The sensitivity limits on $|\Theta_\mu|^2$ from the LNV decay $B^+ \to \mu^+ \mu^+ \pi^-$ due to a heavy neutrino at Belle II with $N_B=5 \times 10^{10}$ (magenta dot-dashed line) and at FCC-ee with $N_Z = 10^{13}$ (red solid line). The orange long-dashed line is the limit from $W^+ \to \mu^+ \mu^+ \pi^-$ at FCC-ee with $N_W = 2 \times 10^{8}$. For comparison we also show the limit from the LNV decay $B^+_c \to \mu^+ \mu^+ \pi^-$ at LHCb for LHC run 3~\cite{Milanes:2016rzr} (cyan solid line). The blue dashed lines are the upper bounds from the LNV $B$ decays by LHCb~\cite{Aaij:2014aba} and Belle~\cite{Liventsev:2013zz}. The gray region is excluded by search experiments: DELPHI~\cite{Abreu:1996pa}, NA3~\cite{Badier:1986xz}, CHARM II~\cite{Vilain:1994vg}, BEBC~\cite{CooperSarkar:1985nh}, and NuTeV~\cite{Vaitaitis:1999wq}. } \label{Fig:UB} \end{figure} \section{Search at Belle II} Let us first consider the search for the LNV decay of $B^+$ shown in Eq.~(\ref{eq:LNVB}) at Belle~II~\cite{SuperKEKB}, where $5 \times 10^{10}$ pairs of $B$ mesons (at 50~ab$^{-1}$) are planned to be produced. In this analysis we take the number of $B^+$ as $N_{B} = 5 \times 10^{10}$ and the energy as $E_{B} = m_{B^\pm}$, since the produced $B^\pm$'s are sufficiently slow. Let us then estimate the expected number of signal events. First, the partial decay rate of $B^+ \to \mu^+ N$ is given by \begin{align} \Gamma (B^+ \to \mu^+ \, N) &= \frac{G_F^2 \, f_{B^\pm}^2 \, m_{B^\pm}^3}{8 \pi} \, |V_{ub}|^2 \, |\Theta_{\mu}|^2 \left[ r_\mu^2 + r_N^2 - (r_\mu^2 - r_N^2)^2 \right] \sqrt{ 1 - 2( r_\mu^2 + r_N^2 ) + (r_\mu^2 - r_N^2)^2 } \,, \end{align} where $f_{B^\pm}$ is the decay constant, $V_{ub}$ is the CKM element, and \begin{align} r_\mu = \frac{m_\mu}{m_{B^\pm}} \,,~~~~~ r_N = \frac{M_N}{m_{B^\pm}} \,. \end{align} Notice that the rate is enhanced by $M_N^2/m_\mu^2$ for $M_N \gg m_\mu$, since the heavy neutrino mass lifts the helicity suppression of this process.
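As a rough numerical illustration of the rate formula above (our own sketch, not part of the original analysis), the following Python fragment evaluates $\Gamma(B^+\to\mu^+ N)$ as a function of $M_N$. The values taken for $f_{B^\pm}$, $|V_{ub}|$, and $|\Theta_\mu|^2$ are assumed, illustrative inputs only; the absolute normalization carries exactly the $f_{B^\pm}$ and $|V_{ub}|$ uncertainties addressed below.
\begin{verbatim}
# Minimal numerical sketch (not the paper's code): evaluates the
# two-body rate Gamma(B+ -> mu+ N) for illustrative, assumed inputs.
import math

G_F  = 1.1664e-5   # Fermi constant [GeV^-2]
m_B  = 5.279       # B+- mass [GeV]
m_mu = 0.1057      # muon mass [GeV]
f_B  = 0.19        # decay constant [GeV] (illustrative value)
V_ub = 4.1e-3      # CKM element (illustrative value)
th2  = 1e-5        # |Theta_mu|^2 (assumed mixing)

def gamma_B_to_muN(M_N):
    """Partial width [GeV] for B+ -> mu+ N with heavy-neutrino mass M_N."""
    r_mu, r_N = m_mu / m_B, M_N / m_B
    phase = 1.0 - 2.0 * (r_mu**2 + r_N**2) + (r_mu**2 - r_N**2)**2
    if phase <= 0.0:          # kinematically closed region
        return 0.0
    helicity = r_mu**2 + r_N**2 - (r_mu**2 - r_N**2)**2
    pref = G_F**2 * f_B**2 * m_B**3 / (8.0 * math.pi)
    return pref * V_ub**2 * th2 * helicity * math.sqrt(phase)

# the rate grows roughly as M_N^2 once M_N >> m_mu, illustrating the
# lifting of the helicity suppression, until phase space closes
for M_N in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(M_N, gamma_B_to_muN(M_N))
\end{verbatim}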
In order to avoid the uncertainties in $f_{B^\pm}$ and $V_{ub}$, the branching ratio of $B^+ \to \mu^+ N$ is estimated as \begin{align} Br (B^+ \to \mu^+ N) = \frac{\Gamma (B^+ \to \mu^+ N)}{\Gamma ( B^+ \to \tau^+ \nu_\tau)} \times Br( B^+ \to \tau^+ \nu_\tau) \,, \end{align} where the branching ratio of $B^+ \to \tau^+ \nu_\tau$ is $Br( B^+ \to \tau^+ \nu_\tau) = (1.14 \pm 0.27) \times 10^{-4}$~\cite{Agashe:2014kda}. In order to estimate the number of signal events, the energy distribution of $N$ in $B^+ \to \mu^+ N$ is important, since it determines the decay length of $N \to \mu^+ \pi^-$. In the present case, since the two-body decay occurs at rest, it is simply given by \begin{eqnarray} E_N = \frac{m_{B^\pm}^2 + M_N^2 - m_\mu^2}{2 m_{B^\pm}} \,. \end{eqnarray} The number of signal events is then \begin{eqnarray} \label{eq:Nevent1} N_{\rm event} &= 2 \, N_{B^+} \, Br (B^+ \to \mu^+ N) \, P( N \to \mu^+ \pi^- ; E_N, L_{\rm det} ) \,, \end{eqnarray} where $P( N \to \mu^+ \pi^- ; E_N, L_{\rm det} )$ is the probability that the signal decay $N \to \mu^+ \pi^-$ occurs inside the detector, given by \begin{align} \label{eq:P} P( N \to \mu^+ \pi^- ; E_N, L_{\rm det} ) = \frac{\Gamma (N \to \mu^+ \pi^-)}{\Gamma_N} \left[ 1 - \exp \left( - \frac{M_N \Gamma_N L_{\rm det}}{E_N} \right) \right] \,, \end{align} where $\Gamma_N$ is the total decay rate of $N$. We calculate $\Gamma_N$ for the case where $\Theta_\mu \neq 0$ and $\Theta_e = \Theta_\tau =0$, taking into account the possible decay channels by using the expressions for the partial rates in Ref.~\cite{Gorbunov:2007ak}. On the other hand, the partial rate of $N \to \mu^+ \pi^-$ is given by \begin{align} \Gamma (N\to \mu^+ \pi^-) &= \frac{1}{16 \pi} |\Theta_\mu|^2 |V_{ud}|^2 G_F^2 f_{\pi^\pm}^2 M_N^3 \left[ \left( 1 - \frac{m_\mu^2}{M_N^2} \right)^2 - \frac{m_{\pi^\pm}^2}{M_N^2} \left( 1 + \frac{m_\mu^2}{M_N^2} \right) \right] \nonumber \\ &\times \left[ 1 - 2 \frac{m_{\pi^\pm}^2 + m_\mu^2}{M_N^2} + \frac{(m_{\pi^\pm}^2 - m_\mu^2)^2}{M_N^4} \right]^{1/2} \,. \end{align} Here we take $m_{\pi^\pm}=139.6$~MeV, $f_{\pi^\pm}=130.4$~MeV and $|V_{ud}|=0.9743$~\cite{Agashe:2014kda}. The typical size of the detector is denoted by $L_{\rm det}$, and for simplicity we take $L_{\rm det}=1.5$~m for the Belle II detector. Note that the factor 2 in Eq.~(\ref{eq:Nevent1}) represents the contribution from the charge-conjugate process of (\ref{eq:LNVB}). We assume that there are no background events, and the sensitivity limit on $|\Theta_\mu|^2$ at 95~\% C.L. is then obtained from $N_{\rm event}=3.09$~\cite{Feldman:1997qc}. The result is shown in Fig.~\ref{Fig:UB}. It is seen that Belle II can probe the LNV effect of a heavy neutrino with $M_N \simeq 2$--3~GeV and $|\Theta_\mu|^2 = {\cal O}(10^{-5})$, which is consistent with the various experimental constraints.\footnote{This issue has also been discussed in Ref.~\cite{Cvetic:2016fbv}. Although they did not present a quantitative estimate of the limit, their qualitative result is consistent with ours.} Interestingly, the sensitivity is better than that of the test of $B_c^+ \to \mu^+ \mu^+ \pi^-$ at LHCb for LHC run 3~\cite{Milanes:2016rzr}. \section{Search at FCC-ee} Next, we consider the search at a proposed future facility, the $e^+ \, e^-$ collisions at the Future Circular Collider (FCC-ee). It is planned to produce $10^{12}$--$10^{13}$ $Z$ bosons at the $Z$-pole, $\sqrt{s}=m_Z$. The direct search for the heavy neutrino at FCC-ee has been discussed in Ref.~\cite{Blondel:2014bra}.
The method there cannot determine whether the heavy neutrino is a Dirac or Majorana particle. Here we shall discuss the sensitivity of the LNV process~(\ref{eq:LNVB}) aiming to test the Majorana property of the heavy neutrino. The number of $B^+$ in $Z$ decays is estimated as \begin{align} N_{B^+} = N_Z \times Br ( Z \to b \bar b) \times f_u \,, \end{align} where $N_Z$ is the number of $Z$ bosons produced at FCC-ee, and $N_Z = 10^{13}$ is assumed in the present analysis. $Br (Z \to b \bar{b})=0.1512$~\cite{Agashe:2014kda} is the branching ratio of $Z \to b \bar{b}$ and $f_u = 0.410$~\cite{Amhis:2014hma} is the fraction of $B^+$ from $\bar b$ quarks in $Z$ decays. It is then found that $N_{B^+} = 6.20 \times 10^{-2} \, N_Z$ is much larger than in the case of Belle II, from which we can expect a much better sensitivity at FCC-ee. Although the produced $B^+$'s have an energy distribution peaked at $E_{B^+} \sim 40$~GeV (see, {\it e.g.}, Ref.~\cite{Ackerstaff:1998zf}), we shall set \begin{align} E_{B^+} = \frac{m_Z}{2} \,, \end{align} for simplicity. In this case the distribution of the energy of $N$ in $B^+ \to \mu^+ N$ is flat, \begin{align} \frac{1}{\Gamma_{B^+ \to \mu^+ N}} \frac{d \Gamma_{B^+ \to \mu^+ N}}{d E_N} = \frac{1}{p_{B^+} \beta_f} \,, \end{align} for the energy range $E_N^+ \ge E_N \ge E_N^-$. Here $p_{B^+}= \sqrt{E_{B^+}^2 - m_{B^\pm}^2}$ and \begin{align} \beta_f &= \sqrt{ 1 - \frac{2 (M_N^2+m_\mu^2)}{m_{B^\pm}^2} + \frac{(M_N^2 - m_\mu^2)^2}{m_{B^\pm}^4}} \, , \\ E_N^\pm &= \frac{(m_{B^\pm}^2 + M_N^2 - m_\mu^2) E_{B^+} \pm p_{B^+} m_{B^\pm}^2 \beta_f}{2 m_{B^\pm}^2} \,. \end{align} The number of signal events~(\ref{eq:LNVB}) is then estimated as \begin{align} N_{\rm event} &= 2 \, \int_{E_N^-}^{E_N^+} dE_N N_{B^+} \, Br (B^+ \to \mu^+ N) \, \frac{1}{p_{B^+} \beta_f} \, P( N \to \mu^+ \pi^- ; E_N, L_{\rm det} ) \,. \end{align} We now take $L_{\rm det}=2$~m for the probability $P( N \to \mu^+ \pi^- ; E_N, L_{\rm det} )$ in Eq.~(\ref{eq:P}). In Fig.~\ref{Fig:UB} we also show the sensitivity limit on the mixing $|\Theta_\mu|^2$ from the LNV decay $B^+ \to \mu^+ \mu^+ \pi^-$ at FCC-ee with $N_Z =10^{13}$. As in the previous case we assume no background events and estimate the limit from $N_{\rm event}=3.09$. We can see that FCC-ee greatly improves the sensitivity compared with those of Belle II and LHCb for LHC run 3. For a heavy Majorana neutrino with $M_N \simeq 4$~GeV, mixings down to $|\Theta_\mu|^2 \mathop{}_{\textstyle \sim}^{\textstyle >} 10^{-6}$ can be probed. Thus, FCC-ee can offer a significant test of LNV by a heavy Majorana neutrino. One might think that the LNV signal could be boosted for $N$ produced in $B_c$ mesons, since the partial rate of $B_c^+ \to \mu^+ N$ receives a milder suppression factor $|V_{cb}|^2 = 1.69 \times 10^{-3}$ rather than $|V_{ub}|^2=1.71 \times 10^{-5}$~\cite{Agashe:2014kda}. The production of $B_c$ in $Z$ decays, however, is difficult, with the branching ratio $Br (Z \to B_c^+ + b + \bar c) = (2.04-3.33) \times 10^{-5}$~\cite{Bcprod}. Thus, the number of LNV events through $B_c$ mesons is smaller than that through $B$ mesons, and we shall neglect it in the present analysis. It is, however, an interesting target for the LHCb experiment, as discussed in Ref.~\cite{Milanes:2016rzr}. See also Fig.~\ref{Fig:UB}.
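For concreteness, the following sketch (our own illustration, not the paper's code) implements the decay-in-detector probability of Eq.~(\ref{eq:P}) and the event count of Eq.~(\ref{eq:Nevent1}); all widths, energies, and the branching ratio are user-supplied inputs, and natural units are converted to meters via $\hbar c = 1.973\times 10^{-16}$~GeV$\cdot$m.
\begin{verbatim}
# Minimal sketch (assumed inputs, not the paper's code): probability of
# Eq. (8) that N -> mu+ pi- occurs inside a detector of size L_det,
# and the event count of Eq. (7).
import math

HBARC = 1.973e-16  # GeV * m

def decay_probability(M_N, E_N, Gamma_N, Gamma_N_mupi, L_det):
    """P(N -> mu+ pi-; E_N, L_det) following Eq. (8).

    M_N, E_N     : heavy-neutrino mass and lab energy [GeV]
    Gamma_N      : total width [GeV]
    Gamma_N_mupi : partial width to mu+ pi- [GeV]
    L_det        : detector size [m]
    """
    branching = Gamma_N_mupi / Gamma_N
    # exponent M_N * Gamma_N * L_det / E_N, with L_det in natural units
    exponent = M_N * Gamma_N * (L_det / HBARC) / E_N
    return branching * (1.0 - math.exp(-exponent))

def n_events(N_B, Br_BmuN, P):
    """Eq. (7): the factor 2 counts the charge-conjugate mode."""
    return 2.0 * N_B * Br_BmuN * P
\end{verbatim}
For the flat energy distribution at FCC-ee, the same probability would simply be averaged over $E_N \in [E_N^-, E_N^+]$ with weight $1/(p_{B^+}\beta_f)$, as in the integral above.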
We should mention that FCC-ee offers another promising test of LNV induced by a heavy Majorana neutrino.\footnote{ The Majorana property of the heavy neutrino may also be probed in $e^+ e^- \to N \nu \to \ell q \bar q' \nu$ by using the angular distribution between $N$ and the incoming $e^-$~\cite{delAguila:2005pin}. In addition, LNV processes like $e^+ e^- \to N e^\pm W^\mp \to \ell^\pm W^\pm e^\pm W^\mp$, leading to same-sign dileptons with four hadronic jets, are also an interesting target~\cite{Banerjee:2015gca}. } It is planned to produce more than $2 \times 10^8$ $W$ pairs at center-of-mass energies at the $WW$ threshold and above~\cite{Gomez-Ceballos:2013zzn}. In this case the LNV decay $W^+ \to \ell^+ N \to \ell^+ \ell'{}^+ \pi^-$ can be tested.\footnote{ The LNV decay of the $W$ at the LHC has been discussed in Refs.~\cite{LNVW}.} The sensitivity limit on $|\Theta_\mu|^2$ using this mode is also shown in Fig.~\ref{Fig:UB}. It is found that the sensitivity using $B^+ \to \mu^+ \mu^+ \pi^-$ is better than this one in the parameter range where the existing constraints are evaded. \section{Summary} We have discussed the LNV decay of the $B$ meson, $B^+ \to \mu^+ \mu^+ \pi^-$, induced by a heavy Majorana neutrino. In particular, we have estimated the sensitivity limits on the mixing $|\Theta_\mu|^2$ of the experimental searches at Belle II and at FCC-ee (at the $Z$-pole). These facilities can probe the parameter region in which the various experimental constraints on the heavy neutrino are avoided. Thus, the LNV $B$ decay is a significant and promising probe of lepton number violation, complementary to neutrinoless double beta decay. \section*{Acknowledgments} The work of T.A. was partially supported by JSPS KAKENHI Grant Numbers 15H01031 and 25400249. T.A. thanks the Yukawa Institute for Theoretical Physics at Kyoto University, where this work was initiated during the YITP-S-16-01 workshop ``The 44th Hokuriku Spring School''.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A quantum communication network based on the distribution and sharing of entangled states is potentially secure against eavesdropping and is therefore of great practical interest \cite{QI,cryp,QI2}. \ A protocol for the realization of such a long-distance system, known as the quantum repeater, was proposed by Briegel \textit{et al}. \cite{repeater,Dur}. \ A quantum repeater based on the use of atomic ensembles as memory elements, distributed over the network, was subsequently suggested by Duan, Lukin, Cirac and Zoller \cite{dlcz}. \ The storage of information in the atomic ensembles involves the Raman scattering of an incident light beam from ground-state atoms with the emission of a signal photon. \ The photon is correlated with the creation of a phased, ground-state, coherent excitation of the atomic ensemble. \ The information may be retrieved by a reverse Raman scattering process, sending the excitation back to the initial atomic ground state and generating an idler photon directionally correlated with the signal photon \cite{qubit,chou,vuletic,collective,store,collective2,single2,pan,kimble}. \ In the alkali gases, the signal and idler field wavelengths are in the near-infrared spectral region. \ This presents a wavelength mismatch with telecommunication optical fiber, which has a transmission window at longer wavelengths (1.1-1.6 $\mu$m). \ It is this mismatch that motivates the search for alternative processes that can generate telecom-wavelength photons correlated with atomic spin waves \cite{telecom}, and it motivates the research presented in this article, where we study multi-level atomic schemes in which the transition between the excited states is resonant with a telecom-wavelength light field \cite{telecom}. \ The basic problem is to harness the absorption and emission of telecom photons while preserving quantum correlations between the atoms, which store information, and the photons, which carry it along the optical fiber channel of the network. Telecom ground-state transitions are uncommon in atomic gases, the exceptions being rare-earth elements \cite{erbium,dysprosium} or an erbium-doped crystal \cite{solid}. \ However, a telecom-wavelength (signal) field can be generated from transitions between excited levels in the alkali metals \cite{telecom,radaev}. \ The ladder configuration of atomic levels provides a source of telecom (signal) photons from the upper atomic transition. \ For rubidium and cesium atoms, the signal field lies in the range of 1.3-1.5 $\mu$m and can be coupled into an optical fiber and transmitted to a remote location. \ Cascade emission may result in pairs of photons, the signal entangled with the subsequently emitted infrared photon (idler) from the lower atomic transition. \ Entangled signal and idler photons were generated from a phase-matched four-wave mixing configuration in a cold, optically thick $^{85}$Rb ensemble \cite{telecom}. \ This correlated two-photon source is potentially useful as the signal field has a telecom wavelength. The temporal emission characteristics of the idler field, generated on the lower arm of the cascade transition, were observed in measurements of the joint signal-idler correlation function. \ The idler decay time was shorter than the natural atomic decay time and dependent on optical thickness in a way reminiscent of superradiance \cite{Dicke,Stephen,Lehm,mu,OC:Mandel}.
The spontaneous emission from an optically dense atomic ensemble is a many-body problem due to the radiative coupling between atoms. \ This coupling is responsible for the phenomenon of superradiance, first discussed by Dicke \cite{Dicke} in 1954. Since then, this collective emission has been extensively studied in two-atom systems indicating a dipole-dipole interaction \cite{Stephen,Lehm}, in totally inverted $N$-atom systems \cite{stehle,Tallet}, and in extended atomic ensembles \cite{mu}. \ The emission intensity has been investigated using the master equation approach \cite{master,Bon,Bon1} and with Maxwell-Bloch equations \cite{Bon2,Feld}. \ A useful summary and review of superradiance can be found in Refs.~\cite{Gross,phase}. \ Recent approaches to superradiance include the quantum trajectory method \cite{trajectory,eig1} and the quantum correction method \cite{Fleischhauer99}. \ In the limit of single atomic excitation, superradiant emission characteristics have been discussed in Refs.~\cite{Eberly} and \cite{Scully}. \ For a singly excited system, the basis set reduces to $N$ rather than $2^{N}$ states. \ Radiative phenomena have been investigated using dynamical methods \cite{kurizki,Scully2,eig2} and by the numerical solution of an eigenvalue problem \cite{Friedberg08a, Svidzinsky08,Friedberg08b,Friedberg08c}. \ A collective frequency shift \cite{Arecchi, Morawitz} can be significant at high atomic density \cite{Scully09} and has been observed recently in an experiment where atoms are resonant with a planar cavity \cite{supershift}. To account for multiple atomic excitations in the signal-idler emission from a cascade atomic ensemble, the Schr\"{o}dinger equation approach becomes cumbersome. \ An alternative formulation, in terms of c-number Langevin equations, is suitable for solution by stochastic simulation. Langevin equations were initially derived to describe Brownian motion \cite{SM:Gardiner}. \ A fluctuating force is used to represent the random impacts of the environment on the Brownian particle. \ A given realization of the Langevin equation involves a trajectory perturbed by the random force. \ Ensemble averaging such trajectories provides a natural and direct way to investigate the dynamics of the stochastic variables. \ \ An essential element in the stochastic simulations is a proper characterization of the Langevin noises. \ These represent the quantum fluctuations responsible for the initiation of the spontaneous emission from an inverted \cite{Feld,Haake1,Haake2,Polder79} or, as in our case, pumped atomic system \cite{Chiao88,Chiao95}.\ \ The positive-P phase space method \cite{QN:Gardiner, quantization,Smith88,Smith0,Smith1,Boyd89,Drummond91} is employed to derive the Fokker-Planck equations that lead directly to the c-number Langevin equations. \ The classical noise correlation functions, equivalently the diffusion coefficients, are independently confirmed by use of the Einstein relations \cite{LP:Sargent, QO:Scully, Fleischhauer94}. \ The c-number Langevin equations correspond to Ito-type stochastic differential equations that may be simulated numerically. \ The noise correlations can be represented either by using a square \cite{Carmichael86} or a non-square ``square root'' diffusion matrix \cite{Smith1}. \ The approach enables us to calculate normally ordered quantities: the signal and idler field intensities, and the second-order correlation function.
\ The numerical approach involves a semi-implicit difference algorithm and a shooting method \cite{numerical} to integrate the stochastic ``Maxwell-Bloch'' equations. Recently a new positive-P phase space method involving a stochastic gauge function \cite{Drummond02} has been developed. \ This approach has an improved treatment of sampling errors and of boundary errors in the treatment of quantum anharmonic oscillators \cite{Drummond01,Collett01}. \ It has also been applied to many-body systems of bosons \cite{Drummond03} and fermions \cite{Drummond06}. \ In this paper, we follow the traditional positive-P representation method \cite{drummond80}.\ The remainder of this paper is organized as follows. In Section II, we review the formalism of the positive P-representation and present the stochastic differential equations for cascade (signal and idler) emission from an atomic ensemble. In Section III we solve numerically for the dynamics of the atoms and of the counter-propagating signal and idler fields in the positive P-representation. We present results for the signal and idler field intensities, and for the signal-idler second-order correlation function, at different optical depths of the atomic ensemble. Section IV presents our discussion and conclusions. In the appendices, we give the details of the derivation of the c-number Langevin equations that are the foundation of our numerical approach to the cascade emission. \ In Appendix A, we formulate the Hamiltonian and derive the Fokker-Planck equations from characteristic functions \cite{LT:Haken} in the positive P-representation. The corresponding c-number Langevin equations are then derived, and the noise correlations are found from the diffusion coefficients of the Fokker-Planck equations, as shown in Appendix B. \ \section{Theory of Cascade emission} Phase space methods \cite{QN:Gardiner}, which mainly comprise the P-, Q-, and Wigner (W) representations, are techniques that use classical analogues to study quantum systems, especially harmonic oscillators. \ The coherent state, an eigenstate of the harmonic oscillator annihilation operator, provides the basis expansion used to construct the various representations. \ The P- and Q-representations are associated, respectively, with the evaluation of normally and anti-normally ordered correlations of creation and destruction operators. \ The W-representation was invented for the purpose of describing symmetrically ordered creation and destruction operators. \ Since the P-representation describes normally ordered quantities, which are the ones relevant in experiments, we are interested in one class of generalized P-representations, the positive P-representation, whose diffusion matrix is positive semi-definite, an important property in describing quantum noise systems. The positive-P representation \cite{QO:Walls, drummond80} is an extension of the Glauber-Sudarshan P-representation, which uses coherent states ($|\alpha\rangle$) as a basis expansion of the density operator $\rho$. \ In terms of diagonal coherent states with a quasi-probability distribution $P(\alpha,\alpha^{\ast})$, a density operator in the P-representation is \begin{equation} \rho=\int_{D}|\alpha\rangle\langle\alpha|P(\alpha,\alpha^{\ast})d^{2}\alpha, \end{equation} where $D$ represents the integration domain. \ The normalization condition of $\rho,$ which is Tr\{$\rho$\}$=1,$ implies the normalization of $P$ as well, $\int_{D}P(\alpha,\alpha^{\ast})d^{2}\alpha=1$.
\ The positive P-representation uses a non-diagonal coherent state expansion, and the density operator can be expressed as \begin{equation} \rho=\int_{D}\Lambda(\alpha,\beta)P(\alpha,\beta)d\mu(\alpha,\beta), \end{equation} where \begin{equation} d\mu(\alpha,\beta)=d^{2}\alpha d^{2}\beta\text{ and }\Lambda(\alpha,\beta)=\frac{|\alpha\rangle\langle\beta^{\ast}|}{\langle\beta^{\ast}|\alpha\rangle}, \end{equation} and the factor $\langle\beta^{\ast}|\alpha\rangle$ in the non-diagonal projection operator $\Lambda(\alpha,\beta)$ ensures the normalization of the distribution function $P(\alpha,\beta).$ Any normally ordered observable can be deduced from the distribution function $P(\alpha,\beta)$: \begin{equation} \langle(a^{\dag})^{m}a^{n}\rangle=\int_{D}\beta^{m}\alpha^{n}P(\alpha,\beta)d\mu(\alpha,\beta). \end{equation} A characteristic function $\chi_{p}(\lambda_{\alpha},\lambda_{\beta})$ (the Fourier transform of the distribution function, as in the Glauber-Sudarshan P-representation, but now extended to the doubled dimension) helps to formulate the distribution function: \begin{equation} \chi_{p}(\lambda_{\alpha},\lambda_{\beta})=\int_{D}e^{i\lambda_{\alpha}\alpha+i\lambda_{\beta}\beta}P(\alpha,\beta)d\mu(\alpha,\beta). \end{equation} It is calculated from a normally ordered exponential operator $E(\lambda),$ \begin{equation} \chi_{p}(\lambda_{\alpha},\lambda_{\beta})=\text{Tr\{}\rho E(\lambda)\text{\}, }E(\lambda)=e^{i\lambda_{\beta}a^{\dagger}}e^{i\lambda_{\alpha}a}. \end{equation} A Fokker-Planck equation can then be derived from the time derivative of the characteristic function, \begin{equation} \frac{\partial\chi_{p}}{\partial t}=\frac{\partial}{\partial t}\text{Tr\{}\rho E(\lambda)\text{\}=Tr\{}\frac{\partial\rho}{\partial t}E(\lambda)\text{\}} \end{equation} using the Liouville equation, \begin{equation} \frac{\partial\rho}{\partial t}=\frac{1}{i\hbar}[H,\rho]. \end{equation} \begin{figure}[ptb] \begin{center} \includegraphics[height=2.1037in,width=3.0682in]{fig1.eps} \caption{Four-level atomic ensemble interacting with two driving lasers (solid) with Rabi frequencies $\Omega_{a}$ and $\Omega_{b}.$ \ The signal and idler fields are labelled by $\hat{a}_{s}$ and $\hat{a}_{i},$ respectively, and $\Delta_{1}$ and $\Delta_{2}$ are the one- and two-photon laser detunings. } \label{four} \end{center} \end{figure} In laser theory \cite{LT:Haken}, the P-representation method is extended to describe atomic and atom-field interaction systems. \ When a large number of atoms is considered, which is indeed the case in an actual laser, macroscopic variables can be defined. \ A generalized Fokker-Planck equation can then be derived from the characteristic functions by neglecting higher-order terms that are proportional to the inverse of the number of atoms. \ The situation is similar in our case of light-matter interaction in an atomic ensemble: the large atom number cuts off the higher-order terms in the characteristic functions. We consider $N$ cold atoms, initially prepared in the ground state, interacting with four independent electromagnetic fields.\ \ As shown in Fig.\ref{four}, two driving lasers (with Rabi frequencies $\Omega_{a}$ and $\Omega_{b}$) excite a ladder configuration $|0\rangle\rightarrow|1\rangle\rightarrow|2\rangle.$ \ Two quantum fields, signal $\hat{a}_{s}$ and idler $\hat{a}_{i},$ are generated spontaneously.
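Before specializing to this four-level system, we give a minimal numerical consistency check of the moment formula above (our own illustration, not part of the original derivation). For a coherent state $|\alpha_{0}\rangle$, a valid positive-P distribution is $P(\alpha,\beta)\propto e^{-|\alpha-\beta^{\ast}|^{2}/4}\,e^{-|\mu-\alpha_{0}|^{2}}$ with $\mu=(\alpha+\beta^{\ast})/2$, and sampling it reproduces $\langle a^{\dag}a\rangle=|\alpha_{0}|^{2}$ via $\langle(a^{\dag})^{m}a^{n}\rangle=\langle\beta^{m}\alpha^{n}\rangle_{P}$.
\begin{verbatim}
# Minimal sketch (our illustration, assumed state): sample the canonical
# positive-P distribution of a coherent state |alpha0> and verify the
# normally ordered moment <a^dag a> = |alpha0|^2 using <beta alpha>_P.
import numpy as np

rng = np.random.default_rng(0)
alpha0 = 1.5 + 0.5j
R = 200_000

# mu ~ exp(-|mu - alpha0|^2): complex Gaussian, variance 1/2 per quadrature
mu = alpha0 + (rng.normal(0, np.sqrt(0.5), R)
               + 1j * rng.normal(0, np.sqrt(0.5), R))
# gamma ~ exp(-|gamma|^2 / 4): complex Gaussian, variance 2 per quadrature
gamma = rng.normal(0, np.sqrt(2.0), R) + 1j * rng.normal(0, np.sqrt(2.0), R)

alpha = mu + gamma / 2.0                    # alpha = mu + gamma/2
beta = np.conj(mu) - np.conj(gamma) / 2.0   # beta* = mu - gamma/2

n_est = np.mean(beta * alpha)               # estimator of <a^dag a>
print(n_est, abs(alpha0) ** 2)              # real part ~ 2.5, imag ~ 0
\end{verbatim}
The small residual imaginary part of the estimate is a finite-sampling effect, the same convergence diagnostic used for the full simulations below.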
We note that the spontaneous emission from the cascade driving scheme is a stochastic process due to quantum fluctuations, unlike the diamond configuration where quantum noise can be neglected \cite{conversion,thesis}. The complete derivation of the c-number Langevin equations for cascade emission from the four-level atomic ensemble is described in Appendices A and B. \ After setting up the Hamiltonian, we follow the standard procedure to construct the characteristic functions \cite{LT:Haken} in Appendix A using the positive-P representation \cite{QN:Gardiner}. \ In Appendix B.1, the Fokker-Planck equation is found by directly Fourier transforming the characteristic functions and making a $1/N_{z}$ expansion.\ \ Finally, the Ito stochastic differential equations are written down from inspection of the first-order derivative (drift term) and the second-order derivative (diffusion term) in the Fokker-Planck equation. \ The equations are then written in dimensionless form by introducing the Arecchi-Courtens cooperation units \cite{scale} in Appendix B.2. \ From Eq. (\ref{bloch2}) and the field equations that follow, these c-number Langevin equations in a co-moving frame are,
\begin{widetext}
\begin{align}
\frac{\partial}{\partial\tau}\pi_{01} & =(i\Delta_{1}-\frac{\gamma_{01}}{2})\pi_{01}+i\Omega_{a}(\pi_{00}-\pi_{11})+i\Omega_{b}^{\ast}\pi_{02}-i\pi_{13}^{\dag}E_{i}^{+}+\mathcal{F}_{01}\text{ (I),}\nonumber\\
\frac{\partial}{\partial\tau}\pi_{12} & =i(\Delta_{2}-\Delta_{1}+i\frac{\gamma_{01}+\gamma_{2}}{2})\pi_{12}-i\Omega_{a}^{\ast}\pi_{02}+i\Omega_{b}(\pi_{11}-\pi_{22})+i\pi_{13}E_{s}^{+}e^{-i\Delta kz}+\mathcal{F}_{12},\nonumber\\
\frac{\partial}{\partial\tau}\pi_{02} & =(i\Delta_{2}-\frac{\gamma_{2}}{2})\pi_{02}-i\Omega_{a}\pi_{12}+i\Omega_{b}\pi_{01}+i\pi_{03}E_{s}^{+}e^{-i\Delta kz}-i\pi_{32}E_{i}^{+}+\mathcal{F}_{02},\nonumber\\
\frac{\partial}{\partial\tau}\pi_{11} & =-\gamma_{01}\pi_{11}+\gamma_{12}\pi_{22}+i\Omega_{a}\pi_{01}^{\dag}-i\Omega_{a}^{\ast}\pi_{01}-i\Omega_{b}\pi_{12}^{\dag}+i\Omega_{b}^{\ast}\pi_{12}+\mathcal{F}_{11},\nonumber
\end{align}
\begin{align}
\frac{\partial}{\partial\tau}\pi_{22} & =-\gamma_{2}\pi_{22}+i\Omega_{b}\pi_{12}^{\dag}-i\Omega_{b}^{\ast}\pi_{12}+i\pi_{32}^{\dag}E_{s}^{+}e^{-i\Delta kz}-i\pi_{32}E_{s}^{-}e^{i\Delta kz}+\mathcal{F}_{22},\nonumber\\
\frac{\partial}{\partial\tau}\pi_{33} & =-\gamma_{03}\pi_{33}+\gamma_{32}\pi_{22}-i\pi_{32}^{\dag}E_{s}^{+}e^{-i\Delta kz}+i\pi_{32}E_{s}^{-}e^{i\Delta kz}+i\pi_{03}^{\dag}E_{i}^{+}-i\pi_{03}E_{i}^{-}+\mathcal{F}_{33},\nonumber\\
\frac{\partial}{\partial\tau}\pi_{13} & =-(i\Delta_{1}+\frac{\gamma_{01}+\gamma_{03}}{2})\pi_{13}-i\Omega_{a}^{\ast}\pi_{03}-i\Omega_{b}\pi_{32}^{\dag}+i\pi_{12}E_{s}^{-}e^{i\Delta kz}+i\pi_{01}^{\dag}E_{i}^{+}+\mathcal{F}_{13},\nonumber\\
\frac{\partial}{\partial\tau}\pi_{03} & =-\frac{\gamma_{03}}{2}\pi_{03}-i\Omega_{a}\pi_{13}+i\pi_{02}E_{s}^{-}e^{i\Delta kz}+i(\pi_{00}-\pi_{33})E_{i}^{+}+\mathcal{F}_{03},\nonumber\\
\frac{\partial}{\partial\tau}\pi_{32} & =(i\Delta_{2}-\frac{\gamma_{03}+\gamma_{2}}{2})\pi_{32}+i\Omega_{b}\pi_{13}^{\dag}-i(\pi_{22}-\pi_{33})E_{s}^{+}e^{-i\Delta kz}-i\pi_{02}E_{i}^{-}+\mathcal{F}_{32},\nonumber\\
\frac{\partial}{\partial z}E_{s}^{+} & =-i\pi_{32}e^{i\Delta kz}\frac{|g_{s}|^{2}}{|g_{i}|^{2}}-\mathcal{F}_{s},\text{ }\frac{\partial}{\partial z}E_{i}^{+}=i\pi_{03}+\mathcal{F}_{i}, \label{bloch3}
\end{align}
\end{widetext}
where (I) indicates an Ito-type SDE.
\ $\pi_{ij}$ is the stochastic variable that corresponds to the atomic population of state $|i\rangle$ when $i=j$ and to the atomic coherence when $i\neq j$, and the $\mathcal{F}_{ij}$ are c-number Langevin noises. \ The remaining equations of motion, which close the set, can be found by replacing the above classical variables, $\pi_{jk}^{\ast }\rightarrow\pi_{jk}^{\dag},$ $(\pi_{jk}^{\dag})^{\ast}\rightarrow\pi_{jk},$ $(E_{s,i}^{+})^{\ast}\rightarrow E_{s,i}^{-},$ $(E_{s,i}^{-})^{\ast }\rightarrow E_{s,i}^{+}$, and $\mathcal{F}_{jk}^{\ast}\rightarrow \mathcal{F}_{jk}^{\dag}$.\ \ Note that the atomic populations satisfy $\pi_{jj}^{\ast}=\pi_{jj}.$ \ The superscripts, dagger ($\dag$) for atomic variables and ($-$) for field variables, denote independent variables, which is a feature of the positive-P representation: the phase space dimension is doubled for each variable. \ These variables are complex conjugates of each other when ensemble averages are taken, for example $\left\langle \pi_{jk}\right\rangle =\left\langle \pi_{jk}^{\dag}\right\rangle ^{\ast}$ and $\left\langle E_{s,i}^{+}\right\rangle =\left\langle E_{s,i}^{-}\right\rangle ^{\ast}.$ \ The doubled spaces allow the variables to explore trajectories outside the classical phase space. Before going further to discuss the numerical solution of the SDE, we point out that the diffusion matrix elements have been computed both from the Fokker-Planck equations and from the Einstein relations discussed in Appendix B.2. \ This provides an important check on the lengthy derivations of the diffusion matrix elements we need for the simulations. The next step is to find expressions for the Langevin noises in terms of a non-square matrix $B$ \cite{QO:Walls,Smith1}. \ The matrix $B$ is used to construct the symmetric diffusion matrix $D(\alpha)=B(\alpha)B^{T}(\alpha)$ for an Ito SDE, \begin{equation} dx_{t}^{i}=A_{i}(t,\overrightarrow{x_{t}})dt+\sum\limits_{j}B_{ij}(t,\overrightarrow{x_{t}})dW_{t}^{j}(t)\text{ \ (I)}\label{Ito} \end{equation} where $\xi_{i}dt=dW_{t}^{i}(t)$ (a Wiener process) and $\left\langle \xi_{i}(t)\xi_{j}(t^{\prime})\right\rangle =\delta_{ij}\delta(t-t^{\prime}).$ \ Note that $B\rightarrow BS,$ where $S$ is an orthogonal matrix ($SS^{T}=I$), leaves $D$ unchanged, so $B$ is not unique. \ We could also construct a square matrix representation of $B$ \cite{QN:Gardiner,SM:Gardiner,Carmichael86}. \ This involves a decomposition of the matrix into a product of lower and upper triangular factors. \ A Cholesky decomposition can be used to determine the $B$ matrix elements successively, row by row. \ The downside of this procedure is that the $B$ matrix elements must be differentiated when converting the Ito SDE to its equivalent Stratonovich form for numerical solution. The Stratonovich SDE is necessary for the stability and convergence of semi-implicit methods. \ Because of the analytic difficulties in transforming to the Stratonovich form, we use instead the non-square form of $B$ \cite{Smith1}. \ In this case a typical $B$ matrix element is a sum of terms, each of which is the product of the square root of a diffusion matrix element with a unit-strength real (if the diffusion matrix element is diagonal) or complex (if the diffusion matrix element is off-diagonal) Gaussian white noise. \ It is straightforward to check that a $B$ matrix constructed in this way reproduces the required diffusion matrix $D=BB^{T}$.
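The construction is easy to automate. The following sketch (our own illustration) builds a non-square $B$ with one column per non-zero diagonal diffusion element and two columns per symmetric off-diagonal pair, and verifies $D=BB^{T}$; note the transpose, not the Hermitian conjugate, which is why complex entries are admissible in the doubled phase space.
\begin{verbatim}
# Minimal sketch (our illustration): build a non-square "square root"
# B of a symmetric diffusion matrix D and check D = B B^T.
import numpy as np

def nonsquare_sqrt(D):
    n = D.shape[0]
    cols = []
    for i in range(n):
        if D[i, i] != 0:                       # diagonal element
            e = np.zeros(n, dtype=complex)
            e[i] = np.sqrt(complex(D[i, i]))
            cols.append(e)
        for j in range(i + 1, n):              # off-diagonal pair D_ij = D_ji
            d = D[i, j]
            if d != 0:
                s = np.sqrt(complex(d) / 2.0)
                u = np.zeros(n, dtype=complex)
                v = np.zeros(n, dtype=complex)
                u[i], u[j] = s, s              # adds d/2 to D_ii, D_jj, D_ij
                v[i], v[j] = 1j * s, -1j * s   # cancels the diagonal, keeps D_ij
                cols.append(u)
                cols.append(v)
    return np.column_stack(cols)               # one column per real noise dW

D = np.array([[1.0, 0.3, 0.0],
              [0.3, 2.0, -0.5],
              [0.0, -0.5, 0.5]])
B = nonsquare_sqrt(D)
assert np.allclose(B @ B.T, D)                 # B B^T, not B B^dagger
\end{verbatim}
Each column of $B$ multiplies an independent real Gaussian white noise, so the number of noises exceeds the number of variables, as in the 117 random numbers quoted below.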
As pointed out in Ref.~\cite{Drummond91}, the transverse dipole-dipole interaction can be neglected, and the nonparaxial spontaneous decay can be accounted for by a single-atom decay rate, if the atomic density is not too high. \ We are interested here in conditions where the ensemble length $L$ is significant and propagation effects are non-negligible, and the average distance between atoms, $d=\sqrt[3]{V/N}$, is larger than the transition wavelength $\lambda.$ \ The length scales satisfy $\lambda\lesssim d\ll L,$ and we consider a pencil-like cylindrical atomic ensemble. \ The paraxial or one-dimensional assumption for field propagation is then valid, and the transverse dipole-dipole interaction is not important for the atomic densities we consider here. The theory of cascade emission presented here provides a solid foundation for simulating the fluctuations that initiate the radiation process in the atomic ensemble. A proper treatment of the fluctuations, or noise correlations, requires SDEs in Ito form, derived from the Fokker-Planck equation. The alternative, more direct approach of making a quantum-classical correspondence in the quantum Langevin equations does not guarantee an Ito-type SDE. For this reason we take the Fokker-Planck route, and the coupled equations (\ref{bloch3}) are the main results of this section. \section{Results for signal, idler intensities, and the second-order correlation function} There are several possible ways to integrate the differential equations numerically. \ The three main categories of algorithm used are forward (explicit), backward (implicit), and mid-point (semi-implicit) methods \cite{numerical}. The forward difference method, which Euler and Runge-Kutta methods utilize, is not guaranteed to converge in stochastic integration \cite{xmds}. \ There it is shown that the semi-implicit method \cite{semi} is more robust for Stratonovich-type SDE simulations \cite{Drummond91b}. \ More extensive studies of the stability and convergence of SDEs can be found in Ref.~\cite{SDE:Kloeden}. \ The Stratonovich-type SDE equivalent to the Ito-type equation (\ref{Ito}) is \begin{align} dx_{t}^{i} & =[A_{i}(t,\overrightarrow{x_{t}})-\frac{1}{2}\sum\limits_{j}\sum\limits_{k}B_{jk}(t,\overrightarrow{x_{t}})\frac{\partial}{\partial x^{j}}B_{ik}(t,\overrightarrow{x_{t}})]dt\nonumber\\ & +\sum\limits_{j}B_{ij}(t,\overrightarrow{x_{t}})dW_{t}^{j}\text{ \ (Stratonovich),} \end{align} which has the same diffusion terms $B_{ij},$ but modified drift terms. \ This ``correction'' term arises from the different definitions of the stochastic integral in the Ito and Stratonovich calculus. At the end of Appendix C.3, we derive the ``correction'' terms noted above. \ We then have 19 classical variables, including atomic populations, coherences, and the two counter-propagating cascade fields. \ With 64 diffusion matrix elements and an associated 117 random numbers required to represent the instantaneous Langevin noises, we are ready to solve the equations numerically using the robust midpoint difference method. The problem we encounter here involves counter-propagating field equations in the space dimension and initial-value-type atomic equations in the time dimension. The counter-propagating field equations have a boundary condition specified at each end of the medium. \ This is a two-point boundary value problem, and a numerical approach to its solution, the shooting method \cite{numerical}, is used here.
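To illustrate the shooting method in the simplest setting (a deterministic linear toy model of two counter-propagating fields standing in for the actual stochastic Maxwell-Bloch system; all parameters below are assumed), one integrates from $z=0$ with a guessed value of the backward field and adjusts the guess until the boundary condition at $z=L$ is met. For a linear problem a single secant step suffices.
\begin{verbatim}
# Minimal sketch (our illustration, not the paper's solver): shooting
# method for a linear two-point boundary value problem with a forward
# field E_f given at z=0 and a backward field E_b given at z=L.
import numpy as np
from scipy.integrate import solve_ivp

kappa, L = 0.8, 3.0

def rhs(z, y):
    E_f, E_b = y
    # forward field driven by a localized source, backward field coupled
    return [kappa * E_b + np.exp(-(z - L / 2) ** 2), -kappa * E_f]

def residual(guess):
    """E_b(L) after integrating from z=0 with E_f(0)=0, E_b(0)=guess."""
    sol = solve_ivp(rhs, (0.0, L), [0.0, guess], rtol=1e-10)
    return sol.y[1, -1]           # we require this to vanish

# the residual is affine in the guess, so one secant step is exact
g0, g1 = 0.0, 1.0
r0, r1 = residual(g0), residual(g1)
g_star = g0 - r0 * (g1 - g0) / (r1 - r0)
print("E_b(0) =", g_star, " E_b(L) =", residual(g_star))
\end{verbatim}
In the full problem the same idea applies at each time step of the semi-implicit midpoint integration, with the stochastic atomic variables supplying the source terms.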
Any normally ordered quantity $\left\langle Q\right\rangle $ can be computed as an ensemble average, $\left\langle Q\right\rangle =\sum_{i=1}^{R}Q_{i}/R$, where $Q_{i}$ is the result of the $i$-th of $R$ realizations. In this section, we present the second-order correlation function of the signal-idler fields, and their intensity profiles. $\ $We define the intensities of the signal and idler fields by \begin{equation} I_{s}(t)=\left\langle E_{s}^{-}(t)E_{s}^{+}(t)\right\rangle ,\text{ }I_{i}(t)=\left\langle E_{i}^{-}(t)E_{i}^{+}(t)\right\rangle , \end{equation} respectively, and the second-order signal-idler correlation function \begin{equation} G_{s,i}(t,\tau)=\left\langle E_{s}^{-}(t)E_{i}^{-}(t+\tau)E_{i}^{+}(t+\tau)E_{s}^{+}(t)\right\rangle \end{equation} where $\tau$ is the delay time of the idler field with respect to a reference time $t$ of the signal field. \ Since the correlation function is not stationary \cite{QL:Loudon}, we choose $t$ as the time when $G_{s,i}$ is at its maximum. We consider a cigar-shaped $^{85}$Rb ensemble of radius $0.25$ mm and $L=3$ mm. \ The operating conditions of the pump lasers are ($\Omega_{a},$ $\Omega_{b},$ $\Delta_{1},$ $\Delta_{2}$) $=$ ($0.4,$ $1,$ $1,$ $0$)$\gamma_{03}$, where $\Omega_{a}$ is the peak value of a $50$ ns square pulse and $\Omega_{b}$ is the Rabi frequency of a continuous-wave laser. The four-wave mixing condition ($\Delta k=0$) is assumed.\ The four atomic levels are chosen as ($|0\rangle,$ $|1\rangle,$ $|2\rangle,$ $|3\rangle$) $=$ ($|$5S$_{1/2},$F=3$\rangle,$ $|5$P$_{3/2},$F=4$\rangle,$ $|4$D$_{5/2},$F=5$\rangle,$ $|5$P$_{3/2},$F=4$\rangle$). \ The natural decay rate for the atomic transition $|1\rangle\rightarrow|0\rangle$ or $|3\rangle \rightarrow|0\rangle$ is $\gamma_{01}=\gamma_{03}=(26$ ns$)^{-1}$, with a transition wavelength of 780 nm. \ For the atomic transitions $|2\rangle\rightarrow|1\rangle$ and $|2\rangle\rightarrow|3\rangle$, the rates are $\gamma_{12}=\gamma_{32}=0.156\gamma_{03}$ \cite{gsgi}, with a telecom wavelength of 1.53 $\mu$m. \ The scale factor of the coupling constants for the signal and idler transitions is $g_{s}/g_{i}=0.775.$ We have investigated six different atomic densities, from a dilute ensemble with an optical density (opd) of 0.01 up to opd = 8.71. \ In Figs.\ref{omega_population}, \ref{intensity}, and \ref{gsi_t}, we take the atomic density $\rho=10^{10}$ cm$^{-3}$ (opd = 2.18) as an example, and the grid sizes for dimensionless time $\Delta t=4$ and space $\Delta z=0.0007$ are chosen. \ The grid spacings are fixed in practice by requiring convergence of the signal intensity profile, with an estimated relative error of less than 0.5\%.
\begin{figure}[ptb] \begin{center} \includegraphics[width=0.5\textwidth]{fig2.eps} \caption{(Color online) Time-varying pump fields and time evolution of atomic populations. (Left) The first pump field $\Omega_{a}$ (dotted-red) is a square pulse of duration 50 ns and $\Omega_{b}$ is continuous wave (solid-blue). \ (Right) The time evolution of the real parts of the populations of the three atomic levels, $\sigma_{11}=\left\langle \tilde{\alpha}_{13}\right\rangle $ (dash dotted-red), $\sigma_{22}=\left\langle \tilde{\alpha}_{12}\right\rangle $ (dotted-blue), $\sigma_{33}=\left\langle \tilde{\alpha}_{11}\right\rangle $ (solid-green) at $z=0,L$. The almost vanishing imaginary parts of all three
indicate convergence of the ensemble averages.\ Note that these atomic populations are uniform as a function of $z.$} \label{omega_population} \end{center} \end{figure} The temporal profiles of the exciting lasers are shown in the left panel of Fig.\ref{omega_population}. \ The atomic density is chosen as $\rho =10^{10}$ cm$^{-3},$ and the cooperation time $T_{c}$ is 0.35 ns. \ The right panel shows the time evolution of the atomic populations of levels $|1\rangle$, $|2\rangle,$ and $|3\rangle$ at $z=0,L,$ which are spatially uniform. \ The populations are found by ensemble averaging the complex stochastic population variables. \ The imaginary parts of the ensemble averages tend to zero as the ensemble size is increased, and this is a useful indicator of convergence. \ In this example, the ensemble size was 8$\times10^{5}.$ \ The small rise after the pump pulse $\Omega_{a}$ is turned off is due to the modulation caused by the pump field $\Omega_{b},$ which has a generalized Rabi frequency $\sqrt{\Delta_{2}^{2}+4\Omega_{b}^{2}}$. This also influences the intensity profiles and the correlation functions.
\begin{figure}[ptb] \begin{center} \includegraphics[width=0.5\textwidth,height=0.35\textwidth]{fig3.eps} \caption{(Color online) Spatial-temporal intensity profiles of the counter-propagating signal and idler fields. \ (a) At $z=0,$ real (dashed-blue) and imaginary (solid-red) parts of the signal intensity. \ (b) At $z=L,$ real (dash dotted-blue) and imaginary (solid-red) parts of the idler intensity. \ (c) and (d) are the spatial-temporal profiles of the signal and idler intensities, respectively. \ Both intensities are normalized by the peak value of the signal intensity, $7.56\times10^{-12}$ $E_{c}^{2}$. \ Note that the idler fluctuations and the non-vanishing imaginary part indicate a relatively slower convergence compared with the signal intensity. \ The ensemble size was 8$\times10^{5},$ and the atomic density $\rho=10^{10}$cm$^{-3}$.} \label{intensity} \end{center} \end{figure}
In Fig.\ref{intensity}, we show the counter-propagating signal ($-\hat{z}$) and idler ($+\hat{z}$) field intensities at the respective ends of the atomic ensemble, together with their spatial-temporal profiles. \ The plots show the real and imaginary parts of the observables, both normalized to the peak value of the signal intensity. \ Note that the characteristic field strength, in terms of the natural decay rate of the idler transition ($\gamma_{03}$) and its dipole moment ($d_{i}$), is $(d_{i}/\hbar )E_{c}\approx36.3\gamma_{03}$. \ The fluctuations in the real idler field intensity at $z=L$ and its non-vanishing imaginary part indicate a slower convergence compared to the signal field, which has an almost vanishing imaginary part. \ The slow convergence is a practical limitation of the method. \ In Fig.\ref{gsi_t} (a), we show a contour plot of the second-order correlation function $G_{s,i}(t_{s},t_{i})$, where $t_{i}\geq t_{s}.$ \ In Fig.\ref{gsi_t} (b), a section is shown through $t_{s}\approx75$ ns, where $G_{s,i}$ is at its maximum. \ The approximately exponential decay of $G_{s,i}$ is clearly superradiant, qualitatively consistent with Ref.~\cite{telecom}. \ The nearly vanishing imaginary part of $G_{s,i}$ calculated by ensemble averaging is also shown in (b) and indicates a reasonable convergence after 8$\times10^{5}$ realizations.
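The estimators used here can be written compactly. The following sketch (with synthetic Gaussian trajectories standing in for the simulated positive-P field variables; shapes and names are assumptions) computes the intensities and $G_{s,i}(t,t+\tau)$ as ensemble averages over realizations.
\begin{verbatim}
# Minimal sketch (synthetic stand-in data): ensemble-average estimators
# for I_s, I_i, and G_si(t, t+tau). In the positive-P simulation E^- is
# an independent variable; here we simply use complex conjugates.
import numpy as np

rng = np.random.default_rng(1)
R, T = 10_000, 200                          # realizations, time points
Es_p = rng.normal(size=(R, T)) + 1j * rng.normal(size=(R, T))
Ei_p = rng.normal(size=(R, T)) + 1j * rng.normal(size=(R, T))
Es_m, Ei_m = np.conj(Es_p), np.conj(Ei_p)

I_s = np.mean(Es_m * Es_p, axis=0)          # <E_s^- E_s^+>(t)
I_i = np.mean(Ei_m * Ei_p, axis=0)          # <E_i^- E_i^+>(t)

def G_si(t, tau):
    """<E_s^-(t) E_i^-(t+tau) E_i^+(t+tau) E_s^+(t)> over realizations."""
    return np.mean(Es_m[:, t] * Ei_m[:, t + tau]
                   * Ei_p[:, t + tau] * Es_p[:, t])

t_m = int(np.argmax(I_s.real))              # reference time at the peak
G = np.array([G_si(t_m, k) for k in range(T - t_m)])
# the imaginary part of G should vanish within sampling error
\end{verbatim}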
\begin{figure}[ptb] \begin{center} \includegraphics[width=0.5\textwidth,height=0.38\textwidth]{fig4.eps} \caption{(Color online) Second-order correlation function $G_{s,i}(t_{s},t_{i}).$ The 2-D contour plot of the real part of $G_{s,i}$, with a causal cut-off at $t_{s}=t_{i}$, is shown in (a). \ Plot (b) gives a cross-section at $t_{s}=t_{m}\approx75$ ns, normalized to the maximum of the real part (dashed-blue) of $G_{s,i}.$ \ The imaginary part (solid-red) of $G_{s,i}$ is nearly vanishing, and the number of realizations is 8$\times10^{5}$ for $\rho=10^{10}$cm$^{-3}.$} \label{gsi_t} \end{center} \end{figure}
In Table \ref{table1}, we display the numerical parameters of our simulations for six different atomic densities. \ The numbers of grid points in time and space are $M_{t}\times M_{z}$, with grid sizes ($\Delta t,\Delta z$) in terms of the cooperation time ($T_{c}$) and length ($L_{c}$). \ The superradiant time scale ($T_{f}$) is found by fitting $G_{s,i}$ to an exponential function ($e^{-t/T_{f}}$), with a $95\%$ confidence range.
\begin{table}[h] \centering \caption{Numerical simulation parameters for different atomic densities $\rho$: corresponding optical depth (opd), time and space grids ($M_{t}\times M_{z}$) with grid sizes ($\Delta t,\Delta z$) in terms of the cooperation time ($T_c$) and length ($L_c$), and the fitted characteristic time $T_{f}$ for $G_{s,i}$ (see text).}
\begin{tabular}[c]{|c|c|c|c|c|c|}\hline
$\rho($cm$^{-3})$ & opd & $M_{t}\times M_{z}$ &
\begin{tabular}[c]{c}$\Delta t(T_{c}),$\\$\Delta z(L_{c})$\end{tabular} &
\begin{tabular}[c]{c}$T_{c}($ns$),$\\$L_{c}($m$)$\end{tabular} &
\begin{tabular}[c]{c}fitted $T_{f}$\\(ns)\end{tabular}\\\hline
5$\times10^{7}$ & $0.01$ & $111\times42$ & 0.3, 5$\times10^{-5}$ & 4.89, 1.47 & $25.9$\\\hline
5$\times10^{8}$ & $0.11$ & $101\times44$ & 0.9, 1.5$\times10^{-4}$ & 1.55, 0.46 & $24.6$\\\hline
5$\times10^{9}$ & $1.09$ & $101\times42$ & 2.8, 4.5$\times10^{-4}$ & 0.49, 0.15 & $14.8$\\\hline
1$\times10^{10}$ & $2.18$ & $101\times42$ & 4.0, 7$\times10^{-4}$ & 0.35, 0.10 & $9.4$\\\hline
2$\times10^{10}$ & $4.35$ & $101\times42$ & 5.5, 1$\times10^{-3}$ & 0.24, 0.07 & $5.0$\\\hline
4$\times10^{10}$ & $8.71$ & $101\times42$ & 8.0, 1.4$\times10^{-3}$ & 0.17, 0.06 & $3.1$\\\hline
\end{tabular} \label{table1} \end{table}
In Fig.\ref{gsi}, the characteristic time scale is plotted as a function of atomic density and of the factor $N\mu$, and shows faster decay for optically denser atomic ensembles. \ We also plot the timescale $T_{1}=\gamma_{03}^{-1}/(N\mu+1)$, where $\mu$ is the geometrical constant for a cylindrical ensemble \cite{mu}. $\ $The natural decay time $\gamma_{03}^{-1}=26$ ns corresponds to the D2 line of $^{85}$Rb. \ The error bars indicate the deviation due to varying the fitting range, from the peak of $G_{s,i}$ down to approximately 25\% and 5\% of the peak value. \ The results of the simulations are in good qualitative agreement with the timescale $T_{1}$, which can be regarded as the superradiant time constant of the lower transition in a two-photon cascade \cite{QO:Scully, QL:Loudon}. $T_{f}$ approaches the independent-atom value at lower densities, indicating the absence of collective behavior, as expected. We note here that our simulations involve multiple excitations, for pumping conditions similar to the experimental parameters of Ref.~\cite{telecom}.
The small deviation between $T_{f}$ and $T_{1}$ might be due to the multiple emissions included in our simulations, beyond those of a two-photon source. On the other hand, the similar asymptotic dependence of $T_{f}$ and $T_{1}$ on atomic density, or optical depth, indicates a strong correlation between the signal and idler fields due to the four-wave mixing condition, which is required and crucial in the experiment \cite{telecom}. For atomic ensembles of larger opd, larger statistical ensembles are necessary for the numerical simulations to converge. \ The integration of the 8$\times10^{5}$ realizations used in the case of $\rho=10^{10}$ cm$^{-3}$ takes about 14 days with Matlab's parallel computing toolbox (function \textit{parfor}) on a Dell Precision workstation T7400 (64-bit quad-core Intel Xeon processors).
\begin{figure}[ptb] \begin{center} \includegraphics[width=0.52\textwidth,height=0.29\textwidth]{fig5.eps} \caption{(Color online) Characteristic timescales, $T_{f}$ and $T_{1}$, versus atomic density $\rho$ and the superradiant enhancement factor $N\mu$. $\ T_{f}$ (dotted-blue) is the fitted characteristic timescale for $G_{s,i}(t_{s}=t_{m},t_{i}=t_{m}+\tau)$, where $t_{m}$ is chosen at its maximum, as in Fig.\ref{gsi_t}. \ The error bars indicate the fitting uncertainties. \ For comparison, $T_{1}$=$\gamma_{03}^{-1}/$($N\mu+1$) (dashed-black) is plotted, where $\gamma_{03}^{-1}=26$ ns is the natural decay time of the D2 line of the $^{85}$Rb atom, and $\mu$ is the geometrical constant for a cylindrical atomic ensemble. \ The number of realizations is 4$\times10^{5}$ for $\rho=5\times10^{7}$, $5\times10^{8}$, $5\times10^{9}$ cm$^{-3}$, 8$\times10^{5}$ for $\rho=10^{10}$, $2\times10^{10}$ cm$^{-3}$, and 16$\times10^{5}$ for $\rho=4\times10^{10}$ cm$^{-3}$.} \label{gsi} \end{center} \end{figure}
\section{Discussion and Conclusion} The cascade atomic system studied here provides a source of telecommunication photons, which are crucial for long-distance quantum communication. We may take advantage of this low-loss transmission bandwidth in the DLCZ protocol for a quantum repeater. The performance of the protocol relies on the efficiency of generating the cascade emission pair, which is better for a larger optical depth of the prepared atomic ensemble. For other applications in quantum information science, such as quantum swapping and quantum teleportation, the frequency-space correlations also influence the success rates \cite{spectral}. To utilize and implement the cascade emission in quantum communication, we characterize the emission properties, especially the signal-idler correlation function and its dependence on optical depth. Its superradiant timescale indicates a broader spectral distribution, which saturates the efficiency of storage of the idler pulse in an auxiliary atomic ensemble \cite{telecom} by means of electromagnetically induced transparency (EIT). Our calculation therefore provides the minimal EIT spectral window ($1/T_f$) required to efficiently store and retrieve the idler pulse. In summary, we have derived c-number Langevin equations in the positive-P representation for the cascade signal-idler emission process in an atomic ensemble. \ The equations are solved numerically by a stable and convergent semi-implicit difference method, while the counter-propagating spatial evolution is solved by implementing the shooting method. \ We investigate six different atomic densities readily obtainable in a magneto-optical trap experiment.
\ The signal and idler field intensities and their correlation function are calculated by ensemble averages. \ The vanishing of the unphysical imaginary parts within some tolerance is used as a guide to convergence. \ We find an enhanced decay rate (a shortened characteristic time scale) of the idler emission in the second-order correlation function of a dense atomic ensemble, qualitatively consistent with the superradiance timescales for a cylindrical dense atomic ensemble \cite{mu, telecom}. \section*{ACKNOWLEDGMENTS} We acknowledge support from the NSF, USA and the NSC, Taiwan, R.O.C., and thank T. A. B. Kennedy for guidance of this work.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Large-scale, historically significant phenomena can trigger societal transformations by shifting citizens' preferences and beliefs on important metrics, such as trust in institutions and the role of government in the economy. How such change happens, however, has been difficult to examine. Previous studies have relied mostly on survey data that is collected with large time gaps, and on cross-sectional samples of respondents, making it difficult to understand how citizens process new information while events unfold, and what factors might change or reinforce their prior beliefs. In this study, we implement a seven-wave longitudinal survey on a representative sample of Americans that we track from April to October 2020, an eventful period characterized by a public health and economic crisis that occurred during a Presidential election campaign.\footnote{One concern with using subjective response scales to measure preferences or beliefs is that respondents may have different opinions about what some scales or questions might mean. This is less problematic in a longitudinal study such as ours.} Across survey waves, we track respondents' preferences for welfare and temporary relief policies, their trust in institutions, and how they process information about the crisis. In addition to a rich set of socio-economic and demographic controls, we record respondents' direct and indirect exposure to the crisis as well as their media diet. \par In line with previous studies documenting the impact of economic crises, we find that during the COVID-19 pandemic Americans reduced their trust in most institutions while increasing their support for a greater role of government in the economy \citep{margalit2019, giuliano2014, garand2010, cogley2008, piketty1995}. Our methodology allows us to see that such preferences and beliefs are more likely to change when citizens are directly affected by the crisis, rather than through mere exposure. Losing a large portion of income or having a family member or close friend hospitalized with the virus is associated with increased support for welfare policies such as universal basic income, assistance to the elderly, financial assistance to low-income students, and keeping prices under control. Income loss was also associated with higher support for temporary relief spending on policies such as cash transfers to families and businesses, and the protection of essential workers, while at the same time it decreased support for other economic policies, such as helping industry grow. This suggests that citizens did not necessarily increase their support for greater public spending overall, but rather differentiated across types of government intervention. \par We further support these findings by running a series of checks to control for endogeneity and alternative assumptions. In Section \ref{robust} and in the Appendix, we replicate the analysis using an alternative measure of direct economic shock: whether a respondent lost at least 20\% of their income between the first and last wave of the survey - that is, whether they incurred a more permanent income shock. We find that the effects remain almost identical. As we track multiple outcomes across welfare and institutional trust areas, it is also possible that a higher number of outcomes increases the probability of detecting spuriously significant results.
To mitigate this risk, we undertake multiple hypothesis testing corrections following \citet{anderson2008multiple}, and then replicate the analysis using Average Effect Sizes by bundling outcome measures, as in \citet{giuliano2014}. Further, since some measures of shock might be correlated with some of our controls, such as income and age, we reduce the risk of endogeneity by replicating our analysis using entropy balancing weights, thus reducing differences with respect to a set of balance conditions between respondents who did and who did not incur a shock. Lastly, we replicate the regressions using alternative outcomes, alternative regression models (e.g. logit and fixed effects), and voting intentions instead of party affiliation. All the results remain similar and consistent. \par The COVID-19 crisis in the U.S. also occurred at the time of a divisive Presidential election campaign. Previous studies suggest that citizens might make sense of an ongoing crisis by engaging in motivated reasoning that confirms their priors, thus potentially cancelling out the effects of direct shocks on preferences \citep{alesina2020, kranton2016, cook2016, lewandowsky2012, nyhan2010}. To understand the mitigating or reinforcing role of political identity on preferences, we first measure the partisan gap between Democrats and Republicans on institutional trust and policy preferences. We then show that this gap increased during the crisis, an effect that is largely driven by respondents who consumed predominantly partisan-leaning news. Teasing out the mechanisms behind this trend, we see that respondents who consumed partisan news were more likely to misunderstand the gravity of the crisis. By May 2020, Republicans who consumed predominantly Republican-leaning news were more likely to underestimate the COVID-19 death rate, while Democrats who consumed Democratic-leaning news were more likely to overestimate it. To study whether erroneous beliefs were a function of motivated reasoning or simply of a lack of exposure to similar information sources \citep{alesina2020, kranton2016, cook2016, lewandowsky2012, nyhan2010}, we implemented an experiment in which half of the respondents were encouraged, without any incentive, to check the COVID-19 death rate from the U.S. Centers for Disease Control and Prevention (C.D.C.). We find that this light-touch intervention significantly re-aligns respondents' beliefs with those of the C.D.C. and has a directional positive effect in changing their judgment of how public authorities handled the crisis. In the last wave of the survey, around four months later, we find that the treatment had a persistent effect. Using a causal forest methodology to estimate heterogeneous treatment effects \citep{athey2019estimating}, we find that the experiment was most effective on more educated respondents who consumed Democratic-leaning news. Conversely, direct and indirect economic and health shocks caused by the crisis played a comparatively less important role. This suggests that direct experience with a crisis might not necessarily make citizens more responsive to information campaigns aimed at re-calibrating their (mis)perceptions. \par Our study makes several contributions. First, we are among the first to investigate the roles of lived experiences and of media consumption during a crisis on the same group of respondents, thus helping to bridge two streams of studies, on the role of crises in shaping welfare preferences and in deepening political divisions.
\par Our study makes several contributions. First, ours is one of the first studies to investigate the role of lived experiences and media consumption during a crisis on the same group of respondents, thus contributing to bridging two streams of research on the role of crises in shaping welfare preferences and political divisions. Second, our methodology allows us to demonstrate that changes in institutional trust and preferences for public policies can occur very rapidly during a crisis, in some cases in as little as a few weeks, as opposed to the extended periods previously suggested. Third, we show that changes in political polarization on policies and institutional trust are more easily explained by the consumption of politically leaning news than by direct experiences. Lastly, we contribute to the growing literature on survey experiments that use information treatments to reduce biased perceptions, and demonstrate that these low-cost interventions can have long-term effects regardless of a person's experience with a crisis. \par The paper is structured as follows. Section 1 describes the methodology and the outcomes we tracked over time - namely, support for welfare and temporary relief policies, trust in institutions, and understanding of the gravity of the crisis. Section 2 explains how we define the different types of shocks and how we calculate respondents' biased media consumption. Section 3 disentangles how shocks and media shaped Americans' beliefs during the crisis; there, we also zoom in on the effects of biased media consumption on the understanding of the gravity of the crisis, and present the results of the survey experiment to correct for misinformation. Section 4 reports a set of robustness checks, showing that our results are consistent across several changes in assumptions and model specifications. We conclude with a summary of our findings in the final section.\\ \section{Methodology} We partnered with NORC at the University of Chicago, the organization responsible for administering the General Social Survey (GSS), to field a survey to a sub-sample of their AmeriSpeak® Panel.\footnote{Funded and operated by NORC at the University of Chicago, AmeriSpeak® is a probability-based multi-client household panel sample designed to be representative of the US household population. Randomly selected US households are sampled using area probability and address-based sampling, with a known, non-zero probability of selection from the NORC National Sample Frame. These selected households are then contacted by US mail, telephone, and field interviewers (face to face). The panel provides sample coverage of approximately 97\% of the U.S. household population. Those excluded from the sample include people with P.O. Box only addresses, some addresses not listed in the USPS Delivery Sequence File, and some addresses in newly constructed dwellings. While most AmeriSpeak households participate in surveys by web, non-internet households can participate in AmeriSpeak surveys by telephone. Households without conventional internet access but with web access via smartphones are allowed to participate in AmeriSpeak surveys by web.} We recruited about 1,440 U.S. citizens (see Table \ref{tab:summary_stats} in the Appendix for a summary of demographic and socio-economic characteristics), whom we surveyed seven times between April and October 2020.\footnote{The use of a longitudinal multi-wave panel survey has several advantages. First, we are able to choose the timing and frequency of our survey waves in a way that best allows us to answer our research questions, without needing to wait two years or more between data collection periods. Second, we can ask the same set of questions more than twice, thus reducing possible volatility or inconsistency in respondents' answers.
Third, we minimize the risk of recollection bias: recent events in a person's life are more salient, giving respondents a better opportunity to provide accurate answers about their economic and health situation during a crisis. At the same time, this methodology does not force us to ask questions about preferences and shocks within the same survey wave, which might bias respondents' answers. Fourth, because we follow the same panel of respondents over time, we have baseline data against which we can compare when evaluating changes in their views, accounting for their point of departure. This is particularly important when analyzing whether crises lead to convergence (e.g. increasing support for welfare policies among those who previously did not support them) or polarization (e.g. decreasing or increasing support for a policy among those who did not have a strong opinion).} \par In the first wave of the survey we collected baseline data on the main outcomes of interest (e.g. policy preferences and trust in institutions) as well as media consumption and beliefs (e.g. political ideology). The subsequent weekly waves allowed us to track respondents' lived experiences during the most dramatic first month of the pandemic. The next two waves were administered on a monthly basis, in the weeks commencing May 18 and June 22, 2020, respectively, and recorded respondents' perception of the gravity of the crisis. These waves focused on how Americans were coping in the weeks immediately after a possible health or economic shock, while the event was still vivid in their minds.\footnote{In order to minimize possible priming bias, we always left the shock questions to the end of the survey. Further, while we collected information on economic and health shocks in every wave, in the last wave we again asked respondents to report these shocks on a monthly and more detailed basis.} Lastly, we implemented a seventh and final wave of the survey in the week commencing October 19, 2020. We purposely timed the last wave to track any changes to respondents' beliefs and preferences immediately prior to the Presidential election. The summary of the questions asked in each wave is presented in \autoref{tab:questions}. \subsection{Outcomes} Across survey waves, we collected participants' responses on the following set of outcomes: (i) preferences for welfare policies, (ii) preferences for temporary relief policies, (iii) trust in institutions, and (iv) how respondents perceived the gravity of the crisis. \textbf{Preferences for welfare policies}. To study how the crisis affected preferences for welfare policies, we administered a module of questions based on the GSS questionnaire, which asks respondents whether they think it should be the government's responsibility to intervene in a series of policy areas.
Respondents can provide an answer for each of these policies on a 4-point scale from ``\textit{Definitely should not be}'' to ``\textit{Definitely should be}.'' The policy areas are the following: (1) provide mental health care for persons with mental illnesses, (2) help individuals affected by natural disasters, (3) keep prices under control, (4) provide a decent standard of living for the old, (5) provide a decent standard of living for the unemployed, (6) provide everyone with a guaranteed basic income, (7) provide industry with the help it needs to grow, (8) reduce income differences between the rich and the poor, (9) give financial help to university students from low-income families.\footnote{In our survey we replicate the exact wording of the GSS survey. Later we compare our baseline findings to previous GSS waves.} We asked these questions in waves 1, 4, and 7. In addition, we also asked respondents a question about universal healthcare. The question read as follows: ``\textit{Do you favor or oppose a universal health care system covered by the government so that every American can have equal access to health care, even if this means that you will have to pay higher taxes?}"\footnote{Response options were on a 5-point scale from ``\textit{Strongly oppose}'' to ``\textit{Strongly favor}''.}\footnote{We purposely asked this question in a way that encouraged respondents to think carefully about the costs and benefits of a universal healthcare system, and limited the saliency bias that might arise from the ongoing crisis.} We also asked this question in waves 1, 4, and 7 of our survey. \\ \indent \textbf{Preferences for temporary relief policies}. In addition to tracking Americans' preferences for government intervention in the economy, we also tracked their support for the temporary relief policies that federal and state governments adopted to respond to the crisis. The objective was to see whether government interventions in times of crisis that do not consist of permanent changes to the welfare system were less polarizing \citep{druckman2021, druckman2013}.\footnote{Indeed, recent surveys suggest that, despite deepening partisan divisions, Americans tend to agree on several policy areas. See for instance: \url{https://cgoap.net/} and \url{https://vop.org/wp-content/uploads/2020/08/Common_Ground_Brochure.pdf}.} These temporary policy questions, which we asked in waves 4 and 7, asked respondents to what extent they agreed with the following statements: (1) "\textit{the government should transfer money directly to families and businesses for as long as lockdown measures are kept in place}", (2) "\textit{the government should do more to protect essential workers from contracting the virus}", (3) "\textit{the government should spend more on public healthcare to reduce the number of preventable deaths}". \\ \indent \textbf{Trust in institutions}. Previous studies have documented that economic crises result in a loss of trust in institutions \citep{algan2017, dotti2016, giuliano2014}. To measure how trust in institutions might have changed during the crisis, we asked our respondents the following set of questions, which, again, replicates the wording of the GSS: "\textit{How much confidence do you have in the people running the following institutions?}"\footnote{Like the GSS questions, response options were on a 5-point scale, from "\textit{Complete confidence}" to "\textit{No confidence at all}".} The list of institutions was the following: (1) U.S.
Congress and Senate, (2) White House, (3) scientific community, (4) banks and financial institutions, (5) private sector, (6) hospitals and healthcare professionals, (7) health insurance companies. We asked all of these trust questions in waves 1, 4, and 7. \\ \indent \textbf{Information processing and interpretation of reality}. Americans experienced the ongoing crisis not solely through direct experiences, but also through the news they consumed. Since news sources portrayed the gravity of the crisis in different ways depending on their political alignment, we were interested in understanding how news consumption might also have shaped citizens' evolving views during the crisis. Respondents were asked in the fifth wave of the survey (i.e. the week commencing May 18) to forecast the expected additional death rate by the end of the year, and to judge the work done by the authorities in containing the pandemic. In the sixth wave of the survey, about a month later, we asked respondents to report the current COVID-19 death rate.\footnote{As part of this question, we also randomly exposed half of the respondents to a link to the official figure on the C.D.C. website, as we explain in more detail in the dedicated section of this paper.} Lastly, in the seventh wave of the survey (i.e. in the third week of October), we asked respondents to reflect on how they believed the U.S. coronavirus death rate compared to the rest of the world. \par \subsection{Shocks} One of the objectives of our study is to disentangle the channels that might lead citizens to update their beliefs across the previously listed set of outcomes. We focus on four types of shocks: direct and indirect, economic and health. Direct shocks refer to major life events that affected the respondents personally, while indirect shocks refer to exposure to the crisis through the historical moment or the respondent's geographic location. \\ \indent \textbf{Direct economic shocks}. To measure direct economic shocks, we asked all respondents in the last wave of the survey to report their (and their spouse's, if present) monthly gross income between February and October.\footnote{In addition to asking in most waves whether respondents incurred any economic or health shock, in the last wave we asked them to report the exact amount of household income in every month, as well as whether they knew anyone hospitalized in each month. This allows us to have a more granular and quantifiable measure of economic shock beyond the timing of our survey waves.} Further, we also asked for respondents' (and their spouses', if present) additional monthly sources of income, the monthly number of hours worked, and whether they received any financial support from the government or non-government organizations at any time during the crisis. These data allow us to estimate both the timing and the magnitude of the economic shocks incurred by respondents' households between waves. We measure direct income shocks in two different ways and show that they provide comparable results. In our main specification, we consider whether respondents lost at least 20\% of their income (combining both income from work and other sources) between any two months from February (or the baseline month) to October (or the outcome month) 2020, to capture the effects of a sudden large drop in income. In the Appendix, we show that the results remain unchanged when adopting a less stringent measure of 10\% income loss between any two months. $$shock_1 = \begin{cases} 1, & \mbox{if } \frac{income_{t}-income_{t-1}}{income_{t-1}}\leq -0.20 \\ 0, & \mbox{otherwise } \end{cases} $$
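For concreteness, a minimal sketch of this construction, assuming a wide-format table of self-reported monthly incomes (column names and data are illustrative, not our actual code):
\begin{verbatim}
import pandas as pd

# Illustrative wide-format data: one row per respondent, one column per
# month of self-reported household income (February-October 2020).
df = pd.DataFrame({
    "id": [1, 2, 3],
    "inc_feb": [4000, 2500, 3000],
    "inc_mar": [4100, 1800, 3000],
    "inc_apr": [2900, 1900, 3100],  # extend through inc_oct
})
months = ["inc_feb", "inc_mar", "inc_apr"]

def large_income_drop(row, threshold=-0.20):
    # shock_1 = 1 if income fell by at least 20% between two
    # consecutive months, 0 otherwise
    growth = row[months].astype(float).pct_change().dropna()
    return int((growth <= threshold).any())

df["shock_1"] = df.apply(large_income_drop, axis=1)
\end{verbatim}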
In our sample of respondents who participated in the first and the last survey waves (\textit{n}=1,076), we find that about 38\% lost at least 20\% of their household income between any two months from February to October 2020.\footnote{As reported in Table \ref{tab:balancetable_shock} in the Appendix, respondents who lost at least 20\% of their household income between any two months from March to October 2020 are more likely to be young, to have a low baseline income, and to belong to a racial minority group. Furthermore, women, Democrats, and those who live in a metropolitan area incurred such a negative income shock with marginally significantly higher probability, while cohabitation (or marriage) seems to smooth the financial impact of the pandemic. We control for all these characteristics in our analysis and show that using different specifications does not change our main results.} \\ \indent \textbf{Direct health shocks}. Our main measure of direct health shock is whether respondents had a family member, a friend, or an acquaintance\footnote{We consider this combined measure, as 2.4\% of respondents have a family member who has been hospitalized, 9.8\% a relative, 14.1\% a friend, and 14.9\% an acquaintance.} who was hospitalized with COVID-19.\footnote{To control for additional direct health shocks, we also asked respondents about their type of health insurance (e.g. public or private), whether they have caring responsibilities towards an elderly person or someone with disabilities, who are at greater risk of complications from contracting the virus, and whether they knew a healthcare professional who had been working closely with COVID-19 patients.} In our longitudinal sample, we find that by October 2020, about 30\% of our respondents knew someone (among family, friends, or acquaintances) who was hospitalized with COVID-19, while 69\% knew someone who tested positive. About 33\% tested positive for COVID-19 themselves. \\ \indent \textbf{Indirect economic shocks}. In addition to the direct shock measures, we also control for indirect shocks. It is possible that many Americans changed their preferences and beliefs through mere exposure to the crisis, such as by knowing someone who was affected economically or by living in an area that suffered relatively higher economic distress compared to the rest of the country. In the months our data cover, the pandemic affected some communities more than others \citep{dyer2020covid, wright2020poverty}. Measuring economic variation between two months of the same year, however, is a challenge: many macroeconomic indicators, such as the unemployment rate or business closures, are rarely available at the county level, and are often released only at an aggregate level or at a frequency that is less regular than the timing of our survey waves, making any meaningful comparison difficult. Therefore, to measure indirect economic shocks we use data collected and updated in real time by Harvard's Opportunity Insights team on the weekly percentage variation in consumer expenditures with respect to the first week of January 2020 \citep{chetty2020}. This variable is seasonally adjusted and is available at the county level, which we match with the respondents' residential information.\footnote{The Opportunity Insights team uses anonymized data from several private companies to construct public indices of consumer spending, employment, and other outcomes. See \citet{chetty2020} for further information on series construction.}
\\ \indent \textbf{Indirect health shocks}. Collecting information at a zip code or county level on indirect health shocks, such as COVID-19 cases and deaths, is also not an easy task. In the early stages of the pandemic, many States followed different guidelines in recording COVID-19 deaths, and they all implemented different testing strategies. While our survey questions were detailed enough to account for possible exposure to the virus (i.e. by asking whether respondents knew a family member, friend, or acquaintance who tested positive), we complement them with data on the number of COVID-19 cases in respondents' counties of residence. While this measure might be subject to different protocols depending on the State, these figures were likely the same ones reported by the local media. We consider COVID-19 cases\footnote{We use the data collected by the New York Times from state and local governments and health departments, available at \url{https://github.com/nytimes/covid-19-data}.} at the county level reported by the middle of each week. We then consider the population size at the county level in 2019 and construct the following measure\footnote{We multiply this measure by 100 to ease the interpretation of the coefficients in our regressions.} of the increase in cases between week \textit{t} and \textit{t-1} in county \textit{c}: $\frac{cases_{ct} - cases_{ct-1}}{population_c}$. When, instead, we consider an outcome that is not in changes, we focus on the logarithm of the cumulative number of cases weighted by the county population: $\ln \left( \frac{cases_{ct}}{population_c} \times 100{,}000 \right)$.
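As an illustration, both case measures can be computed from a county-week panel of cumulative cases; the data layout below is an assumption, not the actual pipeline:
\begin{verbatim}
import numpy as np
import pandas as pd

# Illustrative county-week panel of cumulative COVID-19 cases merged
# with 2019 county population.
panel = pd.DataFrame({
    "county": ["A", "A", "A", "B", "B", "B"],
    "week": [1, 2, 3, 1, 2, 3],
    "cases": [10, 40, 90, 5, 5, 25],
    "population": [50000] * 3 + [20000] * 3,
}).sort_values(["county", "week"])

# New cases between week t-1 and t per capita, scaled by 100
panel["case_growth"] = (
    panel.groupby("county")["cases"].diff() / panel["population"] * 100
)

# Log cumulative cases per 100,000 residents, for outcomes in levels
panel["log_case_rate"] = np.log(panel["cases"] / panel["population"] * 100000)
\end{verbatim}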
\subsection{Politically leaning news} To understand how the media might have shaped Americans' views, we collected information on respondents' preferred news sources (including international news and social media) and the number of hours of news they consumed.\footnote{The question asked: ``Do you get your news from any of these sources (either on television or on the internet)?", and the multiple-option answers were: ``ABC, CBS, NBC, CNN, Fox News, MSNBC, and 'other, please specify'" (e.g. some respondents added The NY Times, The Washington Post, BBC, NPR, and PBS). The objective of these questions was to assess whether individuals were exposed to news that might have been politically polarizing. While there is no exact methodology to measure and rate the partisan bias of news sources \citep{budak2016fair, groseclose2005}, and, within each source, different programs might cover the same news in different tones \citep{bursztyn2020misinformation}, we were mainly interested in evaluating whether respondents were exposed to different points of view during the crisis.} Based on the news sources indicated by our respondents, we constructed a ``bias score" using the ``\textit{AllSides.com}" platform, one of the most commonly used sources of partisan media bias analysis.\footnote{\url{https://www.allsides.com/unbiased-balanced-news}} The website assigns a score from 1 (Extremely Left) to 5 (Extremely Right) to major sources of news by analyzing their written content on a regular basis. \par Matching the scores by AllSides\footnote{We use the scores of the first week of April 2020, our baseline wave.} to the respondents' choices, we create an index summing the scores of each source consulted by an individual and dividing by the number of sources consulted. \begin{center} \textit{Media slant index, for an individual consuming N sources of news} = $ \frac{\sum_{n =1}^{N} score_{n}}{N} $ \end{center} This variable measures how politically homogeneous the news sources consumed by a respondent are, taking any value between 1 and 5: the closer a respondent is to 1, the more homogeneous (i.e. less politically diversified) and left-leaning their media consumption; the closer they are to 5, the more homogeneous and right-leaning it is. A score towards the middle indicates either that respondents consume unbiased news, or that they consume news biased in both directions, and are thus exposed to both sides. In other words, we created a measure of the echo-chamber effect that might naturally arise from a heavy consumption of politically biased media. Based on this specification, we see that 51\% of Republicans consume predominantly Republican leaning news, and 46\% of Democrats consume predominantly Democratic leaning news. Among Independents and non-voters, around 25\% consume Republican leaning news and 24\% Democratic leaning news.
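For concreteness, the index can be sketched as follows; the bias scores in the dictionary are placeholders, not the actual AllSides ratings:
\begin{verbatim}
# Placeholder bias scores, 1 (extremely left) to 5 (extremely right)
BIAS_SCORE = {"ABC": 2, "CBS": 2, "NBC": 2, "CNN": 2,
              "Fox News": 4, "MSNBC": 1}

def media_slant_index(sources):
    # Average bias score across the news sources a respondent consumes
    scores = [BIAS_SCORE[s] for s in sources if s in BIAS_SCORE]
    return sum(scores) / len(scores) if scores else float("nan")

print(media_slant_index(["Fox News"]))         # 4.0: homogeneous, right-leaning
print(media_slant_index(["CNN", "Fox News"]))  # 3.0: exposed to both sides
\end{verbatim}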
\subsection{Estimation} To estimate changes in our main outcomes (i.e. preferences for welfare and economic policies, preferences for temporary relief policies, and trust in institutions), we rely on the same estimation approach. For brevity, we present the approach using trust in institutions as an example.\footnote{For the outcomes on information processing and the interpretation of reality, we mostly use OLS regressions, which we explain in greater detail in the relevant sections of the paper.} Since most of our outcomes are measured on a Likert scale, we construct a variable equal to one if the respondent decreased (increased) their confidence in a given institution (their support for a government policy) between the first and the last wave.\footnote{In the Online Appendix we replicate the same analyses with the inverted binary variables, i.e. decreased support for policies and increased trust in institutions, and show that the results are unchanged.} This approach helps us overcome some of the limitations of survey-based measures previously highlighted by \citet{bond2019sad} and \citet{zanella2019}. We flag respondents who could not have further decreased (increased) their trust (policy preference), since they had already given the minimum (maximum) value on the given scale at baseline (i.e. wave 1). We then estimate the following OLS regression, considering only the respondents who participated in both waves: $$Y_{ic}= \alpha + X_i \beta + S_i\theta_1 + Z_c \theta_2 + Yb_{i}\gamma + \epsilon_{ic} $$ \noindent with $Y_{ic}$ being a dummy variable equal to 1 if the respondent decreased (increased) their level of trust in a certain institution (or support for a policy) between the first and the seventh wave (and between the fourth and seventh wave for temporary relief policies). $X_i$ is a vector of time-invariant demographic characteristics; $S_i$ is a vector including the direct health and economic shocks that affected respondents between the survey waves in which we collected the outcome measures; $Z_c$ is a vector of indirect health or wealth shocks at the county (or zip code) level, measured as the variation between the first and the last wave (and between the fourth and last wave for the temporary policies). $Yb_{i}$ is a dummy variable equal to 1 if the respondent was at the bound of the scale in wave 1, i.e. they had already given the highest or lowest score and could not possibly further increase (or decrease) it. In all our regressions we apply survey weights, making our sample representative of the U.S. population, and we adjust the standard errors considering the primary sampling units (PSUs) and strata that the population was divided into. Survey weights are recalculated in every wave to keep the sample representative of the population. In the Appendix, we present analyses of survey attrition and show that attrition is not correlated with the outcomes. Lastly, we flag respondents who completed the surveys in a time equal to or shorter than the first percentile of sample duration, which we consider a proxy of limited attention during the survey. As we consider multiple outcomes, we replicate our analyses using Average Effect Sizes (AES), as in \citet{kling2004moving, clingingsmith2009estimating, giuliano2014, heller2017thinking}. Further, we perform a series of multiple hypothesis tests, which we show in the Appendix. In the Appendix, we also repeat our main analyses adopting other estimation techniques: we perform a logistic regression on the same specification presented above, we run a fixed effects model using our data in panel format, and we vary the way in which we measure shocks. We show that the main results remain unchanged.
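A minimal sketch of the baseline change regression above, using weighted least squares with survey weights (the PSU/strata adjustment is omitted for brevity, and all variable names are illustrative):
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated respondent-level data standing in for the survey
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "decreased_trust": rng.integers(0, 2, n),  # Y_ic
    "income_shock": rng.integers(0, 2, n),     # S_i: lost >= 20% of income
    "health_shock": rng.integers(0, 2, n),     # S_i: knows someone hospitalized
    "county_case_change": rng.normal(size=n),  # Z_c: indirect shock
    "at_bound_w1": rng.integers(0, 2, n),      # Yb_i: floor/ceiling in wave 1
    "age": rng.integers(18, 90, n),            # X_i (one of many controls)
    "weight": rng.uniform(0.5, 2.0, n),        # survey weight
})

model = smf.wls(
    "decreased_trust ~ income_shock + health_shock"
    " + county_case_change + at_bound_w1 + age",
    data=df, weights=df["weight"],
).fit(cov_type="HC1")  # robust SEs as a stand-in for the survey design
print(model.params)
\end{verbatim}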
\section{Results} We begin our analysis by looking at the overall support for policies and institutional trust across survey waves. The first notable result is that, while our first wave was implemented shortly after the number of COVID-19 cases started soaring in the country, our baseline levels of policy support and institutional trust are comparable to previous GSS waves, as shown in the Appendix in Tables \ref{tab:GSS_policies}, \ref{tab:GSS_trust1} and \ref{tab:GSS_trust2}. When comparing our baseline wave (April) to our last wave (October), we see that the share of Democrats supporting public spending overall remained around an average of 87\%, while the share of Republicans decreased to 66\%, though still higher than in previous years. We see a similar trend for the three temporary relief policies, which had an average of 80\% support among Democrats in April and 76\% in October, compared to an average of 42\% support among Republicans in April that decreased to 32\%, and support that fell from 66\% to 51\% among Independents and non-voters.\footnote{All the percentages reported in this section account for survey weights.} However, these differences mask important heterogeneity. \par To provide a more granular measure of the gap, we calculate the partisan gap on welfare and temporary relief policies as the difference in average support between Democrats and Republicans who do not consume politically homogeneous media, and then repeat the same calculation for the distance in average scores between Democrats who consumed Democratic leaning media and Republicans who consumed Republican leaning media. We then replicate this approach for shocks. The summary plots are shown in Figures \ref{fig:policies_1} to \ref{fig:confid_3} in Appendix \ref{A3}. Among Democrats and Republicans who did not consume politically homogeneous media, the partisan distance decreased on seven out of the 10 policies we tracked, while among consumers of politically biased media from both parties the distance, which was already large in April, increased further for seven out of the 10 policies. Similarly, for the temporary relief policies, we find that the partisan gap increased comparatively more among politically biased media consumers. When we replicate the same calculations for the partisan distance that might arise as a result of direct economic or health shocks, the results are more mixed. A direct economic shock reduced political distance on six out of the 10 policies we track, and sometimes significantly so.\footnote{For example, Republicans who lived through a direct personal shock during the crisis - that is, who either lost at least 20\% of their income or knew someone hospitalized with COVID-19 - were marginally more likely to increase support for a guaranteed basic income (27\% compared to 17\% of Republicans who did not incur either shock; F(1,171)=3.0282, \textit{p}=0.0836). Among Democrats, where support for the policy starts from a higher baseline of 67\%, those who incurred a shock were not significantly different from those who did not incur either direct shock (22\% vs 25\%, F(1,169)=0.301, \textit{p}=0.583).} \par Moving to respondents' trust in institutions over time, we see that the partisan gap overall gets larger compared to previous years, particularly for the scientific community, where the Democrat-Republican gap doubled by the end of October 2020. This is mostly due to a significant drop among Republicans, of whom 61\% trusted the scientific community in 2018, 51\% in April 2020, and 36\% in October 2020 (compared to 59\%, 68\%, and 70\% of Democrats), and specifically among Republicans consuming Republican leaning news (see Figure \ref{fig:confid}). This is in line with \citet{hmielowski2014attack}, who show that consuming more conservative media decreases trust in scientists, which, in turn, negatively affects support for climate change related policies. More recently, and related to the COVID-19 pandemic, \citet{bursztyn2020misinformation}, \citet{ajzenman2020more}, and \citet{mariani2020words} find similar results in other countries. We also record a large increase in the partisan gap in trust in the institutions that played a major role in managing the crisis, namely hospitals, Congress and Senate, and the White House, as a result of direct economic or health shocks. \par In the next sections we disentangle the effects of shocks, party, and media, controlling for a rich set of socio-economic and demographic characteristics. We first show the results of the regressions estimating changes in preferences for welfare policies, temporary relief policies, and trust in institutions. We then show a separate set of regressions on how respondents processed information depending on their direct experiences and media diet, which we complement with a randomized information experiment.
\subsection{How shocks and media changed support for policies} Figure \ref{fig:welfare_preferences} reports the coefficients and confidence intervals of the regressions estimating the role of shocks, party, and media in changing respondents' preferences for policies between April and October 2020 (for full specifications see Tables \ref{tab:media_policies_A}, \ref{tab:media_policies_B}, and \ref{tab:media_policies_C} in the Appendix). Having lost at least 20\% of income is associated with a marginal increase in support for the introduction of a guaranteed basic income and assistance for the elderly, while it decreases the belief that the government should help industry grow. The income shock coefficient is even larger for the increase in support for greater government intervention in all the temporary relief policies, as shown in Figure \ref{fig:covid_preferences}. Similarly, knowing someone who was hospitalized with COVID-19 led to an increase in support for greater government intervention to assist the elderly, presumably because most hospitalizations concerned older Americans who were more vulnerable to the virus, as well as a marginal increase in support for helping low-income students and keeping prices under control. An indirect economic shock, namely living in a county that recovered its consumer expenditure faster, is associated with stronger support for reducing income inequality, helping citizens affected by natural disasters, and keeping prices under control. This measure of indirect shock is also correlated with stronger support for all temporary relief policies. Our interpretation of these correlations is that whether a shock affected a person directly or indirectly changes the type of policies they support. A person who incurred a direct shock might now be more appreciative of welfare policies that are targeted at the individual level and can improve the livelihood of their own families, while respondents who were not directly affected but lived in areas that witnessed a faster economic recovery will be more appreciative of economic policies that can boost internal demand and restart the economy. This interpretation is in line with the analysis by \citet{chetty2020}, who note that economic policies during a pandemic have different effects on households based on their income level. Thus, it is possible that families who lost part of their income during the crisis would now favor social insurance policies that help mitigate the economic hardship they lived through, while higher income households might be more likely to assume that more traditional macroeconomic policies aimed at stimulating internal demand would still be effective at reducing the unemployment rate. Across all outcomes, we also note important differences between Democrats and Republicans. As reported in Tables \ref{tab:media_policies_A}, \ref{tab:media_policies_B}, and \ref{tab:media_policies_C}, the sign of the Republican party dummy variable is almost always negative, while the opposite is true for the Democratic party variable. In the second column of each outcome, we see that this polarizing effect can mostly be explained by respondents who consumed politically biased media, in line with other studies \citep{gentzkow2011newspapers, dellavigna2007, allcott2020polarization, grossman2020political, simonov2020persuasive}.
\begin{figure}[H] \caption{The effect of shocks and media on welfare policy preferences} \label{fig:welfare_preferences} \begin{center} \includegraphics[height=19cm]{welfare_preferences_grid.jpeg} \end{center} \begin{minipage}{\linewidth} \setstretch{0.75} {\scriptsize{\textit{Notes}:} \scriptsize All regressions are OLS regressions that take into account population survey weights and the sampling procedure. The dependent variable is a dummy=1 if the respondent increased their belief that it should be a government's responsibility to provide the following policies. The control variables include: gender, race, age, education, parental status, caring responsibilities for an elderly or a person with disability, baseline income in February 2020, cohabitation with a partner, labor force participation and employment status in February 2020, health insurance provider, if the respondent had financial difficulties before the pandemic, macro-region, metro vs rural, the population density at the zip code, and two dummy variables indicating if they consume at least 30min a week of international news and if they have at least one social media account. We also control for whether respondents completed the survey in a shorter time than the 99$^{th}$ percentile as well as ceiling effects.} \end{minipage} \end{figure} \begin{figure}[H] \caption{The effect of shocks and media on temporary relief policies} \label{fig:covid_preferences} \begin{center} \includegraphics[height=6cm]{covid_preferences_grid.jpeg} \end{center} \begin{minipage}{\linewidth} \setstretch{0.75} {\scriptsize{\textit{Notes}:} \scriptsize All regressions are OLS regressions that take into account population survey weights and the sampling procedure. The dependent variable is a dummy=1 if the respondent increased their support for the following temporary relief policies. The control variables include: gender, race, age, education, parental status, caring responsibilities for an elderly or a person with disability, baseline income in February 2020, cohabitation with a partner, labor force participation and employment status in February 2020, health insurance provider, if the respondent had financial difficulties before the pandemic, macro-region, metro vs rural, the population density at the zip code, and two dummy variables indicating if they consume at least 30min a week of international news and if they have at least one social media account. We also control for whether respondents completed the survey in a shorter time than the 99$^{th}$ percentile as well as ceiling effects.} \end{minipage} \end{figure} \subsection{How shocks and media changed trust in institutions} The impact of crises on institutional trust has been explored in previous studies. \citet{algan2017} and \citet{dotti2016} find that Europeans decreased their trust in the European Parliament and national parliaments after the 2008 global financial crisis (GFC), and \citet{knell2015} finds similar negative trends for banks, particularly among people who were directly affected by the crash. Similarly, \citet{aksoy2020political} and \citet{aassve2021epidemics} show that exposure to previous epidemics negatively affects trust in the government and public health systems.
In Figure \ref{fig:institution_preferences} we see that, while demand for government spending increased among those who were affected by this crisis (Figure \ref{fig:covid_preferences}), respondents were also more likely to reduce their confidence in the people running most institutions (see Tables \ref{tab:trust_a} and \ref{tab:trust_b} in Appendix A.4 for full specifications). Complementing the findings of \citet{giuliano2014}, we show that such effects can occur very rapidly and regardless of a person's age. In particular, we find that losing at least 20\% of household income in any two months during the crisis significantly decreased trust in financial institutions and in the private sector - two closely related entities - as well as in the Congress and Senate, and hospitals. As shown in the Appendix, some of these effects are even stronger among respondents whose income in October was at least 20\% lower than in April - that is, those who had not recovered from the economic shock by the last wave of our survey. Looking at our measures of indirect shocks, we do not see large effects, apart from the fact that an increase in consumer expenditures between April and October is positively correlated with a decrease in confidence in the White House. We explain this by noting that this measure is sensitive to its baseline: the larger the initial drop, the larger the possible subsequent increase in consumer expenditures. Conversely, we see that respondents who lived in counties that recovered more quickly from the initial drop in consumer spending were less likely to have reduced their confidence in health insurance companies and hospitals, presumably because they associated the economic recovery with a better crisis response by institutions. \begin{figure}[H] \caption{The effect of shocks and media on trust in institutions} \label{fig:institution_preferences} \begin{center} \includegraphics[height=16cm]{institution_preferences_grid.jpeg} \end{center} \begin{minipage}{\linewidth} \setstretch{0.75} {\scriptsize{\textit{Notes}:} \scriptsize All regressions are OLS regressions that take into account population survey weights and the sampling procedure. The dependent variable is a dummy=1 if the respondent decreased their confidence in the people running the following institutions. The control variables include: gender, race, age, education, parental status, caring responsibilities for an elderly or a person with disability, baseline income in February 2020, cohabitation with a partner, labor force participation and employment status in February 2020, health insurance provider, if the respondent had financial difficulties before the pandemic, macro-region, metro vs rural, the population density at the zip code, and two dummy variables indicating if they consume at least 30min a week of international news and if they have at least one social media account. We also control for whether respondents completed the survey in a shorter time than the 99$^{th}$ percentile as well as ceiling effects.} \end{minipage} \end{figure} \par We note again substantial differences across parties. Compared to Independents and non-voters, Republicans were less likely to have decreased trust in the U.S. Congress and Senate and in the White House, while the exact opposite is true for Democrats (by October, only about 3\% of Democrats had a lot of confidence in the White House, compared to 52\% of Republicans and 18\% of Independents and non-voters).
Democrats were also less likely to have decreased their trust in the scientific community and in hospitals, regardless of whether they incurred any shock. In early April, 67\% of Democrats, 50\% of Republicans, and 51\% of the other respondents declared a ``great deal" or ``complete" confidence in the scientific community, whereas by the end of October, the share reporting the same level of trust had increased to 69\% among Democrats, but had dropped to 44\% among Independents and to 36\% among Republicans. These opposite directional effects support the claim that the crisis might have further polarized Americans' trust in institutions due more to their political party and media consumption than to direct negative experiences. \par Overall, these results suggest that direct negative experiences during a crisis play an important role in increasing support for welfare policies and greater government spending, as well as in reducing trust in institutions. We have also shown that these effects can occur very rapidly, sometimes over a period of one to six months, and rarely return to pre-crisis levels in an equally short time. Further, these effects are robust to several specifications and a rich set of controls, as shown in greater detail in Section \ref{robust}. We also find that political party affiliation per se does not fully explain the polarizing trends, and that Democrats and Republicans who lived through similar negative experiences tend to react in similar ways when it comes to policy support and confidence in institutions. We find, instead, that consuming mostly politically biased media is associated with a stronger polarizing trend. This raises the question of whether citizens might be more likely to converge in their views on several issues in the absence of polarizing media outlets. In the next section, we study more closely the mechanisms through which biased media consumption might have increased polarization, and we show that most of the ``distance" between Democrats and Republicans can be explained by different understandings of the gravity of the crisis. \subsection{Information processing and the interpretation of reality} Previous studies have documented circumstances in which individuals might either update their beliefs or engage in motivated reasoning when exposed to new information \citep{gentzkow2006media, barabas2004deliberation, gaines2007, taber2006motivated}. Evidence on either behavior as a consequence of a direct experience during a crisis is scarcer. In this section we aim to tease out the roles of direct shocks \textit{and} exposure to new information in updating beliefs. To do this, we focus on respondents' grasp of the COVID-19 death rate, arguably the most salient indicator of the gravity of the crisis. While cases were soaring across the country, it was the rapidly increasing number of hospitalizations and deaths that prompted some states and cities to introduce stringent non-pharmaceutical measures to contain the spread of the virus. \par We first wanted to document whether there existed a partisan gap in expectations about the gravity of the pandemic based on this metric. In the fifth wave of the survey (week commencing May 18), we showed all respondents the COVID-19 death rate according to the CDC up to the week before - i.e. 90,000 deaths by May 17.
We then asked them to forecast how many more Americans they thought would die by the end of the year due to COVID-19.\footnote{The question asked: \textit{By May 17, the U.S. Centers for Disease Control and Prevention (CDC) stated that about 90,000 Americans have so far died from COVID-19 (coronavirus). In addition to this, how many more Americans do you think will die by the end of this year due to coronavirus?}} After they answered the question, all respondents saw their expected total death rate by the end of the year (i.e. the sum of the CDC figure and their additional estimate), and were prompted to look at this figure again and judge how public authorities had been managing the crisis.\footnote{The question asked: \textit{Looking again at your estimated number of total coronavirus deaths in the U.S. by the end of the year, and considering how public authorities in the country have been managing the pandemic crisis, do you think the estimate you expect can be defined as a: Great success/ Success / Failure / Great Failure}. We specifically chose the wording 'public authorities' to partly reduce political priming effects.} The objective of these questions was twofold. Firstly, we wanted to study whether respondents held different beliefs and expectations about the danger of the virus at the first peak of the crisis; by making the latest CDC figure common knowledge to all respondents, we also aimed to partially control for differences in knowledge. Secondly, we were interested in understanding how partisanship affected their interpretation of reality, and whether respondents engaged in a rationalization process in line with their political views.\footnote{\citet{gaines2007} studies a similar setting, showing the results of a survey in which Americans were asked to state the need and support for the Iraq war in 2003: while the majority of all respondents thought it was unlikely that the U.S. would ever find weapons of mass destruction, Democrats were more likely to conclude that the weapons simply did not exist, while Republicans were more likely to state that they believed the Iraqi government had moved or destroyed them.} \par We find that, among Republicans, 24\% believed the rate would be 10,000 deaths or fewer (the lowest available option) compared to just 9\% of Democrats. The trend is reversed for the high-bound estimates, where 10\% of Republicans believed there were going to be an additional 100,000 deaths or more, compared to 31\% of Democrats. A Kruskal-Wallis equality-of-populations rank test confirms that these differences are statistically significant ($\chi^2$= 93.25, p$<$0.001). Among Independents and non-voters, we find a less polarized and more equally distributed estimate of additional deaths by the end of the year: about 19\% expected the number to be 10,000 or fewer, about 21\% between 20,000 and 30,000, another 19\% around 50,000, and about 18\% 100,000 or more. \\ \par We further investigate the correlates of these differences by performing a regression, reported in Table \ref{tab:expected_death}. We find that the discrepancy in forecasts is not only significantly different across party lines, but is further exacerbated by respondents' sources of information. Democrats consuming Democratic leaning news estimated, on average, about 11,500 more deaths than those consuming unbiased sources, while the opposite is true for Republicans consuming Republican leaning media, who reported about 11,000 fewer deaths.
We then look at whether the additional death rate that they estimated could be considered a success or a failure. Also in this case, we observe a strong partisan effect: 15\% of Democrats would consider their own estimate a success, while the percentage increases to 45\% among Independents and non-voters, and reaches 73\% among Republicans (F(2, 393.90) = 76.3895, $p<$0.0001). Here too, consuming politically leaning news further exacerbates the difference: Democratic leaning news is correlated with an 11.5 percentage point decrease in the probability of considering the death rate a success, whereas Republican leaning news is correlated with an 18 percentage point increase. The effects of party and media are mostly robust to the inclusion of the expected number of deaths as a control, as shown in Column (3) of Table \ref{tab:expected_death}. This suggests that political polarization might continue to play a role even after controlling for expectations - that is, Democrats and Republicans seem to judge the gravity of the crisis differently even when holding similar beliefs. At the same time, however, we also see that the closer Democrats and Republicans are in their beliefs, the smaller their distance in assessing the gravity of the crisis. In Figure \ref{fig:forecast_death}, we plot the correlation between the expected additional deaths and whether respondents considered this figure a success. Following \citet{chetty2014measuring}, we report a binscatter, controlling for a set of variables and using a restricted model in which each covariate has the same coefficient in each by-value sample.\footnote{A binscatter is a binned scatterplot, in which the x-axis variable (estimated deaths) is grouped into equal-sized bins and the means of the x- and y-axis variables within each bin are computed. This allows us to visualize the share of respondents considering the estimated death rate a success, conditional on the value that they had assigned.}\footnote{We also repeated the same exercise by plotting the residuals of a regression with a dummy variable indicating whether the additional expected deaths were a success as the dependent variable, and a set of controls as explanatory variables. This way, we control for the demographic characteristics that might be correlated with both our outcome (success) and our explanatory variable (forecast deaths). Results are robust also to this specification.} \begin{figure}[H] \caption{Share of respondents believing that the annual COVID-19 death rate in 2020 could be considered a success, by political party and expected death rate.} \label{fig:forecast_death} \begin{center} \includegraphics[height=8cm]{US_death_rate_forcast_success_binscat_controls.png} \end{center} \begin{minipage}{\linewidth} \setstretch{0.75} {\scriptsize{\textit{Notes}:} \scriptsize The figure shows a binned scatterplot in which the x-axis variable (estimated deaths) is grouped into equal-sized bins and the means of the x- and y-axis variables within each bin are computed. The plot controls for a set of variables.} \end{minipage} \end{figure} These results suggest that while Americans assessed the gravity of the situation through political lenses, there might have been a slight convergence in views as the distance in (mis)perceptions about the death rate narrowed.
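For illustration, the unconditional version of such a binscatter can be built in a few lines; the data below are simulated, and the actual figure additionally controls for covariates:
\begin{verbatim}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({"expected_deaths": rng.uniform(0, 150000, n)})
# Simulated judgment: probability of calling the figure a "success"
# declines with the expected death toll
df["success"] = rng.random(n) < (0.7 - df["expected_deaths"] / 300000)

# Equal-sized bins of the x variable; plot within-bin means of x and y
df["bin"] = pd.qcut(df["expected_deaths"], q=20, labels=False)
binned = df.groupby("bin").agg(x=("expected_deaths", "mean"),
                               y=("success", "mean"))
plt.scatter(binned["x"], binned["y"])
plt.xlabel("Expected additional COVID-19 deaths")
plt.ylabel("Share judging the death rate a success")
plt.show()
\end{verbatim}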
An important caveat is that media consumption might be endogenous: citizens might prefer media sources that are less diversified and more aligned with their views, or might consider a media source more reliable if it confirms their prior beliefs \citep{gentzkow2006media}. This, in turn, influences how they perceived the gravity of the crisis and their support for response policies. To disentangle this effect we implemented an experiment in which we randomized exposure to the same information source. \\ \textbf{Survey experiment}. In the sixth wave of the survey (week commencing June 22), we asked every respondent the following questions: \textit{How many people have died in your state because of coronavirus from the first death until today?} and \textit{How many people have died in the U.S. because of coronavirus from the first death until today?}\footnote{To avoid any survey fatigue effects, we asked these questions within the first block of ten questions of the survey.} Immediately prior to seeing these questions, half of the respondents - the treatment group\footnote{See Table \ref{tab:balancetable_deathexp} in the Appendix for the balance table.} - were shown a blank page with only the following text: \textit{Please answer the following questions carefully. If you wish to do so, you can look up the answer on the official CDC website at the following link: https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/cases-in-us.html}. The link was spelled out, as opposed to embedded as a hyperlink, so respondents could see the full URL of the webpage and could choose not to click on it and move to the next question. If they clicked on the link, a separate browser window would open, redirecting them to the CDC webpage that clearly displayed the total number of cases and deaths in the country, and a map that showed the same statistics for each state by simply hovering the mouse over the area of interest (see Figure \ref{fig:wte} in the Appendix). \par We find that the treatment significantly increased the time respondents took to answer the question, particularly among Republicans, suggesting that respondents did not avoid this new information even though they were not incentivized to consult the link.\footnote{Due to privacy regulations, we could not check whether respondents clicked on the link, but we are able to track the time they spent answering the questions, which we use as a proxy for engagement with the website. We see that treated respondents spent an average of 40 seconds answering the first question on the total number of deaths in their state, compared to 26.5 seconds in the control group (Adj. Wald test with survey weights: F(1,189)=14.49; \textit{p}$<$0.001), but about the same time answering the second question on the number of deaths in the U.S. (25.5 seconds in the control group and 25.6 in the treatment group; Adj. Wald test with survey weights: F(1,189)=0.00; \textit{p}=0.973). These estimates are confirmed in a linear regression. We also find differences across political lines, with the treatment increasing the time Republicans (50.8 seconds in the control group and 89.1 in the treated one; Adj. Wald test F(1,189)=5.59; \textit{p}=0.015) and Independents (42.2 seconds in the control group and 52.4 in the treated one; Adj. Wald test F(1,189)=4.72; \textit{p}=0.033) spent answering the questions, but we do not notice significant differences between Democrats in the control and treatment groups.
In other words, Republicans did not discard or avoid the new information, even if they might have anticipated the objective of the question asked \citep{saccardo2020}.} The treatment also significantly increased the share of respondents who reported the state death rate according to the CDC, especially among Republicans (from 41\% to 60.4\% in the treated group, F(1,189)=6.319, \textit{p}=0.013), compared to Democrats (from 51.5\% to 61.2\% in the treated group, F(1,189)=2.903, \textit{p}=0.09).\footnote{We analyzed whether the treatment had a stronger impact on respondents who expected a low number (i.e. below the median) of additional deaths in wave 5. Results show a positive but not significant effect.} These effects are confirmed in a series of regressions showing that the treatment significantly increased the likelihood of reporting the correct death rate both at the State and the country level. \par After answering the questions on the actual number of deaths in their state and in the U.S., all respondents were asked: \textit{Looking again at your estimated number of total coronavirus deaths in your state and in the US so far, and considering how public authorities in the country have been managing the pandemic crisis, do you think the current death rate can be defined as a: Great success; success; failure; or great failure}. Among Democrats, 88\% already considered the outcome a failure or a great failure, but having answered the death rate questions in line with CDC figures further increased the likelihood of stating so (from 85\% among those who did not answer correctly to 92\%, F(1,190)=3.187, \textit{p}=0.076). Among Republicans, a lower 40\% overall considered the death rate a failure or great failure of how public authorities managed the crisis; here, answering the death rate questions in line with the official CDC figures reduced this likelihood, although not significantly (from 40\% among those who did not answer correctly to 29\%, F(1,189)=20.026, \textit{p}=0.156). Importantly, we do not observe a backfiring effect of information exposure among Republicans, suggesting that respondents might not have engaged in motivated reasoning \citep{nyhan2021}. \begin{figure}[H] \caption{Judgment as a function of accurate information} \label{f:deaths__success_f} \begin{center} \includegraphics[height=11cm]{death_success_treat.png} \end{center} \begin{minipage}{\linewidth} \setstretch{0.75} {\scriptsize{\textit{Notes}:} \scriptsize The figure on top shows the share of respondents who correctly estimated the number of COVID-19 deaths in both their state and the U.S., by party and treatment group. The figure at the bottom shows the share of respondents who believed the COVID-19 death rate could be considered a success, by party and by whether they were in the treatment or the control group. Error bars are 95\% confidence intervals.} \end{minipage} \end{figure} \par As estimating the number of deaths in line with the CDC might be endogenous to a person's political beliefs, we exploit the exogenous variation in the likelihood of correctly estimating the number of deaths caused by our treatment, which was randomly assigned.
Hence, we study whether the number of deaths affected respondents' judgement, controlling for a set of demographic characteristics, media consumption, and shocks: $$ Pr(Success_{ic}) = \alpha + \beta Shock_{i} + \gamma Shock_{c} + \theta_1 Rep_i + \theta_2 Dem_i + \phi Treat_i + \delta X_{ic} + \epsilon_{ic} $$ The dependent variable in our regression is the probability of considering the current deaths a ``success''; $Shock_i$ and $Shock_c$ indicate whether the respondent incurred a direct or indirect shock\footnote{The indirect economic shock in this regression is the variation in consumer spending between the time of the survey wave and the baseline of January 2020.}, and $X_{ic}$ captures a set of demographic variables. In table \ref{tab:death_exp_main}, we show the results of the OLS regressions described above. In the first two columns, we show that the treatment succeeded in increasing the chances of stating the death rate as per CDC figures, both at the state and the national level, while in the remaining columns we report the effect of the treatment on the likelihood of declaring the number of deaths a success. In table \ref{tab:death_tr_effect} in the Appendix, we show that the treatment was effective in increasing the time respondents spent answering the questions. In columns (3)-(6), we further break down the outcomes of the experiment, separating those who under-, over- or correctly estimated the number of deaths at the state or the US level. We see that Republicans were significantly more likely to underestimate the number of state and US deaths, while Democrats were less so. In the control group, which was not shown the link to the CDC website, 35\% of the Republicans under-reported the number of US and state deaths, while the shares of Democrats doing so were 18\% and 26\%, respectively. Similarly, 35\% of the Democrats overestimated the number of deaths in the US, compared to 27\% of the Republicans. These results suggest that exposure to the same information can correct for the partisan gap in estimating the gravity of a crisis, in line with recent studies \citep{haaland2019}. We also find a directional, although not significant, change in the way respondents judged the gravity of the crisis and the success of the response by public authorities as a result of this intervention.
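For concreteness, the linear probability model above can be sketched as follows. This is an illustrative Python snippet, not the code used for the paper; the data file and the column names (\texttt{success}, \texttt{treat}, \texttt{shock\_direct}, \texttt{shock\_county}, \texttt{rep}, \texttt{dem}) are hypothetical, and the control set is abbreviated.
\begin{verbatim}
# Minimal sketch of the OLS specification above; all column names
# are hypothetical placeholders for the survey variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_wave6.csv")  # hypothetical data file

formula = ("success ~ treat + shock_direct + shock_county"
           " + rep + dem + age + female + education")
fit = smf.ols(formula, data=df).fit(cov_type="HC1")  # robust std. errors
print(fit.summary())
\end{verbatim}
A weighted variant (\texttt{smf.wls} with the survey weights) would mirror the survey-weighted estimates reported in the table below.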
\begin{table}[H] \centering \caption{The effect of providing factual information in correcting misperceptions and changing the assessment of the gravity of the crisis.} \label{tab:death_exp_main} \resizebox{\textwidth}{!}{% \begin{tabular}{lcccccc} \hline \hline & (1) & (2) & (3) & (4) & (5) & (6) \\ & \begin{tabular}[c]{@{}c@{}}Correctly \\ estimated\\ US \& State\\ deaths\end{tabular} & \begin{tabular}[c]{@{}c@{}}Correctly\\ estimated\\ US \& State\\ deaths\end{tabular} & \begin{tabular}[c]{@{}c@{}}US \& State \\ deaths\\ are a\\ success\end{tabular} & \begin{tabular}[c]{@{}c@{}}US \& State \\ deaths\\ are a\\ success\end{tabular} & \begin{tabular}[c]{@{}c@{}}US \& State \\ deaths\\ are a\\ success\end{tabular} & \begin{tabular}[c]{@{}c@{}}Correctly \\ stated\\ the US deaths\\ VS the world\end{tabular} \\ \hline & & & & & & \\ CDC Tx & 0.118*** & 0.149*** & -0.0415 & -0.0198 & -0.0404 & 0.0125 \\ & (0.0305) & (0.0341) & (0.0313) & (0.0370) & (0.0368) & (0.0423) \\ CDC Tx*Rep news & & -0.0370 & & -0.0996 & -0.0613 & 0.236** \\ & & (0.0643) & & (0.0719) & (0.0729) & (0.0922) \\ CDC Tx*Dem news & & -0.0905 & & -0.000831 & -0.00802 & -0.0417 \\ & & (0.0624) & & (0.0617) & (0.0571) & (0.0671) \\ Democrat & 0.0615 & 0.0596 & -0.130*** & -0.130*** & -0.0533* & -0.111*** \\ & (0.0402) & (0.0404) & (0.0307) & (0.0306) & (0.0291) & (0.0416) \\ Republican & -0.0330 & -0.0331 & 0.230*** & 0.230*** & 0.143*** & -0.0336 \\ & (0.0369) & (0.0366) & (0.0431) & (0.0432) & (0.0425) & (0.0386) \\ Lost 20\% income & -0.0307 & -0.0313 & -0.0109 & -0.0108 & 0.00956 & -0.0441 \\ & (0.0395) & (0.0398) & (0.0383) & (0.0380) & (0.0351) & (0.0429) \\ Knows hospitalized & -0.0730* & -0.0746* & -0.0135 & -0.0145 & -0.0164 & -0.0166 \\ & (0.0422) & (0.0418) & (0.0329) & (0.0333) & (0.0324) & (0.0389) \\ ln COVID-19 cases & -0.0178 & -0.0187 & -0.00475 & -0.00522 & -0.0150 & 0.0191 \\ & (0.0178) & (0.0179) & (0.0191) & (0.0195) & (0.0199) & (0.0251) \\ Consumer exp - June & 0.158 & 0.153 & -0.204* & -0.228** & -0.179 & 0.0111 \\ & (0.124) & (0.123) & (0.115) & (0.114) & (0.111) & (0.119) \\ Dem leaning news & 0.0188 & 0.0635 & -0.0419 & -0.0420 & -0.0175 & 0.0280 \\ & (0.0369) & (0.0478) & (0.0380) & (0.0526) & (0.0503) & (0.0577) \\ Rep leaning news & -0.0267 & -0.00890 & 0.267*** & 0.311*** & 0.214*** & -0.284*** \\ & (0.0514) & (0.0565) & (0.0420) & (0.0508) & (0.0562) & (0.0726) \\ \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Expected additional death\\ rate is a success (w5)\end{tabular}} & & & & & 0.390*** & \\ & & & & & (0.0390) & \\ Constant & 0.300** & 0.297** & 0.396*** & 0.395*** & 0.174 & 0.954*** \\ & (0.140) & (0.140) & (0.139) & (0.140) & (0.134) & (0.249) \\ & & & & & & \\ Controls & Yes & Yes & Yes & Yes & Yes & Yes \\ Observations & 1,141 & 1,141 & 1,137 & 1,137 & 1,137 & 948 \\ R-squared & 0.158 & 0.160 & 0.285 & 0.287 & 0.390 & 0.102 \\ Mean dep. var. & 0.330 & 0.330 & 0.335 & 0.335 & 0.335 & 0.552 \\ \hline \multicolumn{7}{l}{% \begin{minipage}{1.25\columnwidth}% \small \textit{Notes}: Standard errors in parentheses. *** p\textless{}0.01, ** p\textless{}0.05, * p\textless{}0.1. The dep. var. in columns (1) and (2) is a dummy=1 if the respondent provided the correct death rate, while in columns (3), (4) and (5) it is a dummy=1 if the respondent believed the COVID-19 death rate at the national and state level was a success. Col. (6) reports a regression predicting whether the respondent correctly stated that the US death rate was higher than in most countries in the world, in wave 7.
The control variables include: gender, race, age, education, parental status, caring responsibilities for an elderly person or a person with disability, baseline income in February 2020, cohabitation with a partner, labor force participation and employment status in February 2020, health insurance provider, whether the respondent had financial difficulties before the pandemic, macro-region, metro vs rural status, and the population density at the zip code level. We also control for whether respondents completed the survey in a shorter time than the 99$^{th}$ percentile. Finally, we consider social media usage and the amount of international news consumed. \end{minipage}% } \end{tabular}% } \end{table} Lastly, we consider whether respondents have a preference for consistency in their (motivated) responses \citep{falk2018information}. To do this, we replicate the same regressions as above, this time adding a dummy for whether the respondent stated in the previous wave (i.e. wave 5) that the expected additional death rate could be considered a success. We find that this dummy significantly increases the probability that respondents considered the actual death rate a success\footnote{We also replicate the same analysis by looking at whether the treatment had heterogeneous effects depending on the size of the gap between the forecast in wave 5 and the actual measure in wave 6. We find that the treatment had a similar effect regardless of how ``far'' a person's forecast was.}. In other words, those who considered in May that the forecast death rate was a success were more likely to consider the death rate in the subsequent wave a success. However, the inclusion of this control does not change the statistical significance of the treatment effect in the first stage regression, nor the significance of party identity and biased media in the second stage. As an additional check, we instrument ``correctly estimated the state and country number of deaths'' with the treatment assignment, which provides an exogenous variation. The results of the first and the second stage of a Two Stage Least Squares (2SLS) regression are presented in table \ref{tab:covid_iv} in the Appendix. \par Since the treatment effects are large and significant, we are interested in studying whether such a simple and non-incentivized suggestion to improve one's beliefs had persistent effects. To do this, in wave 7 (end of October), more than three months after the survey experiment wave, we asked respondents how the U.S. COVID-19 death rate compared to the rest of the world. The possible answers ranged from ``\textit{The highest in the world}'' to ``\textit{The lowest in the world}'', on a four-point scale. In column (6) of table \ref{tab:death_exp_main} we see that the treatment had a persistent, significant, and large effect in changing respondents' beliefs about the gravity of the crisis in the long run. Further, we see from the interaction terms that the treatment helped to counterbalance the role played by the consumption of biased media (see Appendix for the full regression on this question alone). To show this graphically, in figure \ref{f:death_rate_pol} we report the percentage of individuals selecting each option by political party\footnote{When this survey wave was administered, the U.S. cumulative death rate was the 10th highest in the world, with 685 deaths per million inhabitants.
We consider the cumulative death rate per million inhabitants reported by the website ``Our World In Data'' on October 26, 2020 (url: \url{https://ourworldindata.org/covid-deaths}).}. While about half of the respondents correctly selected that the death rate was higher than in most countries, answers varied significantly across parties, with Democrats overestimating the U.S. ranking (40\% believe it to be the highest in the world, compared to 20\% of the Independents, and 12\% of the Republicans) and Republicans underestimating it (40\% reporting it lower than in most countries, compared to 20\% of the Independents, and 7\% of the Democrats). Also in this case, a person's news consumption mattered: consuming Republican-leaning news is associated with a higher probability of incorrectly stating the death rate compared to the rest of the world. \\ \begin{figure}[H] \caption{The long-term effect of information treatment on beliefs.} \label{f:death_rate_pol} \begin{center} \includegraphics[height=11cm]{death_experiment_us_pol_party3_tc.png} \end{center} \begin{minipage}{\linewidth} \setstretch{0.75} {\scriptsize \textit{Notes}: The figure shows the share of respondents who correctly estimated the death rate of the U.S. compared to the rest of the world, by political party and by whether they were in the treatment group of the similar question we asked more than three months earlier. Error bars show 95\% confidence intervals.} \end{minipage} \end{figure} In sum, we see that Democrats and Republicans who consumed media that leaned more towards their political ideology were more likely to hold erroneous beliefs about the gravity of a crisis, potentially mitigating the convergent effect that shocks played. However, supplying individuals with the same information had a significant and long-term effect in closing this partisan gap\footnote{A debated issue with survey-based measures is whether some answers are biased by a cheerleading effect - that is, survey respondents' inclination to respond to questions in a way that signals support for their political party rather than what they actually believe. Recent studies, however, show that cheerleading effects might be less of a concern, and that respondents do engage in motivated reasoning even in financially incentivized contexts \citep{peterson2021partisan}.}. We complement our experimental analysis by estimating the treatment effect across sub-groups of respondents. We do this to understand whether some individuals are more responsive to information interventions and, if so, what the characteristics of these individuals are. Further, this methodology also allows us to assess whether shocks and media consumption increase respondents' responsiveness to information treatments. \\ \par \textbf{Heterogeneous treatment effects}. We employ a causal forest methodology, as in \citet{athey2019estimating}, to predict conditional average treatment effects for each respondent. This approach allows us to construct unbiased partitions of the data ex ante from which valid conditional average treatment effects (CATEs) may be estimated. To improve precision, we first train a pilot random forest on all baseline characteristics included in the OLS regression to identify the relative importance of each variable to the model. We then train a second forest using only the subset of covariates which score above mean importance, to eliminate possible confounding effects \citep{basu2018iterative}.
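The two-step procedure just described can be sketched as follows. This is an illustrative Python snippet rather than the implementation used in the paper (which follows \citet{athey2019estimating}); it substitutes EconML's \texttt{CausalForestDML} for the original \texttt{grf} package, and the arrays \texttt{X}, \texttt{t} and \texttt{y} (covariates, treatment indicator, outcome) are assumed given.
\begin{verbatim}
# Sketch: pilot forest for variable importance, then a causal forest
# trained on the above-mean-importance covariates only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from econml.dml import CausalForestDML

pilot = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
keep = pilot.feature_importances_ > pilot.feature_importances_.mean()

cf = CausalForestDML(discrete_treatment=True, random_state=0)
cf.fit(y, t, X=X[:, keep])
cate = cf.effect(X[:, keep])  # per-respondent CATE estimates

# Quartile breakout by predicted treatment effect (used below).
quartile = np.digitize(cate, np.quantile(cate, [0.25, 0.5, 0.75]))
\end{verbatim}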
We run tests to detect any heterogeneity in our primary outcomes of interest: (1) correctly identifying state and national COVID-19 death rates and (2) evaluating these rates as a success. Additionally, we test for heterogeneity in sustained informational effects, measured through a question in the next wave evaluating whether respondents correctly identify the US death rate relative to other countries. Should our causal forest identify treatment heterogeneity in a given outcome, we then explore which characteristics may be correlated with higher estimated treatment response. We begin by testing whether the causal forest detects treatment heterogeneity through an ensemble approach. While there is not a clear consensus on causal forest validation, one approach suggested by \citet{athey2019estimating} is the use of a ``best linear predictor'' method, which fits the conditional average treatment effects (CATE) as a linear function of out-of-bag causal forest estimates. This approach provides strong evidence of a useful association between causal forest estimates and heterogeneity in treatment effect for outcome (1) (correctly estimating state and national COVID death rates) but not for the others; this is consistent with the non-significance of our OLS estimate for outcome (2), but suggests we are not powered to detect variation in sustained informational effects. We also employ a series of tests suggested by \citet{davis2020rethinking} to verify that out-of-bag predictions and actual treatment effects are related, and find that the results for outcome (1) are consistent with our calibration test (see Appendix Table \ref{tab:bestlinearfit}). Together, these tests suggest there is a meaningful source of heterogeneity in treatment effectiveness that is worthy of further examination. We use a quartile breakout by predicted treatment effects for our primary outcome of interest (i.e., correct estimation of state and national US death rates). Appendix Table \ref{tab:causalforestquartiles} provides summary statistics by quartile for our baseline characteristics, as well as the mean CATE prediction. Results show that direct shocks are not correlated with higher treatment efficacy. Further, directly affected individuals do not have a higher average correct reporting rate, suggesting this is not due to already answering the question correctly regardless of treatment. Another key finding is that higher levels of educational attainment positively mediate informational treatment efficacy. Specifically, a subset of respondents with a bachelor's degree or higher displays significantly higher treatment effects, representing over 60\% of the highest quartile. In contrast, respondents with a high school education see steadily diminishing representation in each subsequent quartile. This suggests that some highly educated respondents may be particularly responsive to informational treatments. Lastly, Democratic respondents who consumed more Democratic-leaning news were also more responsive to the treatment compared to other political sub-groups, suggesting that responsiveness to a certain source of information, in this case the CDC, might be correlated with political ideology. \section{Robustness checks} \label{robust} \textbf{Alternative measures of shocks}. In the results presented in the main section of the paper, we considered as a direct economic shock whether respondents lost at least 20\% of their household income between any two months between the baseline survey wave and the last survey wave.
We replicate the same model specifications using two different definitions of direct economic shock: (a) whether respondents lost at least 20\% of their household income between February and October - that is, they incurred a more permanent loss in income, thus excluding those who eventually recovered from their loss by our last wave - and (b) the percentage decrease in income between the baseline and the outcome month, to account for possible differences in the magnitude of the shock. The two measures are respectively: $$shock_2 = \begin{cases} 1, & \mbox{if } \frac{income_{final}-income_{baseline}}{income_{baseline}} \leq -0.20 \\ 0, & \mbox{otherwise} \end{cases} $$ and $$shock_3 = \begin{cases} \frac{income_{final}-income_{baseline}}{income_{baseline}}, & \mbox{if } \frac{income_{final}-income_{baseline}}{income_{baseline}} < 0 \\ 0, & \mbox{otherwise} \end{cases} $$ \\ Among respondents in our sample who participated in the first and the last survey waves (i.e. \textit{n}=1,076), about 27\% permanently lost at least 20\% of their income between February and October, compared to 38\% who lost it between any two months but potentially recovered. When looking at the continuous measure of shock, we find that between February and October about 4\% of respondents reported having lost all of their household income, while about 17\% lost up to half of their household income. In tables \ref{tab:policy_inc2}, \ref{tab:policy_inc3} and \ref{tab:covid_policy_inc23} in the Appendix, we report the results of the regressions on policy preferences and trust in institutions using these two alternative measures of shocks. The magnitudes and the coefficient signs are consistent with our main specification: direct income shocks increased support for most government interventions, with the exception of providing mental healthcare and universal healthcare, whose associated coefficients are not significant, and of helping the industry grow. Support for the latter significantly decreased among respondents who incurred a shock, regardless of how it is measured, in line with our main results. As for temporary relief policies, we see even stronger support, both in terms of outcomes and significance, among respondents who incurred an income shock and had not recovered by October, suggesting that support for welfare policies increased with the severity of a person's income loss. We also report results related to institutional trust in tables \ref{tab:trust_inc2} and \ref{tab:trust_inc3}. Also in this case, the coefficients are consistent with our main specification: incurring an economic shock is associated with an increase in the likelihood of having lost confidence in institutions, particularly in the U.S. Congress and Senate and in the private sector. \textbf{Multiple hypothesis testing}. In our analyses we consider multiple sets of outcomes. For a given $\alpha$ level of tolerance for type I error, as the number of tested outcomes increases, so does the probability of rejecting at least one null hypothesis by chance. To take this into account, we adjust the p-values of the shocks controlling for the False Discovery Rate (FDR). Given a set of $H$ hypotheses, if $F$ is the number of false rejections and $C$ the number of correct rejections, then $T = F + C$ is the total number of rejections. The FDR is the expected proportion of all rejections that are type I errors, or $E[\frac{F}{T}]$.
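As an illustration, such an adjustment can be computed with standard tools. The sketch below uses the Benjamini-Hochberg procedure from \texttt{statsmodels} as a simpler stand-in for the sharpened q-value algorithm of \cite{anderson2008multiple} applied in the paper, and the p-values are made up.
\begin{verbatim}
# Sketch of an FDR correction over a family of hypothesis tests.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.012, 0.030, 0.041, 0.200]  # hypothetical shock p-values
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for p, q, r in zip(pvals, qvals, reject):
    print(f"p = {p:.3f}   q = {q:.3f}   reject = {r}")
\end{verbatim}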
In the Online Appendix, we report the p-values associated with direct economic and health shocks, corrected for the FDR following the algorithm proposed by \cite{anderson2008multiple}. From these tests, we see that most of our results hold and remain at least marginally significant. \textbf{Alternative measures of outcomes and regression models}. In line with \citet{giuliano2014}, we focused on analyzing an increase in support for policies and government interventions, and a decrease in institutional trust. However, we also considered the opposite direction - that is, a decrease in support for welfare and an increase in institutional trust. We report these results in an Online Appendix and show that they are in line with those presented above: Democrats are significantly less likely to have decreased their support for most government interventions, while Republicans are more likely to have done so, and a biased media diet further reinforces this trend. On the other hand, Democrats are less likely to have increased their trust in President Trump and in the U.S. Congress and Senate, but have significantly increased their confidence in the people running the scientific community, whereas the opposite is true for respondents supporting the Republican party. \par Lastly, since most of our outcomes are binary variables, for completeness we also show in the Online Appendix that our results hold when using a logistic regression instead of OLS. \textbf{Average effect sizes.} Another robustness check we perform is testing whether our results hold when considered as a bundle, which allows us to make more general claims. To do so, we replicate the analyses using Average Effect Sizes (AES), as in \cite{kling2004moving, clingingsmith2009estimating, giuliano2014, heller2017thinking}. To perform such an analysis, one needs to make several assumptions about the nature of the outcomes being studied, since an AES estimation requires stacking multiple outcomes. As we have seen in the main specifications of our results, support for policies and trust in institutions change in different directions according to a person's party, and depending on the nature of the shock they incurred. As such, this requires grouping the dependent variables into sub-groups using a more subjective judgment. In the Appendix, we propose one plausible stacking approach and show that the results remain qualitatively similar to those presented in the previous sections. We group the variables according to the type of institutions or policies considered. When analyzing policies, we separate questions related to whether it is a government responsibility to provide a set of services from those concerning coronavirus relief. Within the former, we further split the variables into two groups: one considering traditional macroeconomic policies (keeping prices under control and helping industry grow), and one focused on welfare issues (reducing inequality, providing for the unemployed, helping university students from a disadvantaged background, providing a basic income, universal healthcare, and mental health care services to people with mental illnesses, providing for the elderly, and helping those affected by natural disasters). For what concerns institutional trust, we separate between government-related institutions (the U.S.
Congress and Senate and the White House), science-related ones (the scientific community, hospitals and health care professionals, and health insurance companies), and ones related to the economy (banks and financial institutions, and the private sector). Again, we see that our results remain qualitatively identical to the main specifications presented in the body of the paper. \textbf{Entropy weights}. The COVID-19 pandemic affected communities and citizens differently, also depending on their income levels. As such, some shocks, such as incurring an income loss, are correlated with several demographic characteristics, including income, gender and race. Even though we consider variations at the individual level, which reduces concerns related to endogeneity, we cannot entirely exclude that those who were affected by a shock were systematically different from those who were not, and that their preferences and opinions would have varied in a different way. In order to minimize this potential source of endogeneity, we repeat our analyses with entropy balancing weights. The entropy balancing technique re-weights the observations in order to reduce the differences with respect to a set of balance conditions across treated and control units (in our case, those who incurred a shock vs those who did not)\footnote{See \cite{hainmueller2013ebalance} for the Stata package and \cite{hainmueller2012entropy} for the theory behind this approach. We opt for applying entropy balancing weights, instead of performing any matching technique, in order to avoid excluding any observation.}. The balancing procedure still takes into account the population weights, so the resulting weights continue to reflect the whole population. In the Online Appendix, we report the regression results using entropy balancing weights. The coefficients do not vary substantially in magnitude or sign, suggesting that endogeneity is not of particular concern in the interpretation of our results. \textbf{Voting intentions}. The COVID-19 crisis occurred at a time of great political polarization in the U.S., also due to the Presidential elections. The months just before the elections of November 2020 saw greater division among the public, with some voters identifying not with one of the two main parties but rather with the Presidential nominees. To account for different political identity effects, we replicate our analysis considering voting intentions, which we collected from our respondents in the middle of May. Results are presented in the Online Appendix. Again, the sign and the magnitude of the coefficients associated with the political parties are consistent across specifications. The only marginal differences we note are that Trump voters are significantly less likely to have increased their belief that it is a government responsibility to provide for the unemployed, to provide a basic income or to reduce inequality, while Republicans, in general, were not. Moreover, Biden voters, unlike Democrats, have not significantly increased their support for coronavirus-related policies or for other government interventions. Overall, such differences are minor and the coefficient signs are consistent with our main specifications. \textbf{Fixed effects}. We also perform analyses similar to those presented above, but considering a model with longitudinal data and controlling for fixed effects at the individual level.
$$ y_{ict} = \alpha_i + wave_t + shock_{it} + shock_{ct} + \epsilon_{ict} $$ with $y_{ict}$ being one outcome of interest for individual $i$, in county $c$, at time $t$; $shock_{it}$ and $shock_{ct}$ being a shock for individual $i$ or county $c$ at time $t$; $\alpha_i$ the individual fixed effects, and $wave_t$ the survey wave. Variables referring to direct shocks are dummy variables flagging whether the respondent incurred a shock at any time preceding the current wave, so if the event occurred in a certain month, the shock variable will be equal to one for all subsequent observations. In this way, we track the impact of having had an income loss or knowing someone hospitalized at least once in our time frame, similarly to what is measured in the regressions in differences. Since the individual effects absorb all time-invariant variables, we cannot assess from the main specification whether respondents' political views affected their opinions and preferences over time. Thus, we repeat the same analysis by subgroups, considering a sample of Republicans and one of Democrats. The results of the analysis concerning institutional trust are presented in the Online Appendix. Again, we can see that the results do not change in any dramatic way. The fixed effects model allows us to assess how support for government interventions and institutional trust have varied over time. Since the beginning of April, respondents have decreased their belief that the government should keep prices under control, and this seems to be driven by the Republicans; we observe a similar pattern for two other welfare policies: support for the unemployed and for the elderly. For what concerns trust, the Democrats increased their confidence in the U.S. Senate and Congress between the first and the last week of April, but by mid-May the level of trust had dropped back to baseline levels. On the contrary, confidence in President Trump dropped significantly both in May and October, and the coefficients remain negative for both sub-samples of Democrats and Republicans, although they are not significant for the latter. Trust in financial institutions and in the private sector oscillated over time, while confidence in scientific institutions dropped over time across all parties, reaching its lowest point in June. \section{Conclusions} Large scale crises can lead to significant and persistent changes in citizens' beliefs and preferences. Previous studies suggested that such changes occur slowly, over long periods of exposure to a crisis or regime change. Using a longitudinal multi-wave survey, we show that such changes can actually occur very rapidly. We also find that most of the changes in preferences for policies and institutional trust occur after a direct negative experience with the crisis, rather than from mere exposure per se. A direct economic or health shock during a crisis increases citizens' preferences for greater government spending, in particular on welfare assistance, and decreases trust in institutions. Changes in the political polarization on the same set of policies and institutions can instead be largely explained by whether a person consumes mostly news sources that are aligned with their political views. We show that the main channel through which news sources influence polarization is by creating misperceptions about the gravity of the crisis. Throughout the crisis, Democrats were more likely to overestimate the COVID-19 death rate while Republicans were more likely to underestimate it.
In a non-incentivized light-touch experiment, we find that exposing respondents to the same official government information source reduces the partisan gap in misinformation, with this effect persisting for several months, potentially counteracting media-led biases. Our results contribute to a growing literature on how crises transform societies, pointing to the importance of tracking preferences frequently and disentangling the mechanisms behind such changes. \newpage \subsection*{GMO} \begin{table}[H] \centering \caption{The effect of the information treatment on the belief that GMOs are safe.} \label{tab:gmo_exp} \begin{tabular}{lcccc}\hline & 1 & 2 & 3 & 4 \\ VARIABLES & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} \\\hline & & & & \\ Treatment match & -0.0660 & -0.0415 & -0.0125 & -0.0688 \\ & (0.213) & (0.196) & (0.220) & (0.254) \\ Treatment mismatch & 0.0478 & 0.0457 & 0.208 & 0.289 \\ & (0.193) & (0.184) & (0.194) & (0.227) \\ Treatment match party*Rep & 0.579** & 0.498** & 0.516* & 0.670** \\ & (0.234) & (0.220) & (0.291) & (0.282) \\ Treatment mismatch party*Rep & 0.229 & 0.177 & 0.335 & -0.0426 \\ & (0.284) & (0.286) & (0.300) & (0.264) \\ Republican & -0.370*** & -0.260** & -0.253** & -0.262** \\ & (0.124) & (0.1000) & (0.101) & (0.101) \\ Democrat & 0.184 & 0.188 & 0.185 & 0.178 \\ & (0.165) & (0.167) & (0.174) & (0.164) \\ Republican leaning news & & 0.0911 & 0.174 & 0.113 \\ & & (0.0965) & (0.106) & (0.0906) \\ Democratic leaning news & & 0.140 & 0.245* & 0.153 \\ & & (0.106) & (0.137) & (0.102) \\ Confid. sci. com. in w1 & & 0.407*** & 0.416*** & 0.464*** \\ & & (0.0834) & (0.0840) & (0.101) \\ Confid. fed. gov. in w1 & & 0.0148 & 0.00546 & 0.00609 \\ & & (0.157) & (0.153) & (0.160) \\ Treatment match party*Biased news & & & -0.0708 & \\ & & & (0.247) & \\ Treatment mismatch party*Biased news & & & -0.358 & \\ & & & (0.282) & \\ Treatment match party*Biased news*Rep & & & -0.0348 & \\ & & & (0.318) & \\ Treatment mismatch party*Biased news*Rep & & & -0.156 & \\ & & & (0.462) & \\ Treatment match party*Confid. sci. com. & & & & 0.0491 \\ & & & & (0.307) \\ Treatment mismatch party*Confid. sci. com. & & & & -0.361 \\ & & & & (0.281) \\ Treatment match party*Confid. sci. com.*Rep & & & & -0.317 \\ & & & & (0.356) \\ Treatment mismatch party*Confid. sci.
com.*Rep & & & & 0.303 \\ & & & & (0.343) \\ Constant & 3.256*** & 3.118*** & 3.051*** & 3.078*** \\ & (0.271) & (0.300) & (0.308) & (0.307) \\ & & & & \\ \hline Demographic controls & Yes & Yes & Yes & Yes \\ Controls for network and information & No & Yes & Yes & Yes \\ Observations & 1,089 & 1,089 & 1,089 & 1,089 \\ Standard errors in parentheses; *** p$<$0.01, ** p$<$0.05, * p$<$0.1 \\ \hline \end{tabular} \end{table} \subsection*{Mask experiment} \begin{table}[H] \centering \caption{The effect of the information treatment on support for mandatory masks.} \label{tab:mask_exp} \resizebox{\textwidth}{!}{% \begin{tabular}{lcccc} \hline & 1 & 2 & 3 & 4 \\ VARIABLES & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} \\ \hline & & & & \\ Treatment match & 0.177 & 0.279 & 0.0582 & 0.448 \\ & (0.394) & (0.382) & (0.450) & (0.439) \\ Treatment mismatch & -0.371 & -0.284 & -0.308 & -0.111 \\ & (0.380) & (0.357) & (0.394) & (0.574) \\ Treatment match party*Rep & -0.126 & -0.283 & -0.0877 & 0.279 \\ & (0.530) & (0.570) & (0.729) & (0.630) \\ Treatment mismatch party*Rep & -0.0619 & -0.0833 & 0.496 & -0.0168 \\ & (0.594) & (0.510) & (0.583) & (0.719) \\ Republican & -0.716** & -0.402 & -0.426 & -0.375 \\ & (0.286) & (0.321) & (0.318) & (0.316) \\ Democrat & 1.665*** & 1.119*** & 1.134*** & 1.121*** \\ & (0.312) & (0.346) & (0.339) & (0.351) \\ Direct financial shock w4 & 0.256 & 0.269 & 0.279 & 0.284 \\ & (0.206) & (0.194) & (0.194) & (0.195) \\ Knows someone hospitalized for COVID-19 & -0.0609 & -0.200 & -0.200 & -0.245 \\ & (0.220) & (0.197) & (0.198) & (0.190) \\ log county deaths/100,000 by w5 & & 0.147** & 0.145** & 0.140** \\ & & (0.0593) & (0.0590) & (0.0595) \\ Family/friends have health pre-condition & & 0.00288 & 0.00110 & 0.0218 \\ & & (0.170) & (0.166) & (0.168) \\ Republican leaning news & & -0.532*** & -0.469** & -0.506** \\ & & (0.204) & (0.228) & (0.204) \\ Democratic leaning news & & -0.0795 & -0.116 & -0.0931 \\ & & (0.256) & (0.275) & (0.255) \\ Confid. in sci. comm in w1 & & 1.165*** & 1.170*** & 1.378*** \\ & & (0.175) & (0.168) & (0.221) \\ Confid. in fed. gov. in w1 & & -0.0597 & -0.0518 & -0.108 \\ & & (0.297) & (0.301) & (0.299) \\ Treatment match party*Biased news & & & 0.499 & \\ & & & (0.583) & \\ Treatment mismatch party*Biased news & & & 0.0462 & \\ & & & (0.550) & \\ Treatment match party*Biased news*Rep & & & -0.432 & \\ & & & (0.928) & \\ Treatment mismatch party*Biased news*Rep & & & -1.035 & \\ & & & (0.749) & \\ Treatment match party*Confid. sci. com. & & & & -0.265 \\ & & & & (0.461) \\ Treatment mismatch party*Confid. sci. com. & & & & -0.274 \\ & & & & (0.540) \\ Treatment match party*Confid. sci. com.*Rep & & & & -1.124* \\ & & & & (0.659) \\ Treatment mismatch party*Confid. sci.
com.*Rep & & & & -0.193 \\ & & & & (0.677) \\ Constant & 3.918*** & 2.416 & 2.551* & 2.326 \\ & (0.765) & (1.510) & (1.483) & (1.511) \\ \hline & & & & \\ Demographic controls & Yes & Yes & Yes & Yes \\ Controls for network and information & No & Yes & Yes & Yes \\ Observations & 1,198 & 1,197 & 1,197 & 1,197 \\ Standard errors in parentheses; *** p$<$0.01, ** p$<$0.05, * p$<$0.1 \\ \hline \end{tabular}% } \end{table} \subsection*{GMO \& mask} \begin{table}[H] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{lcccc} \hline & (1) & (2) & (3) & (4) \\ VARIABLES & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} \\ \hline & & & & \\ Treatment match & -0.0401 & -0.0415 & 0.0337 & 0.279 \\ & (0.234) & (0.196) & (0.433) & (0.382) \\ Treatment mismatch & 0.109 & 0.0457 & -0.351 & -0.284 \\ & (0.217) & (0.184) & (0.407) & (0.357) \\ Treatment match party*Rep & 0.710** & 0.498** & 0.0181 & -0.283 \\ & (0.284) & (0.220) & (0.621) & (0.570) \\ Treatment mismatch party*Rep & 0.206 & 0.177 & -0.0849 & -0.0833 \\ & (0.300) & (0.286) & (0.644) & (0.510) \\ Republican & -0.473*** & -0.260** & -0.671** & -0.402 \\ & (0.119) & (0.1000) & (0.301) & (0.321) \\ Democrat & 0.0699 & 0.188 & 1.957*** & 1.119*** \\ & (0.172) & (0.167) & (0.368) & (0.346) \\ Direct financial shock w4 & & & & 0.269 \\ & & & & (0.194) \\ log county deaths/100,000 by w5 & & & & 0.147** \\ & & & & (0.0593) \\ Knows hospitalized for COVID-19 & & & & -0.200 \\ & & & & (0.197) \\ Family/friends health pre-condition & & & & 0.00288 \\ & & & & (0.170) \\ Republican leaning news & & 0.0911 & & -0.532*** \\ & & (0.0965) & & (0.204) \\ Democratic leaning news & & 0.140 & & -0.0795 \\ & & (0.106) & & (0.256) \\ Confid. sci. com. in w1 & & 0.407*** & & 1.165*** \\ & & (0.0834) & & (0.175) \\ Confid. fed. gov.
in w1 & & 0.0148 & & -0.0597 \\ & & (0.157) & & (0.297) \\ Constant & 2.836*** & 3.118*** & 4.002*** & 2.416 \\ & (0.0608) & (0.300) & (0.109) & (1.510) \\ \hline Controls & No & Yes & No & Yes \\ Observations & 1,089 & 1,089 & 1,198 & 1,197\\ \hline \multicolumn{5}{l}{Standard errors in parentheses; *** p$<$0.01, ** p$<$0.05, * p$<$0.1 }\\ \end{tabular}% } \end{table} \subsection*{GMO \& mask - interactions with biased news and trust in the scientific community} \begin{table}[H] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{lcccc} \hline & 1 & 2 & 3 & 4 \\ VARIABLES & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}GMOs\\ are safe\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} & \begin{tabular}[c]{@{}c@{}}Supports\\ mandatory mask\end{tabular} \\ \hline & & & & \\ Treatment match & -0.0125 & -0.0688 & 0.0582 & 0.448 \\ & (0.220) & (0.254) & (0.450) & (0.439) \\ Treatment mismatch & 0.208 & 0.289 & -0.308 & -0.111 \\ & (0.194) & (0.227) & (0.394) & (0.574) \\ Treatment match party*Rep & 0.516* & 0.670** & -0.0877 & 0.279 \\ & (0.291) & (0.282) & (0.729) & (0.630) \\ Treatment mismatch party*Rep & 0.335 & -0.0426 & 0.496 & -0.0168 \\ & (0.300) & (0.264) & (0.583) & (0.719) \\ Republican & -0.253** & -0.262** & -0.426 & -0.375 \\ & (0.101) & (0.101) & (0.318) & (0.316) \\ Democrat & 0.185 & 0.178 & 1.134*** & 1.121*** \\ & (0.174) & (0.164) & (0.339) & (0.351) \\ Direct financial shock w4 & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & 0.279 & 0.284 \\ & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & (0.194) & (0.195) \\ Knows someone hospitalized for COVID-19 & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & -0.200 & -0.245 \\ & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & (0.198) & (0.190) \\ log county deaths/100,000 by w5 & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & 0.145** & 0.140** \\ & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & (0.0590) & (0.0595) \\ Family/friends have health pre-condition & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & 0.00110 & 0.0218 \\ & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & (0.166) & (0.168) \\ Republican leaning news & 0.174 & 0.113 & -0.469** & -0.506** \\ & (0.106) & (0.0906) & (0.228) & (0.204) \\ Democratic leaning news & 0.245* & 0.153 & -0.116 & -0.0931 \\ & (0.137) & (0.102) & (0.275) & (0.255) \\ Confid. in sci. comm in w1 & 0.416*** & 0.464*** & 1.170*** & 1.378*** \\ & (0.0840) & (0.101) & (0.168) & (0.221) \\ Confid. in fed. gov. in w1 & 0.00546 & 0.00609 & -0.0518 & -0.108 \\ & (0.153) & (0.160) & (0.301) & (0.299) \\ Treatment match party*Biased news & -0.0708 & & 0.499 & \\ & (0.247) & & (0.583) & \\ Treatment mismatch party*Biased news & -0.358 & & 0.0462 & \\ & (0.282) & & (0.550) & \\ Treatment match party*Biased news*Rep & -0.0348 & & -0.432 & \\ & (0.318) & & (0.928) & \\ Treatment mismatch party*Biased news*Rep & -0.156 & & -1.035 & \\ & (0.462) & & (0.749) & \\ Treatment match party*Confid. sci. com. & & 0.0491 & & -0.265 \\ & & (0.307) & & (0.461) \\ Treatment mismatch party*Confid. sci. com. & & -0.361 & & -0.274 \\ & & (0.281) & & (0.540) \\ Treatment match party*Confid. sci. com.*Rep & & -0.317 & & -1.124* \\ & & (0.356) & & (0.659) \\ Treatment mismatch party*Confid. sci.
com.*Rep & & 0.303 & & -0.193 \\ & & (0.343) & & (0.677) \\ Constant & 3.051*** & 3.078*** & 2.551* & 2.326 \\ & (0.308) & (0.307) & (1.483) & (1.511) \\ \hline & & & & \\ Controls & Yes & Yes & Yes & Yes \\ Observations & 1,089 & 1,089 & 1,197 & 1,197 \\\hline \multicolumn{5}{l}{Standard errors in parentheses; *** p$<$0.01, ** p$<$0.05, * p$<$0.1 }\\ \end{tabular}% } \end{table} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In the constituent quark model (CQM), hadrons are described as a system of constituent (or valence) quarks and antiquarks, $qqq$ for baryons and $q \bar{q}$ for mesons. Despite the success of the quark model, there is strong evidence for the existence of exotic degrees of freedom (other than valence quarks) in hadrons from CQM studies of electromagnetic and strong couplings of baryons, which are, on average, underpredicted by CQMs \cite{CRreview}. More direct evidence for the importance of quark-antiquark components in the proton comes from measurements of the $\bar{d}/\bar{u}$ asymmetry in the nucleon sea \cite{Kumano,GarveyPeng} and the proton spin crisis \cite{protonspin}. The pion cloud in the nucleon holds the key to understanding the flavor asymmetry and the spin crisis of the proton \cite{Kumano,GarveyPeng,Speth}. Moreover, angular momentum conservation of the pionic fluctuations of the nucleon leads to a relation between the flavor asymmetry and the contribution of orbital angular momentum to the spin of the proton, ${\cal A}(p) = \Delta L$ \cite{Garvey}. The aim of this contribution is to discuss the role of valence and sea quarks in the nucleon in the framework of a simplified version of the unquenched quark model (UQM) in which only the effects of the pion cloud are taken into account. It is shown that the pion cloud offers a qualitative understanding of the results obtained in previous numerical studies \cite{uqm}, and thus provides important insights into the properties of the nucleon. \section{Unquenched quark model} The unquenched quark model developed in \cite{uqm} is motivated by earlier studies on extensions of the quark model in which the $q\bar{q}$ pairs are created in the $^{3}P_0$ state with the quantum numbers of the vacuum \cite{tornqvist,baryons}. The present approach is based on a CQM to which the quark-antiquark pairs are added as a perturbation, employing a $^{3}P_0$ model for the $q \bar{q}$ pair creation. The pair-creation mechanism is inserted at the quark level and the one-loop diagrams are calculated by summing over the intermediate baryon-meson states (see Fig.~\ref{diagram}). \begin{figure}[h] \centering \setlength{\unitlength}{1pt} \begin{picture}(145,130)(0,-25) \put( 20, 20) {\line(1, 0){50}} \put( 20, 40) {\line(1, 0){50}} \put( 20, 60) {\line(1, 0){50}} \put( 70, 60) {\line(2, 1){50}} \put( 70, 40) {\line(2,-1){50}} \put( 70, 20) {\line(2,-1){50}} \put( 90, 50) {\line(2, 1){30}} \put( 90, 50) {\line(2,-1){30}} \put( 20, 20) {\vector(1, 0){25}} \put( 20, 40) {\vector(1, 0){25}} \put( 20, 60) {\vector(1, 0){25}} \put( 70, 60) {\vector(2, 1){40}} \put( 70, 40) {\vector(2,-1){40}} \put( 70, 20) {\vector(2,-1){40}} \put(120, 65) {\vector(-2,-1){15}} \put( 90, 50) {\vector(2,-1){20}} \put( 5, 17) {$q_1$} \put( 5, 37) {$q_2$} \put( 5, 57) {$q_3$} \put(125, -8) {$q_1$} \put(125, 12) {$q_2$} \put(125, 32) {$q$} \put(125, 62) {$\bar{q}$} \put(125, 82) {$q_3$} \end{picture} \caption{\small Schematic quark line diagram for $A \rightarrow B C$.} \label{diagram} \end{figure} Under these assumptions, the baryon wave function consists of a zeroth order three-quark configuration $\mid A \rangle$ plus a sum over higher Fock components due to the creation of quark-antiquark pairs.
The resulting baryon wave function is given by \cite{uqm} \begin{eqnarray} \left| \psi_A \right> = {\cal N} \left[ \left| A \right> + \sum_{BC l J} \int d \vec{K} k^2 dk \, \left| BC,l,J; \vec{K},k \right> \, \frac{ \left< BC,l,J; \vec{K},k \left| T^{\dagger} \right| A \right> } {\Delta E_{BC}(k)} \right] ~, \label{wf1} \end{eqnarray} where $\Delta E_{BC}(k) = M_A - E_B(k) - E_C(k)$ is the energy difference calculated in the rest frame of the initial baryon $A$. The operator $T^{\dagger}$ creates a quark-antiquark pair in the $^{3}P_0$ state with the quantum numbers of the vacuum: $L=S=1$ and $J=0$. The $^{3}P_{0}$ transition amplitude can be expressed as \cite{roberts} \begin{eqnarray} \langle BC,l,J; \vec{K},k | T^{\dagger} | A \rangle = \delta(\vec{K}) \, M_{A \rightarrow BC}(k) \end{eqnarray} where $\delta(\vec{K})$ is a consequence of momentum conservation in the rest frame of A. \subsection{Flavor and spin content} In this contribution, we employ a simplified version of the UQM in which only the contribution of the pion cloud is taken into account. Table~\ref{spinflavor} shows the results for the flavor and spin content of the proton. In the UQM, the coefficients appearing in Table~\ref{spinflavor} are expressed in terms of integrals over the relative momentum $k$ \begin{eqnarray} a^2 &\rightarrow& \int k^2 dk \frac{|M_{N \rightarrow N \pi}(k)|^2}{\Delta E_{N \pi}^2(k)} ~, \nonumber\\ b^2 &\rightarrow& \int k^2 dk \frac{|M_{N \rightarrow \Delta \pi}(k)|^2}{\Delta E_{\Delta \pi}^2(k)} ~, \nonumber\\ 2ab &\rightarrow& \int k^2 dk \frac{M^{\ast}_{N \rightarrow N \pi}(k) M_{N \rightarrow \Delta \pi}(k) + M^{\ast}_{N \rightarrow \Delta \pi}(k) M_{N \rightarrow N \pi}(k)} {\Delta E_{N \pi}(k) \Delta E_{\Delta \pi}(k)} ~, \end{eqnarray} which only depend on the $^{3}P_{0}$ coupling strength. The results for the UQM in Table~\ref{spinflavor} are also valid for the meson-cloud model in which the coefficients $a$ and $b$ multiply the $N \pi$ and $\Delta \pi$ components of the nucleon wave function \cite{Garvey}. The $ab$ term denotes the contribution from the cross terms between the $N \pi$ and $\Delta \pi$ components. In the UQM, the value of the cross term $ab$ is not equal to the product of $a$ and $b$, although the numerical values are close. Since the UQM contains the full spin and isospin structure, it satisfies the relation between the flavor asymmetry and the contribution of the orbital angular momentum to the spin of the proton ${\cal A}(p)=\Delta L$ \cite{Garvey}, and therefore $\Delta \Sigma = 1-2\Delta L$. In the absence of the pion cloud ($a^2=b^2=2ab=0$) we recover the results of the CQM. 
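As a numerical illustration, the closed-form expressions of Table~\ref{spinflavor} can be evaluated for given values of the pion-cloud coefficients. The snippet below is a sketch in Python; the input values of $a^2$, $b^2$ and $ab$ are purely illustrative and do not correspond to a fit, but the identity $\Delta \Sigma = 1 - 2\Delta L$ holds for any input.
\begin{verbatim}
# Sketch: evaluate the UQM expressions of Table 1 for given
# pion-cloud coefficients a2 = a^2, b2 = b^2 and ab (illustrative).
import math

def uqm_spin_flavor(a2, b2, ab):
    N  = 1.0 + a2 + b2                    # wave-function normalization
    r2 = 16.0 * ab * math.sqrt(2.0)
    A  = (2*a2 - b2) / (3*N)              # flavor asymmetry = Delta L
    du = 4.0/3.0 - (38*a2 + b2 - r2) / (27*N)
    dd = -1.0/3.0 + (2*a2 + 19*b2 - r2) / (27*N)
    return A, du, dd

A, du, dd = uqm_spin_flavor(a2=0.30, b2=0.15, ab=0.21)  # made-up inputs
assert abs((du + dd) - (1 - 2*A)) < 1e-12  # Delta Sigma = 1 - 2 Delta L
print(A, du, dd, du - dd)                  # the last entry is g_A
\end{verbatim}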
\begin{table} \centering \caption{\small Spin and flavor content of the proton in the constituent quark model (CQM) and the unquenched quark model (UQM), normalized to the flavor asymmetry using the E866/NuSea value \cite{Towell} (UQM1) and using the NMC value \cite{NMC} (UQM2).} \label{spinflavor} \vspace{15pt} \begin{tabular}{cccrrc} \noalign{\smallskip} \hline \noalign{\smallskip} & CQM & UQM & UQM1 & UQM2 & Exp \\ \noalign{\smallskip} \hline \noalign{\smallskip} ${\cal A}(p)=\Delta L$ & $0$ & $\frac{2a^2-b^2}{3(1+a^2+b^2)}$ & $*0.118$ & $*0.158$ & $ 0.118 \pm 0.012$ \\ &&&&& $0.158 \pm 0.010$ \\ \noalign{\smallskip} $\Delta u$ & $ \frac{4}{3}$ & $ \frac{4}{3}-\frac{38a^2+b^2-16ab\sqrt{2}}{27(1+a^2+b^2)}$ & $ 1.132$ & $ 1.064$ & $ 0.842 \pm 0.013$ \\ \noalign{\smallskip} $\Delta d$ & $-\frac{1}{3}$ & $-\frac{1}{3}+\frac{2a^2+19b^2-16ab\sqrt{2}}{27(1+a^2+b^2)}$ & $-0.368$ & $-0.380$ & $-0.427 \pm 0.013$ \\ \noalign{\smallskip} $\Delta s$ & $0$ & $ 0$ & $ 0.000$ & $ 0.000$ & $-0.085 \pm 0.018$ \\ \noalign{\smallskip} $\Delta \Sigma = \Delta u + \Delta d + \Delta s$ & $1$ & $1-\frac{4a^2-2b^2}{3(1+a^2+b^2)}$ & $ 0.764$ & $ 0.684$ & $ 0.330 \pm 0.039$ \\ \noalign{\smallskip} $g_A = \Delta u - \Delta d$ & $\frac{5}{3}$ & $\frac{5}{3}-\frac{40a^2+20b^2-32ab\sqrt{2}}{27(1+a^2+b^2)}$ & $ 1.500$ & $ 1.444$ & $1.2723 \pm 0.0023$ \\ \noalign{\smallskip} \hline \end{tabular} \end{table} The results for the spin and flavor content of the proton are normalized to the proton flavor asymmetry. The fourth column is normalized to the E866/NuSea value \cite{Towell}, and the fifth column to the somewhat higher NMC value \cite{NMC}. The experimental values of the spin content were obtained by the HERMES \cite{Hermes} and the COMPASS \cite{Compass} Collaborations. Table~\ref{spinflavor} shows the results from the HERMES Collaboration. \subsection{Magnetic moments} The magnetic moments of the octet baryons constitute one of the early successes of the constituent quark model. Hence, for any extension of the quark model, it is important to verify whether the good agreement of the CQM is maintained. Table~\ref{moments} shows that this is indeed the case for the unquenched quark model. Just as for the CQM, the quark magnetic moments are fitted to the magnetic moments of the proton, neutron and $\Lambda$ hyperon. \begin{table}[ht] \centering \caption{\small Magnetic moments. The experimental values are taken from Ref.~\cite{PDG}.} \label{moments} \vspace{15pt} \begin{tabular}{crrrc} \noalign{\smallskip} \hline \noalign{\smallskip} & CQM & UQM1 & UQM2 & Exp \\ \noalign{\smallskip} \hline \noalign{\smallskip} $p$ & $ 2.793$ & $ 2.793$ & $ 2.793$ & $ 2.793$ \\ $n$ & $-1.913$ & $-1.913$ & $-1.913$ & $-1.913$ \\ $\Lambda$ & $-0.613$ & $-0.613$ & $-0.613$ & $-0.613 \pm 0.004$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\Sigma^+$ & $ 2.673$ & $ 2.589$ & $ 2.509$ & $ 2.458 \pm 0.010$ \\ $\Sigma^0$ & $ 0.791$ & $ 0.783$ & $ 0.751$ & \\ $\Sigma^-$ & $-1.091$ & $-1.023$ & $-1.007$ & $-1.160 \pm 0.025$ \\ $\Xi^0$ & $-1.435$ & $-1.359$ & $-1.290$ & $-1.250 \pm 0.014$ \\ $\Xi^-$ & $-0.493$ & $-0.530$ & $-0.552$ & $-0.651 \pm 0.003$ \\ $\Sigma^0/\Lambda$ & $ 1.630$ & $ 1.640$ & $ 1.638$ & $ 1.61 \pm 0.08 $ \\ \noalign{\smallskip} \hline \end{tabular} \end{table} \subsection{Fluctuations} In the UQM, it is straightforward to calculate the fluctuation probabilities.
The probability that a proton fluctuates into $n \pi^+$ is given by \begin{eqnarray} \left| \left< n \pi^+ | p \right> \right|^2 \;=\; \frac{2a^2}{3(1+a^2+b^2)} ~, \end{eqnarray} whereas the total probability for a pion fluctuation of the proton is given by \begin{eqnarray} \left| \left< N \pi | p \right> \right|^2 + \left| \left< \Delta \pi | p \right> \right|^2 \;=\; \frac{a^2+b^2}{1+a^2+b^2} ~. \end{eqnarray} The numerical results are shown in Table~\ref{fluctuations}. The UQM1 values are in good agreement with the experimental values as determined in an analysis of forward neutron production in electron-proton collisions by the H1 and ZEUS Collaborations at DESY \cite{Povh,Rosina1}, and in a study of the quark distribution functions measured in Drell-Yan experiments and semi-inclusive DIS experiments \cite{Chang}. The UQM2 values are about 30\% higher than the UQM1 values. \begin{table}[ht] \centering \caption{\small Pion fluctuations of the proton.} \label{fluctuations} \vspace{15pt} \begin{tabular}{crrcc} \noalign{\smallskip} \hline \noalign{\smallskip} & UQM1 & UQM2 & Exp & Ref \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\left| \left< n \pi^+ | p \right> \right|^2$ & 0.180 & 0.241 & $0.17 \pm 0.01$ & \cite{Povh,Rosina1} \\ $\left| \left< N \pi | p \right> \right|^2 + \left| \left< \Delta \pi | p \right> \right|^2$ & 0.455 & 0.609 & 0.470 & \cite{Chang} \\ \noalign{\smallskip} \hline \end{tabular} \end{table} \section{Summary and conclusions} In this contribution, we studied the properties of the proton in the framework of the unquenched quark model in which the $^{3}P_0$ coupling strength was normalized to the observed value of the proton flavor asymmetry. It was shown that, whereas the pion fluctuations maintain the good results of the constituent quark model for the magnetic moments, they help to explain the discrepancies between the CQM and the experimental data. Their inclusion leads to a reduction of the quark model values of $\Delta u$ and $g_A$, and gives rise to a sizeable contribution (25-30\%) of orbital angular momentum to the spin of the proton. In addition, it was found that the probabilities for pion fluctuations in the UQM are in good agreement with the values determined in analyses of the available experimental data. \begin{theacknowledgments} This work was supported in part by grant IN107314 from DGAPA-PAPIIT, Mexico. \end{theacknowledgments} \bibliographystyle{aipproc}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \noindent If $Z$ is a topological space, define the \textbf{configuration space} of $n$-tuples $$ \Conf(n, Z) = \{ (z_1, \ldots, z_n) \in Z^n \mbox{ so that $i \neq j \implies z_i \neq z_j$ } \}. $$ Let $X$ be an abstract simplicial complex, and write $|X|$ for its geometric realization. This note provides combinatorial models for $\Conf(n, |X|)$ and $\Conf(n, C^{\circ}|X|)$, where $C^{\circ}|X|$ denotes the open cone on $|X|$. \begin{definition} Given a partial ordering of the vertices of $X$ that restricts to a total order on every face $\sigma \subseteq X$, a \textbf{conf matrix} is a matrix $( v_{ij} )$ of vertices in $X$ so that \begin{enumerate} \item each row is weakly monotone increasing in the vertex-ordering, \item for every row, the vertices appearing within that row form a face of $X$, and \item no row is repeated. \label{item:distinct} \end{enumerate} A conf matrix is called \textbf{minimal} if deleting a column results in duplicate rows, no matter which column is deleted. \end{definition} \begin{example} \label{example:triangle} If $X$ has vertices $\{1, 2, 3\}$, ordered as usual, and facets $\{12, 13, 23\}$, then there are $60$ minimal conf matrices with three rows. By (\ref{item:distinct}), the symmetric group $S_3$ acts freely on these matrices by permuting the rows. We list a representative from each orbit: $$ \scalebox{.66}{\mbox{$ \left[ \begin{array}{c} 1 \\ 2 \\ 3 \\ \end{array} \right]\;\left[ \begin{array}{cc} 1 & 1 \\ 1 & 2 \\ 2 & 2 \\ \end{array} \right]\;\left[ \begin{array}{cc} 1 & 1 \\ 1 & 3 \\ 2 & 3 \\ \end{array} \right]\;\left[ \begin{array}{cc} 1 & 1 \\ 1 & 3 \\ 3 & 3 \\ \end{array} \right]\;\left[ \begin{array}{cc} 1 & 2 \\ 1 & 3 \\ 2 & 2 \\ \end{array} \right]\; \left[ \begin{array}{cc} 1 & 2 \\ 1 & 3 \\ 2 & 3 \\ \end{array} \right] \; \left[ \begin{array}{cc} 1 & 2 \\ 1 & 3 \\ 3 & 3 \\ \end{array} \right] \; \left[ \begin{array}{cc} 1 & 2 \\ 2 & 2 \\ 2 & 3 \\ \end{array} \right]\;\left[ \begin{array}{cc} 1 & 3 \\ 2 & 2 \\ 2 & 3 \\ \end{array} \right]\;\left[ \begin{array}{cc} 2 & 2 \\ 2 & 3 \\ 3 & 3 \\ \end{array} \right] $.}} $$ \end{example} \begin{theorem} \label{theorem:global} Let $C(n, X)$ be the simplicial complex whose vertices are the minimal conf matrices for $X$ with $n$ rows, and where a collection of matrices forms a face if their columns can be assembled into a single conf matrix. There is a homotopy equivalence $$ |C(n, X) | \simeq \Conf(n, |X|). $$ \end{theorem} \begin{example} The minimal conf matrices listed in Example~\ref{example:triangle} give the vertices of $C(3, X)$. There are $48$ facets. 
Up to $S_3$-symmetry, they correspond to the following conf matrices: $$ \scalebox{.55}{\mbox{$ \left[ \begin{array}{cccc} 1 & 1 & 1 & 2 \\ 1 & 1 & 3 & 3 \\ 2 & 3 & 3 & 3 \\ \end{array} \right]\, \left[ \begin{array}{cccc} 1 & 1 & 1 & 2 \\ 1 & 2 & 2 & 2 \\ 2 & 2 & 3 & 3 \\ \end{array} \right]\,\left[ \begin{array}{cccc} 1 & 1 & 1 & 2 \\ 1 & 3 & 3 & 3 \\ 2 & 2 & 3 & 3 \\ \end{array} \right]\,\left[ \begin{array}{cccc} 1 & 1 & 1 & 3 \\ 1 & 1 & 2 & 2 \\ 2 & 3 & 3 & 3 \\ \end{array} \right]\, \left[ \begin{array}{cccc} 1 & 1 & 1 & 3 \\ 1 & 2 & 2 & 2 \\ 2 & 2 & 3 & 3 \\ \end{array} \right]\,\left[ \begin{array}{cccc} 1 & 1 & 2 & 2 \\ 1 & 3 & 3 & 3 \\ 2 & 2 & 2 & 3 \\ \end{array} \right]\,\left[ \begin{array}{cccc} 1 & 1 & 2 & 2 \\ 2 & 2 & 2 & 3 \\ 2 & 3 & 3 & 3 \\ \end{array} \right]\,\left[ \begin{array}{cccc} 1 & 1 & 3 & 3 \\ 1 & 2 & 2 & 2 \\ 2 & 2 & 2 & 3 \\ \end{array} \right]$.}} $$ The vertices of the facet corresponding to $M$ are the minimal conf matrices whose columns appear in $M$. \end{example} \begin{remark}[Symmetric group action] The complex $C(n, X)$ carries a natural action of the symmetric group $S_n$ by permuting rows. Under geometric realization, this action matches the usual $S_n$ action on configuration space $\Conf(n, |X|)$. \end{remark} \begin{remark}[Automorphisms of $X$] The model produced in Theorem \ref{theorem:global} also carries any symmetry present in the simplicial complex $X$, provided this symmetry preserves the vertex-ordering. \end{remark} Configuration space as we have described it so far may be termed ``global'' configuration space since each point is allowed to wander anywhere in $|X|$. We now introduce \textbf{local configuration space}, modeling configuration space in the open cone near a point $p \in |X|$. Passing to a subdivision if necessary, we may assume that $p$ is a vertex of $X$. In this case, a small open neighborhood of $p$ is homeomorphic to the open cone on $\Link_X(p)$. Consequently, finding a model for local configuration space for all $X$ and all $p \in |X|$ is the same as finding a model for $\Conf(n, C^{\circ} |X|)$ for all $X$, and we take this perspective. \begin{remark} The cohomology of local configuration space appears in the stalks of the higher direct images $R^q f_* \underline{\mathbb{Z}}$ where $f$ is the inclusion $$ f \colon \Conf(n, |X|) \hookrightarrow |X|^n. $$ These sheaves feature prominently in Totaro's analysis of the Leray spectral sequence for this inclusion \cite{Totaro96}. More recently, the master's thesis of L\"utgehetmann \cite[Chapter 4]{lutgehetmann} gives a partial description of these sheaves in the case where $X$ is one-dimensional. \end{remark} In contrast to the model for $\Conf(n, |X|)$ in Theorem \ref{theorem:global}, the model for $\Conf(n, C^{\circ} |X|)$ to be given in Theorem \ref{theorem:local} does not rely on a vertex ordering. Nevertheless, we assume for notational convenience that the vertices are $\{1, \ldots, k\}$. \begin{definition} \label{definition:posetofposets} The \emph{poset of posets on $n$ labeled elements} $(\mathcal{P}(n), \preceq)$ has $$ \mathcal{P}(n) = \{ \mbox{ poset structures on the set $\{1, \ldots, n\}$ } \} $$ where $\preceq $ denotes inclusion of relations. \end{definition} \begin{remark} Study of the poset $\mathcal{P}(n)$ has been undertaken by Serge Bouc in \cite{bouc2013} who computes its M\"obius function and the homotopy types of its intervals. 
\end{remark} \begin{definition} If $(S_1, \ldots, S_k) \in \mathcal{P}(n)^k$ is a $k$-tuple of posets, then the \textbf{support} of an element $i \in \{1, \ldots, n \}$ is defined $$ \sigma_i(S_1, \ldots, S_k) = \{ v \in \{1, \ldots, k\} \mbox{ so that $i$ is not minimal in the poset $S_v$ } \}. $$ \end{definition} \begin{definition} \label{definition:local} Define a poset $\mathcal{S}(n, X) \subseteq \mathcal{P}(n)^k$ whose elements are tuples $(S_1, \ldots, S_k)$ that satisfy \begin{enumerate} \item for every $i, j \in \{1, \ldots, n\}$, there is some $S_v$ in which $i$ and $j$ are comparable, and \label{item:related} \item for every $i \in \{1, \ldots, n\}$, the support $\sigma_i(S_1, \ldots, S_k) \subseteq \{1, \ldots, k\}$ is a face of $X$. \label{item:support} \end{enumerate} The ordering comes from imposing $\preceq$ in every coordinate. \end{definition} \begin{theorem} \label{theorem:local} Let $L(n, X)$ be the simplicial complex whose vertices are the minimal elements of the poset $\mathcal{S}(n, X)$ and whose faces are those collections that can be dominated by some element. There is a homotopy equivalence $|L(n, X)| \simeq \Conf(n, C^{\circ}|X|)$ where $C^{\circ}|X|$ denotes the open cone on the realization of $X$. \end{theorem} \begin{remark} Although Theorem \ref{theorem:local} is stated and proved for the open cone $C^{\circ}|X|$, it is possible to find a homotopy equivalence $\Conf(n, C^{\circ}|X|) \simeq \Conf(n, C|X|)$ to configuration space in a closed cone. Consequently, $$ |L(n, X)| \simeq \Conf(n, C|X|). $$ \end{remark} \begin{example}[Two points in the open cone on a closed interval] Let $X$ have vertex set $\{1, 2\}$ and a single facet $12$. Since there are three poset structures on the set $\{1, 2\}$, there are nine elements in the poset $\mathcal{P}(2)^2$. Eight of these nine have the property that every pair $i, j \in \{1, 2\}$ is related in one of the two posets; the one poset pair without this property is the pair where both posets are discrete. There are four minimal poset pairs, and these are the vertices of $L(2, X)$: $$ (\; _{1 \; 2}\;, \; _{1}^2 \;) \;\;\;\; (\; _{1 \; 2}\;, \; _{2}^1 \;) \;\;\;\; ( \; _{1}^2 \;, \; _{1 \; 2}\;)\;\;\;\; ( \; _{2}^1 \;, \; _{1 \; 2}\;). $$ There are also four facets, all one-dimensional, corresponding to the poset pairs $$ (\; \; _{1}^2\;, \; _{1}^2 \;) \;\;\;\; (\; \; _{2}^1\;, \; _{1}^2 \;) \;\;\;\; (\; \; _{1}^2\;, \; _{2}^1 \;) \;\;\;\; (\; \; _{2}^1\;, \; _{2}^1 \;), $$ \begin{figure} \begin{centering} \includegraphics{square.pdf} \end{centering} \caption{The simplicial complex $L(2, X)$ where $X$ is a $1$-simplex.} \end{figure} \noindent so this computation squares with $\Conf(2, C^{\circ}|X|) \simeq S^1$. \end{example} \begin{remark} The local model is reminiscent of classical work of Fox and Neuwirth describing configuration space in the plane \cite{FoxNeuwirth}. Roughly speaking, they build their cells out of the three expressions $(x_1 < x_2), \; (x_1 = x_2), \; (x_1 > x_2)$. Since the middle relation is closed while the other two are open, they obtain a partition of configuration space into locally closed subsets. Our model does away with the equality relation, lending much greater flexibility. The price is that we do not partition configuration space; instead, we obtain a model that is homotopy equivalent. \end{remark} \begin{remark} Recent work of Tosteson \cite{Tosteson16} shows that, in many cases, the cohomology of configuration space is representation stable in the sense of \cite{ChurchEllenbergFarb15}. 
In particular, the dimension of cohomology over any field is eventually polynomial by \cite{ChurchEllenbergFarbNagpal14}. It is natural to ask, therefore, if configuration space admits combinatorial models with only polynomially-many cells in each dimension. \end{remark} \begin{remark} Configuration spaces of pairs (i.e., the case $n=2$) are a classical topic. These spaces are called deleted products, and have applications in embedding theory; see \cite{Hu60}, \cite{Patty61}, \cite{Whitley71}, \cite{Skopenkov02}, for example. The first results on configuration space in a simplicial complex for $n > 2$ are due to Gal \cite{Gal01}, who computes the Euler characteristic of $\Conf(n, |X|)$. \end{remark} \section{Braid groups} \noindent The fundamental group $P_n(Z) = \pi_1 \Conf(n, Z)$ is called the \textbf{pure braid group on $n$ strands in $Z$} and sits inside $B_n(Z) = \pi_1 (\Conf(n, Z)/S_n)$, the \textbf{braid group on $n$ strands in $Z$}. These groups are related by the short exact sequence $$ 1 \longrightarrow P_n(Z) \longrightarrow B_n(Z) \longrightarrow S_n \longrightarrow 1. $$ The now-common interpretation of these fundamental groups as braid groups goes back to the paper of Fox and Neuwirth \cite{FoxNeuwirth}. Theorems \ref{theorem:global} and \ref{theorem:local} give presentations for the fundamental groupoids of configuration spaces, which makes the computation of braid groups algorithmic. If $Z$ is a smooth manifold, then the study of $P_n(Z)$ usually begins with a theorem of Fadell-Neuwirth \cite{FadellNeuwirth62} stating that the forgetful maps $\Conf(n+1, Z) \to \Conf(n, Z)$ are fibrations with fiber $Z - \{ \mbox{$n$ points} \}$. However, if $Z$ has singularities, then these forgetful maps are not fibrations, since the homotopy type of $Z - \{ \mbox{$n$ points} \}$ depends on which points are removed. Even the case of $n=2$ and $\dim Z = 1$ is interesting and subtle; see \cite{BarnettFarber09}. More generally, configuration spaces of graphs attract ongoing research interest. We contribute the following example. \begin{example} \label{example:k42} Let $Z$ be the complete bipartite graph $K_{4,2}$, and let $\Sigma_g$ be the smooth surface of genus $g$. In this and all subsequent examples, we use Sage \cite{sage}---and therefore, indirectly, the group theory program \cite{gap}---to compute braid groups: \begin{align*} P_3(Z) &= \pi_1 \, \Sigma_{13} \\ B_3(Z) &= \pi_1 \, \Sigma_{3}. \end{align*} By a result of Ghrist \cite{Ghrist01}, the configuration space of any graph besides the circle and the interval is a $K(\pi, 1)$, and so we actually have $\Conf(3, Z) \simeq \Sigma_{13}$. Abrams gave a similar result for two points in $K_5$ and the complete bipartite graph $K_{3,3}$; these spaces are homotopy equivalent to $\Sigma_6$ and $\Sigma_4$ respectively \cite{abrams_thesis}. That these are the only two-strand braid groups to give a surface group was shown by Ummel \cite{Ummel72}. \end{example} \begin{remark} Our method of computing the quotient $\Conf(n, Z)/S_n$ is to subdivide the model $C(n,X)$ so that pairs of vertices in the same $S_n$-orbit do not share a neighboring vertex. This endows the set of orbits $\{ \mbox{vertices} \}/ S_n$ with the structure of an abstract simplicial complex whose realization is the topological quotient. \end{remark} The next examples provide what seem to be the first computations of three-strand braid groups in a two-dimensional singular space.
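Before turning to these examples, we sketch the first step behind such computations: enumerating the vertices of the model $C(n, X)$, that is, the minimal conf matrices. The following Python fragment is a minimal illustration only, not the code used for the computations below; it hardcodes the complex of Example~\ref{example:triangle}, and it uses the fact that a minimal conf matrix with $n$ rows has at most $\binom{n}{2}$ columns (deleting distinct columns must merge distinct pairs of rows).
\begin{verbatim}
from itertools import combinations_with_replacement, permutations

# Faces of the hollow triangle of Example (triangle):
# every nonempty subset of {1, 2, 3} except {1, 2, 3} itself.
faces = {frozenset(f) for f in
         [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]}

def rows(ncols):
    """Weakly increasing tuples of vertices whose support is a face."""
    return [r for r in combinations_with_replacement((1, 2, 3), ncols)
            if frozenset(r) in faces]

def is_minimal(matrix):
    """Deleting any single column must create a duplicate pair of rows."""
    for c in range(len(matrix[0])):
        reduced = [row[:c] + row[c + 1:] for row in matrix]
        if len(set(reduced)) == len(reduced):  # no rows merged
            return False
    return True

n = 3  # number of points in the configuration
count = sum(1
            for ncols in range(1, n * (n - 1) // 2 + 1)
            for matrix in permutations(rows(ncols), n)
            if is_minimal(matrix))
print(count)  # 60, matching Example (triangle)
\end{verbatim}
Here \texttt{permutations} ranges over ordered tuples of distinct rows, so each $S_3$-orbit is counted six times, in agreement with the count of $60$ in Example~\ref{example:triangle}.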
\begin{example}[Braids near the singular point of a nodal curve] \label{example:node} Let $Z$ be the affine hypersurface $Z = \{(z_1, z_2) \in \mathbb{C}^2\; : \; z_1 z_2 = 0 \}$. We compute \begin{align*} P_2(Z) &= \mathbb{Z} \\ B_2(Z) &= (\mathbb{Z}/2\mathbb{Z}) \ast (\mathbb{Z}/2\mathbb{Z}) \\ P_3(Z) &= \mathbb{Z} \ast \mathbb{Z} \ast \mathbb{Z} \ast \mathbb{Z} \ast \mathbb{Z} \\ B_3(Z) &= \langle a, b, c, d \mid a^2, b^3, (ab)^2, c^2, d^3, (cd)^2 \rangle. \end{align*} Further computing \begin{align*} H_2(\Conf(2, Z)\,;\; \mathbb{Q}) &= \mathbb{Q}^2 \\ H_2(\Conf(3, Z)\,; \; \mathbb{Q}) &= \mathbb{Q}^{18}, \end{align*} we see that neither $\Conf(2, Z)$ nor $\Conf(3, Z)$ is an Eilenberg-MacLane space, since free groups have no homology beyond dimension one. This is perhaps surprising in light of the result of Ghrist that $\Conf(n, G)$ is aspherical for $G$ a graph, and similarly for $\Conf(n, \mathbb{R}^2)$ by a result of Fadell-Neuwirth \cite{FadellNeuwirth62}. \end{example} \begin{example}[Braids in the nodal cubic] \label{example:degenerate_elliptic} Let $Z$ be the Riemann sphere with two points identified; this is a topological description of the degenerate elliptic curve $y^2 z = x^3 + x^2 z$ advertised in the abstract. We compute \begin{align*} P_2(Z) &= \mathbb{Z} \ast \mathbb{Z} \\ B_2(Z) &= \mathbb{Z} \ast (\mathbb{Z}/2\mathbb{Z}). \end{align*} We give a guess for $P_3$ and $B_3$ in Conjecture \ref{conjecture:nodal_cubic}. \end{example} \begin{example}[Braids in the suspension of $S^1 \sqcup \ast$] \label{example:sphere_plus_edge} Let $Z$ be the subset of $\mathbb{R}^3$ given by the union of the unit sphere and a line segment connecting its poles: $$ Z = \{ (x, y, z) \; \mid \; x^2 + y^2 + z^2 = 1 \} \cup \{ (0, 0, t) \; \mid \; -1 \leq t \leq 1 \}. $$ We compute \begin{align*} P_2(Z) &= \mathbb{Z} \ast \mathbb{Z} \\ B_2(Z) &= \mathbb{Z} \ast (\mathbb{Z}/2\mathbb{Z}) \\ P_3(Z) &= \mathbb{Z} \ast \mathbb{Z} \ast \mathbb{Z} \\ B_3(Z) &= \langle a, b, c \mid a^2, b^3, (ab)^2, acac^{-1} \rangle. \end{align*} Let $Z'$ be the same space but with a small open neighborhood of the equator removed. Then $P_3(Z') = \mathbb{Z} \ast \mathbb{Z} \ast \mathbb{Z} \ast \mathbb{Z} \ast \mathbb{Z}$, matching a similar group in Example \ref{example:node}. \end{example} Although we have been so far unable to compute the three-strand braid group in the nodal cubic from Example \ref{example:degenerate_elliptic}, we make the following conjecture on the strength of Example \ref{example:sphere_plus_edge}. \begin{conjecture} \label{conjecture:nodal_cubic} The three-strand braid groups $P_3$, $B_3$ for the nodal cubic coincide with the three-strand braid groups for the space from Example~\ref{example:sphere_plus_edge}. \end{conjecture} \section{Acknowledgements} Thanks to Melody Chan, Fred Cohen, Jordan Ellenberg, Mark Goresky, Greg Malen, Sam Payne, Jos\'e Samper, and Phil Tosteson for valuable conversations, the staff and organizers at ICERM for the stimulating program ``Topology in Motion'' at which this work began, and MathOverflow user Gabriel C. Drummond-Cole for answering a question related to Example \ref{example:k42} in MO:269017. This research was supported by the NSF through the Algebra RTG at the University of Wisconsin, DMS-1502553. \section{Combinatorial deletion of a subcomplex} In what follows, let $A \subseteq X$ be a subcomplex of the simplicial complex $X$. Topologically, the difference $|X| - |A|$ makes perfect sense since $|A| \subseteq |X|$.
However, the combinatorics of such a subtraction cannot be as straightforward as removing the faces of $A$ from $X$, since this completely ruins the simplicial complex structure. We put forth the following remedy, which seems not to appear in the literature despite its simplicity. \begin{definition} The \textbf{simplicial difference} $X \ominus A$ is the simplicial complex whose vertices are the faces of $X$ that are minimal nonfaces of $A$ and where a collection of such minimal nonfaces $\{ \sigma_0, \ldots, \sigma_d \}$ is a face of $X \ominus A$ whenever the union $\cup \sigma_i$ is a face of $X$. \end{definition} \begin{lemma} \label{lemma:delete} If $X$ is an abstract simplicial complex and $A \subseteq X$ is a subcomplex, then there is a homotopy equivalence $|X \ominus A| \simeq |X| - |A|$. \end{lemma} \begin{proof} Suppose the vertices of $X$ are $\{1, \ldots, k\}$. One standard definition of the geometric realization of $X$ is \begin{multline*} |X| = \\ \{ (\alpha_1, \ldots, \alpha_k) \in \mathbb{R}^k \mbox{ so that $\alpha_i \geq 0$, $\sum_i \alpha_i = 1$, and $\Supp(\alpha)$ is a face of $X$} \}, \end{multline*} where $\Supp(\alpha) = \{\mbox{ $i$ so that $\alpha_i > 0$ } \}$. If $\sigma \subset X$ is a nonempty face of $X$, then define an open subset of $|X|$ $$ U_X(\sigma) = \{ (\alpha_1, \ldots, \alpha_k) \in |X| \mbox{ so that $\alpha_i >0$ for $i \in \sigma$ } \}. $$ The set $U_X(\sigma)$ is nonempty because we may take $\alpha_i = 1/(d + 1)$ for $i \in \sigma$ and $\alpha_i = 0$ when $i \not \in \sigma$, where $d$ denotes the dimension of $\sigma$. Call this tuple $b_{\sigma}$ since it is the barycenter of the realization of $\sigma$. The property of $b_{\sigma}$ that we will need is that $\Supp(b_{\sigma}) \subseteq \Supp(u)$ for any other $u \in U_X(\sigma)$. We shall see that $U_X(\sigma)$ is contractible---indeed, we now show that it is star-shaped with star point $b_{\sigma}$. We must show that, given any other point $u \in U_X(\sigma)$ and $\gamma \in [0, 1]$, the linear combination $\gamma u + (1-\gamma) b_{\sigma}$ lies within $U_X(\sigma)$. This amounts to checking that $\Supp(\gamma u + (1-\gamma) b_{\sigma})$ is a face of $X$. However, it is immediate that $$ \Supp(\gamma u + (1-\gamma) b_{\sigma}) = \Supp(u) $$ for any $\gamma > 0$ since $\Supp(b_{\sigma}) \subseteq \Supp(u)$; and $\Supp(u)$ is certainly a face of $X$ since $u \in |X|$. We now use some of these open sets $U_X(\sigma)$ to build a good cover of $|X|-|A|$; the result will then follow by Borsuk's nerve lemma. Note that a point $\alpha \in |X|$ lies in $|X|-|A|$ exactly when $\Supp(\alpha)$ is not a face of $A$. Since any such nonface must contain a minimal nonface, we see that the open sets $U_X(\sigma)$ cover $|X|-|A|$ as $\sigma$ ranges over the vertices of $X \ominus A$. To see that this cover is good, we must check that every intersection of these opens is either empty or contractible. Such an intersection $$ U_X(\sigma_1) \cap \cdots \cap U_X(\sigma_l) $$ is empty when the union $\cup \sigma_i$ is not a face of $X$, and otherwise equals $U_X(\cup \sigma_i)$. Since this last set is already known to be contractible, we are done: the resulting combinatorics matches the definition of $X \ominus A$. \end{proof} \section{Global configuration space} In this section, we provide background and a proof for Theorem \ref{theorem:global}. First, we will build a standard triangulation of the product $|X|^n$ so that the ``fat diagonal'' of illegal configurations is a subcomplex. Then, we will apply Lemma \ref{lemma:delete} to remove the fat diagonal.
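Before carrying out this plan, we note that Lemma~\ref{lemma:delete} is easy to experiment with computationally. The following minimal Python sketch (illustrative only; the helper names are ours) computes $X \ominus A$ by brute force for $X$ the hollow triangle and $A$ a single vertex, so that $|X| - |A|$ is a circle with one point removed.
\begin{verbatim}
from itertools import combinations

def closure(facets):
    """All nonempty faces of the complex generated by the facets."""
    faces = set()
    for facet in facets:
        f = tuple(sorted(set(facet)))
        faces.update(frozenset(c)
                     for r in range(1, len(f) + 1)
                     for c in combinations(f, r))
    return faces

def simplicial_difference(X_facets, A_facets):
    """Vertices: faces of X that are minimal nonfaces of A.
    Faces: collections of vertices whose union is a face of X."""
    X, A = closure(X_facets), closure(A_facets)
    nonfaces = [f for f in X if f not in A]
    vertices = [f for f in nonfaces if not any(g < f for g in nonfaces)]
    faces = [c for r in range(1, len(vertices) + 1)
             for c in combinations(vertices, r)
             if frozenset().union(*c) in X]
    return vertices, faces

# X = hollow triangle, A = the vertex 1.
vertices, faces = simplicial_difference([(1, 2), (1, 3), (2, 3)], [(1,)])
print(vertices)    # the minimal nonfaces {2} and {3}
print(len(faces))  # 3: both vertices and the edge joining them
\end{verbatim}
The output complex is a single closed edge, which is contractible, as a circle minus a point should be.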
Products are more convenient in the context of simplicial sets instead of simplicial complexes because the product of two simplicial sets comes with a natural triangulation. The extra structure required to upgrade a simplicial complex to a simplicial set is a partial ordering on the vertices that restricts to a total order on every face. \begin{definition} If $X$ is a simplicial complex with vertex set $\{1, \ldots, k\}$, and if the vertices carry a partial order that restricts to a total order on every face, then define the \textbf{simplicial set associated to $X$} which is a functor $$ X_{\bullet} \colon \Delta^{op} \to \mathbf{Set} $$ so that $X_n$ is the set of weakly increasing chains of vertices $v_0 \leq \ldots \leq v_n$ with the property that $\{v_0, \ldots, v_n \}$ is a face of $X$. Such a chain pulls back along a weakly monotone function $f \colon \{0, \ldots, m\} \to \{0, \ldots, n\}$ using the rule $$ v_0 \leq \ldots \leq v_n \;\;\;\;\; \mapsto \;\;\;\;\; v_{f(0)} \leq \ldots \leq v_{f(m)}. $$ \end{definition} We recall several basic facts about simplicial sets; see \cite[p. 538]{Hatcher}, for example. Writing $\| - \| \colon \mathbf{sSet} \to \mathbf{Top}$ for the realization functor on simplicial sets, we have $|X| \cong \| X_{\bullet} \|$. Also, as we have mentioned, we have a homeomorphism $\|S_{\bullet} \times T_{\bullet} \| \cong \|S_{\bullet} \| \times \|T_{\bullet} \|$ for any pair of simplicial sets $S_{\bullet}, T_{\bullet}$. Finally, we note that the degreewise diagonal map $X_{\bullet} \hookrightarrow X_{\bullet} \times X_{\bullet}$ and projection maps $X_{\bullet} \times X_{\bullet} \to X_{\bullet}$ realize to their topological counterparts $\| X_{\bullet} \| \hookrightarrow \| X_{\bullet} \| \times \| X_{\bullet} \|$ and $\|X_{\bullet}\| \times \|X_{\bullet}\| \to \|X_{\bullet}\|$. \begin{lemma}[Triangulation of a product {\cite[\S 3.B]{Hatcher} or \cite[p. 68]{EilenbergSteenrod1952}}] \label{lemma:triangulation} Suppose that $X$ and $Y$ are simplicial complexes, and that each is equipped with a partial order on its vertices that restricts to a total order on each of its faces. Write $X \times Y$ for the simplicial complex whose vertices are pairs of vertices $(x, y)$ with $x \in X, y \in Y$ and where the faces are of the form $\{ \, (x_1, y_1), \ldots, (x_d, y_d) \, \}$ with $x_1 \leq \cdots \leq x_d$ and $y_1 \leq \cdots \leq y_d$. Then $$ |X \times Y| \cong |X| \times |Y|. $$ \end{lemma} \begin{proof} In brief, we have $|X \times Y| \cong \| (X \times Y)_{\bullet} \| \cong \| X_{\bullet} \times Y_{\bullet} \| \cong \| X_{\bullet} \| \times \| Y_{\bullet} \| \cong |X| \times |Y|$. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem:global}] Triangulating $|X|^n$ in the standard way described in Lemma \ref{lemma:triangulation}, we obtain a simplicial complex $X^n$ with a homeomorphism $|X^n| \cong |X|^n$. In detail, the vertices of $X^n$ are $n$-tuples of vertices of $X$, and a collection of $n$-tuples forms a face if they can be arranged to be weakly increasing in every coordinate. Writing each $n$-tuple as a column vector, the faces are matrices with $n$ rows where each row is weakly increasing. The fat diagonal $F \subseteq X^n$ is the subcomplex whose faces are matrices that have a repeated row, so a matrix is a face of $F$ if it is almost a conf matrix but fails condition (\ref{item:distinct}). By Lemma \ref{lemma:delete} we have a homotopy equivalence $$ |X^n \ominus F| \simeq |X^n| - |F| \cong |X|^n - |F| = \Conf(n, |X|). 
$$ Not by coincidence, the definition of $C(n, X)$ matches that of $X^n \ominus F$, and we are done. \end{proof} \section{Local configuration space} \noindent Define the \textbf{cone realization} of a simplicial complex with vertices $\{1, \ldots, k\}$ $$ |X|_{cone} = \{ \mbox{ $(\alpha_1, \ldots, \alpha_k) \in \mathbb{R}^k$ so that $\alpha_i \geq 0$ for all $i$ and $\Supp(\alpha)$ is a face of $X$ } \}, $$ where $\Supp(\alpha) = \{ \mbox{ $i$ so that $\alpha_i > 0$ } \}$ records the positions of the positive entries of $\alpha$. The usual realization $|X| \subset |X|_{cone}$ is the intersection of the cone realization with the hyperplane $\sum_i \alpha_i = 1$. We have a homeomorphism $$ |X|_{cone} \cong C^{\circ} |X| $$ to the open cone on $|X|$. The $n$-fold product $(|X|_{cone})^n$ sits inside the matrix space $\Mat_{\mathbb{R}}(n \times k)$ where each row gives a $k$-tuple that lives in $|X|_{cone}$: $$ (|X|_{cone})^n \simeq \left\{ \parbox{28em}{ real matrices $M \in \Mat_{\mathbb{R}}(n \times k)$ so that $M_{ri} \geq 0$ and for each $r \in \{1, \ldots, n\}$, the $r^{\tiny \mbox{th}}$ row $(M_{r1}, M_{r2}, \ldots, M_{rk})$ lies in $|X|_{cone}$ } \right\}. $$ Such a matrix $M$ gives a point of $\Conf(n, |X|_{cone})$ when its rows are distinct: $$ \Conf(n, |X|_{cone}) \simeq \left\{ \parbox{29em}{ real matrices $M \in (|X|_{cone})^n \subseteq \Mat_{\mathbb{R}}(n \times k)$ having distinct rows } \right\}. $$ For any $i \in \{1, \ldots, k\}$ and partial order $P$ on the set $\{1, \ldots, n\}$, we define an open subset of $(|X|_{cone})^n$ by asking that column $i$ of the matrix $M$ obey the strict inequalities present in the partial order $P$: $$ U_P^i = \left\{ \parbox{21em}{ real matrices $M \in (|X|_{cone})^n$ so that for any $a, b \in \{1, \ldots, n\}$ we have $a <_P b \implies M_{ai} < M_{bi}$ } \right\}. $$ If no two points $a, b \in \{1, \ldots, n\}$ are comparable in $P$, then the open set $U_P^i$ imposes no condition. At the other extreme, if $P$ is a total order, then the open set $U_P^i$ consists of those matrices for which the real numbers in column $i$ are sorted according to $P$. \begin{lemma} \label{lemma:contractible} If $S = (S_1, \ldots, S_k)$ is a $k$-tuple of partial orders on the set $\{1, \ldots, n\}$, then the intersection $$ U_S = U_{S_1}^1 \cap \cdots \cap U_{S_k}^k $$ is either contractible or empty. \end{lemma} \begin{proof} The intersection in question may be written \begin{equation*} U_S = \left\{ \parbox{26em}{ real matrices $M \in (|X|_{cone})^n$ so that for any $a, b \in \{1, \ldots, n\}$ and $i \in \{1, \ldots, k \}$ we have $a <_{S_i} b \implies M_{ai} < M_{bi}$ } \right\}. \end{equation*} We show that this intersection is star-shaped by finding a star-point matrix whose entries are zero ``wherever possible.'' Write $\varepsilon_i \subseteq \{1, \ldots, n\}$ for the minimal elements of poset $S_i$, and $\eta_i \subseteq \{1, \ldots, n\}$ for the non-minimal elements, so that $\{1, \ldots, n\} = \varepsilon_i \sqcup \eta_i$. For any non-minimal $b \in \eta_i$ and any $M \in U_S$, it must be that $0 < M_{bi}$. Indeed, since $b$ is not minimal, there exists $a$ with $a <_{S_i} b$, and so $0 \leq M_{ai} < M_{bi}$. In other words, every matrix $M \in U_S$ has certain entries that must be nonzero. We now show that there exists a matrix $M^{\star} \in U_S$ that is nonzero in exactly these entries, meaning $M_{ai}^{\star} = 0$ for all $a \in \varepsilon_i$.
In order to fill in the other entries of $M^{\star}$, we must choose positive values for $M_{bi}^{\star}$ with $b \in \eta_i$. Restrict the partial order $S_i$ to the subset $\eta_i$. Extend this restricted order to a total order arbitrarily, and embed the resulting total order in the positive reals $\mathbb{R}_{>0}$. This embedding extends to a function $\varphi_i \colon S_i \to \mathbb{R}_{\geq 0}$ so that $\varphi_i(a) = 0$ for $a \in \varepsilon_i$ and $\varphi_i(b) > 0$ for $b \in \eta_i$; the extension is still order-preserving because two minimal elements are never comparable. Setting $M_{ai}^{\star} = \varphi_i(a)$, we have produced a matrix of minimum support: its entries are zero everywhere that a zero is possible. If $U_S$ is nonempty, then $M^{\star}$ lies in $U_S$: the required inequalities hold by construction, and the support of each row of $M^{\star}$ is contained in the support of the corresponding row of any $M \in U_S$, so every row of $M^{\star}$ lies in $|X|_{cone}$. In order to see that $M^{\star}$ is a star point, choose some other $M \in U_S$. We must show that every convex combination $M^{\delta} = \delta M^{\star} + (1-\delta) M$ remains in $U_S$, where $0 \leq \delta \leq 1$. Since every positive entry of $M^{\star}$ is also positive in $M$, the support of $M^{\delta}$ matches the support of $M$. As a result, the rows of $M^{\delta}$ are still elements of $|X|_{cone}$. The other conditions defining $U_S$ are of the form $M_{ai} < M_{bi}$, and these conditions are convex, which means they are automatically satisfied by $M^{\delta}$. In detail, suppose $a <_{S_i} b$. Then, since both $M^{\star}$ and $M$ are in $U_S$, we have $M_{ai}^{\star} < M_{bi}^{\star}$ and $M_{ai} < M_{bi}$. It immediately follows that $\delta M_{ai}^{\star} + (1-\delta)M_{ai} < \delta M_{bi}^{\star} + (1-\delta)M_{bi}$, and so $M^{\delta}_{ai} < M^{\delta}_{bi}$ as required. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem:local}] Lemma \ref{lemma:contractible} supplies open subsets of $(|X|_{cone})^n$ that we will use to make a good cover of $\Conf(n, |X|_{cone})$. We will show that the combinatorics of this cover match the combinatorics defining $L(n, X)$, concluding the proof by the nerve lemma. For every tuple of posets $S = (S_1, \ldots, S_k) \in \mathcal{S}(n, X)$, we have defined an open set $U_S$. Using condition (\ref{item:support}) in Definition~\ref{definition:local}, these opens are nonempty. They are therefore contractible by Lemma~\ref{lemma:contractible}. Because of (\ref{item:related}), each of these opens is a subset of configuration space $U_S \subseteq \Conf(n, |X|_{cone})$. Given any configuration of distinct points in $|X|_{cone}$, we may record the partial order on $\{1, \ldots, n\}$ induced by each coordinate. Note that the ordering may not be a total order because two points may tie in any given coordinate. However, in order for the points to be distinct, any two must differ in some coordinate. This argument shows that the sets $U_S$ with $S \in \mathcal{S}(n, X)$ form an open cover. Next, we note that any intersection of open subsets drawn from the cover $\{ \, U_S \, \}_{S \in \mathcal{S}(n, X)}$ is either empty or else still a set from the cover. The reason is that a collection of partial orders on $\{1, \ldots, n \}$, when simultaneously imposed and closed under transitivity, either leads to a contradiction or else to a partial order extending all of those in the collection in a minimal way. The vertices of $L(n, X)$ consist of the minimal elements of $\mathcal{S}$. The corresponding open sets form an open cover since any other $U_S$ is contained in one with $S$ minimal. An intersection of such opens is non-empty exactly when there is some $S \in \mathcal{S}$ so that $U_S$ is the intersection.
We have already seen that $U_S$ is contractible, and so $L(n, X)$ is the simplicial complex that records the combinatorics of the good cover provided by the minimal elements of $\mathcal{S}$. The result then follows from Borsuk's nerve lemma. \end{proof} \bibliographystyle{amsalpha}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{section.intro} Financial systems are characterised by a complex and dynamic network of relationships between multiple agents. Network analysis offers a powerful way to describe and understand the structure and evolution of these relationships; background information can be found in \cite{kolaczyk2009statistical}, \cite{jackson2008social}, and \cite{caldarelli2007scale}. The network structure plays an important role in determining system stability in response to the spread of contagion, such as epidemics in populations or liquidity stress in financial systems. The importance of network studies in assessing stability and systemic risk has been emphasised in \cite{schweitzer2009economic} in the context of integrating economic theory and complex systems research. Liquidity stress is of special interest in banking networks. The topology of a banking network is recognised as one of the key factors in system stability against external shocks and systemic risks \cite{haldane2011}. In this respect, financial networks resemble ecological networks. Ecological networks demonstrate robustness against shocks by virtue of their continued survival, and their network properties are thought to make them more resilient against disturbances \cite{may2008ecology}. Often they are disassortative in the sense that highly connected nodes tend to have most of their connections with weakly connected nodes (see \cite{newman2003mixing} for details). Disassortativity and other network properties are often used to judge the stability of financial networks. There has been an explosion in empirical interbank network studies in recent years, thanks largely to the introduction of electronic settlement systems. One of the first, reported in \cite{boss2004}, examines the Austrian interbank market, which involves about 900 participating banks. The data are drawn from the Austrian bank balance sheet database (MAUS) and the major loan register (GKE) containing all high-value interbank loans above \euro\( 0.36\times10^6 \); smaller loans are estimated by means of local entropy maximisation. The authors construct a network representation of interbank payments for ten quarterly periods from 1999 to 2003. They find that the network exhibits small-world properties and is characterised by a power-law distribution of degrees. Specifically, the degree distribution is approximated by a power law with the exponent \( -2.01 \) for degrees \( \gtrsim40 \). This result, albeit with different exponents, holds for the in- and out-degree distributions too (the exponent is \( -3.1 \) for out-degrees and \( -1.7 \) for in-degrees). A recent study of transactional data from the Austrian real-time interbank settlement system (ARTIS) reported in \cite{kyriakopoulos2009network} demonstrates a strong dependence of network topology on the time-scales of observation, with power-law tails exhibiting steeper slopes when long time-scales are considered. The network structure of transactions between Japanese banks, logged by the Bank of Japan Financial Network system (BOJ-NET), is analysed in \cite{inaoka2004fractal}. The authors consider several monthly intervals of data from June 2001 to December 2002 and construct monthly networks of interbank links corresponding to 21 transactions or more, i.e.\ one or more transactions per business day on average. Truncating in this way eliminates about 200 out of 546 banks from the network.
The resulting monthly networks have a low connectivity of 3\% and a scale-free cumulative distribution of degrees with the exponent \( -1.1 \). More than half a million overnight loans from the Italian electronic broker market for interbank deposits (e-MID), covering the period from 1999 to 2002, are analysed in \cite{de2006fitness}. There are about 140 banks in the network, connected by about 200 links. The degree distribution is found to exhibit fat tails with power-law exponent \( 2.3 \) (\( 2.7 \) for in-degrees and \( 2.15 \) for out-degrees); the network is disassortative, with smaller banks staying on its periphery. In a related paper \cite{iori2007trading}, the authors make use of the same dataset to uncover liquidity management strategies of the participating banks, given the reserve requirement of 2\% on the 23rd of each month imposed by the central bank. Signed trading volumes are used as a proxy for the liquidity strategies and their correlations are analysed. Two distinct communities supporting the dichotomy in strategy are identified by plotting the correlation matrix as a graph. The two communities are mainly composed of large and small banks respectively. On average, small banks serve as lenders and large banks as borrowers, but the strategies reversed in July 2001, when target interest rates in the Euro area stopped rising and started to decrease. The authors also note that some banks, mostly small ones, tend to maintain their reserves through the maintenance period. The evolution of the network structure over the monthly maintenance period is examined in \cite{iori2008network}. A study of the topology of the Fedwire network, a real-time gross settlement (RTGS) system operated by the Federal Reserve System in the USA, is reported in \cite{soramaki2007}. The study covers 62 days in the 1st quarter of 2004, during which time Fedwire comprised more than 7500 participants and settled \( 3.45\times10^5 \) payments daily with total value {\us}1.3 trillion. It reveals that Fedwire is a small-world network with low connectivity (0.3\%), moderate reciprocity (22\%), and a densely connected sub-network of 25 banks responsible for the majority of payments. Both in- and out-degree distributions follow a power law for degrees $\gtrsim10$ (exponent \( 2.15 \) for in-degrees and \( 2.11 \) for out-degrees). The network is disassortative, with the correlation of out-degrees equal to \( -0.31 \). The topology of overnight loans in the federal funds market in the USA is examined in \cite{bech2010}, using a large dataset spanning 2415 days from 1999 to 2006. It is revealed that the overnight loans form a small-world network, which is sparse (connectivity 0.7\%), disassortative (assortativity ranging from \( -0.06 \) to \( -0.28 \)), and has low reciprocity of 6\%. The reciprocity changes slowly with time and appears to follow the target interest rate over the period of several years. A power law is the best fit for the in-degree distribution, but the fit is only good for a limited range of degrees. A negative binomial distribution, which requires two parameters rather than the one needed for a power law, fits the out-degree distribution best. A comprehensive survey of studies of interbank networks is given in \cite{imakubo2010transaction}. The number of interbank markets being analysed continues to increase. For example, a study of the interbank exposures in Brazil for the period from 2004 to 2006 was reported in \cite{cajueiro2008role}.
A topological analysis of money market flows logged in the Danish large-value payment system (Kronos) in 2006 was reported in \cite{rordam2008topology}, where customer-driven transactions are compared with the bank-driven ones. Empirical network studies have been used to guide the development of a network model of the interbank market based on interbank credit lending relationships \cite{li2010}. Establishing basic topological features of interbank networks is essential for understanding these complex systems. Fundamentally, however, interbank money markets are flow networks, in which links between the nodes correspond to monetary flows. The dynamics of such flows has not been examined in depth in previous studies, which mostly viewed interbank networks as static or slowly varying. But the underlying flows are highly dynamic and complex. Moreover, monetary flows are inhomogeneous; loan flows are fundamentally different from the flows of other payments. Payments by the banks' customers and the banks themselves cause imbalances in the exchange settlement accounts of the banks. For some banks, the incoming flows exceed the outgoing flows on any given day; for other banks, the reverse is true. Banks with excess reserves lend them in the overnight money market to banks with depleted reserves. This creates interesting dynamics: payment flows cause imbalances, which in turn drive compensating flows of loans. Understanding this dynamic relationship is needed for advancing our ability to model interbank markets effectively. In this paper, our objective is to characterise empirically the dynamics of interbank monetary flows. Unlike most studies cited above, we aim to uncover the fundamental causal relationship between the flows of overnight loans and other payments. We choose to specialise in the Australian interbank market, where we have privileged access to a high-quality dataset provided by the Reserve Bank of Australia (RBA). Our dataset consists of transactions settled in the Australian interbank market in the period from 19 to 23 February 2007. We separate overnight loans and other payments (which we call nonloans) using a standard matching procedure. The loan and nonloan transactions settled on a given day form the flow networks, which are the main target of our statistical analysis. We compare the topology and variation of the loan and nonloan networks and reveal the causal mechanism that ties them together. We investigate the dynamical stability of the system by testing how individual flows vary from day to day. Basic network properties such as the degree distribution and assortativity are examined as well. \section{Data} \label{section.data} High-value transactions between Australian banks are settled via the Reserve Bank Information and Transfer System (RITS) operated by the RBA since 1998 on an RTGS basis \citep{gallagher2010}. The transactions are settled continuously throughout the day by crediting and debiting the exchange settlement accounts held by the RBA on behalf of the participating banks. The banks' exchange settlement accounts at the RBA are continuously monitored to ensure liquidity, with provisions for intra-day borrowing via the intra-day liquidity facility provided to the qualifying banks by the RBA. This obviates the need for a monthly reserve cycle of the sort maintained by Italian banks as discussed in \cite{iori2008network}.
The RITS is used as a feeder system for transactions originating from SWIFT\footnote{Society for Worldwide Interbank Financial Telecommunication} and Austraclear for executing foreign exchange and securities transactions respectively. The member banks can also enter transactions directly into RITS. The switch to real-time settlement in 1998 was an important reform which protects the payment system against systemic risk, since transactions can only be settled if the paying banks possess sufficient funds in their exchange settlement accounts. At present, about \( 3.2\times10^4 \) transactions are settled per day, with total value around {\au}168 billion. The data comprise all interbank transfers processed on an RTGS basis by the RBA during the week of 19 February 2007. During this period, 55 banks participated in the RITS, including the RBA. The dataset includes transfers between the banks and the RBA, such as RBA's intra-day repurchase agreements and money market operations. The real bank names are obfuscated (replaced with labels from A to BP) for privacy reasons, but the obfuscated labels are consistent over the week. The transactions are grouped into separate days, but the time stamp of each transaction is removed. \begin{table} \centering \begin{tabular}{ccc} \hline Date & Volume & Value ({\au}\( 10^9 \))\\ \hline 19-02-2007 & 19425 & \phantom{0}82.2506 \\ 20-02-2007 & 27164 & 206.1023 \\ 21-02-2007 & 24436 & 161.9733 \\ 22-02-2007 & 25721 & 212.1350 \\ 23-02-2007 & 26332 & 184.9202 \\ \hline \end{tabular} \caption{The number of transactions (volume) and their total value (in units of {\au}\( 10^9 \)) for each day.} \label{total values} \end{table} During the week in question, around \( 2.5\times10^4 \) transactions were settled per day, with the total value of all transactions rising above {\au}\( 2\times10^{11} \) on Tuesday and Thursday. The number of transactions (volume\footnote{The term ``volume'' is sometimes used to refer to the combined dollar amount of transactions. In this paper, we only use the term ``volume'' to refer to the number of transactions and ``total value'' to refer to the combined dollar amount. This usage follows the one adopted by the RBA \cite{gallagher2010}. }) and the total value (the combined dollar amount of all transactions) for each day are given in Table~\ref{total values}. Figure~\ref{value histograms} shows the distribution of transaction values on a logarithmic scale. Local peaks in the distribution correspond to round values. The most pronounced peak occurs at {\au}\( 10^6 \). In terms of the number of transactions, the distribution consists of two approximately log-normal components, with lower-value transactions being slightly more numerous. The standard expectation maximisation algorithm for a Gaussian mixture model with two components \citep{mclachlan2000finite} produces a satisfactory fit with the parameters indicated in Table~\ref{GMM}. The lower- and higher-value components are typically centred around {\au}\( 10^4 \) and {\au}\( 10^6 \) respectively. The high-value component is small on Monday (19-02-2007) but increases noticeably on subsequent days, while the low-value component diminishes. By value, however, the distribution is clearly dominated by transactions above {\au}\( 10^6 \), with the highest contribution from around {\au}\( 2\times10^8 \).
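A fit of this kind is easy to reproduce with standard tools. The following minimal Python sketch assumes the daily transaction values are available as a NumPy array (the input file name is a placeholder) and uses scikit-learn's \texttt{GaussianMixture}, which may differ in detail from the implementation behind Table~\ref{GMM}.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder input: one day's transaction values in AUD.
values = np.loadtxt("transactions_20070219.txt")

u = np.log10(values).reshape(-1, 1)  # fit in u = log10(value) space
gmm = GaussianMixture(n_components=2, random_state=0).fit(u)

for mean, var, weight in zip(gmm.means_.ravel(),
                             gmm.covariances_.ravel(),
                             gmm.weights_):
    # Mean <u>, variance sigma_u^2, and mixing proportion P,
    # in the format of the GMM parameter table.
    print(f"<u> = {mean:.2f}  var = {var:.2f}  P = {weight:.2f}")
\end{verbatim}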
\begin{table} \centering \begin{tabular}{ccccccc} \hline Date & \multicolumn{3}{c}{Component 1} & \multicolumn{3}{c}{Component 2} \\ & \( \mean{u} \) & \( \sigma_u^2 \) & \( P \) & \( \mean{u} \) & \( \sigma_u^2 \) & \( P \) \\ \hline 19-02-2007 & 4.00 & 1.12 & 0.81 & 6.68 & 0.68 & 0.19 \\ 20-02-2007 & 3.55 & 0.72 & 0.43 & 5.73 & 1.49 & 0.57 \\ 21-02-2007 & 3.66 & 0.86 & 0.55 & 5.86 & 1.43 & 0.45 \\ 22-02-2007 & 3.87 & 1.01 & 0.68 & 6.42 & 1.07 & 0.32 \\ 23-02-2007 & 3.82 & 0.87 & 0.61 & 6.12 & 1.19 & 0.39 \\ \hline \end{tabular} \caption{Mean \( \mean{u} \), variance \( \sigma_u^2 \), and mixing proportion \( P \) of the Gaussian mixture components shown in Figure~\ref{value histograms} (\( u=\log_{10}v \), where \( v \) is value).} \label{GMM} \end{table} \section{Overnight loans} The target interest rate of the RBA during the week of our sample was \( r_t=6.25\% \) per annum. If the target rate is known, it is easy to extract the overnight loans from the data by identifying reversing transactions on consecutive days. A hypothetical interest rate can be computed for each reversing transaction pair and compared with the target rate. For instance, suppose a transaction of value \( v_1 \) from bank A to bank B on day 1 reverses with value \( v_2 \), from bank B to bank A, on day 2. These transactions are candidates for the first and second legs of an overnight loan from A to B. The hypothetical interest rate for this pair of transactions is given by \( r_h = 100\%\times365\times(v_2-v_1)/v_1 \); note that the quoted target rate is per annum. Since large banks participate in many reversing transactions that can qualify as loans, we consider all possible hypothetical pairs and prefer the one that gives \( r_h \) closest to the target rate. The algorithm for loan extraction is applied from Monday to Thursday; loans issued on Friday cannot be processed since the next day is not available. A similar procedure was pioneered by Furfine \cite{furfine2003interbank}; see also \cite{ashcraft2007systemic}. \begin{figure}[t] \includegraphics{value_histograms.eps} \caption{The distribution of transaction values \( v \) (in Australian dollars) on a logarithmic scale, with bin size \( \Delta\log_{10}v=0.1 \); the vertical axis is the number of transactions per bin. Components of the Gaussian mixture model are indicated by the dashed curves; the solid curve is the sum of the two components. The dotted histogram shows the relative contribution of transactions at different values to the total value (to compute the dotted histogram we multiply the number of transactions in a bin by their value). } \label{value histograms} \end{figure} \begin{figure}[t] \begin{center} \includegraphics{loans_scatter.eps} \caption{Hypothetical interest rate \( r_h \) versus value of the first leg of the transaction pairs detected by our algorithm, with no restrictions on value or interest rate. The dotted rectangle contains the transactions that we identify as overnight loans. The least-squares fit is shown with a solid red line. }\label{loans scatter} \end{center} \end{figure} The application of the above algorithm results in the scatter diagram shown in Figure~\ref{loans scatter}. There is a clearly visible concentration of the reversing transaction pairs in the region \( v>2\times10^5 \) and \( |r_t-r_h|<0.5\% \) (the dotted rectangle in Figure~\ref{loans scatter}). We identify these pairs as overnight loans.
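For concreteness, the matching step can be sketched as follows. This is a simplified, greedy illustration only; the \texttt{(source, destination, value)} tuple format and all names are assumptions of ours, not the RITS data layout or the code used for this paper.
\begin{verbatim}
def match_overnight_loans(day1, day2, target_rate=6.25,
                          min_value=2e5, max_gap=0.5):
    """Pair first-leg payments on day 1 with reversing payments on
    day 2 whose implied overnight rate is closest to the target.
    day1, day2: lists of (source, destination, value) tuples."""
    loans, used = [], set()
    for src, dst, v1 in day1:
        if v1 < min_value:
            continue
        best = None
        for k, (s2, d2, v2) in enumerate(day2):
            if k in used or (s2, d2) != (dst, src):
                continue
            r_h = 100.0 * 365 * (v2 - v1) / v1  # per annum, in percent
            if abs(r_h - target_rate) < max_gap and (
                    best is None
                    or abs(r_h - target_rate) < abs(best[1] - target_rate)):
                best = (k, r_h)
        if best is not None:
            used.add(best[0])
            loans.append((src, dst, v1, best[1]))
    return loans
\end{verbatim}
The thresholds \texttt{min\_value} and \texttt{max\_gap} correspond to the selection region described in the text; the greedy pass over first legs is a simplification of the global best-match assignment.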
Contamination from nonloan transaction pairs that accidentally give a hypothetical rate close to the target rate is insignificant. By examining the adjacent regions of the diagram, i.e.\ \( v>2\times10^5 \) and \( r_h \) outside of the dotted rectangle, we estimate the contamination to be less than 2\% (corresponding to \( \le5 \) erroneous identifications per day). It is also possible that some genuine loans fall outside our selection criteria. However, it is unlikely that overnight interest rates are very different from the target rate; and the lower-value transactions (below {\au}\( 10^4 \)), even if they are real loans, contribute negligibly to the total value. \begin{table} \centering \begin{tabular}{cccc} \hline Date & Volume & Value ({\au}\( 10^9 \)) & Loan fraction\\ \hline 19-02-2007 & 185 & \phantom{0}7.50 & 9.12\% \\ 20-02-2007 & 221 & \phantom{0}9.18 & 4.45\% \\ 21-02-2007 & 226 & 11.08 & 6.84\% \\ 22-02-2007 & 265 & 14.93 & 7.04\% \\ \hline \end{tabular} \caption{Statistics of the overnight loans identified by our algorithm: the number of loans (volume), the total value of the first leg of the loans (in units of {\au}\( 10^9 \)), and the fraction of the total value of the loans (first legs only) with respect to the total value of all transactions on a given date.} \label{loans} \end{table} We identify 897 overnight loans over the four days. A daily breakdown is given in Table~\ref{loans}. Here and below, we refer to the first leg of the overnight loans as simply loans and to all other transactions as nonloans. The loans constitute less than 1\% of all transactions by number and up to 9\% by value (cf. Tables~\ref{total values} and~\ref{loans}). The distribution of loan values and interest rates is shown in Figures~\ref{loans values} and~\ref{loans rates}. The interest rate distribution peaks at the target rate 6.25\%. The mean rate is within one basis point (0.01\%) of the target rate, while the standard deviation is about 0.07\%. The average interest rate increases slightly with increasing value of the loan; a least-squares fit yields \( r_h=6.248+0.010\log_{10}(v/\mathrm{\au}10^6) \). The same technique can be used to extract two-day and longer-term loans (up to four-day loans for our sample of five consecutive days). Using the same selection criteria as for the overnight loans, our algorithm detects 27, 67, and 24 two-day loans, with total values {\au}1.3, {\au}2.2, and {\au}1.4 billion, on Monday, Tuesday, and Wednesday, respectively. The total value of the two-day loans is 1.5\%, 1.0\%, and 0.9\% of the total transaction values on these days respectively. \section{Nonloans} We display the distributions of the incoming and outgoing nonloan transactions, for which the bank is the destination and the source respectively, for the six largest banks in Figure~\ref{individual distributions}. The distributions are similar to the total distribution shown in Figure~\ref{value histograms}, with the notable exception of BA (see below). There is also an unusually large number of {\au}106 and {\au}400 transactions from W to T on Monday. Note that the daily imbalance for each bank is mostly determined by the highest value transactions; large discrepancies between incoming and outgoing transactions at lower values are less relevant. The distribution for BA is clearly bimodal; it contains an unusually high proportion of transactions greater than {\au}\( 10^6 \). Moreover, below {\au}\( 10^6 \), incoming transactions typically outnumber outgoing ones by a large amount. BA is also involved in many high-value transactions that reverse on the same day.
These transactions probably correspond to the central bank's repurchase agreements, which facilitate intra-day liquidity of the banks \citep{rbarepos}. \begin{figure}[t!] {\centering \subfloat[]{\includegraphics{loans_values.eps}\label{loans values}} \subfloat[]{\includegraphics{loans_rates.eps}\label{loans rates}} } \caption{\subref{loans values} The distribution of loan values \( v \) on a logarithmic scale. The vertical axis is the number of loans per bin for bin size \( \Delta\log_{10}v=0.25 \). The dotted line is the same distribution multiplied by the value corresponding to each bin (in arbitrary units). The date of the first leg of the loans is indicated. \subref{loans rates} The distribution of loan interest rates \( r_h \). The vertical axis is the number of loans per bin for bin size \( \Delta r_h=0.01 \). The date of the first leg of the loans is indicated. The mean and standard deviation are 6.25\% and 0.08\% on Monday (19-02-2007), and 6.26\% and 0.07\% on the other days.} \end{figure} The banks shown in Figure~\ref{individual distributions} are also the largest in terms of the number of transactions, with the exception of BA. The rank order by the number of transactions broadly matches that by value. For D, which is the largest, the number of nonloan transactions reaches 48043 over the week. By the number of transactions, the order of the top twelve banks is D, BP, AV, T, W, AH, AF, U, AP, BI, BA, P. By value, the order is D, BP, AV, BA, T, W, BG, U, A, AH, AB, BM. The situation is similar when considering the overnight loans. By value, AV, D, BP, and T dominate. For these four banks, weekly total loans range from {\au}11.5 to {\au}18 billion and number from 254 to 399. For the other banks the total loan value is less than {\au}3 billion. In view of the discussion above, it is noteworthy that Australia's retail banking system is dominated by four big banks (ANZ, CBA, NAB, and WBC)\footnote{Australia and New Zealand Banking Group, Commonwealth Bank of Australia, National Australia Bank, and Westpac Banking Corporation.} that in February 2007 accounted for 65\% of total resident assets, according to statistics published by the Australian Prudential Regulation Authority (APRA); see http://www.apra.gov.au for details. The resident assets of the big four exceeded {\au}225 billion each, well above the next largest retail bank, St George Bank Limited\footnote{% In December 2008, St George Bank became a subsidiary of Westpac Banking Corporation.} ({\au}93 billion). The distinction between the big four and the rest of the banks in terms of cash and liquid assets at the time was less clear, with Macquarie Bank Limited in third position with {\au}8 billion. According to APRA, cash and liquid assets of the big four and Macquarie Bank Limited accounted for 56\% of the total. \section{Loan and nonloan imbalances} In order to maintain liquidity in their exchange settlement accounts, banks ensure that incoming and outgoing transactions roughly balance. However, they do not control most routine transfers, which are initiated by account holders. Therefore, imbalances arise. On any given day, the nonloan imbalance of bank \( i \) is given by \begin{equation} \Delta v_i = -\sum_j\sum_k v_k(i,j) +\sum_j\sum_k v_k(j,i), \end{equation} where \( \{v_k(i,j)\}_k \) is a list of values of individual nonloan transactions from bank \( i \) to bank \( j \), settled on the day. The nonloan imbalances are subsequently compensated by overnight loans traded on the interbank money market.
The loan imbalances are defined in the same way using transactions corresponding to the first leg of the overnight loans. Note that we do not distinguish between the loans initiated by the banks themselves and those initiated by various institutional and corporate customers. \begin{figure*}[t] \includegraphics{individual_value_histograms_1.eps} \caption{The distribution of nonloan transaction values of the six largest banks for Monday through Thursday (from left to right); the banks are selected by the combined value of incoming and outgoing transactions over the entire week. Black and red histograms correspond to incoming (bank is the destination) and outgoing (bank is the source) transactions; red histograms are filled in to improve visibility. The banks' anonymous labels, the combined daily value of the incoming and outgoing transactions, and the daily imbalance (incoming minus outgoing) are quoted at the top left of each panel (in units of {\au}\( 10^9 \)). The horizontal axis is the logarithm of value in {\au}.} \label{individual distributions} \end{figure*} For instance, if the funds of a corporate customer are depleted, this customer may borrow overnight to replenish the funds. In this case, the overnight loan is initiated by an account holder, who generally has no knowledge of the bank's net position. Nevertheless, the actions of this account holder in acquiring a loan reduce the bank's imbalance, provided that the customer deposits the loan in an account with the same bank. \begin{figure} \centering \includegraphics{imbalances_correlation.eps} \caption{Left: loan imbalance \( \Delta l \) vs nonloan imbalance \( \Delta v \) for individual banks and days of the week (in units of {\au}\( 10^9 \)). Right: the absolute value of loan imbalance \( |\Delta l| \) vs nonloan total value (incoming plus outgoing transactions) for individual banks and days of the week. Thursday data are marked with crosses. } \label{imbalances correlation} \end{figure} \begin{table*} \centering \begin{tabular}{lcccccccc} \hline & \multicolumn{2}{c}{19-02-2007} & \multicolumn{2}{c}{20-02-2007} & \multicolumn{2}{c}{21-02-2007} & \multicolumn{2}{c}{22-02-2007} \\ & nonloans & loans & nonloans & loans & nonloans & loans & nonloans & loans \\ \hline D & \( -0.51 \) & \( +0.12 \) & \( -0.28 \) & \( +0.12 \) & \( -0.76 \) & \( +0.45 \) & \( -1.25 \) & \( +1.44 \) \\ BP & \( +2.08 \) & \( -1.64 \) & \( +0.80 \) & \( -0.59 \) & \( +1.38 \) & \( -0.85 \) & \( +0.16 \) & \( +0.82 \) \\ AV & \( -0.32 \) & \( -0.17 \) & \( +1.39 \) & \( -0.79 \) & \( +0.55 \) & \( -0.65 \) & \( +1.08 \) & \( +0.25 \) \\ BA & \( +0.03 \) & \( -0.19 \) & \( -0.10 \) & \( -0.31 \) & \( -0.32 \) & \( -0.05 \) & \( -1.53 \) & \( -0.64 \) \\ T & \( -0.76 \) & \( +1.10 \) & \( -0.75 \) & \( +0.68 \) & \( -0.62 \) & \( +0.64 \) & \( -0.21 \) & \( +0.27 \) \\ W & \( -0.09 \) & \( +0.07 \) & \( +0.08 \) & \( +0.26 \) & \( -0.36 \) & \( +0.41 \) & \( +0.36 \) & \( -0.37 \) \\ \hline \end{tabular} \caption{Loan and nonloan imbalances for the six largest banks (in units of {\au}\( 10^9 \)).} \label{imbalances and loans} \end{table*} The loan and nonloan imbalances for the six largest banks are given in Table~\ref{imbalances and loans}. The data generally comply with our assumption that the overnight loans compensate the daily imbalances of the nonloan transactions. The most obvious exception is for BA on Thursday (22-02-2007), where a large negative nonloan imbalance is accompanied by a sizable loan imbalance that is also negative. 
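The imbalance computation itself is elementary. A minimal Python sketch, using the same assumed \texttt{(source, destination, value)} tuple format with toy numbers, reads as follows; \texttt{np.corrcoef} stands in for the Pearson coefficients quoted below.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def imbalances(transactions):
    """Per-bank imbalance (incoming minus outgoing), as defined above."""
    delta = defaultdict(float)
    for src, dst, v in transactions:
        delta[src] -= v
        delta[dst] += v
    return delta

# Toy stand-ins for one day's nonloan and loan transactions.
nonloans = [("A", "B", 5.0), ("B", "C", 2.0), ("C", "A", 1.0)]
loans = [("B", "A", 4.0), ("A", "C", 1.0)]

dv, dl = imbalances(nonloans), imbalances(loans)
banks = sorted(set(dv) | set(dl))
x = np.array([dv.get(b, 0.0) for b in banks])
y = np.array([dl.get(b, 0.0) for b in banks])
print(np.corrcoef(x, y)[0, 1])  # Pearson coefficient of the imbalances
\end{verbatim}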
Taking all the banks together, there is a strong anti-correlation between loan and nonloan imbalances on most days. We see this clearly in Figure~\ref{imbalances correlation}. The Pearson correlation coefficients for Monday through Thursday are \( -0.93 \), \( -0.88 \), \( -0.95 \), \( -0.36 \). It is striking to observe that many points fall close to the perfect anti-correlation line. The anti-correlation is weaker on Thursday (crosses in Figure~\ref{imbalances correlation}), mostly due to BA and AV. A correlation also exists between the absolute values of loan imbalances and the nonloan total values (incoming plus outgoing nonloan transactions); the Pearson coefficients are 0.74, 0.75, 0.66, 0.77 for Monday through Thursday. This confirms the intuitive expectation that larger banks tolerate larger loan imbalances. \section{Flow variability} \label{flow variability section} For each individual source and destination, we define the nonloan flow as the totality of all nonloan transactions from the given source to the given destination on any given day. The value of the flow is the sum of the nonloan transaction values and the direction is from the source to the destination. On any given day, the value of the flow from bank \( i \) to bank \( j \) is defined by \begin{equation} v_{\textrm{flow}}(i,j)=\sum_{k} v_k(i,j), \end{equation} where \( \{v_k(i,j)\}_k \) is a list of values of individual nonloan transactions from \( i \) to \( j \) on the day. For example, all nonloan transactions from D to AV on Monday form a nonloan flow from D to AV on that day. The nonloan transactions in the opposite direction, from AV to D, form another flow. A flow has zero value if the number of transactions is zero. Typically, for any two large banks there are two nonloan flows between them. The loan flows are computed in a similar fashion. \subsection{Nonloan flows} There are 55 banks in the network, resulting in \( N_\textrm{flow}=2970 \) possible flows. The actual number of flows is much smaller. The typical number of nonloan flows is \( \sim800 \) on each day (the actual numbers are 804, 791, 784, 797). Even though the number of nonloan flows does not change significantly from day to day, we find that only about 80\% of these flows persist for two days or more. The other 20\% are replaced by different flows, i.e.\ with a different source and/or destination, on the following day. Structurally speaking, the network of nonloan flows changes by 20\% from day to day. However, persistent flows carry more than 96\% of the total value. Even when the flow is present on both days, its value is rarely the same. Given that 80\% of the network is structurally stable from day to day, we assess the variability of the network by considering persistent flows and their values on consecutive days. Figure~\ref{flow value correlations} shows the pairs of persistent flow values for Monday and Tuesday, Tuesday and Wednesday, and Wednesday and Thursday. If the flow values were the same, the points in Figure~\ref{flow value correlations} would lie on the diagonals. We observe that the values of some flows vary significantly, especially when comparing Monday and Tuesday. Moreover, there is a notable systematic increase in the value of the flows from Monday to Tuesday by a factor of several, which is not observed on the other days.
For each pair of days shown in Figure~\ref{flow value correlations}, we compute the Pearson correlation coefficient, which gives 0.53 for Monday and Tuesday, 0.70 for Tuesday and Wednesday, and 0.68 for Wednesday and Thursday. To characterise the difference between the flows on different days more precisely, we compute the Euclidean distance between normalised flows on consecutive days. We reorder the adjacency matrix \( \{v_{\textrm{flow}}(i,j)\}_{ij} \) of the flow network on day \( d \) as an \( N_\textrm{flow} \)-dimensional vector \( \mathbf{v}_d \) representing a list of all flows on day \( d \) (\( d=1,2,\ldots,5 \)). For each pair of consecutive days we compute the Euclidean distance between the normalised vectors \( \mathbf{v}_d/\norm{\mathbf{v}_d} \) and \( \mathbf{v}_{d+1}/\norm{\mathbf{v}_{d+1}} \), which gives 0.62, 0.50, 0.50 for all flows and 0.61, 0.49, 0.49 for persistent flows (the latter are computed by setting non-persistent flows to zero on both days). Since the flow vectors are normalised, these quantities measure random flow discrepancies, while systematic deviations such as that between the flows on Monday and Tuesday are ignored. For two vectors of random values uniformly distributed in the interval \( (0,1) \), the expected Euclidean distance is 0.71 and the standard deviation is 0.02 for the estimated number of persistent nonloan flows of \( 640 \). So the observed variability of the nonloan flows is smaller than what one might expect if the flow values were random. \begin{figure*}[t] \includegraphics{flow_value_correlations.eps} \caption{Nonloan flow value pairs on one day (horizontal axis) and the next (vertical axis). Only flows present on both days are considered. Flows that do not change lie on the diagonal (red dotted line). The solid line is the weighted orthogonal least squares fit to the scatter diagram; the weights have been defined to emphasise points corresponding to large flows. } \label{flow value correlations} \end{figure*} \begin{figure*}[t] \includegraphics{loan_flow_value_correlations.eps} \caption{As for Figure~\ref{flow value correlations} but for loan flows. } \label{loan flow value correlations} \end{figure*} \subsection{Loan flows} Variability of the loan flows is equally strong. The number of loan flows varies from 69 to 83 (actual numbers are 69, 75, 77, 83). Only about 50\% of these flows are common for any two consecutive days (cf.\ about 80\% for nonloan flows). Moreover, persistent flows carry only about 65\% of the total value of the loan flows on any given day (cf.\ more than 96\% for nonloan flows). For persistent loan flows, the Pearson correlation coefficients are 0.63, 0.90, and 0.76 for the consecutive pairs of days starting with Monday and Tuesday. The correlation is generally similar to that of the nonloan flows, with the notable exception of the loan flows on Tuesday and Wednesday, when the sub-network of persistent loan flows appears to be more stable. The Euclidean distances between the normalised loan flows for each pair of consecutive days are 0.85, 0.68, 0.73 for all flows and 0.63, 0.44, and 0.44 for persistent flows. For two vectors of random values uniformly distributed in the interval \( (0,1) \), the expected Euclidean distance is 0.7 and the standard deviation is 0.1 for the estimated number of persistent loan flows of \( 40 \). So the observed variability of the persistent loan flows is much smaller than what one might expect if the flow values were random.
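The variability measures used above are straightforward to compute. A minimal Python sketch with toy data (the helper names and the matrix layout, which mirrors the adjacency matrix \( \{v_{\textrm{flow}}(i,j)\}_{ij} \), are ours) reads:
\begin{verbatim}
import numpy as np

def flow_matrix(transactions, banks):
    """Daily flow matrix V[i, j]: total value sent from bank i to j."""
    index = {b: k for k, b in enumerate(banks)}
    V = np.zeros((len(banks), len(banks)))
    for src, dst, v in transactions:
        V[index[src], index[dst]] += v
    return V

def flow_distance(V1, V2, persistent_only=False):
    """Euclidean distance between unit-normalised daily flow vectors."""
    v1 = V1.ravel().astype(float)
    v2 = V2.ravel().astype(float)
    if persistent_only:
        mask = (v1 > 0) & (v2 > 0)  # keep flows present on both days
        v1, v2 = v1 * mask, v2 * mask
    return np.linalg.norm(v1 / np.linalg.norm(v1)
                          - v2 / np.linalg.norm(v2))

banks = ["A", "B", "C"]
V_mon = flow_matrix([("A", "B", 5.0), ("B", "C", 2.0)], banks)
V_tue = flow_matrix([("A", "B", 4.0), ("C", "A", 3.0)], banks)
print(flow_distance(V_mon, V_tue))        # all flows
print(flow_distance(V_mon, V_tue, True))  # persistent flows only
\end{verbatim}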
\subsection{Relation between nonloan and loan flows} Some loan flows do not have corresponding nonloan flows between the same nodes on the same day. These flows carry about 14\% of loan value on Monday, and about 7\% on Tuesday through Thursday. Nonloan flows that have corresponding loan flows account for 35\% to 48\% of all nonloan flows by value, even though the number of these flows is less than 10\% of the total. To improve the statistics, we aggregate the flows on all four days. Figure~\ref{loan nonloan flow value correlations} shows nonloan and corresponding loan flow values. We fail to find any significant correlation between loan and nonloan flows (Pearson coefficient is 0.3). The correlation improves if we restrict the loan flows to those consisting of three transactions or more; such flows mostly correspond to large persistent flows. In this case the Pearson coefficient increases to 0.6; banks that sustain large nonloan flows can also sustain large loan flows, even though the loan flows on average are an order of magnitude lower than the corresponding nonloan flows. The lack of correlation when all loans are aggregated is due to the presence of many large loans that are not accompanied by large nonloan transactions, and vice versa. \section{Net flows} The net flow between any two banks is defined as the difference of the opposing flows between these banks. The value of the net flow equals the absolute value of the difference between the values of the opposing flows. The direction of the net flow is determined by the sign of the difference. If \( v_{\textrm{flow}}(i,j)>v_{\textrm{flow}}(j,i) \), the net flow value from \( i \) to \( j \) is given by \begin{equation} v_\textrm{net}(i,j)=v_{\textrm{flow}}(i,j)-v_{\textrm{flow}}(j,i). \end{equation} For instance, if the flow from D to AV is larger than the flow in the opposite direction, then the net flow is from D to AV. \subsection{General properties} The distributions of net loan and nonloan flow values are presented in Figure~\ref{net value distribution}. The parameters of the associated Gaussian mixture models are quoted in Table~\ref{GMMnet}. The distribution of net nonloan flow values has the same general features as the distribution of the individual transactions. However, unlike individual transactions, net flow values below {\au}\( 10^4 \) are rare; net flows around {\au}\( 10^8 \) are more prominent. \begin{figure}[t] \centering \includegraphics{loan_nonloan_flow_value_correlations.eps} \caption{Loan flow values versus nonloan flow values combined over four days. Triangles correspond to loan flows with three or more transactions per flow. The solid line is the orthogonal least squares fit to the scatter diagram; the weighting is the same as in Figure~\ref{flow value correlations}. } \label{loan nonloan flow value correlations} \end{figure} There are on average around 470 net nonloan flows each day. Among these, roughly 110 consist of a single transaction and 50 consist of two transactions, mostly between small banks. At the other extreme, net flows between the largest four banks (D, BP, AV, T) typically have more than \( 10^3 \) transactions per day each. Overall, the distribution of the number of transactions per net flow is approximated well by a power law with exponent \( \alpha=-1.0\pm0.2 \): \begin{equation} N_\textrm{net}(n)\propto n^{\alpha}, \end{equation} where \( N_\textrm{net}(n) \) is the number of net nonloan flows that consist of \( n \) transactions (\( n \) ranges from 1 to more than 1000). 
This is consistent with the findings for Fedwire reported in \cite{bech2010} (see right panel of Fig.~14 in \cite{bech2010}). There are roughly 60 net loan flows each day. As many as 40 consist of only one transaction. On the other hand, a single net loan flow between two large banks may comprise more than 30 individual loans. The distribution of the number of transactions per net loan flow is difficult to infer due to poor statistics, but it is consistent with a power law with an exponent of \( -1.4\pm0.2 \), steeper than that of the nonloan distribution. There are no net loan flows below {\au}\( 10^5 \) or above {\au}\( 10^9 \). Comparing net loan and nonloan flows, it is clear that net loan flows cannot compensate every net nonloan flow. Not only are there fewer net loan flows than nonloan flows, but the total value of the former is much less than the total value of the latter. \begin{table} \centering \begin{tabular}{ccccccc} \hline Date & \multicolumn{3}{c}{Component 1} & \multicolumn{3}{c}{Component 2} \\ & \( \mean{u} \) & \( \sigma_u^2 \) & \( P \) & \( \mean{u} \) & \( \sigma_u^2 \) & \( P \) \\ \hline 19-02-2007 & 5.14 & 1.88 & 0.60 & 7.51 & 0.36 & 0.40 \\ 20-02-2007 & 5.70 & 2.17 & 0.51 & 7.82 & 0.50 & 0.49 \\ 21-02-2007 & 5.73 & 1.97 & 0.52 & 7.72 & 0.44 & 0.48 \\ 22-02-2007 & 5.78 & 2.06 & 0.57 & 7.86 & 0.45 & 0.43 \\ \hline \end{tabular} \caption{Mean \( \mean{u} \), variance \( \sigma_u^2 \), and mixing proportion \( P \) of the Gaussian mixture components appearing in Figure~\ref{net value distribution} (\( u=\log_{10}v \)).} \label{GMMnet} \end{table} \begin{figure} \includegraphics{net_value_distribution.eps} \caption{The distribution of values of net nonloan flows (black histogram) on a logarithmic scale with bin size \( \Delta\log_{10}v=0.1 \). The components of the Gaussian mixture model are indicated with the dashed curves; the solid curve is the sum of the two components. Net loan flows are overplotted in red. The vertical axis counts the number of net flows per bin.} \label{net value distribution} \end{figure} Net loan and net nonloan flows are not correlated; the correlation coefficient is 0.3. Restricting net loan flows to those that have three transactions or more does not improve the correlation. If a net loan flow between two banks were triggered to a significant degree by the magnitude and direction of the net nonloan flow between these banks, one would expect a correlation between net loan and nonloan flows. Our examination shows that in this respect loan flows are decoupled from nonloan flows. The connection between them is indirect: nonloan flows create an imbalance in the account of each bank, and this imbalance is subsequently compensated by loan flows that are largely unrelated to the nonloan flows that caused it. \subsection{Degree distribution and assortativity} We define the in-degree of node \( i \) as the number of net flows that terminate at \( i \), i.e.\ the number of net flows with destination \( i \), and the out-degree as the number of net flows that originate from \( i \), i.e.\ the number of net flows with source \( i \). The degree distribution of the nonloan networks is shown in Figure~\ref{degree distribution}. Node BA has the highest in-degree of 37 on Monday, but on the other days its in-degree drops to 15 on average, while its out-degree averages 11.75.
The highest in-degrees are usually found among the four largest banks (D, BP, AV, T); the only exception is Monday, when AF's in-degree of 22 is greater than AV's 21, and BA has the highest in-degree. The highest out-degrees are usually achieved by D, BP, AV, T, W, and AH; the exceptions are Monday, when D's out-degree of 17 is less than AR's and AP's 18, and Thursday, when AV's out-degree of 16 is less than P's 18. \begin{figure}[t!] {\centering \subfloat[]{\includegraphics{degree_distribution.eps}\label{degree distribution}} \subfloat[]{\includegraphics{combined_degree_distribution.eps}\label{combined degree distribution}} } \caption{\subref{degree distribution} Degree distribution of the net nonloan flow networks (for convenience, in-degrees are positive and out-degrees are negative). The total value of the net flows corresponding to the specific degrees is shown with red dots (the log of value in {\au}\( 10^9 \) is indicated on the right vertical axis). \subref{combined degree distribution} Degree distribution of the net nonloan flows when the degree data for all four days are aggregated (in-degrees are circles; out-degrees are triangles). } \end{figure} It is difficult to infer the shape of the degree distribution for individual days due to poor statistics. The two-sample Kolmogorov-Smirnov (KS) test does not distinguish between the distributions on different days at the 5\% significance level. With this in mind, we combine the in- and out-degree data for all four days and graph the resulting distributions in Figure~\ref{combined degree distribution}. We find that a power law distribution does not provide a good fit for either in- or out-degrees. Visually, the distribution is closer to an exponential. However, the exponential distribution is rejected by the Anderson-Darling test. \begin{figure}[t!] {\centering \subfloat[]{\includegraphics{degree_distribution_loans.eps}\label{degree distribution loans}} \subfloat[]{\includegraphics{combined_degree_distribution_loans.eps}\label{combined degree distribution loans}} } \caption{\subref{degree distribution loans} Same as Figure~\ref{degree distribution}, but for the net loan flow networks. \subref{combined degree distribution loans} Same as Figure~\ref{combined degree distribution}, but for the net loan flows. } \end{figure} The degree distribution conceals the fact that flows originating or terminating in nodes of various degrees have different values and therefore contribute differently to the total value of the net flows. Nodes with lower degrees are numerous, but the flows they sustain are typically smaller than those carried by a few high-degree nodes. In particular, for the nonloan flows, nodes with in-degree \( d\leq10 \) are numerous, ranging from 35 to 37, but their incoming net flows carry about 20\% of the value on average. On the other hand, nodes with \( d\geq17 \) are rare, but their flows carry 50\% of the value. The same effect is observed for the out-degrees. The degree distribution of the network of net loan flows is shown in Figure~\ref{degree distribution loans} (we ignore the nodes that have zero in- and out-degrees over all four days). Similarly to the nonloan flows, the KS test does not distinguish between the distributions on different days at the 5\% significance level. The combined distribution is shown in Figure~\ref{combined degree distribution loans}.
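The degree bookkeeping and the KS comparison are straightforward to script. A sketch in Python (with hypothetical toy edge lists; the two-sample KS test is available in scipy):
\begin{verbatim}
from scipy.stats import ks_2samp

# Hypothetical net-flow edge lists, one per day; an edge is (source, dest).
days = {
    'Mon': [('D', 'BP'), ('T', 'AV'), ('BG', 'D'), ('BG', 'BP')],
    'Tue': [('D', 'BP'), ('AV', 'D'), ('T', 'BP'), ('BG', 'D')],
}

def degrees(edges):
    in_deg, out_deg = {}, {}
    for s, t in edges:
        out_deg[s] = out_deg.get(s, 0) + 1
        in_deg[t] = in_deg.get(t, 0) + 1
    return in_deg, out_deg

in_mon, _ = degrees(days['Mon'])
in_tue, _ = degrees(days['Tue'])

# Two-sample KS test on the two days' in-degree samples; the days are
# indistinguishable at the 5% level whenever p > 0.05.
stat, p = ks_2samp(list(in_mon.values()), list(in_tue.values()))
print(stat, p)
\end{verbatim}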
To probe the assortativity of the net flow networks, we compute the in-assortativity defined in \cite{piraveenan2010assortative} as the Pearson correlation coefficient between the in-degrees of sources and destinations of the net flows (out-assortativity is computed similarly using the out-degrees). The net nonloan flow network is disassortative, with in-assortativity of \( -0.39 \), \( -0.37 \), \( -0.38 \), \( -0.37 \) and out-assortativity of \( -0.35 \), \( -0.38 \), \( -0.39 \), \( -0.37 \) on Monday, Tuesday, Wednesday, and Thursday, respectively. The net loan flow network is less disassortative; the in-assortativity is \( -0.16 \), \( -0.26 \), \( -0.18 \), \( -0.19 \) and the out-assortativity is \( -0.03 \), \( -0.10 \), \( 0.02 \), \( -0.20 \) for the same sequence of days. A similar tendency for out-assortativity to be higher (less disassortative) than in-assortativity has been noted in biological networks \cite{piraveenan2010assortative}. \subsection{Topology of the net flows} Given the source and destination of each net flow, we can construct a network representation of the net flows. An example of the network of net nonloan flows is shown in Figure~\ref{nonloan network 20}. The size of the nodes and the thickness of the edges are proportional to the net imbalances and net flow values respectively (on a logarithmic scale). We use the Fruchterman-Reingold algorithm to position the nodes \cite{fruchterman1991graph}; the most connected nodes are placed in the centre, and the least connected nodes are moved to the periphery. The core of the network is dominated by the four banks with the largest total value and the largest number of transactions: D, BP, AV, and T. The other big banks, such as AF, AH, and W, also sit near the core. It is interesting to note the presence of several poorly connected nodes (Q, V, BF, and especially X) that participate in large incoming and outgoing flows which nevertheless produce only negligible imbalances in the banks themselves. The sub-network consisting of D, BP, AV, BA, T, W, U, A, AH, AF, AP, and P is fully connected on all five days, i.e.\ every node is connected to every other node. The sub-network of D, AV, and BP is fully connected even if we restrict the net flows to values above {\au}\( 10^8 \). \begin{figure*}[t!] \includegraphics{rits-20-02-2007.csv-noloops.net.eps} \caption{Network of net nonloan flows on Tuesday, 20-02-2007. White (grey) nodes represent negative (positive) imbalances. The bank labels are indicated for each node. The size of the nodes and the thickness of the edges are proportional to the logarithm of the value of the imbalances and the net flows, respectively. } \label{nonloan network 20} \end{figure*} In Figure~\ref{nonloan network 20}, the flows between the largest nodes are difficult to discern visually, because the nodes are placed too close to each other in the image. We therefore employ the following procedure to simplify the network. We consider the fully connected sub-network of twelve nodes, plus node BG, and combine all other nodes into a new node called ``others'' in such a way that the net flows are preserved (BG is included because it usually participates in large flows and is connected to almost every node in the complete sub-network). The result of this procedure applied to the daily nonloan networks is presented in Figures~\ref{restricted nonloan network 19}--\ref{restricted nonloan network 22}.
For these plots, we employ the weighted Fruchterman-Reingold algorithm, which positions the nodes with large flows between them close to each other. The imbalances shown in Figure~\ref{restricted nonloan network 20} are the same as those of the full network in Figure~\ref{nonloan network 20}. The daily networks of net loan flows for the same nodes are shown in Figures~\ref{loan network 19}--\ref{loan network 22}. \begin{figure*} {\centering \subfloat[19-02-2007]{\includegraphics{rits-19-02-2007.csv-noloops.net.restricted.eps} \label{restricted nonloan network 19}} \makebox[0pt]{\raisebox{-2cm}[0cm][0cm]{\includegraphics{test_flow_graph_arrows.eps}}} \subfloat[20-02-2007]{\includegraphics{rits-20-02-2007.csv-noloops.net.restricted.eps} \label{restricted nonloan network 20}}\\ \subfloat[21-02-2007]{\includegraphics{rits-21-02-2007.csv-noloops.net.restricted.eps} \label{restricted nonloan network 21}} \subfloat[22-02-2007]{\includegraphics{rits-22-02-2007.csv-noloops.net.restricted.eps} \label{restricted nonloan network 22}} } \caption{Networks of daily net nonloan flows for D, AV, BP, T, W, BA, AH, AF, U, AP, P, A, BG. All the other nodes and the flows to and from them are combined in a single new node called ``others''. The size of the nodes and the thickness of the edges are proportional to the logarithm of value of the imbalances and the net flows respectively. The value of the flows and the imbalances can be gauged by referencing a network shown in the middle, where the values of the flows are indicated in units of {\au}1 billion. } \end{figure*} \begin{figure*} { \subfloat[19-02-2007]{\includegraphics{overnight_loans-19-02-2007_to_20-02-2007.net.eps} \label{loan network 19}} \makebox[0pt]{\raisebox{-2cm}[0cm][0cm]{\includegraphics{test_flow_graph_arrows.eps}}} \subfloat[20-02-2007]{\includegraphics{overnight_loans-20-02-2007_to_21-02-2007.net.eps} \label{loan network 20}}\\ \subfloat[21-02-2007]{\includegraphics{overnight_loans-21-02-2007_to_22-02-2007.net.eps} \label{loan network 21}} \subfloat[22-02-2007]{\includegraphics{overnight_loans-22-02-2007_to_23-02-2007.net.eps} \label{loan network 22}} } \caption{Networks of daily net loan flows. The same nodes as in Figures~\ref{restricted nonloan network 19}--\ref{restricted nonloan network 22} are used. The scale of the loan flows, the imbalances, and the positions of the nodes are the same as those used for the nonloan flows in Figures~\ref{restricted nonloan network 19}--\ref{restricted nonloan network 22} to simplify visual comparison. } \end{figure*} We observe that the largest flows on Monday (19-02-2007) were significantly lower than the flows on the subsequent days. The largest nodes (D, BP, AV, T, W) are always placed close to the center of the network, because they participate in the largest flows. The topology of the flows is complex and difficult to disentangle, even if one concentrates on the largest flows (above {\au}\( 5\times10^8 \)). For instance, on Monday, probably the simplest day, the flow of nonloans is generally from BG to ``others'' to D to BP. There are also sizable flows from T to AV and from AV to ``others'' and BP. However, lower value flows (below {\au}\( 5\times10^8 \)) cannot be neglected completely because they are numerous and may contribute significantly to the imbalance of a given node. Nodes D, T, BP, AV, and W form a complete sub-network of net loan flows on Monday, Tuesday, and Wednesday. This sub-network is almost complete on Thursday too, except for the missing link between BP and W. 
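Drawings of this kind can be reproduced with standard tools; \texttt{networkx}, for instance, implements the weighted Fruchterman-Reingold layout as \texttt{spring\_layout}. A sketch with a hypothetical edge list:
\begin{verbatim}
import math
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical net nonloan flows: (source, dest, value).
edges = [('BG', 'D', 8e8), ('D', 'BP', 1e9), ('T', 'AV', 4.8e8),
         ('AV', 'BP', 6e8), ('AV', 'others', 5e8)]

G = nx.DiGraph()
for s, t, v in edges:
    G.add_edge(s, t, weight=v)

# spring_layout is the Fruchterman-Reingold algorithm; with weights,
# nodes joined by large flows are pulled close together.
pos = nx.spring_layout(G, weight='weight', seed=1)
widths = [math.log10(G[u][v]['weight']) - 7 for u, v in G.edges]
nx.draw(G, pos, with_labels=True, width=widths, node_size=600)
plt.show()
\end{verbatim}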
The appearance of the net loan network is different from that of the nonloan network, since the same nodes participate in only a few loan flows. Therefore, the position of a node in the network image is strongly influenced by the number of connections of that node. Some of the poorly connected nodes are placed at the periphery despite the fact that they possess large flows. The four largest nodes (D, T, BP, AV) are always positioned at the center of the network. \subsection{Network variability} The net nonloan flow network is extremely volatile in terms of flow value and direction. For example, a {\au}\( 10^9 \) flow from D to BP on Monday transforms into a {\au}\( 3.2\times10^9 \) flow in the same direction on Tuesday, only to be replaced by a {\au}\( 6.3\times10^8 \) flow in the opposite direction on Wednesday; the reversed flow then grows to {\au}\( 2.5\times10^9 \) on Thursday. Nodes T and BP display a similar pattern of reversing flows between Tuesday and Wednesday. On the other hand, the net flow between T and AV maintains the same direction, but its value fluctuates strongly. In particular, a moderate {\au}\( 4.8\times10^8 \) flow on Monday rises to {\au}\( 1.9\times10^9 \) on Tuesday, then falls sharply to {\au}\( 2\times10^8 \) on Wednesday and rises again to {\au}\( 2.2\times10^9 \) on Thursday. Considering any three nodes, we observe that circular and transitive flows are present on most days, the latter being more common. The most obvious examples are a circular flow between D, T, and BP on Thursday and a transitive flow involving BG, T, and AV on the same day. The circular flows are unstable in the sense that they do not persist over two days or more. The net loan flow network exhibits similar characteristics. Few net loan flows persist over the four days. For example, the flow from AV to T has the same direction and is similar in value on all four days. Circular loan flows are also present, as the flow between AV, T, and BP on Thursday demonstrates. \section{Conclusions} In this paper, we study the properties of the transactional flows between Australian banks participating in RITS. The value distribution of transactions is approximated well by a mixture of two log-normal components, possibly reflecting the different nature of transactions originating from SWIFT and Austraclear. For the largest banks, the value distributions of incoming and outgoing transactions are similar. On the other hand, the central bank displays a high asymmetry between the incoming and outgoing transactions, with the former clearly dominating the latter for transactions below {\au}\( 10^6 \). Using an algorithm that matches transactions with their reversing counterparts, we successfully separate transactions into loans and nonloans. For overnight loans, we estimate the identification rate at 98\%. The mean derived interest rate is within 0.01\% of the central bank's target rate of 6.25\%, while the standard deviation is about 0.07\%. We find a strong anti-correlation between loan and nonloan imbalances (the Pearson coefficient magnitude is about 0.9 on most days). A likely explanation is that nonloan flows create surpluses in some banks. These banks lend the surplus to banks in deficit, creating loan flows that counteract the imbalances due to the nonloan flows. Hence, loan and nonloan imbalances of individual banks are roughly equal in value and opposite in sign on any given day. The flow networks are structurally variable, with 20\% of nonloan flows and 50\% of loan flows replaced every day.
Values of persistent flows, which maintain the same source and destination over at least two consecutive days, vary significantly from day to day; individual flow values can change by several orders of magnitude from one day to the next. In addition, persistent flows increase in value several-fold between Monday and Tuesday. Overall, there is a reasonable correlation between the flow values on consecutive days (the Pearson coefficient is 0.65 for nonloans and 0.76 for loans on average). We also find that larger banks tend to sustain larger loan flows, in accord with intuitive expectations. However, there is no correlation between loan and nonloan flows. We examine the topology of the net loan and nonloan flow networks visually. The centre of both networks is dominated by the big four banks. Twelve banks form a complete nonloan sub-network, in which each bank is connected to every other bank in the sub-network. The three largest banks form a complete sub-network even if the net flows are restricted to values above {\au}\( 10^8 \). Our examination reveals that the network topology of net flows is complicated, with even the largest flows varying greatly in value and direction on different days. Our findings suggest a number of avenues for future research on interbank networks. Firstly, the relationships we uncovered can be used to constrain analytical models and numerical simulations of interbank flows in financial networks. In particular, our explanation of the link between the loan and nonloan imbalances needs to be tested in numerical simulations. Secondly, it is necessary to analyse interbank markets in other countries to establish which elements of our results are signatures of general dynamics and which aspects are specific to the epoch and location of this study. Even when high-quality data are available, most previous studies concentrate on analysing static topological properties of the networks or their slow change over time; the internal dynamics of monetary flows in interbank networks has been largely ignored. Importantly, one must ask whether the strong anti-correlation between loan and nonloan imbalances is characteristic of RTGS systems whose institutional setup resembles the Australian one or whether it is a general feature. For instance, in Italy a reserve requirement of 2\% must be observed on the 23rd of each month, which may encourage strong deviations between loan and nonloan imbalances on the other days. \section*{Acknowledgement} We thank the Reserve Bank of Australia for supplying the data. AS acknowledges generous financial support from the Portland House Foundation. \section*{References} \bibliographystyle{elsarticle-num}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Phase retrieval (PR) is a classical nonlinear inverse problem in computational imaging. The problem concerns recovering a complex signal $\mathbf X \in \bb C^{n \times n}$ from the oversampled Fourier magnitudes $\mathbf Y = \abs{\mathcal F\pqty{\mathbf X}}^2 \in \reals^{m \times m}$, where $m \ge 2n-1$ is necessary for recoverability. The problem has three intrinsic symmetries: any of 1) 2D translation of the nonzero content of $\mathbf X$, 2) 2D conjugate flipping of $\mathbf X$, and 3) global phase offset to $\mathbf X$ ($\mathbf X e^{i \theta}$ for any $\theta \in [-\pi, \pi)$) and their compositions will leave the observation $\mathbf Y$ unchanged. When $\mathbf X$ is real-valued and positive, numerous classical methods such as hybrid input-output (HIO, \cite{fienup1982phase}) can take advantage of the realness and/or positivity constraints to recover $\mathbf X$ in practice. However, when $\mathbf X$ is complex-valued---which pertains to most real applications, these constraints are not applicable. In such scenarios, knowing the precise support of $\mathbf X$ proves crucial for empirical successes~\cite{marchesini2003x}. Practical iterative algorithms for PR typically start with a loose support estimated from the autocorrelation (as $\mathcal F^{-1}(\abs{\mathcal F(\mathbf X)}^2)$ leads to the autocorrelation $\mathbf X \star \mathbf X$, from which one can obtain a crude estimate of the support of $\mathbf X$---less reliable when $\mathbf Y$ is affected by noise), and then gradually refine the support using thresholding (e.g., the popular shrinkwrap heuristic~\cite{marchesini2003x}) as the iteration proceeds. But the final recovery quality is often sensitive to the parameter setting in support refinement. Recently, two families of deep learning (DL) methods have been proposed for PR. They either directly learn the inverse mapping parametrized by deep neural networks (DNNs) based on an extensive training set, or refine the results from classical methods by integrating DL modules with classical iterative algorithms. But, as discussed in our prior work~\cite{manekarnips,TayalEtAl2020Unlocking}, most of these methods are only evaluated on real-valued natural image datasets that are distant from PR applications and also do not reflect the essential difficulty of PR. Here, we focus on complex-valued PR in real applications. We consider the single-instance setting for PR. The most natural formulation used is probably the least squares \begin{align} \min_{\mathbf X \in \bb C^{n \times n}} \; \|\mathbf Y - \abs{\mathcal F\pqty{\mathbf X}}^2\|_{F}^2. \end{align} Empirically, this almost always fails to recover anything meaningful on even simple test cases, probably due to the prevalence of bad local minimizers. Here, we propose a simple modification to make it work: parameterizing $\mathbf X$ using a deep generative prior, i.e., replacing $\mathbf X$ by $G_\theta\pqty{\mathbf z}$, where $G_\theta$ is a trainable DNN parameterized by $\theta$, and $\mathbf z$ is a fixed seed vector. This leads to our main formulation for PR: \begin{align} \min_{\mathbf \theta} \; \|\mathbf Y - \abs{\mathcal F \circ G_\theta\pqty{\mathbf z}}^2\|_{F}^2. \end{align} Recently, deep generative priors of the above form $G_\theta\pqty{\mathbf z}$ have been used to solve inverse problems in computational imaging and beyond~\cite{ongie2020deep}. 
There are two major variants: 1) $G_\theta$ as a generator is pretrained on a large training set using deep generative models, e.g., GANs, and $\mathbf z$ is trainable. We call this variant \emph{multiple-instance deep generative prior} (MIDGP). The training set needs to come from the same domain as the object to be recovered; 2) $G_\theta$ is trainable and $\mathbf z$ is either fixed or trainable. We call this variant \emph{single-instance deep generative prior} (SIDGP), also referred to as the untrained DNN prior. Deep image prior~\cite{deepimageprior} and deep decoder~\cite{heckel2018deep} are two notable models of SIDGP, and they only differ in the choice of architecture for $G$. Here, we take the SIDGP approach, as it does not need a training set, which may be expensive to collect for certain applications of PR. SIDGP has been proposed to solve several variants of PR, including Gaussian PR~\cite{jagatap2019algorithmic}, Fourier holography~\cite{lawrence2020phase}, and phase microscopy~\cite{bostan2020deep}, all of which simplify PR to certain degrees. But no such claim about Fourier PR itself had been made before the current work. \section{Experimental Results} We test our method on several real-valued toy images and simulated complex-valued crystal data. Testing on the crystal data is inspired by the Bragg CDI application for the study of materials~\cite{robinson2001reconstruction}. The data is generated by first creating 2D convex and concave shapes based on random scattering points in a $110\times 110$ grid on a $128 \times 128$ background. The complex magnitudes are uniformly $1$, and the complex phases are determined by projecting the simulated 2D displacement fields (due to crystal defects) to the corresponding momentum transfer vectors for each reflection. As shown in \cref{fig:result}, \begin{figure}[!htbp] \centering \begin{minipage}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{result_images/two_column_comparison_natural.png} \end{minipage} \begin{minipage}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{result_images/Deep_Image.png} \end{minipage} \caption{Visualization of recovery results on (left) real-valued random toy images and (right) complex-valued simulated crystal data. For each of them, the first row is the groundtruth, the second row is the result recovered by HIO, and the third row by our method. } \label{fig:result} \end{figure} we do not assume knowledge of the precise support, and hence the objects have translation freedom on all test images. In all cases, our method produces good visual recovery results, while HIO, a representative classical algorithm for PR, leads to much worse recovery on the real-valued toy images, and completely fails on the complex-valued crystal data. We have used the plain version of HIO. Although incorporating support refinement strategies such as shrinkwrap will likely improve the results of HIO, it is remarkable that our method based on SIDGP does not need any special handling of the support and works reasonably well. \section{Problem Setup} We formulate an optimization problem of the form: \begin{align} \label{eq:passive_main} \min_{\mathbf \theta} \; \ell\pqty{\mathbf Y, \abs{\mathcal F \circ G_{\mathbf \theta}\pqty{\mathbf z}}^2}, \end{align} where the latent code $\mathbf z$ is a fixed low-dimensional generic vector. In this paper, the weights $\theta$ are not \emph{pre-trained}; rather, the task is to estimate the signal $\hat{\mathbf X} = G_{\mathbf \theta}\pqty{\mathbf z}$ and the corresponding weights $\theta$ for the fixed seed $\mathbf z$.
Due to the symmetries, the network may output any symmetric copy of ${\mathbf X}$, i.e.\ any signal with $\abs{\mathcal F \circ G_{\mathbf \theta}\pqty{\mathbf z}}^2 \approx \mathbf Y$. The network $G_\theta$ uses an architecture similar to the one in~\cite{heckel2018deep}. \section{Experimental Results} \begin{figure}[tp] \centering \begin{minipage}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{result_images/Deep_Image.png} \label{fig:result} \end{minipage} \hfill \begin{minipage}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{result_images/two_column_comparison_natural.png} \label{fig:datasetbias} \end{minipage} \caption{Visualization of recovery results. Left: the first row contains complex ground-truth images. Right: the first row contains natural images. The second and third rows are reconstructions produced by HIO and structural models, respectively.} \end{figure} In this section, we empirically show that structural models are able to accurately recover the object up to its intrinsic symmetries. We test our method on natural images with translation (real-valued) and a simulated crystal dataset (complex-valued). The simulated crystal data was generated by creating 2D convex and concave complex shapes based on random scattering points in a 110-by-110 grid within a 128-by-128 background. The phase information was determined by projecting the simulated 2D crystal defect displacement fields to the corresponding momentum transfer vectors for each reflection. We compare our method with the popular iterative method HIO; the results are shown in Figure \ref{fig:result}. Visually, the HIO reconstructions have blurred boundaries because of the translation and symmetry ambiguities. We note that structural models give good recovery for both complex and real images and are able to resolve the symmetry without any difficulty. While this work demonstrates the success of our approach on simulated observations, we expect to confirm its findings on experimental data in future work.
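For concreteness, a minimal sketch of the optimization loop in \eqref{eq:passive_main} is given below (PyTorch; the fully connected generator and all sizes are illustrative stand-ins for the deep-decoder-style architecture, and \texttt{Y} is a random placeholder for real measurements):
\begin{verbatim}
import torch

n, m = 64, 128                 # object size and oversampled size

class G(torch.nn.Module):
    """Toy generator producing a complex n-by-n image."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 512), torch.nn.ReLU(),
            torch.nn.Linear(512, 2 * n * n))
    def forward(self, z):
        out = self.net(z).view(2, n, n)
        return torch.complex(out[0], out[1])

torch.manual_seed(0)
z = torch.randn(128)           # fixed generic seed vector
Y = torch.rand(m, m)           # placeholder for |F(X)|^2 measurements
g = G()
opt = torch.optim.Adam(g.parameters(), lr=1e-3)

for step in range(2000):
    X = g(z)
    # fft2 with s=(m, m) zero-pads X, modelling Fourier oversampling.
    pred = torch.fft.fft2(X, s=(m, m)).abs() ** 2
    loss = ((Y - pred) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

X_hat = g(z).detach()          # estimate, up to the intrinsic symmetries
\end{verbatim}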
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Hidden symmetries of dynamics play a crucial role in the study of diverse physical systems that can be relativistic or not, with or without gravity, classical or quantum \cite{Cariglia:2014ysa}. Particularly useful for black hole physics is the hidden symmetry of the {\em principal Killing--Yano tensor} \cite{Frolov:2017kze}. Such a tensor not only generates other explicit and hidden symmetries, it also determines the algebraic type of the solution, and allows for a separation of variables when studying physical field equations in black hole spacetimes. The Golden Era of field equation separability in Kerr geometry spanned the late '60s and '70s. During this period, separable solutions were found for particles following geodesic motion, test scalar and spinor fields, as well as massless vector (spin 1) and tensor (spin 2) fields \cite{Carter:1968cmp, Carter:1968rr, Unruh:1973, Chandrasekhar:1976, Teukolsky:1972, Teukolsky:1973}. Much later, after the generalization of Kerr geometry to higher dimensions by Myers and Perry \cite{MyersPerry:1986}, a flurry of papers extended the results for geodesic motion \cite{Page:2006ka}, and test scalar \cite{Frolov:2007cb} and spinor \cite{OotaYasui:2008} fields to arbitrary dimensions---all thanks to the principal tensor. However, the appropriate separation scheme for vector and tensor fields in dimensions $D>4$ remained elusive. A breakthrough on this front came in 2017 when Lunin demonstrated the separability of Maxwell's equations in the Myers--Perry-(A)dS geometry \cite{Gibbons:2004uw}. Lunin's approach was novel in that it provided a separable ansatz for the vector potential rather than the field strength, a method that had previously seen success in $D=4$ dimensions \cite{Teukolsky:1972, Teukolsky:1973}. In 2018, Frolov, Krtous, and Kubiznak showed that Lunin's ansatz can be written in a covariant form, in terms of the principal tensor \cite{Frolov:2018pys, Krtous:2018bvk}, allowing them to extend Lunin's result to general (possibly off-shell) Kerr--NUT--AdS spacetimes \cite{Chen:2006xh}. The separation of massive vector (\emph{Proca}) field perturbations in these spacetimes (an achievement previously absent even for the four-dimensional Kerr geometry) followed shortly thereafter \cite{Frolov:2018ezx}, see also \cite{Dolan:2018dqv, Frolov:2018eza,Dolan:2019hcw}. The separability of the vector field hinges on the existence of the principal tensor. Such a tensor: i) determines the canonical (preferred) coordinates in which the separation occurs, ii) generates the towers of explicit and hidden symmetries linked to the `symmetry operators' of the separated vector equation, and iii) explicitly enters the separation ansatz for the vector potential $\tens{P}$. Namely, this ansatz can be written in the following covariant form: \be\label{LFKK} P^a=B^{ab}\nabla_b Z\,, \ee where the tensor $\tens{B}$ is determined by the principal tensor $\tens{h}$ and the metric $\tens{g}$ according to \be\label{Btens} B^{ab}(g_{bc}+i\mu h_{bc})=\delta^a_c\,, \ee with $\mu$ a `separation constant'. The solution for the scalar functions $Z$ is then sought in a standard multiplicative separable form. In what follows, we shall refer to the ansatz \eqref{LFKK} with $\tens{B}$ given by \eqref{Btens} as the {\em Lunin--Frolov--Krtous--Kubiznak (LFKK) ansatz}. Remarkably, the LFKK ansatz works equally well for both massless and massive vector perturbations. It is also valid in \emph{any} dimension, even or odd, as discussed in Appendix \ref{appendix}.
However striking the above results are, they have one serious drawback: the existence of the principal tensor is largely limited to vacuum black hole spacetimes. In fact, the conditions for its existence are so strong that they uniquely determine a (rather restricted though physically interesting) class of admissible spacetimes---the off-shell Kerr--NUT--AdS metrics \cite{Krtous:2008tb}. For this reason, various generalizations of the notion of hidden symmetry that would allow for more general spacetimes while preserving its integrability features have been sought. One such generalization, that of the Killing--Yano tensor with torsion \cite{yano1953curvature, Rietdijk:1995ye, Kubiznak:2009qi}, turns out to be quite fruitful. Such a symmetry exists in a number of supergravity and string theory spacetimes, where the torsion can be naturally identified with a defining 3-form of the theory. Although less restrictive, the principal tensor with torsion still implies the essential integrability features of its torsion-less cousin and underlies separability of the Hamilton--Jacobi, Klein--Gordon, and torsion-modified Dirac equations on the black hole background. With this in mind, a natural question arises: is the vector field separability described above limited to vacuum spacetimes? It is the purpose of the present paper to show that it is not so. To this end, we concentrate on a prototype non-vacuum black hole spacetime known to admit the principal tensor with torsion \cite{Kubiznak:2009qi}, the Chong--Cveti\v{c}--L\"u--Pope black hole \cite{Chong:2005hr} of $D=5$ minimal gauged supergravity. We show that the LFKK ansatz can be used for separability of the (properly torsion-modified) vector equation in this background. Our result shows that, despite the presence of torsion considerably weakening the structure, the principal tensor with torsion remains a powerful tool for separability. At the same time it means that the applicability of the LFKK ansatz goes far beyond the original setup in which it was first considered. Our paper is organized as follows. In Sec. \ref{KYTT}, we review the torsion generalization of Killing--Yano tensors and introduce the principal tensor with torsion. Sec.~\ref{BH} is devoted to the Chong--Cveti\v{c}--L\"u--Pope black hole of $D=5$ minimal gauged supergravity and its basic characteristics. The novel contribution of the paper comes in Sec.~\ref{Sep}, where the torsion-modified massive vector (Troca) equation is introduced and shown to separate using the LFKK separability ansatz. We conclude in Sec.~\ref{summary}. Appendix~\ref{appendix} gathers the results on separability of the vector equations in the off-shell Kerr--NUT--AdS spacetimes---the results in odd dimensions are presented for the first time and allow us to compare the five-dimensional vacuum result to the supergravity case studied in the main text. \section{Killing--Yano tensors with torsion}\label{KYTT} Let us start by briefly recapitulating a `torsion generalization' of Killing--Yano tensors which has applications to a variety of supergravity black hole solutions. In what follows we assume that the torsion is completely antisymmetric and described by a 3-form $\tens{T}$, with the standard torsion tensor given by $T^{d}_{ab}=T_{abc}g^{cd}$, where $\tens{g}$ is the metric.
The torsion connection $\tens{\nabla}^T$ acting on a vector field $\ts{X}$ is defined as \be\label{j1-m} \nabla^T_{\!a} X^b = \nabla_{\!a} X^b + \frac{1}{2}\,T_{ac}^b X^c\,, \ee where $\tens{\nabla}$ is the Levi-Civita (torsion-free) connection. The connection $\tens{\nabla}^T$ satisfies the metricity condition, $\tens{\nabla}^T \tens{g}=0$, and has the same geodesics as $\tens{\nabla}$. It induces a connection acting on forms. Namely, let $\tens{\omega}$ be a $p$-form; then \be\label{j5-m} \tens{\nabla}^T_{\!\ts{X}} \tens{\omega}=\tens{\nabla}_{\!\ts{X}} \tens{\omega} -\frac{1}{2} \bigl(\ts{X}\cdot\tens{T}\bigr)\underset{1}{\wedge} \tens{\omega}\,, \ee where we have used the notation of the contracted wedge product introduced in \cite{Houri:2010qc}, defined for a $p$-form $\tens{\alpha}$ and a $q$-form $\tens{\beta}$ as \begin{eqnarray}\label{ContrProd} (\alpha\underset{m}{\wedge}\beta)_{a_1\dots a_{p{-}m}b_1\dots b_{q{-}m}}&&\nonumber\\ =\frac{(p+q-2m)!}{(p\!-\!m)!(q\!-\!m)!}&&\! \alpha_{c_1\dots c_m[a_1\dots a_{p{-}m}}\beta^{c_1\dots c_m}{}_{b_1\dots b_{q{-}m}]}.\quad\ \end{eqnarray} Equipped with this, one can define the following two operations: \begin{align} \tens{d}^T \tens{\omega}&\equiv \tens{\nabla}^T \wedge \tens{\omega}=\tens{d\omega}-\tens{T}\underset{1}{\wedge}\tens{\omega}\,, \label{j6-m}\\ \tens{\delta}^T \tens{\omega}&\equiv-\tens{\nabla}^T\cdot\tens{\omega}=\tens{\delta}\tens{\omega}-\frac{1}{2}\,\tens{T}\underset{2}{\wedge} \tens{\omega}\,.\label{j7-m} \end{align} Note that, in particular, we have $\tens{\delta}^T \tens{T}=\tens{\delta} \tens{T}$. A {\em conformal Killing--Yano tensor with torsion} $\tens{k}$ is a $p$-form which for any vector field $\tens{X}$ satisfies the following equation \cite{yano1953curvature, Rietdijk:1995ye, Kubiznak:2009qi}: \be\label{generalizedCKY} \nabla^T_X \tens{k}-\frac{1}{p+1}\tens{X}\cdot \tens{d}^T \tens{k}+\frac{1}{D-p+1} \tens{X} \wedge \tens{\delta}^T \tens{k}=0\,, \ee where $D$ stands for the total number of spacetime dimensions. In analogy with the Killing--Yano tensors defined with respect to the Levi-Civita connection, a conformal Killing--Yano tensor with torsion $\tens{f}$ obeying $\tens{\delta}^T \tens{f}=0$ is called a {\em Killing--Yano tensor with torsion}, and a conformal Killing--Yano tensor with torsion $\tens{h}$ obeying $\tens{d}^T \tens{h}=0$ is a {\em closed conformal Killing--Yano tensor with torsion}. Despite the presence of torsion, the conformal Killing--Yano tensors with torsion possess many remarkable properties. The following three are especially important for `generating other symmetries' and for separability of test field equations (see \cite{Kubiznak:2009qi,Houri:2010fr} for the proof and other properties): \begin{enumerate} \item The Hodge star $\tens{*}$ maps conformal Killing--Yano with torsion $p$-forms to conformal Killing--Yano with torsion $(D-p)$-forms. In particular, the Hodge star of a closed conformal Killing--Yano with torsion $p$-form is a Killing--Yano with torsion $(D-p)$-form and vice versa.
\item Closed conformal Killing--Yano tensors with torsion form a (graded) algebra with respect to the wedge product. Namely, let $\tens{h}_1$ and $\tens{h}_2$ be a closed conformal Killing--Yano tensor with torsion $p$-form and $q$-form, respectively; then $\tens{h}_3=\tens{h}_1 \wedge \tens{h}_2$ is a closed conformal Killing--Yano with torsion $(p+q)$-form. \item Let $\tens{h}$ and $\tens{k}$ be two (conformal) Killing--Yano tensors with torsion of rank $p$. Then \be K_{ab}=h_{(a |c_1\ldots c_{p-1}|}k_{b)}{}^{c_1\ldots c_{p-1}} \ee is a (conformal) Killing tensor of rank 2. \end{enumerate} In what follows, we shall concentrate on a {\em principal tensor} with torsion, $\tens{h}$, which is a non-degenerate closed conformal Killing--Yano with torsion 2-form. It obeys \be\label{principal} \nabla^T_X \tens{h}=\tens{X} \wedge \tens{\xi}\,,\quad \tens{\xi}=-\frac{1}{D-p+1} {\delta}^T \tens{h}\,. \ee The condition of non-degeneracy means that $\tens{h}$ has the maximal possible (matrix) rank and possesses the maximal number of functionally independent eigenvalues. Starting from one such tensor, one can generate (via the three properties above) towers of Killing tensors and (closed conformal) Killing--Yano tensors with torsion. In turn, such symmetries can typically be associated with symmetry operators for a given field operator. For example, Killing tensors give rise to operators commuting with the scalar wave operator, and (closed conformal) Killing--Yano tensors with torsion to operators commuting with the torsion Dirac operator. When a full set of mutually commuting operators can be found, one can typically separate the corresponding field equation. It is in this respect that the principal tensor underlies the separability of various field equations. The existence of the principal tensor imposes severe restrictions on the spacetime. In the torsion-less case, such restrictions uniquely determine the Kerr--NUT--AdS class of black hole spacetimes \cite{Krtous:2008tb} (see also \cite{Frolov:2017whj}). Although no full classification is available for spacetimes with torsion, nor is it clear if such spacetimes have to admit any isometries \cite{Houri:2012eq}, several explicit examples are known of supergravity solutions with a principal tensor in which the torsion is naturally identified with a 3-form field strength of the theory. Among them, perhaps the most `beautiful' are the $D$-dimensional Kerr--Sen spacetimes \cite{Houri:2010fr} and black holes of $D=5$ minimal gauged supergravity \cite{Kubiznak:2009qi}. In this paper we concentrate on the latter: a very `clean' solution without scalar fields, known as the Chong--Cveti{\v c}--L{\" u}--Pope black hole \cite{Chong:2005hr}. It is known that for such a solution the principal tensor guarantees integrability of the geodesic motion \cite{Davis:2005ys}, as well as separability of the scalar \cite{Davis:2005ys} and modified Dirac \cite{Davis:2005ys, Wu:2009cn, Wu:2009ug} equations. Our aim is to show that it also guarantees the separability of properly torsion-modified (massive) vector field equations. \section{Black hole of minimal gauged supergravity}\label{BH} The bosonic sector of $D=5$ minimal gauged supergravity is governed by the Lagrangian \be\label{L} \pounds=\tens{*}(R+\Lambda)-\frac{1}{2}\tens{F}\wedge \tens{*F}\!+ \frac{1}{3\sqrt{3}}\,\tens{F} \wedge \tens{F}\wedge \tens{A}\,, \ee where $\Lambda$ is the cosmological constant.
This yields the following set of Maxwell and Einstein equations: \begin{eqnarray} \tens{dF}=0\,,\quad \tens{d* F}-\frac{1}{\sqrt{3}}\,\tens{F}\wedge\tens{F}\!&=&\!0\,,\quad \label{F}\label{Maxwell}\\ R_{ab}-\frac{1}{2}\Bigl(F_{ac}F_b^{\ c}-\frac{1}{6}\,g_{ab}F^2\Bigr)+\frac{1}{3}\Lambda g_{ab}\!&=&\!0\,. \label{Einstein} \end{eqnarray} In this case the torsion can be identified with the Maxwell field strength according to \cite{Kubiznak:2009qi} \be\label{TF} \tens{T}=-\frac{1}{\sqrt{3}}\, \tens{*F}\,. \ee Having done so, the Maxwell equations can be written as \be \tens{\delta}^T\tens{T}=0\,, \quad \tens{d}^T\tens{T}=0\,. \ee In other words, the torsion $\tens{T}$ is `$T$-harmonic'. Here, the first equality follows from the fact that $\tens{\delta}^T \tens{T}=\tens{\delta} \tens{T}$, while the second is related to the special property in five dimensions (with Lorentzian signature), \be \tens{d}^T \tens{\omega}=\tens{d\omega}+(\tens{*T})\wedge (\tens{*\omega})\,, \ee valid for any 3-form $\tens{\omega}$. The principal tensor equation \eqref{principal} can now explicitly be written as \begin{eqnarray}\label{PCKY2} \nabla_c h_{ab}&=&2g_{c[a}\xi_{b]}+\frac{1}{\sqrt{3}}\,(*F)_{cd[a}h^d_{\ \,b]}\,,\nonumber\\ \xi^a&=&\frac{1}{4} \nabla_b h^{ba}-\frac{1}{2\sqrt{3}}(*F)^{abc} h_{bc}\,. \end{eqnarray} A general doubly spinning charged black hole solution in this theory has been constructed by Chong, Cveti\v{c}, L\"u, and Pope \cite{Chong:2005hr}. It can be written in a symmetric Wick-rotated form, c.f. \cite{Kubiznak:2009qi}: \begin{eqnarray} \tens{g}\!&=&\!\sum_{\mu=1,2}\bigl(\tens{\omega}^{\mu}\tens{\omega}^{\mu}+ \tens{\tilde \omega}^{\mu}\tens{\tilde\omega}^{\mu}\bigr) +\tens{\omega}^{0}\tens{\omega}^{0}\,,\label{can_odd}\\ \tens{A}\!&=&\!\sqrt{3}c(\tens{A}_1+\tens{A}_2)\,,\label{A} \end{eqnarray} where \begin{eqnarray}\label{omega} \tens{\omega}^{1} \!&=&\sqrt{\frac{U_1}{X_1}}\,\tens{d}x_1\,,\quad \tens{\tilde \omega}^{1}=\sqrt{\frac{X_1}{U_1}}(\tens{d}\psi_0+x_2^2\tens{d}\psi_1)\,,\nonumber\\ \tens{\omega}^{2} \!&=&\!\sqrt{\frac{U_2}{X_2}}\,\tens{d}x_2\,,\quad \tens{\tilde \omega}^{2}=\sqrt{\frac{X_2}{U_2}}(\tens{d}\psi_0+x_1^2\tens{d}\psi_1)\,,\nonumber\\ \tens{\omega}^{0}\!&=&\!\frac{ic}{x_1x_2}\!\Bigl[\tens{d}\psi_0\!+\!(x_1^2\!+\!x_2^2)\tens{d}\psi_1\!+\!x_1^2x_2^2\tens{d}\psi_2\! -\!x_2^2\tens{A}_1\!-\!x_1^2\tens{A}_2\Bigr]\,,\nonumber\\ \tens{A}_1\!&=&\!-\frac{e_1}{U_1}(\tens{d}\psi_0+x_2^2\tens{d}\psi_2)\,,\quad \tens{A}_2=-\frac{e_2}{U_2}(\tens{d}\psi_0+x_1^2\tens{d}\psi_1)\,,\nonumber\\ U_1&=&x_2^2-x_1^2=-U_2\,. \end{eqnarray} The solution is stationary and axisymmetric, corresponding to three Killing vectors ${\tens{\partial}}_{\psi_0}, {\tens{\partial}}_{\psi_1}, {\tens{\partial}}_{\psi_2}$, and possesses two non-trivial coordinates $x_1$ and $x_2$. Here we choose $x_2>x_1>0$ and note that the metric written in this form has $\det {g}<0$. We have also used a `symmetric gauge' for the $U(1)$ potential; the electric charge of the Maxwell field $\ts{F}=\ts{dA}$ depends on a difference $(e_1-e_2)$.
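Both `$T$-harmonicity' equations can be checked in two short steps (a quick verification, with signs as implied by the identities quoted above). First, \eqref{j7-m} together with \eqref{ContrProd} (for \( p=q=3 \), \( m=2 \)) gives \be (\tens{T}\underset{2}{\wedge}\tens{T})_{ab}=2\,T_{cd[a}T^{cd}{}_{b]}=0\,, \ee since the contraction \( T_{cda}T^{cd}{}_{b} \) is symmetric in \( a \) and \( b \), so its antisymmetrization vanishes; hence \( \tens{\delta}^T\tens{T}=\tens{\delta}\tens{T}\propto \tens{*}\tens{dF}=0 \) by the Bianchi identity. Second, applying the five-dimensional identity above to \( \tens{\omega}=\tens{T} \) and using \eqref{TF} with \( \tens{*}\tens{*}\tens{F}=-\tens{F} \) (so that \( \tens{*T}=\tens{F}/\sqrt{3} \)), one finds \be \tens{d}^T\tens{T}=\tens{dT}+(\tens{*T})\wedge(\tens{*T}) =-\tfrac{1}{3}\,\tens{F}\wedge\tens{F}+\tfrac{1}{3}\,\tens{F}\wedge\tens{F}=0\,, \ee by the second Maxwell equation in \eqref{Maxwell}.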
In order to satisfy the Einstein--Maxwell equations, the metric functions take the following form: \begin{eqnarray} X_1&=&A+Cx_1^2-\frac{\Lambda}{12}x_1^4+\frac{c^2(1+e_1)^2}{x_1^2}\,,\nonumber\\ X_2&=&B+Cx_2^2-\frac{\Lambda}{12}x_2^4+\frac{c^2(1+e_2)^2}{x_2^2}\,\label{XY}, \end{eqnarray} where of the four free parameters $A,B,C, c$ only three are physical (one can be scaled away) and are related to the mass and two rotations. As per usual, the separability property shown below does not need the special form \eqref{XY} and occurs ``off-shell'', for arbitrary functions \be X_1=X_1(x_1)\,,\quad X_2=X_2(x_2)\,. \ee As shown in \cite{Kubiznak:2009qi}, the spacetime admits a principal tensor with torsion, which takes the form \be \ts{h}=\sum_{\mu=1,2} x_\mu \tens{\omega}^{\mu}\wedge \tens{\tilde \omega}^{\mu}\,. \ee Interestingly, the torsion \eqref{TF} in Chong--Cveti{\v c}--L{\"u}--Pope spacetimes is very special as it satisfies the following conditions: \be (*F)_{d[ab}h^d_{\ \,c]}=0\,,\quad (*F)_{abc} h^{bc}=0\,. \ee This implies that the tensor is not only $\tens{d}^T$-closed (as it must be), but it is also $\tens{d}$-closed and obeys: \be \tens{d}^T\tens{h}=\tens{dh}=0\,,\quad \tens{\xi}=-\frac{1}{4}\tens{\delta}^T \tens{h}=-\frac{1}{4}\tens{\delta} \tens{h}={\tens{\partial}}_{\psi_0}\,. \ee Therefore it can be locally written in terms of a potential \be \tens{h}=\tens{db}\,,\quad \tens{b}=-\frac{1}{2}\Bigl[(x_1^2+x_2^2)\tens{d}\psi_0+x_1^2x_2^2 \tens{d}\psi_1\Bigr]\,. \ee Using the properties of closed conformal Killing--Yano tensors with torsion, the principal tensor generates a Killing--Yano with torsion 3-form $\tens{*h}$, and a rank-2 Killing tensor \be\label{KT} K_{ab}=(*h)_{acd}(*h)_b^{\ cd}=h_{ac}h_b^{\ c}-\frac{1}{2}g_{ab}h^2\,. \ee Such symmetries are responsible for separability of the Hamilton--Jacobi, Klein--Gordon, and torsion-modified Dirac equations in these spacetimes. \section{Separability of vector perturbations}\label{Sep} \subsection{Troca equation} Let us now proceed and consider a test massive vector field \tens{\ensuremath{P}} on the above background. It is reasonable to expect that, similar to the Dirac case \cite{Kubiznak:2009qi,Houri:2010qc}, the corresponding Proca equation will pick up an appropriate torsion generalization. In what follows we shall argue that the natural sourceless massive vector equation to consider is \be \label{eq:troca} \tens{\nabla} \cdot \tens{\ensuremath{\mathcal{F}}} - m^2\tens{P} = 0\,, \ee where $m$ is the mass of the field, and the field strength $\tens{\ensuremath{\mathcal{F}}}$ is defined via the torsion exterior derivative, \be \tens{\ensuremath{\mathcal{F}}} \equiv \tens{d}^T\!\tens{P}=\tens{d}\tens{P} - \tens{P}\cdot \tens{T}\,. \ee Since it is a torsion generalization of the Proca equation, we shall refer to equation \eqref{eq:troca} as the `{\em Troca equation}'. It implies the `Lorenz condition' \be\label{eq:Lorenz} \tens{\nabla} \cdot \tens{P}=0\,. \ee To motivate the above form of the Troca equation, we demand that it is linear in $\tens{P}$, reduces to the Proca equation in the absence of torsion, and obeys current conservation in the presence of sources.
We have three natural candidates for generalizing the `Maxwell operator' $\tens{\nabla}\cdot \tens{dP}$, namely: \be \tens{O}_1=\tens{\nabla} \cdot \tens{d}^T\!\tens{P}\,,\quad \tens{O}_2=\tens{\nabla}^T \cdot \tens{d}\!\tens{P}\,,\quad \tens{O}_3=\tens{\nabla}^T \cdot \tens{d}^T\!\tens{P}\,. \ee However, the last two do not obey the current conservation equation. Indeed, due to $\tens{\nabla}^T \cdot (\tens{\nabla}^T \cdot \ )\neq 0$, we have $\tens{\nabla}\cdot \tens{O}_2=\tens{\nabla}^T \cdot \tens{O}_2\neq 0$, and similarly for $\tens{O}_3$. So we are left with $\tens{O}_1$ which, when extended to the massive case, yields the Troca equation \eqref{eq:troca}. Let us also note that the choice of operator $\tens{O}_1$ is `consistent' with the Maxwell equation for the background Maxwell field. Namely, due to the identity \be \tens{X}\cdot \tens{*\omega}=\tens{*}(\tens{\omega}\wedge \tens{X})\,, \ee valid for any vector $\tens{X}$ and any $p$-form $\tens{\omega}$, the field equations \eqref{Maxwell} can be written as \begin{eqnarray} 0&=&\tens{d* F}-\frac{1}{\sqrt{3}}\,\tens{F}\wedge\tens{F}=\tens{d*dA}-\tens{d}\Bigl(\frac{1}{\sqrt{3}}\tens{F}\wedge \tens{A}\Bigr)\nonumber\\ &=&\tens{d*dA}+\tens{d*}\Bigl(\tens{A}\cdot \frac{1}{\sqrt{3}}\tens{*F}\Bigr)\nonumber\\ &=&\tens{d*}\tens{d}^T\tens{A}\,. \end{eqnarray} That is, identifying $\tens{A}$ with the Proca field $\tens{P}$ in the test field approximation, we obtain \be \tens{\nabla}\cdot \tens{d}^T\tens{P}=0\,, \ee which is the massless Troca equation \eqref{eq:troca} (upon treating the torsion as an independent field). \subsection{Separability} Having identified the Troca equation \eqref{eq:troca}, let us now find its general solution in the supergravity background \eqref{can_odd}. To this end we employ the LFKK ansatz: \be\label{LFKK2} P^a=B^{ab}\nabla_b Z\,,\quad B^{ab}(g_{bc}+i\mu h_{bc})=\delta^a_c\,, \ee and seek the solution in a separated form \begin{equation}\label{separ} Z = {\underset{\nu = 1,2}{\prod}} R_{\nu} \left( x_{\nu} \right) \exp \Bigl[i \overset{2}{\underset{j = 0}{\sum}} L_{j} \psi_{j} \Bigr]\,, \end{equation} where \( \{\mu, L_0, L_1, L_2\} \) are four `separation constants'. As in the case without torsion, it is useful to start with the Lorenz condition \eqref{eq:Lorenz}. We find \be\label{eq: ODEs for mode functions} \nabla_a P^a= Z\sum_{\nu=1,2}\frac{1}{U_{\nu}}\frac{\mathcal{D}_{\nu}R_{\nu}}{q_{\nu} R_{\nu}}\,, \ee where the differential operator \( \mathcal{D}_{\nu} \) is given by \begin{eqnarray}\label{Operator Sugra} \mathcal{D}_{\nu} &=& \dfrac{q_{\nu}}{x_{\nu}}\partial_{\nu}\left( \frac{X_{\nu} x_{\nu}}{q_{\nu}}\partial_{\nu} \right) - \frac{1}{X_{\nu}} \Bigl(\overset{2}{\underset{j=0}{\sum}} (-x_{\nu}^2)^{1-j}{L}_{j\nu}\Bigr)^{\!2} \nonumber \\ &&+ \frac{ 2 \mu }{q_{\nu}}\ \overset{2}{\underset{j=0}{\sum}} (-\mu^2)^{j-1}{L}_{j\nu} + \frac{L_2^2 q_{\nu}}{c^2 x_{\nu}^2 }\,, \end{eqnarray} and we have defined \be q_\nu=1-\mu^2 x_\nu^2\,, \quad {L}_{j\nu} = L_{j}(1+\delta_{j2}e_{\nu})\,, \ee the latter definition being essentially the only difference when compared to the five-dimensional torsion-less case, c.f. \eqref{eq: Operator odd D}.
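The origin of the factors \( q_\nu \) in \eqref{Operator Sugra} can be traced directly to the ansatz \eqref{LFKK2} (a short check). Since \( \tens{h} \) has no component along \( \tens{\omega}^0 \), the tensor \( \tens{g}+i\mu \tens{h} \) is block diagonal in the frame \eqref{omega}: it reduces to the identity in the \( \tens{\omega}^0 \) direction, while in the 2-plane spanned by \( (\tens{\omega}^\nu, \tens{\tilde\omega}^\nu) \) one has \be \tens{g}+i\mu\tens{h}\;\rightarrow\; \begin{pmatrix} 1 & i\mu x_\nu \\ -i\mu x_\nu & 1 \end{pmatrix}, \qquad \tens{B}\;\rightarrow\;\frac{1}{q_\nu} \begin{pmatrix} 1 & -i\mu x_\nu \\ i\mu x_\nu & 1 \end{pmatrix}, \ee whose determinant is precisely \( q_\nu=1-\mu^2x_\nu^2 \); the separation constant \( \mu \) thus enters the separated operator only through these rational factors.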
In order to impose the Lorenz condition, we could now follow the procedure developed in \cite{Frolov:2018ezx}. Instead, let us proceed in a slightly different way, by using the following `{separability lemma}' \cite{Frolov:2007cb}:\\ {\bf Lemma.} {\em The most general solution of \be \sum_{\nu=1}^n \frac{f_\nu(x_\nu)}{U_\nu}=0\,\quad \mbox{where}\quad U_{\nu} = \overset{n}{\underset{\mu \neq \nu}{\underset{\mu = 1}{\prod}}} \left( x_{\mu}^2 - x_{\nu}^2 \right)\,, \ee is given by \be\label{sepfnu} f_\nu=\sum_{j=0}^{n-2} C_j(-x_\nu^2)^{n-2-j}\,, \ee where $C_j$ are arbitrary constants.} Thus, demanding $\nabla_a P^a=0$, using the expression \eqref{eq: ODEs for mode functions}, and the above lemma for $n=2$ and $f_\nu=\mathcal{D}_{\nu}R_{\nu}/(q_{\nu} R_{\nu})$, yields the separated equations \be\label{eq: separation equations0} \mathcal{D}_{\nu}R_{\nu}=q_{\nu}f_\nu R_{\nu}\,, \ee where $f_\nu$ is given by \eqref{sepfnu}, that is, $f_\nu=C_0$. With this at hand, let us now turn to the Troca equation \eqref{eq:troca}. Using the ansatz \eqref{LFKK2} and the Lorenz condition \eqref{eq:Lorenz}, the L.H.S. of the Troca equation takes the following form: \begin{equation} \nabla_b \ensuremath{\mathcal{F}}^{ba} - m^2 \ensuremath{P}^a = B^{ab} \nabla_{b} J\,, \end{equation} where the `current' $J$ is, similar to the Kerr--NUT--AdS case~\cite{Frolov:2018ezx}, given by \begin{equation} J = \Box Z - 2 i \mu \xi_{a} B^{ab} \partial_{b}Z - m^2 Z\,, \end{equation} or more explicitly, \begin{equation}\label{eq: current J} J = Z \sum_{\nu=1,2}\frac{1}{U_{\nu}R_{\nu}}\Bigl[ \mathcal{D}_{\nu} - m^2 \left( -x_{\nu}^2 \right) \Bigr]R_{\nu}\,. \end{equation} Using the separation equations \eqref{eq: separation equations0} and the lemma again, we find \be J=-Z(\mu^2 C_0-m^2)\sum_{\nu=1,2}\frac{x_\nu^2}{U_\nu}=Z(\mu^2 C_0-m^2)\,. \ee In order for this to vanish, we require \be C_0=\frac{m^2}{\mu^2}\,. \ee Of course, in the case of massless vectors, we can set $m=0$. To summarize, we have shown the separation of variables for the Troca equation \eqref{eq:troca} in the Chong--Cveti\v{c}--L\"u--Pope black hole spacetime. The solution can be found in the form of the LFKK ansatz \eqref{LFKK2}, where the scalar function $Z$ is written in the multiplicative separated form \eqref{separ}, and the modes $R_\nu$ satisfy the ordinary differential equations \eqref{eq: separation equations0} with $f_\nu=C_0=m^2/\mu^2$. The obtained solution is general in that it depends on four independent separation constants $\{\mu, L_0, L_1, L_2\}$. It remains to be seen if, similar to the Kerr--NUT--AdS case \cite{Dolan:2018dqv}, all polarizations (four in the case of massive field and three for $m=0$) are captured by our solution. \section{Conclusions}\label{summary} The principal tensor is a very powerful object. It uniquely characterizes the class of vacuum black hole spacetimes (known as the Kerr--NUT--AdS metrics), and it generates towers of explicit and hidden symmetries. In turn, such symmetries underlie separability of the Hamilton--Jacobi, Klein--Gordon, Dirac, and as only recently shown also Maxwell and Proca equations in these spacetimes. The key to separating the vector equations is not to concentrate on the field strength (as previously thought) but rather employ a new LFKK separability ansatz \eqref{LFKK} and \eqref{Btens} for the vector potential itself. In this paper we have shown that the applicability of the LFKK ansatz goes far beyond the realm previously expected. 
Namely, we have demonstrated the separability of the vector field equation in the background of the Chong--Cveti\v{c}--L\"u--Pope black hole of minimal gauged supergravity. Such a black hole no longer possesses a principal tensor. However, upon identifying the Maxwell 3-form of the theory with torsion, a weaker structure, the principal tensor with torsion, \emph{is} present. Remarkably, such a structure enters the LFKK ansatz in precisely the same way as the standard (vacuum) principal tensor and allows one to separate the naturally torsion-modified vector (Troca) field equations: the `\emph{principal tensor strikes again}'. This result widens the scope of applicability of both the LFKK ansatz and the torsion-modified principal tensor. It is an interesting open question to see where the principal tensor is going to strike next. \section*{Acknowledgements} \label{sc:acknowledgements} We would like to thank Ma{\"i}t{\'e} Dupuis and Lenka Bojdova for organizing the PSI Winter School where this project was mostly completed and the PSI program for facilitating this research. The work was supported in part by the Natural Sciences and Engineering Research Council of Canada. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. R.G.S.\ thanks IFT-UNESP/ICTP-SAIFR and CNPq for partial financial support. L.T. acknowledges support by the Studienstiftung des Deutschen Volkes.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Since 1976, the R\"ossler system \cite{rossler1976equation} has been well known for its simplicity (three differential equations with only one nonlinear term) and its dynamical richness in producing chaos. Used as a basic system to demonstrate various properties of dynamical systems, this system is still a source of inspiration for researchers. This system has been widely explored with several tools. The main goal of this paper is to extend the use of the topological characterization method to several chaotic attractors. We introduce a way to use the template as a global description that contains the templates of various attractors of the R\"ossler system. In this paper, we will study this system in a parameter space to highlight common properties of neighbouring attractors in this space. Castro {\it et al.} \cite{castro2007characterization} studied the parameter space of this system using Lyapunov exponents that reflect its dynamics (stable, chaotic or diverging trajectories). The maps are built varying the parameters $a$ and $c$ of the system. These maps display a fractal structure and illustrate period-doubling cascades. This principle is also employed by Barrio {\it et al.} \cite{barrio2009qualitative} for the three parameters of the R\"ossler system. Their analysis of local and global bifurcations of limit cycles provides a better understanding of the parameter space. Additionally, attractors with equilibrium points associated with their first return maps are also plotted to illustrate the various dynamics of this system, including the coexistence of attractors. To enlarge this overview of recent work on the bifurcations and dynamics of the R\"ossler system, Genesio {\it et al.} \cite{genesio2008global} use the first-order harmonic balance technique to study fold, flip and Neimark-Sacker bifurcations in the whole parameter space. Finally, the recent work of Sprott \& Li \cite{sprott2015asymmetric} introduces another way to reach coexisting attractors in addition to the cases identified by Barrio {\it et al.} \cite{barrio2009qualitative}. In this paper we study a bifurcation diagram of the R\"ossler system exhibiting various dynamics using topological properties of attractors. We use the topological characterization method based on the topological properties of the attractor's periodic orbits \cite{gilmore2002topology}. The purpose is not only to obtain templates of chaotic attractors but also to find common points or properties, as has already been shown for this system by Letellier {\it et al.} \cite{letellier1995unstable} for a ``funnel attractor''. In this particular case, a linking matrix describes the template depending on the number of branches. In this paper, we will explore a bifurcation diagram and show that only one template contains all the templates of the attractors as subtemplates. This paper is organized as follows. The first part introduces the work of Sprott \& Li \cite{sprott2015asymmetric} and their bifurcation diagram. The second part details the topological characterization method; eight attractors are studied and their templates are obtained. Then we prove that the eight templates are subtemplates of a unique template, which describes the topological structure of all the attractors of the entire bifurcation diagram. Finally, we provide a partition of the bifurcation diagram giving the symbolic dynamics associated with the unique template depending on the bifurcation parameter. 
\section{Bifurcation diagram} Barrio \textit{et al.} \cite{barrio2009qualitative} highlight the fact that a R\"ossler system can have two coexisting attractors as solutions for a given set of parameters. Sprott \& Li \cite{sprott2015asymmetric} parametrize the R\"ossler system \cite{rossler1976equation} with the parameter $\alpha$ \begin{equation} \left\{\begin{aligned} \dot{x} &= -y -z \\ \dot{y} &= x +ay \\ \dot{z} &= b+z(x-c) \end{aligned}\right. \text{ with } \left\{\begin{aligned} a &= 0.2 + 0.09\alpha \\ b &= 0.2-0.06\alpha \\ c &= 5.7-1.18\alpha \end{aligned}\right. \label{eq:rossler_alpha} \end{equation} in order to explore bifurcations. When $\alpha = 1$, two attractors coexist in the phase space. We reproduce their bifurcation diagram when $\alpha$ varies. The values of the fixed points of the system are \begin{equation} S_\pm = \left| \begin{array}{l} \displaystyle x_\pm = \frac{c\pm \sqrt{c^2-4ab}}{2} \\[0.3cm] \displaystyle y_\pm = \frac{-c\mp \sqrt{c^2-4ab}}{2a} \\[0.3cm] \displaystyle z_\pm = \frac{c \pm \sqrt{c^2-4ab}}{2a} \, . \end{array} \right. \label{eq:rossler_fixed_point} \end{equation} The bifurcation diagram of Fig.~\ref{fig:rossler_bifurcation} is obtained using the following Poincar\'e section \begin{equation} \mathcal{P} \equiv \left\{ (y_n,-z_n) \in \mathbb{R}^2\ | -x_n = -x_- \right\} \label{eq:rossler_X_section} \end{equation} where $x_-$ is the $x$ value of the fixed point $S_-$ (see \cite{rosalie2014toward} for details on this Poincar\'e section). The use of this Poincar\'e section explains why Fig.~\ref{fig:rossler_bifurcation} is similar to FIG.~4 of \cite{sprott2015asymmetric}: we use $y_n$, while they use $M$, a local maximum of $x$. Consequently, in both cases, values close to zero correspond to points close to the center of the attractor while high absolute values correspond to its outer boundary. \begin{figure}[hbtp] \centering \includegraphics[width=.70\textwidth]{rossler_bif_01.eps} \vspace{-.5em} \caption{ Bifurcation diagram when $\alpha$ varies: $\alpha$ increasing (red) and $\alpha$ decreasing (black). This figure reproduces FIG.~4 of \cite{sprott2015asymmetric}. Using $\alpha$ increasing or decreasing replaces the different initial conditions used in the original paper \cite{sprott2015asymmetric}. $\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$, $\mathcal{D}$, $\mathcal{E}_1$, $\mathcal{E}_2$, $\mathcal{F}$ and $\mathcal{G}$ refer to the attractors obtained with the parameter values indicated in \eqref{eq:alpha_value}. } \label{fig:rossler_bifurcation} \end{figure} This diagram indicates parameter values of the R\"ossler system for which the solution is a limit cycle or a chaotic attractor. It exhibits a period-doubling cascade, a classical route to chaos, for $-2<\alpha<0.2$. This is followed by a chaotic puff ($\alpha = 0.5$) and by various regimes (banded chaos and almost fully developed chaos). We choose representative values of $\alpha$ where one or two attractors are solutions of the system \begin{equation} \begin{array}{ll|ll} \mathcal{A} & \alpha=-0.25& \mathcal{E}_1 & \alpha=1 \\ \mathcal{B} & \alpha=0.5 & \mathcal{E}_2 & \alpha=1 \\ \mathcal{C} & \alpha=0.78 & \mathcal{F} & \alpha=1.135 \\ \mathcal{D} & \alpha=0.86 & \mathcal{G} & \alpha=1.22 \;. \end{array} \label{eq:alpha_value} \end{equation} We analyse these attractors using the topological characterization method in order to obtain a generic description of the attractors as $\alpha$ is varied. 
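For concreteness, the diagram of Fig.~\ref{fig:rossler_bifurcation} can be reproduced with standard numerical tools. The following Python sketch (ours; it assumes \texttt{scipy}, and the integration time, tolerances, transient cutoff and event direction are illustrative choices that may need adjusting to match the half-plane conditions used below) integrates system \eqref{eq:rossler_alpha} for one value of $\alpha$ and records the $y_n$ values of the crossings of the plane $x=x_-$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rossler(t, u, a, b, c):
    x, y, z = u
    return [-y - z, x + a*y, b + z*(x - c)]

def section_crossings(alpha, t_max=5000.0):
    # parameters a(alpha), b(alpha), c(alpha) of the parametrized system
    a, b, c = 0.2 + 0.09*alpha, 0.2 - 0.06*alpha, 5.7 - 1.18*alpha
    x_minus = (c - np.sqrt(c**2 - 4*a*b))/2        # x value of the fixed point S_-
    event = lambda t, u, a, b, c: u[0] - x_minus   # crossing of the plane x = x_-
    event.direction = -1                           # keep one-sided crossings only
    sol = solve_ivp(rossler, (0.0, t_max), [-1.25, -0.72, -0.1],
                    args=(a, b, c), events=event, rtol=1e-9, atol=1e-12)
    y_n = sol.y_events[0][:, 1]                    # y coordinate at each crossing
    return y_n[200:]                               # drop a transient
\end{verbatim}
Sweeping $\alpha$ upwards and downwards, reusing the final state of one run as the initial condition of the next, mimics the red and black branches of Fig.~\ref{fig:rossler_bifurcation}.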
\section{Topological characterization} The main purpose of the topological characterization method is to build a template using the topological properties of periodic orbits. The template was introduced in \cite{birman1983knotted,franks1985entropy}, following the works of Poincar\'e \cite{poincare1899methode}. According to Ghrist {\it et al.} \cite{ghrist1997knots}, a template is a compact branched two-manifold with boundary and smooth expansive semiflow built locally from two types of charts: joining and splitting. Each chart carries a semiflow, endowing the template with an expanding semiflow, and the gluing maps between charts must respect the semiflow and act linearly on the edges. This topological characterization method is detailed by Gilmore \& Lefranc \cite{gilmore1998topological,gilmore2002topology}. Recently, we detailed additional conventions to obtain templates that can be compared and sorted \cite{rosalie2013systematic,rosalie2015systematic}. We start with a brief description of the method. As the trajectories are chaotic, they are unpredictable in the long term. But an attractor has a time-invariant global structure whose periodic orbits compose its skeleton. The purpose of the method is to use the topological properties of these orbits to describe the structure of the attractor. We provide a summary of this method in eight steps (including our conventions): \begin{enumerate} \item Display the attractor with a clockwise flow; \item Find the bounding torus; \item Build a Poincar\'e section; \end{enumerate} The first step ensures that the study is carried out in accordance with the conventions: a clockwise flow having a clockwise toroidal boundary and described by a clockwise template. This clockwise convention ensures that a template is described by a unique linking matrix, a keystone for working only with linking matrices \cite{rosalie2013systematic}. The toroidal boundary gives a global structure that permits the classification of attractors. For a given toroidal boundary, a typical Poincar\'e section is associated according to the theory of Tsankov \& Gilmore \cite{tsankov2003strange}. This Poincar\'e section contains one or more components to permit an effective discretization of trajectories and consequently an efficient partition of the attractor. \begin{enumerate}\setcounter{enumi}{3} \item Compute the first return map and define a symbolic dynamics; \item Extract and encode periodic orbits; \item Compute numerically the linking numbers between pairs of orbits; \end{enumerate} The first return map details how two consecutive crossings of a trajectory through the Poincar\'e section are related and permits a symbol to be associated with each point. It defines a partition of the attractor and a symbolic dynamics. The associated symbols depend on the parity of the slope (even symbols for increasing branches, odd for decreasing ones). Up to this point, the periodic orbits structuring the attractor are extracted and encoded using this symbolic dynamics. The linking number between a pair of orbits is a topological invariant indicating how the orbits wind around one another. In this paper, we use the orientation convention of Fig.~\ref{fig:convention}. 
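Step (vi) is purely numerical. A common way to compute the linking number of two periodic orbits, regarded as closed curves in $\mathbb{R}^3$, is a discretized Gauss linking integral; the sketch below (ours, assuming \texttt{numpy}; the orbits are supplied as polygonal curves sampled from the flow) illustrates the idea. The result, close to an integer, must then be read against the sign convention of Fig.~\ref{fig:convention}.
\begin{verbatim}
import numpy as np

def linking_number(c1, c2):
    """Discretized Gauss linking integral for two closed polygonal
    curves c1 (N,3) and c2 (M,3); the value is close to an integer."""
    d1 = np.roll(c1, -1, axis=0) - c1    # edge vectors of curve 1
    d2 = np.roll(c2, -1, axis=0) - c2    # edge vectors of curve 2
    # separations between edge midpoints, shape (N, M, 3)
    r = (c1 + d1/2)[:, None, :] - (c2 + d2/2)[None, :, :]
    num = np.einsum('ijk,ijk->ij', r,
                    np.cross(d1[:, None, :], d2[None, :, :]))
    return (num / np.linalg.norm(r, axis=2)**3).sum() / (4*np.pi)
\end{verbatim}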
\begin{figure}[hbtp] \centering \begin{tabular}{cccccc} \multicolumn{2}{c}{Convention} & \multicolumn{2}{c}{Permutations} & \multicolumn{2}{c}{Torsions} \\ \includegraphics[height=4em]{topo_pos.ps} & \includegraphics[height=4em]{topo_neg.ps} & \includegraphics[height=4em]{topo_permutation_pos.ps} & \includegraphics[height=4em]{topo_permutation_neg.ps} & \includegraphics[height=4em]{topo_torsion_pos.ps} & \includegraphics[height=4em]{topo_torsion_neg.ps} \\ $+1$ & $-1$ & positive & negative & positive & negative \end{tabular} \vspace{-.5em} \caption{ Representation of the convention for oriented crossings. The permutation between two branches is positive if the crossing generated is equal to $+1$; otherwise it is a negative permutation. We use the same convention for torsions. } \label{fig:convention} \end{figure} The final steps concern the template: \begin{enumerate}\setcounter{enumi}{6} \item Propose a template; \item Validate the template with the theoretical computation of linking numbers. \end{enumerate} The template is clockwise oriented. The template of an attractor bounded by a genus-one torus is defined by a unique linking matrix. This matrix describes how the branches are torn and permuted. We use the Melvin \& Tufillaro \cite{melvin1994template} standard insertion convention: when the branches stretch and squeeze, the left-to-right order of the branches corresponds to the bottom-to-top order. The diagonal elements of the matrix indicate the torsions of the branches and the off-diagonal elements give the permutations between two branches. Finally, to validate a template, we use a procedure introduced by Le Sceller {\it et al.} \cite{lesceller1994algebraic} that permits linking numbers to be computed theoretically from a linking matrix. The linking numbers obtained theoretically with this method have to correspond to those obtained numerically at step (vi) to \emph{validate the template}. The challenge of this procedure resides in step (vii) because it is nontrivial to find a template whose theoretical linking numbers correspond to the numerically computed linking numbers. \subsection{Attractor $\mathcal{A}$} In this section we detail the previously described procedure step by step for attractor $\mathcal{A}$. \begin{enumerate}\setcounter{enumi}{0} \item Display the attractor with a clockwise flow; \end{enumerate} We make a rotation of the attractor around the $y$-axis. Displaying the attractor in the phase space $(-x,y)$, the flow evolves clockwise (Fig.~\ref{fig:rossler_A_attra_appli}a). \begin{enumerate}\setcounter{enumi}{1} \item Find the bounding torus; \end{enumerate} The attractor is bounded by a genus-one torus: a surface with only one hole. Consequently, a one-component Poincar\'e section is required. \begin{enumerate}\setcounter{enumi}{2} \item Build a Poincar\'e section; \end{enumerate} We build our Poincar\'e section using $x_-$ \eqref{eq:rossler_fixed_point} \begin{equation} \mathcal{P} \equiv \left\{ (y_n,-z_n) \in \mathbb{R}^2\ | -x_n = -x_- \right\}\;. \label{eq:rossler_A_section} \end{equation} This Poincar\'e section is a half-plane transversal to the flow, as illustrated in grey in Fig.~\ref{fig:rossler_A_attra_appli}a. \begin{figure}[h!] 
\centering \begin{tabular}{ccc} \includegraphics[width=.35\textwidth]{rossler_A_attra_section.eps} & & \includegraphics[width=.33\textwidth]{rossler_A_appli.eps} \put(-100,90){0} \put(-25,80){1} \\ (a) Chaotic attractor $\mathcal{A}$ & & (b) First return map \end{tabular} \vspace{-.5em} \caption{ (a) Chaotic attractor $\mathcal{A}$ solution to the equation \eqref{eq:rossler_alpha}. Parameter value $\alpha=-0.25$ with the initial conditions $x=-1.25 $, $y=-0.72 $ and $z=-0.1$. (b) First return map to the Poincar\'e section \eqref{eq:rossler_A_section} using $\rho_n$ (the arrow indicates the orientation). } \label{fig:rossler_A_attra_appli} \end{figure} \begin{enumerate}\setcounter{enumi}{3} \item Compute the first return map and define a symbolic dynamics; \end{enumerate} To compute the first return map, we first normalize the intersection value of the flow through the Poincar\'e section: $\rho_n$. This value is oriented from the inside to the outside (Fig.~\ref{fig:rossler_A_attra_appli}a). Then the first return map is obtained by plotting $\rho_{n+1}$ versus $\rho_n$ (Fig.~\ref{fig:rossler_A_attra_appli}b). This return map is the classical unimodal map made of an increasing branch followed by a decreasing one. This first return map indicates that the classical ``horseshoe'' mechanism generates this chaotic attractor. The symbolic dynamics is defined as follows: ``$0$'' for the increasing branch and ``$1$'' for the decreasing one. \begin{enumerate}\setcounter{enumi}{4} \item Extract and encode periodic orbits; \end{enumerate} Using the first return map, we can locate the periodic orbits that the flow visits while it covers the attractor. For instance, there is only one period-one orbit because the bisector crosses the map once. We extract five orbits with a period lower than six: $1$, $10$, $1011$, $10110$ and $10111$ (Fig.~\ref{fig:rossler_A_orbits}). \begin{figure}[ht] \centering \includegraphics[width=.5\textwidth]{rossler_A_orbites.eps} \vspace{-.5em} \caption{ Orbits of the chaotic attractor $\mathcal{A}$. } \label{fig:rossler_A_orbits} \end{figure} \begin{enumerate}\setcounter{enumi}{5} \item Compute numerically the linking numbers; \end{enumerate} We compute the linking numbers between each pair of periodic orbits. The linking number between a pair of orbits is obtained numerically (Tab.~\ref{tab:rossler_A_lk}). \begin{table}[hb] \centering \caption{ Linking numbers between pairs of orbits extracted from the chaotic attractor $\mathcal{A}$. } \resizebox{.4\textwidth}{!}{ \begin{tabular}{ccccc} \\[-0.3cm] \hline \hline \\[-0.3cm] & (1) & (10) & (1011) & (10110) \\[0.1cm] \hline \\[-0.3cm] (10) & -1 & \\[0.1cm] (1011) & -2 & -3 & \\[0.1cm] (10110) & -2 & -4 & -8 \\[0.1cm] (10111) & -2 & -4 & -8 & -10 \\[0.1cm] \hline \hline \end{tabular} } \vspace{-.5em} \label{tab:rossler_A_lk} \end{table} \begin{enumerate}\setcounter{enumi}{6} \item Propose a \emph{template}; \end{enumerate} Using these linking numbers and the structure of the first return map, we propose the template (Fig.~\ref{fig:rossler_A_template}). This template is made of a trivial part with a chaotic mechanism. The latter is composed of a splitting chart that continuously separates the flow into two branches. The left one, encoded ``$0$'', permutes negatively over the right one, encoded ``$1$''; the latter has a negative torsion. After the torsion and permutation, the branches stretch and squeeze to a branch line using the standard insertion convention. 
This template is thus described by the linking matrix \begin{equation} T(\mathcal{A}) = \left[ \begin{array}{C{1.3em}C{1.3em}} 0 & -1 \\ -1 & -1 \end{array} \right\rsem\;. \end{equation} \begin{figure}[hbtp] \centering \includegraphics[height=.3\textheight]{rossler_A_template.eps} \vspace{-.5em} \caption{ $T_\mathcal{A}$: template of attractor $\mathcal{A}$. } \label{fig:rossler_A_template} \end{figure} \begin{enumerate}\setcounter{enumi}{7} \item Validate the template computing theoretically the linking numbers. \end{enumerate} The theoretical computation using the linking matrix and the orbits yields the same table of linking numbers. This validates the template of $\mathcal{A}$ defined by $T(\mathcal{A})$ (Fig.~\ref{fig:rossler_A_template}). \subsection{Attractors $\mathcal{B}$ to $\mathcal{G}$} In this section, we only give the key steps for the other attractors: $\mathcal{B}$, $\mathcal{C}$, $\mathcal{D}$, $\mathcal{E}_1$, $\mathcal{E}_2$, $\mathcal{F}$ and $\mathcal{G}$. We start with Fig.~\ref{fig:rossler_A_to_G_attra}, displaying these attractors for the parameters \eqref{eq:alpha_value} with a clockwise flow evolution. We can observe that these attractors are made of fully developed chaos (Fig.~\ref{fig:rossler_A_to_G_attra}acg), banded chaos (Fig.~\ref{fig:rossler_A_to_G_attra}bdf) or coexisting attractors of banded chaos (Fig.~\ref{fig:rossler_A_to_G_attra}e). \begin{figure}[htbp] \centering \begin{tabular}{ccc} \includegraphics[width=.3\textwidth]{rossler_A_attra.eps} & \includegraphics[width=.3\textwidth]{rossler_B_attra.eps} & \includegraphics[width=.3\textwidth]{rossler_C_attra.eps} \\ (a) $\mathcal{A}$ for $\alpha=-0.25$ & (b) $\mathcal{B}$ for $\alpha=0.5$ & (c) $\mathcal{C}$ for $\alpha=0.78$ \\[.5em] \includegraphics[width=.3\textwidth]{rossler_D_attra.eps} & \includegraphics[width=.3\textwidth]{rossler_E1_E2_attra.eps} & \includegraphics[width=.3\textwidth]{rossler_F_attra.eps} \\ (d) $\mathcal{D}$ for $\alpha=0.86$ & (e) $\mathcal{E}_1$ and $\mathcal{E}_2$ for $\alpha=1$ & (f) $\mathcal{F}$ for $\alpha=1.135$ \\[.5em] & \includegraphics[width=.3\textwidth]{rossler_G_attra.eps} & \\ & (g) $\mathcal{G}$ for $\alpha=1.22$ & \end{tabular} \vspace{-.5em} \caption{ Eight chaotic attractors for different values of $\alpha$ from the bifurcation diagram (Fig.~\ref{fig:rossler_bifurcation}) } \label{fig:rossler_A_to_G_attra} \end{figure} All these attractors are bounded by a genus-one torus. We use the Poincar\'e sections \eqref{eq:rossler_A_C_G_section} for $\mathcal{A}$, $\mathcal{C}$, $\mathcal{G}$, \eqref{eq:rossler_D_E_F_section} for $\mathcal{D}$, $\mathcal{E}_1$, $\mathcal{E}_2$, $\mathcal{F}$ and \eqref{eq:rossler_B_section} for $\mathcal{B}$: \begin{equation} \mathcal{P} \equiv \left\{ (y_n,-z_n) \in \mathbb{R}^2\ | -x_n = -x_- \right\}\ , \label{eq:rossler_A_C_G_section} \end{equation} \begin{equation} \mathcal{P} \equiv \left\{ (y_n,-z_n) \in \mathbb{R}^2\ | -x_n = -x_- , \ -\dot{x}_n<0,y<-7\right\}\ , \label{eq:rossler_D_E_F_section} \end{equation} \begin{equation} \mathcal{P} \equiv \left\{ (y_n,-z_n) \in \mathbb{R}^2\ | -x_n = -x_- , \ -\dot{x}_n<0,y<-9\right\}\;. \label{eq:rossler_B_section} \end{equation} We compute the first return maps to these Poincar\'e sections using a normalized variable $\rho_n$ oriented from the inside to the outside of the boundary. The eight return maps (Fig.~\ref{fig:rossler_A_to_G_appli}) are multimodal with differentiable critical points, and the number of their branches is two, three or four. 
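Step (v) can likewise be automated. Period-one orbits, for instance, correspond to intersections of the first return map with the bisector; a rough sketch (ours, assuming \texttt{numpy}) that brackets such intersections in an empirical sequence $\rho_n$ is the following.
\begin{verbatim}
import numpy as np

def period_one_candidates(rho):
    """Approximate fixed points of the first return map rho_{n+1} = f(rho_n)
    from an empirical sequence rho[0..N]: sign changes of f(x) - x."""
    r0, r1 = rho[:-1], rho[1:]
    order = np.argsort(r0)                # read the map from left to right
    r0, r1 = r0[order], r1[order]
    g = r1 - r0                           # signed distance to the bisector
    i = np.nonzero(g[:-1]*g[1:] < 0)[0]   # brackets of bisector crossings
    # linear interpolation of each crossing position
    return r0[i] - g[i]*(r0[i+1] - r0[i])/(g[i+1] - g[i])
\end{verbatim}
Higher-period orbits can be located the same way from the $p$-th return map and then refined, e.g., with a shooting method on the flow.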
We choose a symbolic dynamics for each first return map according to the slope orientation of the branches \begin{equation} \begin{array}{ll|ll} \mathcal{A} & 0\ 1 & \mathcal{E}_1 & 0\ 1 \\ \mathcal{B} & 0\ 1 & \mathcal{E}_2 & 1\ 2 \\ \mathcal{C} & 0\ 1\ 2& \mathcal{F} & 0\ 1\ 2\ 3 \\ \mathcal{D} & 0\ 1\ 2& \mathcal{G} & 0\ 1\ 2 \;. \end{array} \label{eq:symbolic_dynamic} \end{equation} \begin{figure}[htbp] \centering \begin{tabular}{ccc} \includegraphics[width=.3\textwidth]{rossler_A_appli.eps} \put(-90,80){0} \put(-20,70){1} & \includegraphics[width=.3\textwidth]{rossler_B_appli.eps} \put(-90,90){0} \put(-15,70){1} & \includegraphics[width=.3\textwidth]{rossler_C_appli.eps} \put(-90,90){0} \put(-40,60){1} \put(-15,30){2} \\ (a) $\mathcal{A}$ for $\alpha=-0.25$ & (b) $\mathcal{B}$ for $\alpha=0.5$ & (c) $\mathcal{C}$ for $\alpha=0.78$ \\ \includegraphics[width=.3\textwidth]{rossler_D_appli.eps} \put(-90,90){0} \put(-70,70){1} \put(-15,70){2} & \includegraphics[width=.3\textwidth]{rossler_E1_appli.eps} \put(-90,90){0} \put(-20,70){1} & \includegraphics[width=.3\textwidth]{rossler_E2_appli.eps} \put(-70,70){1} \put(-20,40){2} \\ (d) $\mathcal{D}$ for $\alpha=0.86$ & (e$_1$) $\mathcal{E}_1$ for $\alpha=1$ & (e$_2$) $\mathcal{E}_2$ for $\alpha=1$ \\ \includegraphics[width=.3\textwidth]{rossler_F_appli.eps} \put(-90,90){0} \put(-73,80){1} \put(-53,80){2} \put(-23,70){3} & \includegraphics[width=.3\textwidth]{rossler_G_appli.eps} \put(-90,80){0} \put(-50,70){1} \put(-20,45){2} & \\ (f) $\mathcal{F}$ for $\alpha=1.135$ & (g) $\mathcal{G}$ for $\alpha=1.22$ & \end{tabular} \vspace{-.5em} \caption{ First return maps of the eight attractors of Fig.~\ref{fig:rossler_A_to_G_attra}. } \label{fig:rossler_A_to_G_appli} \end{figure} We extract a set of orbits for each attractor and numerically compute the linking numbers between pairs of orbits (Tab.~\ref{tab:rossler_A-G_lk_template}). Thus, we propose templates. They are validated using the procedure of Le Sceller {\it et al.} \cite{lesceller1994algebraic}. All the results are summed up in Tab.~\ref{tab:rossler_A-G_lk_template}. \begin{table}[p] \centering \caption{ Linking numbers between pairs of orbits extracted from the attractors of Fig.~\ref{fig:rossler_A_to_G_attra} and their associated linking matrix describing their template. 
} \vspace{.5em} \begin{tabular}{clc} & \hfill Linking numbers \hfill\null& Linking matrix\\[-1em]\\ $\mathcal{A}$ & {\footnotesize \begin{tabular}{ccccc} \hline & (1) & (10) & (1011) & (10110) \\[0.1cm] \hline \\[-0.3cm] (10) & -1 & \\[0.1cm] (1011) & -2 & -3 & \\[0.1cm] (10110) & -2 & -4 & -8 \\[0.1cm] (10111) & -2 & -4 & -8 & -10 \\[0.1cm] \hline \end{tabular}} & $T(\mathcal{A}) = \left[ \begin{array}{C{1.3em}C{1.3em}} 0 & -1 \\ -1 & -1 \end{array} \right\rsem$ \\[1cm] $\mathcal{B}$ & {\footnotesize \begin{tabular}{ccccc} \hline & (1) & (10) & (1011) & (10110) \\[0.1cm] \hline \\[-0.3cm] (10) & -5 & \\[0.1cm] (1011) & -10 & -21 & \\[0.1cm] (10110) & -13 & -26 & -52 \\[0.1cm] (10111) & -13 & -26 & -52 & -65 \\[0.1cm] \hline \end{tabular}}& $T(\mathcal{B}) = \left[ \begin{array}{C{1.3em}C{1.3em}} -6 & -6 \\ -6 & -5 \end{array} \right\rsem$ \\[1cm] $\mathcal{C}$ & {\footnotesize \begin{tabular}{ccccc} \hline & (1) & (10) & (2010) & (2011) \\[0.1cm] \hline \\[-0.3cm] (10) & -1 & \\[0.1cm] (2010) & -2 & -3 & \\[0.1cm] (2011) & -2 & -3 & -6 \\[0.1cm] (1011) & -2 & -3 & -6 & -6 \\[0.1cm] \hline \end{tabular} }& $ T(\mathcal{C}) = \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}} 0 & -1 & -1 \\ -1 & -1 & -1 \\ -1 & -1 & 0 \end{array} \right\rsem $ \\[1cm] $\mathcal{D}$ & {\footnotesize \begin{tabular}{cccccc} \hline & (1) & (221) & (211) & (2221) & (2211) \\[0.1cm] \hline \\[-0.3cm] (221) & -4 & \\[0.1cm] (211) & -4 & -12 & \\[0.1cm] (2221) & -5 & -15 & -15 \\[0.1cm] (2211) & -5 & -15 & -15 & -20 \\[0.1cm] (22110) & -6 & -18 & -18 & -24 &-24 \\[0.1cm] \hline \end{tabular} }& $ T(\mathcal{D}) = \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}} -4 & -4 & -4 \\ -4 & -3 & -3 \\ -4 & -3 & -2 \end{array} \right\rsem $ \\[1.2cm] $\mathcal{E}_1$ & {\footnotesize \begin{tabular}{ccccc} \hline & (1) & (10) & (1011) & (101111) \\[0.1cm] \hline \\[-0.3cm] (10) & -3 & \\[0.1cm] (1011) & -6 & -11 & \\[0.1cm] (101111) & -9 & -17 & -34 \\[0.1cm] (101110) & -9 & -17 & -34 & -51 \\[0.1cm] \hline \end{tabular} }& $ T(\mathcal{E}_1) = \left[ \begin{array}{C{1.3em}C{1.3em}} -2 & -3 \\ -3 & -3 \end{array} \right\rsem $ \\[1cm] $\mathcal{E}_2$ & {\footnotesize \begin{tabular}{ccccc} \hline & (1) & (21) & (2111) & (211111) \\[0.1cm] \hline \\[-0.3cm] (21) & -3 & \\[0.1cm] (2111) & -6 & -11 & \\[0.1cm] (211111) & -9 & -17 & -34 \\[0.1cm] (212111) & -9 & -17 & -34 & -51 \\[0.1cm] \hline \end{tabular} }& $ T(\mathcal{E}_2) = \left[ \begin{array}{C{1.3em}C{1.3em}} -3 & -3 \\ -3 & -2 \end{array} \right\rsem $ \\[1cm] $\mathcal{F}$ & {\footnotesize \begin{tabular}{cccccccc} \hline & (3) & (30) & (31) & (32) & (313) & (312) & (322) \\[0.1cm] \hline \\[-0.3cm] (30) & -3 & \\[0.1cm] (31) & -3 & -6 & \\[0.1cm] (32) & -3 & -6 & -6 \\[0.1cm] (313) & -4 & -9 & -9 & -8 \\[0.1cm] (312) & -4 & -9 & -9 & -8 & -12 \\[0.1cm] (322) & -4 & -9 & -9 & -8 & -12 & -12 \\[0.1cm] (332) & -4 & -9 & -9 & -8 & -12 & -12 & -12 \\[0.1cm] \hline \end{tabular} }& $ T(\mathcal{F}) = \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}C{1.3em}} -4 & -4 & -4 & -4 \\ -4 & -3 & -3 & -3 \\ -4 & -3 & -2 & -3 \\ -4 & -3 & -3 & -3 \end{array} \right\rsem $ \\[1.7cm] $\mathcal{G}$ & {\footnotesize \begin{tabular}{ccccc} \hline & (1) & (21) & (2111) & (2021) \\[0.1cm] \hline \\[-0.3cm] (21) & -1 & \\[0.1cm] (2111) & -2 & -3 & \\[0.1cm] (2021) & -2 & -3 & -6 \\[0.1cm] (2110) & -2 & -3 & -6 & -6 \\[0.1cm] \hline \end{tabular} }& $ T(\mathcal{G}) = \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}} 0 & -1 & -1 \\ -1 & -1 & -1 \\ -1 & -1 & 0 \end{array} \right\rsem $ \end{tabular} 
\label{tab:rossler_A-G_lk_template} \end{table} Only attractors $\mathcal{C}$ and $\mathcal{G}$ have the same template, even though they do not have the same orbits of period lower than five. The dynamics is not fully developed on each branch: some orbits are missing in both attractors compared to the full set of orbits. On the other hand, the two coexisting attractors have the same linking numbers, but their orbits are encoded in two distinct ways. Their linking matrices are related, as are their templates. In fact, the orientations of their chaotic mechanisms are opposite. This case underlines why orientation conventions are necessary to distinguish these two attractors. \section{Subtemplate} \subsection{Algebraic relations between linking matrices} In this section, we use some algebraic relations between linking matrices already defined in our previous papers \cite{rosalie2013systematic,rosalie2015systematic}. Here, we provide an overview of these relations. In the following description, a \emph{strip} also denotes a branch of a branched manifold. First of all, a \emph{linker} is a synthesis of the relative organization of $n$ strips: torsions and permutations in a planar projection (Fig.~\ref{fig:convention}). A \emph{mixer} is a linker ended by a joining chart that stretches and squeezes strips to a branch line. In the previous section, templates are only composed of one mixer defined by a linking matrix. We also define the concatenation of a torsion with a mixer and the concatenation of two mixers (see \cite{rosalie2013systematic,rosalie2015systematic} for more details). We recall that $\mathcal{X}$ designates the attractor, $T_\mathcal{X}$ its template and $T(\mathcal{X})$ the linking matrix that defines its template. Using these algebraic relations between mixers and torsions, we can relate the mixers of the previously studied attractors: \begin{equation} \begin{array}{ll} T(\mathcal{C}) = T(\mathcal{G}) & \\[0.5cm] T(\mathcal{E}_1) = T(\mathcal{E}_2)^p & \left[ \begin{array}{C{1.3em}C{1.3em}} -3 & -3 \\ -3 & -2 \end{array} \right\rsem = \left[ \begin{array}{C{1.3em}C{1.3em}} -2 & -3 \\ -3 & -3 \end{array} \right\rsem^p \\[0.5cm] T(\mathcal{B}) = [-5] \oplus T(\mathcal{A}) & \left[ \begin{array}{C{1.3em}C{1.3em}} -6 & -6 \\ -6 & -5 \end{array} \right\rsem = \left[ \begin{array}{C{1.3em}C{1.3em}} -5 & -5 \\ -5 & -5 \end{array} \right] + \left[ \begin{array}{C{1.3em}C{1.3em}} 0 & -1 \\ -1 & -1 \end{array} \right\rsem\\[0.5cm] T(\mathcal{E}_2) = [-2] \oplus T(\mathcal{A}) & \left[ \begin{array}{C{1.3em}C{1.3em}} -2 & -3 \\ -3 & -3 \end{array} \right\rsem = \left[ \begin{array}{C{1.3em}C{1.3em}} -2 & -2 \\ -2 & -2 \end{array} \right] + \left[ \begin{array}{C{1.3em}C{1.3em}} 0 & -1 \\ -1 & -1 \end{array} \right\rsem \end{array} \label{eq:torsion_mixer} \end{equation} \subsection{Subtemplates} A subtemplate is defined as follows by Ghrist {\it et al.} \cite{ghrist1997knots}: a \emph{subtemplate} $\mathcal{S}$ of a template $\mathcal{T}$, written $\mathcal{S} \subset \mathcal{T}$, is a topological subset of $\mathcal{T}$ which, equipped with the restriction of a semiflow of $\mathcal{T}$ to $\mathcal{S}$, satisfies the definition of a template. 
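Before identifying subtemplates, note that the concatenation relations \eqref{eq:torsion_mixer} can be checked mechanically. The sketch below (ours, assuming \texttt{numpy}) implements the torsion--mixer concatenation in the form that the numbers of \eqref{eq:torsion_mixer} and of \eqref{eq:BsubC} below jointly suggest: concatenating $k$ torsions adds $k$ to every entry of the linking matrix and, for an odd number of torsions, also reverses the left-to-right branch order (this is our reading; see \cite{rosalie2013systematic,rosalie2015systematic} for the full algebra).
\begin{verbatim}
import numpy as np

def torsion_concat(k, M):
    """Concatenate k torsions with the mixer M: add k to every entry;
    an odd number of torsions also reverses the branch order."""
    M = np.asarray(M)
    if k % 2 != 0:
        M = M[::-1, ::-1]
    return M + k

T_A = np.array([[0, -1], [-1, -1]])
print(torsion_concat(-5, T_A))   # [[-6 -6] [-6 -5]] = T(B)
print(torsion_concat(-2, T_A))   # [[-2 -3] [-3 -3]], cf. the E_2 relation
\end{verbatim}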
For the eight attractors previously studied ($\mathcal{A}$, $\mathcal{B}$, $\mathcal{C}$, $\mathcal{D}$, $\mathcal{E}_1$, $\mathcal{E}_2$, $\mathcal{F}$ and $\mathcal{G}$) we will demonstrate that their templates are subtemplates of the template of $\mathcal{C}$, made of one mixer defined by \begin{equation} \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}} 0 & -1 & -1 \\ -1 & -1 & -1 \\ -1 & -1 & 0 \end{array} \right\rsem\;. \end{equation} Using the linking matrices defining $T_\mathcal{A}$ and $T_\mathcal{C}$, we directly find that $T(\mathcal{A})$ is a submatrix of $T(\mathcal{C})$ given by the first two rows and columns \begin{equation} \left[ \begin{array}{C{1.3em}C{1.3em}} 0 & -1 \\ -1 & -1 \end{array} \right\rsem \subset \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}} 0 & -1 & -1 \\ -1 & -1 & -1 \\ -1 & -1 & 0 \end{array} \right\rsem\;. \end{equation} \begin{figure}[hbtp] \centering \begin{tabular}{cc} \includegraphics[height=.3\textheight]{rossler_C_template.ps} \put(-105,150){\large 0} \put(-63,150){\large 1} \put(-18,150){\large 2} & \includegraphics[height=.3\textheight]{rossler_A_subtemplate.ps} \\ (a) $T_\mathcal{C}$ & (b) $T_\mathcal{A} \subset T_\mathcal{C}$ \end{tabular} \vspace{-.5em} \caption{ Template of $\mathcal{A}$ is a subtemplate of the template of $\mathcal{C}$. } \label{fig:rossler_A_subtemplate} \end{figure} The strip organization of $T_\mathcal{A}$ is the same as that of the first two strips of $T_\mathcal{C}$. This means that $T_\mathcal{A}$ is a subtemplate of $T_\mathcal{C}$: $T_\mathcal{A} \subset T_\mathcal{C}$. This is illustrated in Fig.~\ref{fig:rossler_A_subtemplate}, where we only display the mixers and not the trivial part of the template that links the bottom to the top on the left side to give a clockwise flow, as shown in Fig.~\ref{fig:rossler_A_template}; this is also the case in the remainder of this article. We will use graphical representations of the templates and subtemplates because they detail the relation between a template and its subtemplates. These representations, combined with algebraic relations between linking matrices, prove that a template is a subtemplate of another template. \subsubsection{Banded chaos} \label{ssec:banded} Attractors $\mathcal{B}$, $\mathcal{E}_1$ and $\mathcal{E}_2$ display \emph{banded chaos} because they are composed of several strips, or bands, with writhes. We start with the template $T_\mathcal{B}$. We know that this template can be considered as five negative torsions before a ``horseshoe'' mechanism \eqref{eq:torsion_mixer}. Thus we have to find a subtemplate that goes through a ``horseshoe'' mechanism and has five negative torsions. Letellier {\it et al.} \cite{letellier1999topological} underline the fact that if a writhe is observed in an attractor bounded by a genus-one torus, this is equivalent to two torsions by isotopy; the sign of the torsions is the same as that of the writhe (see FIG.~3 and FIG.~4 of \cite{letellier1999topological} for additional details). As a consequence, a subtemplate with $n$ portions induces $n$ writhes that are equivalent to $2n$ torsions by isotopy; the sign of these $2n$ torsions is the sign of the permutations of the subtemplate portions. We build such a subtemplate: Fig.~\ref{fig:rossler_B_subtemplate}a shows $T_\mathcal{B}$ as a subtemplate of $T_\mathcal{C}$. 
\begin{figure}[hbtp] \centering \begin{tabular}{cc} \includegraphics[width=.3\textwidth]{rossler_B_subtemplate.ps} & \resizebox{.3\textwidth}{!}{\huge \begingroup% \makeatletter% \setlength{\unitlength}{371.83000488bp}% \makeatother% \begin{picture}(1,1.50888399)% \put(0,0){\includegraphics[width=\unitlength]{rossler_B_subtemplate_v2.ps}}% \put(0.11882467,1.41480716){\color[rgb]{0,0,0}\makebox(0,0)[b]{\smash{B}}}% \put(0.46188194,1.41480716){\color[rgb]{0,0,0}\makebox(0,0)[b]{\smash{C}}}% \put(0.88117426,1.41480716){\color[rgb]{0,0,0}\makebox(0,0)[b]{\smash{A}}}% \put(0.11882415,0.02351855){\color[rgb]{0,0,0}\makebox(0,0)[b]{\smash{B}}}% \put(0.46188194,0.02351855){\color[rgb]{0,0,0}\makebox(0,0)[b]{\smash{C}}}% \put(0.88117426,0.02351855){\color[rgb]{0,0,0}\makebox(0,0)[b]{\smash{A}}}% \end{picture}% \endgroup}\\ (a) $T_\mathcal{B} \subset T_\mathcal{C}$ & (b) $T_\mathcal{B}$ \end{tabular} \vspace{-.5em} \caption{ (a) Template of attractor $\mathcal{B}$ is a subtemplate of the template of $\mathcal{C}$. (b) Template of attractor $\mathcal{B}$ shaped as a subtemplate of $\mathcal{C}$. The arrow over $A$ refers to the Poincar\'e section \eqref{eq:rossler_B_section} with the $\rho_n$ orientation. } \label{fig:rossler_B_subtemplate} \end{figure} We use algebraic relations between linking matrices to validate subtemplates. Fig.~\ref{fig:rossler_B_subtemplate}b is the template of $\mathcal{B}$ shaped as a subtemplate of $\mathcal{C}$. When we established the template of $\mathcal{B}$, we chose the Poincar\'e section \eqref{eq:rossler_B_section}, which corresponds to the portion far from the inside of the attractor; this portion is labeled $A$ in Fig.~\ref{fig:rossler_B_subtemplate}b. We concatenate its components with respect to their relative order: from $A$ to $B$, then $B$ to $C$, and $C$ to $A$; these are, respectively, a mixer, a strip without torsion and a negative torsion. We finally consider the transformation by isotopy, which does not have an impact on the orientation of the strips. Thus we concatenate the $2n$ torsions after the concatenation of the components. As illustrated in Fig.~\ref{fig:rossler_B_subtemplate}b, there are two negative permutations ($B$ to $C$ over $A$ to $B$ and $C$ to $A$ over $A$ to $B$) that are equivalent to $2\times 2=4$ negative torsions (dotted circles of Fig.~\ref{fig:rossler_B_subtemplate}b). Consequently, we concatenate all these mixers and torsions \begin{equation} \left[ \begin{array}{C{1.3em}C{1.3em}} -1 & -1 \\ -1 & 0 \end{array} \right\rsem + \left[ -1 \right] + \left[ -2 \right] + \left[ -2 \right] = \left[ \begin{array}{C{1.3em}C{1.3em}} -6 & -6 \\ -6 & -5 \end{array} \right\rsem = T(\mathcal{B}) \label{eq:BsubC} \end{equation} and obtain the linking matrix defining the template of $\mathcal{B}$. Thus $T_\mathcal{B}$ is a subtemplate of $T_\mathcal{C}$. We now consider the two coexisting attractors $\mathcal{E}_1$ and $\mathcal{E}_2$. They have a similar structure and coexist in the phase space for distinct initial conditions. The mixer of $T_\mathcal{C}$ with three branches also has a symmetric structure: the middle of the second strip of the mixer $T(\mathcal{C})$ is a reflection symmetry axis, the left side being symmetric to the right side. To build $T_{\mathcal{E}_1}$ and $T_{\mathcal{E}_2}$ as subtemplates of $T_\mathcal{C}$, we take this symmetry into account. 
\begin{figure}[htbp] \centering \begin{tabular}{ccc} \includegraphics[width=.3\textwidth]{rossler_E1_subtemplate.ps} & \includegraphics[width=.3\textwidth]{rossler_E2_subtemplate.ps} & \includegraphics[width=.3\textwidth]{rossler_E1_E2_subtemplate.ps}\\ (a) $T_{\mathcal{E}_1} \subset T_\mathcal{C}$ & (b) $T_{\mathcal{E}_2} \subset T_\mathcal{C}$ & (c) $T_{\mathcal{E}_1}$ and $T_{\mathcal{E}_2}$ coexisting \end{tabular} \vspace{-.5em} \caption{ Coexisting templates $T_{\mathcal{E}_1}$ and $T_{\mathcal{E}_2}$ are subtemplates of the template of $\mathcal{C}$. } \label{fig:rossler_E_subtemplate} \end{figure} We propose these subtemplates as illustrated in Fig.~\ref{fig:rossler_E_subtemplate}. We compute the concatenation of the torsions and mixers that appear in these figures. For $T_{\mathcal{E}_1}$, we have two parts: one is a mixer and the other is a strip without torsion. These parts permute negatively once in a writhe; this algebraically corresponds to the concatenation of two negative torsions. Thus, the linking matrix of such a subtemplate (Fig.~\ref{fig:rossler_E_subtemplate}a) is \begin{equation} \left[ \begin{array}{C{1.3em}C{1.3em}} 0 & -1 \\ -1 & -1 \end{array} \right\rsem + \left[ -2 \right] = \left[ \begin{array}{C{1.3em}C{1.3em}} -2 & -3 \\ -3 & -3 \end{array} \right\rsem = T(\mathcal{E}_1)\;. \end{equation} Similarly, the linking matrix of $T_{\mathcal{E}_2}$ as a subtemplate (Fig.~\ref{fig:rossler_E_subtemplate}b) is \begin{equation} \left[ \begin{array}{C{1.3em}C{1.3em}} -1 & -1 \\ -1 & 0 \end{array} \right\rsem + \left[ -2 \right] = \left[ \begin{array}{C{1.3em}C{1.3em}} -3 & -3 \\ -3 & -2 \end{array} \right\rsem = T(\mathcal{E}_2)\;. \end{equation} These algebraic relations between template and subtemplate linking matrices, together with Fig.~\ref{fig:rossler_E_subtemplate}, prove that $T_{\mathcal{E}_1} \subset T_\mathcal{C}$ and $T_{\mathcal{E}_2} \subset T_\mathcal{C}$. Moreover, these two subtemplates are symmetric to each other by reflection and coexist in the template of $\mathcal{C}$ (Fig.~\ref{fig:rossler_E_subtemplate}c). \subsubsection{Concatenation of mixers} We now consider $T_\mathcal{F}$, the template of $\mathcal{F}$, made of four strips. In a previous paper \cite{rosalie2015systematic}, we demonstrated that the concatenation of two mixers is a mixer whose number of strips is the product of the numbers of strips of the two mixers. Thus, the concatenation of two mixers made of two branches is a mixer with four branches. Our hypothesis is that the four strips of $T_\mathcal{F}$ are the result of this process. To validate it, we draw its subtemplate by splitting the template of $\mathcal{C}$ into two symmetric parts, each containing a mixer; we obtain Fig.~\ref{fig:rossler_F_subtemplate}a. \begin{figure}[hbtp] \centering \begin{tabular}{cc} \includegraphics[height=.3\textheight]{rossler_F_subtemplate.ps} & \includegraphics[height=.3\textheight]{rossler_F_subtemplate_v2.ps}\\ (a) $T_\mathcal{F} \subset T_\mathcal{C}$ & (b) $T_\mathcal{F}$ \end{tabular} \vspace{-.5em} \caption{ (a) The template of $\mathcal{F}$ is a subtemplate of the template of $\mathcal{C}$. (b) $T_\mathcal{F}$ with two mixers. } \label{fig:rossler_F_subtemplate} \end{figure} As before, we decompose this subtemplate (Fig.~\ref{fig:rossler_F_subtemplate}b) into parts: each of the two parts contains a mixer, and these parts permute negatively once. Thus, we concatenate a mixer with another mixer and two negative torsions (cf.~Sec.~\ref{ssec:banded}). 
The first concatenation gives a mixer defined by a linking matrix; the algebraic relations necessary to obtain this matrix are detailed in \cite{rosalie2015systematic}. The linking matrix of $T_\mathcal{F}$ as a subtemplate (Fig.~\ref{fig:rossler_F_subtemplate}a) is: {\small \begin{equation} \begin{array}{l} \left[ \begin{matrix} -1 & -1 \\ -1 & 0 \end{matrix}\right\rsem + \left[ \begin{matrix} 0 & -1 \\ -1 & -1 \end{matrix}\right\rsem + [-2] \\ \quad =\left[\left| \begin{array}{C{1.3em}C{1.3em}C{1.3em}C{1.3em}} -1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 \\ -1 & -1 & 0 & 0 \\ -1 & -1 & 0 & 0 \end{array}\right| + \left| \begin{array}{C{1.3em}C{1.3em}C{1.3em}C{1.3em}} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{array}\right| + \left| \begin{array}{C{1.3em}C{1.3em}C{1.3em}C{1.3em}} -1 & -1 & -1 & -1 \\ -1 & 0 & 0 & -1 \\ -1 & 0 & 0 & -1 \\ -1 & -1 & -1 & -1 \end{array}\right| \right\rsem + [-2] \\ \quad = \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}C{1.3em}} -2 & -2 & -2 & -2 \\ -2 & -1 & -1 & -1 \\ -2 & -1 & 0 & -1 \\ -2 & -1 & -1 & -1 \end{array}\right\rsem + [-2] = \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}C{1.3em}} -4 & -4 & -4 & -4 \\ -4 & -3 & -3 & -3 \\ -4 & -3 & -2 & -3 \\ -4 & -3 & -3 & -3 \end{array}\right\rsem = T(\mathcal{F}) \end{array} \end{equation} } This algebraic relation between template and subtemplate linking matrices, associated with Fig.~\ref{fig:rossler_F_subtemplate}, proves that $T_\mathcal{F} \subset T_\mathcal{C}$. We now consider the attractor $\mathcal{D}$: we have $T_\mathcal{D} \subset T_\mathcal{F}$ directly from their mixers \begin{equation} \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}} -4 & -4 & -4 \\ -4 & -3 & -3 \\ -4 & -3 & -2 \end{array} \right\rsem \subset \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}C{1.3em}} -4 & -4 & -4 & -4 \\ -4 & -3 & -3 & -3 \\ -4 & -3 & -2 & -3 \\ -4 & -3 & -3 & -3 \end{array} \right\rsem\;; \end{equation} this is illustrated in Fig.~\ref{fig:rossler_D_subtemplate}. We previously obtained that $T_\mathcal{F} \subset T_\mathcal{C}$; thus $T_\mathcal{D} \subset T_\mathcal{C}$. \begin{figure}[hbtp] \centering \includegraphics[height=.3\textheight]{rossler_D_subtemplate.ps} \caption{ Template of $\mathcal{D}$ is a subtemplate of the template of $\mathcal{F}$ and consequently, it is also a subtemplate of the template of $\mathcal{C}$. } \label{fig:rossler_D_subtemplate} \end{figure} Consequently, we have proved that the six templates of attractors $\mathcal{A}$, $\mathcal{B}$, $\mathcal{D}$, $\mathcal{E}_1$, $\mathcal{E}_2$ and $\mathcal{F}$ are subtemplates of the template of the attractor $\mathcal{C}$; it is a template with six subtemplates. We recall that $\mathcal{G}$ and $\mathcal{C}$ have the same template ($T_\mathcal{G}=T_\mathcal{C}$). \section{A template for the whole bifurcation diagram} We have obtained a unique template containing the eight attractor templates. We now consider the whole bifurcation diagram (Fig.~\ref{fig:rossler_bifurcation}) and not only specific attractors. In this section, we will show that the template of $\mathcal{C}$ contains the templates of all attractors for any parameter value taken from this bifurcation diagram. We use the Poincar\'e section \eqref{eq:rossler_X_section} and build return maps using $y_n$ for $\alpha \in ]-2;1.8[$ whenever an attractor is the solution. We associate a symbolic dynamics with the three symbols ``$0$'', ``$1$'', ``$2$'' of $T_\mathcal{C}$. 
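Once the separators of the partition are known (they are obtained below by linear regression, cf.~\eqref{eq:partition}), the symbolic encoding of the section crossings is immediate. The following sketch (ours) maps each $y_n$ value to one of the three symbols for a given $\alpha$.
\begin{verbatim}
def encode(y_values, alpha):
    """Symbolic encoding of Poincare section crossings: the symbols
    increase as y_n decreases (separators of eq:partition)."""
    y01 = 1.43638*alpha - 6.76016   # splits branches "0" and "1"
    y12 = 2.18237*alpha - 11.1289   # splits branches "1" and "2"
    return [0 if y > y01 else 1 if y > y12 else 2 for y in y_values]
\end{verbatim}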
Note that Barrio {\it et al.} \cite{barrio2012topological} also use this process to study return maps of a R\"ossler system from a Lyapunov diagram. The authors display return maps with superstability curves and coexisting stable points. Here we prefer to collect extremal points to build a partition of the bifurcation diagram. \begin{figure}[htpb] \centering \includegraphics[width=.7\textwidth]{rossler_bif_02.eps} \caption{ Partition of the bifurcation diagram when $\alpha$ varies, built using first return maps on $y_n$ of the Poincar\'e section \eqref{eq:rossler_X_section}. This partition gives a symbolic dynamics with three symbols ``$0$'', ``$1$'', ``$2$'' depending on $\alpha$. } \label{fig:rossler_bifurcation_symbolic} \end{figure} In the diagram of Fig.~\ref{fig:rossler_bifurcation_symbolic}, we indicate the values of $y_n$ splitting the return maps into two or three parts. We remind the reader that the maps are oriented from the inside to the outside of the attractor. Thus, the branches are labelled with symbols that increase as $y_n$ decreases. Fig.~\ref{fig:rossler_bifurcation_symbolic} reveals that the separator values are linear in $\alpha$. We denote by $y_{0|1}(\alpha)$ the value of $y_n$ that splits branches ``$0$'' and ``$1$'', and by $y_{1|2}(\alpha)$ the value of $y_n$ that splits branches ``$1$'' and ``$2$''. A linear regression gives \begin{equation} \begin{array}{l} y_{0|1}(\alpha) = 1.43638 \alpha -6.76016 \\ y_{1|2}(\alpha) = 2.18237 \alpha -11.1289 \;. \end{array} \label{eq:partition} \end{equation} Up to this point, if there is an attractor solution, its orbits can be encoded with these symbols using the previous equations. This also requires the use of the Poincar\'e section \eqref{eq:rossler_X_section}. For a given range of the bifurcation parameter ($\alpha \in ]-2;1.8[$), the parameters of the R\"ossler system depend on $\alpha$: $a(\alpha)$, $b(\alpha)$ and $c(\alpha)$. The fixed points depend on the parameters and the Poincar\'e section depends on the fixed points. Thus, we obtain a Poincar\'e section and its partition \eqref{eq:partition} depending on $\alpha$, while the template is defined by the linking matrix \begin{equation} \left[ \begin{array}{C{1.3em}C{1.3em}C{1.3em}} 0 & -1 & -1 \\ -1 & -1 & -1 \\ -1 & -1 & 0 \end{array} \right\rsem\;. \end{equation} The main result is that the topological characterization of chaotic attractors can be extended to a description of various attractors whose parameters come from one bifurcation diagram. In this bifurcation diagram, we show that there are regimes where the chaotic mechanisms are topologically equivalent ($T_\mathcal{C} = T_\mathcal{G}$) or symmetric ($T_{\mathcal{E}_1}$ and $T_{\mathcal{E}_2}$), and that they are subsets of the same chaotic mechanism. The point is that our work can help to understand the complex structure of attractors by considering them as subtemplates of their neighbors (in terms of the bifurcation parameter). This also enlarges the possibility of using topological characterization to describe not just one attractor but an entire bifurcation diagram. \section{Conclusion} In this paper we study eight attractors of the R\"ossler system. The parameter values of these attractors come from a bifurcation diagram that exhibits various dynamics such as coexisting attractors. For each attractor we apply the topological characterization method that gives us a template of the attractor. 
These templates detail the chaotic mechanisms, which are only made of a stretching and folding mechanism followed by a squeezing mechanism with two, three or four strips. The second part of this paper is dedicated to the proof that the eight templates are subtemplates of a unique template: the template of $\mathcal{C}$. The main result here is that a template is no longer a tool to describe only one attractor but also a set of neighbouring attractors (in the parameter space). Thus, for a bifurcation diagram, we build a partition using the symbols of $T_\mathcal{C}$. This partition over the whole diagram gives a global structure of the attractors for a range of parameters. This better understanding of the structure of a bifurcation diagram can help researchers who want to explore the behaviour of their system, especially if it exhibits chaotic properties. For instance, Matthey \textit{et al.} \cite{matthey2008experimental} designed a robot using coupled R\"ossler oscillators to simulate its locomotion. A similar theoretical analysis of their system can provide various sets of parameters with specific chaotic properties that might induce new locomotion patterns. A partitioned bifurcation diagram details the various non-equivalent dynamical behaviors of the system in order to find them. This work on templates of attractors from a unique bifurcation diagram is a first step that can lead to a description of manifolds using templates. It is also a new way to classify templates, grouping them as subtemplates rather than claiming that there exist six new attractors of the R\"ossler system. To conclude, this work applies the topological characterization method to a set of attractors from a bifurcation diagram. The partition of a bifurcation diagram associated with a unique template is a new tool to describe the global dynamical properties of a system while a parameter is varied. In future work, we will apply this method to attractors bounded by higher-genus tori to highlight how symmetry breaking or the number of template branches are related in a bifurcation diagram.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A statistical mechanical model is said to be \textit{disorder irrelevant} if introducing a sufficiently small but fixed level of disorder to the system will have a vanishing influence on the behavior of the model in a large-scale limit~\cite{Giacomin}. In other terms, the presence of a weak enough disorder is overpowered by the entropy as the system grows. Alternatively, if the perturbative effect of any fixed disorder strength increases as the system is scaled up, the model is classified as~\textit{disorder relevant}. Disorder relevance opens up the possibility that the system can be driven towards a nontrivial limit through an appropriate weak-disorder/coarse-graining transformation for which the limit is an attractive fixed point within some space of models~\cite{CSZ2}. Borderline cases of disorder relevant models are referred to as \textit{marginally relevant}, and their renormalization procedures tend to require scalings with slowly-varying functions rather than power laws and to exhibit nonlinear behaviors that are precluded by a more robust form of disorder. In this article, we will construct and analyze a one-parameter ($r\in {\mathbb R}$) family of continuum random polymer measures (CRPMs), $\mathbf{M}_r$, whose laws are derived through a weak-disorder limiting regime introduced in~\cite{Clark1,Clark3} for models of random polymers on hierarchical graphs with marginally relevant disorder. This scaling limit is similar to the critical regime for (2+1)-dimensional polymers studied by Caravenna, Sun, and Zygouras in the article~\cite{CSZ4}, which extends their previous work~\cite{CSZ0,CSZ1,CSZ2,CSZ3} and is related to the recent result~\cite{GQT} by Gu, Quastel, and Tsai on the two-dimensional stochastic heat equation at criticality. The weak-disorder regime for (2+1)-dimensional polymers poses fundamentally new challenges compared with the disorder relevant (1+1)-dimensional polymer model studied by Alberts, Khanin, and Quastel~\cite{alberts,alberts2}, where the convergence of the partition functions can be handled through a term-by-term analysis of polynomial chaos expansions that limit to corresponding Wiener chaos expansions. The exact renormalization symmetry baked into our hierarchical model allows us to proceed further in developing a theory for the CRPMs $(\mathbf{M}_r)_{r\in {\mathbb R}}$ in this setting than has currently been achieved for the marginally relevant (2+1)-dimensional model at criticality, and the results here suggest some ideas for what to expect in general for similar critical continuum polymer models. The disordered measures $\mathbf{M}_r$ act on a space of directed paths $\Gamma$ crossing a compact diamond fractal $D$ having Hausdorff dimension two. Each path $p\in \Gamma$ is an isometric embedding of the unit interval $[0,1]$ into $D$ that bridges points $A$ and $B$ on opposite ends of the fractal. An analogous theory for the subcritical case in which the Hausdorff dimension of the diamond fractal is $<2$ was developed in~\cite{Clark2} for a family $M_{\beta}$ of CRPMs indexed by $\beta\geq 0$ whose laws arise from weak-disorder scaling limits of disorder relevant polymer models on hierarchical graphs~\cite{US} (with either vertex or edge disorder). The motivation for~\cite{Clark2} was to define a counterpart to the continuum (1+1)-dimensional polymer~\cite{alberts2} in the setting of diamond hierarchical graphs. 
The measure $M_{\beta}$ has expectation $\mu$, where $\mu$ is a canonical `uniform' probability measure on the space of paths $\Gamma$, and $M_{\beta}$ can be constructed as a subcritical Gaussian multiplicative chaos (GMC) formally given by \begin{align}\label{GMC} M_{\beta}(dp)\,=\,e^{\beta \mathbf{W}(p)-\frac{\beta^2}{2}\mathbb{E}[\mathbf{W}^2(p)] }\mu(dp)\hspace{.5cm}\text{for the Gaussian field}\hspace{.5cm}\mathbf{W}(p)\,:=\,\int_0^1W\big(p(t) \big)dt \end{align} over $p\in\Gamma$, where $W\equiv \{W(x)\}_{x\in D}$ is a Gaussian white noise over $D$. The point-wise correlations of $M_{\beta}$ can be expressed as \begin{align*} \mathbb{E}\big[M_{\beta}(dp)M_{\beta}(dq) \big]\,=\,e^{\beta^2 T(p,q) }\mu(dp)\mu(dq) \hspace{.5cm}\text{for}\hspace{.3cm}p,q\in \Gamma \,, \end{align*} where $T(p,q)$ is the \textit{intersection time}, i.e., a quantity measuring the fractal set of intersection times $I_{p,q}:=\{t\in [0,1]\,|\,p(t)=q(t) \} $ between paths $p$ and $q$. When the diamond fractal $D$ has Hausdorff dimension $d\in (1,2)$, the set $I_{p,q}$ either has Hausdorff dimension $2-d$ or is finite (negligible intersection set) for $\mu\times \mu$-a.e.\ pair of paths $(p,q)$. The above subcritical GMC construction from the pure measure $\mu$ and the Gaussian white noise $W$ breaks down for the critical CRPMs, $\mathbf{M}_r$. One reason for this constructive limitation is that the pure product $\mu\times \mu$ is supported on the set of pairs of paths $(p,q)$ having trivial intersections ($I_{p,q}$ is finite) when $D$ has Hausdorff dimension two. The construction of the CRPMs, $\mathbf{M}_r$, in this article is a straightforward task using the limiting partition function laws derived in~\cite{Clark3}. Beyond outlining some of the basic features of $\mathbf{M}_r$, such as that $ \mathbf{M}_r$ is a.s.\ non-atomic and mutually singular to $\mathbb{E}[\mathbf{M}_r]=\mu$, our main focus is on characterizing the typical size of intersection-times sets $I_{p,q}$ for $(p,q)\in \Gamma\times \Gamma$ in the support of $\mathbf{M}_r\times \mathbf{M}_r$. When not finite, the sets $I_{p,q}$ are a.s.\ uncountable but of Hausdorff dimension zero. A more refined understanding of the size of these sparse sets can be achieved using generalized Hausdorff measures~\cite{Hausdorff} defined in terms of logarithmic dimension functions $h_{\epsilon}(a)=\frac{1}{|\log(1/a)|^\epsilon}$ for exponent $\epsilon>0$ in place of standard power functions $h_\alpha(a)=a^\alpha$; see Definition~\ref{DefLogHaus}. Generalized Hausdorff measures of this form have been considered, for instance, in the theory of Furstenberg-type sets~\cite{Rela,Rela2}. The trivial-to-nontrivial gap in the behavior of $I_{p,q}$ between the pure product measure $\mu\times \mu$ and realizations of the disordered product measures $\mathbf{M}_r\times \mathbf{M}_r$ is a strong localization property that suggests $\mathbf{M}_r$ is supported on a set of paths restricted to a measure zero subspace of $D$. This effective constriction of the space available to paths drives them into having richer intersection sets when chosen independently according to a fixed realization of $\mathbf{M}_r$. As $R\searrow -\infty$ the law of the random measure $\mathbf{M}_R$ converges to the deterministic pure measure $\mu$ on paths. In heuristic terms, a second reason that the CRPM $\mathbf{M}_r$ does not fit into the mold of a subcritical GMC on $\mu$ is that it would require an infinite coupling strength $\beta=\infty$ to a field. 
There is, however, a conditional GMC construction of $\mathbf{M}_r$ from $\mathbf{M}_R$ for any $R\in (-\infty, r)$, which is discussed in~\cite{Clark4}. To summarize the construction, we write \begin{align}\label{ReGMC} \mathbf{M}_{r}(dp)\,\stackrel{\mathcal{L}}{=}\,e^{\sqrt{r-R}\mathbf{W}_{ \mathsmaller{\mathbf{M}_{R}} }(p)-\frac{r-R}{2}\mathbb{E}[ \mathbf{W}^2_{\mathsmaller{\mathbf{M}_{R}} }(p) ] }\mathbf{M}_{R}(dp)\,, \end{align} where, roughly, $\mathbf{W}_{ \mathbf{M}_{R} }(p)$ is, conditionally on $\mathbf{M}_{R}$, a Gaussian field on $( \Gamma, \mathbf{M}_{R} )$ with correlation kernel $$ \mathbb{E}\big[\mathbf{W}_{ \mathbf{M}_{R} }(p)\mathbf{W}_{ \mathbf{M}_{R} }(q)\,\big|\,\mathbf{M}_{R} \big]\,=\,T(p,q)\,, $$ for an intersection time $T(p,q)$ defined in Section~\ref{SecTwoPaths} that measures the size of the Hausdorff dimension zero sets $I_{p,q}$. An analogous construction of the subcritical GMC $M_{\beta}$ in~(\ref{GMC}) from $M_{\beta'}$ for $\beta'\in [0,\beta)$ holds, but an obvious difference in our critical model is that the parameter $R$ is not bounded from below. In particular, the coupling strength $\beta=\sqrt{r-R}$ in~(\ref{ReGMC}) tends to infinity as $R\searrow -\infty$ and the law of $\mathbf{M}_{R}$ approaches $\mu$. Although the conditional GMC structure is of mathematical interest in itself, it also enables an easy proof that the continuum polymer model transitions to strong disorder as $r\nearrow \infty$ in the sense that the total mass, $ \mathbf{M}_{r}(\Gamma)$, converges in probability to $0$. \subsection{Article organization} This article has the following organization: Sections~\ref{SecHDG}-\ref{SecDHL} outline the basic definitions and notation related to diamond fractals and their paths space, and Sections~\ref{SecCoMe}-\ref{SectionHilbert} state the main results regarding the construction and properties of the continuum random polymer measures (CRPMs). Section~\ref{SectionDHLConst} formulates the diamond fractal-related structures more precisely. Sections~\ref{SecCorrMeas}-\ref{SectionLast} contain the proofs of propositions from Section~\ref{SecMainResults}. \section{Continuum random polymers on the diamond hierarchical lattice}\label{SecMainResults} \subsection{Construction of the hierarchical diamond graphs}\label{SecHDG} With a branching number $b\in \{2,3,\ldots\}$ and a segmenting number $s\in \{2,3,\ldots\}$, we define the hierarchical diamond graphs $(D_{n}^{b,s})_{n\in \mathbb{N}}$ inductively as follows: \begin{itemize} \item The first diamond graph $D_{1}^{b,s}$ is defined by $b$ parallel branches connecting two nodes, $A$ and $B$, wherein each branch is formed by $s$ edges running in series. \item The graph $D_{n+1}^{b,s}$ is defined from $D_{n}^{b,s}$ by replacing each edge on $D_{1}^{b,s}$ by a nested copy of $D_{n}^{b,s}$. \end{itemize} We can extend the definition of $D_n^{b,s}$ consistently to the $n=0$ case by defining $D_{0}^{b,s}$ as having a single edge that connects $A$ and $B$. The illustration below depicts the first few diamond graphs with $(b,s)=(2,3)$. \begin{center} \includegraphics[scale=.8]{BranchingGraphsPrime3Crop.pdf} \end{center} A \textit{directed path} on $D_{n}^{b,s}$ is defined as a function $p:\{1,\ldots, s^n\}\rightarrow E^{b,s}_n$ for which $p(1)$ is incident to $A$, $p(s^n)$ is incident to $B$, and successive edges $p(k)$ and $p(k+1)$ share a common vertex for $1\leq k<s^n$. Thus the path starts at $A$ and moves progressively up to $B$.
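As a quick combinatorial companion to this recursive construction, the edge and directed-path counts of $D_{n}^{b,s}$ obey simple recursions: each inductive step multiplies the edge count by $bs$, and a directed path is specified by a branch of $D_{1}^{b,s}$ together with a path through each of the $s$ nested copies along that branch. The following minimal sketch (the function name and defaults are our illustrative choices, not notation from the text) computes both counts:
\begin{verbatim}
def diamond_counts(n, b=2, s=3):
    """Edge and directed-path counts for the diamond graph D_n^{b,s}."""
    edges, paths = 1, 1              # D_0: a single edge from A to B, one path
    for _ in range(n):
        # D_{k+1} replaces each edge of D_1 by a nested copy of D_k:
        edges *= b * s               # so |E_{k+1}| = b*s*|E_k|, i.e. |E_n| = (bs)^n
        paths = b * paths ** s       # choose a branch, then a D_k-path per segment
    return edges, paths

print(diamond_counts(3))             # (216, 8192) for (b, s) = (2, 3)
\end{verbatim}
In particular $|E^{b,s}_n|=(bs)^n$, and in the $b=s$ case emphasized below the path count satisfies the recursion used later in Section~\ref{SecCorrMeas}.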
The set of directed paths on $D_{n}^{b,s}$ is denoted by $\Gamma_{n}^{b,s}$. \vspace{.4cm} \subsection{Hierarchical diamond graph notation}\label{SecContinuum} The hierarchical diamond graphs are canonically embedded on a compact fractal with Hausdorff dimension $(\log s+\log b )/\log s$ that we refer to as the \textit{diamond hierarchical lattice} (DHL). Before discussing the DHL further, we will prune and then extend our diamond graph notations. For the remainder of this article, we will focus on the Hausdorff dimension two case of the DHL in which the segmenting and branching parameters are equal ($b=s$) and treat $b\in \{2,3,\ldots\}$ as a fixed, underlying parameter that will not be appended to notations for objects depending on it, e.g., $D^{b,b}_n\equiv D_n$. For easy reference, we list the following notations relating to the diamond graph, $D_n$: \begin{align*} \vspace{.3cm}&V_n \hspace{1.8cm} \text{Set of vertex points on $D_n$}& \\ &E_{n} \hspace{1.75cm} \text{Set of edges on the graph $D_{n}$}& \\ &\Gamma_n \hspace{1.8cm} \text{Set of directed paths on $D_{n}$}&\\ &[\textbf{p}]_N \hspace{1.5cm} \text{The path in $\Gamma_N$ enveloping the path $\textbf{p}\in \Gamma_n $ where $n>N$}& \end{align*} The following are a few basic observations about the diamond graphs that derive from their recursive construction: for $n>N$, \begin{itemize} \item $V_N$ is canonically embedded in $V_n$, \item $E_N$ determines a canonical equivalence relation on $E_n$, and \item $\Gamma_N$ determines a canonical equivalence relation on $\Gamma_n$. \end{itemize} \subsection{Diamond hierarchical lattice}\label{SecDHL} The definitional interpretation of the DHL, $D$, that we outline here was introduced in~\cite{Clark2}. Under this point of view, $D$ is a compact metric space on which each directed path, $p\in \Gamma$, is an isometric embedding $p:[0,1]\rightarrow D$ with $p(0)=A$ and $p(1)=B$. Thus $D$ is a network of interweaving copies of $[0,1]$ and distances are measured with a travel metric. We make the definitions more precise in Section~\ref{SectionDHLConst}, and, for now, we extend our notations as follows: \begin{align*} &V \hspace{1.73cm} \text{Set of vertex points on $D$}& \\ &E \hspace{1.7cm} \text{Complement of $V$ in $D $ }& \\ &\Gamma \hspace{1.8cm} \text{Set of directed paths on $D$}& \\ &D_{i,j}\hspace{1.36cm}\text{First-generation embedded copies of $D$ on the $j^{th}$ segment of the $i^{th}$ branch}& \\ &\nu \hspace{1.81cm} \text{Uniform probability measure on $D$} \\ &\mu \hspace{1.81cm} \text{Uniform probability measure on $\Gamma$}&\\ &\mathcal{B}_{\Gamma} \hspace{1.61cm} \text{Borel $\sigma$-algebra on $\Gamma$}& \\ &[p]_n \hspace{1.47cm} \text{The path in $ \Gamma_n $ enveloping the path $p\in \Gamma$}& \end{align*} The following are some canonical identifications between the diamond graph structures and subsets of the DHL and its path space. \begin{itemize} \item $V$ is a countable, dense subset of $D$ that is identifiable with $\cup_{n=1}^\infty V_n$. \item The edge set $E_n$ defines an equivalence relation on $E$ in which elements of $E_n$ correspond to cylinder subsets of $E$. \item The path set $\Gamma_n$ defines an equivalence relation on $\Gamma$ in which each element in $\Gamma_n$ corresponds to a cylinder subset of $\Gamma$. \item Under the identifications above, the measures $(D,\nu)$ and $(\Gamma,\mu)$ assign weights $ \nu(\mathbf{e})=1/|E_n|$ and $\mu(\mathbf{p})=1/|\Gamma_n|$ to the cylinder sets $\mathbf{e}\in E_n$ and $\mathbf{p}\in \Gamma_n$, respectively.
\end{itemize} \begin{remark}\label{RemarkConcat} Let $\big(\Gamma,\mu^{(i,j)}\big)$ be copies of $(\Gamma,\mu)$ corresponding to the embedded subcopies, $D_{i,j}$, of $D$. The path space $(\Gamma,\mu)$ can be decomposed as \begin{align*} \mu\,=\,\,\frac{1}{b}\sum_{i=1}^b \prod_{j=1}^b \mu^{(i,j)} \hspace{1cm}\text{under the identification}\hspace{1cm} \Gamma\,\equiv\, \bigcup_{i=1}^{b}\bigtimes_{j=1}^b \Gamma \, \end{align*} by way of $b$-fold concatenation of the paths. \end{remark} \begin{remark}\label{RemarkCylinder} For $n\in \mathbb{N}$, there is a canonical bijection between $\Gamma$ and $\Gamma_n\times \bigtimes_{k=1}^{b^n}\Gamma $ in which $p\in \Gamma$ corresponds to the $(b^n+1)$-tuple $\big([p]_n;\, p_1^{(n)} ,\ldots ,p_{b^n}^{(n)}\big)$, where \begin{itemize} \item $[p]_n\in \Gamma_n$ is the $n^{th}$ generation coarse-grained version of the path $p$ referred to above, and \item $p_j^{(n)}\in \Gamma$ is a dilation of the part of the path $p$ through the shrunken, embedded copy of the DHL corresponding to the edge $[p]_n(j)\in E_n$. \end{itemize} Thus any $\mathbf{p}\in \Gamma_n$ is identified with a cylinder set $\{p\in \Gamma\,|\,[p]_n=\mathbf{p} \}$. \end{remark} The following proposition implies that two paths $p,q\in \Gamma$ chosen uniformly at random a.s.\ have a finite (trivial) intersection set. This contrasts with the DHL in the case $b<s$, for which there is a positive probability that the set of intersection times will have Hausdorff dimension $(\log s -\log b)/\log s$, and thus be uncountably infinite~\cite{Clark2}. \begin{proposition}\label{PropPathInter} If $p\in \Gamma$ is fixed and $q\in \Gamma$ is chosen uniformly at random, i.e., according to the measure $\mu(dq) $, then the set of intersection times $I_{p,q}:=\{t\in [0,1]\,|\,p(t)=q(t) \}$ is a.s.\ finite. Moreover, the intersection points $p(t)=q(t)\in D$ occur only at the vertex points of $D$. \end{proposition} \begin{proof}For $p,q\in \Gamma$ and $n\in \mathbb{N}$, let $\xi_n(p,q)$ denote the number of graphical edges shared by the discrete paths $[p]_n,[q]_n\in \Gamma_n$. It suffices to show that the sequence $\big(\xi_n(p,q)\big)_{n\in \mathbb{N}}$ has only finitely many nonzero terms for $\mu$-a.e.\ $q\in \Gamma$. The sequence $\xi_n (p,q)$ can be understood as the number of members at generation $n\in \mathbb{N}_0$ in a simple Markovian population model that begins with a single member ($\xi_0(p,q)=1 $) and where each member of the population independently has either no children with probability $\frac{b-1}{b}$ or $b$ children with probability $\frac{1}{b}$; in particular, the expected number of children per member is $1$, so the population process is critical. If $\frak{p}_{n}$ denotes the probability of extinction by generation $n$, then $\frak{p}_{0}=0$ and, by hierarchical symmetry, $\{\frak{p}_{n}\}_{n\in \mathbb{N}_0}$ satisfies the recursive relation $\frak{p}_{n+1}=\psi(\frak{p}_{n})$ for $\psi:[0,1]\rightarrow [0,1]$ defined by $\psi(x):=\frac{b-1}{b}+\frac{1}{b}x^b $. The map $\psi$ has $x=1$ as its unique fixed point in $[0,1]$, and $\psi(x)>x$ for $x\in [0,1)$, so the increasing sequence $\frak{p}_{n}$ converges to $1$ as $n\rightarrow\infty$. Hence the probability of eventual extinction is $1$. \end{proof} \begin{corollary}\label{CorrMuMu} For $(p,q)\in \Gamma\times \Gamma$, define $\xi_n(p,q)\in \mathbb{N}_0$ as the number of edges shared by the coarse-grained paths $[p]_n,[q]_n\in \Gamma_n$. The set, $\mathbf{S}_\emptyset $, of pairs $(p,q)\in \Gamma\times \Gamma$ such that $ \xi_n(p,q)=0$ for large enough $n$ is a full measure set for $\mu\times \mu$.
\end{corollary} \subsection{Correlation measure}\label{SecCoMe} In Section~\ref{SecCDRPDef} we will introduce a canonical family of random measures $(\mathbf{M}_{r})_{r\in {\mathbb R}}$ on $\Gamma$ that emerge as a continuum limit of models for random polymers on diamond graphs. First we define a function $R:{\mathbb R}\rightarrow [0,\infty)$ that gives the variance of the total mass of $\mathbf{M}_{r}$ and a measure $\upsilon_r$ on $\Gamma\times \Gamma$ that characterizes the local correlations of $\mathbf{M}_{r}$: $$R(r)= \textup{Var}\big( \mathbf{M}_{r}(\Gamma) \big)\hspace{1cm} \text{and}\hspace{1cm} \upsilon_r(dp,dq)=\mathbb{E}\big[ \mathbf{M}_{r}(dp) \mathbf{M}_{r}(dq) \big]\,. $$ The following lemma was proven in~\cite{Clark1}. \begin{lemma}[total mass variance function]\label{LemVar}For $b\in \{2,3,\ldots\}$, there exists a unique continuously differentiable increasing function $R:{\mathbb R}\rightarrow {\mathbb R}_+$ satisfying the properties (I)-(III): \begin{enumerate}[(I)] \item Composition with the map $M_{b}(x):=\frac{1}{b}[(1+x)^b-1]$ translates the parameter $r\in {\mathbb R}$ by $1$:\, $ M_{b}\big(R(r)\big)\,=\, R(r+1) $. \item As $r\rightarrow \infty$, $R(r)$ grows without bound. As $r\rightarrow -\infty$, $R(r)$ has the vanishing asymptotics $ R(r)=-\frac{ \kappa^2 }{ r }+ \frac{ \kappa^2\eta\log(-r) }{ r^2 }+\mathit{O}\big( \frac{\log^2(-r)}{r^3} \big) $. \item The derivative $R'(r)$ admits the limiting form $ R'(r)\,=\,\lim_{n\rightarrow \infty}\frac{\kappa^2}{n^2}\prod_{k=1}^n\big(1+R(r-k)\big)^{b-1} $. \end{enumerate} \end{lemma} \begin{remark}\label{RemarkDerR} Notice that applying the chain rule to the recursive relation (I) implies the identity $ R'(r)\,=\,R'(r-n) \prod_{k=1}^n\big( 1+R(r-k)\big)^{b-1} $. Thus property (III) above is equivalent to stating that $R'(r-n)\,=\,\frac{\kappa^2}{n^2}+\mathit{o}\big( \frac{1}{n^2} \big)$ with large $n$. \end{remark} \begin{lemma}[correlation measure]\label{LemCorrelate} Let $R:{\mathbb R}\rightarrow {\mathbb R}_+$ be defined as in Lemma~\ref{LemVar}. The following statements hold for any $r\in {\mathbb R}$. \begin{enumerate}[(i)] \item There is a unique measure $\upsilon_{r}$ on $\Gamma\times \Gamma$ such that for any two cylinder sets $\mathbf{p},\mathbf{q}\in \Gamma_{n}$ with $n\in \mathbb{N}_0$ \begin{align}\label{UpsilonCond} \upsilon_{r}(\mathbf{p}\times \mathbf{q} )\,=\, \frac{1}{|\Gamma_n|^2} \big(1+R(r-n)\big)^{\xi_n(\mathbf{p},\mathbf{q}) } \,, \end{align} where $\xi_n(\mathbf{p},\mathbf{q}) $ is the number of edges shared by the paths $\mathbf{p}$ and $\mathbf{q}$. The marginals of $(\Gamma\times \Gamma, \upsilon_r)$ are both equal to $\big(1+R(r)\big) \mu$. \item The Lebesgue decomposition of $(\Gamma\times \Gamma,\upsilon_{r})$ with respect to the product measure $\mu\times \mu$ is given by $\upsilon_{r}=\mu\times \mu\,+\,R(r)\rho_r$, where $\rho_r$ is a probability measure on $\Gamma\times \Gamma$ that is supported on the set of pairs $(p,q)$ such that $\xi_n(p,q )>0$ for all $n$, i.e., the complement of $\mathbf{S}_{\emptyset}$, as defined in Corollary~\ref{CorrMuMu}. The marginals of $(\Gamma\times \Gamma, \rho_r)$ are both equal to $\mu$. \end{enumerate} \end{lemma} \begin{remark}\label{RemarkTotalMass} Note that we get $\upsilon_r(\Gamma\times \Gamma )=1+R(r)$ by applying~(\ref{UpsilonCond}) with $n=0$.
\end{remark} \begin{remark}\label{RemarkCond} The measure $\rho_r$ on $\Gamma\times \Gamma$ from part (ii) of Lemma~\ref{LemCorrelate} can be defined through conditioning the correlation measure $\upsilon_r$ on the event $\mathbf{S}_{\emptyset}^c$, i.e., $\rho_r(A)=\upsilon_r(A\cap \mathbf{S}_{\emptyset}^c)/\upsilon_r(\mathbf{S}_{\emptyset}^c) $ for $A\in \mathcal{B}_{\Gamma\times\Gamma}$. Thus $\rho_r(dp,dq)$ defines a law on pairs of paths $(p,q)$ whose coarse-grainings $([p]_n,[q]_n)$ are conditioned to have overlapping edges at all generations $n\in \mathbb{N}$. \end{remark} The following proposition connects the correlation measure $\upsilon_r$ with a singular kernel $T(p,q)$ that characterizes the intersection time between two paths $p,q\in \Gamma$. \begin{proposition}\label{PropLemCorrelate} Let the family of measures $(\upsilon_{r})_{r\in{\mathbb R}}$ on $\Gamma\times \Gamma$ be defined as in Lemma~\ref{LemCorrelate}. For $p,q\in \Gamma$ and $n\in \mathbb{N}_0$, define $\xi_n(p,q)$ as the number of edges shared by the coarse-grained paths $[p]_n, [q]_n\in \Gamma_n$. The statements below hold for any $r\in {\mathbb R}$. \begin{enumerate}[(i)] \item The sequence $ \frac{\kappa^2}{n^2} \xi_n(p,q) $ converges $ \upsilon_r$-a.e.\ as $n\rightarrow\infty$ to a finite limit that we denote by $T(p,q)$. We define $T(p,q)=\infty$ for pairs $(p,q)$ such that the sequence $ \frac{\kappa^2}{n^2} \xi_n(p,q) $ is divergent as $n\rightarrow \infty$. \item In particular, $\upsilon_r$ assigns full measure to the set of pairs $(p,q)\in \Gamma\times \Gamma$ such that $ T(p,q)<\infty$. Moreover, for the measures $\mu\times \mu$ and $\rho_r$ from the Lebesgue decomposition of $\upsilon_r$ in part (ii) of Lemma~\ref{LemCorrelate}, $\mu\times \mu$ is supported on the set of pairs $(p,q)$ with $T(p,q)=0$ as a consequence of Proposition~\ref{PropPathInter}, and $\rho_r$ is supported on the set of pairs $(p,q)$ with $0<T(p,q)<\infty$. \item $\upsilon_t$ has Radon-Nikodym derivative $\textup{exp}\{(t-r)T(p,q)\}$ with respect to $\upsilon_r$ for any $t\in {\mathbb R}$. \item The exponential moments of $T(p,q)$ under $\upsilon_r$ have the form $$ 1+R(r+a)\,=\,\int_{\Gamma\times \Gamma } e^{aT(p,q) } \upsilon_{r}(dp,dq) \hspace{1cm}\text{for any}\,\, a\in {\mathbb R} \,. $$ \end{enumerate} \end{proposition} \subsection{The continuum random polymer measure }\label{SecCDRPDef} The following theorem formulates a canonical one-parameter family of random measures $(\Gamma,\mathbf{M}_{r})$ that are defined on an underlying probability space $(\Omega, \mathscr{F}, \mathbb{P})$. We will suppress the dependence on $\omega\in \Omega$ as in $\mathbf{M}_{r}\equiv \mathbf{M}_{r}^{\omega} $ and denote expectations with respect to $\mathbb{P}$ by $\mathbb{E}$. \begin{theorem}\label{ThmExistence} There is a unique one-parameter family of laws for random measures $(\mathbf{M}_{r})_{r\in {\mathbb R}}$ on the path space, $\Gamma$, of $D$ satisfying properties (I)-(IV) below. \begin{enumerate}[(I)] \item The expectation of the measure $\mathbf{M}_{r}$ with respect to the underlying probability space is the uniform measure on paths: $\mathbb{E}[\mathbf{M}_r ]=\mu $. More precisely, this means $\mathbb{E}[ \mathbf{M}_{r}(A) ]=\mu(A)$ for any $A\in \mathcal{B}_{\Gamma}$. \item For the measure $(\Gamma\times \Gamma , \upsilon_r) $ of Lemma~\ref{LemCorrelate}, we have the relation $\mathbb{E}[ \mathbf{M}_r \times \mathbf{M}_r ]= \upsilon_r $.
In other terms, for measurable $g:\Gamma\times \Gamma\rightarrow [0,\infty)$ $$ \mathbb{E}\bigg[ \int_{ \Gamma \times \Gamma } g(p,q) \mathbf{M}_r(dp) \mathbf{M}_r(dq) \bigg]\,=\,\int_{ \Gamma\times \Gamma }g(p,q) \upsilon_r(dp,dq) \,. $$ \item For each $m\in \{2,3,\ldots \}$, the $m^{th}$ centered moment of the total mass, $ \mathbb{E}\big[ \big(\mathbf{M}_r(\Gamma)-1\big)^m\big]$, is finite and equal to $R^{(m)}(r)$ for a function $R^{(m)}:{\mathbb R}\rightarrow {\mathbb R}_+$ that vanishes with order $(\frac{1}{-r})^{\lceil m/2\rceil}$ as $r\rightarrow -\infty$ and grows without bound as $r\rightarrow \infty$. \item Let $(\Gamma,\mathbf{M}_{r}^{(i,j)}) $ be independent copies of $(\Gamma,\mathbf{M}_{r}) $ corresponding to the first-generation embedded copies, $D_{i,j}$, of $D$. Then there is equality in distribution of random measures \begin{align*} \mathbf{M}_{r+1 }\,\stackrel{d}{=}\, \frac{1}{b}\sum_{i=1}^b \prod_{j=1}^b \mathbf{M}_{r }^{(i,j)} \hspace{.5cm}\text{under the identification}\hspace{.5cm} \Gamma\,\equiv\, \bigcup_{i=1}^{b}\bigtimes_{j=1}^b \Gamma \,. \end{align*} \end{enumerate} \end{theorem} \begin{remark} As a consequence of property (I) of Theorem~\ref{ThmExistence}, the expectation of $\mathbf{M}_r(\Gamma)$ is $\mu(\Gamma)=1$. Moreover, (II) of Theorem~\ref{ThmExistence} implies that $\mathbb{E}\big[ \big(\mathbf{M}_r(\Gamma)\big)^2 \big]=\upsilon_r(\Gamma\times \Gamma)=1+R(r)$. Hence, the variance of the total mass $\mathbf{M}_{r}(\Gamma)$ is $R(r)$. \end{remark} The corollary below unfurls a structural consequence from property (IV) of Theorem~\ref{ThmExistence} held for a.e.\ realization of the random measure $\mathbf{M}_{r}$. We will use the following notation: \begin{notation}\label{NotationSub} \textup{For $a\in E_k$ and $(i,j)\in \{1,\ldots,b\}^2$, let $a\times (i,j)$ denote the element in $E_{k+1}$ corresponding to the $j^{th}$ segment along the $i^{th}$ branch on the subcopy of $D_1$ identified with $a\in E_k$ embedded within $D_{k+1}$. } \end{notation} \begin{corollary}\label{CorProp4} For $r\in {\mathbb R}$, the random measure $(\Gamma,\mathbf{M}_r)$ of Theorem~\ref{ThmExistence} can be defined on the same probability space as a family of random measures $(\Gamma,\mathbf{M}^{e}_{r-k})$ for $k\in \mathbb{N}$ and $e\in E_k$ that a.s.\ satisfies the properties below for every $k\in \mathbb{N}_0$. \begin{enumerate}[(i)] \item $\big\{\mathbf{M}^{e}_{r-k}\big\}_{e\in E_k}$ is an i.i.d.\ family of copies of the random measure $\big(\Gamma,\mathbf{M}_{r-k})$. \item $\displaystyle \mathbf{M}_{r-k }^{e}\,=\, \frac{1}{b}\sum_{i=1}^b \prod_{j=1}^b \mathbf{M}_{r-k-1 }^{e\times(i,j)} $ under the identification $\displaystyle \Gamma\equiv \bigcup_{i=1}^{b}\bigtimes_{j=1}^b \Gamma$ for any $e\in E_k$. \item More generally, $\displaystyle\mathbf{M}_{r }\,=\, \frac{1}{|\Gamma_k|}\sum_{\mathbf{p}\in \Gamma_k} \prod_{\ell=1}^{b^k} \mathbf{M}_{r-k }^{\mathbf{p}(\ell)}$ under the identification $\displaystyle\Gamma\,\equiv\, \Gamma_k\times \bigtimes_{\ell=1}^{b^k}\Gamma$. \end{enumerate} \end{corollary} \begin{remark}\label{RemarkProp4} Corollary~\ref{CorProp4} has the following consequences for $n\in \mathbb{N}$ and a.e.\ realization of $\mathbf{M}_{r}$: \begin{itemize} \item The restriction of $\mathbf{M}_{r}$ to a cylinder set $\mathbf{p}\in \Gamma_n$ is a product measure, $\displaystyle\prod_{\ell=1}^{b^n} \mathbf{M}_{r-n }^{\mathbf{p}(\ell)}$.
\item Let us identify $p\in \Gamma$ with the tuple $\big(\mathbf{p};\,\,p_1^{(n)},\,p_2^{(n)},\ldots , p_{b^{n}}^{(n)}\big)$ for $\mathbf{p}\in \Gamma_n$ and $p_\ell^{(n)}\in \Gamma$ through the interpretation in Remark~\ref{RemarkCylinder}. When the path $p$ is conditioned to pass through a given $e\in E_n$, i.e., $e=\mathbf{p}(l)$ for some $l\in \{1,\ldots,b^n\}$, the distribution of $p_l^{(n)}$ is independent of $\mathbf{p}$ and the $p_\ell^{(n)}$'s for $\ell\neq l$: $$\mathcal{L}\Big(\,p_l^{(n)}\,\Big|\, \mathbf{p}\,\,\,\text{and}\,\,\, \big\{ p_\ell^{(n)} \big\}_ {\ell \neq l} \Big)=\frac{1}{\mathbf{M}_{r-n}^{e}(\Gamma)} \mathbf{M}_{r-n}^{e}\,. $$ \end{itemize} \end{remark} \begin{proposition}\label{PropProperties} Let the random measures $\{\mathbf{M}_{r}\}_{r\in {\mathbb R}}$ be defined as in Theorem~\ref{ThmExistence}. Statements (i)-(iii) below hold a.s.\ for any $r\in {\mathbb R}$, and (iv)-(v) describe the limiting behavior of the family in $r$. \begin{enumerate}[(i)] \item $\mathbf{M}_r$ is mutually singular to $\mu$. \item $\mathbf{M}_r$ has no atoms. \item The support of $\mathbf{M}_r$ is dense in $\Gamma$. In other terms, $\mathbf{M}_r(A)>0$ for any open set $A\subset\Gamma$. \item $\mathbf{M}_r$ converges to $\mu$ as $r\rightarrow -\infty$ in the sense that for any $g\in L^2(\Gamma ,\mu)$ $$\mathbb{E}\Bigg[\bigg(\int_{\Gamma}g(p)\mathbf{M}_r(dp)\,-\, \int_{\Gamma}g(p)\mu(dp) \bigg)^2\Bigg]\,\stackrel{r\rightarrow -\infty }{\longrightarrow} \, 0 \,. $$ \item The total mass, $\mathbf{M}_r(\Gamma)$, converges in probability to zero as $r\rightarrow \infty$. \end{enumerate} \end{proposition} \begin{remark} Part (v) of Proposition~\ref{PropProperties} characterizes a transition to strong disorder as $r\rightarrow \infty$, and its proof is in~\cite{Clark4}. \end{remark} \begin{remark}In the language of~\cite{alberts2}, the \textit{continuum directed random polymer} (CDRP) on $D$ with parameter $r\in {\mathbb R}$ refers to the random probability measure $Q_r(dp)=\mathbf{M}_r(dp)/\mathbf{M}_r(\Gamma)$. This is a.s.\ a well-defined probability measure since the measure $(\Gamma, \mathbf{M}_r)$ is a.s.\ finite and $\mathbf{M}_r(\Gamma)>0$ by (iii) of Proposition~\ref{PropProperties}. \end{remark} \subsection{Weak-disorder limit theorem for disordered Gibbsian measures }\label{SecLimitThmForMeasures} Next we will describe how the CRPMs $( \mathbf{M}_r)_{r\in {\mathbb R}}$ arise as distributional limits of disordered Gibbsian measures on the space of discrete polymers $\Gamma_n$ as $n\rightarrow \infty$. Let $\{\omega_{h}\}_{h\in E_n} $ be an i.i.d.\ family of random variables having mean zero, variance one, and finite exponential moments, $\mathbb{E}\big[\exp\{\beta \omega_h\}\big]$ for $\beta\geq 0$. Given an inverse temperature value $\beta\in [0,\infty)$, we define a random path measure $\mathbf{M}^{\omega}_{\beta, n}$ on the set of generation-$n$ directed paths such that $\mathbf{p}\in \Gamma_n$ is assigned weight \begin{align}\label{DefEM} \mathbf{M}^{\omega}_{\beta, n}(\mathbf{p}) \, = \,\frac{1}{|\Gamma_n|} \frac{ e^{\beta H_{n}^{\omega}(\mathbf{p}) } }{ \mathbb{E}\big[ e^{\beta H_{n}^{\omega}(\mathbf{p}) } \big] }\hspace{1cm} \text{ for path energy }\hspace{1cm} H_{n}^{\omega}(\mathbf{p}) \, := \, \sum_{h{\triangleleft} \mathbf{p}} \omega_{h} \, , \end{align} where $h{\triangleleft} \mathbf{p}$ means that the edge $h\in E_n$ lies along the path $\mathbf{p}$.
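The hierarchical structure makes the distributional recursion behind~(\ref{DefEM}) directly simulable: one coarse-graining step multiplies the normalized masses along the $b$ series segments and averages over the $b$ branches, which maps the total-mass variance $x$ to $M_b(x)$ of Lemma~\ref{LemVar}. The following is a minimal numerical sketch, assuming standard Gaussian disorder (so the skew $\tau$ vanishes) and anticipating the critical scaling $\beta_{n,r}$ of Definition~\ref{DefCS} just below; the function name, sample count, and seed are our illustrative choices:
\begin{verbatim}
import numpy as np

def total_mass_samples(n, b=2, r=0.0, m=1000, seed=0):
    """Sample the total mass of the Gibbs measure above on D_n (b = s) with
    i.i.d. standard Gaussian edge disorder at the critical scaling beta_{n,r}."""
    rng = np.random.default_rng(seed)
    kappa = (2.0 / (b - 1)) ** 0.5
    eta = (b + 1) / (3.0 * (b - 1))
    # Gaussian disorder has zero skew, so the tau term drops out:
    beta = kappa / n ** 0.5 + (kappa * eta * np.log(n) + kappa * r) / n ** 1.5
    # one normalized weight e^{beta*w}/E[e^{beta*w}] per edge; |E_n| = b^(2n)
    W = np.exp(beta * rng.standard_normal((m, b ** (2 * n))) - beta ** 2 / 2.0)
    for _ in range(n):
        # one coarse-graining step: multiply along the b series segments,
        # then average over the b parallel branches
        W = W.reshape(m, -1, b, b).prod(axis=3).mean(axis=2)
    return W[:, 0]

# The sample variance should be close to R(r): each coarse-graining step
# maps the variance x to M_b(x) = ((1+x)^b - 1)/b, as in the variance lemma.
print(total_mass_samples(6).var())
\end{verbatim}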
\begin{definition}[critical weak-disorder scaling]\label{DefCS} For $b\in \{2,3,\ldots\}$ and a fixed value $r\in {\mathbb R}$, let the sequence $(\beta_{n, r})_{n\in \mathbb{N}}$ have the large $n$ asymptotics \begin{align}\label{BetaForm} \beta_{n, r}\, :=\, \frac{\kappa}{\sqrt{n}}\,-\,\frac{ \tau\kappa^2 }{2n}\,+\,\frac{\kappa\eta \log n}{n^{\frac{3}{2}}}\,+\,\frac{\kappa r}{n^{\frac{3}{2}}}\, +\,\mathit{o}\Big( \frac{1}{n^{\frac{3}{2}}} \Big) \,, \end{align} where $\kappa:=(\frac{2}{b-1})^{1/2}$, $\eta:=\frac{b+1}{3(b-1) }$, and $\tau:=\mathbb{E}[\omega_h^3]$ is the skew of the disorder variables $\omega_h$. \end{definition} \begin{remark} The scaling $\beta_{n, r}$ occurs in a vanishing window around the critical point $\epsilon=\kappa$ for coarser scalings of the form $\beta_n^{(\epsilon)}=\epsilon/ \sqrt{n}+\mathit{o}(1/\sqrt{n}) $ for a parameter $\epsilon\in[0,\infty)$. Discussion of the weak-disorder scaling in Definition~\ref{DefCS}, which is comparable to the critical window scaling for (2+1)-dimensional polymers in~\cite{CSZ4}, can be found in~\cite[Section 2.3]{Clark3}. \end{remark} \begin{definition}\label{DefLocAvg} Let $\varrho_n$ be a finite measure on the path space, $\Gamma_n$. We define the measure $(\Gamma,\overline{\varrho}_n)$ to be such that \begin{itemize} \item $\varrho_n(\mathbf{p})= \overline{\varrho}_n(\mathbf{p}) $ for every $\mathbf{p}\in \Gamma_n$, and \item $\overline{\varrho}_n$ is uniform within each cylinder set $\mathbf{p}\subset \Gamma$. In other terms, the restriction of $\overline{\varrho}_n$ to $\mathbf{p}$, viewed as a subset of $\Gamma$, is the product measure $$ \overline{\varrho}_n|_{\mathbf{p}}=\varrho_n(\mathbf{p})\prod_{h\lhd \mathbf{p} } \mu \hspace{1cm} \text{ with the identification } \hspace{1cm}\mathbf{p}\,\equiv\,\bigtimes_{h \lhd \mathbf{p}} \Gamma\,. $$ \end{itemize} \end{definition} \begin{theorem}[weak-disorder/continuum limit]\label{ThmUniversality} Let the random measure $(\Gamma_n, \mathbf{M}^{\omega}_{\beta, n})$ be defined as in~(\ref{DefEM}) and $\beta_{n,r}>0$ be defined as in~(\ref{BetaForm}). With $\beta= \beta_{n,r} $ define the averaged measure $ \mathbf{M}^{\omega}_{r, n}\,:=\, \mathbf{\overline{M}}^{\omega}_{\beta , n} $ on $\Gamma$ in the sense of Definition~\ref{DefLocAvg}. The sequence of random measures $\{\mathbf{M}^{\omega}_{r, n}\}_{n\in \mathbb{N}}$ converges in law as $n\rightarrow \infty$ to $\mathbf{M}_{r}$ in the sense that for any continuous function $g:\Gamma\rightarrow{\mathbb R}$ \begin{align*} \mathbf{M}^{\omega}_{r, n}(g)\,:=\, \int_{\Gamma}g(p)\mathbf{M}^{\omega}_{r, n}(dp) \hspace{1cm} \stackrel{\mathcal{L}}{\Longrightarrow}\hspace{1cm}\mathbf{M}_{r}(g)\,:=\, \int_{\Gamma}g(p)\mathbf{M}_{r}(dp) \,. \end{align*} \end{theorem} \subsection{Intersection-times set of two independently chosen paths}\label{SecTwoPaths} By Corollary~\ref{CorrMuMu}, $ \mu\times \mu$ assigns full measure to the set of pairs $(p,q)\in \Gamma\times \Gamma$ with intersection-times sets, $I_{p,q}=\{t\in [0,1]\,|\, p(t)=q(t)\} $, that consist of only finitely many points. In contrast, for a.e.\ realization of the random measure $\mathbf{M}_r$ the product $\mathbf{M}_r\times \mathbf{M}_r$ assigns a positive weight to pairs $(p,q)$ such that $I_{p,q}$ is uncountably infinite and has Hausdorff dimension zero. The elementary mechanism behind this dichotomy is visible in the shared-edge counts $\xi_n(p,q)$, simulated in the sketch below; the definitions that follow provide us with a framework for characterizing the size of these intersection-times sets.
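The following minimal sketch (names illustrative) simulates the shared-edge counts $\xi_n(p,q)$ for $p,q$ drawn independently from $\mu$, using the critical population-model description from the proof of Proposition~\ref{PropPathInter}. The chain a.s.\ hits $0$, reflecting the finiteness of $I_{p,q}$ under $\mu\times\mu$; under $\rho_r$, by contrast, the chain is in effect conditioned to survive, and $\frac{\kappa^2}{n^2}\xi_n(p,q)$ tends to $T(p,q)$:
\begin{verbatim}
import numpy as np

def shared_edge_counts(n, b=2, seed=1):
    """xi_k(p,q), k = 0..n, for p, q independent and uniform: a critical
    Galton-Watson chain in which each currently shared edge independently
    spawns b shared sub-edges with probability 1/b and none otherwise."""
    rng = np.random.default_rng(seed)
    xi = [1]                              # xi_0 = 1: the single edge of D_0
    for _ in range(n):
        xi.append(int(b * rng.binomial(xi[-1], 1.0 / b)))
    return xi

# Dies out a.s.; by Kolmogorov's estimate for critical branching processes,
# the survival probability at step k decays like 2/((b-1)k) = kappa^2/k.
print(shared_edge_counts(30))
\end{verbatim}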
\begin{definition} A \textit{dimension function} is a continuous, non-decreasing function $h:[0,\infty)\rightarrow [0,\infty) $ satisfying $h(0)=0$. Given a dimension function $h$, the \textit{generalized Hausdorff outer measure} of a compact set $S\subset {\mathbb R}^d$ is defined through the limit \begin{align} H^{h}(S)\,:=\,\lim_{\delta\searrow 0}H^{h}_{\delta}(S) \hspace{1cm}\text{for}\hspace{1cm} H^{h}_{\delta}(S)\,:=\,\inf_{\substack{S\subset \cup_{k}\mathcal{I}_{k} \\ \textup{diam}(\mathcal{I}_k)\leq \delta }} \sum_{k}h\big( \textup{diam}(\mathcal{I}_k)\big), \end{align} where the infimum is over all countable coverings of $S$ by sets $\mathcal{I}_k\subset {\mathbb R}^d$ of diameter $\leq \delta$. A dimension function $h$ is said to be \textit{zero-dimensional} if $x^{\alpha}\ll h(x)$ as $x\searrow 0$ for any $\alpha>0$. \end{definition} \begin{remark}\label{Remark}When the dimension function has the form $h(x)=x^\alpha$ for $\alpha>0$, then $H^{h}$ reduces to the standard dimension-$\alpha$ Hausdorff outer measure. The \textit{Hausdorff dimension} of a compact set $S\subset {\mathbb R}^d$ is defined as the supremum of the set of $\alpha\in [0,d]$ such that $H^{h}(S)=\infty $. \end{remark} \begin{definition}[Log-Hausdorff exponent]\label{DefLogHaus} Let $S\subset {\mathbb R}$ be a compact set of Hausdorff dimension zero. For $\frak{h}\geq 0$ and $0<\delta<1$, define $H^{\log}_{\frak{h},\delta}(S):= H^{h}_\delta(S) $ and $H^{\log}_{\frak{h}}(S) := H^{h}(S) $ for the dimension function $h(x)\,=\, 1/ \log^{\frak{h}}(1/x)$. We define the \textit{log-Hausdorff exponent} of $S$ as the supremum over the set of $\frak{h}$ such that $H^{\log}_{\frak{h}}(S)=\infty $. \end{definition} \begin{lemma}\label{LemIntSet} Let the measure $(\Gamma\times \Gamma, \upsilon_{r})$ be defined as in Lemma~\ref{LemCorrelate}. The normalized measure $\rho_r=\frac{1}{R(r)}(\upsilon_{r}-\mu\times \mu) $ assigns probability one to the set of pairs $(p,q)$ such that the intersection-times set $I_{p,q}=\{t\in[0,1]\,|\,p(t)=q(t) \} $ has log-Hausdorff exponent $\frak{h}=1$. \end{lemma} \begin{theorem}\label{ThmPathMeasure} Let the random measures $(\mathbf{M}_r)_{r\in {\mathbb R}}$ be defined as in Theorem~\ref{ThmExistence}. The statements below hold for any $r\in {\mathbb R}$ and a.e.\ realization of the measure $\mathbf{M}_r$. \begin{enumerate}[(i)] \item The set of intersection times $I_{p,q}=\{t\in [0,1]\,|\,p(t)=q(t) \}$ has Hausdorff dimension zero for $\mathbf{M}_r\times \mathbf{M}_r $-a.e.\ pair $(p,q)\in \Gamma\times \Gamma$. \item The product $\mathbf{M}_r\times \mathbf{M}_r $ is supported on the set of $(p,q)\in \Gamma\times \Gamma$ such that $T(p,q)<\infty$, i.e., where the sequence $\frac{\kappa^2}{n^2}\xi_n(p,q)$ converges to a finite limit as $n\rightarrow \infty$. Moreover, the exponential moments of $T(p,q)$ with respect to $\mathbf{M}_r\times \mathbf{M}_r$ satisfy $$\text{}\hspace{.1cm} \mathbb{E}\bigg[ \int_{\Gamma\times \Gamma}e^{aT(p,q)} \mathbf{M}_r(dp)\mathbf{M}_r(dq) \bigg]\,=\, 1+R(r+a)\hspace{.6cm}\text{for any $a\in {\mathbb R}$}\,. $$ \item The product $\mathbf{M}_r\times \mathbf{M}_r $ assigns full measure to the set of $(p,q)\in \Gamma\times \Gamma$ such that one of the following holds:\vspace{.2cm}\\ \text{}\hspace{1.5cm} (a) $I_{p,q}$ is finite \hspace{1cm} or\hspace{1cm} (b) $I_{p,q}$ has log-Hausdorff exponent $\frak{h}=1$.\vspace{.2cm} \\ Moreover, $\mathbf{M}_r\times \mathbf{M}_r $ assigns both of these events positive measure.
\item Given $p\in \Gamma$ let $\mathbf{s}_p$ be the set of $q\in \Gamma$ such that the set of intersection times $I_{p,q}$ has log-Hausdorff exponent $\frak{h}=1$. Then $\mathbf{M}_r$ satisfies that $\mathbf{M}_r(\mathbf{s}_p)>0$ for $\mathbf{M}_r$-a.e.\ $p\in \Gamma$. The analogous statement holds for the sets $\mathbf{\widehat{s}}_p:=\{q\in \Gamma\,|\, T(p,q)>0\}$. \end{enumerate} \end{theorem} \begin{remark}Parts (iii) and~(iv) of Theorem~\ref{ThmPathMeasure} imply a form of locality for the disordered measure $\mathbf{M}_{r}$. Paths chosen independently according to $\mathbf{M}_r$ may intersect nontrivially, whereas this is impossible under the pure measure $\mu$ by Corollary~\ref{CorrMuMu}. \end{remark} \subsection{The spatial concentration of path intersections} \label{SecLocalResults} Our formalism for analyzing path intersections under the disordered product measure $\mathbf{M}_r\times \mathbf{M}_r$ can be extended by considering a measure $\vartheta_{\mathbf{M}_r}$ induced on the DHL, $D$, through weighting $A\in \mathcal{B}_D$ in proportion to how much pairs of paths independently generated from $\mathbf{M}_r$ intersect in $A$. First we revisit the intersection-time kernel $T(p,q)$ from Proposition~\ref{PropLemCorrelate} by defining a canonical measure $([0,1],\tau_{p,q})$ having the properties listed in Proposition~\ref{PropIntMeasure}. \begin{proposition}\label{PropIntMeasure} For $\upsilon_r$-a.e.\ pair $(p,q)\in \Gamma\times \Gamma$ there is a finite Borel measure $\tau_{p,q}$ on $[0,1]$ with (I)-(III) below. \begin{enumerate}[(I)] \item $\tau_{p,q}$ is non-atomic and is supported on the set of intersection times, $I_{p,q}$. \item $\tau_{p,q}$ has total mass $T(p,q)$. \item The measure $\tau_{p,q}$ assigns an open set $A\subset [0,1]$ weight $ \displaystyle \tau_{p,q}(A)=\lim_{n\rightarrow \infty}\frac{\kappa^2}{n^2}\sum_{\substack{1\leq \ell \leq b^n \\ [p]_n(\ell)=[q]_n(\ell) } }\chi_{ [\frac{\ell-1}{b^n}, \frac{\ell}{b^n}]\subset A } $. \end{enumerate} \end{proposition} \begin{remark} Since $\upsilon_r=\mathbb{E}[\mathbf{M}_{r}\times \mathbf{M}_r]$, the product $\mathbf{M}_{r}\times \mathbf{M}_r$ a.s.\ assigns full measure to the set of pairs $(p,q)$ such that $\tau_{p,q}$ is well-defined and satisfies (I)-(III) of Proposition~\ref{PropIntMeasure}. \end{remark} \begin{definition}\label{DefVartheta} For $r\in {\mathbb R}$ and a realization of the random measure $\mathbf{M}_r$ from Theorem~\ref{ThmPathMeasure}, define $\vartheta_{\mathbf{M}_r}$ as the Borel measure on $D$ given by $$ \vartheta_{\mathbf{M}_r}\,:=\,\int_{\Gamma\times \Gamma}\gamma_{p,q}\mathbf{M}_r(dp)\mathbf{M}_r(dq) \,, $$ where $\gamma_{p,q}:=\tau_{p,q}\circ p^{-1}$ is the push forward measure of $\tau_{p,q}$ on $D$ determined by the path $p:[0,1]\rightarrow D$. \end{definition} \begin{remark}\label{RemarkVarthetaMass} Since $\gamma_{p,q}$ has total mass $\gamma_{p,q}(D)=T(p,q)$, the total mass of $\vartheta_{\mathbf{M}_r}$ is equal to $\int_{\Gamma\times \Gamma}T(p,q)\mathbf{M}_r(dp)\mathbf{M}_r(dq)$, which has expectation $R'(r)$ as a consequence of part (ii) of Theorem~\ref{ThmPathMeasure}. \end{remark} \begin{remark}\label{RemarkVarthetaSymm}Given $(\Gamma,\mathbf{M}_{r})$ and a pair $(i,j)\in \{1,\ldots, b\}^2$, let $\mathbf{M}_{r-1}^{(i,j)}$ be the component of $\mathbf{M}_{r}$ identified with the first-generation subcopy, $D_{i,j}$, of $D$ positioned at the $j^{th}$ segment along the $i^{th}$ branch (in the sense of Corollary~\ref{CorProp4}).
If $\big(D_{i,j},\vartheta_{\mathbf{M}_{r-1}^{(i,j)}}\big)$ denotes the corresponding measure defined as above, then $\vartheta_{\mathbf{M}_r}$ can be decomposed as \begin{align*} \vartheta_{\mathbf{M}_r}\,=\,\frac{1}{b^2}\bigoplus_{1\leq i,j\leq b}\bigg(\prod_{\ell\neq j} \mathbf{M}_{r-1}^{(i,\ell)}(\Gamma) \bigg)^2\vartheta_{\mathbf{M}_{r-1}^{(i,j)}}\hspace{.5cm}\text{where we identify}\hspace{.5cm} D\equiv \bigcup_{1\leq i,j\leq b} D_{i,j}\,. \end{align*} \end{remark} Recall that $\nu$ is the uniform probability measure over the space $D$, which has Hausdorff dimension two. Let $d_D:D\times D\rightarrow [0,1]$ denote the travel metric on $D$ (defined in Section~\ref{SectionDHLConst}) and $\frak{g}_D(x,y)$ denote the smallest $n\in \mathbb{N}$ such that $x\in E$ and $y\in E$ do not belong to the same equivalence class in $E_n$. \begin{theorem}\label{ThmVartheta} Let $(D,\vartheta_{\mathbf{M}_r})$ be defined as in Definition~\ref{DefVartheta} for a given realization of $(\Gamma,\mathbf{M}_r)$. \begin{enumerate}[(i)] \item $\mathbb{E}[\vartheta_{\mathbf{M}_r}]=R'(r)\nu$ \item $\vartheta_{\mathbf{M}_r}$ a.s.\ has Hausdorff dimension two, i.e., if $A\in \mathcal{B}_{D} $ and $\vartheta_{\mathbf{M}_r}(A)>0$ then $\textup{dim}_{H}(A)=2$. In particular $\vartheta_{\mathbf{M}_r}$ assigns the countable set $V$ measure zero. \item When $x,y\in E$ the point correlations are formally given by $ \mathbb{E}\big[ \vartheta_{\mathbf{M}_r}(dx) \vartheta_{\mathbf{M}_r}(dy) \big]= C_r(x,y)\nu(dx)\nu(dy)$ for $C_r:E\times E\rightarrow {\mathbb R}_+$ satisfying the asymptotics $$ C_r(x,y)\,\sim \,\mathbf{c}\big( \frak{g}_D(x,y) \big)^8\hspace{.2cm}\text{for some $\mathbf{c}>0$ when $\frak{g}_D(x,y)\gg 1$.} $$ \item Given $\lambda>0$ let $h_\lambda:[0,1]\rightarrow [0,\infty]$ be the dimension function $h_\lambda(a):=\frac{a^2}{ |\log(1/a)|^{\lambda}} $. For a Borel measure $\varrho$ on $D$, define the energy \begin{align*} Q_{\lambda}(\varrho)\,:=\, \int_{D\times D} \frac{1}{ h_\lambda\big(d_D(x,y)\big)}\varrho(dx)\varrho(dy)\, . \end{align*} For a.e.\ realization of $\mathbf{M}_r$, the energy $Q_{\lambda}(\vartheta_{\mathbf{M}_r})$ is finite for any $\lambda>9$, and the expectation of $Q_{\lambda}(\vartheta_{\mathbf{M}_r})$ is infinite for $\lambda\leq 9$. \end{enumerate} \end{theorem} \begin{remark} In part (iii) of Theorem~\ref{ThmVartheta}, $\frak{g}_D(x,y)$ can be roughly identified with $\log_b\big( \frac{1}{d_D(x,y)} \big)$, so $C_r(x,y)$ essentially has a logarithmic blow-up around the diagonal $x=y$. \end{remark} \begin{remark} A similar generalized Hausdorff dimension analysis as that performed in Section~\ref{SecTwoPaths} for the intersection-times sets $I_{p,q}\subset [0,1]$ could possibly be made for Borel sets $S\subset D$ having full $\vartheta_{\mathbf{M}_r}$-measure by defining a generalized Hausdorff measure $H_{\lambda}$ in terms of the dimension functions $h_{\lambda}$ from part (iv) of Theorem~\ref{ThmVartheta}.
By a standard argument (see Appendix~\ref{AppendixHausdorff}), having finite energy $Q_{\lambda}(\vartheta_{\mathbf{M}_r})$ for all $\lambda>9$ implies that $ H_{\lambda}(S)=\infty $ for any $\vartheta_{\mathbf{M}_r}$-full-measure set, $S$, and $\lambda>9$.\footnote{Note that increasing the parameter $\lambda$ makes the dimension function $h_\lambda(a)=\frac{a^2}{ |\log(1/a)|^{\lambda}} $ smaller for $0\leq a\ll 1$.} Thus a more complete result here would be to show that there exists a set $S$ with full $\vartheta_{\mathbf{M}_r}$-measure such that $H_{\lambda}(S)=0 $ for all $\lambda<9$. \end{remark} \subsection{The Hilbert-Schmidt operator defined by the intersection-time kernel}\label{SectionHilbert} Next we discuss the linear operator $\mathbf{T}_{\mathbf{M}_r}$ on the space $L^2(\Gamma,\mathbf{M}_r)$ defined through integrating against the intersection-time kernel $T(p,q)$, i.e., $$ (\mathbf{T}_{\mathbf{M}_r}\psi)(p)\,=\,\int_{\Gamma}T(p,q)\psi(q)\mathbf{M}_r(dq)\hspace{.5cm} \text{for}\hspace{.5cm} \psi\in L^2(\Gamma,\mathbf{M}_r)\,. $$ Our analysis will be rooted in the measures $\vartheta_{\mathbf{M}_r}$, and the results here on $\mathbf{T}_{\mathbf{M}_r}$ will be applicable in~\cite{Clark4}, where $\mathbf{T}_{\mathbf{M}_r}$ is the correlation operator for a Gaussian field on $(\Gamma,\mathbf{M}_r)$. Recall that, intuitively, $\vartheta_{\mathbf{M}_r}(dx)$ measures how much pairs of paths generated from the product $\mathbf{M}_r\times\mathbf{M}_r$ intersect at a point $x\in D$. Since the behavior of $\vartheta_{\mathbf{M}_r}(dx)$ is related to the space of paths crossing over $x$, we introduce the following specialized notations: \begin{align*} &\Gamma^{\updownarrow x} \hspace{1.73cm} \text{Space of paths passing through $x\in D$}& \\ &\Theta^{ \updownarrow x}_{\mathbf{M}_r} \hspace{1.5cm} \text{The conditioning of $\mathbf{M}_r$ to the event $\Gamma^{\updownarrow x}$}& \\ &E^{\updownarrow x} \hspace{1.7cm} \text{The set of points in $E$ that share a path with $x\in D$, i.e., the \textit{path horizon} of $x$}& \end{align*} More precisely, we define $E^{\updownarrow x} $ for $x\in D$ as the set of $y\in E$ such that there exists a $p\in \Gamma$ with $x,y\in \textup{Range}(p)$. Since the set $\Gamma^{\updownarrow x} $ will a.s.\ have measure zero under $\mathbf{M}_r$, defining $\Theta^{ \updownarrow x}_{\mathbf{M}_r} $ requires a closer look at the structures involved. \begin{remark} The path horizon of $x\in E$ can be decomposed as a countable union of `blocks' (cylinder sets) $E^{\updownarrow x}=\bigcup_{k=1}^\infty \bigcup_{e\in E_k^{\updownarrow x} } e $, where $E_k^{\updownarrow x}$ is the subset of $E_k$ whose elements $e$ satisfy \begin{itemize} \item $x\notin e$, \item $x$ and $e$ are contained in the same equivalence class in $E_{k-1}$, and \item there is a coarse-grained path $\mathbf{p}\in \Gamma_k$ passing over both $x$ and $e$. \end{itemize} Each set $E_k^{\updownarrow x}$ contains $b-1$ elements, and $E^{\updownarrow x}$ has measure $\nu( E^{\updownarrow x} )=\sum_{k=1}^{\infty}\frac{b-1}{b^{2k}}=\frac{1}{b+1} $.
\end{remark} \begin{remark}\label{RemarkDecomp} For $x\in E$, a path $p\in \Gamma^{\updownarrow x}$ can be decomposed as a sequence of $p_e\in \Gamma$ labeled by $e\in E_k^{\updownarrow x} $ for $k\in \mathbb{N}$: \begin{align} p\,\equiv\, \{ p_e \}_{ e\in \cup_{k=1}^\infty E_k^{\updownarrow x} } \hspace{1.6cm}\text{and}\hspace{1.6cm} \Gamma^{\updownarrow x}\,\equiv\,\bigtimes_{k=1}^\infty \bigtimes_{ e\in E_k^{\updownarrow x} } \Gamma \,, \end{align} where $ p_e \in \Gamma $ is a dilation of the part of the path $p\in \Gamma$ passing through the shrunken, embedded copy of $D$ corresponding to $e\in E_k^{\updownarrow x} $. \end{remark} \begin{definition}\label{DefTheta} Let the family of measures $(\Gamma,\mathbf{M}^{e}_{r-k})$ for $k\in \mathbb{N}$ and $e\in E_k$ be defined in relation to $(\Gamma,\mathbf{M}_r )$ as in Corollary~\ref{CorProp4}. We define $(\Gamma,\Theta^{ \updownarrow x}_{\mathbf{M}_r})$ as the probability measure assigning probability one to the event $\Gamma^{\updownarrow x}$ and having the decomposition \begin{align*} \Theta^{ \updownarrow x}_{\mathbf{M}_r}\,=\,\prod_{k=1}^\infty \prod_{e\in E_k^{\updownarrow x} } \frac{1}{ \mathbf{M}^{e}_{r-k}(\Gamma) }\mathbf{M}^{e}_{r-k} \hspace{1cm}\text{under the identification}\hspace{1cm}\Gamma^{\updownarrow x}\,\equiv\,\bigtimes_{k=1}^\infty \bigtimes_{ e\in E_k^{\updownarrow x}} \Gamma \,. \end{align*} \end{definition} \begin{proposition}\label{PropDecomp} Let the random measures $(\Gamma,\mathbf{M}_r)$, $(D,\gamma_{p,q})$, $(D,\vartheta_{\mathbf{M}_r})$, and $(\Gamma,\Theta^{ \updownarrow x}_{\mathbf{M}_r})$ be defined as in Theorem~\ref{ThmPathMeasure}, Definition~\ref{DefVartheta}, and Definition~\ref{DefTheta}. The following identity between measures on $D\times \Gamma\times \Gamma$ holds: \begin{align*} \gamma_{p,q}(dx)\mathbf{M}_r(dp)\mathbf{M}_r(dq)\,=\,\Theta^{ \updownarrow x}_{\mathbf{M}_r}(dp)\Theta^{ \updownarrow x}_{\mathbf{M}_r}(dq)\vartheta_{\mathbf{M}_r}(dx)\hspace{.5cm}\text{for $x\in D$ and $p,q\in \Gamma$.} \end{align*} \end{proposition} \begin{theorem}\label{ThmOperator} For $r\in {\mathbb R}$ let the random measure $(\Gamma,\mathbf{M}_r)$ be defined as in Theorem~\ref{ThmPathMeasure} and the kernel $T(p,q)$ be defined as in Proposition~\ref{PropLemCorrelate}. For a.e.\ realization of $\mathbf{M}_r$, the kernel $T(p,q)$ defines a bounded linear map $\mathbf{T}_{\mathbf{M}_r}:L^2(\Gamma,\mathbf{M}_r)\rightarrow L^2(\Gamma,\mathbf{M}_r)$ satisfying (i)-(ii) below. \begin{enumerate}[(i)] \item $\mathbf{T}_{\mathbf{M}_r}$ is Hilbert-Schmidt but not trace class. \item $\mathbf{T}_{\mathbf{M}_r}= \hat{Y}_{\mathbf{M}_r}\hat{Y}_{\mathbf{M}_r}^* $ for the compact linear operator $\hat{Y}_{\mathbf{M}_r}:L^2(D,\vartheta_{\mathbf{M}_r})\rightarrow L^2(\Gamma,\mathbf{M}_r) $ defined by $(\hat{Y}_{\mathbf{M}_r}g)(p)= \int_{D\times \Gamma}g(x)\gamma_{p,q}(dx)\mathbf{M}_{r}(dq) $ for $p\in \Gamma$ and $g\in L^2(D,\vartheta_{\mathbf{M}_r}) $. \end{enumerate} \end{theorem} \begin{remark}In particular (ii) of Theorem~\ref{ThmOperator} implies that $\mathbf{T}_{\mathbf{M}_r}$ is a positive operator. \end{remark} \begin{remark}For $(i,j)\in \{1,\ldots,b\}^2$, let $\big(\Gamma, \mathbf{M}_{r-1}^{(i,j)}\big)$ and $\big(D, \vartheta_{ \mathbf{M}_{r-1}^{(i,j)} }\big)$ be the component measures related to $\big(\Gamma, \mathbf{M}_{r}\big)$ as in Remark~\ref{RemarkVarthetaSymm}.
We can decompose $\hat{Y}_{\mathbf{M}_{r}}$ in terms of the operators $\hat{Y}_{\mathbf{M}_{r-1}^{(i,j)}}$ as \begin{align*} (\hat{Y}_{\mathbf{M}_{r}}g)(p)\,=\,\sum_{1\leq j\leq b }\big(\hat{Y}_{\mathbf{M}_{r-1}^{(i,j)}}g_{i,j}\big) (p_j)\,, \end{align*} where $g_{i,j}\in L^2\big(D_{i,j},\vartheta_{ \mathbf{M}_{r-1}^{(i,j)} }\big)$ are the components of $g\in L^2(D,\vartheta_{\mathbf{M}_{r}})$ under the identification $D\equiv \bigcup_{1\leq i,j \leq b }D_{i,j}$, and $p\in \Gamma$ is identified with the tuple $(i; p_1,\ldots,p_b) \in \{1,\ldots, b\}\times \Gamma^b$ as in Remark~\ref{RemarkCylinder}. \end{remark} We end this section with the following easy-to-prove lemma, which characterizes a simple and natural approximation method for $\hat{Y}_{\mathbf{M}_r}$ and $\mathbf{T}_{\mathbf{M}_r}$ by finite-rank operators. \begin{lemma}\label{LemApprox} Let the linear operator $\hat{Y}_{\mathbf{M}_r}:L^2(D,\vartheta_{\mathbf{M}_r})\rightarrow L^2(\Gamma,\mathbf{M}_r) $ be defined as in Theorem~\ref{ThmOperator}. The sequence of finite-rank operators $\hat{Y}_{\mathbf{M}_r}^{(n)}:L^2(D,\vartheta_{\mathbf{M}_r})\rightarrow L^2(\Gamma,\mathbf{M}_r) $ defined through generation-$n$ coarse-graining as $(\hat{Y}_{\mathbf{M}_r}^{(n)}g)(p):=\frac{1}{\mathbf{M}_r([p]_n) }\int_{[p]_n} \big(\hat{Y}_{\mathbf{M}_r}g\big)(q) \mathbf{M}_r(dq)$ a.s.\ has the following properties as $n\rightarrow \infty$: \begin{enumerate}[(I)] \item $\hat{Y}_{\mathbf{M}_r}^{(n)}$ converges to $ \hat{Y}_{\mathbf{M}_r}$ in operator norm, \item the kernels $\mathbf{T}_{\mathbf{M}_r}^{(n)}(p,q)$ of $\mathbf{T}_{\mathbf{M}_r}^{(n)}:=\hat{Y}_{\mathbf{M}_r}^{(n)}\big(\hat{Y}_{\mathbf{M}_r}^{(n)}\big)^*$ converge $\mathbf{M}_r\times \mathbf{M}_r$-a.e.\ to $T(p,q)$, and \item for any $a\in {\mathbb R}$ the exponential moments of $\mathbf{T}_{\mathbf{M}_r}^{(n)}(p,q)$ converge up to those of $T(p,q)$: $$ \int_{\Gamma\times \Gamma}e^{a\mathbf{T}_{\mathbf{M}_r}^{(n)}(p,q)}\mathbf{M}_r(dp) \mathbf{M}_r(dq)\hspace{.5cm}\stackrel{n\rightarrow \infty}{\nearrow} \hspace{.5cm} \int_{\Gamma\times \Gamma}e^{aT(p,q)}\mathbf{M}_r(dp)\mathbf{M}_r(dq)\,<\, \infty\,. $$ \end{enumerate} \end{lemma} \section{DHL construction, directed paths, and measures}\label{SectionDHLConst} The DHL construction that we sketch here was introduced in~\cite{Clark2}, and our presentation will be specialized to the $b=s$ case. A closely related perspective on diamond fractals that is oriented towards a discussion of diffusion can be found in~\cite{Ruiz,Ruiz2}. Diffusion has also been studied on critical percolation clusters constructed in a diamond graph setting~\cite{HamblyII}. \vspace{.3cm} \noindent \textbf{DHL construction through sequences:} The recursive construction of the diamond graphs implies an obvious one-to-one correspondence between the edge set, $E_n$, of the diamond graph $D_n$ and the set of length-$n$ sequences, $\{(b_k,s_k)\}_{k\in \{1,\ldots ,n\} }$, of pairs $(b_k,s_k)\in \{1,\ldots, b\}^2$. In other terms, $E_n$ is canonically identifiable with the product set $ \big(\{1,\ldots, b\}^2\big)^{n}$. For $\mathcal{D}:=\big(\{1,\ldots, b\}^2\big)^{\infty}$ we define the DHL as a metric space $(D, d_D)$ where $$ D\, := \, \mathcal{D}/\!\sim\,, \hspace{1cm}\text{with}\hspace{.3cm} x\sim y \hspace{.3cm}\text{iff}\hspace{.3cm} d_D(x,y )=0\,, $$ for a semi-metric $d_D:\mathcal{D}\times \mathcal{D}\longrightarrow [0,1]$ to be defined below in~(\ref{DefSemiMetric}) that, intuitively, measures the traveling distance along paths.
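Before formalizing the metric, here is a minimal computational sketch of this sequence encoding (the function names and truncation depth are our illustrative choices). The height map $\widetilde{\pi}$ and the shift maps $S_{i,j}$ introduced in the next paragraphs satisfy $\widetilde{\pi}\big(S_{i,j}(x)\big)=\frac{1}{b}\big(j-1+\widetilde{\pi}(x)\big)$, the identity underlying the contraction property of the $S_{i,j}$; the printed values agree up to the truncation depth:
\begin{verbatim}
from itertools import islice

def tilde_pi(x, b=2, depth=50):
    """Height coordinate of a point x in D, encoded as an iterable of pairs
    (b_k, s_k) with entries in {1,...,b}: tilde_pi(x) = sum_k (s_k - 1)/b**k."""
    return sum((s - 1) / b ** k for k, (_, s) in enumerate(islice(x, depth), 1))

def shift(i, j, x):
    """The shift map S_{i,j}: prepend the pair (i, j) to the encoding of x."""
    return [(i, j)] + list(x)

x = [(1, 2), (2, 1), (1, 1)] * 17        # an eventually periodic point of D (b = 2)
print(tilde_pi(x), (2 - 1 + tilde_pi(x)) / 2, tilde_pi(shift(1, 2, x)))
\end{verbatim}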
\vspace{.3cm} \noindent \textbf{The metric:} Define the map $\widetilde{\pi}: \mathcal{D}\rightarrow [0,1]$ such that a sequence $x= \{(b_k^x,s_k^x)\}_{k\in \mathbb{N}}$ is assigned the number, $ \widetilde{\pi}(x)$, with base $b$ decimal expansion having $s_k^x-1\in\{0,\ldots, b-1\}$ as its $k^{th}$ digit for each $k\in \mathbb{N}$: $$\widetilde{\pi}(x)\,:=\, \sum_{k=1}^{\infty} \frac{s_{k}^{x}-1}{ b^{k}} \,.$$ Define $A:=\{x\in \mathcal{D}\,|\, \widetilde{\pi}(x)=0 \}$ and $B:=\{x\in \mathcal{D}\,|\, \widetilde{\pi}(x)=1 \}$ (the root nodes). For $x,y\in\mathcal{D}$ we write $x\updownarrow y$ if $x$ or $y$ belongs to one of the sets $A$, $B$ or if the sequences of pairs $\{(b_k^x,s_k^x)\}_{k\in \mathbb{N}}$ and $\{(b_k^y,s_k^y)\}_{k\in \mathbb{N}}$ defining $x$ and $y$, respectively, have their first disagreement at an $s$-component value, i.e., there exists an $n\in \mathbb{N}$ such that $ b_k^x=b_k^y$ for all $ 1\leq k \leq n $ and $ s_n^x\neq s_n^y$. We define the semi-metric $d_D$ in terms of $\widetilde{\pi}$ as \begin{align}\label{DefSemiMetric} d_D(x,y)\,:=\,\begin{cases} \quad \quad \big|\widetilde{\pi}(x)-\widetilde{\pi}(y)\big| & \quad\text{if } x\updownarrow y, \\ \,\, \displaystyle \inf_{z \in \mathcal{D},\, z\updownarrow x,\, z\updownarrow y }\big( d_D(x,z)+d_D(z,y) \big) & \quad \text{otherwise.} \end{cases} \end{align} The semi-metric $d_D(x,y)$ takes values $\leq 1$ since, by definition, $z\updownarrow x $ and $z\updownarrow y $ for any $z\in A$ or $z\in B$, and thus $d_D(x,y)\leq \min\big(\widetilde{\pi}(x)+\widetilde{\pi}(y),\, 2-\widetilde{\pi}(x)-\widetilde{\pi}(y)\big)$.\vspace{.3cm} \noindent \textbf{Self-similarity:} The fractal decomposition of $D$ into embedded, shrunken subcopies of $D$ is easy to see through the family of shift maps $S_{i,j}:\mathcal{D}\rightarrow \mathcal{D}$ for $(i,j)\in \{1,\ldots , b\}^2$ that send a sequence $x\in \mathcal{D}$ to a shifted sequence $y=S_{i,j}(x)$ having initial term $(i,j)$. In other terms, $\{(b_k^x,s_k^x)\}_{k\in \mathbb{N}}$ is mapped to $\{(b_k^y,s_k^y)\}_{k\in \mathbb{N}}$ for $(b_1^y,s_1^y)=(i,j)$ and $(b_k^y,s_k^y)=(b_{k-1}^x,s_{k-1}^x)$ for $k\geq 2$. The $S_{i,j}$'s are well-defined as maps from $D$ onto $D_{i,j}$ with the contractive property $$d_D\big(S_{i,j}(x),S_{i,j}(y)\big)\,=\,\frac{1}{b} d_D(x,y) \hspace{.5cm} \text{for} \hspace{.5cm} x,y\in D \,. $$ These maps $S_{i,j}$ are ``similitudes'' of the fractal $D$, and the above property implies that the space $(D,d_D )$ has Hausdorff dimension two. \vspace{.3cm} \noindent \textbf{The vertex set:} The sets $A, B \subset \mathcal{D}$ form equivalence classes under the metric $d_D $ that correspond to the root nodes of $D$. Similarly, the higher-generation vertices, $V_n\backslash V_{n-1}$ for $n\in \mathbb{N}$, of the diamond graphs are identified with large (uncountably infinite) equivalence classes of $\mathcal{D}$ of the form $$ \big( S_{ b_1, s_1 } \circ \cdots \circ S_{b_{n-1}, s_{n-1} }\circ S_{b_n, s_n }(B) \big) \cup \big(S_{b_1,s_1 } \circ \cdots \circ S_{b_{n-1},s_{n-1}}\circ S_{b_n,s_n+1 }(A) \big) \, $$ for a length-$n$ sequence of pairs $(b_k,s_k)\in \{1,\ldots , b\}^2$ with $s_n<b$; see~\cite[Appendix A.1] {Clark2} for a more explicit construction of these vertex equivalence classes. In contrast, elements of $E:=D\backslash V$ for $V:=\bigcup_n V_n$ have unique representations in $\mathcal{D}$.
The vertex set $V$ is dense in $D$.\vspace{.3cm} \noindent \textbf{Measure theoretic structures on the DHL:} For $E:=D\backslash V$, define the (cylinder-like) subsets of $E$ \begin{align*} C_{ (b_1,s_1)\times \cdots \times (b_n,s_n)} \,:=\, S_{b_1 ,s_1 } \circ \cdots \circ S_{ b_n,s_n }\big(E \big) \end{align*} for a given length-$n$ sequence of pairs $( b_k ,s_k)\in \{1,\ldots , b\}^2$. These sets are canonically identifiable with elements in $E_n$, and, under this association, the Borel $\sigma$-algebra, $\mathcal{B}_D$, of $(D,d_D)$ is generated by the algebra $\mathcal{A}_{D}$ formed by finite unions of sets in $\{V\}\cup \bigcup_{k=0}^{\infty}E_k$. There is a unique normalized measure $\nu$ on $(D,\mathcal{B}_D )$ such that $\nu(V)=0$ and such that $\nu\big(C_{ (b_1,s_1)\times \cdots \times (b_n,s_n)} \big)=|E_n |^{-1}= b^{-2n}$.\vspace{.3cm} \noindent \textbf{Directed paths:} A \textit{directed path} on $D$ is a continuous function $p:[0,1]\rightarrow D$ such that $\widetilde{\pi}\big( p(t)\big)=t $ for all $t\in[0,1]$. Thus the path moves progressively at a constant speed from $A$ to $B$. We can measure the distance between paths using the uniform metric: $$ d_\Gamma\big(p,q\big) \, = \, \max_{0\leq t\leq 1}d_D\big( p(t), q(t) \big) \hspace{.5cm} \text{for} \hspace{.5cm} p,q\in \Gamma \,. $$ This form implies that nonzero distances always have the discrete form $ d_\Gamma\big(p,q\big)\, = \, b^{-(n-1)} $, where $n\in \mathbb{N}$ is the lowest generation of any vertex that sits on the trajectory of $p$ but not of $q$.\vspace{.2cm} \noindent \textbf{Measure theoretic structures on paths:} The set of directed paths $\Gamma_n$ on the $n^{th}$ diamond graph defines an equivalence relation on $\Gamma$ for which $q\equiv_n p$ iff the coarse-grained paths $[p]_n$ and $[q]_n$ are equal. The Borel $\sigma$-algebra, $\mathcal{B}_\Gamma $, on $(\Gamma, d_\Gamma)$ is generated by the semi-algebra $\cup_{n=1}^\infty \Gamma_n $, and there is a unique measure $\mu$ on $(\Gamma,\mathcal{B}_\Gamma)$ satisfying $\mu(\mathbf{p})=| \Gamma_n |^{-1} $ for all $n\in \mathbb{N}$ and $\mathbf{p}\in \Gamma_n$; see~\cite[Appendix A]{Clark2} for more detail. The \textit{uniform measure} on $\Gamma$ refers to the triple $\big(\Gamma,\mathcal{B}_\Gamma, \mu\big)$. \section{The correlation measure construction and properties}\label{SecCorrMeas} \subsection{Set algebras on $\mathbf{\Gamma}$ and $\mathbf{\Gamma\times \Gamma}$ } Recall that $\mathcal{B}_{\Gamma}$ denotes the Borel $\sigma$-algebra on $\Gamma$. Lemma~\ref{LemAlgebra} below is worth stating to avoid repetition in the proofs concerned with defining measures on $\Gamma$ and $\Gamma\times \Gamma$. \begin{definition}\label{DefAlgebra} For a set $A$ let $\mathcal{P}(A)$ denote its power set. \begin{itemize} \item Define $\mathcal{A}_{\Gamma}:= \cup_{n=1}^\infty \mathcal{P}(\Gamma_{n})$ as a subset of $\mathcal{B}_{\Gamma}$ through the canonical identification of subsets of $\Gamma_{n}$ with cylinder sets in $\Gamma$. \item Define $\mathcal{A}_{\Gamma\times \Gamma}:= \cup_{n=1}^\infty \mathcal{P}(\Gamma_{n}\times \Gamma_{n})$ as a subset of $\mathcal{B}_{\Gamma\times \Gamma}:=\mathcal{B}_{\Gamma}\otimes \mathcal{B}_{\Gamma}$ through the analogous identification. \end{itemize} \end{definition} \begin{remark} In different terms, $\mathcal{P}(\Gamma_{n})$ is the finite algebra of subsets of $\Gamma$ generated by the map from $\Gamma$ to $\Gamma_n$ that sends a path $p$ to its $n^{th}$-generation coarse-graining, $[p]_n$.
\end{remark} \begin{remark} For $n>N$ the algebra $\mathcal{P}(\Gamma_{N})$ is a subset of $\mathcal{P}(\Gamma_{n})$. \end{remark} \begin{remark}\label{RemarkOpenClosed} Every element $A\in\mathcal{A}_{\Gamma}$ is both open and closed in the topology of the metric space $(\Gamma,d_\Gamma )$. This holds because there exists an $n\in \mathbb{N}$ such that $A$ is a finite union of cylinder sets $\mathbf{p}\in \Gamma_n$ and $\mathbf{p} \,=\,\big\{ q\in \Gamma \,\big|\, d_\Gamma(p,q)\leq \delta \big\} $ for any element $p\in \mathbf{p}$ and any choice of $\frac{1}{b^n}\leq \delta<\frac{1}{b^{n-1}}$.\footnote{Recall that the metric $d_\Gamma$ is discrete-valued with range $\{ 1/b^{n}\,|\, n\in \mathbb{N}_0 \}$. } Similarly every $A\in\mathcal{A}_{\Gamma\times \Gamma }$ is both open and closed. \end{remark} \begin{lemma}\label{LemAlgebra} Let $\mathcal{A}_{\Gamma}\subset \mathcal{B}_{\Gamma}$ and $\mathcal{A}_{\Gamma\times \Gamma}\subset \mathcal{B}_{\Gamma\times \Gamma}$ be as in Definition~\ref{DefAlgebra}. \begin{enumerate}[(i)] \item $\mathcal{A}_{\Gamma}$ is an algebra that generates $\mathcal{B}_{\Gamma}$. Moreover, any finitely additive function $\varrho:\mathcal{A}_{\Gamma}\rightarrow [0,\infty)$ must be a premeasure and thus extend to a measure on $(\Gamma, \mathcal{B}_{\Gamma})$ through the Carath\'eodory process. \item The analogous statement holds for $\mathcal{A}_{\Gamma\times \Gamma}$ and $(\Gamma\times \Gamma, \mathcal{B}_{\Gamma\times \Gamma})$. \end{enumerate} \end{lemma} \begin{proof} The only statement from (i) that is not obvious is that any finitely additive function $\varrho:\mathcal{A}_{\Gamma}\rightarrow [0,\infty)$ must automatically be countably subadditive, and thus be a premeasure. The countable subadditivity property holds vacuously because if $A=\cup_{k=1}^\infty A_k $ is a disjoint union with $A\in\mathcal{A}_{\Gamma}$ and $A_k\in\mathcal{A}_{\Gamma}$ for all $k\in \mathbb{N}$, then only finitely many of the sets $A_k$ are nonempty. To see this, note that as a consequence of Remark~\ref{RemarkOpenClosed} the set $\bigcap_{n=1}^\infty \big(A- \bigcup_{k=1}^{n} A_k\big)$ is an intersection of nested, closed subsets of the compact metric space $(\Gamma,d_\Gamma)$, and such an intersection is nonempty unless $A=\bigcup_{k=1}^{n} A_k$ for large enough $n$. The analogous statement (ii) for the algebra $\mathcal{A}_{\Gamma\times \Gamma}$ holds by the same argument. \end{proof} \subsection{Proof of Lemma~\ref{LemCorrelate}} \begin{proof}[Proof of Lemma~\ref{LemCorrelate}]Part (i): By Lemma~\ref{LemAlgebra}, it suffices to define a finitely additive function $\upsilon_{r}:\mathcal{A}_{\Gamma\times \Gamma}\rightarrow [0,\infty)$ consistent with condition~(\ref{UpsilonCond}). Define $\upsilon_{r}$ to assign $A \subset \Gamma_N\times \Gamma_N $ weight \begin{align}\label{UpsilonCalcI} \upsilon_{r}(A )\, =\,& \sum_{\substack{\mathbf{p}\times \mathbf{q}\in \Gamma_N\times \Gamma_N \\ \mathbf{p}\times \mathbf{q}\in A } } \frac{1}{|\Gamma_N|^2} \big(1+R(r-N)\big)^{ \xi_N(\mathbf{p},\mathbf{q} ) } \,, \intertext{or equivalently} \, =\,& \sum_{\substack{\mathbf{p}\times \mathbf{q}\in \Gamma_N\times \Gamma_N \\ \mathbf{p}\times \mathbf{q}\in A } } \frac{1}{|\Gamma_N|^2} \prod_{k=1}^{b^N}\big(1+R(r-N)\big)^{ \chi(\mathbf{p}(k)=\mathbf{q}(k) ) }\,. \nonumber \intertext{Since $A $ can also be viewed as an element of $\mathcal{P}(\Gamma_n\times\Gamma_n)$ for any $n>N$, we need to show through an induction argument that the above definition of $\upsilon_{r}(A )$ remains consistent when $N$ is replaced by any larger $n\in \mathbb{N}$.
By the recursive relation $1+ R(r-N)\,=\,\frac{1}{b}\big[ b-1+ \big( 1+R(r-N-1)\big)^b \big] $ from Lemma~\ref{LemVar} and the combinatorial identity $|\Gamma_{N+1}|=b^{b^N}|\Gamma_N| $, the above can be written as } \,=\,&\sum_{\substack{\mathbf{p}\times \mathbf{q}\in \Gamma_N\times \Gamma_N \\ \mathbf{p}\times \mathbf{q}\in A } }\sum_{\substack{\mathbf{\widehat{p}},\mathbf{\widehat{q}}\in \Gamma_{N+1} \\ \mathbf{\widehat{p}}\subset \mathbf{p}, \mathbf{\widehat{q}}\subset \mathbf{q} } }\frac{1}{|\Gamma_{N+1}|^2} \prod_{k=1}^{b^{N+1}}\big(1+R(r-N-1)\big)^{ \chi(\mathbf{\widehat{p}}(k)=\mathbf{\widehat{q}}(k) ) } \nonumber \\ \,=\,&\sum_{\substack{\mathbf{\widehat{p}}\times \mathbf{\widehat{q}}\in \Gamma_{N+1}\times \Gamma_{N+1} \\ \mathbf{\widehat{p}}\times \mathbf{\widehat{q}}\in A } }\frac{1}{|\Gamma_{N+1}|^2} \big(1+R(r-N-1)\big)^{\xi_{N+1}(\mathbf{\widehat{p}},\mathbf{\widehat{q}}) }\,. \label{UpsilonCalcII} \end{align} Thus the equation~(\ref{UpsilonCalcI}) for $\upsilon_r(A)$ holds with $N$ replaced by $N+1$. By induction, (\ref{UpsilonCalcI}) holds with $N$ replaced by any $n\in \mathbb{N}$ with $n>N$. This consistency implies that $\upsilon_{r}$ is well-defined and finitely additive on $\mathcal{A}_{\Gamma\times \Gamma}$, and thus $\upsilon_{r}$ extends to a measure on $\mathcal{B}_{\Gamma\times \Gamma}$. \vspace{.3cm} \noindent Part (ii): Let $\mathbf{S}_\emptyset \subset\Gamma\times \Gamma $ be defined as in Corollary~\ref{CorrMuMu}. The set $\mathbf{S}_\emptyset$ can be written in terms of cylinder sets as follows: \begin{align}\label{DefS} \mathbf{S}_{\emptyset}\,=\,\bigcup_{n=1}^{\infty} \mathbf{S}^{(n)}_{\emptyset} \hspace{1.3cm}\text{for}\hspace{1.3cm} \mathbf{S}^{(n)}_{\emptyset}\,:=\,\bigcup_{\substack{\mathbf{p},\mathbf{q}\in \Gamma_n \\ (\forall k) \mathbf{p}(k)\neq \mathbf{q}(k) } } \mathbf{p}\times\mathbf{q} \,. \end{align} For $\mathbf{p},\mathbf{q}\in \Gamma_n$ sharing no edges, the definition of $\upsilon_r$ reduces to $\upsilon_r(\mathbf{p}\times\mathbf{q} )=\frac{1}{|\Gamma_n|^2}=\mu\times \mu( \mathbf{p}\times\mathbf{q} ) $. Hence the difference $\upsilon_r-\mu\times \mu$ is supported on $\mathbf{S}^c_{\emptyset}$. However, by Corollary~\ref{CorrMuMu}, the product $\mu\times \mu$ assigns full measure to $\mathbf{S}_{\emptyset}$, and therefore the measures $\mu\times \mu$ and $\upsilon_r-\mu\times \mu$ are mutually singular and form the Lebesgue decomposition of $\upsilon_r$ with respect to $\mu\times \mu$. The measure $\upsilon_r-\mu\times \mu$ has total mass $R(r)$ since $\upsilon_r$ and $\mu\times \mu$ have total mass $1+R(r)$ and $1$, respectively. \end{proof} \subsection{Some useful martingales under the correlation measure} \begin{definition}\label{DefSigmaAlgebra} Define $\mathcal{F}_n$ as the $\sigma$-algebra of subsets of $\Gamma\times \Gamma$ generated by the map $F: \Gamma\times \Gamma\rightarrow \Gamma_n\times \Gamma_n$ defined by $F(p,q)=([p]_n,[q]_n)$.\footnote{In the notation of Lemma~\ref{LemAlgebra}, $\mathcal{F}_n$ is equal to $\mathcal{P}(\Gamma_n\times \Gamma_n)$.} \end{definition} \begin{proposition}\label{PropMartingales} Let the family of measures $(\upsilon_{r})_{r\in{\mathbb R}}$ on $\Gamma\times \Gamma$ be defined as in Lemma~\ref{LemCorrelate}.
For $n\in \mathbb{N}$ and $r,t\in {\mathbb R}$, define $\phi_n^{(r,t)}:\Gamma\times \Gamma\rightarrow (0,\infty) $ as $$\phi_n^{(r,t)}\,\equiv \,\phi_n^{(r,t)}(p,q)\,=\,\bigg(\frac{ 1+ R(t-n) }{ 1+ R(r-n) }\bigg)^{\xi_n(p,q)} \,,$$ where $\xi_n(p,q)\in\{0,1,\ldots, b^n\}$ is the number of edges shared by the coarse-grained paths $[p]_n,[q]_n\in \Gamma_n$. Then $(\upsilon_{r})_{r\in {\mathbb R}}$ and $\phi_n^{(r,t)}$ satisfy the properties (i)-(iii) below for all $r,t\in {\mathbb R}$. \begin{enumerate}[(i)] \item Under the probability measure $\widehat{\upsilon}_r :=\frac{1}{1+R(r)}\upsilon_r $, the sequence $\big(\phi_n^{(r,t)}\big)_{n\in \mathbb{N}}$ forms a nonnegative martingale with respect to the filtration $(\mathcal{F}_n)_{n\in \mathbb{N}} $. The martingale $\big(\phi_n^{(r,t)}\big)_{n\in \mathbb{N}}$ converges $\upsilon_r$-a.e.\ and in $L^2(\Gamma\times \Gamma, \upsilon_r)$ to a finite limit $\phi_{\infty}^{(r,t)}:=\textup{exp}\{(t-r)T(p,q)\}$, where $T(p,q)$ is $\upsilon_r$-a.e.\ equal to the limit of $\frac{\kappa^2}{n^2} \xi_n(p,q) $ as $n\rightarrow \infty$. \item Similarly, $\frac{d}{dt}\phi_n^{(r,t)}\big|_{t=r}= \frac{R'(r-n) }{1+R(r-n) }\xi_n(p,q) $ forms a nonnegative martingale under $\widehat{\upsilon}_r$ with respect to the filtration $(\mathcal{F}_n)_{n\in \mathbb{N}} $ and converges $\upsilon_r$-a.e.\ to $T(p,q)$ as $n\rightarrow \infty$. \item $\phi_{\infty}^{(r,t)}$ is the Radon-Nikodym derivative of $\upsilon_t$ with respect to $\upsilon_r$ for any $r,t\in {\mathbb R}$. \end{enumerate} \end{proposition} \begin{proof} Part (i): Let $\mathbbmss{E}_{\widehat{\upsilon}_r}$ denote the expectation on the probability space $(\Gamma\times \Gamma,\widehat{\upsilon}_r)$. The $\sigma$-algebra $\mathcal{F}_n$ is generated by the product sets $\mathbf{p}\times \mathbf{q}\subset \Gamma\times \Gamma$ for $\mathbf{p}, \mathbf{q}\in \Gamma_n $. To see that $(\phi_{n}^{(r,t)} )_{n\in \mathbb{N}}$ is a martingale with respect to the filtration $(\mathcal{F}_n)_{n\in \mathbb{N}}$ under the measure $\widehat{\upsilon}_r$, notice that for cylinder sets $\mathbf{p},\mathbf{q}\in \Gamma_{N}$ with $N<n$ the conditional expectation of $\phi_{n}^{(r,t)}$ is \begin{align}\label{CondThing} \mathbbmss{E}_{\widehat{\upsilon}_r}\big[ \phi_{n}^{(r,t)} \,\big|\, \mathbf{p}\times \mathbf{q}\big]\,=\,& \frac{1}{ \upsilon_r(\mathbf{p}\times\mathbf{q} ) } \int_{\mathbf{p}\times \mathbf{q}} \phi_{n}^{(r,t)} (p,q) \upsilon_r(dp,dq)\,, \\ \intertext{where the above uses that the normalizing constant $\frac{1}{1+R(r)}$ in the definition of $\widehat{\upsilon}_r$ cancels out.
We can write $\mathbf{p}\times \mathbf{q}$ as a disjoint union of product sets $\mathbf{\widehat{p}}\times \mathbf{\widehat{q}}\in \Gamma_n\times \Gamma_n $ with $\mathbf{\widehat{p}}\subset \mathbf{p}$ and $\mathbf{\widehat{q}}\subset \mathbf{q}$: } \,=\,&\frac{1}{\upsilon_r(\mathbf{p}\times\mathbf{q} ) } \sum_{\substack{\mathbf{\widehat{p} },\mathbf{\widehat{q} }\in \Gamma_{n} \\ \mathbf{\widehat{p} }\subset \mathbf{p},\, \mathbf{\widehat{q} }\subset \mathbf{q} } } \int_{\mathbf{\widehat{p}}\times \mathbf{\widehat{q}}} \phi_{n}^{(r,t)} (p,q)\upsilon_r(dp, dq ) \,.\nonumber \\ \intertext{Since $\phi_{n}^{(r,t)}(p,q)$ is constant and equal to $\big(\frac{ 1+ R(t-n) }{ 1+ R(r-n) }\big)^{\xi_{n}(\mathbf{\widehat{p}},\mathbf{\widehat{q}}) }$ for $(p,q)\in \mathbf{\widehat{p}}\times \mathbf{\widehat{q}} $ and $\upsilon_r(\mathbf{\widehat{p}}\times \mathbf{\widehat{q}} )$ has the form~(\ref{UpsilonCond}), the above is equal to} \,=\,&\frac{1}{ \upsilon_r(\mathbf{p}\times\mathbf{q} ) } \sum_{\substack{\mathbf{\widehat{p} },\mathbf{\widehat{q} }\in \Gamma_{n} \\ \mathbf{\widehat{p} }\subset \mathbf{p},\, \mathbf{\widehat{q} }\subset \mathbf{q} } }\bigg(\frac{ 1+ R(t-n) }{ 1+ R(r-n) }\bigg)^{\xi_{n}(\mathbf{\widehat{p} },\mathbf{\widehat{q} })} \frac{1}{|\Gamma_{n}|^2} \big(1+R(r-n)\big)^{\xi_{n}(\mathbf{\widehat{p} },\mathbf{\widehat{q} } ) }\nonumber \\ \,=\,&\frac{1}{ \upsilon_r(\mathbf{p}\times\mathbf{q} ) } \sum_{\substack{\mathbf{\widehat{p} },\mathbf{\widehat{q} }\in \Gamma_{n} \\ \mathbf{\widehat{p} }\subset \mathbf{p},\, \mathbf{\widehat{q} }\subset \mathbf{q} } }\frac{1}{|\Gamma_{n}|^2} \big(1+R(t-n)\big)^{ \xi_{n}(\mathbf{\widehat{p} },\mathbf{\widehat{q} }) } \,. \nonumber \\ \intertext{By our previous computation~(\ref{UpsilonCalcII}), we have} \,=\,&\frac{1}{\upsilon_r(\mathbf{p}\times\mathbf{q} ) } \frac{1}{|\Gamma_N|^2} \big(1+R(t-N)\big)^{ \xi_{N}(\mathbf{p},\mathbf{q} ) }\,.\label{FindUpsilon} \\ \intertext{Finally we use~(\ref{UpsilonCond}) again for $\upsilon_{r}(\mathbf{p}\times\mathbf{q})$ to get} \,=\,&\bigg(\frac{ 1+ R(t-N) }{ 1+ R(r-N) }\bigg)^{\xi_{N}(p,q)} \,=\,\phi_{N}^{(r,t)}(p,q)\,.\nonumber \end{align} Therefore $(\phi_{n}^{(r,t)})_{n\in \mathbb{N}}$ forms a martingale. Since the martingale is nonnegative and $\upsilon_r$-integrable ($\int_{\Gamma\times \Gamma} \phi_{n}^{(r,t)}(p,q)\upsilon_r(dp,dq) = 1+R(t)$), the sequence $(\phi_{n}^{(r,t)})_{n\in \mathbb{N}}$ converges $\upsilon_r$-a.e.\ to a nonnegative, $\upsilon_r$-integrable limit $\phi_{\infty}^{(r,t)}(p,q)$. \vspace{.2cm} Next we show that the log of $\phi_{\infty}^{(r,t)}(p,q)$ is $\upsilon_r$-a.e.\ equal to the large $n$ limit of $(t-r)\frac{\kappa^2}{n^2}\xi_n(p,q)$. The log of $ \phi_{n}^{(r,t)}\equiv \phi_{n}^{(r,t)}(p,q)$ is \begin{align*} \log \big(\phi_{n}^{(r,t)}\big) \,=\,&\xi_n(p,q) \Big( \log\big( 1+ R(t-n) \big)\,-\, \log\big( 1+ R(r-n) \big) \Big)\,. \\
\intertext{The small $x$ asymptotics $\log(1+x)=x-\frac{x^2}{2}+\mathit{O}(x^3)$ combined with the asymptotics $R(r)=\frac{\kappa^2}{-r}+\frac{\eta\kappa^2\log(-r)}{r^2}+\mathcal{O}\big(\frac{\log^2(-r)}{r^3} \big) $ for $-r\gg 1$ from property (II) of Lemma~\ref{LemVar} yield that for $n\gg 1$ } \,=\,&\xi_n(p,q) \bigg( \frac{\kappa^2}{n-t}\,+\,\frac{\eta\kappa^2 \log(n-t)}{(n-t)^2}\,-\, \frac{1}{2} \frac{\kappa^4}{(n-t)^2} \, +\, \mathit{o}\Big( \frac{1}{n^2} \Big) \bigg)\\ &-\xi_n(p,q) \bigg( \frac{\kappa^2}{n-r}\,+\,\frac{\eta\kappa^2 \log (n-r)}{(n-r)^2}\,-\, \frac{1}{2} \frac{\kappa^4}{(n-r)^2} +\mathit{o}\Big( \frac{1}{n^2} \Big) \bigg)\\ \,=\,&\xi_n(p,q)\bigg( (t-r) \frac{\kappa^2}{n^2} +\mathit{o}\Big( \frac{1}{n^2} \Big) \bigg)\,. \end{align*} Since the sequence $(\phi_{n}^{(r,t)})_{n\in \mathbb{N}}$ converges $\upsilon_r$-a.e.\ to a finite limit $\phi_{\infty}^{(r,t)}$ for any $r,t\in {\mathbb R}$, the sequence $\big(\frac{\kappa^2}{n^2}\xi_n \big)_{n\in \mathbb{N}}$ converges $\upsilon_r$-a.e.\ to a finite limit, which we denote by $T(p,q)$. \vspace{.1cm} The martingale $(\phi_{n}^{(r,t)})_{n\in \mathbb{N}}$ has uniformly bounded second moments. To see this, notice that $(\phi_{n}^{(r,t)})^2$ is bounded by the $\upsilon_r$-integrable function $\phi_{\infty}^{(r,2t-r+1)}:=\exp\{(2t-2r+1)T(p,q) \}$ for large enough $n$. Thus the sequence $(\phi_{n}^{(r,t)})_{n\in \mathbb{N}}$ converges in $L^2(\Gamma\times \Gamma, \upsilon_r)$ to $\phi_{\infty}^{(r,t)}$. \vspace{.4cm} \noindent Part (ii): Since $\big(\phi_{n}^{(r,t)}\big)_{n\in \mathbb{N}}$ is a martingale with respect to $\mathcal{F}_n$ under $\widehat{\upsilon}_r=\frac{1}{1+R(r)}\upsilon_r$ for any $t\in \mathbb{R}$ by part (i), the derivative $\frac{d}{dt}\phi_n^{(r,t)}\big|_{t=r}= \frac{R'(r-n) }{1+R(r-n) }\xi_n(p,q) $ is also a martingale. However, $R(r-n)$ vanishes with large $n$ and $R'(r-n)=\frac{\kappa^2}{n^2}+\mathit{o}\big(\frac{1}{n^2}\big)$ by Remark~\ref{RemarkDerR}. Hence $ \frac{R'(r-n) }{1+R(r-n) }\xi_n(p,q) $ becomes close to $\frac{\kappa^2}{n^2}\xi_n(p,q)$ with large $n$, and thus converges $\upsilon_r$-a.e.\ to $T(p,q)$. \vspace{.3cm} \noindent Part (iii): For $\mathbf{p},\mathbf{q}\in \Gamma_N$ notice that the expression (\ref{FindUpsilon}) is equal to $\frac{\upsilon_t(\mathbf{p}\times \mathbf{q})}{\upsilon_r(\mathbf{p}\times \mathbf{q}) }$ by definition of $\upsilon_t$. Thus by canceling $\upsilon_r(\mathbf{p}\times \mathbf{q})$, the equality between (\ref{CondThing}) and (\ref{FindUpsilon}) reduces to \begin{align}\label{UpLim} \upsilon_t(\mathbf{p}\times \mathbf{q})\,=\,\int_{\mathbf{p}\times \mathbf{q}} \phi_n^{(r,t)}(p,q) \upsilon_r(dp,dq) \end{align} for any $n\geq N$. Since the functions $\phi_n^{(r,t)}$ are uniformly bounded in $L^2(\Gamma\times \Gamma, \upsilon_r)$-norm and converge $\upsilon_r$-a.e.\ to $\phi_\infty^{(r,t)}$, the equality~(\ref{UpLim}) holds for the limit function $\phi_\infty^{(r,t)}=\textup{exp}\{(t-r)T(p,q)\} $. The measure $\upsilon_t$ is determined by its assignment on products of cylinder sets, and therefore $\phi_\infty^{(r,t)}$ is the Radon-Nikodym derivative of $\upsilon_t$ with respect to $\upsilon_r$. \end{proof} \subsection{Proof of Proposition~\ref{PropLemCorrelate}} \begin{proof}[Proof of Proposition~\ref{PropLemCorrelate}] Parts (i) and (iii) follow directly from Proposition~\ref{PropMartingales}.
The formula in part (iv) holds because $e^{aT(p,q)} $ is the Radon-Nikodym derivative of $\upsilon_{r+a}$ with respect to $\upsilon_r$ by part (iii) of Proposition~\ref{PropMartingales}, and thus \begin{align}\label{Sh} \int_{\Gamma\times \Gamma}e^{aT(p,q)}\upsilon_r(dp,dq)\,=\upsilon_{r+a}\big(\Gamma\times \Gamma\big)\,=\,1\,+\,R(r+a)\,. \end{align} The only statement from part (ii) that does not follow from Lemma~\ref{LemCorrelate} and Proposition~\ref{PropMartingales} is the claim that $\upsilon_r-\mu\times \mu$ assigns full measure to the set of pairs $(p,q)$ such that $T(p,q)>0$. Define $\mathbf{\widehat{S}}=\{ (p,q)\,|\,T(p,q)=0 \}$, and let $\mathbf{S}_{\emptyset}\subset \Gamma\times \Gamma$ be defined as in Corollary~\ref{CorrMuMu}. As $a\rightarrow -\infty$ the left side of~(\ref{Sh}) converges to $\upsilon_r\big(\mathbf{\widehat{S}}\big)$ by the dominated convergence theorem, and the right side converges to $1$ since $R(r)$ vanishes as $r\rightarrow -\infty$. However, $\mathbf{S}_{\emptyset}\subset\mathbf{\widehat{S}} $ and $\upsilon_r(\mathbf{S}_{\emptyset})=\mu\times \mu(\mathbf{S}_{\emptyset})=1$ by Lemma~\ref{LemCorrelate}. Therefore, $\upsilon_r\big(\mathbf{\widehat{S}}-\mathbf{S}_{\emptyset} \big)=0$, and $\upsilon_r-\mu\times \mu$ is supported on the set of pairs $(p,q)$ such that $T(p,q)>0$. \end{proof} \section{The continuum random polymer measures}\label{SecCDRP} \subsection{Proof of Theorem~\ref{ThmExistence}} The proof of Theorem~\ref{ThmExistence} below relies on Theorem~\ref{ThmExist}, which was proven in~\cite{Clark3}. \begin{notation}[edge-labeled number arrays]\label{DefArray}\textup{For numbers $x_{e}\in{\mathbb R}$ labeled by $e\in E_k$, the notation $\{ x_e \}_{e\in E_{k}}$ denotes an element in ${\mathbb R}^{b^{2k} }$, which we will refer to as an \textit{array}.} \end{notation} \begin{theorem}[Theorem 3.12 of~\cite{Clark3}\footnote{The variables $\mathbf{W}_e^{(k)}$ are related to the variables $\mathbf{X}_e^{(k)}$ in the statement of~\cite[Theorem~3.12]{Clark3} through $\mathbf{W}_e^{(k)}=1+\mathbf{X}_e^{(k)}$. }]\label{ThmExist} For any $r\in {\mathbb R}$, there exists a unique law on sequences in $k\in \mathbb{N}_0$ of edge-labeled arrays of nonnegative random variables, $\big\{ \mathbf{W}_e^{(k)} \big\}_{e\in E_{k} }$, satisfying properties (I)-(IV) below for each $k\in \mathbb{N}_0$. \begin{enumerate}[(I)] \item The variables in the array $\big\{ \mathbf{W}_e^{(k)} \big\}_{e\in E_{k} }$ are i.i.d. \item The variables in the array $\big\{ \mathbf{W}_e^{(k)} \big\}_{e\in E_{k} }$ have mean one and variance $R(r-k)$. \item For each $m\in \{2, 3,\ldots\}$, the variables in the array $\big\{ \mathbf{W}_e^{(k)} \big\}_{e\in E_{k} }$ have finite $m^{th}$ centered moment equal to $R^{(m)}(r-k)$, where $R^{(m)}:{\mathbb R}\rightarrow {\mathbb R}_+$ is an increasing function with $ R^{(m)}(t)\,\propto\, (\frac{1}{-t})^{\lceil m/2 \rceil}$ as $t\rightarrow -\infty$ and such that $ R^{(m)}(t)$ grows without bound as $t\rightarrow \infty$. \item The variables in the array $\big\{ \mathbf{W}_e^{(k)} \big\}_{e\in E_{k} }$ are a.s.\ equal to $\displaystyle \mathbf{W}_e^{(k)} = \frac{1}{b}\sum_{i=1}^{b}\prod_{j=1}^b \mathbf{W}_{e\times (i,j)}^{(k+1)}$. \end{enumerate} \end{theorem} \begin{proof}[Proof of Theorem~\ref{ThmExistence}] We will construct the random measure $\mathbf{M}_r$ using the sequence in $k\in\mathbb{N}_0$ of arrays of random variables $\big\{\mathbf{W}_e^{(k)}\big\}_{e\in E_k}$ from Theorem~\ref{ThmExist}.
Recall from Lemma~\ref{LemAlgebra} that $\mathcal{B}_\Gamma$ is generated by the algebra $\mathcal{A}_{\Gamma}=\bigcup_{k=1}^{\infty} \mathcal{P}(\Gamma_k ) $, where, as before, subsets of $\Gamma_k$ are interpreted as cylinder subsets of $\Gamma$. For $A\in \mathcal{P}(\Gamma_N)$ define $\mathbf{M}_r(A)$ as \begin{align} \mathbf{M}_r(A)\,=\,&\sum_{\substack{ \mathbf{p}\in \Gamma_N \\ \mathbf{p}\in A } } \frac{1}{|\Gamma_N| }\prod_{\ell=1}^{b^N} \mathbf{W}_{\mathbf{p}(\ell)}^{(N)} \,. \label{ConI} \\ \intertext{To see that the above definition for $\mathbf{M}_r(A)$ is consistent, notice that for any $n\in \mathbb{N}$ with $n> N$ an inductive application of property (IV) of Theorem~\ref{ThmExist} along with the identity $|\Gamma_{k+1}|=b^{b^k}|\Gamma_k|$ yields } \,=\,&\sum_{ \substack{\mathbf{p}\in \Gamma_N \\ \mathbf{p}\in A } } \sum_{ \substack{\mathbf{q}\in \Gamma_n \\ \mathbf{q} \subset \mathbf{p} } } \frac{1}{|\Gamma_n| }\prod_{\ell=1}^{b^n} \mathbf{W}_{\mathbf{q}(\ell)}^{(n)}\,=\, \sum_{ \substack{\mathbf{q}\in \Gamma_n \\ \mathbf{q} \in A } } \frac{1}{|\Gamma_n| }\prod_{\ell=1}^{b^n} \mathbf{W}_{\mathbf{q}(\ell)}^{(n)} \,.\label{ConII} \end{align} Thus the weight assigned to $A\in \mathcal{A}_{\Gamma}$ by this definition of $\mathbf{M}_r$ does not depend on which subalgebra, $\mathcal{P}(\Gamma_k)$, we view the cylinder set $A\subset \Gamma$ as an element of. The consistency between~(\ref{ConI}) and~(\ref{ConII}) implies that the set function $\mathbf{M}_r:\mathcal{A}_{\Gamma}\rightarrow [0,\infty)$ is well-defined and finitely additive. Thus $\mathbf{M}_r$ extends to a measure on $(\Gamma,\mathcal{B}_\Gamma)$ by Lemma~\ref{LemAlgebra}. \vspace{.3cm} Next we prove properties (I)-(IV) in the statement of Theorem~\ref{ThmExistence} for this construction of $\mathbf{M}_r$.\vspace{.2cm} \\ \noindent (I) For any $A\in \mathcal{A}_{\Gamma}$ the expectation of $\mathbf{M}_{r}(A)$ reduces to $ \mu(A) $ since the random variables in the arrays $\{ \mathbf{W}_e^{(k)} \}_{e\in E_k}$ are independent with mean one. This extends to $\mathbb{E}[\mathbf{M}_r(A)]=\mu(A)$ for any $A\in \mathcal{B}_{\Gamma}$ since $\mathcal{A}_{\Gamma}$ generates $\mathcal{B}_{\Gamma}$. \vspace{.3cm} \noindent (II) For any $B\in \mathcal{P}( \Gamma_k\times \Gamma_k) $ viewed as a cylinder subset of $\Gamma\times \Gamma$, \begin{align} \mathbb{E}\big[\mathbf{M}_{r}\times \mathbf{M}_{r}(B) \big]\,=\,&\mathbb{E}\Bigg[\sum_{ \substack{ \mathbf{p}, \mathbf{q}\in \Gamma_k \\ \mathbf{p}\times \mathbf{q}\in B } } \frac{1}{|\Gamma_k| }\prod_{\ell=1}^{b^k} \mathbf{W}_{\mathbf{p}(\ell)}^{(k)} \frac{1}{|\Gamma_k| }\prod_{\ell=1}^{b^k} \mathbf{W}_{\mathbf{q}(\ell)}^{(k)} \Bigg]\,,\nonumber \\ \intertext{and parts (I) and (II) of Theorem~\ref{ThmExist} imply that } \,=\,&\sum_{\substack{ \mathbf{p}, \mathbf{q}\in \Gamma_k \\ \mathbf{p}\times \mathbf{q}\in B } } \frac{1}{|\Gamma_k|^2 }\big(1+R(r-k) \big)^{\xi_k(\mathbf{p}, \mathbf{q})}\, =\,\upsilon_r(B)\,, \end{align} where, recall, $\xi_k(\mathbf{p}, \mathbf{q})$ is the number of edges shared by the coarse-grained paths $\mathbf{p}, \mathbf{q}\in \Gamma_k$. Thus the formula $\mathbb{E}\big[\int_{\Gamma\times \Gamma}g(p,q) \mathbf{M}_{r}(dp)\mathbf{M}_{r}(dq)\big] = \int_{\Gamma\times \Gamma }g(p,q)\upsilon_r(dp,dq) $ holds for $g=\chi_B$ when $B\in \mathcal{A}_{\Gamma\times\Gamma} $. We can then generalize to nonnegative Borel measurable functions through the monotone convergence theorem.
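\vspace{.2cm} As a quick consistency check of the above formula (a special case of what was just shown, not an additional input), taking $B=\Gamma\times \Gamma$ and recalling from the proof of Lemma~\ref{LemCorrelate} that $\upsilon_r$ has total mass $1+R(r)$ yields \begin{align*} \mathbb{E}\Big[ \big(\mathbf{M}_{r}(\Gamma)\big)^2 \Big]\,=\,\upsilon_r\big(\Gamma\times \Gamma\big)\,=\,1\,+\,R(r)\,, \end{align*} so the total mass $\mathbf{M}_{r}(\Gamma)$ has mean one and variance $R(r)$, in agreement with property (II) of Theorem~\ref{ThmExist} applied with $k=0$.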
\vspace{.3cm} \noindent (III) By definition, $\mathbf{M}_{r}(\Gamma)$ is equal to the random variable $\mathbf{W}^{(k)}_{e}$ with $k=0$ and $e\in E_0$. Thus the $m^{th}$ centered moment of $\mathbf{M}_{r}(\Gamma)$ is equal to $R^{(m)}(r)$ by part (III) of Theorem~\ref{ThmExist}. \vspace{.3cm} \noindent (IV) Any $\mathbf{p}\in \Gamma_{k+1}$ can be written as a $b$-fold concatenation $ \mathbf{p}_1 \times\cdots \times\mathbf{p}_b \in \Gamma_{k}^{b}$, where $\mathbf{p}_{j}(l)$ is identified with $ \mathbf{p}\big((j-1)b^{k}+l \big) $ for $ j\in\{1,\ldots, b\}$ and $l\in \{1,\ldots, b^{k}\} $. Since $|\Gamma_{k+1}| = b|\Gamma_{k}|^b$, and since the generation-$(k+1)$ arrays in the construction of $\mathbf{M}_{r+1}$ are equal in law to the generation-$k$ arrays in the construction of $\mathbf{M}_{r}$ by the uniqueness statement of Theorem~\ref{ThmExist}, the random variable $\mathbf{M}_{r+1}(\mathbf{p})$ can be written as \begin{align*} \mathbf{M}_{r+1}(\mathbf{p})\,=\, \frac{1}{|\Gamma_{k+1}| }\prod_{\ell=1}^{b^{k+1}} \mathbf{W}_{\mathbf{p}(\ell)}^{(k+1)} \,=\,\frac{1}{b}\prod_{ j=1}^b \frac{1}{|\Gamma_{k}|}\prod_{l=1}^{b^{k}} \mathbf{W}_{\mathbf{p}_j(l)}^{(k+1)} \, \stackrel{\mathcal{L}}{=}\,\frac{1}{b}\prod_{j=1}^b \mathbf{M}_r^{(j)}(\mathbf{p}_j)\,, \end{align*} where $\mathbf{M}_r^{(j)}$ are i.i.d.\ copies of $\mathbf{M}_r$.\vspace{.3cm} The above establishes the existence of a family of random measure laws $(\mathbf{M}_r)_{r\in {\mathbb R}}$ satisfying (I)-(IV) of Theorem~\ref{ThmExistence}. To see uniqueness, let $(\mathbf{\hat{M}}_r)_{r\in {\mathbb R}}$ be such a family, and define the random measures $\big(\Gamma,\mathbf{\hat{M}}_{r-k}^e\big)$ for $k\in \mathbb{N}_0$ and $e\in E_k$ as in Corollary~\ref{CorProp4}. The family of random variables $\mathbf{\hat{W}}_e^{(k)}:=\mathbf{\hat{M}}_{r-k}^e(\Gamma)$, i.e., the total masses of the measures $\mathbf{\hat{M}}_{r-k}^e$, satisfies properties (I)-(IV) of Theorem~\ref{ThmExist}. This uniquely determines the joint law of the family, giving $\{\mathbf{\hat{W}}_e^{(k)}\}_{e\in E_k}^{k\in \mathbb{N}_0}\stackrel{\mathcal{L}}{=}\{\mathbf{W}_e^{(k)}\}_{e\in E_k}^{k\in \mathbb{N}_0}$ and implying that $\mathbf{\hat{M}}_r $ is equal in law to the random measure $\mathbf{M}_r $ constructed above. \end{proof} \subsection{Proof of Proposition~\ref{PropProperties}} \begin{proof}[Proof of Proposition~\ref{PropProperties}] Part (i): Let us write $\mathbf{M}_r=\widetilde{M}_r+A_r$, where $\widetilde{M}_r$ and $A_r$ are respectively the singular and absolutely continuous components in the Lebesgue decomposition of $\mathbf{M}_r$ with respect to $\mu$. We must show that $A_r=0$ holds a.s. By symmetry of the path space $\Gamma$, the expectations of $\widetilde{M}_r$ and $A_r$ are multiples of the uniform measure: $\mathbb{E}[\widetilde{M}_r]=\alpha_r \mu$ and $\mathbb{E}[A_r]= \beta_r \mu$ for $\alpha_r,\beta_r\in [0,1]$ with $\alpha_r+\beta_r=1$. By property (II) of Theorem~\ref{ThmExistence} and part (ii) of Lemma~\ref{LemCorrelate}, respectively, $\upsilon_r=\mathbb{E}\big[\mathbf{M}_r\times \mathbf{M}_r \big]$ and $\upsilon_r =\mu\times\mu\,+\,R(r)\rho_r $. Thus we have the equality \begin{align} \mu\times\mu\,+\,R(r)\rho_r \,=\,&\underbrace{\mathbb{E}\big[ \widetilde{M}_r \times \widetilde{M}_r \big]}_{\geq \alpha^2_r\mu\times \mu }\,+\,\underbrace{\mathbb{E}\big[ \widetilde{M}_r \times A_r \big]}_{\geq \alpha_r\beta_r\mu\times \mu }\,+\,\underbrace{\mathbb{E}\big[ A_r\times\widetilde{M}_r \big]}_{ \geq\alpha_r\beta_r\mu\times \mu }\,+\,\underbrace{\mathbb{E}\bigg[ \frac{dA_r}{d\mu}(p)\frac{dA_r}{d\mu}(q) \bigg]}_{ \leq\beta_r^2 } \mu \times \mu \,.\label{Compare} \end{align} The inequalities for the braced terms will be explained below.
The lower bound for $\mathbb{E}\big[ \widetilde{M}_r \times \widetilde{M}_r \big]$ by $\alpha^2_r\mu \times \mu$ holds since the restriction of $\mathbb{E}\big[ \widetilde{M}_r \times \widetilde{M}_r \big]$ to the set $\mathbf{S}_{\emptyset} \subset \Gamma\times \Gamma$, defined as in Corollary~\ref{CorrMuMu}, is \begin{align}\label{Y} \mathbb{E}\big[ \widetilde{M}_r \times \widetilde{M}_r \big]\Big|_{ \mathbf{S}_{\emptyset} }\,=\,\alpha^2_r\mu \times \mu \big|_{ \mathbf{S}_{\emptyset} } \,=\,\alpha^2_r\mu \times \mu \,. \end{align} The first equality holds since $\mathbf{S}_{\emptyset} $ is a union of disjoint cylinder sets (\ref{DefS}) on which $\mathbf{M}_r $, and thus also $\widetilde{M}_r $, assigns weights independently. The second equality above holds since $\mu\times \mu$ is supported on $\mathbf{S}_{\emptyset} $ by Corollary~\ref{CorrMuMu}. The lower bounds of $\mathbb{E}\big[ \widetilde{M}_r \times A_r \big]$ and $\mathbb{E}\big[ A_r \times \widetilde{M}_r \big]$ by $\alpha_r\beta_r\mu\times \mu$ follow by the same argument. Since $\rho_r$ is mutually singular to $\mu\times \mu$ and $1=\alpha_r+\beta_r$, the rightmost term in (\ref{Compare}) must be $\leq \beta_r^2\mu\times \mu$, and thus $\mathbb{E}\big[ \frac{dA_r}{d\mu}(p)\frac{dA_r}{d\mu}(q) \big]$ is $\mu\times \mu$-a.e.\ less than or equal to $\beta_r^2$. For a continuous function $g:\Gamma\rightarrow [0,\infty)$, we have \begin{align*} \textup{Var}\Big(\int_{\Gamma} g(p) A_r(dp) \Big)\,=\,&\mathbb{E}\bigg[ \Big( \int_{\Gamma} g(p) A_r(dp) \Big)^2 \bigg]\,-\,\bigg(\mathbb{E}\bigg[ \int_{\Gamma} g(p) A_r(dp) \bigg]\bigg)^2 \nonumber \\ \,=\,& \int_{\Gamma\times \Gamma} g(p)g(q) \mathbb{E}\bigg[ \frac{dA_r}{d\mu}(p) \frac{dA_r}{d\mu}(q) \bigg] \mu(dp)\mu(dq)\,-\,\beta_r^2 \Big(\int_{\Gamma} g(p) \mu(dp) \Big)^2 \end{align*} since $\mathbb{E}[A_r(dp)]=\beta_r\mu(dp)$. However, the $\mu\times \mu$-a.e.\ inequality $\mathbb{E}\big[ \frac{dA_r}{d\mu}(p) \frac{dA_r}{d\mu}(q) \big] \leq \beta_r^2 $ implies that the variance above is at most zero, and hence equal to zero. Therefore $\int_{\Gamma} g(p) A_r(dp) $ is a nonrandom constant. Since this holds for any such $g$, the measure $A_r $ must be deterministic and equal to $\beta_r\mu$. Since $A_r=\beta_r\mu$, equation~(\ref{Compare}) reduces to \begin{align}\label{CompareII} \mu\times\mu\,+\,R(r)\rho_r \,=\, \mathbb{E}\big[ \widetilde{M}_r\times \widetilde{M}_r \big]\,+\,2\alpha_r\beta_r \mu\times\mu \,+\,\beta_r^2\mu\times\mu \,, \end{align} and $\mathbf{M}_r= \widetilde{M}_{r} + \beta_r\mu $ for $\widetilde{M}_{r} $ with expectation $\alpha_r\mu$. The first and second moments of $\widetilde{M}_r(\Gamma)$ are respectively $\mathbb{E}\big[ \widetilde{M}_r(\Gamma) \big]=\alpha_r $ and $\mathbb{E}\big[ \big(\widetilde{M}_r(\Gamma)\big)^2 \big]=\alpha_r^2+R(r) $, where the form for the second moment holds by evaluating both sides of~(\ref{CompareII}) at the set $\Gamma\times \Gamma$. As a consequence of Remark~\ref{RemarkConcat} and property (IV) of Theorem~\ref{ThmExistence}, $\beta_r$ must satisfy the recurrence relation $\beta_{r+1}\,=\,\beta_{r}^b$ for any $r\in {\mathbb R}$. In particular, if $\beta_r > 0$ for some $r$, then $\beta_{r-n }= \beta_{r}^{1/b^n}$ converges to $1$ as $n\rightarrow \infty$ with an error, $\alpha_{r-n}=1-\beta_{r-n} $, of order $b^{-n}$.
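(Heuristically, the recurrence $\beta_{r+1}=\beta_{r}^b$ can be anticipated as follows: a product of measures having at least one factor that is singular with respect to $\mu$ is itself singular with respect to the corresponding product of copies of $\mu$, so, in the decomposition of $\mathbf{M}_{r+1}$ from property (IV) of Theorem~\ref{ThmExistence}, the $\mu$-continuous component of each $b$-fold product $\prod_{j=1}^{b}\mathbf{M}_r^{(i,j)}$ arises only from the term $\prod_{j=1}^{b}\beta_r\mu$ and thus carries mass $\beta_r^{b}$.)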
The third moment of $\mathbf{M}_{r-n}(\Gamma)$ has the lower bound $$ \mathbb{E}\big[ | \mathbf{M}_{r-n}(\Gamma) |^3\big] \,\geq \, \mathbb{E}\big[ | \widetilde{M}_{r-n}(\Gamma) |^3\big] \,\geq \,\frac{ \mathbb{E}\big[ | \widetilde{M}_{r-n}(\Gamma) |^2\big]^2 }{ \mathbb{E}\big[ \widetilde{M}_{r-n}(\Gamma) \big] } \, >\, \frac{ \big(R(r-n)\big)^2 }{ \alpha_{r-n} } \,. $$ The second inequality above holds by the Cauchy--Schwarz inequality. The bound implies that the third moment of $\mathbf{M}_{r-n}(\Gamma)$ grows exponentially as $n\rightarrow \infty$ since $\alpha_{r-n}=1-\beta_{r-n}$ vanishes exponentially and $R(r-n) \propto \frac{1}{n} $ by Lemma~\ref{LemVar}. This contradicts the fact that the moments of $ \mathbf{M}_{r}(\Gamma) $ converge to $1$ as $r\rightarrow -\infty$, which is a consequence of part (III) of Theorem~\ref{ThmExistence}. \vspace{.3cm} \noindent Part (ii): The a.s.\ absence of atoms for $\mathbf{M}_r$ follows directly from part (i) of Theorem~\ref{ThmPathMeasure}, which we will prove in Section~\ref{SecPathIntMxM}. \vspace{.3cm} \noindent Part (iii): We will first show that the total mass of $\mathbf{M}_{r}$ is a.s.\ positive. Define $x_r\in [0,1]$ as the probability that $\mathbf{M}_{r}(\Gamma)=0$. Notice that $$x_r\,\leq \,\mathbb{E}\big[ | \mathbf{M}_{r}(\Gamma) \,-\,1 |^2 \big]\,=\,\textup{Var}\big(\mathbf{M}_{r}(\Gamma)\big) \,= \,R(r)\,, $$ and thus $x_r$ must vanish as $r\rightarrow -\infty$. By the distributional recursive relation in property (IV) of Theorem~\ref{ThmExistence}, $(x_r)_{r\in {\mathbb R}}$ must satisfy the recursive relation $x_{r+1}=\psi(x_r)$ for the map $\psi:[0,1]\rightarrow [0,1]$ given by $\psi(x)= \big(1-(1-x)^b \big)^b$. Notice that $\psi$ is contractive towards $0$ for small $x>0$ because $$\psi(x)\, = \, x^b\Bigg(\sum_{k=0}^{b-1}(1-x)^k \Bigg)^b\,\leq \, (bx)^b \,. $$ Indeed, fix $r\in {\mathbb R}$ and $\epsilon\in \big(0, b^{-b/(b-1)}\big)$. Since $\psi$ is increasing and $\psi(x)\leq (bx)^b\leq x$ for $0\leq x\leq b^{-b/(b-1)}$, choosing $n$ large enough that $x_{r-n}<\epsilon$ yields $x_r=\psi^{\circ n}(x_{r-n})\leq x_{r-n}<\epsilon$. Since $\epsilon$ is arbitrarily small, it follows that $x_r=0$ for all $r\in {\mathbb R}$. Therefore, for any $r\in {\mathbb R}$ the measure $\mathbf{M}_{r}$ is a.s.\ nonzero. Next we leverage this result to show $\mathbf{M}_r(A)>0$ for any nonempty open set $A\subset \Gamma$. There exists an $N\in \mathbb{N}$ and a cylinder set $\mathbf{p}\in\Gamma_N$ such that $\mathbf{p}\subset A$. Then $\mathbf{M}_r(A)$ has the distributional lower bound $$ \mathbf{M}_r(A)\,\geq \, \mathbf{M}_r(\mathbf{p})\,\stackrel{d}{=}\,\frac{1}{|\Gamma_N|}\prod_{e\lhd \mathbf{p} }\mathbf{M}_{r-N}^{e}(\Gamma) \,, $$ where the product is over the edges, $e\in E_N$, along the path $\mathbf{p}$, and the i.i.d.\ random measures $\mathbf{M}_{r-N}^{e}$ are defined as in Corollary~\ref{CorProp4}. Thus $ \mathbf{M}_r(A)$ is a.s.\ nonzero by our result above for the total mass. \vspace{.3cm} \noindent Part (iv): Let $g\in L^2\big(\Gamma,\mu\big)$. Since $\mathbb{E}\big[ \mathbf{M}_{r}\big]=\mu$ and $\mathbb{E}\big[ \mathbf{M}_{r}\times \mathbf{M}_{r}\big]=\upsilon_r=\mu\times \mu+R(r)\rho_r$, \begin{align} \mathbb{E}\Bigg[ \bigg( \int_{\Gamma} g(p) \mathbf{M}_{r}(dp) \,-\,\int_{\Gamma} g(p) \mu(dp) \bigg)^2 \Bigg]\,=\,& R(r)\int_{\Gamma\times \Gamma} g(p)g(q)\rho_r(dp,dq) \nonumber \\ \,\leq \,& R(r)\int_{\Gamma\times \Gamma}\Big(\frac{1}{2} \big|g(p)\big|^2\,+\,\frac{1}{2} \big|g(q)\big|^2 \Big)\rho_r(dp,dq)\,.\nonumber \\ \intertext{The marginals of $\rho_r$ are both equal to $\mu$, so we have } \,=\,& R(r)\int_{\Gamma}\big|g(p)\big|^2 \mu(dp) \,.\label{LTwo} \end{align} The result then follows since $R(r)=\mathit{O}(1/|r|)$ as $r\rightarrow -\infty$.
\end{proof} \subsection{Proof of Theorem~\ref{ThmUniversality}} The proof of Theorem~\ref{ThmUniversality} relies on Theorem~\ref{ThmUnique}, which was proven in~\cite{Clark3}. \begin{definition}\label{DefInduct} For some fixed $r\in {\mathbb R}$, let $\beta_{n,r}>0$ be defined as in~(\ref{BetaForm}). Let the random variables $\{ \omega_h \}_{h\in E_n}$ be as in~(\ref{DefEM}). We inductively define the i.i.d.\ arrays of random variables $\big\{ W_{e}^{(k,n)} \big\}_{e\in E_k} $ for $k\in \{0,1,\ldots, n\}$ as follows: \begin{enumerate}[(I)] \item For $h\in E_n$ define the random variable $ W_{h}^{(n,n)}:= \frac{ e^{\beta_{n,r}\omega_h } }{\mathbb{E}[e^{\beta_{n,r}\omega_h} ] } $. \item For $k<n$ define the random variables in the array $\big\{ W_{e}^{(k,n)} \big\}_{e\in E_k}$ in terms of $\big\{ W_{e}^{(k+1,n)} \big\}_{e\in E_{k+1}}$ as $ W_{e}^{(k,n)} = \frac{1}{b}\sum_{i=1}^b\prod_{j=1}^b W_{e\times (i,j)}^{(k+1,n)} $, where $e\times (i,j)\in E_{k+1}$ is defined as in Notation~\ref{NotationSub}. \end{enumerate} \end{definition} \begin{remark}\label{RemarkMeasure} Let the random measure $\big(\Gamma_n, \mathbf{M}_{r,n}^{\omega}\big)$ be defined as in Theorem~\ref{ThmExistence}. For any $k\in \{0,1,\ldots, n\}$ and $\mathbf{q}\in \Gamma_k$, interpreted as a subset of $\Gamma_n$, the variables $\big\{ W_{e}^{(k,n)} \big\}_{e\in E_k}$ relate to the measures $\mathbf{M}_{r,n}^{\omega}$ through \begin{align*} \mathbf{M}_{r,n}^{\omega}(\mathbf{q})\,=\,\frac{1}{|\Gamma_n|}\sum_{\substack{\mathbf{p}\in \Gamma_n \\ \mathbf{p}\in \mathbf{q} } } \prod_{h\triangleleft \mathbf{p} }W_{h}^{(n,n)} \,=\,\frac{1}{|\Gamma_k|}\prod_{e\triangleleft \mathbf{q}}W_{e}^{(k,n)}\,, \end{align*} where the first equality follows immediately from the definition of $\mathbf{M}_{r,n}^{\omega}$, and the second equality follows from iterative use of (II). \end{remark} \begin{theorem}[Theorem 3.14 of~\cite{Clark3}\footnote{The random variables $ W_{e}^{(k,n)} $ and $\mathbf{W}_e^{(k)}$ are related to the random variables $ X_{e}^{(k,n)} $ and $\mathbf{X}_e^{(k)}$ in~\cite[Section~3]{Clark3} through $ W_{e}^{(k,n)}=1+X_{e}^{(k,n)} $ and $\mathbf{W}_e^{(k)}=1+\mathbf{X}_e^{(k)}$. }]\label{ThmUnique} Fix $r\in {\mathbb R}$, and for $k,n\in \mathbb{N}_0$ let the arrays $\big\{ W_e^{(k,n)} \big\}_{e\in E_{k} } $ and $\big\{ \mathbf{W}_e^{(k)} \big\}_{e\in E_{k} } $ be defined as in Definition~\ref{DefInduct} and Theorem~\ref{ThmExist}, respectively. \begin{enumerate}[(i)] \item The array $\big\{ W_e^{(k,n)} \big\}_{e\in E_{k} } $, viewed as an ${\mathbb R}^{b^{2k }}$-valued random variable, converges in law as $n\rightarrow \infty$ to $\big\{ \mathbf{W}_e^{(k)} \big\}_{e\in E_{k} } $ for each $k\in \mathbb{N}_0$. \item The centered moment $\mathbb{E}\big[ \big(W_e^{(k,n)}-1\big)^m \big]$ converges to $R^{(m)}(r-k)=\mathbb{E}\big[ \big(\mathbf{W}_e^{(k)}-1\big)^m \big] $ as $n\rightarrow \infty$ for each $k\in \mathbb{N}_0$ and $m\in\{2,3,\ldots\}$. \end{enumerate} \end{theorem} \begin{proof}[Proof of Theorem~\ref{ThmUniversality}] Let $g:\Gamma\rightarrow {\mathbb R} $ be continuous and the algebra $\mathcal{A}_{\Gamma}$ be defined as in Lemma~\ref{LemAlgebra}. Since $g$ is uniformly continuous on the compact metric space $(\Gamma, d_\Gamma)$, for any $\epsilon>0$ there exists an $\mathcal{A}_{\Gamma}$-measurable simple function $\psi =\sum_{j=1}^J\alpha_j \chi_{A_j } $ such that $|g(p)-\psi(p)|<\epsilon $ for all $p\in \Gamma$.
Then \begin{align} \mathbb{E}\bigg[\Big|\int_{\Gamma}g(p)\mathbf{M}_{r,n}^{\omega}(dp)\,-\,\int_{\Gamma}\psi(p)\mathbf{M}_{r,n}^{\omega}(dp) \Big|^2 \bigg]\,\leq \,\epsilon^2 \mathbb{E}\big[\big( \mathbf{M}_{r,n}^{\omega}(\Gamma) \big)^2\big]\,\stackrel{n\rightarrow \infty}{\longrightarrow} \,\epsilon^2\big(1+R(r)\big)\,, \end{align} where the convergence holds by (ii) of Theorem~\ref{ThmUnique}. The same argument applies to bound the $L^2$ distance between $\int_{\Gamma}g(p)\mathbf{M}_{r}(dp)$ and $\int_{\Gamma}\psi(p)\mathbf{M}_{r}(dp)$ except that the limit is replaced by an equality. Pick $N\in \mathbb{N}$ large enough so that $A_j\in \mathcal{P}(\Gamma_N)$ for all $j$ and let $n>N$. Since $\psi$ is a simple function, we have the explicit integral \begin{align} \int_{\Gamma}\psi(p)\mathbf{M}_{r,n}^{\omega}(dp)\,=\,& \sum_{j=1}^{J}\alpha_j\mathbf{M}_{r,n}^{\omega}( A_j ) \\ \,=\,& \sum_{j=1}^{J}\alpha_j\sum_{\substack{ \mathbf{p}\in \Gamma_n \\ \mathbf{p}\subset A_j } } \frac{1}{|\Gamma_n|} \prod_{h\triangleleft \mathbf{p} }W_{ h }^{(n,n)}\,. \nonumber \\ \intertext{Since every $A_j$ is a subset of $\Gamma_N$, we can rewrite the above slightly differently as } \,=\,&\sum_{j=1}^{J}\alpha_j\sum_{\substack{ \mathbf{q}\in \Gamma_N \\ \mathbf{q}\in A_j } } \sum_{\substack{ \mathbf{p}\in \Gamma_n \\ \mathbf{p}\in \mathbf{q} } } \frac{1}{|\Gamma_n|} \prod_{h\triangleleft \mathbf{p} }W_{ h }^{(n,n)}\,, \nonumber \\ \intertext{and by Remark~\ref{RemarkMeasure} the above is equal to } \,=\,&\sum_{j=1}^{J}\alpha_j\sum_{\substack{ \mathbf{q}\in \Gamma_N \\ \mathbf{q}\in A_j } } \frac{1}{|\Gamma_N|}\prod_{e\triangleleft \mathbf{q} } W^{(N,n)}_{e} \,.\label{WW} \end{align} By part (i) of Theorem~\ref{ThmUnique}, the array of random variables $\big\{W^{(N,n)}_{e}\big\}_{e\in E_N } $ converges in law as $n\rightarrow \infty$ to the array $\big\{\mathbf{W}_e^{(N)}\big\}_{e\in E_N }$. Therefore~(\ref{WW}) converges in law as $n\rightarrow \infty$ to \begin{align} \sum_{j=1}^{J}\alpha_j\sum_{\substack{ \mathbf{q}\in \Gamma_N \\ \mathbf{q}\in A_j } } \frac{1}{|\Gamma_N|}\prod_{e\triangleleft \mathbf{q}} \mathbf{W}^{(N)}_e \,=\,\sum_{j=1}^{J}\alpha_j\mathbf{M}_r(A_j )\,=\,\int_{\Gamma}\psi(p)\mathbf{M}_r(dp )\,. \end{align} Since $\epsilon>0$ is arbitrary, $\int_{\Gamma}g(p)\mathbf{M}_{r,n}^{\omega}(dp)$ converges in law to $\int_{\Gamma}g(p)\mathbf{M}_{r}(dp)$. \end{proof} \section{Path intersections under the correlation measure }\label{SecPathIntCorrMeas} In this section we prove Lemma~\ref{LemIntSet}. The main step is to show that for $\rho_{r}$-a.e.\ pair $(p,q)$ the set of intersection times $I_{p,q}=\{t\in[0,1]\,|\, p(t)=q(t)\}$ has log-Hausdorff exponent $\geq 1$. This is achieved through an energy bound in Proposition~\ref{PropFrostman}. \subsection{Path intersections as a generation-inhomogeneous Markovian population model }\label{SecPopModel} Recall from Corollary~\ref{CorrMuMu} that the product $\mu\times \mu$ assigns full measure to the set of pairs $(p,q)\in \Gamma\times \Gamma$ such that the number, $\xi_n(p,q)\in \{0,1,\ldots ,b^n\}$, of edges shared by the coarse-grained paths $[p]_n ,[q]_n\in \Gamma_n$ becomes zero for all large enough $n$. This is equivalent to the a.s.\ extinction of a population beginning with a single member where each member of the generation $n\in \mathbb{N}_0$ population independently has either $b$ children with probability $\frac{1}{b}$ or no children at all.
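In classical terms, this is a critical Galton--Watson branching process: the offspring generating function is \begin{align*} f(s)\,=\,\frac{b-1}{b}\,+\,\frac{1}{b}\,s^{b}\,, \end{align*} so the mean offspring number is $f'(1)=1$, and, since the offspring distribution is nondegenerate, criticality implies that extinction occurs with probability one, in agreement with Corollary~\ref{CorrMuMu}.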
Under the normalized measure $\widehat{\upsilon}_r=\upsilon_r/(1+R(r)) $, the intersection number $\xi_n(p,q)$ has a similar, but generation-inhomogeneous population interpretation wherein a member of generation $n$ has $b$ children with probability $\frac{1}{b}\big(1+R(r-n-1)\big)^b/\big(1+R(r-n)\big)$ or is childless.\footnote{The consistency of the interpretation uses the identity $R(t+1)=\frac{1}{b}[(1+R(t))^{b}-1]$ for any $t\in {\mathbb R}$.} The following list summarizes our previous definitions/results related to the probability measures $\mu\times \mu$, $\widehat{\upsilon}_r$, and $\rho_r$ on the space $\Gamma\times \Gamma$ in the population language. \begin{itemize} \item The events $\mathbf{S}_{\emptyset}$ and $\mathbf{S}_{\emptyset}^c $ correspond to eventual extinction and perpetual survival, respectively. \item Extinction is certain under $\mu\times \mu$ by Corollary~\ref{CorrMuMu}. \item Perpetual survival occurs with probability $\frac{R(r)}{1+R(r)}$ under $\widehat{\upsilon}_r$ by (ii) of Lemma~\ref{LemCorrelate}. \item $\rho_r$ is the conditioning of $\widehat{\upsilon}_r$ on the event of survival by Remark~\ref{RemarkCond}. \item The population grows quadratically with $n$ in the event of survival by (ii) of Proposition~\ref{PropLemCorrelate}. \end{itemize} Lemma~\ref{LemMargPop} below characterizes the asymptotic growth of the number, $\widetilde{\xi}_n(p,q)$, of members of the generation-$n$ population having progeny that never goes extinct, conditioned on the indefinite survival of the total population. First we require a few more definitions. \begin{definition}\label{DefSurvProg} For $p,q\in \Gamma$ and $n\in \mathbb{N}$, let $p\equiv \big([p]_n;\, p_1^{(n)},\ldots, p_{b^n}^{(n)} \big)$ and $q\equiv \big([q]_n;\, q_1^{(n)},\ldots, q_{b^n}^{(n)} \big)$ be the corresponding decompositions in $\Gamma_n\times \bigtimes_{\ell=1}^{b^n}\Gamma$ from Remark~\ref{RemarkCylinder}. Let $\mathbf{I}_n(p,q)$ be the set of $\ell\in \{1,\ldots ,b^n\}$ such that $[p]_n(\ell)=[q]_n(\ell)$ and $\big(p_\ell^{(n)},q_\ell^{(n)}\big)\in \mathbf{S}_{\emptyset}^{c}$, where $\mathbf{S}_{\emptyset}$ is defined as in Corollary~\ref{CorrMuMu}.\footnote{As before, $\mathbf{S}_{\emptyset}^{c}$ denotes the complement of $\mathbf{S}_{\emptyset}$ in $\Gamma\times \Gamma$.} \begin{enumerate}[(I)] \item Define $\widetilde{\xi}_{n}\equiv\widetilde{\xi}_{n}(p,q) $ as the number of elements in $ \mathbf{I}_n(p,q)$. \item Define $\mathpzc{F}_n$ as the $\sigma$-algebra on $\Gamma\times \Gamma$ generated by the function $\mathbf{I}_n:\Gamma\times \Gamma\rightarrow \mathcal{P}\big(\{1,\ldots,b^n\} \big)$ and the $\sigma$-algebra $\mathcal{F}_n$ defined in Definition~\ref{DefSigmaAlgebra}. \end{enumerate} \end{definition} \begin{remark} In different terms, $\mathbf{I}_n(p,q)$ is the set of $\ell\in \{1,\ldots ,b^n\}$ such that $[p]_n(\ell)=[q]_n(\ell)$ and $$ \text{ }\hspace{.5cm} [p]_n(\ell)\cap \bigcup_{l=1}^{b^\mathbf{n}} [p]_\mathbf{n}(l) \cap \bigcup_{l=1}^{b^\mathbf{n}} [q]_\mathbf{n}(l) \,\neq \, \emptyset \hspace{.7cm} \text{for all $ \mathbf{n}\in \mathbb{N}$ with $\mathbf{n}>n$} \,, $$ where the edges appearing in the expression above are identified with their canonically corresponding cylinder subsets of $E$.
\end{remark} \begin{remark}\label{RemarkAncestors}Given paths $p,q\in \Gamma$, the variable $\widetilde{\xi}_{n} \in \mathbb{N}$ counts the number of shrunken embedded subcopies of the DHL corresponding to edges $e\in E_n$ on which the paths $p$ and $q$ have nontrivial intersections (indefinitely surviving progeny). The $\sigma$-algebra $\mathpzc{F}_n$ corresponds to knowing the coarse-grained paths $[p]_n, [q]_n\in \Gamma_n$ and also which edges $e\in E_n$ shared by $[p]_n$, $[q]_n$ have nontrivial intersections between $p$ and $q$ within them. \end{remark} \begin{lemma}[critical population model]\label{LemMargPop} For $(p,q)\in \Gamma\times \Gamma$ let $\widetilde{\xi}_{n}\equiv \widetilde{\xi}_{n}(p,q)$ be defined as in Definition~\ref{DefSurvProg}, and recall that $\xi_n\equiv\xi_n(p,q)$ is the number of edges shared by the coarse-grained paths $[p]_n, [q]_n\in \Gamma_n$. \begin{enumerate}[(i)] \item Under the measure $\rho_r$, $(\widetilde{\xi}_{n})_{n\in \mathbb{N}}$ is a Markov chain starting with $\widetilde{\xi}_{0}=1$ and having transition law $ \widetilde{\xi}_{n+1}\,\stackrel{d}{=}\,\sum_{k=1}^{\widetilde{\xi}_{n}}\mathbf{n}_n^{(k)} $, where the random variables $\mathbf{n}^{(k)}_n$ are independent and take values in $\{1,\ldots , b\}$ with probability \begin{align}\label{FamilyLine} \mathbbmss{P}\big[\mathbf{n}_n^{(k)}=\ell\big]\,:=\,\frac{1}{b}{b \choose \ell }\frac{ \big( R(r-n-1) \big)^{\ell} }{R(r-n) }\,. \end{align} \item Under the measure $\rho_r$, the sequence $\mathbf{\widetilde{m}}_n:= \frac{ R'(r-n) }{R(r-n) } \widetilde{\xi}_{n} $ is a martingale with respect to the filtration $\mathpzc{F}_n$. Moreover, $(\mathbf{\widetilde{m}}_n)_{n\in \mathbb{N}}$ converges $\rho_r$-a.s.\ with large $n$ to $T(p,q)>0$. Since $\frac{ R'(r-n) }{R(r-n) }=\frac{1}{n}+\mathit{o}(\frac{1}{n})$ for $n\gg 1$, this, in particular, implies that $\widetilde{\xi}_{n}$ $\rho_r$-a.s.\ grows linearly. \item Under the measure $\widehat{\upsilon}_r$, the conditional expectation of $\mathbf{\widetilde{m}}_n$ with respect to the $\sigma$-algebra $\mathcal{F}_n$ is equal to $\mathbf{m}_n:= \frac{ R'(r-n) }{1+R(r-n) } \xi_{n} $, and $\mathbf{m}_n$ converges $\widehat{\upsilon}_r$-a.s.\ with large $n$ to $T(p,q)$. Since $\frac{ R'(r-n) }{1+R(r-n) }=\frac{\kappa^2}{n^2}+\mathit{o}(\frac{1}{n^2})$ for $n\gg 1$, this, in particular, implies that $\xi_{n}$ $\rho_r$-a.s.\ grows quadratically. \end{enumerate} \end{lemma} \begin{remark}\label{RemarkMargPop} It is interesting to compare the linear growth of the number, $\widetilde{\xi}_n$, of generation-$n$ members that have indefinitely surviving progeny with the quadratic growth of the total population, $\xi_n$. Thus, in this critical population model, where there is neither inevitable extinction nor the possibility of asymptotically exponential growth, a vanishing portion of the population has unending family lines. A member of the generation-$n$ population with surviving progeny will have $b$ children, but when $n\gg 1$ typically only one of them will carry the family line.\footnote{This follows from~(\ref{FamilyLine}) and $R(t)\approx \frac{\kappa^2}{-t}$ for $-t\gg 1$.} \end{remark} \begin{proof}[Proof of Lemma~\ref{LemMargPop}] The Markovian interpretation in (i) is made possible through the identity $$R(t+1)=\frac{1}{b}\big[(1+R(t))^b -1 \big]=\sum_{\ell=1}^{b}\frac{1}{b}{b \choose \ell } \big( R(t) \big)^{\ell} $$ with $t=r-n-1$, which shows, in particular, that the probabilities in~(\ref{FamilyLine}) sum to one.
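Before proceeding to parts (ii) and (iii), it is worth recording a heuristic consequence of~(\ref{FamilyLine}): combining the computation~(\ref{Expect}) below with the asymptotics $R(t)\sim \frac{\kappa^2}{-t}$ and $R'(t)\sim \frac{\kappa^2}{t^2}$ for $-t\gg 1$ from Remark~\ref{RemarkDerR} shows that the mean offspring number satisfies \begin{align*} \mathbbmss{E}\big[ \mathbf{n}_n^{(k)} \big]\,=\,\frac{R'(r-n)}{R(r-n)}\cdot\frac{R(r-n-1)}{R'(r-n-1)}\,=\,1\,+\,\frac{1}{n}\,+\,\mathit{o}\Big(\frac{1}{n}\Big) \hspace{.5cm}\text{for}\hspace{.5cm} n\gg 1\,. \end{align*} The slight supercriticality at rate $\frac{1}{n}$ is what drives the linear growth of $\widetilde{\xi}_n$ asserted in part (ii).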
\vspace{.2cm} \noindent Parts (ii) and (iii): The martingale property for $(\mathbf{\widetilde{m}}_n)_{n\in \mathbb{N}} $ holds since $$\mathbbmss{E}_{\rho_r}\big[ \mathbf{\widetilde{m}}_{n+1} \,\big|\,\mathpzc{F}_n \big]\,=\, \frac{ R'(r-n-1) }{R(r-n-1) }\mathbbmss{E}\big[ \mathbf{n}_n^{(k)} \big] \widetilde{\xi}_n \,=\,\frac{R'(r-n) }{R(r-n) }\widetilde{\xi}_n\,=\,\mathbf{\widetilde{m}}_{n}\,, $$ where the second equality holds by the calculation below. Using part (i) \begin{align}\label{Expect} \frac{ R'(r-n-1) }{R(r-n-1) }\mathbbmss{E}\big[ \mathbf{n}_n^{(k)} \big]\,=\,&\frac{ R'(r-n-1) }{R(r-n-1) }\sum_{\ell=1}^b \frac{\ell}{b}{b \choose \ell }\frac{ \big( R(r-n-1) \big)^{\ell} }{R(r-n) }\,.\nonumber \\ \intertext{The recursive identity $R(t+1)=\frac{1}{b}\big[\big(1+R(t)\big)^b -1 \big]$ implies the derivative formula $R'(t+1)=\big(1+R(t)\big)^{b-1}R'(t)$, so the above is equal to} \,=\,&\frac{\frac{1}{b}\frac{d}{dr}\big[\big(1+R(r-n-1)\big)^b -1 \big]}{ R(r-n) }\,=\,\frac{R'(r-n) }{R(r-n) }\,. \end{align} Next we shift our focus to the conditional expectation connection between $ \mathbf{m}_n $ and $\mathbf{\widetilde{m}}_n$. Note that we can rewrite $\mathbf{\widetilde{m}}_n$ in the form \begin{align*} \mathbf{\widetilde{m}}_n\,=\,\frac{R'(r-n) }{R(r-n) } \sum_{\substack{ 1\leq \ell\leq b^n \\ [p]_n(\ell)=[q]_n(\ell)}} \chi\big( \ell \in \mathbf{I}_n(p,q) \big) \,. \end{align*} However, under the measure $\widehat{\upsilon}_r$, the probability that $\ell\in \mathbf{I}_n(p,q)$ when conditioned on the event $[p]_n(\ell)=[q]_n(\ell)$ is $\frac{R(r-n)}{1+R(r-n) }$. Thus $ \mathbf{m}_n$ is the conditional expectation of $\mathbf{\widetilde{m}}_n $ given $\mathcal{F}_n$: \begin{align*} \mathbbmss{E}_{\widehat{\upsilon}_r}\big[ \mathbf{\widetilde{m}}_n \,\big|\, \mathcal{F}_n \big]\,=\,\frac{R'(r-n)}{1+R(r-n) } \xi_n \,=\, \mathbf{m}_n\,. \end{align*} Since $(\mathbf{\widetilde{m}}_{n})_{n\in \mathbb{N}}$ is a nonnegative martingale with finite expectation, $\frac{R'(r)}{R(r)}$, the martingale convergence theorem implies that $(\mathbf{\widetilde{m}}_{n})_{n\in \mathbb{N}}$ converges $\rho_r$-a.s.\ to a limit $\mathbf{\widetilde{m}}_{\infty}$ with finite expectation. This a.s.\ convergence extends trivially to the measure $\widehat{\upsilon}_r=\frac{1}{1+R(r)}(\mu\times \mu+ R(r)\rho_r )$ since $\mathbf{\widetilde{m}}_{n}=0$ for large enough $n$ on the support of $\mu\times \mu$ as a consequence of Corollary~\ref{CorrMuMu}. By part (ii) of Proposition~\ref{PropMartingales} the sequence $( \mathbf{m}_n)_{n\in \mathbb{N}}$ converges $\widehat{\upsilon}_r$-a.e.\ to $T(p,q)$. The calculation below shows that $\mathbbmss{E}_{\widehat{\upsilon}_r}\big[ (\mathbf{\widetilde{m}}_n -\mathbf{m}_n )^2\big]$ vanishes with large $n$, and thus $T(p,q)=\mathbf{\widetilde{m}}_{\infty}$ for $\widehat{\upsilon}_r$-a.e.\ pair $(p,q)\in \Gamma\times \Gamma$.
We can write the $L^2$ distance between $\mathbf{\widetilde{m}}_n $ and $ \mathbf{m}_n$ as \begin{align*} \mathbbmss{E}_{\widehat{\upsilon}_r}\big[ (\mathbf{\widetilde{m}}_n -\mathbf{m}_n )^2\big]\,=\,\mathbbmss{E}_{\widehat{\upsilon}_r}\Big[\mathbbmss{E}_{\widehat{\upsilon}_r}\big[ (\mathbf{\widetilde{m}}_n -\mathbf{m}_n )^2\,\big|\, \mathcal{F}_n \big] \Big]\,=\,&\bigg(\frac{R'(r-n) }{R(r-n) } \bigg)^2\mathbbmss{E}_{\widehat{\upsilon}_r}[\xi_n] \textup{Var}\big( \mathbf{n}_n \big) \\ \,=\,& \big(1+R(r-n)\big)\frac{ R'(r) }{1+R(r) }\underbrace{\frac{R'(r-n) }{\big(R(r-n)\big)^2 }}_{ \approx\,\kappa^{-2} \text{ for }n\gg 1 }\textup{Var}( \mathbf{n}_n)\,, \end{align*} where the third equality follows as a consequence of $\mathbf{m}_{n}=\frac{ R'(r-n) }{1+R(r-n) }\xi_n $ being a martingale with expectation $\frac{ R'(r) }{1+R(r) }$ by part (ii) of Proposition~\ref{PropMartingales}. Since $R(t)\sim \frac{\kappa^2}{-t}$ and $R'(t)\sim \frac{\kappa^2}{t^2}$ with $-t\gg 1$ by Remark~\ref{RemarkDerR}, $\textup{Var}(\mathbf{n}_n)$ is of order $\frac{1}{n}$ with large $n$, and $\mathbbmss{E}_{\widehat{\upsilon}_r}\big[ (\mathbf{\widetilde{m}}_n -\mathbf{m}_n )^2\big]$ also vanishes with order $\frac{1}{n}$. \end{proof} \subsection{Construction of the measure on the intersection-times set} \begin{proof}[Proof of Proposition~\ref{PropIntMeasure}] We will break the proof into parts (a)-(e), where (a)-(c) construct $\tau_{p,q}$ and the remaining parts concern the properties of the measures $([0,1],\tau_{p,q})$. It suffices to work with the measure $\rho_{r}=\frac{1}{R(r)}(\upsilon_r-\mu\times \mu )$ rather than $\upsilon_r$ since $\mu\times \mu $ assigns full measure to the set of pairs $(p,q)\in \Gamma\times \Gamma$ such that $T(p,q)=0$, in which case we define $\tau_{p,q}:=0$. \vspace{.2cm} \noindent \textbf{(a) Decomposing the Borel $\sigma$-algebra on $[0,1]$:} Define $\mathcal{V} $ as the set of $x\in [0,1]$ such that $ x=\frac{k}{b^n}$ for some $k,n\in \mathbb{N}_0$ and $\mathcal{E}:=[0,1]-\mathcal{V} $. Points in $\mathcal{V}$ correspond to the dense set of times when directed paths cross through vertex points, i.e., $p(t)\in V$ iff $t\in \mathcal{V}$ for any $p\in \Gamma$. An arbitrary Borel set $A\in\mathcal{B}_{[0,1]}$ can be decomposed into a disjoint union $A=A_\mathcal{V}\cup A_\mathcal{E}$ for $A_\mathcal{V}\subset\mathcal{V}$ and $A_\mathcal{E} \subset \mathcal{E}$. We denote the restriction of the Borel $\sigma$-algebra $\mathcal{B}_{[0,1]}$ to $\mathcal{E}$ by $\mathcal{B}_{\mathcal{E}}$. Define the algebra $\mathcal{A}_\mathcal{E}:=\bigcup_{n=0}^\infty \mathcal{A}_\mathcal{E}^{(n)}$, where, for $n\in \mathbb{N}_0$, $\mathcal{A}_\mathcal{E}^{(n)}$ is the collection of all finite unions of sets of the form $\big[\frac{\ell-1}{b^n},\frac{\ell}{b^n}\big]\cap \mathcal{E}$ for $\ell \in \{1,\ldots, b^n\}$. Note that $\mathcal{A}_\mathcal{E}$ is a countable base for the topology of $[0,1]$ restricted to $\mathcal{E}$, and, in particular, $\mathcal{A}_{\mathcal{E}}$ generates $\mathcal{B}_{\mathcal{E}}$. \vspace{.2cm} \noindent \textbf{(b) Sequence of measures:} For $p,q\in \Gamma$ and $n\in \mathbb{N}$, define $\mathbf{I}_{n}(p,q)\subset \{1,\ldots,b^n\}$ as in Definition~\ref{DefSurvProg}, and let $S_{p,q}^{(n)}$ be the set of intervals $\big[\frac{\ell-1}{b^n},\frac{\ell}{b^n}\big]$ such that $\ell \in \mathbf{I}_{n}(p,q)$. In the language of Section~\ref{SecPopModel}, $S_{p,q}^{(n)}$ corresponds to the set of generation-$n$ members of the population having indefinitely surviving progeny.
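Note that $S_{p,q}^{(n)}$ consists of exactly $\widetilde{\xi}_{n}(p,q)$ intervals, so by part (ii) of Lemma~\ref{LemMargPop} the number of intervals in $S_{p,q}^{(n)}$ grows linearly in $n$ for $\rho_r$-a.e.\ pair $(p,q)$.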
We define the measure $\tau_{p,q}^{(n)}$ on $[0,1]$ to have density \begin{align}\label{TauN} \frac{d\tau_{p,q}^{(n)}}{dx}\,=\,\frac{R'(r-n)}{R(r-n)}b^{n}\sum_{e\in S_{p,q}^{(n)} } \chi_{e} \,. \end{align} In the analysis below, we will show that for $\rho_r$-a.e.\ pair $(p,q)$ the sequence $\big(\tau_{p,q}^{(n)}(A)\big)_{n\in \mathbb{N}}$ converges to a limit $\widehat{\tau}_{p,q}(A)$ for any $A\in \mathcal{A}_\mathcal{E}$. The limit defines a finitely additive set function $\widehat{\tau}_{p,q}:\mathcal{A}_\mathcal{E}\rightarrow [0,\infty)$. Of course, it would not make a difference if we worked with the closures $\overline{A}$ for $A\in\mathcal{A}_\mathcal{E}$ since $\mathcal{V}=[0,1]-\mathcal{E}$ is countable, and thus $\tau_{p,q}^{(n)}(\overline{A}-A)=0$. Let $A\in \mathcal{A}_{\mathcal{E}}$ be arbitrary and pick $N\in \mathbb{N}$ large enough so that $A\in \mathcal{A}_{\mathcal{E}}^{(N)}$. The computation below is similar to~(\ref{Expect}) and shows that the sequence of random variables $\big(\tau_{p,q}^{(n)}(A)\big)_{n\geq N} $ forms a martingale on the probability space $(\Gamma\times \Gamma, \rho_r )$ w.r.t.\ the filtration $(\mathpzc{F}_n)_{n\geq N}$. \begin{align} \mathbbmss{E}_{\rho_r}\Big[ \tau_{p,q}^{(n+1)}(A) \,\big|\, \mathpzc{F}_n \Big]\,=\,& \mathbbmss{E}_{\rho_r}\Bigg[ \frac{R'(r-n-1) }{R(r-n-1) } \sum_{f\in S_{p,q}^{(n+1)}} \chi\big( f\subset A \big)\,\bigg|\, \mathpzc{F}_n\Bigg]\nonumber \\ \,=\,& \frac{ R'(r-n-1) }{R(r-n-1) }\sum_{ e\in S_{p,q}^{(n)} }\chi\big( e\subset A \big)\mathbbmss{E}_{\rho_r}\Bigg[\sum_{f\in S_{p,q}^{(n+1)} } \chi(f\subset e) \,\bigg|\, e\in S_{p,q}^{(n)} \Bigg] \nonumber \\ \intertext{Since each $e\in S_{p,q}^{(n)}$ has $j\in \{1,\ldots, b\}$ children in $ S_{p,q}^{(n+1)}$ with probability $ \frac{1}{b} {b \choose j } \big(R(r-n-1) \big)^{j} / R(r-n) $, the above is equal to } \,=\,&\frac{ R'(r-n-1) }{R(r-n-1) }\sum_{e\in S_{p,q}^{(n)} }\chi\big( e\subset A \big)\sum_{j=1}^{b}j\frac{1}{b} {b \choose j } \frac{\big(R(r-n-1) \big)^{j} }{ R(r-n)}\,,\nonumber \\ \intertext{which by the chain rule and the identity $R(r-n)=\frac{1}{b}\big[(1+R(r-n-1))^b-1 \big]$ can be written as } \,=\,& \sum_{ e\in S_{p,q}^{(n)} }\chi(e\subset A) \frac{\frac{d}{dr} \left[\frac{1}{b}\left(\big(1+ R(r-n-1) \big)^b -1 \right)\right] }{R(r-n) }\nonumber \\ \,=\,& \sum_{e\in S_{p,q}^{(n)} }\chi(e\subset A) \frac{R'(r-n) }{R(r-n) }\,=\, \tau_{p,q}^{(n)}(A) \,. \end{align} Thus $\big(\tau_{p,q}^{(n)}(A)\big)_{n\geq N} $ forms a nonnegative martingale for any $A\in \mathcal{A}_{\mathcal{E}}$ once $n$ is large enough that $A\in \mathcal{A}_{\mathcal{E}}^{(n)}$. Notice that $\tau_{p,q}^{(n)}(A) \leq \tau_{p,q}^{(n)}([0,1])$ since $A\subset [0,1]$ and that $\mathbf{\widetilde{m}}_n=\tau_{p,q}^{(n)}([0,1])$ is the martingale from part (ii) of Lemma~\ref{LemMargPop}. In particular the expectation of $\tau_{p,q}^{(n)}(A)$ has the bound \begin{align} \mathbbmss{E}_{\rho_r}\big[\tau_{p,q}^{(n)}(A)\big]\,\leq \,\mathbbmss{E}_{\rho_r}\big[\tau_{p,q}^{(n)}([0,1])\big]\,=\,\mathbbmss{E}_{\rho_r}[\mathbf{\widetilde{m}}_n]\,=\,\frac{R'(r)}{R(r)}\,. \end{align} By the martingale convergence theorem, $ \big(\tau_{p,q}^{(n)}(A)\big)_{n\in \mathbb{N}} $ converges with large $n$ to a limit $ \widehat{\tau}_{p,q}(A) $ for $\rho_r$-a.e.\ pair $(p,q)$.
Also, since $\tau_{p,q}^{(n)}(A)\leq \mathbf{\widetilde{m}}_n$ and $\displaystyle \sup_{n\in \mathbb{N}}\mathbbmss{E}_{\rho_r}[\mathbf{\widetilde{m}}_n^2 ]<\infty$, the sequence $\tau_{p,q}^{(n)}(A) $ converges in $L^2$ to $\widehat{\tau}_{p,q}(A) $, and for $A\in \mathcal{A}_{\mathcal{E}}^{(N)}$ with $N<n$ \begin{align}\label{CondRel} \tau_{p,q}^{(n)}(A)\,=\,\mathbbmss{E}_{\rho_r}\big[ \widehat{\tau}_{p,q}(A)\,\big|\, \mathpzc{F}_n \big]\,. \end{align} Since the algebra $\mathcal{A}_{\mathcal{E}}$ is countable, the sequence $ \big(\tau_{p,q}^{(n)}(A)\big)_{n\in \mathbb{N}}$ converges to a limit $ \widehat{\tau}_{p,q}(A)$ for all $A\in \mathcal{A}_{\mathcal{E}}$ for $\rho_r$-a.e.\ pair $(p,q)$. In parts (c)-(e) below, we show that $ \widehat{\tau}_{p,q}$ extends to a Borel measure $([0,1],\tau_{p,q})$. \vspace{.3cm} \noindent \textbf{(c) Limit measure:} Let $g:[0,1]\rightarrow {\mathbb R}$ be continuous. Given any $\epsilon>0$ there is a step function $\psi=\sum_{j=1}^{J}\alpha_j\chi_{A_j } $ for disjoint sets $A_j\in \mathcal{A}_{\mathcal{E}}$ such that $\sup_{x\in \mathcal{E} }|g(x)-\psi(x)| <\epsilon$. Since the sequences $ \big(\tau_{p,q}^{(n)}(A_j)\big)_{n\in \mathbb{N}}$ are convergent, there exists $N>0$ large enough so that for any $n,m>N$ \begin{align}\label{Drizzle} \sum_{j=1}^{J}|\alpha_j|\, \big|\tau^{(n)}_{p,q}(A_j) - \tau_{p,q}^{(m)}(A_j)\big| \,<\,\epsilon\,. \end{align} Thus for $n,m>N$ the triangle inequality yields \begin{align*} \bigg|&\int_{[0,1] }g(x)\tau_{p,q}^{(n)}(dx) \,-\, \int_{[0,1] }g(x)\tau_{p,q}^{(m)}(dx) \bigg|\nonumber \\ &\,\leq \,\int_{[0,1] }\big| g(x)-\psi(x) \big|\tau_{p,q}^{(n)}(dx)\,+\,\bigg|\int_{[0,1] }\psi(x)\big( \tau_{p,q}^{(n)} - \tau_{p,q}^{(m)}\big)(dx)\bigg| \,+\,\int_{[0,1] }\big| \psi(x) - g(x)\big|\tau_{p,q}^{(m)}(dx)\,.\nonumber \\ \intertext{Since $\tau_{p,q}^{(n)}$ and $\tau_{p,q}^{(m)}$ assign $\mathcal{V}=[0,1]-\mathcal{E}$ measure zero and $\sup_{x\in \mathcal{E} }\big|g(x)-\psi(x)\big| <\epsilon$, inequality~(\ref{Drizzle}) implies } &\,\leq \, 2\epsilon\sup_{n}\tau_{p,q}^{(n)}([0,1] ) \,+\,\epsilon \,. \end{align*} The supremum of $\big(\tau_{p,q}^{(n)}([0,1] )\big)_{n\in \mathbb{N}}$ is bounded since the sequence is convergent. Since $\epsilon>0$ is arbitrary, the sequence $\big(\int_{[0,1] }g(x)\tau_{p,q}^{(n)}(dx)\big)_{n\in\mathbb{N}} $ is Cauchy and thus convergent. Since $g$ is an arbitrary continuous function on $[0,1]$, the sequence of measures $\big(\tau^{(n)}_{p,q}\big)_{n\in \mathbb{N}}$ converges weakly to a limit measure $\tau_{p,q}$. \vspace{.3cm} \noindent \textbf{(d) The limit measure assigns $\mathcal{V}$ weight zero:} Next we argue that $\tau_{p,q}(\mathcal{V})=0$ for $\rho_r$-a.e.\ $(p,q)$, which follows because the measures $\tau_{p,q}^{(n)}$ are asymptotically concentrated away from $\mathcal{V}$ for large $n$, in the sense described below. For $\rho_r$-a.e.\ $(p,q)$ the following statement holds: given any $x\in \mathcal{V}$ there exist $\varepsilon_x, N_x>0$ such that the support of $d\tau_{p,q}^{(n)}/dx$ is disjoint from $(x-\varepsilon_x,x+\varepsilon_x)$ for all $n>N_x$. To see this, note that if $n\gg 1$ and $x\in \mathcal{V}$ is on the boundary of some interval $e=\big[\frac{k-1}{b^n},\frac{k}{b^n} \big]\in S^{(n)}_{p,q}$, then there is roughly a $\frac{1}{b}$ probability that $x\in f $ for some child $f\in S_{p,q}^{(n+1)}$ of $e$ since $R(r-n-1)\sim \kappa^2/n$ as $n\rightarrow \infty$; see Remark~\ref{RemarkMargPop}.
If $x\notin f$ for all of the children $f\subset e$, then $x$ will have a distance $\geq \frac{1}{b^{n+1}}$ from the set $\cup_{\mathsmaller{f\in S^{(n+1)}_{p,q}}}f$, i.e., the support of $d\tau_{p,q}^{(n+1)}/dx$. Thus the probability that $x$ is not gapped from the support of $d\tau_{p,q}^{(n)}/dx$ vanishes exponentially with large $n$, and with probability one there exist $\varepsilon_x,N_x>0$ such that $\tau_{p,q}^{(n)}(x-\varepsilon_x,x+\varepsilon_x)=0$ for all $n>N_x$. Since $\mathcal{V}$ is countable, this property a.s.\ holds for all elements in $\mathcal{V}$. The weak convergence of $\tau_{p,q}^{(n)}$ to $\tau_{p,q}$ for $\rho_r$-a.e.\ $(p,q)$ implies that there exists an open set $O_{p,q}$ such that $\mathcal{V}\subset O_{p,q}$ and $\tau_{p,q}(O_{p,q})=0$, and, in particular, $\tau_{p,q}(\mathcal{V})=0$. \vspace{.3cm} \noindent \textbf{(e) Properties of the limit measure:} Next we address properties (I)-(III) claimed in the statement of Proposition~\ref{PropIntMeasure}; the auxiliary facts (IV) and (V) below support that discussion, and the conditional relation in (V), in particular, will be useful in the next section. \begin{enumerate}[(I)] \item We can quickly verify the claim that $I_{p,q}$ is a full measure set for $\tau_{p,q}$. The complement of $I_{p,q}$ in $[0,1]$ is $$ [0,1]\,-\,I_{p,q}\,=\,\bigcup_{N=1}^\infty O_N \hspace{1cm}\text{for} \hspace{1cm} O_N \,=\, \bigcup_{\substack{ 1\leq k\leq b^N \\ [p]_N(k)\neq [q]_N(k) }} \Big(\frac{k-1}{b^N},\frac{k}{b^N}\Big) $$ and $\tau^{(n)}_{p,q}(O_N)=0$ for $n>N$. Since $O_N$ is open, $O_N$ must also be a measure zero set for the limit measure $\tau_{p,q}$. Therefore $\tau_{p,q}$ assigns $[0,1]\,-\,I_{p,q}$ measure zero. That $\tau_{p,q}$ is a.s.\ nonatomic follows directly from the energy estimate in Proposition~\ref{PropFrostman}. \item By applying the weak convergence of $\tau_{p,q}^{(n)}$ to $\tau_{p,q}$ with $g=1$, we get that the total mass of $\tau_{p,q}$ is $\rho_r$-a.e.\ equal to $$\tau_{p,q}\big([0,1] \big)\,=\,\lim_{n\rightarrow\infty}\tau_{p,q}^{(n)}\big([0,1] \big)\,=\,\lim_{n\rightarrow\infty}\tau_{p,q}^{(n)}(\mathcal{E} )\,=\,T(p,q)\,. $$ \item For an open set $A\subset [0,1]$, the same argument as used in the proof of part (iii) of Lemma~\ref{LemMargPop} shows the first equality below $$\lim_{n\rightarrow \infty}\frac{\kappa^2}{n^2}\sum_{\substack{1 \leq \ell \leq b^n \\ [p]_n(\ell)=[q]_n(\ell) } }\chi_{[\frac{\ell-1}{b^n},\frac{\ell}{b^n}]\subset A} \,=\, \lim_{n\rightarrow \infty}\frac{1}{n}\sum_{\substack{1 \leq \ell \leq b^n \\ \ell \in \mathbf{I}_n(p,q) } }\chi_{[\frac{\ell-1}{b^n},\frac{\ell}{b^n}]\subset A}\,= \,\tau_{p,q}(A) \,.$$ \item The fact that $\tau_{p,q}(\mathcal{V})=0$ has a few implications. Firstly, the measure $\tau_{p,q}$ is determined by its values on $ \mathcal{A}_{\mathcal{E}}$. Moreover, we can use $\tau_{p,q}(\mathcal{V})=0$ to prove that $\tau_{p,q}(A)=\widehat{\tau}_{p,q}(A)$ for all $A\in \mathcal{A}_{\mathcal{E}}$ using the argument that follows. By (I) and $\tau_{p,q}(\mathcal{V})=0$, for any $A\in \mathcal{A}_{\mathcal{E}}$, $$\tau_{p,q}(A)\,+\,\tau_{p,q}(\mathcal{E}-A) \,=\,\tau_{p,q}(\mathcal{E})\, =\, \tau_{p,q}([0,1])\, =\, \widehat{\tau}_{p,q}(\mathcal{E})\,=\,\widehat{\tau}_{p,q}(A)\,+\,\widehat{\tau}_{p,q}(\mathcal{E}-A)\,.
$$ For a closed set $C\subset [0,1]$, the weak convergence of $\tau_{p,q}^{(n)}$ to $\tau_{p,q}$ as $n\rightarrow \infty$ implies that $ \tau_{p,q}(C)$ is bounded from below by the limsup of $\tau_{p,q}^{(n)}(C) $ as $n\rightarrow \infty$, and thus for $A\in \mathcal{A}_{\mathcal{E}}$ $$ \tau_{p,q}(A)\, =\, \tau_{p,q}(\overline{A})\, \geq \,\limsup_{n\rightarrow \infty} \tau_{p,q}^{(n)}(\overline{A}) \,=\,\lim_{n\rightarrow \infty} \tau_{p,q}^{(n)}(A) \,=:\, \widehat{\tau}_{p,q}(A) \,. $$ Since the same reasoning applies with $A$ replaced by $\mathcal{E}-A$, we get that $\tau_{p,q}(A)= \widehat{\tau}_{p,q}(A) $ for all $A\in \mathcal{A}_{\mathcal{E}}$. \item The equality $\tau_{p,q}= \widehat{\tau}_{p,q} $ on $\mathcal{A}_{\mathcal{E}}$ and~(\ref{CondRel}) imply that $\mathbbmss{E}_{\rho_r}\big[\tau_{p,q}(A)\,|\, \mathpzc{F}_n ]=\tau_{p,q}^{(n)}(A)$ holds for any $A\in \mathcal{A}_{\mathcal{E}}^{(N)}$ and $n\geq N$. \end{enumerate} \end{proof} \subsection{A lower bound for the log-Hausdorff exponent of the intersection-times set } The following is a corollary of Proposition~\ref{PropFrostman} below. Recall that $\rho_r$ is the probability measure on $\Gamma\times \Gamma$ from part (ii) of Lemma~\ref{LemCorrelate} and $I_{p,q}=\big\{t\in [0,1]\,\big|\,p(t)=q(t) \big\}$ is the set of intersection times of two paths $p,q\in \Gamma$. \begin{corollary}\label{CorIntSet} The set of intersection times $I_{p,q}$ has log-Hausdorff exponent $\geq 1$ for $\rho_r$-a.e.\ pair $(p,q)$. \end{corollary} The proof of Corollary~\ref{CorIntSet} is placed in Appendix~\ref{AppendixHausdorff} since it follows from the energy bound in Proposition~\ref{PropFrostman} by essentially the same method used to derive a lower bound on the Hausdorff dimension of a set from an energy bound. \begin{proposition}[energy bound]\label{PropFrostman} Let the measure $([0,1],\tau_{p,q})$ be defined as in Proposition~\ref{PropIntMeasure}. For $\rho_r$-a.e.\ pair $(p,q)$ and any $\frak{h}\in [0,1)$, \begin{align}\label{Energy} Q_\frak{h}(\tau_{p,q})\,:=\,\int_{[0,1]\times [0,1]} \log^\frak{h} \Big(\frac{1}{|x-y|}\Big) \tau_{p,q}(dx) \tau_{p,q}(dy)\,\,<\,\,\infty \,. \end{align} \end{proposition} \begin{proof} We divide the proof into parts (a)-(e).\vspace{.3cm} \noindent \textbf{(a) Energy estimate:} It suffices to prove that $\mathbbmss{E}_{\rho_r}\big[ Q_\frak{h}( \tau_{p,q}) \big]<\infty$ for any $\frak{h}\in [0,1)$. We will define a slightly different energy function $\widetilde{Q}_\frak{h}$ below that fits conveniently with the hierarchical structure of our model and that can be used to bound $Q_\frak{h}$. For $x,y\in [0,1]$ define $\frak{g}(x,y)$ as the smallest value $n\in \mathbb{N}$ such that $x$ and $y$ do not belong to the same interval $ [\frac{k-1}{b^n},\frac{k}{b^n}]$ for some $k\in \{1,\ldots, b^n\}$. For a measure $ \varrho $ on $[0,1]$ define \begin{align*} \widetilde{Q}_\frak{h}( \varrho)\,:=\,\int_{[0,1]\times [0,1]} \big(\frak{g}(x,y) \big)^\frak{h} \varrho(dx)\varrho(dy) \,, \end{align*} and define $\widetilde{Q}_\frak{h}^{(n)}\big( \varrho\big)$ analogously with $\frak{g}$ replaced by its cut-off version $\frak{g}_n:=\min( \frak{g}, n )$ for $n\in \mathbb{N}$. For $\mathbf{c}:=\log^\frak{h} b+ \int_0^1\int_0^1 \log^\frak{h} \big(\frac{1}{|r+s|}\big)drds $, our analysis will be split between showing (I) and (II) below.
\begin{align}\label{Q2Q} \mathbbmss{E}_{\rho_r}\big[ Q_\frak{h}( \tau_{p,q}) \big]\,\underbrace{\leq}_{\textup{(I)}} \,\mathbf{c}\mathbbmss{E}_{\rho_r}\big[ \widetilde{Q}_\frak{h}( \tau_{p,q}) \big]\, \leq \,\mathbf{c}\liminf_{n\rightarrow\infty}\mathbbmss{E}_{\rho_r}\big[ \widetilde{Q}_\frak{h}^{(n)}( \tau_{p,q}^{(n)}) \big] \,\underbrace{<}_{ \textup{(II)} }\,\infty \,, \end{align} where the measures $\tau_{p,q}^{(n)}$ are defined as in~(\ref{TauN}). Recall from the discussion in part (e) of the proof of Proposition~\ref{PropIntMeasure} that $\tau_{p,q}^{(n)}(A)=\mathbbmss{E}_{\rho_r}[ \tau_{p,q}(A)\, |\,\mathpzc{F}_n]$ forms a martingale in $n$ for each $A\in \mathcal{A}_{\mathcal{E}}$ that a.s.\ converges to $\tau_{p,q}(A)$ as $n\rightarrow \infty$. The second inequality above holds since $\widetilde{Q}_\frak{h}( \tau_{p,q})\leq \liminf_{n\rightarrow\infty}\widetilde{Q}_\frak{h}^{(n)}( \tau_{p,q}^{(n)}) $ by a generalized version of Fatou's lemma~\cite[Section 11.4]{Royden} since $\frak{g}_n$ converges point-wise to $\frak{g}$ and the measures $\tau_{p,q}^{(n)}\times \tau_{p,q}^{(n)}$ converge set-wise to $\tau_{p,q}\times \tau_{p,q}$ on $\mathcal{A}_{\mathcal{E}}\oplus \mathcal{A}_{\mathcal{E}}$. \vspace{.3cm} \noindent \textbf{(b) Proof of (I):} We write the expectation of $Q_\frak{h}( \tau_{p,q}) $ in terms of nested conditional expectations as \begin{align} \mathbbmss{E}_{\rho_r}\big[ Q_\frak{h}( \tau_{p,q}) \big]\,=\,&\sum_{n=1}^\infty \mathbbmss{E}_{\rho_r}\Bigg[ \underbrace{\mathbbmss{E}_{\rho_r}\bigg[ \int_{n=\frak{g}(x,y)} \log^\frak{h} \Big(\frac{1}{|x-y|}\Big) \tau_{p,q}(dx) \tau_{p,q}(dy)\,\bigg|\, \mathpzc{F}_n\bigg]}_{\mathbf{Q}_\frak{h}^{(n)}(p,q) }\Bigg]\,, \intertext{where $\mathbf{Q}_\frak{h}^{(n)}(p,q) $ denotes the conditional expectation, and similarly } \,\mathbbmss{E}_{\rho_r}\big[ \widetilde{Q}_\frak{h}( \tau_{p,q}) \big]\,=\,&\sum_{n=1}^\infty \mathbbmss{E}_{\rho_r}\Bigg[\underbrace{\mathbbmss{E}_{\rho_r}\bigg[ \int_{n=\frak{g}(x,y)} \big(\frak{g}(x,y)\big)^\frak{h} \tau_{p,q}(dx) \tau_{p,q}(dy)\,\bigg|\, \mathpzc{F}_n\bigg]}_{\mathbf{\widetilde{Q}}_\frak{h}^{(n)}(p,q) }\Bigg] \,. \end{align} It suffices to show that $\mathbf{Q}_\frak{h}^{(n)}(p,q) $ is bounded from above by $\mathbf{c}\mathbf{\widetilde{Q}}_\frak{h}^{(n)}(p,q)$. The expression $\mathbf{Q}_\frak{h}^{(n)}(p,q)$ can be written as \begin{align} \mathbf{Q}_\frak{h}^{(n)}(p,q) \,=\,& \int_{n=\frak{g}(x,y)} \log^\frak{h} \Big(\frac{1}{|x-y|}\Big) \tau_{p,q}^{(n)}(dx) \tau_{p,q}^{(n)}(dy) \,.\label{ConditionedQ} \intertext{To see the above equality, first recall that the measure $\tau_{p,q}^{(n)}(A)$ is the conditional expectation of $\tau_{p,q}(A)$ with respect to $\mathpzc{F}_n$ for any set $A$ that is a union of intervals $(\frac{\ell-1}{b^n}, \frac{\ell}{b^n})$. The set $\{(x,y)\,|\, n=\frak{g}(x,y)\}$ is a union of intervals $(\frac{\ell_1 -1}{b^n}, \frac{\ell_1}{b^n})\times (\frac{\ell_2 -1}{b^n}, \frac{\ell_2}{b^n})$ for $\ell_1,\ell_2\in \mathbb{N}$ with $\ell_1\neq \ell_2$; however, the measure $\tau_{p,q}$ acts independently on the intervals $ (\frac{\ell_1 -1}{b^n}, \frac{\ell_1}{b^n})$ and $ (\frac{\ell_2 -1}{b^n}, \frac{\ell_2}{b^n})$ when conditioned on $\mathpzc{F}_n $. By definition of $ \tau_{p,q}^{(n)}$ we have the equality } \,=\,&\sum_{\substack{e_1, e_2 \in S^{(n)}_{p,q}\\ \frak{g}(e_1,e_2)=n } } \bigg(\frac{R'(r-n)}{R(r-n) } \bigg)^2 \underbracket{b^{2n}\int_{e_1\times e_2} \log^\frak{h} \Big(\frac{1}{|x-y|}\Big) dx dy} \,.
\nonumber \intertext{The bracketed expression is smaller than $\mathbf{c}n^\frak{h}$ by the computation~(\ref{prelim}) below, and thus } \,\leq \,&\mathbf{c}\sum_{\substack{e_1, e_2 \in S^{(n)}_{p,q}\\ \frak{g}(e_1,e_2)=n } } \bigg(\frac{R'(r-n)}{R(r-n) } \bigg)^2 n^\frak{h} \,. \label{CritInq} \intertext{Using the definition of $\tau_{p,q}^{(n)}$ again, we have } =\, &\mathbf{c} \int_{n=\frak{g}(x,y)} \big(\frak{g}(x,y)\big)^\frak{h} \tau_{p,q}^{(n)}(dx) \tau_{p,q}^{(n)}(dy)\, =\, \mathbf{c}\mathbf{\widetilde{Q}}_\frak{h}^{(n)}(p,q) \,. \nonumber \end{align} The last equality follows by the same argument as for~(\ref{ConditionedQ}). Thus (I) follows once the inequality~(\ref{CritInq}) is justified. To see~(\ref{CritInq}), recall that the sets $e_1,e_2$ in~(\ref{CritInq}) have the forms $(\frac{\ell_1-1}{b^{n}},\frac{\ell_1}{b^{n}})$ and $(\frac{\ell_2-1}{b^{n}},\frac{\ell_2}{b^{n}})$ for some $\ell_1 , \ell_2\in \mathbb{N}$ with $\ell_1 \neq \ell_2$. Without loss of generality, we can assume $\ell_1 < \ell_2$. The left side below is maximized when $\ell_2=\ell_1+1$, i.e., the intervals are adjacent, so we have the inequality \begin{align} b^{2n}\int_{e_1\times e_2} \log^\frak{h} \Big(\frac{1}{|x-y|}\Big) dxdy\,\leq \,&b^{2n}\int_{(\frac{\ell_1-1}{b^{n}},\frac{\ell_1}{b^{n}})\times (\frac{\ell_1}{b^{n}},\frac{\ell_1+1}{b^{n}})} \log^\frak{h} \Big(\frac{1}{|x-y|}\Big) dxdy \,. \nonumber \intertext{The change of variables $s=\ell_1- b^{n}x $ and $t=b^{n}y-\ell_1$ yields } \, = \,&\int_0^1\int_0^1\bigg( n\log b+ \log \Big(\frac{1}{|s+t|}\Big)\bigg)^\frak{h} dsdt \,. \label{prelim} \end{align} Finally, since $t\mapsto t^{\frak{h}}$ is subadditive on $[0,\infty)$ for $\frak{h}\in [0,1)$, the above is less than $\mathbf{c}n^{\frak{h}}$. \vspace{.3cm} \noindent \textbf{(c) First step towards proving (II):} Note that $\widetilde{Q}_\frak{h}^{(n)}\big( \tau_{p,q}^{(n)}\big)$ can be written as \begin{align*} \widetilde{Q}_\frak{h}^{(n)}\big( \tau_{p,q}^{(n)}\big)\,=\,\int_{[0,1]\times [0,1]} \big(\frak{g}_n(x,y) \big)^\frak{h} \tau_{p,q}^{(n)}(dx)\tau_{p,q}^{(n)}(dy) \,=\, \bigg(\frac{ R'(r-n) }{R(r-n) }\bigg)^2 \sum_{ e_1,e_2\in S_{p,q}^{(n)} } \big( \frak{g}_n(e_1,e_2)\big)^\frak{h} \,, \end{align*} where $ \frak{g}_n(e_1,e_2):= \frak{g}_n(x,y)$ for representatives $x\in e_1$ and $y\in e_2$. The conditional expectation of $\widetilde{Q}^{(n+1)}_\frak{h}\big( \tau_{p,q}^{(n+1)}\big)$ with respect to $ \mathpzc{F}_n $ has the form \begin{align} \mathbbmss{E}_{\rho_r}\big[ \widetilde{Q}^{(n+1)}_\frak{h}\big( \tau_{p,q}^{(n+1)}\big) \,\big|\,\mathpzc{F}_n \big]\,=\,&\bigg(\frac{ R'(r-n-1) }{R(r-n-1) }\bigg)^2\sum_{ e_1,e_2\in S_{p,q}^{(n)} } \mathbbmss{E}_{\rho_r}\Bigg[ \sum_{\substack{ f_1, f_2\in S_{p,q}^{(n+1)} \nonumber \\ f_1 \subset e_1, f_2\subset e_2 } } \big( \frak{g}_{n+1}(f_1,f_2) \big)^\frak{h} \,\bigg|\, \mathpzc{F}_n \Bigg]\,, \nonumber \intertext{and we can split our sum into the cases $e_1\neq e_2$ and $e_1 = e_2$ to write the above as } \,=\,& \bigg(\frac{ R'(r-n) }{R(r-n) }\bigg)^2 \sum_{\substack{ e_1 ,e_2\in S_{p,q}^{(n)}\\ e_1\neq e_2 }} \big(\frak{g}_n(e_1,e_2) \big)^\frak{h} \nonumber \\ & \,+\,\bigg(\frac{ R'(r-n-1) }{R(r-n-1) }\bigg)^2 \sum_{ e\in S_{p,q}^{(n)}} \mathbbmss{E}_{\rho_r}\Bigg[ \sum_{ \substack{ f_1, f_2\in S_{p,q}^{(n+1)} \\ f_1,f_2\subset e } } \big(\frak{g}_{n+1}(f_1,f_2) \big)^\frak{h} \,\bigg|\, \mathpzc{F}_n \Bigg] \,.\label{Diagonal} \end{align} We have applied the identity~(\ref{Expect}) twice to rewrite the sum over the $e_1\neq e_2$ terms.
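Before proceeding to part (d), we remark that the elementary bound $(\ref{prelim})\leq \mathbf{c}n^{\frak{h}}$ from part (b) is easy to probe numerically. The following Python sketch (an illustration only, not part of the proof; the choices $b=2$, $\frak{h}=0.9$ and the sample size are arbitrary) estimates the integral in~(\ref{prelim}) by Monte Carlo and confirms that its ratio to $n^{\frak{h}}$ stays bounded:
\begin{verbatim}
import numpy as np

# Monte Carlo estimate of int_0^1 int_0^1 (n*log(b) + log(1/(s+t)))^h ds dt,
# which by the argument following (prelim) should stay below c*n^h;
# b = 2 and h = 0.9 are sample choices, not values fixed by the text.
rng = np.random.default_rng(seed=0)
b, h = 2, 0.9
s, t = rng.random(10**6), rng.random(10**6)
for n in [1, 5, 25, 125]:
    integrand = (n * np.log(b) + np.log(1.0 / (s + t))) ** h
    print(n, integrand.mean() / n**h)  # ratios stay bounded, near log(b)^h
\end{verbatim}
The integrand is positive for all $n\geq 1$ since $n\log b+\log\frac{1}{s+t}=\log\frac{b^n}{s+t}$ and $s+t<2\leq b^n$, so the Monte Carlo average is well-defined.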
\vspace{.3cm} \noindent \textbf{(d) A single term in the sum~(\ref{Diagonal}):} Next we will show that the terms $e\in S_{p,q}^{(n)}$ from the sum~(\ref{Diagonal}) satisfy the $n\gg 1$ order equality \begin{align}\label{Cond} \bigg(\frac{ R'(r-n-1) }{R(r-n-1) }\bigg)^2 \mathbbmss{E}_{\rho_r}\Bigg[ \sum_{ \substack{ f_1, f_2\in S_{p,q}^{(n+1)} \\ f_1,f_2\subset e } } \big(\frak{g}_{n+1}(f_1,f_2) \big)^\frak{h} \,\bigg|\, \mathpzc{F}_n \Bigg]\,=\, \bigg(\frac{R'(r-n) }{ R(r-n) }\bigg)^2\Big(n^\frak{h}\,+\,\mathit{O}\big(n^{\frak{h}-1}\big)\Big)\,. \end{align} Notice that the left side of~(\ref{Cond}) can be rewritten as \begin{align*} & \bigg(\frac{ R'(r-n-1) }{R(r-n-1) }\bigg)^2 \Bigg(n^\frak{h}\sum_{\ell=1}^{b}\ell(\ell-1)\frac{\frac{1}{b}{ b\choose \ell} \big(R(r-n-1) \big)^{\ell}}{R(r-n) } \,+\,(n+1)^\frak{h}\sum_{\ell=1}^{b}\ell \frac{\frac{1}{b}{ b\choose \ell} \big(R(r-n-1) \big)^{\ell}}{R(r-n) } \Bigg)\,, \intertext{where the above terms correspond, respectively, to when $f_1\neq f_2$ and $f_1=f_2$. The factor $\ell(\ell-1)$ appears in the above since if $e$ has $\ell\in \{1,\ldots, b\}$ children, then there are $\ell(\ell-1)$ ways to choose $ f_1, f_2\subset e$ such that $f_1\neq f_2$. We can use the chain rule to write } \,=\,& R'(r-n-1)\Bigg(n^\frak{h} \frac{ \frac{d}{dr}\sum_{\ell=1}^{b}\ell\frac{1}{b}{ b\choose \ell} \big(R(r-n-1) \big)^{\ell-1} }{ R(r-n) } \,+\, (n+1)^\frak{h} \frac{\frac{d}{dr}\sum_{\ell=1}^{b}\frac{1}{b}{ b\choose \ell} \big(R(r-n-1) \big)^{\ell} }{R(r-n-1) R(r-n) }\Bigg) \,, \intertext{and another application of the chain rule with $R(r-n)=\frac{1}{b}\big[\big(1+R(r-n-1)\big)^b-1 \big]$ yields } \,=\,& n^\frak{h} \frac{ R'(r-n-1)}{R(r-n) } \frac{d}{dr}\bigg[ \frac{R '(r-n) }{ R'(r-n-1) } \bigg] \,+\,(n+1)^\frak{h}\frac{ R'(r-n-1) }{R(r-n-1) } \frac{ R'(r-n)}{ R(r-n) } \,. \intertext{Applying the quotient rule and factoring out $n^\frak{h}\frac{R'(r-n) }{ R(r-n) }$ yields } \,=\,& n^\frak{h}\frac{R'(r-n) }{ R(r-n) }\Bigg[ \frac{R''(r-n)}{R'(r-n) } \,-\,\frac{R''(r-n-1)}{R '(r-n-1) } \,+\, \Big(1+\frac{1}{n}\Big)^\frak{h}\frac{ R'(r-n-1) }{R(r-n-1) } \Bigg]\,, \intertext{which we can write in terms of $\log$ as } \,=\,& n^\frak{h} \frac{R'(r-n) }{ R(r-n) }\Bigg[ \frac{d}{dr}\log\bigg(\frac{ R'(r-n)}{R'(r-n-1) } \bigg) \,+\, \Big(1+\frac{1}{n}\Big)^\frak{h}\frac{ R'(r-n-1) }{R(r-n-1) } \Bigg]\,. \intertext{By the identity $ R'(r-n)= \big(1+ R(r-n-1) \big)^{b-1}R'(r-n-1)$, we have } \,=\,& n^\frak{h} \frac{R'(r-n) }{ R(r-n) }\Bigg[ \frac{d}{dr}\log\Big( \big(1+R(r-n-1)\big)^{b-1} \Big) \,+\, \Big(1+\frac{1}{n}\Big)^\frak{h}\frac{ R'(r-n-1) }{R(r-n-1) } \Bigg]\,. \intertext{Computing the derivative with the chain rule and factoring out $\frac{R'(r-n) }{ R(r-n) }$ gives us } \,=\,& n^\frak{h} \bigg(\frac{R'(r-n) }{ R(r-n) }\bigg)^2\Bigg[ (b-1) \frac{R(r-n) }{R'(r-n) }\frac{ R'(r-n-1) }{ 1+R(r-n-1) } \,+\,\Big(1+\frac{1}{n}\Big)^\frak{h}\frac{R(r-n) }{ R'(r-n) }\frac{R'(r-n-1) }{R(r-n-1) } \Bigg]\,. \intertext{Applying $ R'(r-n)= \big(1+ R(r-n-1) \big)^{b-1}R'(r-n-1)$ again yields } \,=\,& n^\frak{h} \bigg(\frac{R'(r-n) }{ R(r-n) }\bigg)^2\Bigg[\underbrace{ \frac{ (b-1)R(r-n) }{ \big(1+R(r-n-1) \big)^b } \,+\,\Big(1+\frac{1}{n}\Big)^\frak{h}\frac{ R(r-n) }{R(r-n-1) \big(1+R(r-n-1) \big)^{b-1} }}_{ 1+\frac{\frak{h}}{n}+\mathit{o}( \frac{1}{n} ) }\Bigg]\,.
\end{align*} The equality $R(r)=-\frac{\kappa^2}{r}+\frac{ \kappa^2\eta \log(-r) }{ r^2} +\mathit{O}\big(\frac{ \log^2(-r) }{ r^3}\big)$ for $-r\gg 1$ implies that the braced expression is $1+\frac{\frak{h}}{n}+\mathit{o}\big( \frac{1}{n} \big) $ with large $n$. \vspace{.3cm} \noindent \textbf{(e) Returning to~(\ref{Diagonal}):} As a consequence of the order equality~(\ref{Cond}), there is a $C>0$ such that for all $n\in \mathbb{N}$ \begin{align}\label{FinExp} \mathbbmss{E}_{\rho_r}\big[ \widetilde{Q}_\frak{h}^{(n+1)}\big( \tau_{p,q}^{(n+1)}\big) \,\big|\, \mathpzc{F}_n \big]\,-\,\widetilde{Q}_\frak{h}^{(n)}\big( \tau_{p,q}^{(n)}\big) \,\leq \,Cn^{\frak{h}-1} \bigg(\frac{R'(r-n) }{ R(r-n) }\bigg)^2\widetilde{\xi}^{(n)}_{p,q}\,, \end{align} where $\widetilde{\xi}^{(n)}_{p,q}$ is the number of elements in $S_{p,q}^{(n)}$. The expectation $\mathbbmss{E}_{\rho_r}\big[ \widetilde{Q}_\frak{h}^{(n)}\big( \tau_{p,q}^{(n)}\big) \big]$ can be written in terms of a telescoping sum as \begin{align*} \mathbbmss{E}_{\rho_r}\big[ \widetilde{Q}_\frak{h}^{(n)}\big( \tau_{p,q}^{(n)}\big) \big]\,=\,&\mathbbmss{E}_{\rho_r}\Big[ \widetilde{Q}_\frak{h}^{(1)}\big( \tau_{p,q}^{(1)}\big) \Big]\,+\, \sum_{k=1}^{n-1}\bigg( \mathbbmss{E}_{\rho_r}\Big[ \widetilde{Q}_\frak{h}^{(k+1)}\big( \tau_{p,q}^{(k+1)}\big) \Big]\,-\,\mathbbmss{E}_{\rho_r}\Big[ \widetilde{Q}_\frak{h}^{(k)}\big( \tau_{p,q}^{(k)}\big) \Big]\bigg)\,. \intertext{Inserting nested conditional expectations and applying~(\ref{FinExp}) yields } \,=\,&\bigg(\frac{R'(r) }{R(r)}\bigg)^2 \,+\, \sum_{k=1}^{n-1} \mathbbmss{E}_{\rho_r}\Big[ \mathbbmss{E}_{\rho_r}\Big[ \widetilde{Q}_\frak{h}^{(k+1)}\big( \tau_{p,q}^{(k+1)}\big) \,\Big|\, \mathpzc{F}_k \Big]\,-\,\widetilde{Q}_\frak{h}^{(k)}\big( \tau_{p,q}^{(k)}\big) \Big]\\ \,\leq \,&\bigg(\frac{R'(r) }{R(r)}\bigg)^2 \,+\, \sum_{k=1}^{n-1} Ck^{\frak{h}-1} \bigg(\frac{R'(r-k) }{ R(r-k) }\bigg)^2\mathbbmss{E}_{\rho_r}\Big[\widetilde{\xi}^{(k)}_{p,q} \Big]\, . \intertext{The expectation of $\widetilde{\xi}^{(k)}_{p,q} $ is $\frac{R'(r)}{R(r)} \frac{R(r-k)}{R'(r-k)} $ as a consequence of part (ii) of Proposition~\ref{PropMartingales}, so we have } \, = \,&\bigg(\frac{R'(r) }{R(r)}\bigg)^2 \,+\,\frac{R'(r)}{R(r)} \sum_{k=1}^{n-1} Ck^{\frak{h}-1} \frac{R'(r-k) }{ R(r-k) } \, \leq \,\bigg(\frac{R'(r) }{R(r)}\bigg)^2 \,+\,\frac{R'(r)}{R(r)} \mathbf{C} \sum_{k=1}^{n-1} k^{\frak{h}-2} \,. \end{align*} The inequality holds for large enough $\mathbf{C}>0$ since $\frac{R'(r-k) }{ R(r-k)}=\frac{1}{k}+\mathit{o}\big(\frac{1}{k}\big)$ for $k\gg 1$ by Lemma~\ref{LemVar} and Remark~\ref{RemarkDerR}. Since the series $\sum_{k=1}^{\infty} k^{\frak{h}-2}$ is summable for $\frak{h}\in (0,1)$, the limit of $\mathbbmss{E}_{\rho_r}\big[ \widetilde{Q}_\frak{h}^{(n)}\big( \tau_{p,q}^{(n)}\big) \big]$ as $n\rightarrow \infty$ is finite. \end{proof} \subsection{Proof of Lemma~\ref{LemIntSet}} \begin{proof}[Proof of Lemma~\ref{LemIntSet}] For $p,q\in \Gamma$ the set of intersection times $I_{p,q}=\{r\in [0,1]\,|\,p(r)=q(r) \}$ can be written as $$ I_{p,q}\,=\,\bigcap_{n=1}^{\infty} I_{p,q}^{(n)} \hspace{1.5cm}\text{for}\hspace{1.5cm} I_{p,q}^{(n)}\,=\,[0,1]\,-\,\bigcup_{\substack{1\leq k \leq b^n \\ [p]_n(k)\neq [q]_n(k) }}\Big(\frac{k-1}{b^n}, \frac{k}{b^n} \Big)\,. $$ By~Corollary~\ref{CorIntSet} the log-Hausdorff exponent of $I_{p,q}$ is $\geq 1$.
Thus we only need to show that the log-Hausdorff exponent of $I_{p,q}$ is $\leq 1$ by showing that $H^{\log}_{1}( I_{p,q} )<\infty$, where $H^{\log}_{1}=\lim_{\delta\searrow 0} H^{\log}_{1,\delta}$ is the outer measure defined in Definition~\ref{DefLogHaus}. Recall that $\mathcal{V}$ is defined as the set of $x\in [0,1]$ of the form $\frac{k}{b^n}$ for $k,n\in \mathbb{N}_0$ and $\mathcal{E}:=[0,1]-\mathcal{V}$. Then $ H^{\log}_{1}\big( \mathcal{V}\big)=0 $ since $\mathcal{V}$ is countable. Given $\delta>0$, pick $n\in \mathbb{N}$ such that $ b^{-n}\leq \delta $. Let $\widetilde{\xi}_n(p,q)\in \{1,\ldots, b^n\}$ be defined as in Definition~\ref{DefSurvProg}. The set $I_{p,q}\cap \mathcal{E}$ is covered by $\widetilde{\xi}_n(p,q)$ intervals $\big(\frac{k-1}{b^n},\frac{k}{b^n} \big)$ with $k\in \{1,\ldots, b^n\}$, and thus $$H^{\log}_{1,\delta}\big(I_{p,q}\cap \mathcal{E} \big) \,\leq \, \frac{\widetilde{\xi}_n(p,q)}{ \log ( b^n) }\,=\,\frac{ \widetilde{\xi}_n(p,q) }{n \log b } \,. $$ However, by part (ii) of Lemma~\ref{LemMargPop}, $\frac{\kappa^2 }{n }\widetilde{\xi}_n(p,q) $ converges $\rho_r$-a.e.\ to $T(p,q)$. Thus for $\rho_r$-a.e.\ pair $(p,q)$ we have the bound \begin{align} H^{\log}_{1}( I_{p,q})\,=\,\lim_{\delta\searrow 0} H^{\log}_{1,\delta}( I_{p,q})\,\leq \,\liminf_{n\rightarrow \infty}\frac{ \widetilde{\xi}_n(p,q) }{n \log b }\, =\,\frac{ T(p,q)}{ \kappa^2 \log b }\,. \end{align} Since $T(p,q)$ is $\rho_r$-a.e.\ finite by part (ii) of Proposition~\ref{PropLemCorrelate}, $H^{\log}_{1}( I_{p,q})$ is $\rho_r$-a.e.\ finite. Therefore the log-Hausdorff exponent of $I_{p,q}$ is $\leq 1$ for $\rho_r$-a.e.\ pair $(p,q)$. \end{proof} \section{Proof of Theorem~\ref{ThmPathMeasure} }\label{SecPathIntMxM} \begin{proof}[Proof of Theorem~\ref{ThmPathMeasure}] Part (i) is a corollary of (iii), which is proved below. \vspace{.3cm} \noindent (ii) By property (II) of Theorem~\ref{ThmExistence} and part (iv) of Proposition~\ref{PropLemCorrelate}, \begin{align}\label{TComp} \mathbb{E}\bigg[ \int_{\Gamma\times \Gamma}e^{aT(p,q)} \mathbf{M}_r(dp) \mathbf{M}_r(dq) \bigg]\,=\,\int_{\Gamma\times\Gamma}e^{aT(p,q)}\upsilon_r(dp,dq)\,=\,1\,+\,R(r+a)\,. \end{align} It follows that $\mathbf{M}_{r}\times \mathbf{M}_r$ a.s.\ assigns full measure to the set of pairs $(p,q)$ such that $ \displaystyle T(p,q):=\lim_{n\rightarrow \infty} \frac{\kappa^2}{n^2}\xi_n(p,q) $ is well-defined and finite. \vspace{.3cm} \noindent (iii) Let $\mathbf{G}$ be the set of $(p,q)\in \Gamma\times \Gamma$ such that the intersection-times set $I_{p,q}$ has log-Hausdorff exponent one and $\mathbf{\widehat{G}}$ be defined as the set of $(p,q)$ such that $T(p,q)>0$. The events $ \mathbf{G}$ and $\mathbf{\widehat{G}}$ differ by sets of $\upsilon_r$-measure zero since \begin{align}\label{UpsilonZero} \upsilon_r\big( \mathbf{G}\Delta\mathbf{\widehat{G}} \big)\,=\,\big(\mu\times \mu\,+\,R(r)\rho_r \big)\big( \mathbf{G}\Delta\mathbf{\widehat{G}} \big)\,=\, R(r) \rho_r \big( \mathbf{G}\Delta\mathbf{\widehat{G}} \big)\,=\,0, \end{align} where $\mathbf{G}\Delta\mathbf{\widehat{G}}$ denotes the symmetric difference $(\mathbf{G}\backslash\mathbf{\widehat{G}})\cup (\mathbf{\widehat{G}}\backslash\mathbf{G}) $. The first equality above holds by part (ii) of Lemma~\ref{LemCorrelate}, the second equality is a consequence of Corollary~\ref{CorrMuMu}, and the third equality holds because $\rho_r$ assigns full measure to $ \mathbf{G}$ and $\mathbf{\widehat{G}}$ by Lemma~\ref{LemIntSet} and Proposition~\ref{PropLemCorrelate}, respectively.
Applying property (II) of Theorem~\ref{ThmExistence} with $g=\chi_{ \mathbf{G}\Delta\mathbf{\widehat{G}} }$ yields \begin{align}\label{EM} \mathbb{E}\big[ \mathbf{M}_r\times \mathbf{M}_r\big( \mathbf{G}\Delta\mathbf{\widehat{G}} \big) \big]\,=\,\upsilon_r(\mathbf{G}\Delta\mathbf{\widehat{G}}) \,=\,0\,. \end{align} Thus $\mathbf{M}_r\times \mathbf{M}_r$ a.s.\ assigns the set $\mathbf{G}\Delta\mathbf{\widehat{G}}$ measure zero. Let $\mathbf{S}$ be the set of pairs $(p,q)$ such that the intersection-times set $I_{p,q}$ is finite and $\mathbf{\widehat{S}}$ be the set of pairs such that $T(p,q)=0$. Then $\mathbf{S}\subset \mathbf{\widehat{S}}$, and $$ \mathbb{E}\big[ \mathbf{M}_r\times \mathbf{M}_r\big( \mathbf{\widehat{S}} -\mathbf{S} \big) \big]\,=\,\upsilon_r\big(\mathbf{\widehat{S}} -\mathbf{S}\big)\,=\,(\mu\times \mu+R(r)\rho_r)\big(\mathbf{\widehat{S}} -\mathbf{S}\big) \,=\,\mu\times \mu\big(\mathbf{\widehat{S}} -\mathbf{S}\big)\,=\,0\,, $$ where the third equality holds by part (ii) of Proposition~\ref{PropLemCorrelate}, and the fourth equality uses that $ \mathbf{S}$ is a full measure set for $\mu\times \mu$. Since $\Gamma\times \Gamma= \mathbf{\widehat{S}}\cup \mathbf{\widehat{G}} $, the above shows that $\mathbf{M}_r\times \mathbf{M}_r$ a.s.\ assigns full measure to $\mathbf{G}\cup \mathbf{S}$, which was the desired result. \vspace{.3cm} \noindent (iv): Given $p\in \Gamma$, recall that $\mathbf{\widehat{s}}_{p}$ is defined as the set of $q\in \Gamma$ such that $T(p,q)>0$, which can be expressed as $\mathbf{\widehat{s}}_{p}=\big\{q\in \Gamma \,|\,(p,q)\in \mathbf{\widehat{G}}\}$ for $\mathbf{\widehat{G}}$ defined as in the proof of part (iii). Define the set $\mathcal{S}_{\mathbf{M}_r}:=\big\{p\in \Gamma\,\big|\, \mathbf{M}_r( \mathbf{\widehat{s}}_{p})=0 \big\}$ and the corresponding indicator function $\mathcal{I}_{\mathbf{M}_r}:=\chi_{\mathcal{S}_{\mathbf{M}_r}} $. By definition, we must show that $\mathbf{M}_r$ a.s.\ satisfies $\mathbf{M}_r(\mathcal{S}_{\mathbf{M}_r})=0$. We can write $\mathbf{M}_r=A_r+B_r$, where $A_r(dp):= \mathcal{I}_{\mathbf{M}_r}(p)\mathbf{M}_r(dp) $ and $B_r(dp):= \big(1- \mathcal{I}_{\mathbf{M}_r}(p)\big)\mathbf{M}_r (dp)$. The following gives us a lower bound for the second moment of the total mass of $(\Gamma, B_r)$: \begin{align*} R(r)\,=\,\upsilon_r\big(\mathbf{\widehat{G}} \big) \,=\,\mathbb{E}\big[\mathbf{M}_r\times \mathbf{M}_r\big( \mathbf{\widehat{G}} \big) \big]\,=\,\mathbb{E}\big[B_r\times B_r\big( \mathbf{\widehat{G}} \big) \big]\,\leq \,\mathbb{E}\big[B_r\times B_r\big(\Gamma\times\Gamma\big) \big] \,=\,\mathbb{E}\big[ |B_r(\Gamma)|^2 \big]\,. \end{align*} The first equality holds because $\upsilon_r=\mu\times \mu+R(r)\rho_r$, the probability measure $\rho_r$ assigns probability one to $\mathbf{\widehat{G}}$, and $\mu\times \mu(\mathbf{\widehat{G}} )=0$. The second equality is by property (II) of Theorem~\ref{ThmExistence}. The third equality above follows directly from the definition of $A_r$ since $$A_r\times \mathbf{M}_r\big( \mathbf{\widehat{G}} \big)\,=\,\int_{\Gamma} \mathcal{I}_{\mathbf{M}_r}(p)\mathbf{M}_r\big(\big\{q\in \Gamma\,|\,(p,q)\in \mathbf{\widehat{G}} \big\} \big) \mathbf{M}_r(dp)\,=\,\int_{\Gamma}\mathcal{I}_{\mathbf{M}_r}(p)\mathbf{M}_r(\mathbf{\widehat{s}}_{p} ) \mathbf{M}_r(dp)\,=\,0 \,, $$ and the same result holds for $\mathbf{M}_r\times A_r\big( \mathbf{\widehat{G}} \big)$.
Since $\mathbb{E}[\mathbf{M}_r]=\mu$ and $ \mu$ is a probability measure, the constants $\alpha_r:=\mathbb{E}[ A_r(\Gamma) ]$ and $\beta_r:=\mathbb{E}[ B_r(\Gamma) ]$ sum to $1$. The distributional recursive relation in (IV) of Theorem~\ref{ThmExistence} implies that $\alpha_r$ satisfies $\alpha_{r+1}=\alpha_r^b$ for all $r\in {\mathbb R}$ because two paths would need to have trivial intersections in all $b$ components of the concatenation decomposition to avoid having nontrivial intersections. Thus if $\alpha_r >0$ for some $r\in {\mathbb R}$ then $\alpha_{r-N}=\alpha_r^{b^{-N}}$ converges to 1 exponentially quickly as $N\rightarrow \infty$. The third moment of $\mathbf{M}_{r-N}(\Gamma)$ has the lower bound \begin{align} \mathbb{E}\big[ |\mathbf{M}_{r-N}(\Gamma)|^3 \big]\,\geq \,\mathbb{E}\big[ |B_{r-N}(\Gamma) |^3 \big]\,\geq\,\frac{ \big( \mathbb{E}\big[ |B_{r-N}(\Gamma)|^2 \big] \big)^2 }{ \mathbb{E}\big[ B_{r-N}(\Gamma) \big] }\,\geq \,\frac{ \big(R(r-N)\big)^2 }{ \beta_{r-N} }\,, \end{align} where the last inequality uses that $\mathbb{E}\big[ |B_{r-N}(\Gamma)|^2 \big] $ is bounded from below by $R(r-N)$. Since $R(r-N) \approx \frac{\kappa^2}{N} $ as $N\rightarrow \infty$ by Lemma~\ref{LemVar} and $ \beta_{r-N}=1-\alpha_{r-N}$ decays exponentially quickly by the observation above, the third moment of $\mathbf{M}_{r-N}(\Gamma)$ must grow without bound as $N\rightarrow \infty$, which contradicts (III) of Theorem~\ref{ThmExistence}. Therefore $\alpha_r=\mathbb{E}[ A_{r}(\Gamma) ]$ is zero for all $r\in {\mathbb R}$, and the set $\mathcal{S}_{\mathbf{M}_r}\subset \Gamma$ must a.s.\ have $\mathbf{M}_r$-measure zero. The same argument applies with $\mathbf{\widehat{s}}_{p}$ replaced by $\mathbf{s}_{p}$. \end{proof} \section{Proof of Theorem~\ref{ThmVartheta}}\label{SecLocality} \begin{proof}[Proof of Theorem~\ref{ThmVartheta}] Part (i): The symmetry of the model implies that the expectation of $\vartheta_{\mathbf{M}_r}$ must be a multiple $c>0$ of the uniform measure $(D,\nu)$. The expectation of the total mass of $\vartheta_{\mathbf{M}_r}$ is $$ \mathbb{E}\big[\vartheta_{\mathbf{M}_r}(D)\big]\,=\,\mathbb{E}\bigg[\int_{\Gamma\times \Gamma} T(p,q) \mathbf{M}_r(dp) \mathbf{M}_r(dq) \bigg]\,=\,\int_{\Gamma\times \Gamma} T(p,q) \upsilon_r(dp,dq)\,=\,R'(r)\,, $$ where the second equality follows from (ii) of Theorem~\ref{ThmPathMeasure}, and the third equality holds by differentiating (iv) of Proposition~\ref{PropLemCorrelate} at $a=0$. Therefore $c=R'(r)$. \vspace{.3cm} \noindent Part (ii): Suppose, to reach a contradiction, that with positive probability there exist $A\in \mathcal{B}_{D}$ and $\frak{h}\in [0,2)$ such that $\textup{dim}_{H}(A)=\frak{h}$ and $\vartheta_{\mathbf{M}_r}(A)>0$. For any $\alpha\in (\frak{h},2)$ the energy defined by $$ \hat{Q}_{\alpha}(\vartheta_{\mathbf{M}_r})\,=\,\int_{D\times D} \frac{1}{\big(d_{D}(x,y) \big)^{\alpha}} \vartheta_{\mathbf{M}_r}(dx)\vartheta_{\mathbf{M}_r}(dy) $$ must be infinite. This, however, contradicts part (iv) below, which shows that the analogous energy remains finite when the dimension function $x^{\alpha}$ is replaced by the generalized dimension function $h_\lambda(x)=x^2\big(\log(1/x)\big)^{-\lambda}$ with $\lambda>9$; note that $h_\lambda$ decays faster as $x\searrow 0$ than $x^{\alpha}$ for any fixed $\alpha<2$.
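As a numerical aside (not part of the proof), the comparison between $h_\lambda$ and $x^{\alpha}$ just used can be checked directly, since $h_\lambda(x)/x^{\alpha}=x^{2-\alpha}\log(1/x)^{-\lambda}\rightarrow 0$ as $x\searrow 0$. In the Python sketch below, the values $\lambda=10$ and $\alpha=1.99$ are arbitrary sample choices:
\begin{verbatim}
import numpy as np

# h_lam(x) = x^2 * log(1/x)^(-lam) versus x^alpha with alpha < 2;
# lam = 10 and alpha = 1.99 are sample parameters, not fixed by the text.
lam, alpha = 10.0, 1.99
for x in [1e-2, 1e-4, 1e-8, 1e-16]:
    ratio = x ** (2.0 - alpha) * np.log(1.0 / x) ** (-lam)
    print(x, ratio)  # tends to 0, so h_lam decays faster than x^alpha
\end{verbatim}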
\vspace{.3cm} \noindent Part (iii): For $n\in \mathbb{N}$, we can generalize Remark~\ref{RemarkVarthetaSymm} to write $\vartheta_{\mathbf{M}_r}$ in the form \begin{align} \vartheta_{\mathbf{M}_r}\,=\,\bigoplus_{\mathbf{e}\in E_n}\frac{1}{b^{2n}}\bigg(\prod_{k=1}^n\prod_{e\in E_k^{\updownarrow\mathbf{e}}}\mathbf{M}_{r-k}^{e}(\Gamma) \bigg)^2\vartheta_{\mathbf{M}_{r-n}^\mathbf{e}}\hspace{.4cm}\text{through the identification} \hspace{.3cm}D\equiv \bigcup_{e\in E_n}D_e\,, \end{align} where the spaces $D_e$ are copies of $D$ and the measures $\big(\Gamma,\mathbf{M}_{r-k}^{e} \big)$ are interpreted as in Corollary~\ref{CorProp4}. For $x,y\in E$ we write $x\updownarrow y$ if there is a path passing through both $x$ and $y$ and $x\diamond y$ otherwise. \vspace{.2cm} \noindent \textbf{Case $\mathbf{x\diamond y}$:} Suppose the points $x,y\in E$ satisfy $\frak{g}_D(x,y)=n$ and $x\diamond y$. Then there exist $\mathbf{e}, \mathbf{f}\in E_n$ such that $x\in \mathbf{e}$ and $y\in \mathbf{f}$, and we can write \begin{align} \mathbb{E}\big[\vartheta_{\mathbf{M}_r}(dx)&\vartheta_{\mathbf{M}_r}(dy) \big]\nonumber \\ \,=\,&\frac{1}{b^{4n}}\mathbb{E}\big[ \vartheta_{\mathbf{M}_{r-n} }(d\langle x\rangle_{\mathbf{e}}) \big]\mathbb{E}\big[ \vartheta_{\mathbf{M}_{r-n} }(d\langle y\rangle_{\mathbf{f}}) \big] \prod_{k=1}^{n-1}\mathbb{E}\Big[ \big( \mathbf{M}_{r-k}(\Gamma) \big)^4\Big]^{b-1}\label{Furr} \,, \intertext{where $\langle x\rangle_{\mathbf{e}},\langle y\rangle_{\mathbf{f}}\in D$ are the dilated positions of $x$ and $y$ in the embedded subcopies of the DHL corresponding to $\mathbf{e}$ and $\mathbf{f}$, respectively. By writing $ \mathbf{M}_{r-k}(\Gamma)=1+\big( \mathbf{M}_{r-k}(\Gamma)-1\big)$ and foiling inside the expectations, we have } \,=\,&\frac{1}{b^{4n}}\nu(d\langle x\rangle_{\mathbf{e}}) \nu(d\langle y\rangle_{\mathbf{f}})\big( R'(r-n)\big)^2 \prod_{k=1}^{n-1}\Big(1\,+\,6R(r-k)\,+\,4R^{(3)}(r-k)\,+\, R^{(4)}(r-k) \Big)^{b-1}\,. \nonumber \intertext{Since $\nu(d\langle x\rangle_{\mathbf{e}})$ and $\nu(d\langle y\rangle_{\mathbf{f}})$ are dilations of $\nu$, we can absorb the factor $b^{-4n}$ to write } \nonumber \,=\,&\nu(d x) \nu(d y) \underbrace{\big( R'(r-n)\big)^2}_{ \sim \,\frac{\kappa^4}{n^4} } \textup{exp}\Bigg\{ \underbrace{(b-1)\sum_{k=1}^{n-1} \Big(6R(r-k)+4R^{(3)}(r-k)+R^{(4)}(r-k) \Big)}_{ =\, 12\log n \,+\,\mathit{O}(1) } \Bigg\} \\ \,\stackrel{n\gg 1}{\sim}\,& \mathbf{c}n^8\nu(d x) \nu(d y)\,.\label{egret} \end{align} The underbraced asymptotics holds since for $-t\gg 1$ \begin{align}\label{Rstuff} R'(t)=\frac{\kappa^2}{t^2}\big(1+\mathit{o}(1)\big)\,,\hspace{.5cm} R(t)=\frac{ \kappa^2 }{ -t }+\mathit{O}\Big( \frac{ \log(-t) }{ t^2 }\Big)\,,\hspace{.5cm} R^{(m)}(t)=\mathit{O}\Big( (-t)^{-\lceil m/2\rceil} \Big)\,, \end{align} by Remark~\ref{RemarkDerR}, Lemma~\ref{LemVar}, and (III) of Theorem~\ref{ThmPathMeasure}, where $\kappa^2:=\frac{2}{b-1}$. The terms $(b-1)6R(r-k)\approx \frac{12}{k}$ are roughly a multiple of the harmonic series when $k\gg 1$, which is the source of the $12\log n$ above.
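As an aside, the asymptotics~(\ref{Rstuff}) can be probed numerically by iterating the recursion $R(r-n)=\frac{1}{b}\big[\big(1+R(r-n-1)\big)^b-1 \big]$ that appeared in part (d) of the proof of Proposition~\ref{PropFrostman}. The Python sketch below (an illustration only; the branching number $b=3$, the depth $N=10^6$ and the asymptotic seed are arbitrary sample choices) confirms that $n\cdot R(r-n)$ stays near $\kappa^2=\frac{2}{b-1}$, up to a slowly varying logarithmic correction:
\begin{verbatim}
# Iterate R(r-n) = ((1 + R(r-n-1))^b - 1)/b downward from an asymptotic
# seed R(r-N) = kappa^2/N and track n*R(r-n); sample parameters only.
b = 3
kappa2 = 2.0 / (b - 1)
N = 10**6
rho = kappa2 / N                      # seed value for R(r-N)
for n in range(N - 1, 99, -1):
    rho = ((1.0 + rho) ** b - 1.0) / b
    if n in (10**5, 10**4, 10**3, 10**2):
        print(n, n * rho)             # remains near kappa2 = 1.0
\end{verbatim}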
Combining the above asymptotics, (\ref{egret}) therefore holds for some constant $\mathbf{c}>0$.\vspace{.2cm} \noindent \textbf{Case $\mathbf{x\updownarrow y}$:} The analysis when $\frak{g}_D(x,y)=n$ and $x\updownarrow y$ is trickier because the analog of~(\ref{Furr}) is \begin{align} \mathbb{E}\big[\vartheta_{\mathbf{M}_r}(dx)\vartheta_{\mathbf{M}_r}(dy) \big] \,=\,&\frac{1}{b^{4n}}\underbracket{\mathbb{E}\big[\big(\mathbf{M}_{r-n}^{\mathbf{e}}(\Gamma) \big)^2 \vartheta_{\mathbf{M}_{r-n} }(d\langle x\rangle_{\mathbf{e}}) \big]}\underbracket{\mathbb{E}\big[ \big(\mathbf{M}_{r-n}^{\mathbf{f}}(\Gamma) \big)^2 \vartheta_{\mathbf{M}_{r-n} }(d\langle y\rangle_{\mathbf{f}}) \big]} \nonumber \\ &\hspace{.2cm} \times \mathbb{E}\Big[ \big( \mathbf{M}_{r-n}(\Gamma) \big)^4\Big]^{b-2} \prod_{k=1}^{n-1}\mathbb{E}\Big[ \big( \mathbf{M}_{r-k}(\Gamma) \big)^4\Big]^{b-1}\,.\label{Furr2} \end{align} Unlike the $x\diamond y$ case, the bracketed terms involve a correlation with the square of the total mass of $\mathbf{M}_{r-n}$. By the symmetry of the model, the expectation of the measures $ \mathbf{M}_r(\Gamma) \vartheta_{\mathbf{M}_r}(dx) $ and $\big(\mathbf{M}_r(\Gamma)\big)^2 \vartheta_{\mathbf{M}_r}(dx) $ must be constant multiples $A_r,B_r>0$ of the uniform measure $\nu$, i.e., \begin{align} \mathbb{E}\big[ \mathbf{M}_r(\Gamma) \vartheta_{\mathbf{M}_r}(dx) \big] \,=\,A_r \nu(dx)\hspace{1cm}\text{and}\hspace{1cm}\mathbb{E}\big[ \big(\mathbf{M}_r(\Gamma)\big)^2 \vartheta_{\mathbf{M}_r}(dx) \big] \,=\,B_r \nu(dx)\,. \end{align} We will first use the hierarchical symmetry of the model to derive a closed expression for $A_{r}$. With Corollary~\ref{CorProp4} and Remark~\ref{RemarkVarthetaSymm}, we can write \begin{align} A_r \nu(dx) \,=\,&\mathbb{E}\big[ \mathbf{M}_r(\Gamma) \vartheta_{\mathbf{M}_r}(dx) \big] \nonumber \,\\ \,=\,&\mathbb{E}\Bigg[\Bigg(\frac{1}{b}\sum_{1\leq i\leq b} \prod_{1\leq j \leq b}\mathbf{M}_{r-1}^{(i,j)} (\Gamma)\Bigg)\Bigg(\frac{1}{b^2} \sum_{1\leq I,J \leq b}\bigg( \prod_{\ell\neq J}\mathbf{M}_{r-1}^{(I,\ell)}(\Gamma) \bigg)^2\vartheta_{\mathbf{M}_{r-1}}^{(I,J)}(dx)\Bigg) \Bigg]\,.\label{expand} \intertext{By foiling the sums we get two types of terms corresponding to whether $i\neq I$ or $i=I$, respectively.} \,=\,& \frac{b-1}{b}\mathbb{E}\Big[\big( \mathbf{M}_{r-1}(\Gamma) \big)^2 \Big]^{b-1}\mathbb{E}\big[\vartheta_{\mathbf{M}_{r-1}}(dx) \big] \,+\,\frac{1}{b}\mathbb{E}\Big[\big( \mathbf{M}_{r-1}(\Gamma) \big)^3 \Big]^{b-1}\mathbb{E}\big[\mathbf{M}_{r-1}(\Gamma) \vartheta_{\mathbf{M}_{r-1}}(dx) \big] \nonumber \\ \,=\,& \frac{b-1}{b}\big(1+R(r-1)\big)^{b-1}R'(r-1)\nu(dx) \,+\,\frac{1}{b}\mathbb{E}\Big[\big( \mathbf{M}_{r-1}(\Gamma) \big)^3 \Big]^{b-1}A_{r-1}\nu(dx)\,. \end{align} Thus $A_{r}$ can be expressed through the series form \begin{align*} A_{r}\,=\,&\frac{b-1}{b}\sum_{k=1}^\infty\big(1+R(r-k)\big)^{b-1}R'(r-k)\frac{1}{b^{k-1}}\prod_{\ell=1}^{k-1}\mathbb{E}\Big[\big( \mathbf{M}_{r-\ell}(\Gamma) \big)^3 \Big]^{b-1} \,.\nonumber \intertext{Since the third moment of $ \mathbf{M}_{r-\ell}(\Gamma) $ can be written in terms of its centered moments as $1+3R(r-\ell)+R^{(3)}(r-\ell) $, applying the asymptotics~(\ref{Rstuff}) when $-r\gg 1$ yields } \,\sim\,&\frac{b-1}{b}\sum_{k=1}^\infty \frac{\kappa^2}{r^2}\frac{1}{b^{k-1}}\,=\,\frac{\kappa^2}{r^2}\,.\nonumber \end{align*} A similar analysis that begins by expanding $\mathbb{E}\big[ \big(\mathbf{M}_r(\Gamma)\big)^2 \vartheta_{\mathbf{M}_r}(dx) \big]$ as in~(\ref{expand}) leads to an analogous recursion relation for $B_r$ that depends on $A_r$.
The resulting series representation for $B_r$ again yields $B_{r}\sim \frac{\kappa^2}{r^2}$ for $-r\gg 1$. Thus~(\ref{Furr2}) is asymptotically equivalent to~(\ref{Furr}). \vspace{.3cm} \noindent Part (iv): By an argument similar to the one in the proof of Proposition~\ref{PropFrostman}, it suffices to work with a modified version of $Q_\lambda $ having a form that fits the hierarchical structure of the model: \begin{align*} \widetilde{Q}_\lambda (\vartheta_{\mathbf{M}_r})\,:=\, \int_{D\times D} \frac{b^{2\frak{g}_D(x,y)}}{ \big( \frak{g}_D(x,y) \big)^{\lambda}}\vartheta_{\mathbf{M}_r}(dx)\vartheta_{\mathbf{M}_r}(dy)\,, \end{align*} where $\frak{g}_D(x,y)$ is defined as in part (iii). Define $U_n^{\updownarrow }:=\{(x,y)\in E\times E\,|\, \frak{g}_D(x,y)=n \text{ and }x\updownarrow y \} $ and $U_n^{\diamond } $ analogously. The expectation of $\widetilde{Q}_\lambda(\vartheta_{\mathbf{M}_r})$ can be written as \begin{align*} \mathbb{E}\big[\widetilde{Q}_\lambda(\vartheta_{\mathbf{M}_r})\big]\,=\,& \sum_{n=1}^\infty \mathbb{E}\Bigg[\int_{U_n^{\updownarrow }} \frac{b^{2\frak{g}_D(x,y)}}{ \big( \frak{g}_D(x,y) \big)^{\lambda}}\vartheta_{\mathbf{M}_r}(dx)\vartheta_{\mathbf{M}_r}(dy)\Bigg]\,+\,\sum_{n=1}^\infty \mathbb{E}\Bigg[\int_{U_n^{\diamond }} \frac{b^{2\frak{g}_D(x,y)}}{ \big( \frak{g}_D(x,y) \big)^{\lambda}}\vartheta_{\mathbf{M}_r}(dx)\vartheta_{\mathbf{M}_r}(dy)\Bigg]\\ \,=\,& \sum_{n=1}^\infty \frac{b^{2n}}{ n^{\lambda}}\mathbb{E}\big[ \vartheta_{\mathbf{M}_r}\times \vartheta_{\mathbf{M}_r}( U_n^{\updownarrow }) \big]\,+\,\sum_{n=1}^\infty \frac{b^{2n}}{ n^{\lambda}}\mathbb{E}\big[ \vartheta_{\mathbf{M}_r}\times \vartheta_{\mathbf{M}_r}( U_n^{\diamond }) \big]\,. \intertext{The correlation function $C_r(x,y)$ is constant over the sets $U_n^{\updownarrow }$ and $U_n^{\diamond }$, i.e., there are constants $C_{r,n}^{\updownarrow},C_{r,n}^{\diamond}>0$ such that $C_r(x,y)=C_{r,n}^{\updownarrow}$ for $(x,y)\in U_n^{\updownarrow }$ and $C_r(x,y)=C_{r,n}^{\diamond}$ for $(x,y)\in U_n^{\diamond }$. Thus we have } \,=\,& \sum_{n=1}^\infty \frac{b^{2n}}{ n^{\lambda}}C_{r,n}^{\updownarrow} \nu\times \nu\big( U_n^{\updownarrow }) \,+\,\sum_{n=1}^\infty \frac{b^{2n}}{ n^{\lambda}}C_{r,n}^{\diamond} \nu\times \nu\big( U_n^{\diamond })\,=\,\sum_{n=1}^\infty \bigg(\frac{b(b-1)}{ n^{\lambda}}C_{r,n}^{\updownarrow} \,+\, \frac{b-1}{ n^{\lambda}}C_{r,n}^{\diamond} \bigg)\,, \end{align*} where the last equality holds because $\nu\times \nu\big( U_n^{\updownarrow })=\frac{b-1}{b^{2n-1}} $ and $\nu\times \nu\big( U_n^{\diamond })=\frac{b-1}{b^{2n}} $. The constants $C_{r,n}^{\updownarrow}$ and $C_{r,n}^{\diamond} $ are asymptotically proportional to $n^8$ for $n\gg 1$ by part (iii), and therefore the above series converge iff $\lambda>9$. \end{proof} \section{Proofs of results from Section~\ref{SectionHilbert}}\label{SectionLast} \begin{proof}[Proof of Proposition~\ref{PropDecomp}] Define the measures $\Phi_{\mathbf{M}_r}$ and $\Psi_{\mathbf{M}_r}$ on $ D\times \Gamma\times \Gamma$ by $$\Phi_{\mathbf{M}_r}(dx,dp,dq):= \gamma_{p,q}(dx)\mathbf{M}_r(dp)\mathbf{M}_r(dq) \hspace{.5cm} \text{and} \hspace{.5cm} \Psi_{ \mathbf{M}_r }(dx,dp,dq):=\Theta_{ \mathbf{M}_r }^{\updownarrow x}(dp)\Theta_{ \mathbf{M}_r }^{\updownarrow x}(dq)\vartheta_{ \mathbf{M}_r }(dx)\,,$$ which both assign full measure to the set of triples $(x,p,q)\in D\times \Gamma\times \Gamma$ such that $x\in \textup{Range}(p)\cap \textup{Range}(q)$.
The total masses of the measures $\Phi_{\mathbf{M}_r}$ and $\Psi_{\mathbf{M}_r}$ agree since $\Theta_{ \mathbf{M}_r }^{\updownarrow x}$ is a probability measure and $\gamma_{p,q}$ has total mass $T(p,q)$: \begin{align}\label{TotMass} \Psi_{\mathbf{M}_r}\big(D\times \Gamma\times \Gamma \big)\,=\,\vartheta_{ \mathbf{M}_r }(D)\,=\,\int_{\Gamma\times \Gamma}T(p,q)\mathbf{M}_r (dp)\mathbf{M}_r (dq)\,=\,\Phi_{\mathbf{M}_r}\big(D\times \Gamma\times \Gamma \big)\,, \end{align} where the second equality holds by Remark~\ref{RemarkVarthetaMass}. Our proof will leverage~(\ref{TotMass}) using the hierarchical symmetry of the model. Notice that $\Phi_{\mathbf{M}_r}$ a.s.\ assigns $V\times \Gamma\times \Gamma$ measure zero since $V$ is countable and $\mathbf{M}_r\times \mathbf{M}_r$ a.s.\ assigns full measure to pairs $(p,q)$ such that $\gamma_{p,q}:=\tau_{p,q}\circ p^{-1}$ has no atoms as a consequence of Proposition~\ref{PropIntMeasure}. Similarly, part (ii) of Theorem~\ref{ThmVartheta} implies that $\Psi_{\mathbf{M}_r}(V\times \Gamma\times \Gamma)=0$ holds a.s. Thus we can focus on the restrictions of $\Phi_{\mathbf{M}_r}$ and $\Psi_{\mathbf{M}_r}$ to the space $\Upsilon :=E\times \Gamma\times \Gamma$. The Borel $\sigma$-algebra $\mathcal{B}_{\Upsilon}$ is generated by the algebra of cylinder sets $\mathcal{A}_{\Upsilon}:=\cup_{n=0}^{\infty}\mathcal{P}(E_n)\otimes \mathcal{P}(\Gamma_n)\otimes \mathcal{P}(\Gamma_n)$, so it suffices to show that $\Phi_{\mathbf{M}_r}(\mathbf{e}\times \mathbf{p}\times\mathbf{q})= \Psi_{\mathbf{M}_r}(\mathbf{e}\times \mathbf{p}\times\mathbf{q})$ for every $\mathbf{e}\in E_n$ and $\mathbf{p},\mathbf{q}\in \Gamma_n$. When the edge $\mathbf{e}$ does not lie at an intersection between the coarse-grained paths $\mathbf{p}$ and $\mathbf{q}$, we already know that $\mathbf{e}\times \mathbf{p}\times\mathbf{q}$ has measure zero under both $\Phi_{\mathbf{M}_r}$ and $\Psi_{\mathbf{M}_r}$, so we will focus on the case when $\mathbf{e}\in \textup{Range}(\mathbf{p})\cap \textup{Range}(\mathbf{q})$. Given $n\in \mathbb{N}$ and $e\in E_n$, let the family of measures $(\Gamma, \mathbf{M}^e_{r-n})$ be defined in relation to $\mathbf{M}_{r}$ as in Corollary~\ref{CorProp4}. If $\mathbf{p},\mathbf{q}\in \Gamma_n$ and $\mathbf{e}\in \textup{Range}(\mathbf{p})\cap \textup{Range}(\mathbf{q})$, then by Remark~\ref{RemarkProp4} \begin{align}\label{here} \Phi_{\mathbf{M}_r}(\mathbf{e}\times \mathbf{p}\times\mathbf{q})\,=\,\frac{1}{|\Gamma_n|^2} \bigg(\prod_{\substack{e \triangleleft \mathbf{p}\\ e\neq \mathbf{e} } }\mathbf{M}^e_{r-n}(\Gamma)\bigg)\bigg(\prod_{\substack{e \triangleleft \mathbf{q}\\ e\neq \mathbf{e} } }\mathbf{M}^e_{r-n}(\Gamma)\bigg)\underbrace{\int_{\Gamma\times \Gamma} T(p,q) \mathbf{M}^\mathbf{e}_{r-n}(dp) \mathbf{M}^\mathbf{e}_{r-n}(dq)}_{ \Phi_{\mathbf{M}_r^{\mathbf{e}}}(D\times \Gamma\times\Gamma) } \,. \end{align} For $x\in \mathbf{e}$, let $\langle x\rangle_{\mathbf{e}}\in E$ denote the corresponding point within the embedded copy of the space $D$ identified with $ \mathbf{e}$.
We have the following decompositions of the measures $\vartheta_{ \mathbf{M}_r }$ and $\Theta_{ \mathbf{M}_r }^{\updownarrow x}$ when $x\in \mathbf{e}$: \begin{align}\label{var} \vartheta_{ \mathbf{M}_r }(dx)\,=\,&\frac{1}{b^{2n}}\Bigg( \prod_{k=1}^{n}\prod_{\hat{e}\in E_k^{\updownarrow x} } \mathbf{M}_{r-k}^{\hat{e}}( \Gamma)\Bigg)^2 \vartheta_{ \mathbf{M}_r^{\mathbf{e}} }(d\langle x\rangle_\mathbf{e})\,,\,\text{and} \\ \label{the} \Theta_{ \mathbf{M}_r }^{\updownarrow x}(dp)\,=\,&\Bigg(\prod_{k=1}^{n}\prod_{\hat{e}\in E_k^{\updownarrow x} }\frac{1}{\mathbf{M}_{r-k}^{\hat{e}}(\Gamma) } \mathbf{M}_{r-k}^{\hat{e}}( dp_{\hat{e}})\Bigg)\Theta_{ \mathbf{M}_r^{\mathbf{e}} }^{\updownarrow \langle x\rangle_{\mathbf{e}}}(dp_{\mathbf{e}})\,, \end{align} where the sets $ E_k^{\updownarrow x} $ and the dilated paths $p_{\hat{e}}\in \Gamma$ are defined as in Remark~\ref{RemarkDecomp}. Plugging in the forms~(\ref{var}) and~(\ref{the}) results in a cancellation of the factors $\mathbf{M}_{r-k}^{\hat{e}}(\Gamma) $, yielding the second equality below: \begin{align*} \Psi_{\mathbf{M}_r}(\mathbf{e}\times &\mathbf{p}\times\mathbf{q})\\ \,= \,& \int_{\mathbf{e}\times \mathbf{p}\times\mathbf{q}} \Theta_{ \mathbf{M}_r }^{\updownarrow x}(dp) \Theta_{ \mathbf{M}_r }^{\updownarrow x}(dq) \vartheta_{ \mathbf{M}_r }(dx) \\ \,=\,&\frac{1}{b^{2n}} \Bigg(\prod_{k=1}^{n}\prod_{\hat{e}\in E_k^{\updownarrow\mathbf{e}} }\mathbf{M}_{r-k}^{\hat{e}}( \mathbf{p}_{\hat{e}})\Bigg)\Bigg(\prod_{k=1}^{n}\prod_{\hat{e}\in E_k^{\updownarrow\mathbf{e}} }\mathbf{M}_{r-k}^{\hat{e}}( \mathbf{q}_{\hat{e}})\Bigg) \int_{E\times \Gamma\times \Gamma} \Theta_{ \mathbf{M}_r^{\mathbf{e}} }^{\updownarrow x}(dp) \Theta_{ \mathbf{M}_r^{\mathbf{e}} }^{\updownarrow x}(dq) \vartheta_{ \mathbf{M}_r^{\mathbf{e}} }(dx) \,, \intertext{where we make the interpretations $\mathbf{p}_{\hat{e}}:=\{p_{\hat{e}}\in \Gamma \,|\,p\in \mathbf{p}\} $ and $E_k^{\updownarrow\mathbf{e}}:= E_k^{\updownarrow x} $ for any representative $x\in \mathbf{e}$. Note that if $\hat{e}\in E_k^{\updownarrow\mathbf{e}}$, then $\mathbf{p}_{\hat{e}}\subset \Gamma$ is a generation-$(n-k)$ coarse-grained path through the embedded copy of the DHL corresponding to $\hat{e}$. By applying (iii) of Corollary~\ref{CorProp4} to each term $\mathbf{M}_{r-k}^{\hat{e}}( \mathbf{q}_{\hat{e}})$, we get the formula $\mathbf{M}_{r-k}^{\hat{e}}( \mathbf{q}_{\hat{e}})=\frac{1}{|\Gamma_{n-k}|}\prod_{\substack{e\triangleleft \mathbf{q}\\ e \subset \hat{e} } }\mathbf{M}^e_{r-n}(\Gamma)$, and similarly for the terms $\mathbf{M}_{r-k}^{\hat{e}}( \mathbf{p}_{\hat{e}})$, so the above is equal to } \,=\,&\frac{1}{b^{2n}}\bigg(\prod_{k=1}^n \frac{1}{|\Gamma_{n-k}| } \bigg)^{2(b-1)} \Bigg(\prod_{\substack{e \triangleleft \mathbf{p}\\ e\neq \mathbf{e} } }\mathbf{M}^e_{r-n}(\Gamma)\Bigg)\Bigg(\prod_{\substack{e \triangleleft \mathbf{q}\\ e\neq \mathbf{e} } }\mathbf{M}^e_{r-n}(\Gamma)\Bigg)\Psi_{\mathbf{M}_r^{\mathbf{e}}}(E\times \Gamma\times\Gamma)\,. \intertext{The formula $|\Gamma_k|=b^{\frac{b^k -1 }{b-1}}$ implies the identity $|\Gamma_n|=b^n \prod_{k=1}^{n}|\Gamma_{k-1}|^{b-1} $, so we finally get } \,=\,&\frac{1}{|\Gamma_n|^2} \Bigg(\prod_{\substack{e \triangleleft \mathbf{p}\\ e\neq \mathbf{e} } }\mathbf{M}^e_{r-n}(\Gamma)\Bigg)\Bigg(\prod_{\substack{e \triangleleft \mathbf{q}\\ e\neq \mathbf{e} } }\mathbf{M}^e_{r-n}(\Gamma)\Bigg)\Psi_{\mathbf{M}_r^{\mathbf{e}}}(E\times \Gamma\times\Gamma)\,, \end{align*} which agrees with~(\ref{here}) by~(\ref{TotMass}) since $ E\times \Gamma\times\Gamma$ has full measure under $\Psi_{\mathbf{M}_r^{\mathbf{e}}}$.
Therefore the measures $\Phi_{\mathbf{M}_r}$ and $\Psi_{\mathbf{M}_r}$ are a.s.\ equal. \end{proof} \begin{proof}[Proof of Theorem~\ref{ThmOperator}] Part (i): The linear operator $\mathbf{T}_{\mathbf{M}_r}$ is Hilbert-Schmidt iff its kernel is in $L^2\big(\Gamma\times \Gamma, \mathbf{M}_r\times \mathbf{M}_r\big) $, i.e., $$\int_{\Gamma\times \Gamma}\big(T(p,q)\big)^2 \mathbf{M}_r(dp)\mathbf{M}_r(dq)\,<\,\infty \,.$$ However, this holds for a.e.\ realization of $\mathbf{M}_r$ as a consequence of part (ii) of Theorem~\ref{ThmPathMeasure}. Moreover, the kernel $T(p,q)$ is infinite along its diagonal (which follows easily from its definition in (i) of Proposition~\ref{PropLemCorrelate}), and thus $\mathbf{T}_{\mathbf{M}_r}$ is not trace class. \vspace{.3cm} \noindent Part (ii): For $f\in L^2(\Gamma,\mathbf{M}_r)$ and $g\in L^2(D,\vartheta_{\mathbf{M}_r})$, notice that Proposition~\ref{PropDecomp} implies that \begin{align}\label{ForY} \Big(\int_{D\times \Gamma}g(x)\gamma_{p,q}(dx)\mathbf{M}_r(dq)\Big)\mathbf{M}_r(dp)\,=\,& \int_{D}g(x)\Theta^{\updownarrow x}_{ \mathbf{M}_r }(dp) \vartheta_{\mathbf{M}_r}(dx) \intertext{and} \int_{\Gamma\times \Gamma}f(p)\gamma_{p,q}(dx)\mathbf{M}_r(dp)\mathbf{M}_r(dq)\,=\,& \int_{\Gamma}f(p)\Theta^{\updownarrow x}_{ \mathbf{M}_r }(dp) \vartheta_{\mathbf{M}_r}(dx) \,.\label{ForY*} \end{align} In particular,~(\ref{ForY}) implies that $\int_{D}g(x)\Theta^{\updownarrow x}_{ \mathbf{M}_r }(dp) \vartheta_{\mathbf{M}_r}(dx)$ is absolutely continuous with respect to $\mathbf{M}_r(dp)$, which gives us an alternative representation of the operator $\hat{Y}_{\mathbf{M}_r}$ as \begin{align}\label{YRep} (\hat{Y}_{\mathbf{M}_r}g)(p)\,:=\, \int_{D\times \Gamma}g(x)\gamma_{p,q}(dx)\mathbf{M}_r(dq) \,=\,\frac{ \int_{D}g(x)\Theta^{\updownarrow x}_{ \mathbf{M}_r }(dp) \vartheta_{\mathbf{M}_r}(dx) }{ \mathbf{M}_r(dp) }\,. \end{align} Similarly,~(\ref{ForY*}) implies that the measure $\int_{\Gamma\times \Gamma}f(p)\gamma_{p,q}(dx)\mathbf{M}_r(dp)\mathbf{M}_r(dq)$ is absolutely continuous with respect to $\vartheta_{\mathbf{M}_r}(dx)$ with Radon-Nikodym derivative equal to $\int_{\Gamma}f(p)\Theta^{\updownarrow x}_{ \mathbf{M}_r }(dp)$. The adjoint of $\hat{Y}_{\mathbf{M}_r}$ has the form $(\hat{Y}_{\mathbf{M}_r}^*f)(x)= \int_{\Gamma}f(p)\Theta^{\updownarrow x}_{ \mathbf{M}_r }(dp) $ by the calculation below. \begin{align*} \big\langle f \big|\, \hat{Y}_{\mathbf{M}_r}g \big\rangle_{L^2(\Gamma,\mathbf{M}_r) }\,=\, & \int_{\Gamma}f(p)(\hat{Y}_{\mathbf{M}_r}g)(p)\mathbf{M}_r(dp)\\ \,=\,&\int_{\Gamma}f(p)\bigg(\int_{D\times \Gamma}g(x)\gamma_{p,q}(dx) \mathbf{M}_r(dq)\bigg)\mathbf{M}_r(dp) \intertext{In the above, we used the definition of $\hat{Y}_{\mathbf{M}_r}$. Rearranging the integration and applying~(\ref{ForY*}) yields } \,=\,&\int_{D}\bigg(\frac{\int_{\Gamma\times \Gamma}f(p)\gamma_{p,q}(dx) \mathbf{M}_r(dq)\mathbf{M}_r(dp)}{ \vartheta_{\mathbf{M}_r}(dx) }\bigg)g(x)\vartheta_{\mathbf{M}_r}(dx)\\ \,=\,&\int_{D}\bigg(\int_{\Gamma}f(p)\Theta^{\updownarrow x}_{ \mathbf{M}_r }(dp)\bigg)g(x)\vartheta_{\mathbf{M}_r}(dx) \,=\,\big\langle \hat{Y}_{\mathbf{M}_r}^* f \big|\,g \big\rangle_{L^2(D,\vartheta_{\mathbf{M}_r}) }\,. \end{align*} Now we can show that $\hat{Y}_{\mathbf{M}_r}\hat{Y}_{\mathbf{M}_r}^*$ has integral kernel $T(p,q)$. 
Applying~(\ref{YRep}) and the formula $(\hat{Y}_{\mathbf{M}_r}^*f)(x)= \int_{\Gamma}f(p)\Theta^{\updownarrow x}_{ \mathbf{M}_r }(dp) $, we can write $\hat{Y}_{\mathbf{M}_r}\hat{Y}_{\mathbf{M}_r}^*f$ in the form \begin{align*} \big(\hat{Y}_{\mathbf{M}_r}\hat{Y}_{\mathbf{M}_r}^*f\big)(p)\,=\,\frac{ \int_{D}\big(\int_{\Gamma}f(q)\Theta^{\updownarrow x}_{ \mathbf{M}_r }(dq) \big) \Theta^{\updownarrow x}_{ \mathbf{M}_r }(dp) \vartheta_{\mathbf{M}_r}(dx) }{ \mathbf{M}_r(dp) }\,=\,&\int_{D\times \Gamma}\gamma_{p,q}(dx)f(q)\mathbf{M}_r(dq) \\ \,=\,&\int_{ \Gamma}T(p,q)f(q)\mathbf{M}_r(dq)\,, \end{align*} where the second equality is by Proposition~\ref{PropDecomp}, and the last equality uses that $(D,\gamma_{p,q})$ has total mass $T(p,q)$. The operator $\hat{Y}_{\mathbf{M}_r}$ must be compact since $\hat{Y}_{\mathbf{M}_r}\hat{Y}_{\mathbf{M}_r}^*$ is Hilbert-Schmidt.\vspace{.3cm} \end{proof} \begin{proof}[Proof of Lemma~\ref{LemApprox}] The operator $Y^{(n)}_{\mathbf{M}_r }$ can be written as $Y^{(n)}_{\mathbf{M}_r }=\mathbf{P}_{\mathbf{M}_r }^{(n)}Y_{\mathbf{M}_r }$ for the orthogonal projection $\mathbf{P}_{\mathbf{M}_r }^{(n)}:L^2(\Gamma,\mathbf{M}_r)\rightarrow L^2(\Gamma,\mathbf{M}_r)$ defined by generation-$n$ coarse-graining $ \big(\mathbf{P}_{\mathbf{M}_r }^{(n)}f\big)(p)=\frac{1}{\mathbf{M}_r([p]_n) } \int_{[p]_n}f(\hat{p})\mathbf{M}_r(d\hat{p}) $. The operator $Y^{(n)}_{\mathbf{M}_r }$ converges in operator norm to $Y_{\mathbf{M}_r }$ as $n\rightarrow \infty$ since $\mathbf{P}_{\mathbf{M}_r }^{(n)}$ converges strongly to the identity operator on $L^2(\Gamma,\mathbf{M}_r)$ and $Y_{\mathbf{M}_r }$ is compact. The kernel of $\mathbf{T}_{\mathbf{M}_r}^{(n)}$ has the form $$\mathbf{T}_{\mathbf{M}_r}^{(n)}(p,q)\,=\,\frac{1}{\mathbf{M}_r([p]_n)\mathbf{M}_r([q]_n) }\int_{[p]_n\times [q]_n } T(\hat{p},\hat{q}) \mathbf{M}_r(d\hat{p})\mathbf{M}_r(d\hat{q})\,=\,\mathbbmss{E}_{\mathbf{M}_r\times \mathbf{M}_r}\big[T(p,q) \,|\,\mathcal{F}_n \big]\,,$$ where the $\sigma$-algebra $\mathcal{F}_n=\mathcal{P}(\Gamma_n)\otimes \mathcal{P}(\Gamma_n)$ is defined as in Definition~\ref{DefSigmaAlgebra}. By Jensen's inequality the exponential moments of $\mathbf{T}_{\mathbf{M}_r}^{(n)}(p,q)$ are bounded by the exponential moments of $T(p,q) $, which are finite. The sequence $\big\{ \mathbf{T}_{\mathbf{M}_r}^{(n)}(p,q) \big\}_{n\in \mathbb{N}}$ is a martingale with respect to $\mathcal{F}_n$ that converges $\mathbf{M}_r\times \mathbf{M}_r$-a.e.\ to $T(p,q)$, and the convergence of the exponential moments holds by Fatou's lemma. \end{proof} \begin{appendix} \section{Energy-based lower bounds for the log-Hausdorff exponent }\label{AppendixHausdorff} \begin{proof}[Proof of Corollary~\ref{CorIntSet}] By Proposition~\ref{PropFrostman}, for $\rho_r$-a.e.\ pair $(p,q)$ there is a nonzero measure $\tau_{p,q}$ that assigns full measure to $I_{p,q}$ and for which the energy $Q_\frak{h}(\tau_{p,q})$ is finite for all $0\leq \frak{h}<1$. For a fixed $\frak{h}\in [0,1)$ define $h(a):=1/\log^\frak{h} (\frac{1}{a})$ for $a>0$. Notice that \begin{align*} Q_\frak{h}(\tau_{p,q})\,=\,\int_{[0,1]}\bigg( \int_{[0,1]} \frac{1}{ h\big(|x-y|\big)} \tau_{p,q}(dx) \bigg) \tau_{p,q}(dy) \,\geq \,\int_{[0,1]} F^{(h)}_{p,q}(y) \tau_{p,q}(dy)\,, \end{align*} where $F_{p,q}^{(h)}(y):=\sup_{\delta >0} \Big(\frac{1}{h(\delta)} \tau_{p,q}\big(y-\delta,y+\delta\big) \Big)$. For $M>0$ let $A_{p,q}^{(M)}$ be the set of $y\in I_{p,q}$ such that $F_{p,q}^{(h)}(y)\leq M $.
Since $Q_\frak{h}(\tau_{p,q})<\infty$, there is an $M$ large enough so that $\tau_{p,q}\big( A_{p,q}^{(M)}\big)>\frac{1}{2}\tau_{p,q}([0,1]) $. Next we focus on bounding $\inf_{ \mathcal{C} }\sum_{ I\in \mathcal{C} } h(|I|) $ from below, where the infimum is over all countable coverings, $\mathcal{C}$, of $I_{p,q}$ by closed intervals $I$, with $|I|$ denoting the interval's diameter. Given such a collection $\mathcal{C}$, let $\mathcal{C}^{(M)} $ be the subcollection of $\mathcal{C}$ consisting of intervals $ I$ such that $I\cap A_{p,q}^{(M)}\neq \emptyset $. Of course, $\mathcal{C}^{(M)}$ forms a covering of $A_{p,q}^{(M)}$, and for each interval $I\in \mathcal{C}^{(M)}$ we can pick a representative $y_I \in I\cap A_{p,q}^{(M)}$. Using that $\mathcal{C}^{(M)}$ is a subcollection of $\mathcal{C}$ and the definition of $F_{p,q}^{(h)}$, we have the first two inequalities below. \begin{align}\label{za} \sum_{I\in \mathcal{C}} h(|I|) \,\geq \,&\sum_{ I\in \mathcal{C}^{(M)}} h\big(|I|\big) \nonumber \\ \,\geq \,&\sum_{I\in \mathcal{C}^{(M)} }\frac{ 1}{ F_{p,q}^{(h)}(y_I)}\tau_{p,q}\big(y_I-|I|, y_I+|I|\big)\nonumber \intertext{Since $y_I\in A_{p,q}^{(M)}$, we have $F_{p,q}^{(h)}(y_I) \leq M$, and thus } \,\geq \,&\sum_{I\in \mathcal{C}^{(M)} }\frac{ 1}{ M}\tau_{p,q}\big(y_I-|I|, y_I+|I|\big)\,. \nonumber \intertext{Subadditivity of the measure $ \tau_{p,q}$ yields the first inequality below } \,\geq \,&\frac{ 1}{ M} \tau_{p,q}\big(A_{p,q}^{(M)}\big) \,\geq \,\frac{ 1}{ 2M} \tau_{p,q}([0,1])>0 \,. \end{align} The second inequality above follows from how we chose $M$. Since the lower bound is uniform for all coverings $\mathcal{C}$ of $I_{p,q}$, it follows that $H^{\textup{log}}_\frak{h}(I_{p,q})>0$, and so $H^{\textup{log}}_{\hat{\frak{h}}}(I_{p,q})=\infty $ for all $0\leq \hat{\frak{h}} < \frak{h}$. Since $\frak{h}$ is an arbitrary element in $ [0,1)$, we have shown that the log-Hausdorff exponent of $I_{p,q}$ is $\rho_r$-a.e.\ $\geq 1$. \end{proof} \end{appendix}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:Introduction}A cone-adapted \textbf{shearlet system}\cite{CompactlySupportedShearletFrames,CompactlySupportedShearletsAreOptimallySparse,OptimallySparseMultidimensionalRepresentationUsingShearlets,shearlet_book,ConeAdaptedShearletFirstPaper} ${\rm SH}\left(\varphi,\psi,\theta;\delta\right)$ is a \emph{directional} multiscale system in $L^{2}\left(\mathbb{R}^{2}\right)$ that is obtained by applying suitable translations, shearings and parabolic dilations to the generators $\varphi,\psi,\theta$. The shearings are utilized to obtain elements with different \emph{orientations}; precisely, the number of different orientations on scale $j$ is approximately $2^{j/2}$, in stark contrast to wavelet-like systems which only employ a constant number of directions per scale. We refer to Definition \ref{def:AlphaShearletSystem} for a more precise description of shearlet systems. One of the most celebrated properties of shearlets is their ability to provide ``optimally sparse approximations'' for functions that are governed by directional features like edges. This can be made more precise by introducing the class $\mathcal{E}^{2}\left(\mathbb{R}^{2}\right)$ of \textbf{$C^{2}$-cartoon-like functions}; roughly, these are all compactly supported functions that are \emph{$C^{2}$ away from a $C^{2}$ edge}\cite{OptimallySparseMultidimensionalRepresentationUsingShearlets}. More rigorously, the class $\mathcal{E}^{2}\left(\mathbb{R}^{2}\right)$ consists of all functions $f$ that can be written as $f=f_{1}+{\mathds{1}}_{B}\cdot f_{2}$ with $f_{1},f_{2}\in C_{c}^{2}\left(\mathbb{R}^{2}\right)$ and a compact set $B\subset\mathbb{R}^{2}$ whose boundary $\partial B$ is a $C^{2}$ Jordan curve; see also Definition \ref{def:CartoonLikeFunction} for a completely formal description of the class of cartoon-like functions. With this notion, the (almost) optimal sparse approximation of cartoon-like functions as understood in \cite{OptimallySparseMultidimensionalRepresentationUsingShearlets,CompactlySupportedShearletsAreOptimallySparse} means that \begin{equation} \left\Vert f-f_{N}\right\Vert _{L^{2}}\lesssim N^{-1}\cdot\left(1+\log N\right)^{3/2}\qquad\forall N\in\mathbb{N}\text{ and }f\in\mathcal{E}^{2}\left(\mathbb{R}^{2}\right).\label{eq:IntroductionShearletApproximationRate} \end{equation} Here, the \textbf{$N$-term approximation} $f_{N}$ is obtained by retaining only the $N$ largest coefficients in the expansion $f=\sum_{i\in I}\left\langle f,\psi_{i}\right\rangle \widetilde{\psi_{i}}$, where $\widetilde{\Psi}=\left(\smash{\widetilde{\psi_{i}}}\right)_{i\in I}$ is a dual frame for the shearlet frame $\Psi={\rm SH}\left(\varphi,\psi,\theta;\delta\right)=\left(\psi_{i}\right)_{i\in I}$. Formally, this means $f_{N}=\sum_{i\in I_{N}}\left\langle f,\psi_{i}\right\rangle \widetilde{\psi_{i}}$, where the set $I_{N}\subset I$ satisfies $\left|I_{N}\right|=N$ and $\left|\left\langle f,\psi_{i}\right\rangle \right|\geq\left|\left\langle f,\psi_{j}\right\rangle \right|$ for all $i\in I_{N}$ and $j\in I\setminus I_{N}$. One can even show that the approximation rate in equation \eqref{eq:IntroductionShearletApproximationRate} is optimal up to log factors; i.e., up to log factors, no reasonable system $\left(\varrho_{n}\right)_{n\in\mathbb{N}}$ can achieve a better approximation rate for the whole class $\mathcal{E}^{2}\left(\mathbb{R}^{2}\right)$. 
The restriction to ``reasonable'' systems is made to exclude pathological cases like dense subsets of $L^{2}\left(\mathbb{R}^{2}\right)$ and involves a restriction of the \emph{search depth}: The $N$-term approximation $f_{N}=\sum_{n\in J_{N}}c_{n}\varrho_{n}$ has to satisfy $\left|J_{N}\right|=N$ and furthermore $J_{N}\subset\left\{ 1,\dots,\pi\left(N\right)\right\} $ for a fixed polynomial $\pi$. For more details on this restriction, we refer to \cite[Section 2.1.1]{CartoonApproximationWithAlphaCurvelets}. The approximation rate achieved by shearlets is precisely the same as that obtained by (second generation) \textbf{curvelets}\cite{CandesDonohoCurvelets}. Note, however, that the construction of curvelets in \cite{CandesDonohoCurvelets} uses \emph{bandlimited} frame elements, while shearlet frames can be chosen to have compact support\cite{CompactlySupportedShearletsAreOptimallySparse,CompactlySupportedShearletFrames}. A frame with compactly supported elements is potentially advantageous for implementations, but also for theoretical considerations, since localization arguments are highly simplified and since compactly supported frames can be adapted to frames on bounded domains, see e.g.\@ \cite{AnisotropicMultiscaleSystemsOnBoundedDomains,ShearletFramesForSobolevSpaces}. A further advantage of shearlets over curvelets is that curvelets are defined using \emph{rotations}, while shearlets employ \emph{shearings} to change the orientation; in contrast to rotations, these shearings leave the digital grid $\mathbb{Z}^{2}$ invariant, which is beneficial for implementations. \subsection{Cartoon approximation \emph{by shearlets}} Despite its great utility, the approximation result in equation \eqref{eq:IntroductionShearletApproximationRate} has one remaining issue: It yields a rapid approximation of $f$ by a linear combination of $N$ elements of the \emph{dual frame} $\widetilde{\Psi}$ of the shearlet frame $\Psi$, \emph{not} by a linear combination of $N$ elements of $\Psi$ itself. If $\Psi$ is a tight frame, this is no problem, but the only known construction of tight cone-adapted shearlet frames uses \emph{bandlimited} generators. In case of a \emph{non-tight} cone-adapted shearlet frame, the only knowledge about $\widetilde{\Psi}$ that is available is that $\widetilde{\Psi}$ is a frame with dual $\Psi$; but nothing seems to be known\cite{IntrinsicLocalizationOfAnisotropicFrames} about the support, the smoothness, the decay or the frequency localization of the elements of $\widetilde{\Psi}$. Thus, it is highly desirable to have an approximation result similar to equation \eqref{eq:IntroductionShearletApproximationRate}, but with $f_{N}$ being a linear combination of $N$ elements \emph{of the shearlet frame $\Psi={\rm SH}\left(\varphi,\psi,\theta;\delta\right)$ itself}. We will provide such a result by showing that \textbf{analysis sparsity} with respect to a (suitable) shearlet frame ${\rm SH}\left(\varphi,\psi,\theta;\delta\right)$ is \emph{equivalent} to \textbf{synthesis sparsity} with respect to the same frame, cf.\@ Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}. Here, \emph{analysis sparsity} with respect to a frame $\Psi=\left(\psi_{i}\right)_{i\in I}$ means that the \textbf{analysis coefficients} $A_{\Psi}f=\left(\left\langle f,\psi_{i}\right\rangle \right)_{i\in I}$ are \emph{sparse}, i.e., they satisfy $A_{\Psi}f\in\ell^{p}\left(I\right)$ for some fixed $p\in\left(0,2\right)$. 
Note that an arbitrary function $f\in L^{2}\left(\mathbb{R}^{2}\right)$ always satisfies $A_{\Psi}f\in\ell^{2}\left(I\right)$ by the frame property. \emph{Synthesis sparsity} means that we can write $f=S_{\Psi}c=\sum_{i\in I}c_{i}\psi_{i}$ for a \emph{sparse} sequence $c=\left(c_{i}\right)_{i\in I}$, i.e., $c\in\ell^{p}\left(I\right)$. For general frames, these two properties need not be equivalent, as shown in Section \ref{sec:AnalysisSynthesisSparsityNotEquivalentInGeneral}. Note though that such an equivalence would indeed imply the desired result, since the proof of equation \eqref{eq:IntroductionShearletApproximationRate} given in \cite{CompactlySupportedShearletsAreOptimallySparse} proceeds by a careful analysis of the analysis coefficients $A_{\Psi}f$ of a cartoon-like function $f$: By counting how many shearlets intersect the ``problematic'' region $\partial B$ where $f=f_{1}+{\mathds{1}}_{B}\cdot f_{2}$ is not $C^{2}$ and by then distinguishing whether the orientation of the shearlet is aligned with the boundary curve $\partial B$ or not, the authors show $\sum_{n>N}\left|\theta_{n}\left(f\right)\right|^{2}\lesssim N^{-2}\cdot\left(1+\log N\right)^{3}$, where $\left(\theta_{n}\left(f\right)\right)_{n\in\mathbb{N}}$ is the \emph{nonincreasing rearrangement} of the shearlet analysis coefficients $A_{\Psi}f$. It is not too hard to see (see e.g.\@ the proof of Theorem \ref{thm:CartoonApproximationWithAlphaShearlets}) that this implies $A_{\Psi}f\in\ell^{p}\left(I\right)$ for all $p>\frac{2}{3}$. Assuming that analysis sparsity with respect to the shearlet frame $\Psi$ is indeed equivalent to synthesis sparsity, this implies $f=\sum_{i\in I}c_{i}\psi_{i}$ for a sequence $c=\left(c_{i}\right)_{i\in I}\in\ell^{p}\left(I\right)$. Then, simply by taking only the $N$ largest coefficients of the sequence $c$ and by using that the synthesis map $S_{\Psi}:\ell^{2}\left(I\right)\to L^{2}\left(\mathbb{R}^{2}\right),\left(e_{i}\right)_{i\in I}\mapsto\sum_{i\in I}e_{i}\psi_{i}$ is bounded, we get $\left\Vert f-f_{N}\right\Vert _{L^{2}}\lesssim\left\Vert c-c\cdot{\mathds{1}}_{I_{N}}\right\Vert _{\ell^{2}}\lesssim N^{-\left(p^{-1}-2^{-1}\right)}$, where $I_{N}\subset I$ contains the indices of the $N$ largest coefficients of $c$. Indeed, the nonincreasing rearrangement $\left(c_{n}^{\ast}\right)_{n\in\mathbb{N}}$ of $c\in\ell^{p}\left(I\right)$ satisfies $c_{n}^{\ast}\leq n^{-1/p}\cdot\left\Vert c\right\Vert _{\ell^{p}}$, whence $\left\Vert c-c\cdot{\mathds{1}}_{I_{N}}\right\Vert _{\ell^{2}}^{2}=\sum_{n>N}\left(c_{n}^{\ast}\right)^{2}\leq\left\Vert c\right\Vert _{\ell^{p}}^{2}\cdot\sum_{n>N}n^{-2/p}\lesssim\left\Vert c\right\Vert _{\ell^{p}}^{2}\cdot N^{1-2/p}$ for $p<2$. Thus, once we know that analysis sparsity with respect to a (suitable) shearlet frame is equivalent to synthesis sparsity, we only need to make the preceding argument completely rigorous. \subsection{Previous results concerning the equivalence of analysis and synthesis sparsity for shearlets} As noted above, analysis sparsity and synthesis sparsity need not be equivalent for general frames. To address this and other problems, Gröchenig\cite{LocalizationOfFrames} and Gröchenig \& Cordero\cite{LocalizationOfFrames2}, as well as Gröchenig \& Fornasier\cite{IntrinsicLocalizationOfFrames} introduced the concept of \textbf{(intrinsically) localized frames} for which these two properties are indeed equivalent, cf.\@ \cite[Proposition 2]{GribonvalNielsenHighlySparseRepresentations}.
In contrast to Gabor- and wavelet frames, it is quite nontrivial, however, to verify that a shearlet or curvelet frame is intrinsically localized: To our knowledge, the only papers discussing \emph{a variant} of this property are \cite{IntrinsicLocalizationOfAnisotropicFrames,IntrinsicLocalizationOfAnisotropicFrames2}, where the results from \cite{IntrinsicLocalizationOfAnisotropicFrames} about curvelets and shearlets are generalized in \cite{IntrinsicLocalizationOfAnisotropicFrames2} to the setting of $\alpha$-molecules; a generalization that we will discuss below in greater detail. For now, let us stick to the setting of \cite{IntrinsicLocalizationOfAnisotropicFrames}. In that paper, Grohs considers a certain \textbf{distance function} $\omega:\Lambda^{S}\times\Lambda^{S}\to\left[1,\infty\right)$ (cf.\@ \cite[Definition 3.9]{IntrinsicLocalizationOfAnisotropicFrames} for the precise formula) on the index set \[ \Lambda^{S}:=\left\{ \left(j,\ell,k,\delta\right)\in\mathbb{N}_{0}\times\mathbb{Z}\times\mathbb{Z}^{2}\times\left\{ 0,1\right\} \,\middle|\,-2^{\left\lfloor j/2\right\rfloor }\leq\ell<2^{\left\lfloor j/2\right\rfloor }\right\} ,\vspace{-0.05cm} \] which is (a slightly modified version of) the index set that is used for shearlet frames. A shearlet frame $\Psi=\left(\psi_{\lambda}\right)_{\lambda\in\Lambda^{S}}$ is called \textbf{$N$-localized with respect to $\omega$} if the associated \textbf{Gramian matrix} $\mathbf{A}:=\mathbf{A}_{\Psi}:=\left(\left\langle \psi_{\lambda},\psi_{\lambda'}\right\rangle \right)_{\lambda,\lambda'\in\Lambda^{S}}$ satisfies \begin{equation} \left|\left\langle \psi_{\lambda},\,\psi_{\lambda'}\right\rangle \right|\leq\left\Vert \mathbf{A}\right\Vert _{\mathcal{B}_{N}}\cdot\left[\omega\left(\lambda,\lambda'\right)\right]^{-N}\qquad\forall\lambda,\lambda'\in\Lambda^{S},\label{eq:GrohsLocalizationDefinition} \end{equation} where $\left\Vert \mathbf{A}\right\Vert _{\mathcal{B}_{N}}$ is chosen to be the optimal constant in the preceding inequality. Then, if $\Psi$ is a frame with \textbf{frame bounds} $A,B>0$, i.e., if $A^{2}\cdot\left\Vert f\right\Vert _{L^{2}}^{2}\leq\sum_{\lambda\in\Lambda^{S}}\left|\left\langle f,\psi_{\lambda}\right\rangle \right|^{2}\leq B^{2}\cdot\left\Vert f\right\Vert _{L^{2}}^{2}$ for all $f\in L^{2}\left(\mathbb{R}^{2}\right)$, \cite[Lemma 3.3]{IntrinsicLocalizationOfAnisotropicFrames} shows that the infinite matrix $\mathbf{A}$ induces a bounded, positive semi-definite operator $\mathbf{A}:\ell^{2}\left(\Lambda^{S}\right)\to\ell^{2}\left(\Lambda^{S}\right)$ that furthermore satisfies $\sigma\left(\mathbf{A}\right)\subset\left\{ 0\right\} \cup\left[A,B\right]$ and the \textbf{Moore-Penrose pseudoinverse} $\mathbf{A}^{+}$ of $\mathbf{A}$ is the Gramian associated to the \textbf{canonical dual frame} $\widetilde{\Psi}$ of $\Psi$. This is important, since \cite[Theorem 3.11]{IntrinsicLocalizationOfAnisotropicFrames} now yields the following: \begin{thm*} Assume that $\Psi=\left(\psi_{\lambda}\right)_{\lambda\in\Lambda^{S}}$ is a shearlet frame with sampling density $\delta>0$ and frame bounds $A,B>0$. Furthermore, assume that $\Psi$ is $N+L$-localized with respect to $\omega$, where \[ N>2\qquad\text{ and }\qquad L>2\cdot\frac{\ln\left(10\right)}{\ln\left(5/4\right)}. 
\] Then the canonical dual frame $\widetilde{\Psi}$ of $\Psi$ is $N^{+}$-localized with respect to $\omega$, with \begin{equation} N^{+}=N\cdot\left(1+\frac{\log\left(1+\frac{2}{A^{2}+B^{2}}\left\Vert \mathbf{A}\right\Vert _{N+L}\cdot\left[1+C_{\delta}\cdot\left(\frac{2}{1-2^{-L/2-2}}+\frac{8}{3}+\frac{1}{1-2^{2-L/2}}+\frac{1}{1-2^{-L/2}}\right)\right]^{2}\right)}{\log\left(\frac{B^{2}+A^{2}}{B^{2}-A^{2}}\right)}\right)^{-1},\label{eq:GrohsNPlusDefinition} \end{equation} where the constant $C_{\delta}>0$ only depends on the sampling density $\delta>0$. \end{thm*} To see how this theorem could in principle be used, note that the dual frame coefficients satisfy \[ \left(\left\langle f,\,\smash{\widetilde{\psi_{\lambda}}}\right\rangle \right)_{\lambda\in\Lambda^{S}}=\mathbf{A}^{+}\left(\left\langle f,\psi_{\lambda}\right\rangle \right)_{\lambda\in\Lambda^{S}}. \] Consequently, if(!)\@ the Gramian $\mathbf{A}^{+}$ of the canonical dual frame $\widetilde{\Psi}$ of $\Psi$ restricts to a well-defined and bounded operator $\mathbf{A}^{+}:\ell^{p}\left(\Lambda^{S}\right)\to\ell^{p}\left(\Lambda^{S}\right)$, then analysis sparsity with respect to $\Psi$ would imply analysis sparsity with respect to $\widetilde{\Psi}$ and thus synthesis sparsity with respect to $\Psi$, as desired. In fact, \cite[Proposition 3.5]{IntrinsicLocalizationOfAnisotropicFrames} shows that if $\mathbf{A}^{+}$ is $N^{+}$-localized with respect to $\omega$, then $\mathbf{A}^{+}:\ell^{p}\left(\Lambda^{S}\right)\to\ell^{p}\left(\Lambda^{S}\right)$ is bounded \emph{as long as} $N^{+}>2p^{-1}$. \medskip{} Thus, it seems that all is well, in particular since a combination of \cite[Theorem 2.9 and Proposition 3.11]{ParabolicMolecules} provides\footnote{Strictly speaking, \cite[Definition 2.4]{ParabolicMolecules} uses the index distance $\omega\left(\lambda,\lambda'\right)=2^{\left|s_{\lambda}-s_{\lambda'}\right|}\left(1+\smash{2^{\min\left\{ s_{\lambda},s_{\lambda'}\right\} }}d\left(\lambda,\lambda'\right)\right)$ which is \emph{different} from the distance $\omega\left(\lambda,\lambda'\right)=2^{\left|s_{\lambda}-s_{\lambda'}\right|}\left(1+d\left(\lambda,\lambda'\right)\right)$ used in \cite[Definition 3.9]{IntrinsicLocalizationOfAnisotropicFrames}. Luckily, this inconsistency is no serious problem, since the distance in \cite{ParabolicMolecules} dominates the distance from \cite{IntrinsicLocalizationOfAnisotropicFrames}, so that $N$-localization with respect to the \cite{ParabolicMolecules}-distance implies $N$-localization with respect to the \cite{IntrinsicLocalizationOfAnisotropicFrames}-distance.} readily verifiable conditions on the generators $\varphi,\psi,\theta$ which ensure that the shearlet frame $\Psi={\rm SH}\left(\varphi,\psi,\theta;\delta\right)$ is $N$-localized with respect to $\omega$. There is, however, a well-hidden remaining problem which is also the reason why the equivalence of analysis and synthesis sparsity is not explicitly claimed in any of the papers \cite{IntrinsicLocalizationOfAnisotropicFrames,IntrinsicLocalizationOfAnisotropicFrames2,ParabolicMolecules,AlphaMolecules}: As seen above, we need $N^{+}>2p^{-1}$, but it is not clear at all that this can be achieved with $N^{+}$ as in equation \eqref{eq:GrohsNPlusDefinition}: There are strong interdependencies between the different quantities on the right-hand side of equation \eqref{eq:GrohsNPlusDefinition} which make it next to impossible to verify $N^{+}>2p^{-1}$.
Indeed, the results in \cite{ParabolicMolecules} only yield $\left\Vert \mathbf{A}\right\Vert _{N+L}<\infty$ under certain assumptions (which depend on $N+L$) concerning $\varphi,\psi,\theta$, but \emph{no explicit control} over $\left\Vert \mathbf{A}\right\Vert _{N+L}$ is given. Thus, it is not at all clear that increasing $N$ (or $L$) will increase $N^{+}$. Likewise, the frame bounds $A,B$ only depend on $\varphi,\psi,\theta$ (which are more or less fixed) and on the sampling density $\delta$. Thus, one could be tempted to change $\delta$ to influence $A,B$ in equation \eqref{eq:GrohsNPlusDefinition} and thus to achieve $N^{+}>2p^{-1}$. But the sampling density $\delta$ also influences $C_{\delta}$ and $\left\Vert \mathbf{A}\right\Vert _{N+L}$, so that it is again not clear at all whether one can ensure $N^{+}>2p^{-1}$ by modifying $\delta$. \medskip{} A further framework for deriving the equivalence between analysis and synthesis sparsity for frames is provided by \textbf{(generalized) coorbit theory}\cite{FeichtingerCoorbit0,FeichtingerCoorbit1,FeichtingerCoorbit2,RauhutCoorbitQuasiBanach,GeneralizedCoorbit1,GeneralizedCoorbit2}. Here, one starts with a \emph{continuous} frame $\Psi=\left(\psi_{x}\right)_{x\in X}$ which is indexed by a locally compact measure space $\left(X,\mu\right)$. In the case of classical, group-based coorbit theory\cite{FeichtingerCoorbit0,FeichtingerCoorbit1,FeichtingerCoorbit2}, it is even required that $\left(\psi_{x}\right)_{x\in G}=\left(\pi\left(x\right)\psi\right)_{x\in G}$ arises from an integrable, irreducible \textbf{unitary representation} of a locally compact topological group $G$, although one can weaken certain of these conditions\cite{CoorbitOnHomogenousSpaces,CoorbitOnHomogenousSpaces2,CoorbitWithVoiceInFrechetSpace,CoorbitSpacesForDualPairs}. Based on the continuous frame $\Psi$, one can then introduce so-called \textbf{coorbit spaces} ${\rm Co}\left(Y\right)$ which are defined in terms of decay conditions (specified by the function space $Y$) concerning the \textbf{voice transform} $V_{\Psi}f\left(x\right):=\left\langle f,\,\psi_{x}\right\rangle $ of a function or distribution $f$. Coorbit theory then provides conditions under which one can sample the continuous frame $\Psi$ to obtain a discrete frame $\Psi_{d}=\left(\smash{\psi_{x_{i}}}\right)_{i\in I}$, but such that membership of a distribution $f$ in ${\rm Co}\left(Y\right)$ is \emph{simultaneously} equivalent to analysis sparsity and to synthesis sparsity of $f$ with respect to $\Psi_{d}$. Thus, if one could find a \emph{continuous frame} $\Psi$ such that the prerequisites of coorbit theory are satisfied and such that the discretized frame $\Psi_{d}$ coincides with a discrete, \emph{cone-adapted} shearlet frame, one would obtain the desired equivalence between analysis sparsity and synthesis sparsity. 
There is, however, no known construction of such a frame $\Psi$: Although there is a rich theory of \textbf{shearlet coorbit spaces}\cite{Dahlke_etal_sh_coorbit1,Dahlke_etal_sh_coorbit2,DahlkeShearletArbitraryDimension,DahlkeShearletCoorbitEmbeddingsInHigherDimensions,DahlkeToeplitzShearletTransform,MR2896277,FuehrSimplifiedVanishingMomentCriteria} which fits into the more general framework of \textbf{wavelet-type coorbit spaces}\cite{FuehrContinuousWaveletTransformsFromSemidirectProducts,FuehrContinuousWaveletTransformsSemidirectProducts,FuehrCoorbit1,FuehrCoorbit2,FuehrGeneralizedCalderonConditions,FuehrSimplifiedVanishingMomentCriteria,FuehrVoigtlaenderCoorbitSpacesAsDecompositionSpaces,FuehrWaveletFramesAndAdmissibility}, the resulting discretized frames are \emph{not} cone-adapted shearlet frames; instead, they are highly \emph{directionally biased} (i.e., they treat the $x$ and $y$ direction in very different ways) and the number of directions per scale is \emph{infinite} for each scale; therefore, these systems are unsuitable for most practical applications and for the approximation of cartoon-like functions, cf.\@ \cite[Section 3.3]{ConeAdaptedShearletFirstPaper}. Hence—at least using the currently known constructions of continuous shearlet frames—coorbit theory can \emph{not} be used to derive the desired equivalence of analysis and synthesis sparsity with respect to cone-adapted shearlet frames. \subsection{Our approach for proving the equivalence of analysis and synthesis sparsity for shearlets} In this paper, we use the recently introduced theory of \textbf{structured Banach frame decompositions of decomposition spaces}\cite{StructuredBanachFrames} to obtain the desired equivalence between analysis and synthesis sparsity for (cone-adapted) shearlet frames. A more detailed and formal exposition of this theory will be given in Section \ref{sec:BanachFrameDecompositionCrashCourse}; for this introduction, we restrict ourselves to the bare essentials. The starting point in \cite{StructuredBanachFrames} is a \emph{covering} $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}$ of the frequency space $\mathbb{R}^{d}$, where it is assumed that each $Q_{i}$ is of the form $Q_{i}=T_{i}Q+b_{i}$ for a fixed \emph{base set} $Q\subset\mathbb{R}^{d}$ and certain linear maps $T_{i}\in\mathrm{GL}\left(\mathbb{R}^{d}\right)$ and $b_{i}\in\mathbb{R}^{d}$. Then, using a suitable \emph{partition of unity} $\Phi=\left(\varphi_{i}\right)_{i\in I}$ subordinate to $\mathcal{Q}$ and a suitable \emph{weight} $w=\left(w_{i}\right)_{i\in I}$ on the index set $I$ of the covering $\mathcal{Q}$, one defines the associated \textbf{decomposition space (quasi)-norm} \[ \left\Vert g\right\Vert _{\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}}:=\left\Vert \left(w_{i}\cdot\left\Vert \mathcal{F}^{-1}\left(\varphi_{i}\cdot\widehat{g}\right)\right\Vert _{L^{p}}\right)_{i\in I}\right\Vert _{\ell^{q}}, \] while the associated \textbf{decomposition space} $\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$ contains exactly those distributions $g$ for which this quasi-norm is finite. Roughly speaking, the decomposition space (quasi)-norm measures the size of the distribution $g$ by frequency-localizing $g$ to each of the sets $Q_{i}$ (using the partition of unity $\Phi$), where each of these frequency-localized pieces is measured in $L^{p}\left(\mathbb{R}^{d}\right)$, while the individual contributions are aggregated using a certain weighted $\ell^{q}$-norm. 
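For orientation, we mention two classical special cases, stated here only loosely since the precise conventions vary slightly across the literature: choosing the \emph{dyadic} covering given by $Q_{0}=\left\{ \xi\in\mathbb{R}^{d}\,\middle|\,\left|\xi\right|<2\right\} $ and $Q_{j}=\left\{ \xi\in\mathbb{R}^{d}\,\middle|\,2^{j-1}<\left|\xi\right|<2^{j+1}\right\} $ for $j\in\mathbb{N}$, together with the weight $w_{j}=2^{js}$, one obtains the (inhomogeneous) Besov spaces $B_{p,q}^{s}\left(\mathbb{R}^{d}\right)$, while the \emph{uniform} covering $\left(k+\left[-1,1\right]^{d}\right)_{k\in\mathbb{Z}^{d}}$ with weight $w_{k}=\left(1+\left|k\right|\right)^{s}$ yields the modulation spaces $M_{p,q}^{s}\left(\mathbb{R}^{d}\right)$.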
The underlying idea in \cite{StructuredBanachFrames} is to ask whether the \emph{strict} frequency localization using the compactly supported partition of unity $\Phi$ can be replaced by a soft, qualitative frequency localization: Indeed, if $\psi\in L^{1}\left(\mathbb{R}^{d}\right)$ has \emph{essential} frequency support in the base set $Q$, then it is not hard to see that the function \[ \psi^{\left[i\right]}:=\left|\det T_{i}\right|^{-1/2}\cdot\mathcal{F}^{-1}\left(L_{b_{i}}\left[\smash{\widehat{\psi}}\circ T_{i}^{-1}\right]\right)=\left|\det T_{i}\right|^{1/2}\cdot M_{b_{i}}\left[\psi\circ T_{i}^{T}\right] \] has essential frequency support in $Q_{i}=T_{i}Q+b_{i}$, for arbitrary $i\in I$. Here, $L_{x}$ and $M_{\xi}$ denote the usual translation and modulation operators, cf.\@ Section \ref{subsec:Notation}. Using this notation, the theory developed in \cite{StructuredBanachFrames} provides criteria pertaining to the \textbf{generator} $\psi$ which guarantee that the generalized shift-invariant system \begin{equation} \Psi_{\delta}:=\left(L_{\delta\cdot T_{i}^{-T}k}\:\psi^{\left[i\right]}\right)_{i\in I,\,k\in\mathbb{Z}^{d}}\label{eq:IntroductionStructuredFrameDefinition} \end{equation} forms, respectively, a \textbf{Banach frame} or an \textbf{atomic decomposition} for the decomposition space $\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$, for sufficiently fine sampling density $\delta>0$. The notions of Banach frames and atomic decompositions generalize the concept of frames for Hilbert spaces to the setting of (Quasi)-Banach spaces. The precise definitions of these two concepts, however, are outside the scope of this introduction; see e.g.\@ \cite{GroechenigDescribingFunctions} for a lucid exposition. For us, the most important conclusion is the following: If $\Psi_{\delta}$ \emph{simultaneously} forms a Banach frame and an atomic decomposition for $\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$, then there is an explicitly known (Quasi)-Banach space of sequences $C_{w}^{p,q}\leq\mathbb{C}^{I\times\mathbb{Z}^{d}}$, called the \textbf{coefficient space}, such that the following are equivalent for a distribution $g$: \begin{enumerate} \item $g\in\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$, \item the analysis coefficients $\left(\left\langle g,\,L_{\delta\cdot T_{i}^{-T}k}\:\psi^{\left[i\right]}\right\rangle \right)_{i\in I,\,k\in\mathbb{Z}^{d}}$ belong to $C_{w}^{p,q}$, \item we can write $g=\sum_{i\in I}\,\sum_{k\in\mathbb{Z}^{d}}\left(\smash{c_{k}^{\left(i\right)}}\cdot\psi^{\left[i\right]}\right)$ for a sequence $\left(\smash{c_{k}^{\left(i\right)}}\right)_{i\in I,\,k\in\mathbb{Z}^{d}}\in C_{w}^{p,q}$. \end{enumerate} One can even derive slightly stronger conclusions which make these purely qualitative statements quantitative. Now, if one chooses $p=q\in\left(0,2\right)$ and a suitable weight $w=\left(w_{i}\right)_{i\in I}$ depending on $p$, one can achieve $C_{w}^{p,q}=\ell^{p}\left(I\times\mathbb{Z}^{d}\right)$. Thus, in this case, the preceding equivalence can be summarized as follows: \[ \text{If }\psi\text{ is nice and }\delta>0\text{ is small, then \textbf{analysis sparsity is equivalent to synthesis sparsity} w.r.t. }\Psi_{\delta}. \] In fact, the theory developed in \cite{StructuredBanachFrames} even allows the base set $Q$ to vary with $i\in I$, i.e., $Q_{i}=T_{i}Q_{i}'+b_{i}$, at least as long as the family $\left\{ Q_{i}'\,\middle|\, i\in I\right\} $ of different base sets remains finite.
Similarly, the generator $\psi$ is allowed to vary with $i\in I$, so that $\psi^{\left[i\right]}=\left|\det T_{i}\right|^{1/2}\cdot M_{b_{i}}\left[\psi_{i}\circ T_{i}^{T}\right]$, again with the provision that the set $\left\{ \psi_{i}\,\middle|\, i\in I\right\} $ of generators is finite. As we will see, one can choose a suitable covering $\mathcal{Q}=\mathcal{S}$—the so-called \textbf{shearlet covering} of the frequency space $\mathbb{R}^{2}$—such that the system $\Psi_{\delta}$ from above coincides with a shearlet frame. The resulting decomposition spaces $\DecompSp{\mathcal{S}}p{\ell_{w}^{q}}{}$ are then (slight modifications of) the \textbf{shearlet smoothness spaces} as introduced by Labate et al.\cite{Labate_et_al_Shearlet}. In summary, the theory of \textbf{structured Banach frame decompositions of decomposition spaces} will imply the desired equivalence of analysis and synthesis sparsity with respect to cone-adapted shearlet frames. To this end, however, we first need to show that the technical conditions on the generators that are imposed in \cite{StructuredBanachFrames} are indeed satisfied if the generators of the shearlet system are sufficiently smooth and satisfy certain vanishing moment conditions. As we will see, this is by no means trivial and requires a huge amount of technical estimates. \medskip{} Finally, we remark that spaces similar to the shearlet smoothness spaces have also been considered by Vera: In \cite{VeraShearBesovSpaces}, he introduced so-called \textbf{shear anisotropic inhomogeneous Besov spaces}, which are essentially a generalization of the shearlet smoothness spaces to $\mathbb{R}^{d}$. Vera then shows that the analysis and synthesis operators with respect to certain \emph{bandlimited} shearlet systems are bounded between the shear anisotropic inhomogeneous Besov spaces and certain sequence spaces. Note that the assumption of bandlimited frame elements excludes the possibility of having compact support in space. Furthermore, boundedness of the analysis and synthesis operators alone does \emph{not} imply that the \emph{bandlimited} shearlet systems form Banach frames or atomic decompositions for the shear anisotropic Besov spaces, since this requires existence of a certain \emph{reproducing formula}. In \cite{VeraShearTriebelLizorkin}, Vera also considers \emph{Triebel-Lizorkin type} shearlet smoothness spaces and again derives similar boundedness results for the analysis and synthesis operators. Finally, in both papers \cite{VeraShearBesovSpaces,VeraShearTriebelLizorkin}, certain embedding results between the classical Besov or Triebel-Lizorkin spaces and the new ``shearlet adapted'' smoothness spaces are considered, similarly to our results in Section \ref{sec:EmbeddingsBetweenAlphaShearletSmoothness}. Note though that we are able to completely characterize the existence of such embeddings, while \cite{VeraShearBesovSpaces} only establishes certain necessary and certain sufficient conditions, without achieving a characterization. \subsection{\texorpdfstring{$\alpha$}{α}-shearlets and cartoon-like functions of different regularity} The usual construction of shearlets employs the \textbf{parabolic dilations} ${\rm diag}\left(2^{j},2^{j/2}\right)$ and (the dual frames of) the resulting shearlet systems turn out to be (almost) optimal for the approximation of functions that are $C^{2}$ away from a $C^{2}$ edge. 
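The heuristic behind the exponent $\frac{1}{2}$, which we recall only for motivation, is the following: a shearlet at scale $j$ is essentially supported in a (possibly sheared) rectangle of dimensions roughly $2^{-j}\times2^{-j/2}$, so that $\mathrm{width}\approx\mathrm{length}^{2}$. By Taylor expansion, a $C^{2}$ edge deviates from its tangent on an interval of length $\ell$ only by $\mathcal{O}\left(\ell^{2}\right)$, so that such parabolically scaled atoms can be sheared into alignment with the edge, and only about $2^{j/2}$ of them are needed to cover the edge at scale $j$.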
Beginning with the paper \cite{OptimallySparse3D}, it was realized that different regularities—i.e., ``functions that are $C^{\beta}$ away from a $C^{\beta}$ edge''—can be handled by employing a different type of dilations, namely\footnote{In fact, in \cite[Section 4.1]{OptimallySparse3D} the three-dimensional counterparts of the scaling matrices ${\rm diag}\left(2^{\beta j/2},\,2^{j/2}\right)$ are used, but the resulting hybrid shearlet systems have the same approximation properties as those defined using the $\alpha$-parabolic dilations ${\rm diag}\left(2^{j},\,2^{\alpha j}\right)$ with $\alpha=\beta^{-1}$; see Section \ref{sec:CartoonLikeFunctionsAreBoundedInAlphaShearletSmoothness} for more details.} the \textbf{$\alpha$-parabolic dilations} ${\rm diag}\left(2^{j},\,2^{\alpha j}\right)$, with the specific choice $\alpha=\beta^{-1}$. These modified shearlet systems were called \textbf{hybrid shearlets} in \cite{OptimallySparse3D}, where they were introduced in the three-dimensional setting. In the Bachelor's thesis \cite{SandraBachelorArbeit}, precisely in \cite[Section 4]{SandraBachelorArbeit}, it was then shown also in the two-dimensional setting that shearlet systems using $\alpha$-parabolic scaling—from now on called \textbf{$\alpha$-shearlet systems}—indeed yield (almost) optimal approximation rates for the model class of \textbf{$C^{\beta}$-cartoon-like functions}, if $\alpha=\beta^{-1}$. Again, this comes with the caveat that the approximation is actually performed using the \emph{dual frame} of the $\alpha$-shearlet frame. Note, however, that the preceding result requires the regularity $\beta$ of the $C^{\beta}$-cartoon-like functions to satisfy $\beta\in\left(1,2\right]$. Outside of this range, the arguments in \cite{SandraBachelorArbeit} are not applicable; in fact, it was shown in \cite{RoleOfAlphaScaling} that the result concerning the optimal approximation rate fails for $\beta>2$, at least for \textbf{$\alpha$-curvelets\cite{CartoonApproximationWithAlphaCurvelets}} instead of $\alpha$-shearlets. These $\alpha$-curvelets are related to $\alpha$-shearlets in the same way that shearlets and curvelets are related\cite{ParabolicMolecules}, in the sense that the associated coverings of the Fourier domain are equivalent and in that they agree with respect to \emph{analysis} sparsity: If $f$ is $\ell^{p}$-analysis sparse with respect to a (reasonable) $\alpha$-curvelet system, then the same holds with respect to any (reasonable) $\alpha$-shearlet system and vice versa. This was derived in \cite{AlphaMolecules} as an application of the framework of \textbf{$\alpha$-molecules}, a common generalization of $\alpha$-shearlets and $\alpha$-curvelets; see also \cite{MultivariateAlphaMolecules} for a generalization to dimensions larger than two. \medskip{} As we will see, one can modify the shearlet covering $\mathcal{S}$ slightly to obtain the so-called \textbf{$\alpha$-shearlet covering} $\mathcal{S}^{\left(\alpha\right)}$. The systems $\Psi_{\delta}$ (cf.\@ equation \eqref{eq:IntroductionStructuredFrameDefinition}) that result from an application of the theory of structured Banach frame decompositions with the covering $\mathcal{S}^{\left(\alpha\right)}$ then turn out to be $\alpha$-shearlet systems. Therefore, we will be able to establish the equivalence of analysis and synthesis sparsity not only for classical cone-adapted shearlet systems, but in fact for cone-adapted $\alpha$-shearlet systems for arbitrary $\alpha\in\left[0,1\right]$, essentially without additional effort. 
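In passing, the reparametrization claim from the footnote above is easy to verify numerically; in the following sketch, the choice $\beta=1.5$ is merely illustrative (\texttt{numpy} is assumed):
\begin{verbatim}
import numpy as np

beta = 1.5        # illustrative regularity parameter beta in (1, 2]
alpha = 1 / beta  # the matching alpha = 1/beta

for j in range(1, 6):
    hybrid = np.diag([2 ** (beta * j / 2), 2 ** (j / 2)])  # diag(2^(beta j/2), 2^(j/2))
    jp = beta * j / 2                                      # reparametrized scale j'
    parabolic = np.diag([2 ** jp, 2 ** (alpha * jp)])      # diag(2^j', 2^(alpha j'))
    assert np.allclose(hybrid, parabolic)
print("hybrid and alpha-parabolic scalings agree under j' = beta * j / 2")
\end{verbatim}
In other words, up to the reparametrization $j'=\beta j/2$ of the scale, the hybrid scaling matrices are exactly the $\alpha$-parabolic ones with $\alpha=\beta^{-1}$.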
Even more, recall from above that the theory of structured Banach frame decompositions not only yields equivalence of analysis and synthesis sparsity, but also shows that each of these properties is equivalent to membership of the distribution $f$ under consideration in a suitable decomposition space $\DecompSp{\mathcal{S}^{\left(\alpha\right)}}p{\ell_{w}^{q}}{}$. We will call these spaces \textbf{$\alpha$-shearlet smoothness spaces} and denote them by $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$, where the \emph{smoothness parameter} $s$ determines the weight $w$. Using a recently developed theory for embeddings between decomposition spaces\cite{DecompositionEmbedding}, we are then able to completely characterize the existence of embeddings between $\alpha$-shearlet smoothness spaces for different values of $\alpha$. Roughly, such an embedding $\mathscr{S}_{\alpha_{1},s_{1}}^{p_{1},q_{1}}\hookrightarrow\mathscr{S}_{\alpha_{2},s_{2}}^{p_{2},q_{2}}$ means that sparsity (in a certain sense) with respect to $\alpha_{1}$-shearlets implies sparsity (in a possibly different sense) with respect to $\alpha_{2}$-shearlets. In a way, this extends the results of \cite{AlphaMolecules}, where it is shown that analysis sparsity transfers from one $\alpha$-scaled system to another (e.g.\@ from $\alpha$-curvelets to $\alpha$-shearlets); in contrast, our embedding theory characterizes the possibility of transferring such results from $\alpha_{1}$-shearlet systems to $\alpha_{2}$-shearlet systems, even for $\alpha_{1}\neq\alpha_{2}$. It will turn out, however, that simple $\ell^{p}$-sparsity with respect to $\alpha_{1}$-shearlets \emph{never} yields a nontrivial $\ell^{q}$-sparsity with respect to $\alpha_{2}$-shearlets, if $\alpha_{1}\neq\alpha_{2}$. Luckily, one can remedy this situation by requiring $\ell^{p}$-sparsity in conjunction with a certain decay of the coefficients with the scale. For more details, we refer to Section \ref{sec:EmbeddingsBetweenAlphaShearletSmoothness}. \subsection{Structure of the paper} \label{subsec:Structure}Before we properly start the paper, we introduce several standard and non-standard notations in the next subsection. In Section \ref{sec:BanachFrameDecompositionCrashCourse}, we give an overview of the main aspects of the theory of \emph{structured Banach frame decompositions of decomposition spaces} that was recently developed by one of the authors in \cite{StructuredBanachFrames}. The most important ingredient for the application of this theory is a suitable covering $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}=\left(T_{i}Q_{i}'+b_{i}\right)_{i\in I}$ of the frequency space $\mathbb{R}^{2}$ such that the provided Banach frames and atomic decompositions are of the desired form; in our case we want to obtain cone-adapted $\alpha$-shearlet systems. Thus, in Section \ref{sec:AlphaShearletSmoothnessDefinition}, we introduce the so-called \textbf{$\alpha$-shearlet coverings} $\mathcal{S}^{\left(\alpha\right)}$ for $\alpha\in\left[0,1\right]$ and we verify that these coverings fulfill the standing assumptions from \cite{StructuredBanachFrames}. The more technical parts of this verification are deferred to Section \ref{sec:AlphaShearletCoveringAlmostStructured} in order not to disrupt the flow of the paper.
Furthermore, Section \ref{sec:AlphaShearletSmoothnessDefinition} also contains the definition of the \textbf{$\alpha$-shearlet smoothness spaces} $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)=\DecompSp{\smash{\mathcal{S}^{\left(\alpha\right)}}}p{\ell_{w^{s}}^{q}}{}$ and an analysis of their basic properties. Section \ref{sec:CompactlySupportedShearletFrames} contains the main results of the paper. Here, we provide readily verifiable conditions—smoothness, decay and vanishing moments—concerning the generators $\varphi,\psi$ of the $\alpha$-shearlet system ${\rm SH}_{\alpha}^{\left(\pm1\right)}\left(\varphi,\psi;\delta\right)$ which ensure that this $\alpha$-shearlet system forms, respectively, a Banach frame or an atomic decomposition for the $\alpha$-shearlet smoothness space $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$. This is done by verifying the technical conditions of the theory of structured Banach frame decompositions. All of these results rely on one technical lemma whose proof is extremely lengthy and therefore deferred to Section \ref{sec:MegaProof}. Since $\alpha$-shearlet systems generalize classical shearlets, one expects $\frac{1}{2}$-shearlets to be identical to the classical cone-adapted shearlet systems. This is not quite the case, however, for the shearlet systems ${\rm SH}_{1/2}^{\left(\pm1\right)}\left(\varphi,\psi;\delta\right)$ considered in Section \ref{sec:CompactlySupportedShearletFrames}. The reason for this is that the $\alpha$-shearlet covering $\mathcal{S}^{\left(\alpha\right)}$ divides the frequency plane into \emph{four} conic regions (the top, bottom, left, and right frequency cones) and a low-frequency region, while the usual definition of shearlets only divides the frequency plane into two cones (horizontal and vertical) and a low-frequency region. To remedy this fact, Section \ref{sec:UnconnectedAlphaShearletCovering} introduces a slightly modified covering, the so-called \textbf{unconnected $\alpha$-shearlet covering} $\mathcal{S}_{u}^{\left(\alpha\right)}$; the reason for this terminology being that the individual sets of the covering are not connected anymore. Essentially, $\mathcal{S}_{u}^{\left(\alpha\right)}$ is obtained by combining each pair of opposing sets of the $\alpha$-shearlet covering $\mathcal{S}^{\left(\alpha\right)}$ into one single set. We then verify that the associated decomposition spaces coincide with the previously defined $\alpha$-shearlet smoothness spaces. Finally, we show that the Banach frames and atomic decompositions obtained by applying the theory of structured Banach frame decompositions with the covering $\mathcal{S}_{u}^{\left(1/2\right)}$ indeed yield conventional cone-adapted shearlet systems. In Section \ref{sec:CartoonLikeApproximation}, we apply the equivalence of analysis and synthesis sparsity for $\alpha$-shearlets to prove that $\alpha$-shearlet frames with sufficiently nice generators indeed yield (almost) optimal $N$-term approximations for the class $\mathcal{E}^{\beta}\left(\mathbb{R}^{2}\right)$ of $C^{\beta}$-cartoon-like functions, for $\beta\in\left(1,2\right]$ and $\alpha=\beta^{-1}$. In case of usual shearlets (i.e., for $\alpha=\frac{1}{2}$), this is a straightforward application of the analysis sparsity of $C^{2}$-cartoon-like functions with respect to shearlet systems.
But in case of $\alpha\neq\frac{1}{2}$, our $\alpha$-shearlet systems use the $\alpha$-parabolic scaling matrices ${\rm diag}\left(2^{j},2^{\alpha j}\right)$, while analysis sparsity of $C^{\beta}$-cartoon-like functions is only known with respect to $\beta$-shearlet systems, which use the scaling matrices ${\rm diag}\left(2^{\beta j/2},\,2^{j/2}\right)$. Bridging the gap between these two different shearlet systems is not too hard, but cumbersome, so that part of the proof for $\alpha\neq\frac{1}{2}$ is deferred to Section \ref{sec:CartoonLikeFunctionsAreBoundedInAlphaShearletSmoothness}, since most readers are probably mainly interested in the (easier) case of classical shearlets (i.e., $\alpha=\frac{1}{2}$). The obtained approximation rate is almost \emph{optimal} (cf.\@ \cite[Theorem 2.8]{CartoonApproximationWithAlphaCurvelets}) if one restricts to systems where the $N$-term approximation is formed under a certain \emph{polynomial search depth restriction}. But in the main text of the paper, we just construct \emph{some} $N$-term approximation, which does not necessarily fulfill this restriction concerning the search depth. In Section \ref{sec:PolynomialSearchDepth}, we give a modified proof which shows that one can indeed retain the same approximation rate, \emph{even under a polynomial search depth restriction}. Finally, in Section \ref{sec:EmbeddingsBetweenAlphaShearletSmoothness} we \emph{completely} characterize the existence of embeddings $\mathscr{S}_{\alpha_{1},s_{1}}^{p_{1},q_{1}}\left(\mathbb{R}^{2}\right)\hookrightarrow\mathscr{S}_{\alpha_{2},s_{2}}^{p_{2},q_{2}}\left(\mathbb{R}^{2}\right)$ between $\alpha$-shearlet smoothness spaces for different values of $\alpha$. Effectively, this characterizes the cases in which one can obtain sparsity with respect to $\alpha_{2}$-shearlets \emph{when the only knowledge available is a certain sparsity with respect to $\alpha_{1}$-shearlets}. \subsection{Notation} \label{subsec:Notation}We write $\mathbb{N}=\mathbb{Z}_{\geq1}$ for the set of \textbf{natural numbers} and $\mathbb{N}_{0}=\mathbb{Z}_{\geq0}$ for the set of natural numbers including $0$. For a matrix $A\in\mathbb{R}^{d\times d}$, we denote by $A^{T}$ the transpose of $A$. The norm $\left\Vert A\right\Vert $ of $A$ is the usual \textbf{operator norm} of $A$, acting on $\mathbb{R}^{d}$ equipped with the usual euclidean norm $\left|\bullet\right|=\left\Vert \bullet\right\Vert _{2}$. The \textbf{open euclidean ball} of radius $r>0$ around $x\in\mathbb{R}^{d}$ is denoted by $B_{r}\left(x\right)$. For a linear (bounded) operator $T:X\to Y$ between (quasi)-normed spaces $X,Y$, we denote the \textbf{operator norm} of $T$ by \[ \vertiii T:=\vertiii T_{X\to Y}:=\sup_{\left\Vert x\right\Vert _{X}\leq1}\left\Vert Tx\right\Vert _{Y}.\vspace{-0.05cm} \] For an arbitrary set $M$, we let $\left|M\right|\in\mathbb{N}_{0}\cup\left\{ \infty\right\} $ denote the number of elements of the set. For $n\in\mathbb{N}_{0}$, we write $\underline{n}:=\left\{ 1,\dots,n\right\} $; in particular, $\underline{0}=\emptyset$. For the \textbf{closure} of a subset $M$ of some topological space, we write $\overline{M}$. The $d$-dimensional \textbf{Lebesgue measure} of a (measurable) set $M\subset\mathbb{R}^{d}$ is denoted by $\lambda\left(M\right)$ or by $\lambda_{d}\left(M\right)$. Occasionally, we will also use the constant $s_{d}:=\mathcal{H}^{d-1}\left(S^{d-1}\right)$, the \textbf{surface area of the euclidean unit-sphere} $S^{d-1}\subset\mathbb{R}^{d}$.
The \textbf{complex conjugate} of $z\in\mathbb{C}$ is denoted by $\overline{z}$. We use the convention $x^{0}=1$ for all $x\in\left[0,\infty\right)$, even for $x=0$. For a subset $M\subset B$ of a fixed \emph{base set} $B$ (which is usually implied by the context), we define the \textbf{indicator function} (or \textbf{characteristic function}) ${\mathds{1}}_{M}$ of the set $M$ by \[ {\mathds{1}}_{M}:B\to\left\{ 0,1\right\} ,x\mapsto\begin{cases} 1, & \text{if }x\in M,\\ 0, & \text{otherwise}. \end{cases} \] The \textbf{translation} and \textbf{modulation} of a function $f:\mathbb{R}^{d}\to\mathbb{C}^{k}$ by $x\in\mathbb{R}^{d}$ or $\xi\in\mathbb{R}^{d}$ are, respectively, denoted by \[ L_{x}f:\mathbb{R}^{d}\to\mathbb{C}^{k},y\mapsto f\left(y-x\right),\qquad\text{ and }\qquad M_{\xi}f:\mathbb{R}^{d}\to\mathbb{C}^{k},y\mapsto e^{2\pi i\left\langle \xi,y\right\rangle }f\left(y\right). \] Furthermore, for $g:\mathbb{R}^{d}\to\mathbb{C}^{k}$, we use the notation $\widetilde{g}$ for the function $\widetilde{g}:\mathbb{R}^{d}\to\mathbb{C}^{k},x\mapsto g\left(-x\right)$. For the \textbf{Fourier transform}, we use the convention $\widehat{f}\left(\xi\right):=\left(\mathcal{F} f\right)\left(\xi\right):=\int_{\mathbb{R}^{d}}f\left(x\right)\cdot e^{-2\pi i\left\langle x,\xi\right\rangle }\operatorname{d} x$ for $f\in L^{1}\left(\mathbb{R}^{d}\right)$. It is well-known that the Fourier transform extends to a unitary automorphism $\mathcal{F}:L^{2}\left(\mathbb{R}^{d}\right)\to L^{2}\left(\mathbb{R}^{d}\right)$. The inverse of this map is the continuous extension of the inverse Fourier transform, given by $\left(\mathcal{F}^{-1}f\right)\left(x\right)=\int_{\mathbb{R}^{d}}f\left(\xi\right)\cdot e^{2\pi i\left\langle x,\xi\right\rangle }\operatorname{d}\xi$ for $f\in L^{1}\left(\mathbb{R}^{d}\right)$. We will make frequent use of the space $\mathcal{S}\left(\mathbb{R}^{d}\right)$ of \textbf{Schwartz functions} and its topological dual space $\mathcal{S}'\left(\mathbb{R}^{d}\right)$, the space of \textbf{tempered distributions}. For more details on these spaces, we refer to \cite[Section 9]{FollandRA}; in particular, we note that the Fourier transform restricts to a linear homeomorphism $\mathcal{F}:\mathcal{S}\left(\mathbb{R}^{d}\right)\to\mathcal{S}\left(\mathbb{R}^{d}\right)$; by duality, we can thus define $\mathcal{F}:\mathcal{S}'\left(\mathbb{R}^{d}\right)\to\mathcal{S}'\left(\mathbb{R}^{d}\right)$ by $\mathcal{F}\varphi=\varphi\circ\mathcal{F}$ for $\varphi\in\mathcal{S}'\left(\mathbb{R}^{d}\right)$. Given an open subset $U\subset\mathbb{R}^{d}$, we let $\DistributionSpace U$ denote the space of \textbf{distributions} on $U$, i.e., the topological dual space of the space of test functions $\TestFunctionSpace U$. For the precise definition of the topology on $\TestFunctionSpace U$, we refer to \cite[Chapter 6]{RudinFA}. We remark that the dual pairings $\left\langle \cdot,\cdot\right\rangle _{\mathcal{D}',\mathcal{D}}$ and $\left\langle \cdot,\cdot\right\rangle _{\mathcal{S}',\mathcal{S}}$ are always taken to be \emph{bilinear} instead of sesquilinear. Occasionally, we will make use of the \textbf{Sobolev space} \[ W^{N,p}\left(\smash{\mathbb{R}^{d}}\right)=\left\{ f\in L^{p}\left(\smash{\mathbb{R}^{d}}\right)\,\middle|\,\forall\alpha\in\mathbb{N}_{0}^{d}\text{ with }\left|\alpha\right|\leq N:\quad\partial^{\alpha}f\in L^{p}\left(\smash{\mathbb{R}^{d}}\right)\right\} \qquad\text{ with }p\in\left[1,\infty\right].
\] Here, as usual for Sobolev spaces, the partial derivatives $\partial^{\alpha}f$ have to be understood in the distributional sense. Furthermore, we will use the notations $\left\lceil x\right\rceil :=\min\left\{ k\in\mathbb{Z}\,\middle|\, k\geq x\right\} $ and $\left\lfloor x\right\rfloor :=\max\left\{ k\in\mathbb{Z}\,\middle|\, k\leq x\right\} $ for $x\in\mathbb{R}$. We observe $\left\lfloor x\right\rfloor \leq x<\left\lfloor x\right\rfloor +1$ and $\left\lceil x\right\rceil -1<x\leq\left\lceil x\right\rceil $. Sometimes, we also write $x_{+}:=\left(x\right)_{+}:=\max\left\{ 0,x\right\} $ for $x\in\mathbb{R}$. Finally, we will frequently make use of the \textbf{shearing matrices} $S_{x}$, the \textbf{$\alpha$-parabolic dilation matrices} $D_{b}^{\left(\alpha\right)}$ and the involutive matrix $R$, given by \begin{equation} S_{x}:=\left(\begin{matrix}1 & x\\ 0 & 1 \end{matrix}\right),\quad\text{ and }\quad D_{b}^{\left(\alpha\right)}:=\left(\begin{matrix}b & 0\\ 0 & b^{\alpha} \end{matrix}\right),\quad\text{ as well as }\quad R:=\left(\begin{matrix}0 & 1\\ 1 & 0 \end{matrix}\right),\label{eq:StandardMatrices} \end{equation} for $x\in\mathbb{R}$ and $\alpha,b\in\left[0,\infty\right)$. \section{Structured Banach frame decompositions of decomposition spaces — A crash course} \label{sec:BanachFrameDecompositionCrashCourse}In this section, we give a brief introduction to the theory of structured Banach frames and atomic decompositions for decomposition spaces that was recently developed by one of the authors in \cite{StructuredBanachFrames}. We start with a crash course on decomposition spaces. These are defined using a suitable covering $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}$ of (a subset of) the \emph{frequency space} $\mathbb{R}^{d}$. For the decomposition spaces to be well-defined and for the theory in \cite{StructuredBanachFrames} to be applicable, the covering $\mathcal{Q}$ needs to be a \textbf{semi-structured covering} for which a \textbf{regular partition of unity} exists. For this, it suffices if $\mathcal{Q}$ is an \textbf{almost structured covering}. Since the notion of almost structured coverings is somewhat easier to understand than general semi-structured coverings, we will restrict ourselves to this concept. \begin{defn} \label{def:AlmostStructuredCovering}Let $\emptyset\neq\mathcal{O}\subset\mathbb{R}^{d}$ be open. A family $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}$ is called an \textbf{almost structured covering} of $\mathcal{O}$, if for each $i\in I$, there is an invertible matrix $T_{i}\in\mathrm{GL}\left(\mathbb{R}^{d}\right)$, a translation $b_{i}\in\mathbb{R}^{d}$ and an open, bounded set $Q_{i}'\subset\mathbb{R}^{d}$ such that the following conditions are fulfilled: \begin{enumerate} \item We have $Q_{i}=T_{i}Q_{i}'+b_{i}$ for all $i\in I$. \item We have $Q_{i}\subset\mathcal{O}$ for all $i\in I$. \item $\mathcal{Q}$ is \textbf{admissible}, i.e., there is some $N_{\mathcal{Q}}\in\mathbb{N}$ satisfying $\left|i^{\ast}\right|\leq N_{\mathcal{Q}}$ for all $i\in I$, where the \textbf{index-cluster} $i^{\ast}$ is defined as \begin{equation} i^{\ast}:=\left\{ \ell\in I\,\middle|\, Q_{\ell}\cap Q_{i}\neq\emptyset\right\} \qquad\text{ for }i\in I.\label{eq:IndexClusterDefinition} \end{equation} \item There is a constant $C_{\mathcal{Q}}>0$ satisfying $\left\Vert T_{i}^{-1}T_{j}\right\Vert \leq C_{\mathcal{Q}}$ for all $i\in I$ and all $j\in i^{\ast}$. 
\item For each $i\in I$, there is an open set $P_{i}'\subset\mathbb{R}^{d}$ with the following additional properties: \begin{enumerate} \item $\overline{P_{i}'}\subset Q_{i}'$ for all $i\in I$. \item The sets $\left\{ P_{i}'\,\middle|\, i\in I\right\} $ and $\left\{ Q_{i}'\,\middle|\, i\in I\right\} $ are finite. \item We have $\mathcal{O}\subset\bigcup_{i\in I}\left(T_{i}P_{i}'+b_{i}\right)$.\qedhere \end{enumerate} \end{enumerate} \end{defn} \begin{rem*} \begin{itemize}[leftmargin=0.4cm] \item In the following, if we require $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}=\left(T_{i}Q_{i}'+b_{i}\right)_{i\in I}$ to be an almost structured covering of $\mathcal{O}$, it is always implicitly understood that $T_{i},Q_{i}'$ and $b_{i}$ are chosen in such a way that the conditions in Definition \ref{def:AlmostStructuredCovering} are satisfied. \item Since each set $Q_{i}'$ is bounded and since the set $\left\{ Q_{i}'\,\middle|\, i\in I\right\} $ is finite, the family $\left(Q_{i}'\right)_{i\in I}$ is uniformly bounded, i.e., there is some $R_{\mathcal{Q}}>0$ satisfying $Q_{i}'\subset\overline{B_{R_{\mathcal{Q}}}}\left(0\right)$ for all $i\in I$.\qedhere \end{itemize} \end{rem*} A crucial property of almost structured coverings is that these always admit a \textbf{regular partition of unity}, a notion which was originally introduced in \cite[Definition 2.4]{DecompositionIntoSobolev}. \begin{defn} \label{def:RegularPartitionOfUnity}Let $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}=\left(T_{i}Q_{i}'+b_{i}\right)_{i\in I}$ be an almost structured covering of the open set $\emptyset\neq\mathcal{O}\subset\mathbb{R}^{d}$. We say that the family $\Phi=\left(\varphi_{i}\right)_{i\in I}$ is a \textbf{regular partition of unity} subordinate to $\mathcal{Q}$ if the following hold: \begin{enumerate} \item We have $\varphi_{i}\in\TestFunctionSpace{\mathcal{O}}$ with $\operatorname{supp}\varphi_{i}\subset Q_{i}$ for all $i\in I$. \item We have $\sum_{i\in I}\varphi_{i}\equiv1$ on $\mathcal{O}$. \item For each $\alpha\in\mathbb{N}_{0}^{d}$, the constant \[ C^{\left(\alpha\right)}:=\sup_{i\in I}\left\Vert \partial^{\alpha}\smash{\varphi_{i}^{\natural}}\right\Vert _{\sup} \] is finite, where for each $i\in I$, the \textbf{normalized version} $\varphi_{i}^{\natural}$ of $\varphi_{i}$ is defined as \[ \varphi_{i}^{\natural}:\mathbb{R}^{d}\to\mathbb{C},\xi\mapsto\varphi_{i}\left(T_{i}\xi+b_{i}\right).\qedhere \] \end{enumerate} \end{defn} \begin{thm} (cf.\@ \cite[Theorem 2.8]{DecompositionIntoSobolev} and see \cite[Proposition 1]{BorupNielsenDecomposition} for a similar statement) Every almost structured covering $\mathcal{Q}$ of an open subset $\emptyset\neq\mathcal{O}\subset\mathbb{R}^{d}$ admits a regular partition of unity $\Phi=\left(\varphi_{i}\right)_{i\in I}$ subordinate to $\mathcal{Q}$. \end{thm} Before we can give the formal definition of decomposition spaces, we need one further notion: \begin{defn} \label{def:QModerateWeightClusteringMap}(cf.\@ \cite[Definition 3.1]{DecompositionSpaces1}) Let $\emptyset\neq\mathcal{O}\subset\mathbb{R}^{d}$ be open and assume that $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}$ is an almost structured covering of $\mathcal{O}$. A \textbf{weight} $w$ on the index set $I$ is simply a sequence $w=\left(w_{i}\right)_{i\in I}$ of positive numbers $w_{i}>0$. 
The weight $w$ is called \textbf{$\mathcal{Q}$-moderate} if there is a constant $C_{\mathcal{Q},w}>0$ satisfying \begin{equation} w_{j}\leq C_{\mathcal{Q},w}\cdot w_{i}\qquad\forall\:i\in I\text{ and all }j\in i^{\ast}.\label{eq:ModerateWeightDefinition} \end{equation} For an arbitrary weight $w=\left(w_{i}\right)_{i\in I}$ on $I$ and $q\in\left(0,\infty\right]$ we define the \textbf{weighted $\ell^{q}$ space} $\ell_{w}^{q}\left(I\right)$ as \[ \ell_{w}^{q}\left(I\right):=\left\{ \left(c_{i}\right)_{i\in I}\in\mathbb{C}^{I}\,\middle|\,\left(w_{i}\cdot c_{i}\right)_{i\in I}\in\ell^{q}\left(I\right)\right\} , \] equipped with the natural (quasi)-norm $\left\Vert \left(c_{i}\right)_{i\in I}\right\Vert _{\ell_{w}^{q}}:=\left\Vert \left(w_{i}\cdot c_{i}\right)_{i\in I}\right\Vert _{\ell{}^{q}}$. We will also use the notation $\left\Vert c\right\Vert _{\ell_{w}^{q}}$ for arbitrary sequences $c=\left(c_{i}\right)_{i\in I}\in\left[0,\infty\right]^{I}$ with the understanding that $\left\Vert c\right\Vert _{\ell_{w}^{q}}=\infty$ if $c_{i}=\infty$ for some $i\in I$ or if $c\notin\ell_{w}^{q}\left(I\right)$. \end{defn} Now, we can finally give a precise definition of decomposition spaces. We begin with the (easier) case of the so-called \textbf{Fourier-side decomposition spaces}. \begin{defn} \label{def:FourierSideDecompositionSpaces}Let $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}$ be an almost structured covering of the open set $\emptyset\neq\mathcal{O}\subset\mathbb{R}^{d}$, let $w=\left(w_{i}\right)_{i\in I}$ be a $\mathcal{Q}$-moderate weight on $I$ and let $p,q\in\left(0,\infty\right]$. Finally, let $\Phi=\left(\varphi_{i}\right)_{i\in I}$ be a regular partition of unity subordinate to $\mathcal{Q}$. We then define the associated \textbf{Fourier-side decomposition space (quasi)-norm} as \[ \left\Vert g\right\Vert _{\FourierDecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}}:=\left\Vert \left(\left\Vert \mathcal{F}^{-1}\left(\varphi_{i}\cdot g\right)\right\Vert _{L^{p}}\right)_{i\in I}\right\Vert _{\ell_{w}^{q}}\in\left[0,\infty\right]\qquad\text{ for each distribution }g\in\DistributionSpace{\mathcal{O}}. \] The associated \textbf{Fourier-side decomposition space} is simply \[ \FourierDecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}:=\left\{ g\in\DistributionSpace{\mathcal{O}}\,\middle|\,\left\Vert g\right\Vert _{\FourierDecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}}<\infty\right\} .\qedhere \] \end{defn} \begin{rem*} Before we continue with the definition of the actual (space-side) decomposition spaces, a few remarks are in order: \begin{itemize}[leftmargin=0.4cm] \item The expression $\left\Vert \mathcal{F}^{-1}\left(\varphi_{i}\cdot g\right)\right\Vert _{L^{p}}\in\left[0,\infty\right]$ makes sense for each $i\in I$, since $\varphi_{i}\in\TestFunctionSpace{\mathcal{O}}$, so that $\varphi_{i}\cdot g$ is a compactly supported distribution on $\mathbb{R}^{d}$ (and thus also a tempered distribution), so that the Paley-Wiener theorem (see e.g.\@ \cite[Theorem 7.23]{RudinFA}) shows that the tempered distribution $\mathcal{F}^{-1}\left(\varphi_{i}\cdot g\right)$ is given by (integration against) a smooth function of which we can take the $L^{p}$ quasi-norm. \item The notations $\left\Vert g\right\Vert _{\FourierDecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}}$ and $\FourierDecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$ both suppress the specific regular partition of unity $\Phi$ that was chosen. 
This is justified, since \cite[Corollary 3.18]{DecompositionEmbedding} shows that any two $L^{p}$-BAPUs\footnote{The exact definition of an $L^{p}$-BAPU is not important for us. The interested reader can find the definition in \cite[Definition 3.5]{DecompositionEmbedding}.} $\Phi,\Psi$ yield equivalent quasi-norms and thus the same (Fourier-side) decomposition spaces. This suffices, since \cite[Corollary 2.7]{DecompositionIntoSobolev} shows that every regular partition of unity is also an $L^{p}$-BAPU for $\mathcal{Q}$, for arbitrary $p\in\left(0,\infty\right]$. \item Finally, \cite[Theorem 3.21]{DecompositionEmbedding} shows that $\FourierDecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$ is a Quasi-Banach space.\qedhere \end{itemize} \end{rem*} \begin{defn} \label{def:SpaceSideDecompositionSpaces}For an open set $\emptyset\neq\mathcal{O}\subset\mathbb{R}^{d}$, let $Z\left(\mathcal{O}\right):=\mathcal{F}\left(\TestFunctionSpace{\mathcal{O}}\right)\subset\mathcal{S}\left(\mathbb{R}^{d}\right)$ and equip this space with the unique topology which makes the Fourier transform $\mathcal{F}:\TestFunctionSpace{\mathcal{O}}\to Z\left(\mathcal{O}\right),\varphi\mapsto\widehat{\varphi}$ into a homeomorphism. The topological dual space of $Z\left(\mathcal{O}\right)$ is denoted by $Z'\left(\mathcal{O}\right)$. By duality, we define the Fourier transform on $Z'\left(\mathcal{O}\right)$ by $\widehat{g}:=\mathcal{F} g:=g\circ\mathcal{F}\in\DistributionSpace{\mathcal{O}}$ for $g\in Z'\left(\mathcal{O}\right)$. Finally, under the assumptions of Definition \ref{def:FourierSideDecompositionSpaces}, we define the \textbf{(space-side) decomposition space} associated to the parameters $\mathcal{Q},p,q,w$ as \[ \DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}:=\left\{ g\in Z'\left(\mathcal{O}\right)\,\middle|\,\left\Vert g\right\Vert _{\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}}:=\left\Vert \widehat{g}\right\Vert _{\FourierDecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}}<\infty\right\} . \] It is not hard to see that the Fourier transform $\mathcal{F}:Z'\left(\mathcal{O}\right)\to\DistributionSpace{\mathcal{O}}$ is an isomorphism which restricts to an isometric isomorphism $\mathcal{F}:\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}\to\FourierDecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$. \end{defn} \begin{rem*} For an explanation why the reservoirs $\DistributionSpace{\mathcal{O}}$ and $Z'\left(\mathcal{O}\right)$ are the correct choices for defining $\FourierDecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$ and $\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$, even in case of $\mathcal{O}=\mathbb{R}^{d}$, we refer to \cite[Remark 3.13]{DecompositionEmbedding}. \end{rem*} Now that we have formally introduced the notion of decomposition spaces, we present the framework developed in \cite{StructuredBanachFrames} for the construction of Banach frames and atomic decompositions for these spaces. To this end, we introduce the following set of notations and standing assumptions: \begin{assumption} \label{assu:CrashCourseStandingAssumptions}We fix an almost structured covering $\mathcal{Q}=\left(T_{i}Q_{i}'+b_{i}\right)_{i\in I}$ with associated regular partition of unity $\Phi=\left(\varphi_{i}\right)_{i\in I}$ for the remainder of the section. By definition of an almost structured covering, the set $\left\{ Q_{i}'\,\middle|\, i\in I\right\} $ is finite. 
Hence, we have $\left\{ Q_{i}'\,\middle|\, i\in I\right\} =\left\{ \smash{Q_{0}^{\left(1\right)}},\dots,\smash{Q_{0}^{\left(n\right)}}\right\} \vphantom{Q_{0}^{\left(n\right)}}$ for certain (not necessarily distinct) open, bounded subsets $Q_{0}^{\left(1\right)},\dots,Q_{0}^{\left(n\right)}\subset\mathbb{R}^{d}$. In particular, for each $i\in I$, there is some $k_{i}\in\underline{n}$ satisfying $Q_{i}'=Q_{0}^{\left(k_{i}\right)}$. We fix the choice of $n\in\mathbb{N}$, of the sets $Q_{0}^{\left(1\right)},\dots,Q_{0}^{\left(n\right)}$ and of the map $I\to\underline{n},i\mapsto k_{i}$ for the remainder of the section. \end{assumption} Finally, we need a suitable \textbf{coefficient space} for our Banach frames and atomic decompositions: \begin{defn} \label{def:CoefficientSpace}For given $p,q\in\left(0,\infty\right]$ and a given weight $w=\left(w_{i}\right)_{i\in I}$ on $I$, we define the associated \textbf{coefficient space} as \[ \begin{split}C_{w}^{p,q} & :=\ell_{\left(\left|\det T_{i}\right|^{\frac{1}{2}-\frac{1}{p}}\cdot w_{i}\right)_{i\in I}}^{q}\!\!\!\!\!\left(\left[\ell^{p}\left(\mathbb{Z}^{d}\right)\right]_{i\in I}\right)\\ & :=\left\{ c=\left(\smash{c_{k}^{\left(i\right)}}\right)_{i\in I,k\in\mathbb{Z}^{d}}\,\middle|\,\left\Vert c\right\Vert _{C_{w}^{p,q}}:=\left\Vert \left(\left|\det T_{i}\right|^{\frac{1}{2}-\frac{1}{p}}\cdot w_{i}\cdot\left\Vert \left(\smash{c_{k}^{\left(i\right)}}\right)_{k\in\mathbb{Z}^{d}}\right\Vert _{\ell^{p}}\right)_{i\in I}\right\Vert _{\ell^{q}}<\infty\right\} \leq\mathbb{C}^{I\times\mathbb{Z}^{d}}.\qedhere \end{split} \] \end{defn} \begin{rem*} Observe that if $w_{i}=\left|\det T_{i}\right|^{\frac{1}{p}-\frac{1}{2}}$ and if $p=q$, then $C_{w}^{p,q}=\ell^{p}\left(I\times\mathbb{Z}^{d}\right)$, with equal (quasi)-norms. \end{rem*} Now that we have introduced the coefficient space $C_{w}^{p,q}$, we are in a position to discuss the existence criteria for Banach frames and atomic decompositions that were derived in \cite{StructuredBanachFrames}. We begin with the case of \textbf{Banach frames}. \begin{thm} \label{thm:BanachFrameTheorem}Let $w=\left(w_{i}\right)_{i\in I}$ be a $\mathcal{Q}$-moderate weight, let $\varepsilon,p_{0},q_{0}\in\left(0,1\right]$ and let $p,q\in\left(0,\infty\right]$ with $p\geq p_{0}$ and $q\geq q_{0}$. Define \[ N:=\left\lceil \frac{d+\varepsilon}{\min\left\{ 1,p\right\} }\right\rceil ,\qquad\tau:=\min\left\{ 1,p,q\right\} \qquad\text{ and }\qquad\sigma:=\tau\cdot\left(\frac{d}{\min\left\{ 1,p\right\} }+N\right). \] Let $\gamma_{1}^{\left(0\right)},\dots,\gamma_{n}^{\left(0\right)}:\mathbb{R}^{d}\to\mathbb{C}$ be given and define $\gamma_{i}:=\gamma_{k_{i}}^{\left(0\right)}$ for $i\in I$. Assume that the following conditions are satisfied: \begin{enumerate} \item We have $\gamma_{k}^{\left(0\right)}\in L^{1}\left(\mathbb{R}^{d}\right)$ and $\mathcal{F}\gamma_{k}^{\left(0\right)}\in C^{\infty}\left(\mathbb{R}^{d}\right)$ for all $k\in\underline{n}$, where all partial derivatives of $\mathcal{F}\gamma_{k}^{\left(0\right)}$ are polynomially bounded. \item We have $\gamma_{k}^{\left(0\right)}\in C^{1}\left(\mathbb{R}^{d}\right)$ and $\nabla\gamma_{k}^{\left(0\right)}\in L^{1}\left(\mathbb{R}^{d}\right)\cap L^{\infty}\left(\mathbb{R}^{d}\right)$ for all $k\in\underline{n}$. \item We have $\left[\mathcal{F}\gamma_{k}^{\left(0\right)}\right]\left(\xi\right)\neq0$ for all $\xi\in\overline{Q_{0}^{\left(k\right)}}$ and all $k\in\underline{n}$. 
\item We have \[ C_{1}:=\sup_{i\in I}\:\sum_{j\in I}M_{j,i}<\infty\quad\text{ and }\quad C_{2}:=\sup_{j\in I}\:\sum_{i\in I}M_{j,i}<\infty, \] where \[ \qquad\qquad M_{j,i}:=\left(\frac{w_{j}}{w_{i}}\right)^{\tau}\cdot\left(1+\left\Vert T_{j}^{-1}T_{i}\right\Vert \right)^{\sigma}\cdot\max_{\left|\beta\right|\leq1}\left(\left|\det T_{i}\right|^{-1}\cdot\int_{Q_{i}}\:\max_{\left|\alpha\right|\leq N}\left|\left(\left[\partial^{\alpha}\widehat{\partial^{\beta}\gamma_{j}}\right]\left(T_{j}^{-1}\left(\xi-b_{j}\right)\right)\right)\right|\operatorname{d}\xi\right)^{\tau}. \] \end{enumerate} Then there is some $\delta_{0}=\delta_{0}\left(p,q,w,\varepsilon,\left(\gamma_{i}\right)_{i\in I}\right)>0$ such that for arbitrary $0<\delta\leq\delta_{0}$, the family \[ \left(L_{\delta\cdot T_{i}^{-T}k}\:\widetilde{\gamma^{\left[i\right]}}\right)_{i\in I,k\in\mathbb{Z}^{d}}\quad\text{ with }\quad\gamma^{\left[i\right]}=\left|\det T_{i}\right|^{1/2}\cdot M_{b_{i}}\left[\gamma_{i}\circ T_{i}^{T}\right]\quad\text{ and }\quad\widetilde{\gamma^{\left[i\right]}}\left(x\right)=\gamma^{\left[i\right]}\left(-x\right) \] forms a \textbf{Banach frame} for $\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$. Precisely, this means the following: \begin{itemize}[leftmargin=0.7cm] \item The \textbf{analysis operator} \[ A^{\left(\delta\right)}:\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}\to C_{w}^{p,q},f\mapsto\left(\left[\smash{\gamma^{\left[i\right]}}\ast f\right]\left(\delta\cdot T_{i}^{-T}k\right)\right)_{i\in I,k\in\mathbb{Z}^{d}} \] is well-defined and bounded for each $\delta\in\left(0,1\right]$. Here, the convolution $\gamma^{\left[i\right]}\ast f$ is defined as \begin{equation} \left(\gamma^{\left[i\right]}\ast f\right)\left(x\right)=\sum_{\ell\in I}\mathcal{F}^{-1}\left(\widehat{\gamma^{\left[i\right]}}\cdot\varphi_{\ell}\cdot\widehat{f}\:\right)\left(x\right)\qquad\forall x\in\mathbb{R}^{d},\label{eq:SpecialConvolutionDefinition} \end{equation} where the series converges normally in $L^{\infty}\left(\mathbb{R}^{d}\right)$ and thus absolutely and uniformly, for each $f\in\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$. For a more convenient expression of $\left(\gamma^{\left[i\right]}\ast f\right)\left(x\right)$, at least for $f\in L^{2}\left(\mathbb{R}^{d}\right)\subset Z'\left(\mathcal{O}\right)$, see Lemma \ref{lem:SpecialConvolutionClarification}. \item For $0<\delta\leq\delta_{0}$, there is a bounded linear \textbf{reconstruction operator} $R^{\left(\delta\right)}:C_{w}^{p,q}\to\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$ satisfying $R^{\left(\delta\right)}\circ A^{\left(\delta\right)}=\operatorname{id}_{\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}}$. 
\item We have the following \textbf{consistency property}: If $\mathcal{Q}$-moderate weights $w^{\left(1\right)}=\left(\smash{w_{i}^{\left(1\right)}}\right)_{i\in I}$ and $w^{\left(2\right)}=\left(\smash{w_{i}^{\left(2\right)}}\right)_{i\in I}$ and exponents $p_{1},p_{2},q_{1},q_{2}\in\left(0,\infty\right]$ are chosen such that the assumptions of the current theorem are satisfied for $p_{1},q_{1},w^{\left(1\right)}$, as well as for $p_{2},q_{2},w^{\left(2\right)}$, and if $0<\delta\leq\min\left\{ \delta_{0}\left(p_{1},q_{1},w^{\left(1\right)},\varepsilon,\left(\gamma_{i}\right)_{i\in I}\right),\delta_{0}\left(p_{2},q_{2},w^{\left(2\right)},\varepsilon,\left(\gamma_{i}\right)_{i\in I}\right)\right\} $, then we have the following equivalence:
\[
\forall f\in\DecompSp{\mathcal{Q}}{p_{2}}{\ell_{w^{\left(2\right)}}^{q_{2}}}{}:\quad f\in\DecompSp{\mathcal{Q}}{p_{1}}{\ell_{w^{\left(1\right)}}^{q_{1}}}{}\Longleftrightarrow\left(\left[\smash{\gamma^{\left[i\right]}}\ast f\right]\left(\delta\cdot T_{i}^{-T}k\right)\right)_{i\in I,k\in\mathbb{Z}^{d}}\in C_{w^{\left(1\right)}}^{p_{1},q_{1}}.
\]
\end{itemize}
Finally, there is an estimate for the size of $\delta_{0}$ which is independent of the choice of $p\geq p_{0}$ and $q\geq q_{0}$: There is a constant $K=K\left(p_{0},q_{0},\varepsilon,d,\mathcal{Q},\Phi,\smash{\gamma_{1}^{\left(0\right)},\dots,\gamma_{n}^{\left(0\right)}}\right)>0$ such that we can choose
\[
\delta_{0}=1\big/\left[1+K\cdot C_{\mathcal{Q},w}^{4}\cdot\smash{\left(C_{1}^{1/\tau}+C_{2}^{1/\tau}\right)^{2}}\:\vphantom{\sum}\right].\qedhere
\]
\end{thm}
\begin{proof}
This is a special case of Theorem \ref{thm:WeightedBanachFrameTheorem}, for $\Omega_{0}=\Omega_{1}=1$, $K=0$ and $v=v_{0}\equiv1$.
\end{proof}
Now, we provide criteria which ensure that a given family of prototypes generates \textbf{atomic decompositions}.
\begin{thm}
\label{thm:AtomicDecompositionTheorem}Let $w=\left(w_{i}\right)_{i\in I}$ be a $\mathcal{Q}$-moderate weight, let $\varepsilon,p_{0},q_{0}\in\left(0,1\right]$ and let $p,q\in\left(0,\infty\right]$ with $p\geq p_{0}$ and $q\geq q_{0}$. Define
\[
N:=\left\lceil \frac{d+\varepsilon}{\min\left\{ 1,p\right\} }\right\rceil ,\qquad\tau:=\min\left\{ 1,p,q\right\} ,\qquad\vartheta:=\left(\frac{1}{p}-1\right)_{+},\qquad\text{ and }\qquad\varUpsilon:=1+\frac{d}{\min\left\{ 1,p\right\} },
\]
as well as
\[
\sigma:=\begin{cases}
\tau\cdot\left(d+1\right), & \text{if }p\in\left[1,\infty\right],\\
\tau\cdot\left(p^{-1}\cdot d+\left\lceil p^{-1}\cdot\left(d+\varepsilon\right)\right\rceil \right), & \text{if }p\in\left(0,1\right).
\end{cases}
\]
Let $\gamma_{1}^{\left(0\right)},\dots,\gamma_{n}^{\left(0\right)}:\mathbb{R}^{d}\to\mathbb{C}$ be given and define $\gamma_{i}:=\gamma_{k_{i}}^{\left(0\right)}$ for $i\in I$. Assume that there are functions $\gamma_{1}^{\left(0,j\right)},\dots,\gamma_{n}^{\left(0,j\right)}$ for $j\in\left\{ 1,2\right\} $ such that the following conditions are satisfied:
\begin{enumerate}
\item We have $\gamma_{k}^{\left(0,1\right)}\in L^{1}\left(\mathbb{R}^{d}\right)$ for all $k\in\underline{n}$.
\item We have $\gamma_{k}^{\left(0,2\right)}\in C^{1}\left(\mathbb{R}^{d}\right)$ for all $k\in\underline{n}$.
\item We have \[ \Omega^{\left(p\right)}:=\max_{k\in\underline{n}}\left\Vert \gamma_{k}^{\left(0,2\right)}\right\Vert _{\varUpsilon}+\max_{k\in\underline{n}}\left\Vert \nabla\gamma_{k}^{\left(0,2\right)}\right\Vert _{\varUpsilon}<\infty, \] where $\left\Vert f\right\Vert _{\varUpsilon}=\sup_{x\in\mathbb{R}^{d}}\left(1+\left|x\right|\right)^{\varUpsilon}\cdot\left|f\left(x\right)\right|$ for $f:\mathbb{R}^{d}\to\mathbb{C}^{\ell}$ and (arbitrary) $\ell\in\mathbb{N}$. \item We have $\mathcal{F}\gamma_{k}^{\left(0,j\right)}\in C^{\infty}\left(\mathbb{R}^{d}\right)$ and all partial derivatives of $\mathcal{F}\gamma_{k}^{\left(0,j\right)}$ are polynomially bounded for all $k\in\underline{n}$ and $j\in\left\{ 1,2\right\} $. \item We have $\gamma_{k}^{\left(0\right)}=\gamma_{k}^{\left(0,1\right)}\ast\gamma_{k}^{\left(0,2\right)}$ for all $k\in\underline{n}$. \item We have $\left[\mathcal{F}\gamma_{k}^{\left(0\right)}\right]\left(\xi\right)\neq0$ for all $\xi\in\overline{Q_{0}^{\left(k\right)}}$ and all $k\in\underline{n}$. \item We have $\left\Vert \gamma_{k}^{\left(0\right)}\right\Vert _{\varUpsilon}<\infty$ for all $k\in\underline{n}$. \item We have \[ K_{1}:=\sup_{i\in I}\:\sum_{j\in I}N_{i,j}<\infty\qquad\text{ and }\qquad K_{2}:=\sup_{j\in I}\:\sum_{i\in I}N_{i,j}<\infty, \] where $\gamma_{j,1}:=\gamma_{k_{j}}^{\left(0,1\right)}$ for $j\in I$ and \[ \qquad\qquad N_{i,j}:=\left(\frac{w_{i}}{w_{j}}\cdot\left(\left|\det T_{j}\right|\big/\left|\det T_{i}\right|\right)^{\vartheta}\right)^{\tau}\!\!\cdot\left(1\!+\!\left\Vert T_{j}^{-1}T_{i}\right\Vert \right)^{\sigma}\!\cdot\left(\left|\det T_{i}\right|^{-1}\!\cdot\int_{Q_{i}}\:\max_{\left|\alpha\right|\leq N}\left|\left[\partial^{\alpha}\widehat{\gamma_{j,1}}\right]\left(T_{j}^{-1}\left(\xi\!-\!b_{j}\right)\right)\right|\operatorname{d}\xi\right)^{\tau}. \] \end{enumerate} Then there is some $\delta_{0}\in\left(0,1\right]$ such that the family \[ \Psi_{\delta}:=\left(L_{\delta\cdot T_{i}^{-T}k}\:\gamma^{\left[i\right]}\right)_{i\in I,\,k\in\mathbb{Z}^{d}}\qquad\text{ with }\qquad\gamma^{\left[i\right]}=\left|\det T_{i}\right|^{1/2}\cdot M_{b_{i}}\left[\gamma_{i}\circ T_{i}^{T}\right] \] forms an \textbf{atomic decomposition} of $\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$, for all $\delta\in\left(0,\delta_{0}\right]$. Precisely, this means the following: \begin{itemize}[leftmargin=0.7cm] \item The \textbf{synthesis map} \[ S^{\left(\delta\right)}:C_{w}^{p,q}\to\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{},\left(\smash{c_{k}^{\left(i\right)}}\right)_{i\in I,\,k\in\mathbb{Z}^{d}}\mapsto\sum_{i\in I}\:\sum_{k\in\mathbb{Z}^{d}}\left[c_{k}^{\left(i\right)}\cdot L_{\delta\cdot T_{i}^{-T}k}\:\gamma^{\left[i\right]}\right] \] is well-defined and bounded for every $\delta\in\left(0,1\right]$. \item For $0<\delta\leq\delta_{0}$, there is a bounded linear \textbf{coefficient map} $C^{\left(\delta\right)}:\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}\to C_{w}^{p,q}$ satisfying \[ S^{\left(\delta\right)}\circ C^{\left(\delta\right)}=\operatorname{id}_{\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}}. 
\]
\end{itemize}
Finally, there is an estimate for the size of $\delta_{0}$ which is independent of $p\geq p_{0}$ and $q\geq q_{0}$: There is a constant $K=K\left(p_{0},q_{0},\varepsilon,d,\mathcal{Q},\Phi,\smash{\gamma_{1}^{\left(0\right)},\dots,\gamma_{n}^{\left(0\right)}}\right)>0$ such that we can choose
\[
\delta_{0}=\min\left\{ 1,\,\left[K\cdot\Omega^{\left(p\right)}\cdot\left(K_{1}^{1/\tau}+K_{2}^{1/\tau}\right)\right]^{-1}\right\} .\qedhere
\]
\end{thm}
\begin{rem*}
\begin{itemize}[leftmargin=0.4cm]
\item Convergence of the series defining $S^{\left(\delta\right)}$ has to be understood as follows: For each $i\in I$, the series
\[
\sum_{k\in\mathbb{Z}^{d}}\left[c_{k}^{\left(i\right)}\cdot L_{\delta\cdot T_{i}^{-T}k}\:\gamma^{\left[i\right]}\right]
\]
converges pointwise absolutely to a function $g_{i}\in L_{{\rm loc}}^{1}\left(\mathbb{R}^{d}\right)\cap\mathcal{S}'\left(\mathbb{R}^{d}\right)$ and the series $\sum_{i\in I}\,g_{i}=S^{\left(\delta\right)}\left(\smash{c_{k}^{\left(i\right)}}\right)_{i\in I,k\in\mathbb{Z}^{d}}$ converges unconditionally in the weak-$\ast$-sense in $Z'\left(\mathcal{O}\right)$, i.e., for every $\phi\in Z\left(\mathcal{O}\right)=\mathcal{F}\left(\TestFunctionSpace{\mathcal{O}}\right)$, the series $\sum_{i\in I}\left\langle g_{i},\,\phi\right\rangle _{\mathcal{S}',\mathcal{S}}$ converges absolutely and the functional $\phi\mapsto\sum_{i\in I}\left\langle g_{i},\,\phi\right\rangle _{\mathcal{S}',\mathcal{S}}$ is continuous on $Z\left(\mathcal{O}\right)$.
\item The action of $C^{\left(\delta\right)}$ on a given $f\in\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$ is \emph{independent} of the precise choice of $p,q,w$, as long as $C^{\left(\delta\right)}f$ is defined at all.\qedhere
\end{itemize}
\end{rem*}
\begin{proof}[Proof of Theorem \ref{thm:AtomicDecompositionTheorem}]
This is a special case of Theorem \ref{thm:WeightedAtomicDecompositionTheorem}, for $\Omega_{0}=\Omega_{1}=1$ and $v=v_{0}\equiv1$.
\end{proof}
The main limitation of Theorem \ref{thm:AtomicDecompositionTheorem}—in comparison to Theorem \ref{thm:BanachFrameTheorem}—is that we require each $\gamma_{k}^{\left(0\right)}$ to be factorized as a convolution product $\gamma_{k}^{\left(0\right)}=\gamma_{k}^{\left(0,1\right)}\ast\gamma_{k}^{\left(0,2\right)}$, which is tedious to verify. To simplify such verifications, the following result is helpful:
\begin{prop}
\label{prop:ConvolutionFactorization}(cf.\@ \cite[Lemma 6.9]{StructuredBanachFrames}) Let $\varrho\in L^{1}\left(\mathbb{R}^{d}\right)$ with $\varrho\geq0$. Let $N\in\mathbb{N}$ with $N\geq d+1$ and assume that $\gamma\in L^{1}\left(\mathbb{R}^{d}\right)$ satisfies $\widehat{\gamma}\in C^{N}\left(\mathbb{R}^{d}\right)$ with
\[
\left|\partial^{\alpha}\widehat{\gamma}\left(\xi\right)\right|\leq\varrho\left(\xi\right)\cdot\left(1+\left|\xi\right|\right)^{-\left(d+1+\varepsilon\right)}\qquad\forall\xi\in\mathbb{R}^{d}\quad\forall\alpha\in\mathbb{N}_{0}^{d}\text{ with }\left|\alpha\right|\leq N
\]
for some $\varepsilon\in\left(0,1\right]$.
Then there are functions $\gamma_{1}\in C_{0}\left(\mathbb{R}^{d}\right)\cap L^{1}\left(\mathbb{R}^{d}\right)$ and $\gamma_{2}\in C^{1}\left(\mathbb{R}^{d}\right)\cap W^{1,1}\left(\mathbb{R}^{d}\right)$ with $\gamma=\gamma_{1}\ast\gamma_{2}$ and with the following additional properties: \begin{enumerate} \item We have $\left\Vert \gamma_{2}\right\Vert _{K}\leq s_{d}\cdot2^{1+d+3K}\cdot K!\cdot\left(1+d\right)^{1+2K}$ and $\left\Vert \nabla\gamma_{2}\right\Vert _{K}\leq\frac{s_{d}}{\varepsilon}\cdot2^{4+d+3K}\cdot\left(1+d\right)^{2\left(1+K\right)}\cdot\left(K+1\right)!$ for all $K\in\mathbb{N}_{0}$, where $\left\Vert g\right\Vert _{K}:=\sup_{x\in\mathbb{R}^{d}}\left(1+\left|x\right|\right)^{K}\left|g\left(x\right)\right|$. \item We have $\widehat{\gamma_{2}}\in C^{\infty}\left(\mathbb{R}^{d}\right)$ with all partial derivatives of $\widehat{\gamma_{2}}$ being polynomially bounded (even bounded). \item If $\widehat{\gamma}\in C^{\infty}\left(\mathbb{R}^{d}\right)$ with all partial derivatives being polynomially bounded, the same also holds for $\widehat{\gamma_{1}}$. \item We have $\left\Vert \gamma_{1}\right\Vert _{N}\leq\left(1+d\right)^{1+2N}\cdot2^{1+d+4N}\cdot N!\cdot\left\Vert \varrho\right\Vert _{L^{1}}$ and $\left\Vert \gamma\right\Vert _{N}\leq\left(1+d\right)^{N+1}\cdot\left\Vert \varrho\right\Vert _{L^{1}}$. \item We have $\left|\partial^{\alpha}\widehat{\gamma_{1}}\left(\xi\right)\right|\leq2^{1+d+4N}\cdot N!\cdot\left(1+d\right)^{N}\cdot\varrho\left(\xi\right)$ for all $\xi\in\mathbb{R}^{d}$ and $\alpha\in\mathbb{N}_{0}^{d}$ with $\left|\alpha\right|\leq N$.\qedhere \end{enumerate} \end{prop} \section{Definition and basic properties of \texorpdfstring{$\alpha$}{α}-shearlet smoothness spaces} \label{sec:AlphaShearletSmoothnessDefinition}In this section, we introduce the class of \textbf{$\alpha$-shearlet smoothness spaces}. These spaces are a generalization of the ``ordinary'' shearlet smoothness spaces as introduced by Labate et al.\cite{Labate_et_al_Shearlet}. Later on (cf.\@ Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}), it will turn out that these spaces simultaneously describe analysis and synthesis sparsity with respect to (suitable) $\alpha$-shearlet frames. We will define the $\alpha$-shearlet smoothness spaces as certain decomposition spaces. Thus, we first have to define the associated covering and the weight for the sequence space $\ell_{w}^{q}\left(I\right)$ that we will use: \begin{defn} \label{def:AlphaShearletCovering}Let $\alpha\in\left[0,1\right]$. The \textbf{$\alpha$-shearlet covering} $\mathcal{S}^{\left(\alpha\right)}$ is defined as \[ \mathcal{S}^{\left(\alpha\right)}:=\left(\smash{S_{i}^{\left(\alpha\right)}}\right)_{i\in I^{\left(\alpha\right)}}=\left(\smash{T_{i}}Q_{i}'\right)_{i\in I^{\left(\alpha\right)}}=\left(\smash{T_{i}}Q_{i}'+b_{i}\right)_{i\in I^{\left(\alpha\right)}}, \] where: \begin{itemize}[leftmargin=0.6cm] \item The \emph{index set} $I^{\left(\alpha\right)}$ is given by $I:=I^{\left(\alpha\right)}:=\left\{ 0\right\} \cup I_{0}$, where \[ \qquad I_{0}:=I_{0}^{(\alpha)}:=\left\{ \left(n,m,\varepsilon,\delta\right)\in\mathbb{N}_{0}\times\mathbb{Z}\times\left\{ \pm1\right\} \times\left\{ 0,1\right\} \,\middle|\,\left|m\right|\leq G_{n}\right\} \quad\text{ with }\quad G_{n}:=G_{n}^{\left(\alpha\right)}:=\left\lceil \smash{2^{n\left(1-\alpha\right)}}\right\rceil . 
\] \item The \emph{basic sets} $\left(Q_{i}'\right)_{i\in I^{\left(\alpha\right)}}$ are given by $Q_{0}':=\left(-1,1\right)^{2}$ and by $Q_{i}':=Q:=U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}$ for $i\in I_{0}^{\left(\alpha\right)}$, where we used the notation \begin{equation} U_{(a,b)}^{\left(\gamma,\mu\right)}:=\left\{ \begin{pmatrix}\xi\\ \eta \end{pmatrix}\in\left(\gamma,\mu\right)\times\mathbb{R}\left|\frac{\eta}{\xi}\in\left(a,b\right)\right.\right\} \quad\text{ for }a,b\in\mathbb{R}\text{ and }\gamma,\mu\in\left(0,\infty\right).\label{eq:BasicShearletSet} \end{equation} \item The \emph{matrices} $\left(T_{i}\right)_{i\in I^{\left(\alpha\right)}}$ are given by $T_{0}:=\operatorname{id}$ and by $T_{i}:=T_{i}^{\left(\alpha\right)}:=R^{\delta}\cdot A_{n,m,\varepsilon}^{\left(\alpha\right)}$, with $A_{n,m,\varepsilon}^{\left(\alpha\right)}:=\varepsilon\cdot D_{2^{n}}^{\left(\alpha\right)}\cdot S_{m}^{T}$ for $i=\left(n,m,\varepsilon,\delta\right)\in I_{0}^{\left(\alpha\right)}$. Here, the matrices $R,S_{x}$ and $D_{b}^{\left(\alpha\right)}$ are as in equation \eqref{eq:StandardMatrices}. \item The \emph{translations} $\left(b_{i}\right)_{i\in I^{\left(\alpha\right)}}$ are given by $b_{i}:=0$ for all $i\in I^{\left(\alpha\right)}$. \end{itemize} Finally, we define the \emph{weight} $w=\left(w_{i}\right)_{i\in I}$ by $w_{0}:=1$ and $w_{n,m,\varepsilon,\delta}:=2^{n}$ for $\left(n,m,\varepsilon,\delta\right)\in I_{0}$. \end{defn} Our first goal is to show that the covering $\mathcal{S}^{\left(\alpha\right)}$ is an almost structured covering of $\mathbb{R}^{2}$ (cf.\@ Definition \ref{def:AlmostStructuredCovering}). To this end, we begin with the following auxiliary lemma: \begin{lem} \label{lem:AlphaShearletCoveringAuxiliary} \begin{enumerate}[leftmargin=0.6cm] \item Using the notation $U_{\left(a,b\right)}^{\left(\gamma,\mu\right)}$ from equation \eqref{eq:BasicShearletSet} and the shearing matrices $S_{x}$ from equation \eqref{eq:StandardMatrices}, we have for arbitrary $m,a,b\in\mathbb{R}$ and $\kappa,\lambda,\gamma,\mu>0$ that \begin{equation} S_{m}^{T}U_{\left(a,b\right)}^{\left(\gamma,\mu\right)}=U_{\left(m+a,m+b\right)}^{\left(\gamma,\mu\right)}\qquad\text{ and }\qquad{\rm diag}\left(\lambda,\,\kappa\right)U_{\left(a,b\right)}^{\left(\gamma,\mu\right)}=U_{\left(\frac{\kappa}{\lambda}a,\frac{\kappa}{\lambda}b\right)}^{\left(\lambda\gamma,\lambda\mu\right)}.\label{eq:BaseSetTransformationRules} \end{equation} Consequently, \begin{equation} T_{i}^{\left(\alpha\right)}U_{\left(a,b\right)}^{\left(\gamma,\mu\right)}=\varepsilon\cdot U_{\left(2^{n\left(\alpha-1\right)}\left(m+a\right),2^{n\left(\alpha-1\right)}\left(m+b\right)\right)}^{\left(2^{n}\gamma,\,2^{n}\mu\right)}\qquad\text{ for all }\quad i=\left(n,m,\varepsilon,0\right)\in I_{0}.\label{eq:deltazeroset} \end{equation} In particular, $S_{n,m,\varepsilon,0}^{\left(\alpha\right)}=\varepsilon\cdot U_{\left(2^{n\left(\alpha-1\right)}\left(m-1\right),2^{n\left(\alpha-1\right)}\left(m+1\right)\right)}^{\left(2^{n}/3,\,3\cdot2^{n}\right)}$. \item Let $i=\left(n,m,\varepsilon,\delta\right)\in I_{0}$ and let $\left(\begin{smallmatrix}\xi\\ \eta \end{smallmatrix}\right)\in S_{i}^{\left(\alpha\right)}$ be arbitrary. Then the following hold: \begin{enumerate} \item If $i=\left(n,m,\varepsilon,0\right)$, we have $\left|\eta\right|<3\cdot\left|\xi\right|$. \item If $i=\left(n,m,\varepsilon,1\right)$, we have $\left|\xi\right|<3\cdot\left|\eta\right|$. 
\item We have $2^{n-2}<\frac{2^{n}}{3}<\left|\left(\begin{smallmatrix}\xi\\
\eta
\end{smallmatrix}\right)\right|<12\cdot2^{n}<2^{n+4}$.\qedhere
\end{enumerate}
\end{enumerate}
\end{lem}
\begin{proof}
We establish the different claims individually:
\begin{enumerate}[leftmargin=0.6cm]
\item The following is essentially identical to the proof of \cite[Lemma 6.3.4]{VoigtlaenderPhDThesis} and is only given here for the sake of completeness. We first observe the following equivalences:
\begin{align*}
\left(\begin{matrix}\xi\\
\eta
\end{matrix}\right)\in U_{\left(m+a,m+b\right)}^{\left(\gamma,\mu\right)} & \Longleftrightarrow\xi\in\left(\gamma,\mu\right)\quad\text{ and }\quad m+a<\frac{\eta}{\xi}<m+b\\
 & \Longleftrightarrow\xi\in\left(\gamma,\mu\right)\quad\text{ and }\quad a<\frac{\eta-m\xi}{\xi}<b\\
 & \Longleftrightarrow\left(\begin{matrix}1 & 0\\
m & 1
\end{matrix}\right)^{-1}\left(\begin{matrix}\xi\\
\eta
\end{matrix}\right)=\left(\begin{matrix}\xi\\
\eta-m\xi
\end{matrix}\right)\in U_{\left(a,b\right)}^{\left(\gamma,\mu\right)}
\end{align*}
and
\begin{align*}
\left(\begin{matrix}\xi\\
\eta
\end{matrix}\right)\in U_{\left(\frac{\kappa}{\lambda}a,\frac{\kappa}{\lambda}b\right)}^{\left(\lambda\gamma,\lambda\mu\right)} & \Longleftrightarrow\xi\in\left(\lambda\gamma,\lambda\mu\right)\quad\text{ and }\quad\frac{\kappa}{\lambda}a<\frac{\eta}{\xi}<\frac{\kappa}{\lambda}b\\
 & \Longleftrightarrow\lambda^{-1}\xi\in\left(\gamma,\mu\right)\quad\text{ and }\quad a<\frac{\kappa^{-1}\eta}{\lambda^{-1}\xi}<b\\
 & \Longleftrightarrow\left(\begin{matrix}\lambda & 0\\
0 & \kappa
\end{matrix}\right)^{-1}\left(\begin{matrix}\xi\\
\eta
\end{matrix}\right)=\left(\begin{matrix}\lambda^{-1}\xi\\
\kappa^{-1}\eta
\end{matrix}\right)\in U_{\left(a,b\right)}^{\left(\gamma,\mu\right)}.
\end{align*}
These equivalences show ${\rm diag}\left(\lambda,\kappa\right)U_{\left(a,b\right)}^{\left(\gamma,\mu\right)}=U_{\left(\frac{\kappa}{\lambda}a,\frac{\kappa}{\lambda}b\right)}^{\left(\lambda\gamma,\lambda\mu\right)}$ and $S_{m}^{T}U_{\left(a,b\right)}^{\left(\gamma,\mu\right)}=U_{\left(m+a,m+b\right)}^{\left(\gamma,\mu\right)}$. But for $i=\left(n,m,\varepsilon,0\right)$, we have $T_{i}^{\left(\alpha\right)}=R^{0}\cdot A_{n,m,\varepsilon}^{\left(\alpha\right)}=\varepsilon\cdot{\rm diag}\left(2^{n},2^{n\alpha}\right)\cdot S_{m}^{T}$. This easily yields the claim.
\item We again show the three claims individually:
\begin{enumerate}
\item For $i=\left(n,m,\varepsilon,0\right)\in I_{0}$, equation \eqref{eq:deltazeroset} yields for $\left(\begin{smallmatrix}\xi\\
\eta
\end{smallmatrix}\right)\in S_{i}^{(\alpha)}$ that
\[
\frac{\eta}{\xi}\in\left(2^{n\left(\alpha-1\right)}\left(m-1\right),2^{n\left(\alpha-1\right)}\left(m+1\right)\right)\subset\left(-2^{n\left(\alpha-1\right)}\left(\left|m\right|+1\right),\,2^{n\left(\alpha-1\right)}\left(\left|m\right|+1\right)\right),
\]
since $2^{n\left(\alpha-1\right)}\left(m+1\right)\leq2^{n\left(\alpha-1\right)}\left(\left|m\right|+1\right)$ and
\[
2^{n\left(\alpha-1\right)}\left(m-1\right)\geq2^{n\left(\alpha-1\right)}\left(-|m|-1\right)=-2^{n\left(\alpha-1\right)}\left(|m|+1\right).
\]
Because of $|m|\leq G_{n}=\lceil2^{n(1-\alpha)}\rceil<2^{n(1-\alpha)}+1$ and $\left|\xi\right|>0$, it follows that
\[
\begin{aligned}\left|\eta\right|=\left|\xi\right|\cdot\left|\frac{\eta}{\xi}\right|\leq\left|\xi\right|\cdot2^{n(\alpha-1)}\cdot\left(\left|m\right|+1\right) & <\left|\xi\right|\cdot2^{-n(1-\alpha)}\cdot\left(2^{n(1-\alpha)}+2\right)\\
 & \leq\left|\xi\right|\cdot\left(1+2\cdot2^{-n(1-\alpha)}\right)\leq3\cdot\left|\xi\right|.
\end{aligned}
\]
\item For $i=\left(n,m,\varepsilon,1\right)\in I_{0}$ we have
\[
\begin{pmatrix}\eta\\
\xi
\end{pmatrix}=R\cdot\begin{pmatrix}\xi\\
\eta
\end{pmatrix}\in RS_{n,m,\varepsilon,1}^{(\alpha)}=RT_{n,m,\varepsilon,1}^{\left(\alpha\right)}Q=RRA_{n,m,\varepsilon}^{(\alpha)}Q=A_{n,m,\varepsilon}^{(\alpha)}Q=S_{n,m,\varepsilon,0}^{(\alpha)},
\]
so that we get $\left|\xi\right|<3\cdot\left|\eta\right|$ from the previous case.
\item To prove this claim, we again distinguish two cases:
\begin{enumerate}
\item For $i=\left(n,m,\varepsilon,0\right)$, equation \eqref{eq:deltazeroset} yields $\varepsilon\xi\in(2^{n}/3,3\cdot2^{n})$ and thus $\frac{2^{n}}{3}<|\xi|<3\cdot2^{n}$. Moreover, we know from a previous part of the lemma that $|\eta|<3\cdot|\xi|$. Thus
\begin{align*}
\frac{2^{n}}{3}<|\xi|\leq\left|\begin{pmatrix}\xi\\
\eta
\end{pmatrix}\right| & \leq|\xi|+|\eta|<|\xi|+3|\xi|=4|\xi|<12\cdot2^{n}.
\end{align*}
\item For $i=\left(n,m,\varepsilon,1\right)$ we have $\varepsilon\eta\in(2^{n}/3,3\cdot2^{n})$ and thus $\frac{2^{n}}{3}<|\eta|<3\cdot2^{n}$. Moreover, we know from the previous part of the lemma that $|\xi|<3\cdot|\eta|$. Thus
\begin{align*}
\frac{2^{n}}{3}<|\eta|\leq\left|\begin{pmatrix}\xi\\
\eta
\end{pmatrix}\right| & \leq|\xi|+|\eta|<3|\eta|+|\eta|=4|\eta|<12\cdot2^{n}.\qedhere
\end{align*}
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{proof}
Using the preceding lemma—which will also be frequently useful elsewhere—one can show the following:
\begin{lem}
\noindent \label{lem:AlphaShearletCoveringIsAlmostStructured}The $\alpha$-shearlet covering $\mathcal{S}^{(\alpha)}$ from Definition \ref{def:AlphaShearletCovering} is an almost structured covering of $\mathbb{R}^{2}$.
\end{lem}
Since the proof of Lemma \ref{lem:AlphaShearletCoveringIsAlmostStructured} is quite lengthy and does not yield much additional insight, we postpone it to the appendix (Section \ref{sec:AlphaShearletCoveringAlmostStructured}). Finally, before we can formally define the $\alpha$-shearlet smoothness spaces, we still need to verify that the weight $w$ from Definition \ref{def:AlphaShearletCovering} is $\mathcal{S}^{\left(\alpha\right)}$-moderate (cf.\@ Definition \ref{def:QModerateWeightClusteringMap}).
\begin{lem}
\label{lem:AlphaShearletWeightIsModerate}For arbitrary $s\in\mathbb{R}$, the weight $w^{s}=\left(w_{i}^{s}\right)_{i\in I}$, with $w=\left(w_{i}\right)_{i\in I}$ as in Definition \ref{def:AlphaShearletCovering}, is $\mathcal{S}^{\left(\alpha\right)}$-moderate (cf.\@ equation \eqref{eq:ModerateWeightDefinition}) with
\[
C_{\mathcal{S}^{\left(\alpha\right)},w^{s}}\leq39^{\left|s\right|}.
\]
Furthermore, we have
\[
\frac{1}{3}\cdot w_{i}\leq1+\left|\xi\right|\leq13\cdot w_{i}\qquad\forall\:i\in I\text{ and all }\xi\in S_{i}^{\left(\alpha\right)}.\qedhere
\]
\end{lem}
\begin{proof}
First, let $i=\left(n,m,\varepsilon,\delta\right)\in I_{0}$ be arbitrary.
By Lemma \ref{lem:AlphaShearletCoveringAuxiliary}, we get \[ \frac{1}{3}\cdot w_{i}=\frac{2^{n}}{3}\leq\left|\xi\right|\leq1+\left|\xi\right|\leq1+12\cdot2^{n}\leq13\cdot2^{n}=13\cdot w_{i}\qquad\forall\xi\in S_{i}^{\left(\alpha\right)}. \] Furthermore, for $i=0$, we have $S_{i}^{\left(\alpha\right)}=\left(-1,1\right)^{2}$ and thus \[ \frac{1}{3}\cdot w_{i}\leq w_{i}=1\leq1+\left|\xi\right|\leq3=3\cdot w_{i}\leq13\cdot w_{i}\qquad\forall\xi\in S_{i}^{\left(\alpha\right)}. \] This establishes the second part of the lemma. Next, let $i,j\in I$ with $S_{i}^{\left(\alpha\right)}\cap S_{j}^{\left(\alpha\right)}\neq\emptyset$. Pick an arbitrary $\xi\in S_{i}^{\left(\alpha\right)}\cap S_{j}^{\left(\alpha\right)}$ and note as a consequence of the preceding estimates that \[ \frac{w_{i}}{w_{j}}\leq\frac{3\cdot\left(1+\left|\xi\right|\right)}{\frac{1}{13}\cdot\left(1+\left|\xi\right|\right)}=39. \] By symmetry, this implies $\frac{1}{39}\leq\frac{w_{i}}{w_{j}}\leq39$ and thus also \[ \frac{w_{i}^{s}}{w_{j}^{s}}=\left(\frac{w_{i}}{w_{j}}\right)^{s}\leq39^{\left|s\right|}.\qedhere \] \end{proof} Now, we can finally formally define the $\alpha$-shearlet smoothness spaces: \begin{defn} \label{def:AlphaShearletSmoothnessSpaces}For $\alpha\in\left[0,1\right]$, $p,q\in\left(0,\infty\right]$ and $s\in\mathbb{R}$, we define the \textbf{$\alpha$-shearlet smoothness space} $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ associated to these parameters as \[ \mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right):=\DecompSp{\smash{\mathcal{S}^{\left(\alpha\right)}}}p{\ell_{w^{s}}^{q}}{}, \] where the covering $\mathcal{S}^{\left(\alpha\right)}$ and the weight $w^{s}$ are as in Definition \ref{def:AlphaShearletCovering} and Lemma \ref{lem:AlphaShearletWeightIsModerate}, respectively. \end{defn} \begin{rem*} Since $\mathcal{S}^{\left(\alpha\right)}$ is an almost structured covering by Lemma \ref{lem:AlphaShearletCoveringIsAlmostStructured} and since $w^{s}$ is $\mathcal{S}^{\left(\alpha\right)}$-moderate by Lemma \ref{lem:AlphaShearletWeightIsModerate}, Definition \ref{def:FourierSideDecompositionSpaces} and the associated remark show that $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ is indeed well-defined, i.e., independent of the chosen regular partition of unity subordinate to $\mathcal{S}^{\left(\alpha\right)}$. The same remark also implies that $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ is a Quasi-Banach space. \end{rem*} Recall that with our definition of decomposition spaces, $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ is a subspace of $Z'\left(\mathbb{R}^{2}\right)=\left[\mathcal{F}\left(\TestFunctionSpace{\mathbb{R}^{2}}\right)\right]'$. But as our next result shows, each $f\in\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ actually extends to a tempered distribution: \begin{lem} \label{lem:AlphaShearletIntoTemperedDistributions}Let $\alpha\in\left[0,1\right]$, $p,q\in\left(0,\infty\right]$ and $s\in\mathbb{R}$. Then \[ \mathscr{S}_{\alpha,s}^{p,q}\left(\smash{\mathbb{R}^{2}}\right)\hookrightarrow\mathcal{S}'\left(\smash{\mathbb{R}^{2}}\right), \] in the sense that each $f\in\mathscr{S}_{\alpha,s}^{p,q}\left(\smash{\mathbb{R}^{2}}\right)$ extends to a uniquely determined tempered distribution $f_{\mathcal{S}}\in\mathcal{S}'\left(\mathbb{R}^{2}\right)$. 
Furthermore, the map $\mathscr{S}_{\alpha,s}^{p,q}\left(\smash{\mathbb{R}^{2}}\right)\hookrightarrow\mathcal{S}'\left(\smash{\mathbb{R}^{2}}\right),f\mapsto f_{\mathcal{S}}$ is linear and continuous with respect to the weak-$\ast$-topology on $\mathcal{S}'\left(\mathbb{R}^{2}\right)$. \end{lem} \begin{proof} It is well known (cf.\@ \cite[Proposition 9.9]{FollandRA}) that $\TestFunctionSpace{\mathbb{R}^{2}}\leq\mathcal{S}\left(\mathbb{R}^{2}\right)$ is dense. Since $\mathcal{F}:\mathcal{S}\left(\mathbb{R}^{2}\right)\to\mathcal{S}\left(\mathbb{R}^{2}\right)$ is a homeomorphism, we see that $Z\left(\mathbb{R}^{2}\right)=\mathcal{F}\left(\TestFunctionSpace{\mathbb{R}^{2}}\right)\leq\mathcal{S}\left(\mathbb{R}^{2}\right)$ is dense, too. Hence, for arbitrary $f\in\mathscr{S}_{\alpha,s}^{p,q}\left(\smash{\mathbb{R}^{2}}\right)$, if there is \emph{any} extension $g\in\mathcal{S}'\left(\mathbb{R}^{2}\right)$ of $f\in Z'\left(\mathbb{R}^{2}\right)$, then $g$ is uniquely determined. Next, by Lemma \ref{lem:AlphaShearletCoveringIsAlmostStructured}, $\mathcal{S}^{\left(\alpha\right)}$ is almost structured, so that \cite[Theorem 8.2]{DecompositionEmbedding} shows that $\mathcal{S}^{\left(\alpha\right)}$ is a regular covering of $\mathbb{R}^{2}$. Thus, once we verify that there is some $N\in\mathbb{N}_{0}$ such that the sequence $w^{\left(N\right)}=\left(\smash{w_{i}^{\left(N\right)}}\right)_{i\in I}$ defined by \[ w_{i}^{\left(N\right)}:=\left|\det\smash{T_{i}^{\left(\alpha\right)}}\vphantom{T_{i}}\right|^{1/p}\cdot\max\left\{ 1,\,\left\Vert T_{i}^{-1}\right\Vert ^{2+1}\right\} \cdot\left[\vphantom{\sum_{i}}\smash{\inf_{\xi\in\vphantom{S_{i}^{\left(\alpha\right)}}\left(\smash{S_{i}^{\left(\alpha\right)}}\right)^{\ast}}}\left(1+\left|\xi\right|\right)\right]^{-N} \] satisfies $w^{\left(N\right)}\in\ell_{1/w^{s}}^{q'}\left(I\right)$ with $q'=\infty$ in case of $q\in\left(0,1\right)$, then the claim of the present lemma is a consequence of \cite[Theorem 8.3]{DecompositionEmbedding} and the associated remark. Here, $\vphantom{S_{i}^{\left(\alpha\right)}}\left(\smash{S_{i}^{\left(\alpha\right)}}\right)^{\ast}=\bigcup_{j\in i^{\ast}}S_{j}^{\left(\alpha\right)}$. Since $I=\left\{ 0\right\} \cup I_{0}$ and since the single (finite(!))\@ term $w_{0}^{\left(N\right)}$ does not influence membership of $w^{\left(N\right)}$ in $\ell_{1/w^{s}}^{q'}$, we only need to show $w^{\left(N\right)}|_{I_{0}}\in\ell_{1/w^{s}}^{q'}\left(I_{0}\right)$. But for $i=\left(n,m,\varepsilon,\delta\right)\in I_{0}$, we have \[ \left\Vert T_{i}^{-1}\right\Vert =\left\Vert \left(\begin{matrix}2^{-n} & 0\\ -2^{-n}m & 2^{-\alpha n} \end{matrix}\right)\right\Vert \leq3. \] Here, the last step used that $\left|2^{-n}\right|\leq1$, $\left|2^{-\alpha n}\right|\leq1$ and that $\left|m\right|\leq G_{n}=\left\lceil 2^{n\left(1-\alpha\right)}\right\rceil \leq\left\lceil 2^{n}\right\rceil =2^{n}$, so that $\left|-2^{-n}m\right|\leq1$ as well. Furthermore, Lemma \ref{lem:AlphaShearletCoveringAuxiliary} shows $\frac{2^{n}}{3}\leq\left|\xi\right|\leq12\cdot2^{n}$ for all $\xi\in S_{i}^{\left(\alpha\right)}$. In particular, since we have $\left|\xi\right|\leq2$ for arbitrary $\xi\in S_{0}^{\left(\alpha\right)}=\left(-1,1\right)^{2}$, we have $i^{\ast}\subset I_{0}$ as soon as $\frac{2^{n}}{3}>2$, i.e., for $n\geq3$. 
Now, for $n\geq3$ and $j=\left(\nu,\mu,e,d\right)\in i^{\ast}\subset I_{0}$, there is some $\eta\in S_{i}^{\left(\alpha\right)}\cap S_{j}^{\left(\alpha\right)}$, so that Lemma \ref{lem:AlphaShearletCoveringAuxiliary} yields $\frac{2^{n}}{3}\leq\left|\eta\right|\leq12\cdot2^{\nu}$. Another application of Lemma \ref{lem:AlphaShearletCoveringAuxiliary} then shows $\left|\xi\right|\geq\frac{2^{\nu}}{3}\geq\frac{1}{3^{2}\cdot12}\cdot2^{n}=\frac{2^{n}}{108}$ for all $\xi\in S_{j}^{\left(\alpha\right)}$. All in all, we have shown $1+\left|\xi\right|\geq\left|\xi\right|\geq\frac{2^{n}}{108}$ for all $\xi\in\left(\smash{S_{i}^{\left(\alpha\right)}}\right)^{\ast}$ for arbitrary $i=\left(n,m,\varepsilon,\delta\right)\in I_{0}$ with $n\geq3$. But in case of $n\leq2$, we simply have $1+\left|\xi\right|\geq1\geq\frac{2^{n}}{108}$, so that this estimate holds for all $i=\left(n,m,\varepsilon,\delta\right)\in I_{0}$. Overall, we conclude \[ w_{i}^{\left(N\right)}\leq3^{3}\cdot2^{\left(1+\alpha\right)\frac{n}{p}}\cdot\left(\frac{2^{n}}{108}\right)^{-N}=3^{3}\cdot108^{N}\cdot2^{n\left(\frac{1+\alpha}{p}-N\right)}\qquad\forall\:i=\left(n,m,\varepsilon,\delta\right)\in I_{0}. \] For arbitrary $\theta\in\left(0,1\right]$, this implies \begin{align*} \sum_{i=\left(n,m,\varepsilon,\delta\right)\in I_{0}}\left[\frac{1}{w_{i}^{s}}\cdot w_{i}^{\left(N\right)}\right]^{\theta} & \leq4\cdot\left(3^{3}\cdot108^{N}\right)^{\theta}\cdot\sum_{n=0}^{\infty}\:\sum_{\left|m\right|\leq G_{n}}2^{n\theta\left(\frac{1+\alpha}{p}-s-N\right)}\\ \left({\scriptstyle \text{since }G_{n}\leq2^{n}}\right) & \leq12\cdot\left(3^{3}\cdot108^{N}\right)^{\theta}\cdot\sum_{n=0}^{\infty}\:2^{\theta n\left(\frac{1}{\theta}+\frac{1+\alpha}{p}-s-N\right)}<\infty \end{align*} as soon as $N>\frac{1}{\theta}+\frac{1+\alpha}{p}-s$, which can always be satisfied. Since we have $\ell^{\theta}\left(I_{0}\right)\hookrightarrow\ell^{q'}\left(I_{0}\right)$ for $\theta\leq q'$, this shows that we always have $w^{\left(N\right)}\in\ell_{1/w^{s}}^{q'}\left(I\right)$, for sufficiently large $N\in\mathbb{N}_{0}$. As explained above, we can thus invoke \cite[Theorem 8.3]{DecompositionEmbedding} to complete the proof. \end{proof} Now that we have verified that the $\alpha$-shearlet smoothness spaces are indeed well-defined (Quasi)-Banach spaces, our next goal is to verify that the theory of structured Banach frame decompositions for decomposition spaces—as outlined in Section \ref{sec:BanachFrameDecompositionCrashCourse}—applies to these spaces. This is the goal of the next section. As we will see (see e.g.\@ Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}), this implies that the $\alpha$-shearlet smoothness spaces \emph{simultaneously} characterize analysis sparsity and synthesis sparsity with respect to (suitable) $\alpha$-shearlet systems. \section{Construction of Banach frame decompositions for \texorpdfstring{$\alpha$}{α}-shearlet smoothness spaces} \label{sec:CompactlySupportedShearletFrames}We now want to verify the pertinent conditions from Theorems \ref{thm:BanachFrameTheorem} and \ref{thm:AtomicDecompositionTheorem} for the $\alpha$-shearlet smoothness spaces. To this end, first recall from Definition \ref{def:AlphaShearletCovering} that we have $Q_{i}'=Q$ for all $i\in I_{0}$ and furthermore $Q_{0}'=\left(-1,1\right)^{2}$. 
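As a brief aside, the geometry of the covering $\mathcal{S}^{\left(\alpha\right)}$ is easy to explore numerically. The following minimal sketch (assuming Python with NumPy; the choice $\alpha=0.5$ and all index ranges and sample sizes are hypothetical and purely illustrative) assembles the matrices $T_{i}^{\left(\alpha\right)}=R^{\delta}\cdot\varepsilon\cdot{\rm diag}\left(2^{n},2^{n\alpha}\right)\cdot S_{m}^{T}$ in the explicit form derived in the proof of Lemma \ref{lem:AlphaShearletCoveringAuxiliary} and spot-checks the frequency localization $\frac{2^{n}}{3}<\left|\xi\right|<12\cdot2^{n}$ from that lemma on random samples of $S_{i}^{\left(\alpha\right)}=T_{i}Q$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5                             # hypothetical choice of alpha in [0,1]
R = np.array([[0.0, 1.0], [1.0, 0.0]])  # coordinate swap

def T(n, m, eps, delta):
    # T_i = R^delta * eps * diag(2^n, 2^(alpha*n)) * S_m^T
    A = eps * np.diag([2.0**n, 2.0**(alpha * n)]) @ np.array([[1.0, 0.0], [m, 1.0]])
    return R @ A if delta == 1 else A

for n in range(8):
    G_n = int(np.ceil(2.0**(n * (1.0 - alpha))))
    for m in range(-G_n, G_n + 1):
        for eps in (1, -1):
            for delta in (0, 1):
                # random points of Q = U_{(-1,1)}^{(1/3,3)}:
                # first coordinate in (1/3,3), slope eta/xi in (-1,1)
                xi = rng.uniform(1.0 / 3.0, 3.0, size=200)
                eta = xi * rng.uniform(-1.0, 1.0, size=200)
                r = np.linalg.norm(T(n, m, eps, delta) @ np.vstack([xi, eta]), axis=0)
                assert np.all((2.0**n / 3.0 < r) & (r < 12.0 * 2.0**n))
print("frequency localization verified on all sampled indices")
\end{verbatim}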
Consequently, in the notation of Assumption \ref{assu:CrashCourseStandingAssumptions}, we can choose $n=2$ and $Q_{0}^{\left(1\right)}:=Q=U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}$, as well as $Q_{0}^{\left(2\right)}:=\left(-1,1\right)^{2}$.

We fix a \textbf{low-pass filter} $\varphi\in W^{1,1}\left(\mathbb{R}^{2}\right)\cap C^{1}\left(\mathbb{R}^{2}\right)$ and a \textbf{mother shearlet} $\psi\in W^{1,1}\left(\mathbb{R}^{2}\right)\cap C^{1}\left(\mathbb{R}^{2}\right)$. Then we set (again in the notation of Assumption \ref{assu:CrashCourseStandingAssumptions}) $\gamma_{1}^{\left(0\right)}:=\psi$ and $\gamma_{2}^{\left(0\right)}:=\varphi$, as well as $k_{0}:=2$ and $k_{i}:=1$ for $i\in I_{0}$. With these choices, the family $\Gamma=\left(\gamma_{i}\right)_{i\in I}$ introduced in Theorems \ref{thm:BanachFrameTheorem} and \ref{thm:AtomicDecompositionTheorem} satisfies $\gamma_{i}=\gamma_{k_{i}}^{\left(0\right)}=\gamma_{1}^{\left(0\right)}=\psi$ for $i\in I_{0}$ and $\gamma_{0}=\gamma_{k_{0}}^{\left(0\right)}=\gamma_{2}^{\left(0\right)}=\varphi$, so that the family $\Gamma$ is completely determined by $\varphi$ and $\psi$. Our main goal in this section is to derive readily verifiable conditions on $\varphi,\psi$ which guarantee that the generalized shift-invariant system $\Psi_{\delta}:=\left(L_{\delta\cdot T_{i}^{-T}k}\:\gamma^{\left[i\right]}\right)_{i\in I,\,k\in\mathbb{Z}^{2}}$, with $\gamma^{\left[i\right]}=\left|\det T_{i}\right|^{1/2}\cdot\gamma_{i}\circ T_{i}^{T}$, generates, respectively, a Banach frame or an atomic decomposition for the $\alpha$-shearlet smoothness space $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$, for sufficiently small $\delta>0$.

Precisely, we assume $\widehat{\psi},\widehat{\varphi}\in C^{\infty}\left(\mathbb{R}^{2}\right)$, where all partial derivatives of these functions are polynomially bounded. Furthermore, we assume (at least for the application of Theorem \ref{thm:BanachFrameTheorem}) that
\begin{equation}
\begin{split}\max_{\left|\beta\right|\leq1}\max_{\left|\theta\right|\leq N}\left|\left(\partial^{\theta}\widehat{\partial^{\beta}\psi}\right)\left(\xi\right)\right| & \leq C\cdot\min\left\{ \left|\xi_{1}\right|^{M_{1}}\!\!,\left(1+\left|\xi_{1}\right|\right)^{-M_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-K}\!=C\cdot\theta_{1}\left(\xi_{1}\right)\cdot\theta_{2}\left(\xi_{2}\right)=C\cdot\varrho\left(\xi\right),\\
\max_{\left|\beta\right|\leq1}\max_{\left|\theta\right|\leq N}\left|\left(\partial^{\theta}\widehat{\partial^{\beta}\varphi}\right)\left(\xi\right)\right| & \leq C\cdot\left(1+\left|\xi\right|\right)^{-H}=C\cdot\varrho_{0}\left(\xi\right)
\end{split}
\label{eq:MotherShearletMainEstimate}
\end{equation}
for all $\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}$, a suitable constant $C>0$ and certain $M_{1},M_{2},K,H\in\left[0,\infty\right)$ and $N\in\mathbb{N}$. To be precise, we note that equation \eqref{eq:MotherShearletMainEstimate} employs the abbreviations
\[
\theta_{1}\left(\xi_{1}\right):=\min\left\{ \left|\xi_{1}\right|^{M_{1}},\left(1+\left|\xi_{1}\right|\right)^{-M_{2}}\right\} \quad\text{ and }\quad\theta_{2}\left(\xi_{2}\right):=\left(1+\left|\xi_{2}\right|\right)^{-K}\qquad\text{ for }\xi_{1},\xi_{2}\in\mathbb{R},
\]
as well as $\varrho\left(\xi\right):=\theta_{1}\left(\xi_{1}\right)\cdot\theta_{2}\left(\xi_{2}\right)$ and $\varrho_{0}\left(\xi\right):=\left(1+\left|\xi\right|\right)^{-H}$ for $\xi=\left(\begin{smallmatrix}\xi_{1}\\
\xi_{2}
\end{smallmatrix}\right)\in\mathbb{R}^{2}$.
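To make the decay conditions in equation \eqref{eq:MotherShearletMainEstimate} more tangible, the following minimal numerical sketch (again assuming Python with NumPy) checks only the base case $\beta=\theta=0$ for the hypothetical choice $\widehat{\psi}\left(\xi\right)=\xi_{1}^{2}\cdot e^{-\xi_{1}^{2}-\xi_{2}^{2}}$, which serves as a stand-in for illustration and is not the mother shearlet used later: the factor $\xi_{1}^{2}$ produces a zero of order $M_{1}=2$ at $\xi_{1}=0$, while the Gaussian factor decays faster than any polynomial, so that arbitrary $M_{2},K\geq0$ are admissible.

\begin{verbatim}
import numpy as np

M1, M2, K = 2.0, 4.0, 5.0

# grid avoiding xi_1 = 0, where the estimate degenerates to 0 <= 0
t = np.linspace(-20.0, 20.0, 1000)
xi1, xi2 = np.meshgrid(t, t, indexing="ij")

psi_hat = xi1**2 * np.exp(-(xi1**2 + xi2**2))

# varrho(xi) = theta_1(xi_1) * theta_2(xi_2), without the constant C
rho = (np.minimum(np.abs(xi1)**M1, (1.0 + np.abs(xi1))**(-M2))
       * (1.0 + np.abs(xi2))**(-K))

C = np.max(np.abs(psi_hat) / rho)  # smallest admissible C on this grid
print(f"estimated constant C on the grid: {C:.2f}")
\end{verbatim}

A finite value of \texttt{C} that remains stable under refinement and enlargement of the grid is consistent with equation \eqref{eq:MotherShearletMainEstimate}; of course, this does not replace the analytic verification carried out in the remainder of this section.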
Our goal in the following is to derive conditions on $N,M_{1},M_{2},K,H$ (depending on $p,q,s,\alpha$) which ensure that the family $\Psi_{\delta}$ indeed forms a Banach frame or an atomic decomposition for $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)=\DecompSp{\mathcal{S}^{\left(\alpha\right)}}p{\ell_{w^{s}}^{q}}{}$. \medskip{} To verify the conditions of Theorem \ref{thm:BanachFrameTheorem} (recalling that $b_{j}=0$ for all $j\in I$), we need to estimate the quantity \begin{align} M_{j,i} & :=\left(\frac{w_{j}^{s}}{w_{i}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert T_{j}^{-1}T_{i}\right\Vert \right)^{\sigma}\cdot\max_{\left|\beta\right|\leq1}\left(\left|\det T_{i}\right|^{-1}\cdot\int_{S_{i}^{\left(\alpha\right)}}\max_{\left|\theta\right|\leq N}\left|\left(\partial^{\theta}\widehat{\partial^{\beta}\gamma_{j}}\right)\left(T_{j}^{-1}\xi\right)\right|\operatorname{d}\xi\right)^{\tau}\nonumber \\ \left({\scriptstyle \text{eq. }\eqref{eq:MotherShearletMainEstimate}}\right) & \leq C^{\tau}\cdot\left(\frac{w_{j}^{s}}{w_{i}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert T_{j}^{-1}T_{i}\right\Vert \right)^{\sigma}\cdot\left(\left|\det T_{i}\right|^{-1}\cdot\int_{S_{i}^{\left(\alpha\right)}}\varrho_{j}\left(T_{j}^{-1}\xi\right)\operatorname{d}\xi\right)^{\tau}=:C^{\tau}\cdot M_{j,i}^{\left(0\right)}\label{eq:AlphaShearletConditionTargetTerm} \end{align} with $\sigma,\tau>0$ and $N\in\mathbb{N}$ as in Theorem \ref{thm:BanachFrameTheorem} and arbitrary $i,j\in I$, where we defined $\varrho_{j}:=\varrho$ for $j\in I_{0}$, with $\varrho$ and $\varrho_{0}$ as defined in equation \eqref{eq:MotherShearletMainEstimate}. In view of equation \eqref{eq:AlphaShearletConditionTargetTerm}, the following—highly nontrivial—lemma is crucial: \begin{lem} \label{lem:MainShearletLemma}Let $\alpha\in\left[0,1\right]$ and $\tau_{0},\omega,c\in\left(0,\infty\right)$. Furthermore, let $K,H,M_{1},M_{2}\in\left[0,\infty\right)$. 
Then there is a constant $C_{0}=C_{0}\left(\alpha,\tau_{0},\omega,c,K,H,M_{1},M_{2}\right)>0$ with the following property: If $\sigma,\tau\in\left(0,\infty\right)$ and $s\in\mathbb{R}$ satisfy $\tau\geq\tau_{0}$ and $\frac{\sigma}{\tau}\leq\omega$, and if we have $K\geq K_{0}+c$, $M_{1}\geq M_{1}^{(0)}+c$, and $M_{2}\geq M_{2}^{(0)}+c$, as well as $H\geq H_{0}+c$ for
\begin{align*}
K_{0} & :=\begin{cases}
\max\left\{ \frac{\sigma}{\tau}-s,\,\frac{2+\sigma}{\tau}\right\} , & \text{if }\alpha=1,\\
\max\left\{ \frac{1-\alpha}{\tau}+2\frac{\sigma}{\tau}-s,\,\frac{2+\sigma}{\tau}\right\} , & \text{if }\alpha\in\left[0,1\right),
\end{cases}\\
M_{1}^{(0)} & :=\begin{cases}
\frac{1}{\tau}+s, & \text{if }\alpha=1,\\
\frac{1}{\tau}+\max\left\{ s,\,0\right\} , & \text{if }\alpha\in\left[0,1\right),
\end{cases}\\
M_{2}^{(0)} & :=\left(1+\alpha\right)\frac{\sigma}{\tau}-s,\\
H_{0} & :=\frac{1-\alpha}{\tau}+\frac{\sigma}{\tau}-s,
\end{align*}
then we have
\[
\max\left\{ \sup_{i\in I}\sum_{j\in I}M_{j,i}^{\left(0\right)},\,\sup_{j\in I}\sum_{i\in I}M_{j,i}^{\left(0\right)}\right\} \leq C_{0}^{\tau},
\]
where $M_{j,i}^{\left(0\right)}$ is as in equation \eqref{eq:AlphaShearletConditionTargetTerm}, i.e.,
\[
M_{j,i}^{\left(0\right)}:=\left(\frac{w_{j}^{s}}{w_{i}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert T_{j}^{-1}T_{i}\right\Vert \right)^{\sigma}\cdot\left(\left|\det T_{i}\right|^{-1}\cdot\int_{S_{i}^{\left(\alpha\right)}}\varrho_{j}\left(T_{j}^{-1}\xi\right)\operatorname{d}\xi\right)^{\tau},
\]
with $\varrho_{0}\left(\xi\right)=\left(1+\left|\xi\right|\right)^{-H}$ and $\varrho_{j}\left(\xi\right)=\min\left\{ \left|\xi_{1}\right|^{M_{1}},\left(1+\left|\xi_{1}\right|\right)^{-M_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-K}$ for arbitrary $j\in I_{0}$.
\end{lem}
The proof of Lemma \ref{lem:MainShearletLemma} is highly technical and very lengthy. In order not to disrupt the flow of the paper too severely, we defer the proof to the appendix (Section \ref{sec:MegaProof}).

Using the general result of Lemma \ref{lem:MainShearletLemma}, we can now derive convenient sufficient conditions concerning the low-pass filter $\varphi$ and the mother shearlet $\psi$ which ensure that $\varphi,\psi$ generate a Banach frame for $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$.
\begin{thm}
\label{thm:NicelySimplifiedAlphaShearletFrameConditions}Let $\alpha\in\left[0,1\right]$, $\varepsilon,p_{0},q_{0}\in\left(0,1\right]$ and $s_{0},s_{1}\in\mathbb{R}$ with $s_{0}\leq s_{1}$. Assume that $\varphi,\psi:\mathbb{R}^{2}\rightarrow\mathbb{C}$ satisfy the following:
\begin{itemize}[leftmargin=0.6cm]
\item $\varphi,\psi\in L^{1}\left(\mathbb{R}^{2}\right)$ and $\widehat{\varphi},\widehat{\psi}\in C^{\infty}\left(\mathbb{R}^{2}\right)$, where all partial derivatives of $\widehat{\varphi},\widehat{\psi}$ have at most polynomial growth.
\item $\varphi,\psi\in C^{1}\left(\mathbb{R}^{2}\right)$ and $\nabla\varphi,\nabla\psi\in L^{1}\left(\mathbb{R}^{2}\right)\cap L^{\infty}\left(\mathbb{R}^{2}\right)$.
\item We have
\begin{align*}
\widehat{\psi}\left(\xi\right)\neq0 & \text{ for all }\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}\text{ with }\xi_{1}\in\left[3^{-1},3\right]\text{ and }\left|\xi_{2}\right|\leq\left|\xi_{1}\right|,\\
\widehat{\varphi}\left(\xi\right)\ne0 & \text{ for all }\xi\in\left[-1,1\right]^{2}.
\end{align*} \item There is some $C>0$ such that $\widehat{\psi}$ and $\widehat{\varphi}$ satisfy the estimates \begin{equation} \begin{split}\left|\partial^{\theta}\smash{\widehat{\psi}}\left(\xi\right)\right| & \leq C\cdot\left|\xi_{1}\right|^{M_{1}}\left(1+\left|\xi_{2}\right|\right)^{-\left(1+K\right)}\qquad\forall\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}\text{ with }\left|\xi_{1}\right|\leq1,\\ \left|\partial^{\theta}\smash{\widehat{\psi}}\left(\xi\right)\right| & \leq C\cdot\left(1+\left|\xi_{1}\right|\right)^{-\left(M_{2}+1\right)}\left(1+\left|\xi_{2}\right|\right)^{-\left(K+1\right)}\qquad\forall\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2},\\ \left|\partial^{\theta}\widehat{\varphi}\left(\xi\right)\right| & \leq C\cdot\left(1+\left|\xi\right|\right)^{-\left(H+1\right)}\qquad\forall\xi\in\mathbb{R}^{2} \end{split} \label{eq:ShearletFrameFourierDecayCondition} \end{equation} for all $\theta\in\mathbb{N}_{0}^{2}$ with $\left|\theta\right|\leq N_{0}$, where $N_{0}:=\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil $ and \begin{align*} K & :=\varepsilon+\max\left\{ \frac{1-\alpha}{\min\left\{ p_{0},q_{0}\right\} }+2\left(\frac{2}{p_{0}}+N_{0}\right)-s_{0},\,\frac{2}{\min\left\{ p_{0},q_{0}\right\} }+\frac{2}{p_{0}}+N_{0}\right\} ,\\ M_{1} & :=\varepsilon+\frac{1}{\min\left\{ p_{0},q_{0}\right\} }+\max\left\{ s_{1},\,0\right\} ,\\ M_{2} & :=\max\left\{ 0,\,\varepsilon+\left(1+\alpha\right)\left(\frac{2}{p_{0}}+N_{0}\right)-s_{0}\right\} ,\\ H & :=\max\left\{ 0,\,\varepsilon+\frac{1-\alpha}{\min\left\{ p_{0},q_{0}\right\} }+\frac{2}{p_{0}}+N_{0}-s_{0}\right\} . \end{align*} \end{itemize} Then there is some $\delta_{0}\in\left(0,1\right]$ such that for $0<\delta\leq\delta_{0}$ and all $p,q\in\left(0,\infty\right]$ and $s\in\mathbb{R}$ with $p\geq p_{0}$, $q\geq q_{0}$ and $s_{0}\leq s\leq s_{1}$, the following is true: The family \[ \widetilde{{\rm SH}}_{\alpha,\varphi,\psi,\delta}^{\left(\pm1\right)}:=\left(L_{\delta\cdot T_{i}^{-T}k}\widetilde{\gamma^{\left[i\right]}}\right)_{i\in I,k\in\mathbb{Z}^{2}}\quad\text{ with }\quad\widetilde{\gamma^{\left[i\right]}}(x)=\gamma^{\left[i\right]}(-x)\quad\text{ and }\quad\gamma^{\left[i\right]}:=\begin{cases} \left|\det T_{i}\right|^{1/2}\cdot\left(\psi\circ T_{i}^{T}\right), & \text{if }i\in I_{0},\\ \varphi, & \text{if }i=0 \end{cases} \] forms a Banach frame for $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)=\mathcal{D}\left(\mathcal{S}^{\left(\alpha\right)},L^{p},\ell_{w^{s}}^{q}\right)$. Precisely, this means the following: \begin{enumerate}[leftmargin=0.6cm] \item The \textbf{analysis operator} \[ A^{(\delta)}:\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)\to C_{w^{s}}^{p,q},f\mapsto\left[\left(\smash{\gamma^{\left[i\right]}\ast f}\right)\left(\delta\cdot T_{i}^{-T}k\right)\right]_{i\in I,k\in\mathbb{Z}^{2}} \] is well-defined and bounded for arbitrary $\delta\in\left(0,1\right]$, with the coefficient space $C_{w^{s}}^{p,q}$ from Definition \ref{def:CoefficientSpace}. The convolution $\gamma^{\left[i\right]}\ast f$ has to be understood as explained in equation \eqref{eq:SpecialConvolutionDefinition}; see Lemma \ref{lem:SpecialConvolutionClarification} for a more convenient expression for this convolution, for $f\in L^{2}\left(\mathbb{R}^{2}\right)$. 
\item For $0<\delta\leq\delta_{0}$, there is a bounded linear \textbf{reconstruction operator}
\[
R^{(\delta)}:C_{w^{s}}^{p,q}\to\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)
\]
satisfying $R^{\left(\delta\right)}\circ A^{\left(\delta\right)}=\operatorname{id}_{\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)}$.
\item For $0<\delta\leq\delta_{0}$, we have the following \textbf{consistency statement}: If $f\in\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ and if $p_{0}\leq\tilde{p}\leq\infty$, $q_{0}\leq\tilde{q}\leq\infty$ and $s_{0}\leq\tilde{s}\leq s_{1}$, then the following equivalence holds:
\[
f\in\mathscr{S}_{\alpha,\tilde{s}}^{\tilde{p},\tilde{q}}\left(\mathbb{R}^{2}\right)\quad\Longleftrightarrow\quad\left[\left(\smash{\gamma^{\left[i\right]}\ast f}\right)\left(\delta\cdot T_{i}^{-T}k\right)\right]_{i\in I,k\in\mathbb{Z}^{2}}\in C_{w^{\tilde{s}}}^{\tilde{p},\tilde{q}}.\qedhere
\]
\end{enumerate}
\end{thm}
\begin{proof}
First, we show that there are constants $K_{1},K_{2}>0$ such that
\begin{equation}
\max_{\left|\beta\right|\leq1}\max_{\left|\theta\right|\leq N_{0}}\left|\left(\partial^{\theta}\smash{\widehat{\partial^{\beta}\smash{\psi}}}\right)\left(\xi\right)\right|\leq K_{1}\cdot\min\left\{ \left|\xi_{1}\right|^{M_{1}},\left(1+\left|\xi_{1}\right|\right)^{-M_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-K}=:K_{1}\cdot\varrho\left(\xi\right)\label{eq:NicelySimplifiedAlphaShearletFrameConditionTargetEstimate}
\end{equation}
and
\begin{equation}
\max_{\left|\beta\right|\leq1}\max_{\left|\theta\right|\leq N_{0}}\left|\left(\partial^{\theta}\smash{\widehat{\partial^{\beta}\varphi}}\right)\left(\xi\right)\right|\leq K_{2}\cdot\left(1+\left|\xi\right|\right)^{-H}=:K_{2}\cdot\varrho_{0}\left(\xi\right)\label{eq:NiceSimplifiedAlphaShearletFrameConditionTargetEstimateLowPass}
\end{equation}
for all $\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}$. To this end, we recall that $\varphi,\psi\in C^{1}\left(\mathbb{R}^{2}\right)\cap W^{1,1}\left(\mathbb{R}^{2}\right)$, so that standard properties of the Fourier transform show for $\beta=e_{\ell}$ (the $\ell$-th unit vector) that
\[
\widehat{\partial^{\beta}\psi}\left(\xi\right)=2\pi i\cdot\xi_{\ell}\cdot\widehat{\psi}\left(\xi\right)\text{ \ \ and \ \ }\widehat{\partial^{\beta}\varphi}\left(\xi\right)=2\pi i\cdot\xi_{\ell}\cdot\widehat{\varphi}\left(\xi\right)\qquad\forall\xi\in\mathbb{R}^{2}.
\] Then, Leibniz's rule yields for $\beta=e_{\ell}$ and arbitrary $\theta\in\mathbb{N}_{0}^{2}$ with $\left|\theta\right|\leq N_{0}$ that \begin{align} \left|\left(\partial^{\theta}\smash{\widehat{\partial^{\beta}\psi}}\right)\left(\xi\right)\right| & =2\pi\cdot\left|\sum_{\nu\leq\theta}\binom{\theta}{\nu}\cdot\left(\partial^{\nu}\xi_{\ell}\right)\cdot\left(\partial^{\theta-\nu}\smash{\widehat{\psi}}\,\right)\left(\xi\right)\right|\nonumber \\ & \leq2^{N_{0}+1}\pi\cdot\left(1+\left|\xi_{\ell}\right|\right)\cdot\max_{\left|\eta\right|\leq N_{0}}\left|\left(\partial^{\eta}\smash{\widehat{\psi}}\,\right)\left(\xi\right)\right|\label{eq:NicelySimplifiedShearletFrameConditionsDerivativeEstimate1}\\ & \leq2^{N_{0}+1}\pi\cdot\left(1+\left|\xi_{\ell}\right|\right)\cdot C\cdot\left(1+\left|\xi_{1}\right|\right)^{-\left(1+M_{2}\right)}\left(1+\left|\xi_{2}\right|\right)^{-\left(1+K\right)}\nonumber \\ & \leq2^{N_{0}+1}\pi C\cdot\left(1+\left|\xi_{1}\right|\right)^{-M_{2}}\cdot\left(1+\left|\xi_{2}\right|\right)^{-K},\label{eq:NicelySimplifiedShearletFrameConditionsDerivativeEstimate2} \end{align} since we have \[ \left|\partial^{\nu}\xi_{\ell}\right|=\begin{cases} \left|\xi_{\ell}\right|, & \text{if }\nu=0\\ 1, & \text{if }\nu=e_{\ell}\\ 0, & \text{otherwise} \end{cases}\qquad\text{ and thus }\qquad\left|\partial^{\nu}\xi_{\ell}\right|\leq1+\left|\xi_{\ell}\right|\leq1+\left|\xi\right|. \] Above, we also used that $\sum_{\nu\leq\theta}\binom{\theta}{\nu}=\left(2,\dots,2\right)^{\theta}=2^{\left|\theta\right|}\leq2^{N_{0}}$, as a consequence of the $d$-dimensional binomial theorem (cf.\@ \cite[Section 8.1, Exercise 2.b]{FollandRA}). Likewise, we get \[ \begin{aligned}\left|\left(\partial^{\theta}\widehat{\partial^{\beta}\varphi}\right)\left(\xi\right)\right| & =2\pi\cdot\left|\sum_{\nu\leq\text{\ensuremath{\theta}}}\binom{\theta}{\nu}\cdot\left(\partial^{\nu}\xi_{\ell}\right)\cdot\left(\partial^{\theta-\nu}\widehat{\varphi}\right)\left(\xi\right)\right|\\ & \leq2^{N_{0}+1}\pi\cdot\left(1+\left|\xi\right|\right)\cdot\max_{\left|\eta\right|\leq N_{0}}\left|\partial^{\eta}\widehat{\varphi}\left(\xi\right)\right|\\ & \leq2^{N_{0}+1}\pi C\cdot\left(1+\left|\xi\right|\right)^{-H}\\ & =2^{N_{0}+1}\pi C\cdot\varrho_{0}(\xi) \end{aligned} \] and, by assumption, \[ \left|\partial^{\theta}\widehat{\varphi}\left(\xi\right)\right|\leq C\cdot\left(1+\left|\xi\right|\right)^{-\left(H+1\right)}\leq C\cdot\left(1+\left|\xi\right|\right)^{-H}=C\cdot\varrho_{0}\left(\xi\right). \] With this, we have already established equation \eqref{eq:NiceSimplifiedAlphaShearletFrameConditionTargetEstimateLowPass} with $K_{2}:=2^{N_{0}+1}\pi C$. \medskip{} To validate equation \eqref{eq:NicelySimplifiedAlphaShearletFrameConditionTargetEstimate}, we now distinguish the two cases $\left|\xi_{1}\right|>1$ and $\left|\xi_{1}\right|\leq1$: \textbf{Case 1}: We have $\left|\xi_{1}\right|>1$. In this case, $\varrho\left(\xi\right)=\left(1+\left|\xi_{1}\right|\right)^{-M_{2}}\left(1+\left|\xi_{2}\right|\right)^{-K}$, so that equation \eqref{eq:NicelySimplifiedShearletFrameConditionsDerivativeEstimate2} shows $\left|\left(\partial^{\theta}\widehat{\partial^{\beta}\psi}\right)\left(\xi\right)\right|\leq2^{N_{0}+1}\pi C\cdot\varrho\left(\xi\right)$ for $\beta=e_{\ell},$ $\ell\in\left\{ 1,2\right\} $ and arbitrary $\theta\in\mathbb{N}_{0}^{2}$ with $\left|\theta\right|\leq N_{0}$. 
Finally, we also have
\[
\left|\partial^{\theta}\widehat{\psi}\left(\xi\right)\right|\leq C\cdot\left(1+\left|\xi_{1}\right|\right)^{-\left(1+M_{2}\right)}\left(1+\left|\xi_{2}\right|\right)^{-\left(1+K\right)}\leq C\cdot\left(1+\left|\xi_{1}\right|\right)^{-M_{2}}\left(1+\left|\xi_{2}\right|\right)^{-K}=C\cdot\varrho\left(\xi\right)
\]
and hence $\max_{\left|\beta\right|\leq1}\max_{\left|\theta\right|\leq N_{0}}\left|\left(\partial^{\theta}\widehat{\partial^{\beta}\psi}\right)\left(\xi\right)\right|\leq2^{N_{0}+1}\pi C\cdot\varrho\left(\xi\right)$ for all $\xi\in\mathbb{R}^{2}$ with $\left|\xi_{1}\right|>1$.
\medskip{}

\textbf{Case 2}: We have $\left|\xi_{1}\right|\leq1$. First note that this implies $\left(1+\left|\xi_{1}\right|\right)^{-M_{2}}\geq2^{-M_{2}}\geq2^{-M_{2}}\left|\xi_{1}\right|^{M_{1}}$ and consequently $\varrho\left(\xi\right)\geq2^{-M_{2}}\left|\xi_{1}\right|^{M_{1}}\cdot\left(1+\left|\xi_{2}\right|\right)^{-K}$. Furthermore, we have for arbitrary $\ell\in\left\{ 1,2\right\} $ that
\[
1+\left|\xi_{\ell}\right|\leq\max\left\{ 1+\left|\xi_{1}\right|,\,1+\left|\xi_{2}\right|\right\} \leq\max\left\{ 2,\,1+\left|\xi_{2}\right|\right\} \leq2\cdot\left(1+\left|\xi_{2}\right|\right).
\]
In conjunction with equation \eqref{eq:NicelySimplifiedShearletFrameConditionsDerivativeEstimate1}, this shows for $\beta=e_{\ell}$, $\ell\in\left\{ 1,2\right\} $ and $\theta\in\mathbb{N}_{0}^{2}$ with $\left|\theta\right|\leq N_{0}$ that
\begin{align*}
\left|\left(\partial^{\theta}\widehat{\partial^{\beta}\psi}\right)\left(\xi\right)\right| & \leq2^{N_{0}+1}\pi\cdot\left(1+\left|\xi_{\ell}\right|\right)\cdot\max_{\left|\eta\right|\leq N_{0}}\left|\partial^{\eta}\widehat{\psi}\left(\xi\right)\right|\\
 & \leq2^{N_{0}+2}\pi C\cdot\left(1+\left|\xi_{2}\right|\right)\cdot\left|\xi_{1}\right|^{M_{1}}\cdot\left(1+\left|\xi_{2}\right|\right)^{-\left(1+K\right)}\\
 & \leq2^{2+M_{2}+N_{0}}\pi C\cdot\varrho\left(\xi\right).
\end{align*}
Finally, we also have
\[
\left|\partial^{\theta}\widehat{\psi}\left(\xi\right)\right|\leq C\cdot\left|\xi_{1}\right|^{M_{1}}\left(1+\left|\xi_{2}\right|\right)^{-\left(1+K\right)}\leq C\cdot\left|\xi_{1}\right|^{M_{1}}\left(1+\left|\xi_{2}\right|\right)^{-K}\leq2^{M_{2}}C\cdot\varrho\left(\xi\right).
\]
All in all, we have shown $\max_{\left|\beta\right|\leq1}\max_{\left|\theta\right|\leq N_{0}}\left|\left(\partial^{\theta}\widehat{\partial^{\beta}\psi}\right)\left(\xi\right)\right|\leq2^{2+M_{2}+N_{0}}\pi C\cdot\varrho\left(\xi\right)$ for all $\xi\in\mathbb{R}^{2}$ with $\left|\xi_{1}\right|\leq1$.
\medskip{}

Altogether, we have thus established eq.\@ \eqref{eq:NicelySimplifiedAlphaShearletFrameConditionTargetEstimate} with $K_{1}:=2^{2+M_{2}+N_{0}}\pi C$. Now, define $C_{\diamondsuit}:=\max\left\{ K_{1},K_{2}\right\} =K_{1}$. To prove the current theorem, we want to apply Theorem \ref{thm:BanachFrameTheorem} with $\gamma_{1}^{\left(0\right)}:=\psi$, $\gamma_{2}^{\left(0\right)}:=\varphi$ and $k_{i}:=1$ for $i\in I_{0}$ and $k_{0}:=2$, as well as $Q_{0}^{\left(1\right)}:=Q=U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}$ and $Q_{0}^{\left(2\right)}:=\left(-1,1\right)^{2}$, cf.\@ Assumption \ref{assu:CrashCourseStandingAssumptions} and Definition \ref{def:AlphaShearletCovering}. In the notation of Theorem \ref{thm:BanachFrameTheorem}, we then have $\gamma_{i}=\gamma_{k_{i}}^{\left(0\right)}$ for all $i\in I$, i.e., $\gamma_{i}=\psi$ for $i\in I_{0}$ and $\gamma_{0}=\varphi$.
Using this notation and setting furthermore $\varrho_{i}:=\varrho$ for $i\in I_{0}$, we have thus shown for arbitrary $N\in\mathbb{N}_{0}$ with $N\leq N_{0}$ that \[ \begin{aligned}M_{j,i}: & =\left(\frac{w_{j}^{s}}{w_{i}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert T_{j}^{-1}T_{i}\right\Vert \right)^{\sigma}\cdot\max_{\left|\beta\right|\leq1}\left(\left|\det T_{i}\right|^{-1}\cdot\int_{S_{i}^{\left(\alpha\right)}}\max_{\left|\theta\right|\leq N}\left|\left(\partial^{\theta}\widehat{\partial^{\beta}\gamma_{j}}\right)\left(T_{j}^{-1}\xi\right)\right|\operatorname{d}\xi\right)^{\tau}\\ & \leq C_{\diamondsuit}^{\tau}\cdot\left(\frac{w_{j}^{s}}{w_{i}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert T_{j}^{-1}T_{i}\right\Vert \right)^{\sigma}\cdot\left(\left|\det T_{i}\right|^{-1}\cdot\int_{S_{i}^{\left(\alpha\right)}}\varrho_{j}\left(T_{j}^{-1}\xi\right)\operatorname{d}\xi\right)^{\tau}=:C_{\diamondsuit}^{\tau}\cdot M_{j,i}^{\left(0\right)} \end{aligned} \] for arbitrary $\sigma,\tau>0$, $s\in\mathbb{R}$ and the $\mathcal{S}^{\left(\alpha\right)}$-moderate weight $w^{s}$ (cf. Lemma \ref{lem:AlphaShearletWeightIsModerate}). In view of the assumptions of the current theorem, the prerequisites (1)-(3) of Theorem \ref{thm:BanachFrameTheorem} are clearly fulfilled, but we still need to verify \[ C_{1}:=\sup_{i\in I}\:\sum_{j\in I}M_{j,i}<\infty\quad\text{ and }\quad C_{2}:=\sup_{j\in I}\:\sum_{i\in I}M_{j,i}<\infty, \] with $M_{j,i}$ as above, $\tau:=\min\left\{ 1,p,q\right\} \geq\min\left\{ p_{0},q_{0}\right\} =:\tau_{0}$, and \begin{equation} N:=\left\lceil \frac{2+\varepsilon}{\min\left\{ 1,p\right\} }\right\rceil \leq\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil =N_{0},\quad\text{ as well as }\quad\sigma:=\tau\cdot\left(\frac{2}{\min\left\{ 1,p\right\} }+N\right)\leq\tau\cdot\left(\frac{2}{p_{0}}+N_{0}\right).\label{eq:NicelySimplifiedAlphaShearletFrameConditionSigmaDefinition} \end{equation} In particular, we have $\frac{\sigma}{\tau}\leq\frac{2}{p_{0}}+N_{0}=\frac{2}{p_{0}}+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil =:\omega$. Hence, Lemma \ref{lem:MainShearletLemma} (with $c=\varepsilon$) yields a constant $C_{0}=C_{0}\left(\alpha,\tau_{0},\omega,\varepsilon,K,H,M_{1},M_{2}\right)$ with $\max\left\{ C_{1},C_{2}\right\} \leq C_{\diamondsuit}^{\tau}C_{0}^{\tau}$, provided that we can show $H\geq H_{0}+\varepsilon$, $K\geq K_{0}+\varepsilon$ and $M_{\ell}\geq M_{\ell}^{\left(0\right)}+\varepsilon$ for $\ell\in\left\{ 1,2\right\} $, with $H_{0},K_{0},M_{1}^{(0)},M_{2}^{(0)}$ as defined in Lemma \ref{lem:MainShearletLemma}. But we have \[ \begin{aligned}H_{0} & =\frac{1-\alpha}{\tau}+\frac{\sigma}{\tau}-s\leq\frac{1-\alpha}{\tau_{0}}+\omega-s_{0}\\ & =\frac{1-\alpha}{\min\left\{ p_{0},q_{0}\right\} }+\frac{2}{p_{0}}+N_{0}-s_{0}\\ & \leq H-\varepsilon. 
\end{aligned} \] Furthermore, \[ \begin{aligned}M_{2}^{(0)} & =(1+\alpha)\frac{\sigma}{\tau}-s\leq(1+\alpha)\omega-s_{0}\\ & =\left(1+\alpha\right)\left(\frac{2}{p_{0}}+N_{0}\right)-s_{0}\\ & \leq M_{2}-\varepsilon \end{aligned} \] and \[ M_{1}^{(0)}\leq\frac{1}{\tau}+\max\left\{ s,0\right\} \leq\frac{1}{\min\left\{ p_{0},q_{0}\right\} }+\max\left\{ s_{1},0\right\} =M_{1}-\varepsilon, \] as well as \[ \begin{aligned}K_{0} & \leq\max\left\{ \frac{1-\alpha}{\tau}+2\frac{\sigma}{\tau}-s,\,\frac{2+\sigma}{\tau}\right\} \\ & \leq\max\left\{ \frac{1-\alpha}{\tau_{0}}+2\omega-s_{0},\frac{2}{\tau_{0}}+\omega\right\} \\ & =\max\left\{ \frac{1-\alpha}{\min\left\{ p_{0},q_{0}\right\} }+2\left(\frac{2}{p_{0}}+N_{0}\right)-s_{0},\frac{2}{\min\left\{ p_{0},q_{0}\right\} }+\frac{2}{p_{0}}+N_{0}\right\} \\ & =K-\varepsilon. \end{aligned} \] Thus, Lemma \ref{lem:MainShearletLemma} is applicable, so that \[ C_{1}^{1/\tau}=\left(\sup_{i\in I}\:\smash{\sum_{j\in I}}M_{j,i}\right)^{1/\tau}\leq C_{\diamondsuit}C_{0}, \] where the right-hand side is independent of $p,q$ and $s$, since $C_{0}$ is independent of $p,q$ and $s$ and since \[ C_{\diamondsuit}=C_{\diamondsuit}\left(\varepsilon,p_{0},M_{2},C\right)=2^{2+M_{2}+N_{0}}\pi C=2^{2+M_{2}+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil }\pi C. \] The exact same estimate holds for $C_{2}$. We have shown that all prerequisites for Theorem \ref{thm:BanachFrameTheorem} are fulfilled. Hence, the theorem implies that there is a constant $K_{\diamondsuit}=K_{\diamondsuit}\left(p_{0},q_{0},\varepsilon,\mathcal{S}^{(\alpha)},\varphi,\psi\right)>0$ (independent of $p,q,s$) such that the family $\widetilde{{\rm SH}}_{\alpha,\varphi,\psi,\delta}^{\left(\pm1\right)}$ forms a Banach frame for $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$, as soon as $\delta\leq\delta_{00}$, where \[ \delta_{00}:=\left(1+K_{\diamondsuit}\cdot C_{\mathcal{S}^{\left(\alpha\right)},w^{s}}^{4}\cdot\left(C_{1}^{1/\tau}+C_{2}^{1/\tau}\right)^{2}\right)^{-1}. \] From Lemma \ref{lem:AlphaShearletWeightIsModerate} we know that $C_{\mathcal{S}^{\left(\alpha\right)},w^{s}}\leq39^{\left|s\right|}\leq39^{s_{2}}$ where $s_{2}:=\max\left\{ \left|s_{0}\right|,\left|s_{1}\right|\right\} $. Hence, choosing \[ \delta_{0}:=\left(1+4\cdot K_{\diamondsuit}\cdot C_{\diamondsuit}^{2}\cdot C_{0}^{2}\cdot39^{4s_{2}}\right)^{-1}, \] we get $\delta_{0}\leq\delta_{00}$ and $\delta_{0}$ is independent of the precise choice of $p,q,s$, as long as $p\geq p_{0}$, $q\geq q_{0}$ and $s_{0}\leq s\leq s_{1}$. Thus, for $0<\delta\leq\delta_{0}$ and arbitrary $p,q\in\left(0,\infty\right]$, $s\in\mathbb{R}$ with $p\geq p_{0}$, $q\geq q_{0}$ and $s_{0}\leq s\leq s_{1}$, the family $\widetilde{{\rm SH}}_{\alpha,\varphi,\psi,\delta}^{\left(\pm1\right)}$ forms a Banach frame for $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$. \end{proof} Finally, we also come to verifiable sufficient conditions which ensure that the low-pass $\varphi$ and the mother shearlet $\psi$ generate atomic decompositions for $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$. \begin{thm} \label{thm:ReallyNiceShearletAtomicDecompositionConditions}Let $\alpha\in\left[0,1\right]$, $\varepsilon,p_{0},q_{0}\in\left(0,1\right]$ and $s_{0},s_{1}\in\mathbb{R}$ with $s_{0}\leq s_{1}$.
Assume that $\varphi,\psi\in L^{1}\left(\mathbb{R}^{2}\right)$ satisfy the following properties: \begin{itemize}[leftmargin=0.6cm] \item We have $\left\Vert \varphi\right\Vert _{1+\frac{2}{p_{0}}}<\infty$ and $\left\Vert \psi\right\Vert _{1+\frac{2}{p_{0}}}<\infty$, where $\left\Vert g\right\Vert _{\Lambda}=\sup_{x\in\mathbb{R}^{2}}\left(1+\left|x\right|\right)^{\Lambda}\left|g\left(x\right)\right|$ for $g:\mathbb{R}^{2}\to\mathbb{C}^{\ell}$ (with arbitrary $\ell\in\mathbb{N}$) and $\Lambda\geq0$. \item We have $\widehat{\varphi},\widehat{\psi}\in C^{\infty}\left(\mathbb{R}^{2}\right)$, where all partial derivatives of $\widehat{\varphi},\widehat{\psi}$ are polynomially bounded. \item We have \begin{align*} \widehat{\psi}\left(\xi\right)\neq0 & \text{ for all }\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}\text{ with }\xi_{1}\in\left[3^{-1},3\right]\text{ and }\left|\xi_{2}\right|\leq\left|\xi_{1}\right|,\\ \widehat{\varphi}\left(\xi\right)\ne0 & \text{ for all }\xi\in\left[-1,1\right]^{2}. \end{align*} \item We have \begin{equation} \begin{split}\left|\partial^{\beta}\widehat{\varphi}\left(\xi\right)\right| & \lesssim\left(1+\left|\xi\right|\right)^{-\Lambda_{0}},\\ \left|\partial^{\beta}\smash{\widehat{\psi}}\left(\xi\right)\right| & \lesssim\min\left\{ \left|\xi_{1}\right|^{\Lambda_{1}},\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-\Lambda_{3}}\cdot\left(1+\left|\xi\right|\right)^{-\left(3+\varepsilon\right)} \end{split} \label{eq:ShearletAtomicDecompositionFourierDecayCondition} \end{equation} for all $\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}$ and all $\beta\in\mathbb{N}_{0}^{2}$ with $\left|\beta\right|\leq\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil $, where \begin{align*} \qquad\qquad\Lambda_{0} & :=\begin{cases} 3+2\varepsilon+\max\left\{ \frac{1-\alpha}{\min\left\{ p_{0},q_{0}\right\} }+3+s_{1},\,2\right\} , & \text{if }p_{0}=1,\\ 3+2\varepsilon+\max\left\{ \frac{1-\alpha}{\min\left\{ p_{0},q_{0}\right\} }+\frac{1-\alpha}{p_{0}}+1+\alpha+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil +s_{1},\,2\right\} , & \text{if }p_{0}\in\left(0,1\right), \end{cases}\\ \qquad\qquad\Lambda_{1} & :=\varepsilon+\frac{1}{\min\left\{ p_{0},q_{0}\right\} }+\max\left\{ 0,\,\left(1+\alpha\right)\left(\frac{1}{p_{0}}-1\right)-s_{0}\right\} ,\\ \qquad\qquad\Lambda_{2} & :=\begin{cases} \varepsilon+\max\left\{ 2,\,3\left(1+\alpha\right)+s_{1}\right\} , & \text{if }p_{0}=1,\\ \varepsilon+\max\left\{ 2,\,\left(1+\alpha\right)\left(1+\frac{1}{p_{0}}+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil \right)+s_{1}\right\} , & \text{if }p_{0}\in\left(0,1\right), \end{cases}\\ \qquad\qquad\Lambda_{3} & :=\begin{cases} \varepsilon+\max\left\{ \frac{1-\alpha}{\min\left\{ p_{0},q_{0}\right\} }+6+s_{1},\,\frac{2}{\min\left\{ p_{0},q_{0}\right\} }+3\right\} , & \text{if }p_{0}=1,\\ \varepsilon+\max\left\{ \frac{1-\alpha}{\min\left\{ p_{0},q_{0}\right\} }+\frac{3-\alpha}{p_{0}}+2\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil +1+\alpha+s_{1},\,\frac{2}{\min\left\{ p_{0},q_{0}\right\} }+\frac{2}{p_{0}}+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil \right\} , & \text{if }p_{0}\in\left(0,1\right). 
\end{cases} \end{align*} \end{itemize} Then there is some $\delta_{0}\in\left(0,1\right]$ such that for all $0<\delta\leq\delta_{0}$ and all $p,q\in\left(0,\infty\right]$ and $s\in\mathbb{R}$ with $p\geq p_{0}$, $q\geq q_{0}$ and $s_{0}\leq s\leq s_{1}$, the following is true: The family \[ {\rm SH}_{\alpha,\varphi,\psi,\delta}^{\left(\pm1\right)}:=\left(L_{\delta\cdot T_{i}^{-T}k}\gamma^{\left[i\right]}\right)_{i\in I,\,k\in\mathbb{Z}^{2}}\quad\text{ with }\quad\gamma^{\left[i\right]}:=\begin{cases} \left|\det T_{i}\right|^{1/2}\cdot\left(\psi\circ T_{i}^{T}\right), & \text{if }i\in I_{0},\\ \varphi, & \text{if }i=0 \end{cases} \] forms an atomic decomposition for $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$. Precisely, this means the following: \begin{enumerate}[leftmargin=0.6cm] \item The \textbf{synthesis map} \[ S^{\left(\delta\right)}:C_{w^{s}}^{p,q}\to\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right),\left(\smash{c_{k}^{\left(i\right)}}\right)_{i\in I,k\in\mathbb{Z}^{2}}\mapsto\sum_{i\in I}\:\sum_{k\in\mathbb{Z}^{2}}\left(c_{k}^{\left(i\right)}\cdot L_{\delta\cdot T_{i}^{-T}k}\gamma^{\left[i\right]}\right) \] is well-defined and bounded for all $\delta\in\left(0,1\right]$, where the \emph{coefficient space} $C_{w^{s}}^{p,q}$ is as in Definition \ref{def:CoefficientSpace}. Convergence of the series has to be understood as described in the remark to Theorem \ref{thm:AtomicDecompositionTheorem}. \item For $0<\delta\leq\delta_{0}$, there is a bounded linear \textbf{coefficient map} \[ C^{\left(\delta\right)}:\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)\to C_{w^{s}}^{p,q} \] satisfying $S^{(\delta)}\circ C^{\left(\delta\right)}=\operatorname{id}_{\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)}$. Furthermore, the action of $C^{\left(\delta\right)}$ is \emph{independent} of the precise choice of $p,q,s$. Precisely, if $p_{1},p_{2}\geq p_{0}$, $q_{1},q_{2}\geq q_{0}$ and $s^{\left(1\right)},s^{\left(2\right)}\in\left[s_{0},s_{1}\right]$ and if $f\in\mathscr{S}_{\alpha,s^{\left(1\right)}}^{p_{1},q_{1}}\cap\mathscr{S}_{\alpha,s^{\left(2\right)}}^{p_{2},q_{2}}$, then $C_{1}^{\left(\delta\right)}f=C_{2}^{\left(\delta\right)}f$, where $C_{i}^{\left(\delta\right)}$ denotes the coefficient operator for the choices $p=p_{i}$, $q=q_{i}$ and $s=s^{\left(i\right)}$ for $i\in\left\{ 1,2\right\} $.\qedhere \end{enumerate} \end{thm} \begin{proof} Later in the proof, we will apply Theorem \ref{thm:AtomicDecompositionTheorem} to the decomposition space $\mathscr{S}_{\alpha,s}^{p,q}\left(\smash{\mathbb{R}^{2}}\right)=\DecompSp{\mathcal{S}^{\left(\alpha\right)}}p{\ell_{w^{s}}^{q}}{}$ with $w$ and $w^{s}$ as in Lemma \ref{lem:AlphaShearletWeightIsModerate}, while Theorem \ref{thm:AtomicDecompositionTheorem} itself considers the decomposition space $\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$. To avoid confusion between these two different choices of the weight $w$, we will write $v$ for the weight defined in Lemma \ref{lem:AlphaShearletWeightIsModerate}, so that we get $\mathscr{S}_{\alpha,s}^{p,q}\left(\smash{\mathbb{R}^{2}}\right)=\DecompSp{\mathcal{S}^{\left(\alpha\right)}}p{\ell_{v^{s}}^{q}}{}$. For the application of Theorem \ref{thm:AtomicDecompositionTheorem}, we will thus choose $\mathcal{Q}=\mathcal{S}^{\left(\alpha\right)}$ and $w=v^{s}$. 
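Before beginning in earnest, let us get a rough feeling for the size of the exponents appearing in the theorem; the following is a purely illustrative sample computation (for one admissible parameter choice) and is not needed for the proof. For $\alpha=\frac{1}{2}$, $\varepsilon=p_{0}=q_{0}=1$ and $s_{0}=s_{1}=0$, the definitions above yield \[ \Lambda_{0}=5+\max\left\{ \tfrac{7}{2},\,2\right\} =\tfrac{17}{2},\qquad\Lambda_{1}=2,\qquad\Lambda_{2}=1+\max\left\{ 2,\,\tfrac{9}{2}\right\} =\tfrac{11}{2},\qquad\Lambda_{3}=1+\max\left\{ \tfrac{13}{2},\,5\right\} =\tfrac{15}{2}, \] and the decay conditions \eqref{eq:ShearletAtomicDecompositionFourierDecayCondition} only concern the derivatives $\partial^{\beta}$ with $\left|\beta\right|\leq\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil =3$.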
Our assumptions on $\varphi$ show that there is a constant $C_{1}>0$ satisfying $\left|\partial^{\beta}\widehat{\varphi}\left(\xi\right)\right|\leq C_{1}\cdot\left(1+\left|\xi\right|\right)^{-\Lambda_{0}}$ for all $\beta\in\mathbb{N}_{0}^{2}$ with $\left|\beta\right|\leq N_{0}:=\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil $. We first apply Proposition \ref{prop:ConvolutionFactorization} (with $N=N_{0}\geq\left\lceil 2+\varepsilon\right\rceil =3=d+1$, with $\gamma=\varphi$ and with $\varrho=\varrho_{1}$ for $\varrho_{1}\left(\xi\right):=C_{1}\cdot\left(1+\left|\xi\right|\right)^{3+\varepsilon-\Lambda_{0}}$, where we note $\Lambda_{0}-3-\varepsilon\geq2+\varepsilon$, so that $\varrho_{1}\in L^{1}\left(\mathbb{R}^{2}\right)$). We indeed have $\left|\partial^{\beta}\widehat{\varphi}\left(\xi\right)\right|\leq C_{1}\cdot\left(1+\left|\xi\right|\right)^{-\Lambda_{0}}=\varrho_{1}\left(\xi\right)\cdot\left(1+\left|\xi\right|\right)^{-\left(d+1+\varepsilon\right)}$ for all $\left|\beta\right|\leq N_{0}$, since we are working in $\mathbb{R}^{d}=\mathbb{R}^{2}$. Consequently, Proposition \ref{prop:ConvolutionFactorization} provides functions $\varphi_{1}\in C_{0}\left(\mathbb{R}^{2}\right)\cap L^{1}\left(\mathbb{R}^{2}\right)$ and $\varphi_{2}\in C^{1}\left(\mathbb{R}^{2}\right)\cap W^{1,1}\left(\mathbb{R}^{2}\right)$ with $\varphi=\varphi_{1}\ast\varphi_{2}$ and with the following additional properties: \begin{enumerate} \item We have $\left\Vert \varphi_{2}\right\Vert _{\Lambda}<\infty$ and $\left\Vert \nabla\varphi_{2}\right\Vert _{\Lambda}<\infty$ for all $\Lambda\in\mathbb{N}_{0}$. \item We have $\widehat{\varphi_{2}}\in C^{\infty}\left(\mathbb{R}^{2}\right)$, where all partial derivatives of $\widehat{\varphi_{2}}$ are polynomially bounded. \item We have $\widehat{\varphi_{1}}\in C^{\infty}\left(\mathbb{R}^{2}\right)$, where all partial derivatives of $\widehat{\varphi_{1}}$ are polynomially bounded. This uses that $\widehat{\varphi}\in C^{\infty}\left(\mathbb{R}^{2}\right)$ with all partial derivatives being polynomially bounded. \item We have \begin{equation} \left|\partial^{\beta}\widehat{\varphi_{1}}\left(\xi\right)\right|\leq\frac{C_{2}}{C_{1}}\cdot\varrho_{1}\left(\xi\right)=C_{2}\cdot\left(1+\left|\xi\right|\right)^{3+\varepsilon-\Lambda_{0}}\quad\forall\xi\in\mathbb{R}^{2}\text{ and }\beta\in\mathbb{N}_{0}^{2}\text{ with }\left|\beta\right|\leq N_{0}.\label{eq:AlphaShearletAtomicDecompositionPhiFactorizationEstimate} \end{equation} Here, $C_{2}$ is given by $C_{2}:=C_{1}\cdot2^{3+4N_{0}}\cdot N_{0}!\cdot3^{N_{0}}$. \end{enumerate} Likewise, our assumptions on $\psi$ show that there is a constant $C_{3}>0$ satisfying \[ \left|\partial^{\beta}\widehat{\psi}\left(\xi\right)\right|\leq C_{3}\cdot\min\left\{ \left|\xi_{1}\right|^{\Lambda_{1}},\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-\Lambda_{3}}\cdot\left(1+\left|\xi\right|\right)^{-\left(3+\varepsilon\right)}\quad\forall\xi\in\mathbb{R}^{2}\:\forall\beta\in\mathbb{N}_{0}^{2}\text{ with }\left|\beta\right|\leq N_{0}. 
\] Now, we again apply Proposition \ref{prop:ConvolutionFactorization}, but this time with $N=N_{0}\geq d+1$, with $\gamma=\psi$ and with $\varrho=\varrho_{2}$ for $\varrho_{2}\left(\xi\right):=C_{3}\cdot\min\left\{ \left|\xi_{1}\right|^{\Lambda_{1}},\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-\Lambda_{3}}$, where we note that $\Lambda_{2}\geq2+\varepsilon$ and $\Lambda_{3}\geq3\geq2+\varepsilon$, so that \begin{align*} \varrho_{2}\left(\xi\right) & \leq C_{3}\cdot\left(1+\left|\xi_{1}\right|\right)^{-\left(2+\varepsilon\right)}\cdot\left(1+\left|\xi_{2}\right|\right)^{-\left(2+\varepsilon\right)}\\ & \leq C_{3}\cdot\left[\max\left\{ 1+\left|\xi_{1}\right|,\,1+\left|\xi_{2}\right|\right\} \right]^{-\left(2+\varepsilon\right)}\\ & \leq C_{3}\cdot\left(1+\left\Vert \xi\right\Vert _{\infty}\right)^{-\left(2+\varepsilon\right)}\in L^{1}\left(\smash{\mathbb{R}^{2}}\right). \end{align*} As we just saw, we indeed have $\left|\partial^{\beta}\widehat{\psi}\left(\xi\right)\right|\leq\varrho_{2}\left(\xi\right)\cdot\left(1+\left|\xi\right|\right)^{-\left(d+1+\varepsilon\right)}$ for all $\left|\beta\right|\leq N_{0}$, since we are working in $\mathbb{R}^{d}=\mathbb{R}^{2}$. Consequently, Proposition \ref{prop:ConvolutionFactorization} provides functions $\psi_{1}\in C_{0}\left(\mathbb{R}^{2}\right)\cap L^{1}\left(\mathbb{R}^{2}\right)$ and $\psi_{2}\in C^{1}\left(\mathbb{R}^{2}\right)\cap W^{1,1}\left(\mathbb{R}^{2}\right)$ with $\psi=\psi_{1}\ast\psi_{2}$ and with the following additional properties: \begin{enumerate} \item We have $\left\Vert \psi_{2}\right\Vert _{\Lambda}<\infty$ and $\left\Vert \nabla\psi_{2}\right\Vert _{\Lambda}<\infty$ for all $\Lambda\in\mathbb{N}_{0}$. \item We have $\widehat{\psi_{2}}\in C^{\infty}\left(\mathbb{R}^{2}\right)$, where all partial derivatives of $\widehat{\psi_{2}}$ are polynomially bounded. \item We have $\widehat{\psi_{1}}\in C^{\infty}\left(\mathbb{R}^{2}\right)$, where all partial derivatives of $\widehat{\psi_{1}}$ are polynomially bounded. This uses that $\widehat{\psi}\in C^{\infty}\left(\mathbb{R}^{2}\right)$ with all partial derivatives being polynomially bounded. \item We have \begin{equation} \begin{split}\quad\qquad\left|\partial^{\beta}\,\smash{\widehat{\psi_{1}}}\,\left(\xi\right)\right| & \!\leq\!\frac{C_{4}}{C_{3}}\cdot\varrho_{2}\left(\xi\right)\\ & \!=\!C_{4}\cdot\min\left\{ \left|\xi_{1}\right|^{\Lambda_{1}},\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-\Lambda_{3}}\quad\forall\xi\in\mathbb{R}^{2}\text{ and }\beta\in\mathbb{N}_{0}^{2}\text{ with }\left|\beta\right|\leq N_{0}. \end{split} \label{eq:AlphaShearletAtomicDecompositionPsiFactorizationEstimate} \end{equation} Here, $C_{4}$ is given by $C_{4}:=C_{3}\cdot2^{3+4N_{0}}\cdot N_{0}!\cdot3^{N_{0}}$.
\end{enumerate} In summary, if we define $M_{1}:=\Lambda_{1}$, $M_{2}:=\Lambda_{2}$ and $K:=\Lambda_{3}$, as well as $H:=\Lambda_{0}-3-\varepsilon$, then we have $M_{1},M_{2},K,H\geq0$ and \begin{equation} \begin{split}\max_{\left|\beta\right|\leq N_{0}}\left|\partial^{\beta}\widehat{\psi_{1}}\left(\xi\right)\right| & \leq C_{5}\cdot\min\left\{ \left|\xi_{1}\right|^{M_{1}},\,\left(1+\left|\xi_{1}\right|\right)^{-M_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-K}=:C_{5}\cdot\varrho\left(\xi\right),\\ \max_{\left|\beta\right|\leq N_{0}}\left|\partial^{\beta}\widehat{\varphi_{1}}\left(\xi\right)\right| & \leq C_{5}\cdot\left(1+\left|\xi\right|\right)^{-H}=:C_{5}\cdot\varrho_{0}\left(\xi\right), \end{split} \label{eq:AlphaShearletAtomicDecompositionGeneratorsMainEstimate} \end{equation} where we defined $C_{5}:=\max\left\{ C_{2},C_{4}\right\} $ for brevity. For consistency with Lemma \ref{lem:MainShearletLemma}, we define $\varrho_{j}:=\varrho$ for arbitrary $j\in I_{0}$. Now, define $n:=2$, $\gamma_{1}^{\left(0\right)}:=\psi$ and $\gamma_{2}^{\left(0\right)}:=\varphi$, as well as $\gamma_{1}^{\left(0,j\right)}:=\psi_{j}$ and $\gamma_{2}^{\left(0,j\right)}:=\varphi_{j}$ for $j\in\left\{ 1,2\right\} $. We want to verify the assumptions of Theorem \ref{thm:AtomicDecompositionTheorem} for these choices and for $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)=\DecompSp{\mathcal{S}^{\left(\alpha\right)}}p{\ell_{v^{s}}^{q}}{}=\DecompSp{\mathcal{Q}}p{\ell_{w}^{q}}{}$. To this end, we recall from Definition \ref{def:AlphaShearletCovering} that $\mathcal{Q}:=\mathcal{S}^{\left(\alpha\right)}=\left(T_{i}Q_{i}'+b_{i}\right)_{i\in I}$, with $Q_{i}'=U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}=Q=:Q_{0}^{\left(1\right)}=Q_{0}^{\left(k_{i}\right)}$ for all $i\in I_{0}$, where $k_{i}:=1$ for $i\in I_{0}$ and with $Q_{0}'=\left(-1,1\right)^{2}=:Q_{0}^{\left(2\right)}=Q_{0}^{\left(k_{0}\right)}$, where $k_{0}:=2$, cf.\@ Assumption \ref{assu:CrashCourseStandingAssumptions}. Now, let us verify the list of prerequisites of Theorem \ref{thm:AtomicDecompositionTheorem}: \begin{enumerate} \item We have $\gamma_{k}^{\left(0,1\right)}\in\left\{ \varphi_{1},\psi_{1}\right\} \subset L^{1}\left(\mathbb{R}^{2}\right)$ for $k\in\left\{ 1,2\right\} $ by the properties of $\varphi_{1},\psi_{1}$ from above. \item Likewise, we have $\gamma_{k}^{\left(0,2\right)}\in\left\{ \varphi_{2},\psi_{2}\right\} \subset C^{1}\left(\mathbb{R}^{2}\right)$ by the properties of $\varphi_{2},\psi_{2}$ from above.
\item Next, with $\varUpsilon=1+\frac{d}{\min\left\{ 1,p\right\} }$ as in Theorem \ref{thm:AtomicDecompositionTheorem}, we have $\varUpsilon\leq1+\frac{2}{p_{0}}=:\varUpsilon_{0}$ and thus, with $\Omega^{\left(p\right)}$ as in Theorem \ref{thm:AtomicDecompositionTheorem}, \begin{equation} \begin{split}\Omega^{\left(p\right)} & =\max_{k\in\underline{n}}\left\Vert \gamma_{k}^{\left(0,2\right)}\right\Vert _{\varUpsilon}+\max_{k\in\underline{n}}\left\Vert \nabla\gamma_{k}^{\left(0,2\right)}\right\Vert _{\varUpsilon}\\ & =\max\left\{ \left\Vert \varphi_{2}\right\Vert _{\varUpsilon},\left\Vert \psi_{2}\right\Vert _{\varUpsilon}\right\} +\max\left\{ \left\Vert \nabla\varphi_{2}\right\Vert _{\varUpsilon},\left\Vert \nabla\psi_{2}\right\Vert _{\varUpsilon}\right\} \\ & \leq\max\left\{ \left\Vert \varphi_{2}\right\Vert _{\left\lceil \varUpsilon_{0}\right\rceil },\left\Vert \psi_{2}\right\Vert _{\left\lceil \varUpsilon_{0}\right\rceil }\right\} +\max\left\{ \left\Vert \nabla\varphi_{2}\right\Vert _{\left\lceil \varUpsilon_{0}\right\rceil },\left\Vert \nabla\psi_{2}\right\Vert _{\left\lceil \varUpsilon_{0}\right\rceil }\right\} =:C_{6}<\infty \end{split} \label{eq:AtomicDecompositionSecondConvolutionFactorEstimate} \end{equation} by the properties of $\varphi_{2},\psi_{2}$ from above. \item We have $\mathcal{F}\gamma_{k}^{\left(0,j\right)}\in\left\{ \widehat{\varphi_{1}},\widehat{\psi_{1}},\widehat{\varphi_{2}},\widehat{\psi_{2}}\right\} \subset C^{\infty}\left(\mathbb{R}^{2}\right)$ and all partial derivatives of these functions are polynomially bounded. \item We have $\gamma_{1}^{\left(0\right)}=\psi=\psi_{1}\ast\psi_{2}=\gamma_{1}^{\left(0,1\right)}\ast\gamma_{1}^{\left(0,2\right)}$ and $\gamma_{2}^{\left(0\right)}=\varphi=\varphi_{1}\ast\varphi_{2}=\gamma_{2}^{\left(0,1\right)}\ast\gamma_{2}^{\left(0,2\right)}$. \item By assumption, we have $\mathcal{F}\gamma_{1}^{\left(0\right)}\left(\xi\right)=\widehat{\psi}\left(\xi\right)\neq0$ for all $\xi\in\overline{Q}=\overline{Q_{0}^{\left(1\right)}}$. Likewise, we have $\mathcal{F}\gamma_{2}^{\left(0\right)}\left(\xi\right)=\widehat{\varphi}\left(\xi\right)\neq0$ for all $\xi\in\left[-1,1\right]^{2}=\overline{\left(-1,1\right)^{2}}=\overline{Q_{0}^{\left(2\right)}}$. \item We have $\left\Vert \smash{\gamma_{1}^{\left(0\right)}}\right\Vert _{\varUpsilon}=\left\Vert \psi\right\Vert _{\varUpsilon}\leq\left\Vert \psi\right\Vert _{\varUpsilon_{0}}=\left\Vert \psi\right\Vert _{1+\frac{2}{p_{0}}}<\infty$ and $\left\Vert \smash{\gamma_{2}^{\left(0\right)}}\right\Vert _{\varUpsilon}=\left\Vert \varphi\right\Vert _{\varUpsilon}\leq\left\Vert \varphi\right\Vert _{1+\frac{2}{p_{0}}}<\infty$, thanks to our assumptions on $\varphi,\psi$. 
\end{enumerate} Thus, as the last prerequisite of Theorem \ref{thm:AtomicDecompositionTheorem}, we have to verify \[ K_{1}:=\sup_{i\in I}\:\sum_{j\in I}N_{i,j}<\infty\qquad\text{ and }\qquad K_{2}:=\sup_{j\in I}\:\sum_{i\in I}N_{i,j}<\infty, \] where $\gamma_{j,1}:=\gamma_{k_{j}}^{\left(0,1\right)}$ for $j\in I$ (i.e., $\gamma_{0,1}=\gamma_{2}^{\left(0,1\right)}=\varphi_{1}$ and $\gamma_{j,1}=\gamma_{1}^{\left(0,1\right)}=\psi_{1}$ for $j\in I_{0}$) and \begin{align*} N_{i,j} & :=\left(\frac{w_{i}}{w_{j}}\cdot\left[\frac{\left|\det T_{j}\right|}{\left|\det T_{i}\right|}\right]^{\vartheta}\right)^{\tau}\!\!\cdot\left(1\!+\!\left\Vert T_{j}^{-1}T_{i}\right\Vert \right)^{\sigma}\!\cdot\left(\left|\det T_{i}\right|^{-1}\!\cdot\int_{Q_{i}}\:\max_{\left|\beta\right|\leq N}\left|\left[\partial^{\beta}\widehat{\gamma_{j,1}}\right]\left(T_{j}^{-1}\left(\xi\!-\!b_{j}\right)\right)\right|\operatorname{d}\xi\right)^{\tau}\\ \left({\scriptstyle \text{since }b_{j}=0\text{ for all }j\in I}\right) & \overset{\left(\ast\right)}{\leq}\left(\frac{v_{j}^{\left(1+\alpha\right)\vartheta-s}}{v_{i}^{\left(1+\alpha\right)\vartheta-s}}\right)^{\tau}\cdot\left(1\!+\!\left\Vert T_{j}^{-1}T_{i}\right\Vert \right)^{\sigma}\!\cdot\left(\left|\det T_{i}\right|^{-1}\!\cdot\int_{S_{i}^{\left(\alpha\right)}}\:\max_{\left|\beta\right|\leq N}\left|\left[\partial^{\beta}\widehat{\gamma_{j,1}}\right]\left(T_{j}^{-1}\xi\right)\right|\operatorname{d}\xi\right)^{\tau}\\ \left({\scriptstyle \text{eq. }\eqref{eq:AlphaShearletAtomicDecompositionGeneratorsMainEstimate}\text{ and }N\leq N_{0}}\right) & \leq C_{5}^{\tau}\cdot M_{j,i}^{\left(0\right)}, \end{align*} where the quantity $M_{j,i}^{\left(0\right)}$ is defined as in Lemma \ref{lem:MainShearletLemma}, but with $s^{\natural}:=\left(1+\alpha\right)\vartheta-s$ instead of $s$. At the step marked with $\left(\ast\right)$, we used that we have $w=v^{s}$ and $\left|\det T_{i}\right|=v_{i}^{1+\alpha}$ for all $i\in I$. To be precise, we recall from Theorem \ref{thm:AtomicDecompositionTheorem} that the quantities $N,\tau,\sigma,\vartheta$ from above are given (because of $d=2$) by $\vartheta=\left(p^{-1}-1\right)_{+}$, \[ \tau=\min\left\{ 1,p,q\right\} \geq\min\left\{ p_{0},q_{0}\right\} =:\tau_{0}\qquad\text{ and }\qquad N=\left\lceil \left(d+\varepsilon\right)\big/\min\left\{ 1,p\right\} \right\rceil \leq\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil =N_{0}, \] as well as \[ \sigma=\begin{cases} \tau\cdot\left(d+1\right)=3\cdot\tau, & \text{if }p\in\left[1,\infty\right],\\ \tau\cdot\left(\frac{d}{p}+\left\lceil \frac{d+\varepsilon}{p}\right\rceil \right)=\tau\cdot\left(\frac{2}{p}+N\right)\leq\tau\cdot\left(\frac{2}{p_{0}}+N_{0}\right), & \text{if }p\in\left(0,1\right). \end{cases} \] In particular, we have $\frac{\sigma}{\tau}\leq\frac{2}{p_{0}}+N_{0}=:\omega$, even in case of $p\in\left[1,\infty\right]$, since $\frac{2}{p_{0}}+N_{0}\geq N_{0}\geq\left\lceil 2+\varepsilon\right\rceil \geq3$. 
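For orientation, we note the following purely illustrative sample values: for $p_{0}=q_{0}=1$ and $\varepsilon=1$, we get $\tau_{0}=1$, $N_{0}=\left\lceil 3\right\rceil =3$ and $\omega=2+3=5$, while for $p_{0}=q_{0}=\frac{1}{2}$, we get $\tau_{0}=\frac{1}{2}$, $N_{0}=\left\lceil 6\right\rceil =6$ and $\omega=4+6=10$. Note furthermore that $\vartheta=\left(p^{-1}-1\right)_{+}$ vanishes for $p\in\left[1,\infty\right]$, so that $s^{\natural}=-s$ in this case.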
Now, Lemma \ref{lem:MainShearletLemma} (with $c=\varepsilon$) yields a constant \[ C_{0}=C_{0}\left(\alpha,\tau_{0},\omega,\varepsilon,K,H,M_{1},M_{2}\right)=C_{0}\left(\alpha,p_{0},q_{0},\varepsilon,\Lambda_{0},\Lambda_{1},\Lambda_{2},\Lambda_{3}\right)>0 \] satisfying $\max\left\{ K_{1},K_{2}\right\} \leq C_{5}^{\tau}C_{0}^{\tau}$, provided that we can show $H\geq H_{0}+\varepsilon$, $K\geq K_{0}+\varepsilon$ and $M_{\ell}\geq M_{\ell}^{\left(0\right)}+\varepsilon$ for $\ell\in\left\{ 1,2\right\} $, where \begin{align*} K_{0} & :=\begin{cases} \max\left\{ \frac{\sigma}{\tau}-s^{\natural},\,\frac{2+\sigma}{\tau}\right\} , & \text{if }\alpha=1,\\ \max\left\{ \frac{1-\alpha}{\tau}+2\frac{\sigma}{\tau}-s^{\natural},\,\frac{2+\sigma}{\tau}\right\} , & \text{if }\alpha\in\left[0,1\right), \end{cases}\\ M_{1}^{(0)} & :=\begin{cases} \frac{1}{\tau}+s^{\natural}, & \text{if }\alpha=1,\\ \frac{1}{\tau}+\max\left\{ s^{\natural},\,0\right\} , & \text{if }\alpha\in\left[0,1\right), \end{cases}\\ M_{2}^{(0)} & :=\left(1+\alpha\right)\frac{\sigma}{\tau}-s^{\natural},\\ H_{0} & :=\frac{1-\alpha}{\tau}+\frac{\sigma}{\tau}-s^{\natural}. \end{align*} But we have \begin{align*} H_{0} & =\begin{cases} \frac{1-\alpha}{\tau}+3+s, & \text{if }p\in\left[1,\infty\right],\\ \frac{1-\alpha}{\tau}+\frac{2}{p}+N-\left[\left(1+\alpha\right)\left(\frac{1}{p}-1\right)-s\right], & \text{if }p\in\left(0,1\right) \end{cases}\\ & =\begin{cases} \frac{1-\alpha}{\tau}+3+s, & \text{if }p\in\left[1,\infty\right],\\ \frac{1-\alpha}{\tau}+\frac{1-\alpha}{p}+1+\alpha+\left\lceil \frac{2+\varepsilon}{p}\right\rceil +s, & \text{if }p\in\left(0,1\right) \end{cases}\\ & \leq\begin{cases} \frac{1-\alpha}{\tau_{0}}+3+s_{1}, & \text{if }p\in\left[1,\infty\right],\\ \frac{1-\alpha}{\tau_{0}}+\frac{1-\alpha}{p_{0}}+1+\alpha+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil +s_{1}, & \text{if }p\in\left(0,1\right) \end{cases}\\ & \leq\Lambda_{0}-3-2\varepsilon=H-\varepsilon, \end{align*} as an easy case distinction (using $\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil \geq\left\lceil 2+\varepsilon\right\rceil \geq3$ and the observation that $p\in\left(0,1\right)$ entails $p_{0}\in\left(0,1\right)$) shows. Furthermore, \begin{align*} M_{2}^{(0)} & =\begin{cases} 3\cdot\left(1+\alpha\right)+s, & \text{if }p\in\left[1,\infty\right],\\ \left(1+\alpha\right)\left(\frac{2}{p}+N\right)-\left[\left(1+\alpha\right)\left(\frac{1}{p}-1\right)-s\right], & \text{if }p\in\left(0,1\right) \end{cases}\\ & =\begin{cases} 3\cdot\left(1+\alpha\right)+s, & \text{if }p\in\left[1,\infty\right],\\ \left(1+\alpha\right)\left(1+\frac{1}{p}+N\right)+s, & \text{if }p\in\left(0,1\right) \end{cases}\\ & \leq\begin{cases} 3\cdot\left(1+\alpha\right)+s_{1}, & \text{if }p\in\left[1,\infty\right],\\ \left(1+\alpha\right)\left(1+\frac{1}{p_{0}}+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil \right)+s_{1}, & \text{if }p\in\left(0,1\right) \end{cases}\\ & \leq\Lambda_{2}-\varepsilon=M_{2}-\varepsilon, \end{align*} as one can see again using an easy case distinction, since $\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil \geq\left\lceil 2+\varepsilon\right\rceil \geq3$. Likewise, \begin{align*} M_{1}^{\left(0\right)}\leq\frac{1}{\tau}+\max\left\{ s^{\natural},\,0\right\} & \leq\frac{1}{\tau_{0}}+\max\left\{ 0,\,\left(1+\alpha\right)\left(\frac{1}{p}-1\right)_{+}-s\right\} \\ & \leq\frac{1}{\tau_{0}}+\max\left\{ 0,\,\left(1+\alpha\right)\left(\frac{1}{p_{0}}-1\right)-s_{0}\right\} \\ & =\Lambda_{1}-\varepsilon=M_{1}-\varepsilon. 
\end{align*} Finally, we also have \begin{align*} K_{0} & \leq\max\left\{ \frac{1-\alpha}{\tau}+2\frac{\sigma}{\tau}-s^{\natural},\,\frac{2+\sigma}{\tau}\right\} \\ & =\begin{cases} \max\left\{ \frac{1-\alpha}{\tau}+6+s,\,\frac{2}{\tau}+3\right\} , & \text{if }p\in\left[1,\infty\right],\\ \max\left\{ \frac{1-\alpha}{\tau}+2\left(\frac{2}{p}+N\right)-\left[\left(1+\alpha\right)\left(\frac{1}{p}-1\right)-s\right],\,\frac{2}{\tau}+\left(\frac{2}{p}+N\right)\right\} , & \text{if }p\in\left(0,1\right) \end{cases}\\ & =\begin{cases} \max\left\{ \frac{1-\alpha}{\tau}+6+s,\,\frac{2}{\tau}+3\right\} , & \text{if }p\in\left[1,\infty\right],\\ \max\left\{ \frac{1-\alpha}{\tau}+\frac{3-\alpha}{p}+2N+1+\alpha+s,\,\frac{2}{\tau}+\frac{2}{p}+N\right\} , & \text{if }p\in\left(0,1\right) \end{cases}\\ & \leq\begin{cases} \max\left\{ \frac{1-\alpha}{\tau_{0}}+6+s_{1},\,\frac{2}{\tau_{0}}+3\right\} , & \text{if }p\in\left[1,\infty\right],\\ \max\left\{ \frac{1-\alpha}{\tau_{0}}+\frac{3-\alpha}{p_{0}}+2\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil +1+\alpha+s_{1},\,\frac{2}{\tau_{0}}+\frac{2}{p_{0}}+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil \right\} , & \text{if }p\in\left(0,1\right) \end{cases}\\ & \leq\Lambda_{3}-\varepsilon=K-\varepsilon, \end{align*} as one can see again using an easy case distinction and the estimate $\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil \geq\left\lceil 2+\varepsilon\right\rceil \geq3$. Consequently, Lemma \ref{lem:MainShearletLemma} is indeed applicable and yields $\max\left\{ K_{1},K_{2}\right\} \leq C_{5}^{\tau}C_{0}^{\tau}$. We have thus verified all assumptions of Theorem \ref{thm:AtomicDecompositionTheorem}, which yields a constant \[ K=K\left(p_{0},q_{0},\varepsilon,d,\mathcal{Q},\Phi,\gamma_{1}^{\left(0\right)},\dots,\gamma_{n}^{\left(0\right)}\right)=K\left(p_{0},q_{0},\varepsilon,\alpha,\varphi,\psi\right)>0 \] such that the family ${\rm SH}_{\alpha,\varphi,\psi,\delta}^{\left(\pm1\right)}$ from the statement of the current theorem yields an atomic decomposition of the $\alpha$-shearlet smoothness space $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)=\DecompSp{\mathcal{Q}}p{\ell_{v^{s}}^{q}}{}$, as soon as \[ 0<\delta\leq\delta_{00}:=\min\left\{ 1,\,\left[K\cdot\Omega^{\left(p\right)}\cdot\left(K_{1}^{1/\tau}+K_{2}^{1/\tau}\right)\right]^{-1}\right\} . \] But in equation \eqref{eq:AtomicDecompositionSecondConvolutionFactorEstimate} we saw $\Omega^{\left(p\right)}\leq C_{6}$ independently of $p\geq p_{0}$, $q\geq q_{0}$ and of $s\in\left[s_{0},s_{1}\right]$, so that \[ \delta_{00}\geq\delta_{0}:=\min\left\{ 1,\,\left[2K\cdot C_{0}C_{5}C_{6}\right]^{-1}\right\} , \] where $\delta_{0}>0$ is independent of the precise choice of $p,q,s$, as long as $p\geq p_{0}$, $q\geq q_{0}$ and $s\in\left[s_{0},s_{1}\right]$. The claims concerning the notion of convergence for the series defining $S^{\left(\delta\right)}$ and concerning the independence of the action of $C^{\left(\delta\right)}$ from the choice of $p,q,s$ are consequences of the remark after Theorem \ref{thm:AtomicDecompositionTheorem}. \end{proof} If $\varphi,\psi$ are compactly supported and if the mother shearlet $\psi$ is a tensor product, the preceding conditions can be simplified significantly: \begin{cor} \label{cor:ReallyNiceAlphaShearletTensorAtomicDecompositionConditions}Let $\alpha\in\left[0,1\right]$, $\varepsilon,p_{0},q_{0}\in\left(0,1\right]$ and $s_{0},s_{1}\in\mathbb{R}$ with $s_{0}\leq s_{1}$. 
Let $\Lambda_{0},\dots,\Lambda_{3}$ be as in Theorem \ref{thm:ReallyNiceShearletAtomicDecompositionConditions} and set $N_{0}:=\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil $. Assume that the mother shearlet $\psi$ can be written as $\psi=\psi_{1}\otimes\psi_{2}$ and that $\varphi,\psi_{1},\psi_{2}$ satisfy the following: \begin{enumerate}[leftmargin=0.6cm] \item We have $\varphi\in C_{c}^{\left\lceil \Lambda_{0}\right\rceil }\left(\mathbb{R}^{2}\right)$, $\psi_{1}\in C_{c}^{\left\lceil \Lambda_{2}+3+\varepsilon\right\rceil }\left(\mathbb{R}\right)$, and $\psi_{2}\in C_{c}^{\left\lceil \Lambda_{3}+3+\varepsilon\right\rceil }\left(\mathbb{R}\right)$. \item We have $\frac{\operatorname{d}^{\ell}}{\operatorname{d}\xi^{\ell}}\widehat{\psi_{1}}\left(0\right)=0$ for $\ell=0,\dots,N_{0}+\left\lceil \Lambda_{1}\right\rceil -1$. \item We have $\widehat{\varphi}\left(\xi\right)\neq0$ for all $\xi\in\left[-1,1\right]^{2}$. \item We have $\widehat{\psi_{1}}\left(\xi\right)\neq0$ for all $\xi\in\left[3^{-1},3\right]$ and $\widehat{\psi_{2}}\left(\xi\right)\neq0$ for all $\xi\in\left[-3,3\right]$. \end{enumerate} Then, $\varphi,\psi$ satisfy all assumptions of Theorem \ref{thm:ReallyNiceShearletAtomicDecompositionConditions}. \end{cor} \begin{proof} Since $\varphi,\psi\in L^{1}\left(\mathbb{R}^{2}\right)$ are compactly supported, it is well known that $\widehat{\varphi},\widehat{\psi}\in C^{\infty}\left(\mathbb{R}^{2}\right)$ with all partial derivatives being polynomially bounded (in fact bounded). Thanks to the compact support and boundedness of $\varphi,\psi$, we also clearly have $\left\Vert \varphi\right\Vert _{1+\frac{2}{p_{0}}}<\infty$ and $\left\Vert \psi\right\Vert _{1+\frac{2}{p_{0}}}<\infty$. Next, if $\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}$ satisfies $\xi_{1}\in\left[3^{-1},3\right]$ and $\left|\xi_{2}\right|\le\left|\xi_{1}\right|$, then $\left|\xi_{2}\right|\leq\left|\xi_{1}\right|\leq3$, i.e., $\xi_{2}\in\left[-3,3\right]$. Thus $\widehat{\psi}\left(\xi\right)=\widehat{\psi_{1}}\left(\xi_{1}\right)\cdot\widehat{\psi_{2}}\left(\xi_{2}\right)\neq0$, as required in Theorem \ref{thm:ReallyNiceShearletAtomicDecompositionConditions}. Hence, it only remains to verify \[ \left|\partial^{\beta}\widehat{\varphi}\left(\xi\right)\right|\lesssim\left(1+\left|\xi\right|\right)^{-\Lambda_{0}}\quad\text{ and }\quad\left|\partial^{\beta}\widehat{\psi}\left(\xi\right)\right|\lesssim\min\left\{ \left|\xi_{1}\right|^{\Lambda_{1}},\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-\Lambda_{3}}\cdot\left(1+\left|\xi\right|\right)^{-\left(3+\varepsilon\right)} \] for all $\xi\in\mathbb{R}^{2}$ and all $\beta\in\mathbb{N}_{0}^{2}$ with $\left|\beta\right|\leq N_{0}$.
To this end, we first recall that differentiation under the integral shows for $g\in C_{c}\left(\mathbb{R}^{d}\right)$ that $\widehat{g}\in C^{\infty}\left(\mathbb{R}^{d}\right)$, where the derivatives are given by \begin{equation} \partial^{\beta}\widehat{g}\left(\xi\right)=\int_{\mathbb{R}^{d}}g\left(x\right)\cdot\partial_{\xi}^{\beta}e^{-2\pi i\left\langle x,\xi\right\rangle }\operatorname{d} x=\int_{\mathbb{R}^{d}}\left(-2\pi ix\right)^{\beta}g\left(x\right)\cdot e^{-2\pi i\left\langle x,\xi\right\rangle }\operatorname{d} x=\left(\mathcal{F}\left[x\mapsto\left(-2\pi ix\right)^{\beta}g\left(x\right)\right]\right)\left(\xi\right).\label{eq:DerivativeOfFourierTransform} \end{equation} Furthermore, the usual mantra that ``smoothness of $f$ implies decay of $\widehat{f}$'' shows that every $g\in W^{N,1}\left(\mathbb{R}^{d}\right)$ satisfies $\left|\widehat{g}\left(\xi\right)\right|\lesssim\left(1+\left|\xi\right|\right)^{-N}$, see e.g.\@ \cite[Lemma 6.3]{StructuredBanachFrames}. Now, because of $\varphi\in C_{c}^{\left\lceil \Lambda_{0}\right\rceil }\left(\mathbb{R}^{2}\right)$, we also have $\left[x\mapsto\left(-2\pi ix\right)^{\beta}\varphi\left(x\right)\right]\in C_{c}^{\left\lceil \Lambda_{0}\right\rceil }\left(\mathbb{R}^{2}\right)\hookrightarrow W^{\left\lceil \Lambda_{0}\right\rceil ,1}\left(\mathbb{R}^{2}\right)$ and thus \[ \left|\partial^{\beta}\widehat{\varphi}\left(\xi\right)\right|=\left|\left(\mathcal{F}\left[x\mapsto\left(-2\pi ix\right)^{\beta}\varphi\left(x\right)\right]\right)\left(\xi\right)\right|\lesssim\left(1+\left|\xi\right|\right)^{-\left\lceil \Lambda_{0}\right\rceil }\leq\left(1+\left|\xi\right|\right)^{-\Lambda_{0}}, \] as desired. For the estimate concerning $\widehat{\psi}$, we have to work slightly harder: With the same arguments as for $\varphi$, we get $\left|\partial^{\beta}\widehat{\psi_{1}}\left(\xi\right)\right|\lesssim\left(1+\left|\xi\right|\right)^{-\left(\Lambda_{2}+3+\varepsilon\right)}$ and $\left|\partial^{\beta}\widehat{\psi_{2}}\left(\xi\right)\right|\lesssim\left(1+\left|\xi\right|\right)^{-\left(\Lambda_{3}+3+\varepsilon\right)}$ for all $\left|\beta\right|\leq N_{0}$. Now, in case of $\left|\xi_{1}\right|\geq1$, we have $\left|\xi_{1}\right|^{\Lambda_{1}}\geq1\geq\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}$ and thus \begin{align*} \left|\partial^{\beta}\widehat{\psi}\left(\xi\right)\right| & =\left|\left(\partial^{\beta_{1}}\widehat{\psi_{1}}\right)\left(\xi_{1}\right)\cdot\left(\partial^{\beta_{2}}\widehat{\psi_{2}}\right)\left(\xi_{2}\right)\right|\\ & \lesssim\left(1+\left|\xi_{1}\right|\right)^{-\left(\Lambda_{2}+3+\varepsilon\right)}\cdot\left(1+\left|\xi_{2}\right|\right)^{-\left(\Lambda_{3}+3+\varepsilon\right)}\\ & =\min\left\{ \left|\xi_{1}\right|^{\Lambda_{1}},\,\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-\Lambda_{3}}\cdot\left[\left(1+\left|\xi_{1}\right|\right)\left(1+\left|\xi_{2}\right|\right)\right]^{-\left(3+\varepsilon\right)}\\ & \leq\min\left\{ \left|\xi_{1}\right|^{\Lambda_{1}},\,\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-\Lambda_{3}}\cdot\left(1+\left|\xi\right|\right)^{-\left(3+\varepsilon\right)}, \end{align*} as desired. Here, the last step used that $\left(1+\left|\xi_{1}\right|\right)\left(1+\left|\xi_{2}\right|\right)\geq1+\left|\xi_{1}\right|+\left|\xi_{2}\right|\geq1+\left|\xi\right|$. It remains to consider the case $\left|\xi_{1}\right|\leq1$. 
But for arbitrary $\beta_{1}\in\mathbb{N}_{0}$ with $\beta_{1}\leq N_{0}$, our assumptions on $\widehat{\psi_{1}}$ ensure $\partial^{\theta}\left[\partial^{\beta_{1}}\widehat{\psi_{1}}\right]\left(0\right)=0$ for all $\theta\in\left\{ 0,\dots,\left\lceil \Lambda_{1}\right\rceil -1\right\} $, where we note $\Lambda_{1}>0$, so that $\left\lceil \Lambda_{1}\right\rceil -1\geq0$. But as the Fourier transform of a compactly supported function, $\widehat{\psi_{1}}$ (and thus also $\partial^{\beta_{1}}\widehat{\psi_{1}}$) can be extended to an entire function on $\mathbb{C}$. In particular, \begin{align} \partial^{\beta_{1}}\widehat{\psi_{1}}\left(\xi_{1}\right) & =\sum_{\theta=0}^{\infty}\frac{\partial^{\theta}\left[\partial^{\beta_{1}}\widehat{\psi_{1}}\right]\left(0\right)}{\theta!}\cdot\xi_{1}^{\theta}=\sum_{\theta=\left\lceil \Lambda_{1}\right\rceil }^{\infty}\frac{\partial^{\theta}\left[\partial^{\beta_{1}}\widehat{\psi_{1}}\right]\left(0\right)}{\theta!}\cdot\xi_{1}^{\theta}\nonumber \\ \left({\scriptstyle \text{with }\ell=\theta-\left\lceil \Lambda_{1}\right\rceil }\right) & =\xi_{1}^{\left\lceil \Lambda_{1}\right\rceil }\cdot\sum_{\ell=0}^{\infty}\frac{\partial^{\ell+\left\lceil \Lambda_{1}\right\rceil }\left[\partial^{\beta_{1}}\widehat{\psi_{1}}\right]\left(0\right)}{\left(\ell+\left\lceil \Lambda_{1}\right\rceil \right)!}\cdot\xi_{1}^{\ell}\label{eq:VanishingFourierDerivativesYieldFourierDecayAtOrigin} \end{align} for all $\xi_{1}\in\mathbb{R}$, where the power series in the last line converges absolutely on all of $\mathbb{R}$. In particular, the (continuous(!))\@ function defined by the power series is bounded on $\left[-1,1\right]$, so that we get $\left|\partial^{\beta_{1}}\widehat{\psi_{1}}\left(\xi_{1}\right)\right|\lesssim\left|\xi_{1}\right|^{\left\lceil \Lambda_{1}\right\rceil }\leq\left|\xi_{1}\right|^{\Lambda_{1}}$ for $\xi_{1}\in\left[-1,1\right]$. Furthermore, note $\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}\geq2^{-\Lambda_{2}}\geq2^{-\Lambda_{2}}\cdot\left|\xi_{1}\right|^{\Lambda_{1}}$, so that \begin{align*} \left|\partial^{\beta}\widehat{\psi}\left(\xi\right)\right| & =\left|\left(\partial^{\beta_{1}}\widehat{\psi_{1}}\right)\left(\xi_{1}\right)\cdot\left(\partial^{\beta_{2}}\widehat{\psi_{2}}\right)\left(\xi_{2}\right)\right|\\ & \lesssim\left|\xi_{1}\right|^{\Lambda_{1}}\cdot\left(1+\left|\xi_{2}\right|\right)^{-\left(\Lambda_{3}+3+\varepsilon\right)}\\ & \leq2^{\Lambda_{2}}\cdot\min\left\{ \left|\xi_{1}\right|^{\Lambda_{1}},\,\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-\Lambda_{3}}\cdot2^{3+\varepsilon}\cdot\left[\left(1+\left|\xi_{1}\right|\right)\left(1+\left|\xi_{2}\right|\right)\right]^{-\left(3+\varepsilon\right)}\\ & \leq2^{3+\varepsilon+\Lambda_{2}}\cdot\min\left\{ \left|\xi_{1}\right|^{\Lambda_{1}},\,\left(1+\left|\xi_{1}\right|\right)^{-\Lambda_{2}}\right\} \cdot\left(1+\left|\xi_{2}\right|\right)^{-\Lambda_{3}}\cdot\left(1+\left|\xi\right|\right)^{-\left(3+\varepsilon\right)}.\qedhere \end{align*} \end{proof} Finally, we provide an analogous simplification of the conditions of Theorem \ref{thm:NicelySimplifiedAlphaShearletFrameConditions}: \begin{cor} \label{cor:ReallyNiceAlphaShearletTensorBanachFrameConditions}Let $\alpha\in\left[0,1\right]$, $\varepsilon,p_{0},q_{0}\in\left(0,1\right]$ and $s_{0},s_{1}\in\mathbb{R}$ with $s_{0}\leq s_{1}$.
Let $K,M_{1},M_{2},H$ be as in Theorem \ref{thm:NicelySimplifiedAlphaShearletFrameConditions} and set $N_{0}:=\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil $. The functions $\varphi,\psi$ fulfill all assumptions of Theorem \ref{thm:NicelySimplifiedAlphaShearletFrameConditions} if the mother shearlet $\psi$ can be written as $\psi=\psi_{1}\otimes\psi_{2}$, where $\varphi,\psi_{1},\psi_{2}$ satisfy the following: \begin{enumerate}[leftmargin=0.6cm] \item We have $\varphi\in C_{c}^{\left\lceil H+1\right\rceil }\left(\mathbb{R}^{2}\right)$, $\psi_{1}\in C_{c}^{\left\lceil M_{2}+1\right\rceil }\left(\mathbb{R}\right)$, and $\psi_{2}\in C_{c}^{\left\lceil K+1\right\rceil }\left(\mathbb{R}\right)$. \item We have $\frac{\operatorname{d}^{\ell}}{\operatorname{d}\xi^{\ell}}\widehat{\psi_{1}}\left(0\right)=0$ for $\ell=0,\dots,N_{0}+\left\lceil M_{1}\right\rceil -1$. \item We have $\widehat{\varphi}\left(\xi\right)\neq0$ for all $\xi\in\left[-1,1\right]^{2}$. \item We have $\widehat{\psi_{1}}\left(\xi\right)\neq0$ for all $\xi\in\left[3^{-1},3\right]$ and $\widehat{\psi_{2}}\left(\xi\right)\neq0$ for all $\xi\in\left[-3,3\right]$.\qedhere \end{enumerate} \end{cor} \begin{proof} Observe $\varphi,\psi\in C_{c}\left(\mathbb{R}^{2}\right)\subset L^{1}\left(\mathbb{R}^{2}\right)$ and note $\widehat{\varphi},\widehat{\psi}\in C^{\infty}\left(\mathbb{R}^{2}\right)$, where all partial derivatives of these functions are bounded (and thus polynomially bounded), since $\varphi,\psi$ are compactly supported. Next, since $K,H,M_{1},M_{2}\geq0$, our assumptions clearly entail $\varphi,\psi\in C_{c}^{1}\left(\mathbb{R}^{2}\right)$, so that $\nabla\varphi,\nabla\psi\in L^{1}\left(\mathbb{R}^{2}\right)\cap L^{\infty}\left(\mathbb{R}^{2}\right)$. Furthermore, we see exactly as in the proof of Corollary \ref{cor:ReallyNiceAlphaShearletTensorAtomicDecompositionConditions} that $\widehat{\psi}\left(\xi\right)\neq0$ for all $\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}$ with $\xi_{1}\in\left[3^{-1},3\right]$ and $\left|\xi_{2}\right|\leq\left|\xi_{1}\right|$. Thus, it remains to verify that $\widehat{\varphi},\widehat{\psi}$ satisfy the decay conditions in equation \eqref{eq:ShearletFrameFourierDecayCondition}. But we see exactly as in the proof of Corollary \ref{cor:ReallyNiceAlphaShearletTensorAtomicDecompositionConditions} (cf.\@ the argument around equation \eqref{eq:DerivativeOfFourierTransform}) that $\left|\partial^{\beta}\widehat{\varphi}\left(\xi\right)\right|\lesssim\left(1+\left|\xi\right|\right)^{-\left\lceil H+1\right\rceil }\leq\left(1+\left|\xi\right|\right)^{-\left(H+1\right)}$, as well as $\left|\partial^{\beta_{1}}\widehat{\psi_{1}}\left(\xi_{1}\right)\right|\lesssim\left(1+\left|\xi_{1}\right|\right)^{-\left\lceil M_{2}+1\right\rceil }\leq\left(1+\left|\xi_{1}\right|\right)^{-\left(M_{2}+1\right)}$ and $\left|\partial^{\beta_{2}}\widehat{\psi_{2}}\left(\xi_{2}\right)\right|\lesssim\left(1+\left|\xi_{2}\right|\right)^{-\left\lceil K+1\right\rceil }\leq\left(1+\left|\xi_{2}\right|\right)^{-\left(K+1\right)}$ for all $\beta\in\mathbb{N}_{0}^{2}$ and $\beta_{1},\beta_{2}\in\mathbb{N}_{0}$. This establishes the last two lines of equation \eqref{eq:ShearletFrameFourierDecayCondition}.
For the first line of equation \eqref{eq:ShearletFrameFourierDecayCondition}, we see as in the proof of Corollary \ref{cor:ReallyNiceAlphaShearletTensorAtomicDecompositionConditions} (cf.\@ the argument around equation \eqref{eq:VanishingFourierDerivativesYieldFourierDecayAtOrigin}) that $\left|\partial^{\beta_{1}}\widehat{\psi_{1}}\left(\xi_{1}\right)\right|\lesssim\left|\xi_{1}\right|^{\left\lceil M_{1}\right\rceil }\leq\left|\xi_{1}\right|^{M_{1}}$ for all $\xi_{1}\in\left[-1,1\right]$. Since we saw above that $\left|\partial^{\beta_{2}}\widehat{\psi_{2}}\left(\xi_{2}\right)\right|\lesssim\left(1+\left|\xi_{2}\right|\right)^{-\left(K+1\right)}$ for all $\xi_{2}\in\mathbb{R}$, we have thus also established the first line of equation \eqref{eq:ShearletFrameFourierDecayCondition}. \end{proof} \section{The unconnected \texorpdfstring{$\alpha$}{α}-shearlet covering} \label{sec:UnconnectedAlphaShearletCovering}The $\alpha$-shearlet covering as introduced in Definition \ref{def:AlphaShearletCovering} divides the frequency space $\mathbb{R}^{2}$ into a low-frequency part and \emph{four} different frequency cones: the top, bottom, left and right cones. But for real-valued functions, the absolute value of the Fourier transform is symmetric. Consequently, there is no non-zero real-valued function with Fourier transform essentially supported in the top (or left, ...) cone. For this reason, it is customary to divide the frequency plane into a low-frequency part and \emph{two} different frequency cones: the horizontal and the vertical frequency cone. In this section, we account for this slightly different partition of the frequency plane by introducing the so-called \textbf{unconnected $\alpha$-shearlet covering}. The reason for this nomenclature is that the \emph{connected} base set $Q=U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}$ from Definition \ref{def:AlphaShearletCovering} is replaced by the \emph{unconnected} set $Q\cup\left(-Q\right)$. We then show that all results from the preceding two sections remain true for this modified covering, essentially since the associated decomposition spaces are identical, cf.\@ Lemma \ref{lem:UnconnectedAlphaShearletSmoothnessIsBoring}. \begin{defn} \label{def:UnconnectedAlphaShearletCovering}Let $\alpha\in\left[0,1\right]$. The \textbf{unconnected $\alpha$-shearlet covering} $\mathcal{S}_{u}^{\left(\alpha\right)}$ is defined as \[ \mathcal{\mathcal{\mathcal{S}}}_{u}^{(\alpha)}\::=\:\left(\smash{W_{v}^{\left(\alpha\right)}}\right)_{v\in V^{\left(\alpha\right)}}\::=\:\left(\smash{W_{v}}\right)_{v\in V^{\left(\alpha\right)}}\::=\:\left(\smash{B_{v}}W_{v}'\right)_{v\in V^{\left(\alpha\right)}}=\left(B_{v}W_{v}'+b_{v}\right)_{v\in V^{\left(\alpha\right)}}\:, \] where: \begin{itemize}[leftmargin=0.6cm] \item The \emph{index set} $V^{\left(\alpha\right)}$ is given by $V:=V^{\left(\alpha\right)}:=\left\{ 0\right\} \cup V_{0}$, where \[ \qquad V_{0}:=V_{0}^{(\alpha)}:=\left\{ \left(n,m,\delta\right)\in\mathbb{N}_{0}\times\mathbb{Z}\times\left\{ 0,1\right\} \,\middle|\,\left|m\right|\leq G_{n}\right\} \quad\text{ with }\quad G_{n}:=G_{n}^{\left(\alpha\right)}:=\left\lceil \smash{2^{n\left(1-\alpha\right)}}\right\rceil . \] \item The \emph{basic sets} $\left(W_{v}'\right)_{v\in V^{\left(\alpha\right)}}$ are given by $W_{0}':=\left(-1,1\right)^{2}$ and by $W_{v}':=Q_{u}:=U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}\cup\left[-\smash{U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}}\vphantom{U^{\left(3\right)}}\right]$ for $v\in V_{0}^{\left(\alpha\right)}$.
The notation $U_{\left(a,b\right)}^{\left(\gamma,\mu\right)}$ used here is as defined in equation \eqref{eq:BasicShearletSet}. \item The \emph{matrices} $\left(B_{v}\right)_{v\in V^{\left(\alpha\right)}}$ are given by $B_{0}:=B_{0}^{\left(\alpha\right)}:=\operatorname{id}$ and by $B_{v}:=B_{v}^{\left(\alpha\right)}:=R^{\delta}\cdot A_{n,m}^{\left(\alpha\right)}$, where we define $A_{n,m}^{\left(\alpha\right)}:=D_{2^{n}}^{\left(\alpha\right)}\cdot S_{m}^{T}$ for $v=\left(n,m,\delta\right)\in V_{0}$. Here, the matrices $R,S_{x}$ and $D_{b}^{\left(\alpha\right)}$ are as in equation \eqref{eq:StandardMatrices}. \item The \emph{translations} $\left(b_{v}\right)_{v\in V^{\left(\alpha\right)}}$ are given by $b_{v}:=0$ for all $v\in V^{\left(\alpha\right)}$. \end{itemize} Finally, we define the \emph{weight} $u=\left(u_{v}\right)_{v\in V}$ by $u_{0}:=1$ and $u_{n,m,\delta}:=2^{n}$ for $\left(n,m,\delta\right)\in V_{0}$. \end{defn} The unconnected $\alpha$-shearlet covering $\mathcal{S}_{u}^{\left(\alpha\right)}$ is highly similar to the (connected) $\alpha$-shearlet covering $\mathcal{S}^{\left(\alpha\right)}$ from Definition \ref{def:AlphaShearletCovering}. In particular, we have $Q_{u}=Q\cup\left(-Q\right)$ with $Q=U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}$ as in Definition \ref{def:AlphaShearletCovering}. To further exploit this connection between the two coverings, we define the \textbf{projection map} \[ \pi:I^{\left(\alpha\right)}\to V^{\left(\alpha\right)},i\mapsto\begin{cases} 0, & \text{if }i=0,\\ \left(n,m,\delta\right), & \text{if }i=\left(n,m,\varepsilon,\delta\right)\in I_{0}^{\left(\alpha\right)}. \end{cases} \] Likewise, for $\varepsilon\in\left\{ \pm1\right\} $, we define the \textbf{$\varepsilon$-injection} \[ \iota_{\varepsilon}:V^{\left(\alpha\right)}\to I^{\left(\alpha\right)},v\mapsto\begin{cases} 0, & \text{if }v=0,\\ \left(n,m,\varepsilon,\delta\right), & \text{if }v=\left(n,m,\delta\right)\in V_{0}^{\left(\alpha\right)}. \end{cases} \] Note that $B_{v}^{\left(\alpha\right)}=\varepsilon\cdot T_{\iota_{\varepsilon}\left(v\right)}^{\left(\alpha\right)}$ for all $v\in V_{0}^{\left(\alpha\right)}$, so that \begin{equation} W_{v}^{\left(\alpha\right)}=S_{\iota_{1}\left(v\right)}^{\left(\alpha\right)}\cup S_{\iota_{-1}\left(v\right)}^{\left(\alpha\right)}=\bigcup_{\varepsilon\in\left\{ \pm1\right\} }S_{\iota_{\varepsilon}\left(v\right)}^{\left(\alpha\right)}\qquad\forall v\in V_{0}^{\left(\alpha\right)},\label{eq:UnconnectedCoveringAsUnionOfConnectedCovering} \end{equation} since $B_{v}^{\left(\alpha\right)}\left[-\smash{U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}}\vphantom{U^{\left(3\right)}}\right]=-B_{v}^{\left(\alpha\right)}\smash{U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}}=T_{\iota_{-1}\left(v\right)}^{\left(\alpha\right)}Q=S_{\iota_{-1}\left(v\right)}^{\left(\alpha\right)}$. Because of $W_{0}^{\left(\alpha\right)}=\left(-1,1\right)^{2}=S_{0}^{\left(\alpha\right)}$, equation \eqref{eq:UnconnectedCoveringAsUnionOfConnectedCovering} remains valid for $v=0$. Using these observations, we can now prove the following lemma: \begin{lem} \noindent \label{lem:UnconnectedAlphaShearletCoveringIsAlmostStructured}The unconnected $\alpha$-shearlet covering $\mathcal{S}_{u}^{(\alpha)}$ is an almost structured covering of $\mathbb{R}^{2}$. \end{lem} \begin{proof} In Lemma \ref{lem:AlphaShearletCoveringIsAlmostStructured}, we showed that the (connected) $\alpha$-shearlet covering $\mathcal{S}^{\left(\alpha\right)}$ is almost structured.
Thus, for the proof of the present lemma, we will frequently refer to the proof of Lemma \ref{lem:AlphaShearletCoveringIsAlmostStructured}. First of all, recall from the proof of Lemma \ref{lem:AlphaShearletCoveringIsAlmostStructured} the notation $P_{(n,m,\varepsilon,\delta)}'=U_{(-3/4,3/4)}^{(1/2,5/2)}$ for arbitrary $\left(n,m,\varepsilon,\delta\right)\in I_{0}$. Then, for $v=\left(n,m,\delta\right)\in V_{0}$ let us define $R_{(n,m,\delta)}':=P_{(n,m,1,\delta)}'\cup\left(-P_{(n,m,1,\delta)}'\right)$. Furthermore, set $R_{0}':=P_{0}'$, again with $P_{0}'=\left(-\frac{3}{4},\frac{3}{4}\right)^{2}$ as in the proof of Lemma \ref{lem:AlphaShearletCoveringIsAlmostStructured}. Then it is not hard to verify $\overline{R_{v}'}\subset W_{v}'$ for all $v\in V$. Furthermore, in the proof of Lemma \ref{lem:AlphaShearletCoveringIsAlmostStructured}, we showed $\bigcup_{i\in I}T_{i}P_{i}'=\mathbb{R}^{2}$. But this implies \begin{align*} \bigcup_{v\in V}\left(B_{v}R_{v}'+b_{v}\right) & =R_{0}'\cup\bigcup_{\left(n,m,\delta\right)\in V_{0}}B_{\left(n,m,\delta\right)}R_{\left(n,m,\delta\right)}'\\ & =P_{0}'\cup\bigcup_{\left(n,m,\delta\right)\in V_{0}}\left[B_{\left(n,m,\delta\right)}P_{\left(n,m,1,\delta\right)}'\cup\left(-B_{\left(n,m,\delta\right)}P_{\left(n,m,1,\delta\right)}'\right)\right]\\ \left({\scriptstyle \text{since }P_{\left(n,m,1,\delta\right)}'=P_{\left(n,m,-1,\delta\right)}'}\right) & =P_{0}'\cup\bigcup_{\left(n,m,\delta\right)\in V_{0}}\left[T_{\left(n,m,1,\delta\right)}P_{\left(n,m,1,\delta\right)}'\cup T_{\left(n,m,-1,\delta\right)}P_{\left(n,m,-1,\delta\right)}'\right]\\ & =\bigcup_{i\in I}\left(T_{i}P_{i}'+b_{i}\right)=\mathbb{R}^{2}. \end{align*} Next, if $W_{\left(n,m,\delta\right)}^{\left(\alpha\right)}\cap W_{\left(k,\ell,\gamma\right)}^{\left(\alpha\right)}\neq\emptyset$, then equation \eqref{eq:UnconnectedCoveringAsUnionOfConnectedCovering} yields certain $\varepsilon,\beta\in\left\{ \pm1\right\} $ such that $S_{\left(n,m,\varepsilon,\delta\right)}^{\left(\alpha\right)}\cap S_{\left(k,\ell,\beta,\gamma\right)}^{(\alpha)}\neq\emptyset$. This implies $\left(k,\ell,\gamma\right)=\pi\left(\left(k,\ell,\beta,\gamma\right)\right)$, where $\left(k,\ell,\beta,\gamma\right)\in I_{0}\cap\left(n,m,\varepsilon,\delta\right)^{\ast}$ and where the index cluster is formed with respect to the covering $\mathcal{S}^{\left(\alpha\right)}$. Consequently, we have shown \begin{equation} \left(n,m,\delta\right)^{\ast}\subset\left\{ 0\right\} \cup\bigcup_{\varepsilon\in\left\{ \pm1\right\} }\pi\left(I_{0}\cap\left(n,m,\varepsilon,\delta\right)^{\ast}\right).\label{eq:UnconnectedShearletCoveringClusterInclusion} \end{equation} Since $\mathcal{S}^{\left(\alpha\right)}$ is admissible, the constant $N:=\sup_{i\in I}\left|i^{\ast}\right|$ is finite. By what we just showed, we have $\left|\left(n,m,\delta\right)^{\ast}\right|\leq1+2N$ for all $\left(n,m,\delta\right)\in V_{0}$. Finally, using a very similar argument one can show \[ 0^{\ast_{\mathcal{S}_{u}^{\left(\alpha\right)}}}\subset\left\{ 0\right\} \cup\pi\left(I_{0}\cap0^{\ast_{\mathcal{S}^{\left(\alpha\right)}}}\right), \] where the index-cluster is taken with respect to $\mathcal{S}_{u}^{\left(\alpha\right)}$ on the left-hand side and with respect to $\mathcal{S}^{\left(\alpha\right)}$ on the right-hand side. Thus, $\left|0^{\ast_{\mathcal{S}_{u}^{\left(\alpha\right)}}}\right|\leq1+N$, so that $\sup_{v\in V}\left|v^{\ast}\right|\leq1+2N<\infty$.
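As an aside that may help in keeping track of the combinatorics: directly from the definitions, we have $\pi\circ\iota_{\varepsilon}=\operatorname{id}_{V^{\left(\alpha\right)}}$ for both choices of $\varepsilon\in\left\{ \pm1\right\} $, and the fibers of $\pi$ are given by $\pi^{-1}\left(\left\{ \left(n,m,\delta\right)\right\} \right)=\left\{ \left(n,m,1,\delta\right),\left(n,m,-1,\delta\right)\right\} $ and $\pi^{-1}\left(\left\{ 0\right\} \right)=\left\{ 0\right\} $. This two-to-one correspondence between $I_{0}$ and $V_{0}$ is exactly what produces the factor $2$ in the bound $1+2N$.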
All in all, we have thus shown that $\mathcal{S}_{u}^{\left(\alpha\right)}$ is an admissible covering of $\mathbb{R}^{2}$. It remains to verify $\sup_{v\in V}\sup_{r\in v^{\ast}}\left\Vert B_{v}^{-1}B_{r}\right\Vert <\infty$. To this end, recall that $C:=\sup_{i\in I}\sup_{j\in i^{\ast}}\left\Vert T_{i}^{-1}T_{j}\right\Vert $ is finite, since $\mathcal{S}^{\left(\alpha\right)}$ is an almost structured covering. Now, let $v\in V$ and $r\in v^{\ast}$ be arbitrary. We distinguish several cases: \textbf{Case 1}: We have $v=\left(n,m,\delta\right)\in V_{0}$ and $r=\left(k,\ell,\gamma\right)\in V_{0}$. As above, there are thus certain $\varepsilon,\beta\in\left\{ \pm1\right\} $ such that $\left(k,\ell,\beta,\gamma\right)\in\left(n,m,\varepsilon,\delta\right)^{\ast}$. Hence, \[ \left\Vert B_{v}^{-1}B_{r}\right\Vert =\left\Vert \left(\varepsilon\cdot T_{n,m,\varepsilon,\delta}\right)^{-1}\cdot\beta\cdot T_{k,\ell,\beta,\gamma}\right\Vert =\left\Vert \left(T_{n,m,\varepsilon,\delta}\right)^{-1}\cdot T_{k,\ell,\beta,\gamma}\right\Vert \leq C. \] \textbf{Case 2}: We have $v=0$ and $r=\left(k,\ell,\gamma\right)\in V_{0}$. There is then some $\beta\in\left\{ \pm1\right\} $ satisfying $\left(k,\ell,\beta,\gamma\right)\in0^{\ast}$, where the index-cluster is taken with respect to $\mathcal{S}^{\left(\alpha\right)}$. Hence, we get again that \[ \left\Vert B_{v}^{-1}B_{r}\right\Vert =\left\Vert T_{0}^{-1}\cdot\beta\cdot T_{k,\ell,\beta,\gamma}\right\Vert =\left\Vert T_{0}^{-1}\cdot T_{k,\ell,\beta,\gamma}\right\Vert \leq C. \] \textbf{Case 3}: We have $v=\left(n,m,\delta\right)\in V_{0}$ and $r=0$. Hence, $0\in\left(n,m,\varepsilon,\delta\right)^{\ast}$ for some $\varepsilon\in\left\{ \pm1\right\} $, so that \[ \left\Vert B_{v}^{-1}B_{r}\right\Vert =\left\Vert \left(\varepsilon\cdot T_{n,m,\varepsilon,\delta}\right)^{-1}\cdot T_{0}\right\Vert =\left\Vert T_{n,m,\varepsilon,\delta}^{-1}\cdot T_{0}\right\Vert \leq C. \] \textbf{Case 4}: We have $v=r=0$. In this case, $\left\Vert B_{v}^{-1}B_{r}\right\Vert =1\leq C$. Hence, we have verified $\sup_{v\in V}\sup_{r\in v^{\ast}}\left\Vert B_{v}^{-1}B_{r}\right\Vert <\infty$. Since the sets $\left\{ \smash{W_{v}'}\,\middle|\, v\in\smash{V}\right\} $ and $\left\{ \smash{R_{v}'}\,\middle|\, v\in\smash{V}\right\} $ are finite families of bounded, open sets (in fact, each of these families only has two elements), we have shown that $\mathcal{S}_{u}^{\left(\alpha\right)}$ is an almost structured covering of $\mathbb{R}^{2}$. \end{proof} Before we can define the decomposition spaces associated to the unconnected $\alpha$-shearlet covering $\mathcal{S}_{u}^{\left(\alpha\right)}$, we need to verify that the weights that we want to use are $\mathcal{S}_{u}^{\left(\alpha\right)}$-moderate. \begin{lem} \label{lem:WeightUnconnectedModerate}Let $u=\left(u_{v}\right)_{v\in V}$ as in Definition \ref{def:UnconnectedAlphaShearletCovering}. Then $u^{s}=\left(u_{v}^{s}\right)_{v\in V}$ is $\mathcal{S}_{u}^{\left(\alpha\right)}$-moderate with $C_{\mathcal{S}_{u}^{\left(\alpha\right)},u^{s}}\leq39^{\left|s\right|}$. \end{lem} \begin{proof} As seen in equation \eqref{eq:UnconnectedCoveringAsUnionOfConnectedCovering}, we have $W_{v}^{\left(\alpha\right)}=\bigcup_{\varepsilon\in\left\{ \pm1\right\} }S_{\iota_{\varepsilon}\left(v\right)}^{\left(\alpha\right)}$ for arbitrary $v\in V$ (also for $v=0$). Furthermore, it is easy to see $u_{v}=w_{\iota_{\varepsilon}\left(v\right)}$ for arbitrary $\varepsilon\in\left\{ \pm1\right\} $ and $v\in V$. 
Thus, if $W_{v}^{\left(\alpha\right)}\cap W_{r}^{\left(\alpha\right)}\neq\emptyset$ for certain $v,r\in V$, there are $\varepsilon,\beta\in\left\{ \pm1\right\} $ such that $S_{\iota_{\varepsilon}\left(v\right)}^{\left(\alpha\right)}\cap S_{\iota_{\beta}\left(r\right)}^{\left(\alpha\right)}\neq\emptyset$. But Lemma \ref{lem:AlphaShearletWeightIsModerate} shows that $w^{s}$ is $\mathcal{S}^{\left(\alpha\right)}$-moderate with $C_{\mathcal{S}^{\left(\alpha\right)},w^{s}}\leq39^{\left|s\right|}$. Hence, \[ u_{v}^{s}/u_{r}^{s}=w_{\iota_{\varepsilon}\left(v\right)}^{s}/w_{\iota_{\beta}\left(r\right)}^{s}\leq39^{\left|s\right|}.\qedhere \] \end{proof} Since we now know that $\mathcal{S}_{u}^{\left(\alpha\right)}$ is an almost structured covering of $\mathbb{R}^{2}$ and since $u^{s}$ is $\mathcal{S}_{u}^{\left(\alpha\right)}$-moderate, we see precisely as in the remark after Definition \ref{def:AlphaShearletSmoothnessSpaces} that the \emph{unconnected} $\alpha$-shearlet smoothness spaces that we now define are well-defined quasi-Banach spaces. We emphasize that the following definition will only be of transitory relevance, since we will immediately show that the newly defined \emph{unconnected} $\alpha$-shearlet smoothness spaces are identical with the previously defined $\alpha$-shearlet smoothness spaces. \begin{defn} \label{def:UnconnectedAlphaShearletSmoothness}For $\alpha\in\left[0,1\right]$, $p,q\in\left(0,\infty\right]$ and $s\in\mathbb{R}$, we define the \textbf{unconnected $\alpha$-shearlet smoothness space} $\mathscr{D}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ associated to these parameters as \[ \mathscr{D}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right):=\DecompSp{\mathcal{S}_{u}^{\left(\alpha\right)}}p{\ell_{u^{s}}^{q}}{}, \] where the covering $\mathcal{S}_{u}^{\left(\alpha\right)}$ and the weight $u^{s}$ are as in Definition \ref{def:UnconnectedAlphaShearletCovering} and Lemma \ref{lem:WeightUnconnectedModerate}, respectively. \end{defn} \begin{lem} \label{lem:UnconnectedAlphaShearletSmoothnessIsBoring}We have \[ \mathscr{S}_{\alpha,s}^{p,q}\left(\smash{\mathbb{R}^{2}}\right)=\mathscr{D}_{\alpha,s}^{p,q}\left(\smash{\mathbb{R}^{2}}\right)\qquad\forall\alpha\in\left[0,1\right],\quad p,q\in\left(0,\infty\right]\quad\text{ and }\quad s\in\mathbb{R}, \] with equivalent quasi-norms. \end{lem} \begin{proof} We will derive the claim from \cite[Lemma 6.11, part (2)]{DecompositionEmbedding}, with the choice $\mathcal{Q}:=\mathcal{S}_{u}^{\left(\alpha\right)}$ and $\mathcal{P}:=\mathcal{S}^{\left(\alpha\right)}$, recalling that $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)=\DecompSp{\mathcal{S}^{\left(\alpha\right)}}p{\ell_{w^{s}}^{q}}{}=\mathcal{F}^{-1}\left[\FourierDecompSp{\mathcal{S}^{\left(\alpha\right)}}p{\ell_{w^{s}}^{q}}{}\right]$ and likewise $\mathscr{D}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)=\mathcal{F}^{-1}\left[\FourierDecompSp{\mathcal{S}_{u}^{\left(\alpha\right)}}p{\ell_{u^{s}}^{q}}{}\right]$. To this end, we first have to verify that the coverings $\mathcal{S}^{\left(\alpha\right)}$ and $\mathcal{S}_{u}^{\left(\alpha\right)}$ are \textbf{weakly equivalent}. This means that \[ \sup_{i\in I}\left|\left\{ v\in V\,\middle|\,\smash{W_{v}^{\left(\alpha\right)}}\cap S_{i}^{\left(\alpha\right)}\neq\emptyset\right\} \right|<\infty\qquad\text{ and }\qquad\sup_{v\in V}\left|\left\{ i\in I\,\middle|\, S_{i}^{\left(\alpha\right)}\cap W_{v}^{\left(\alpha\right)}\neq\emptyset\right\} \right|<\infty. \] We begin with the first claim and thus let $i\in I$ be arbitrary.
It is easy to see $S_{i}^{\left(\alpha\right)}\subset W_{\pi\left(i\right)}^{\left(\alpha\right)}$. Consequently, if $v\in V$ satisfies $W_{v}^{\left(\alpha\right)}\cap S_{i}^{\left(\alpha\right)}\neq\emptyset$, then $\emptyset\subsetneq W_{v}^{\left(\alpha\right)}\cap S_{i}^{\left(\alpha\right)}\subset W_{v}^{\left(\alpha\right)}\cap W_{\pi\left(i\right)}^{\left(\alpha\right)}$ and thus $v\in\left[\pi\left(i\right)\right]^{\ast}$, where the index-cluster is formed with respect to $\mathcal{S}_{u}^{\left(\alpha\right)}$. On the one hand, this implies \begin{equation} w_{i}^{t}=u_{\pi\left(i\right)}^{t}\:\asymp_{t}\:u_{v}^{t}\qquad\text{ if }S_{i}^{\left(\alpha\right)}\cap W_{v}^{\left(\alpha\right)}\neq\emptyset,\text{ for arbitrary }t\in\mathbb{R},\label{eq:ConnectedUnconnectedCoveringWeightEquivalence} \end{equation} since $u^{t}$ is $\mathcal{S}_{u}^{\left(\alpha\right)}$-moderate by Lemma \ref{lem:WeightUnconnectedModerate}. On the other hand, we get \[ \sup_{i\in I}\left|\left\{ v\in V\,\middle|\,\smash{W_{v}^{\left(\alpha\right)}}\cap S_{i}^{\left(\alpha\right)}\neq\emptyset\right\} \right|\leq\sup_{i\in I}\left|\left[\pi\left(i\right)\right]^{\ast}\right|\leq\sup_{v\in V}\left|v^{\ast}\right|<\infty, \] since we know that $\mathcal{S}_{u}^{\left(\alpha\right)}$ is admissible (cf.\@ Lemma \ref{lem:UnconnectedAlphaShearletCoveringIsAlmostStructured}). Now, let us verify the second claim. To this end, let $v\in V$ be arbitrary. For $i\in I$ with $S_{i}^{\left(\alpha\right)}\cap W_{v}^{\left(\alpha\right)}\neq\emptyset$, equation \eqref{eq:UnconnectedCoveringAsUnionOfConnectedCovering} shows $\emptyset\neq\bigcup_{\varepsilon\in\left\{ \pm1\right\} }\left(S_{i}^{\left(\alpha\right)}\cap S_{\iota_{\varepsilon}\left(v\right)}^{\left(\alpha\right)}\right)$ and thus $i\in\bigcup_{\varepsilon\in\left\{ \pm1\right\} }\left[\iota_{\varepsilon}\left(v\right)\right]^{\ast}$, where the index-cluster is formed with respect to $\mathcal{S}^{\left(\alpha\right)}$. As above, this yields \[ \sup_{v\in V}\left|\left\{ i\in I\,\middle|\, S_{i}^{\left(\alpha\right)}\cap W_{v}^{\left(\alpha\right)}\neq\emptyset\right\} \right|\leq\sup_{v\in V}\left(\left|\left[\iota_{1}\left(v\right)\right]^{\ast}\right|+\left|\left[\iota_{-1}\left(v\right)\right]^{\ast}\right|\right)\leq2\cdot\sup_{i\in I}\left|i^{\ast}\right|<\infty, \] since $\mathcal{S}^{\left(\alpha\right)}$ is admissible (cf.\@ Lemma \ref{lem:AlphaShearletCoveringIsAlmostStructured}). \medskip{} We have thus verified the two main assumptions of \cite[Lemma 6.11]{DecompositionEmbedding}, namely that $\mathcal{Q},\mathcal{P}$ are weakly equivalent and that $u_{v}^{s}\asymp w_{i}^{s}$ if $W_{v}^{\left(\alpha\right)}\cap S_{i}^{\left(\alpha\right)}\neq\emptyset$, thanks to equation \eqref{eq:ConnectedUnconnectedCoveringWeightEquivalence}. But since we also want to get the claim for $p\in\left(0,1\right)$, we have to verify the additional condition (2) from \cite[Lemma 6.11]{DecompositionEmbedding}, i.e., that $\mathcal{P}=\mathcal{S}^{\left(\alpha\right)}=\left(\smash{S_{j}^{\left(\alpha\right)}}\right)_{j\in I}=\left(T_{j}Q_{j}'\right)_{j\in I}$ is almost subordinate to $\mathcal{Q}=\mathcal{S}_{u}^{\left(\alpha\right)}=\left(W_{v}\right)_{v\in V}=\left(B_{v}W_{v}'\right)_{v\in V}$ and that $\left|\det\left(T_{j}^{-1}B_{v}\right)\right|\lesssim1$ if $W_{v}\cap S_{j}^{\left(\alpha\right)}\neq\emptyset$.
But we saw in equation \eqref{eq:ConnectedUnconnectedCoveringWeightEquivalence} that if $W_{v}^{\left(\alpha\right)}\cap S_{j}^{\left(\alpha\right)}\neq\emptyset$, then \[ \left|\det\left(T_{j}^{-1}B_{v}\right)\right|=\left(w_{j}^{1+\alpha}\right)^{-1}\cdot u_{v}^{1+\alpha}\:\asymp_{\alpha}\:1. \] Furthermore, $S_{j}^{\left(\alpha\right)}\subset W_{\pi\left(j\right)}^{\left(\alpha\right)}$ for all $j\in I$, so that $\mathcal{P}=\mathcal{S}^{\left(\alpha\right)}$ is subordinate (and thus also almost subordinate, cf.\@ \cite[Definition 2.10]{DecompositionEmbedding}) to $\mathcal{Q}=\mathcal{S}_{u}^{\left(\alpha\right)}$, as required. The claim is now an immediate consequence of \cite[Lemma 6.11]{DecompositionEmbedding}. \end{proof} In order to allow for a more succinct formulation of our results about Banach frames and atomic decompositions in the setting of the \emph{unconnected} $\alpha$-shearlet covering, we now introduce the notion of \textbf{cone-adapted $\alpha$-shearlet systems}. As we will see in Section \ref{sec:CartoonLikeFunctionsAreBoundedInAlphaShearletSmoothness}, these systems are different, but intimately connected to the \textbf{cone-adapted $\beta$-shearlet systems} (with $\beta\in\left(1,\infty\right)$) as introduced in \cite[Definition 3.10]{AlphaMolecules}. There are three main reasons why we think that the new definition is preferable to the old one: \begin{enumerate} \item With the new definition, a family $\left(L_{\delta k}\,\varphi\right)_{k\in\mathbb{Z}^{2}}\cup\left(\psi_{j,\ell,\delta,k}\right)_{j,\ell,\delta,k}$ of $\alpha$-shearlets has the property that the shearlets $\psi_{j,\ell,\delta,k}$ of scale $j$ have essential frequency support in the dyadic corona $\left\{ \xi\in\mathbb{R}^{2}\with2^{j-c}<\left|\xi\right|<2^{j+c}\right\} $ for suitable $c>0$. In contrast, for $\beta$-shearlets, the shearlets of scale $j$ have essential frequency support in $\left\{ \xi\in\mathbb{R}^{2}\with2^{\frac{\beta}{2}\left(j-c\right)}<\left|\xi\right|<2^{\frac{\beta}{2}\left(j+c\right)}\right\} $, cf.\@ Lemma \ref{lem:ReciprocalShearletCoveringIsAlmostStructuredGeneralized}. \item With the new definition, a family of cone-adapted $\alpha$-shearlets is also a family of $\alpha$-molecules, if the generators are chosen suitably. In contrast, for $\beta$-shearlets, one has the slightly inconvenient fact that a family of cone-adapted $\beta$-shearlets is a family of $\beta^{-1}$-molecules, cf.\@ \cite[Proposition 3.11]{AlphaMolecules}. \item The new definition includes the two boundary values $\alpha\in\left\{ 0,1\right\} $ which correspond to ridgelet-like systems and to wavelet-like systems, respectively. In contrast, for $\beta$-shearlets, the boundary values $\beta\in\left\{ 1,\infty\right\} $ are excluded from the definition. \end{enumerate} We remark that a very similar definition to the one given here is already introduced in \cite[Definition 5.1]{MultivariateAlphaMolecules}, even generally in $\mathbb{R}^{d}$ for $d\geq2$. \begin{defn} \label{def:AlphaShearletSystem}Let $\alpha\in\left[0,1\right]$. 
For generators $\varphi,\psi\in L^{1}\left(\mathbb{R}^{2}\right)+L^{2}\left(\mathbb{R}^{2}\right)$ and a given sampling density $\delta>0$, we define the \textbf{cone-adapted $\alpha$-shearlet system} with sampling density $\delta$ generated by $\varphi,\psi$ as \[ {\rm SH}_{\alpha}\left(\varphi,\psi;\,\delta\right):=\left(\gamma^{\left[v,k\right]}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}:=\left(L_{\delta\cdot B_{v}^{-T}k}\:\gamma^{\left[v\right]}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\quad\text{ with }\quad\gamma^{\left[v\right]}:=\begin{cases} \left|\det\smash{B_{v}}\right|^{1/2}\cdot\left(\psi\circ B_{v}^{T}\right), & \text{if }v\in V_{0},\\ \varphi, & \text{if }v=0, \end{cases} \] where $V,V_{0}$ and $B_{v}$ are as in Definition \ref{def:UnconnectedAlphaShearletCovering}. Note that the notation $\gamma^{\left[v,k\right]}$ suppresses the sampling density $\delta>0$. If we want to emphasize this sampling density, we write $\gamma^{\left[v,k,\delta\right]}$ instead of $\gamma^{\left[v,k\right]}$. \end{defn} \begin{rem} \label{rem:AlphaShearletsYieldUsualShearlets}In case of $\alpha=\frac{1}{2}$, the preceding definition yields special cone-adapted shearlet systems: As defined in \cite[Definition 1.2]{CompactlySupportedShearletsAreOptimallySparse}, the cone-adapted shearlet system ${\rm SH}\left(\varphi,\psi,\theta;\,\delta\right)$ with sampling density $\delta>0$ generated by $\varphi,\psi,\theta\in L^{2}\left(\mathbb{R}^{2}\right)$ is ${\rm SH}\left(\varphi,\psi,\theta;\,\delta\right)=\Phi\left(\varphi;\,\delta\right)\cup\Psi\left(\psi;\,\delta\right)\cup\Theta\left(\theta;\,\delta\right)$, where \begin{align*} \Phi\left(\varphi;\,\delta\right) & :=\left\{ \varphi_{k}:=\varphi\left(\bullet-\delta k\right)\,\middle|\, k\in\mathbb{Z}^{2}\right\} ,\\ \Psi\left(\psi;\,\delta\right) & :=\left\{ \psi_{j,\ell,k}:=2^{\frac{3}{4}j}\cdot\psi\left(S_{\ell}A_{2^{j}}\bullet-\delta k\right)\,\middle|\, j\in\mathbb{N}_{0},\ell\in\mathbb{Z}\text{ with }\left|\ell\right|\leq\left\lceil \smash{2^{j/2}}\right\rceil \text{ and }k\in\mathbb{Z}^{2}\right\} ,\\ \Theta\left(\theta;\,\delta\right) & :=\left\{ \theta_{j,\ell,k}:=2^{\frac{3}{4}j}\cdot\theta\left(S_{\ell}^{T}\widetilde{A}_{2^{j}}\bullet-\delta k\right)\,\middle|\, j\in\mathbb{N}_{0},\ell\in\mathbb{Z}\text{ with }\left|\ell\right|\leq\left\lceil \smash{2^{j/2}}\right\rceil \text{ and }k\in\mathbb{Z}^{2}\right\} , \end{align*} with $S_{k}=\left(\begin{smallmatrix}1 & k\\ 0 & 1 \end{smallmatrix}\right)$, $A_{2^{j}}={\rm diag}\left(2^{j},\,2^{j/2}\right)$ and $\widetilde{A}_{2^{j}}={\rm diag}\left(2^{j/2},\,2^{j}\right)$. Now, the most common choice for $\theta$ is $\theta=\psi\circ R$ for $R=\left(\begin{smallmatrix}0 & 1\\ 1 & 0 \end{smallmatrix}\right)$. With this choice, we observe in the notation of Definitions \ref{def:AlphaShearletSystem} and \ref{def:UnconnectedAlphaShearletCovering} that \[ \gamma^{\left[0,k\right]}=L_{\delta\cdot B_{0}^{-T}k}\:\gamma^{\left[0\right]}=L_{\delta k}\:\varphi=\varphi\left(\bullet-\delta k\right)=\varphi_{k}\qquad\forall k\in\mathbb{Z}^{2}. 
\] Furthermore, we note because of $\alpha=\frac{1}{2}$ that \[ B_{j,\ell,0}^{T}=\left[\left(\begin{matrix}2^{j} & 0\\ 0 & 2^{j/2} \end{matrix}\right)\cdot\left(\begin{matrix}1 & 0\\ \ell & 1 \end{matrix}\right)\right]^{T}=S_{\ell}\cdot A_{2^{j}}\:, \] with $\left|\det\smash{B_{j,\ell,0}}\right|=2^{\frac{3}{2}j}$, so that \[ \gamma^{\left[\left(j,\ell,0\right),k\right]}=L_{\delta\cdot\left[S_{\ell}A_{2^{j}}\right]^{-1}k}\:\gamma^{\left[\left(j,\ell,0\right)\right]}=2^{\frac{3}{4}j}\cdot\psi\left(S_{\ell}\cdot A_{2^{j}}\bullet-\delta k\right)=\psi_{j,\ell,k}\qquad\forall\left(j,\ell,0\right)\in V_{0}\text{ and }k\in\mathbb{Z}^{2}. \] Finally, we observe $\theta\left(S_{\ell}^{T}\widetilde{A}_{2^{j}}\bullet-\delta k\right)=\psi\left(RS_{\ell}^{T}\widetilde{A}_{2^{j}}\bullet-\delta Rk\right)$, as well as \[ R\cdot S_{\ell}^{T}\cdot\widetilde{A}_{2^{j}}=\left(\begin{matrix}0 & 1\\ 1 & 0 \end{matrix}\right)\left(\begin{matrix}1 & 0\\ \ell & 1 \end{matrix}\right)\left(\begin{matrix}2^{j/2} & 0\\ 0 & 2^{j} \end{matrix}\right)=\left(\begin{matrix}2^{j/2}\ell & 2^{j}\\ 2^{j/2} & 0 \end{matrix}\right) \] and \[ B_{j,\ell,1}^{T}=\left[\left(\begin{matrix}0 & 1\\ 1 & 0 \end{matrix}\right)\left(\begin{matrix}2^{j} & 0\\ 0 & 2^{j/2} \end{matrix}\right)\left(\begin{matrix}1 & 0\\ \ell & 1 \end{matrix}\right)\right]^{T}=\left[\left(\begin{matrix}0 & 2^{j/2}\\ 2^{j} & 0 \end{matrix}\right)\left(\begin{matrix}1 & 0\\ \ell & 1 \end{matrix}\right)\right]^{T}=\left(\begin{matrix}2^{j/2}\ell & 2^{j/2}\\ 2^{j} & 0 \end{matrix}\right)^{T}=R\cdot S_{\ell}^{T}\cdot\widetilde{A}_{2^{j}}. \] Consequently, we also get \[ \gamma^{\left[\left(j,\ell,1\right),k\right]}=L_{\delta\cdot\left[R\cdot S_{\ell}^{T}\cdot\widetilde{A}_{2^{j}}\right]^{-1}k}\:\gamma^{\left[\left(j,\ell,1\right)\right]}=2^{\frac{3}{4}j}\cdot\psi\left(R\cdot S_{\ell}^{T}\cdot\widetilde{A}_{2^{j}}\bullet-\delta k\right)=2^{\frac{3}{4}j}\cdot\psi\left(R\cdot S_{\ell}^{T}\cdot\widetilde{A}_{2^{j}}\bullet-\delta RRk\right)=\theta_{j,\ell,Rk} \] for arbitrary $\left(j,\ell,1\right)\in V_{0}$ and $k\in\mathbb{Z}^{2}$. Since $\mathbb{Z}^{2}\to\mathbb{Z}^{2},k\mapsto Rk$ is bijective, this implies \[ {\rm SH}\left(\varphi,\psi,\theta;\,\delta\right)={\rm SH}_{1/2}\left(\varphi,\psi;\,\delta\right)\text{ up to a reordering in the translation variable }k\qquad\text{ if }\text{\ensuremath{\theta}}=\psi\circ R.\qedhere \] \end{rem} We now want to transfer Theorems \ref{thm:NicelySimplifiedAlphaShearletFrameConditions} and \ref{thm:ReallyNiceShearletAtomicDecompositionConditions} to the setting of the \emph{unconnected} $\alpha$-shearlet covering. The link between the connected and the unconnected setting is provided by the following lemma: \begin{lem} \label{lem:MEstimate}With $\varrho$, $\varrho_{0}$ as in equation \eqref{eq:MotherShearletMainEstimate}, set $\widetilde{\varrho}_{0}:=\varrho_{0}$, as well as $\widetilde{\varrho}_{v}:=\varrho$ for $v\in V_{0}$. Moreover, set \[ \widetilde{M}_{r,v}^{(0)}:=\left(\frac{u_{r}^{s}}{u_{v}^{s}}\right)^{\tau}\left(1\!+\!\left\Vert B_{r}^{-1}B_{v}\right\Vert \right)^{\sigma}\left(\left|\det\smash{B_{v}}\right|^{-1}\cdot\int_{W_{v}^{\left(\alpha\right)}}\widetilde{\varrho}_{r}\left(B_{r}^{-1}\xi\right)\operatorname{d}\xi\right)^{\!\tau} \] for $v,r\in V$. 
Then we have \[ \widetilde{M}_{r,v}^{\left(0\right)}\leq2^{\tau}\cdot M_{\iota_{1}\left(r\right),\iota_{1}\left(v\right)}^{\left(0\right)} \] for all $v,r\in V$, where $M_{\iota_{1}\left(r\right),\iota_{1}\left(v\right)}^{\left(0\right)}$ is as in Lemma \ref{lem:MainShearletLemma}. \end{lem} \begin{proof} First of all, recall \[ W_{v}'=\begin{cases} \vphantom{\sum_{j}}U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}\cup\left[\vphantom{U^{\left(\gamma\right)}}-\smash{U_{\left(-1,1\right)}^{\left(3^{-1},3\right)}}\right]=Q_{\iota_{1}\left(v\right)}'\cup\left[\vphantom{U^{\left(\gamma\right)}}-\smash{Q_{\iota_{1}\left(v\right)}'}\right], & \text{if }v\in V_{0},\\ \left(-1,1\right)^{2}=\left(-1,1\right)^{2}\cup\left[-\left(-1,1\right)^{2}\right]=Q_{\iota_{1}\left(v\right)}'\cup\left[\vphantom{U^{\left(\gamma\right)}}-\smash{Q_{\iota_{1}\left(v\right)}'}\right], & \text{if }v=0 \end{cases} \] and $B_{v}=T_{\iota_{1}\left(v\right)}$, as well as $u_{v}=w_{\iota_{1}\left(v\right)}$ and $\widetilde{\varrho}_{v}=\varrho_{\iota_{1}\left(v\right)}$ for all $v\in V$. Thus, \begin{align*} \widetilde{M}_{r,v}^{\left(0\right)} & =\left(\frac{u_{r}^{s}}{u_{v}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert B_{r}^{-1}B_{v}\right\Vert \right)^{\sigma}\cdot\left(\left|\det\smash{B_{v}}\right|^{-1}\cdot\int_{W_{v}^{\left(\alpha\right)}}\widetilde{\varrho}_{r}\left(B_{r}^{-1}\xi\right)\operatorname{d}\xi\right)^{\tau}\\ \left({\scriptstyle \text{with }\zeta=B_{v}^{-1}\xi}\right) & =\left(\frac{w_{\iota_{1}\left(r\right)}^{s}}{w_{\iota_{1}\left(v\right)}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert T_{\iota_{1}\left(r\right)}^{-1}T_{\iota_{1}\left(v\right)}\right\Vert \right)^{\sigma}\cdot\left(\int_{W_{v}'}\widetilde{\varrho}_{r}\left(B_{r}^{-1}B_{v}\zeta\right)\operatorname{d}\zeta\right)^{\tau}\\ & =\left(\frac{w_{\iota_{1}\left(r\right)}^{s}}{w_{\iota_{1}\left(v\right)}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert T_{\iota_{1}\left(r\right)}^{-1}T_{\iota_{1}\left(v\right)}\right\Vert \right)^{\sigma}\cdot\left(\int_{Q_{\iota_{1}\left(v\right)}'\cup\left[-Q_{\iota_{1}\left(v\right)}'\right]}\varrho_{\iota_{1}\left(r\right)}\left(T_{\iota_{1}\left(r\right)}^{-1}T_{\iota_{1}\left(v\right)}\zeta\right)\operatorname{d}\zeta\right)^{\tau}\\ & \leq\left(\frac{w_{\iota_{1}\left(r\right)}^{s}}{w_{\iota_{1}\left(v\right)}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert T_{\iota_{1}\left(r\right)}^{-1}T_{\iota_{1}\left(v\right)}\right\Vert \right)^{\sigma}\\ & \phantom{\leq}\qquad\cdot\left(\int_{Q_{\iota_{1}\left(v\right)}'}\varrho_{\iota_{1}\left(r\right)}\left(T_{\iota_{1}\left(r\right)}^{-1}T_{\iota_{1}\left(v\right)}\zeta\right)\operatorname{d}\zeta+\int_{-Q_{\iota_{1}\left(v\right)}'}\varrho_{\iota_{1}\left(r\right)}\left(T_{\iota_{1}\left(r\right)}^{-1}T_{\iota_{1}\left(v\right)}\zeta\right)\operatorname{d}\zeta\right)^{\tau}\\ \left({\scriptstyle \text{since }\varrho_{\iota_{1}\left(r\right)}\left(-\xi\right)=\varrho_{\iota_{1}\left(r\right)}\left(\xi\right)}\right) & =\left(\frac{w_{\iota_{1}\left(r\right)}^{s}}{w_{\iota_{1}\left(v\right)}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert T_{\iota_{1}\left(r\right)}^{-1}T_{\iota_{1}\left(v\right)}\right\Vert \right)^{\sigma}\cdot\left(2\cdot\int_{Q_{\iota_{1}\left(v\right)}'}\varrho_{\iota_{1}\left(r\right)}\left(T_{\iota_{1}\left(r\right)}^{-1}T_{\iota_{1}\left(v\right)}\zeta\right)\operatorname{d}\zeta\right)^{\tau}\\ \left({\scriptstyle \text{with }\xi=T_{\iota_{1}\left(v\right)}\zeta}\right) & 
=2^{\tau}\cdot\left(\frac{w_{\iota_{1}\left(r\right)}^{s}}{w_{\iota_{1}\left(v\right)}^{s}}\right)^{\tau}\cdot\left(1+\left\Vert T_{\iota_{1}\left(r\right)}^{-1}T_{\iota_{1}\left(v\right)}\right\Vert \right)^{\sigma}\cdot\left(\left|\det T_{\iota_{1}\left(v\right)}\right|^{-1}\cdot\int_{S_{\iota_{1}\left(v\right)}^{\left(\alpha\right)}}\varrho_{\iota_{1}\left(r\right)}\left(T_{\iota_{1}\left(r\right)}^{-1}\xi\right)\operatorname{d}\xi\right)^{\tau}\\ & =2^{\tau}\cdot M_{\iota_{1}\left(r\right),\iota_{1}\left(v\right)}^{\left(0\right)}.\qedhere \end{align*} \end{proof} Since the map $\iota_{1}:V\to I$ is injective, Lemma \ref{lem:MEstimate} implies \[ \max\left\{ \left(\sup_{v\in V}\,\sum_{r\in V}\widetilde{M}_{r,v}^{\left(0\right)}\right)^{1/\tau},\,\left(\sup_{r\in V}\,\sum_{v\in V}\widetilde{M}_{r,v}^{\left(0\right)}\right)^{1/\tau}\right\} \leq2\cdot\max\left\{ \left(\sup_{i\in I}\,\sum_{j\in I}M_{j,i}^{\left(0\right)}\right)^{1/\tau},\,\left(\sup_{j\in I}\,\sum_{i\in I}M_{j,i}^{\left(0\right)}\right)^{1/\tau}\right\} . \] Then, recalling Lemma \ref{lem:UnconnectedAlphaShearletSmoothnessIsBoring} and using \emph{precisely} the same arguments as for proving Theorems \ref{thm:NicelySimplifiedAlphaShearletFrameConditions} and \ref{thm:ReallyNiceShearletAtomicDecompositionConditions}, one can prove the following two theorems: \begin{thm} \label{thm:NicelySimplifiedUnconnectedAlphaShearletFrameConditions}Theorem \ref{thm:NicelySimplifiedAlphaShearletFrameConditions} remains essentially valid if the family $\widetilde{{\rm SH}}_{\alpha,\varphi,\psi,\delta}^{\left(\pm1\right)}$ is replaced by the $\alpha$-shearlet system \[ {\rm SH}_{\alpha}\left(\smash{\widetilde{\varphi},\widetilde{\psi}};\,\delta\right)=\left(L_{\delta\cdot B_{v}^{-T}k}\:\widetilde{\gamma^{\left[v\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\quad\text{ with }\quad\gamma^{\left[v\right]}:=\begin{cases} \left|\det\smash{B_{v}}\right|^{1/2}\cdot\left(\psi\circ B_{v}^{T}\right), & \text{if }v\in V_{0},\\ \varphi, & \text{if }v=0, \end{cases} \] where $\widetilde{\varphi}\left(x\right)=\varphi\left(-x\right)$ and $\widetilde{\psi}\left(x\right)=\psi\left(-x\right)$. The only two necessary changes are the following: \begin{enumerate} \item The assumption $\widehat{\psi}\left(\xi\right)\neq0$ for $\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}$ with $\xi_{1}\in\left[3^{-1},3\right]$ and $\left|\xi_{2}\right|\leq\left|\xi_{1}\right|$ has to be replaced by \[ \widehat{\psi}\left(\xi\right)\neq0\text{ for }\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}\text{ with }\frac{1}{3}\leq\left|\xi_{1}\right|\leq3\text{ and }\left|\xi_{2}\right|\leq\left|\xi_{1}\right|. \] \item For the definition of the analysis operator $A^{\left(\delta\right)}$, the convolution $\gamma^{\left[v\right]}\ast f$ has to be defined as in equation \eqref{eq:SpecialConvolutionDefinition}, but using a regular partition of unity $\left(\varphi_{v}\right)_{v\in V}$ for $\mathcal{S}_{u}^{\left(\alpha\right)}$, i.e., \[ \left(\gamma^{\left[v\right]}\ast f\right)\left(x\right)=\sum_{\ell\in V}\mathcal{F}^{-1}\left(\widehat{\gamma^{\left[v\right]}}\cdot\varphi_{\ell}\cdot\widehat{f}\:\right)\left(x\right)\qquad\forall x\in\mathbb{R}^{2}, \] where the series converges normally in $L^{\infty}\left(\mathbb{R}^{2}\right)$ and thus absolutely and uniformly, for all $f\in\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$.
For a more convenient expression for this convolution—at least for $f\in L^{2}\left(\mathbb{R}^{2}\right)$—see Lemma \ref{lem:SpecialConvolutionClarification} below.\qedhere \end{enumerate} \end{thm} \begin{thm} \label{thm:ReallyNiceUnconnectedShearletAtomicDecompositionConditions}Theorem \ref{thm:ReallyNiceShearletAtomicDecompositionConditions} remains essentially valid if the family ${\rm SH}_{\varphi,\psi,\delta}^{\left(\pm1\right)}$ is replaced by the $\alpha$-shearlet system \[ {\rm SH}_{\alpha}\left(\varphi,\psi;\,\delta\right)=\left(L_{\delta\cdot B_{v}^{-T}k}\:\gamma^{\left[v\right]}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\quad\text{ with }\quad\gamma^{\left[v\right]}:=\begin{cases} \left|\det\smash{B_{v}}\right|^{1/2}\cdot\left(\psi\circ B_{v}^{T}\right), & \text{if }v\in V_{0},\\ \varphi, & \text{if }v=0. \end{cases} \] The only necessary change is that the assumption $\widehat{\psi}\left(\xi\right)\neq0$ for $\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}$ with $\xi_{1}\in\left[3^{-1},3\right]$ and $\left|\xi_{2}\right|\leq\left|\xi_{1}\right|$ has to be replaced by \[ \widehat{\psi}\left(\xi\right)\neq0\text{ for }\xi=\left(\xi_{1},\xi_{2}\right)\in\mathbb{R}^{2}\text{ with }\frac{1}{3}\leq\left|\xi_{1}\right|\leq3\text{ and }\left|\xi_{2}\right|\leq\left|\xi_{1}\right|.\qedhere \] \end{thm} \begin{rem} \label{rem:NiceTensorConditionsForUnconnectedCovering}With the exact same reasoning, one can also show that Corollaries \ref{cor:ReallyNiceAlphaShearletTensorAtomicDecompositionConditions} and \ref{cor:ReallyNiceAlphaShearletTensorBanachFrameConditions} remain valid with the obvious changes. Again, one now has to require \[ \widehat{\psi_{1}}\left(\xi\right)\neq0\text{ for }\frac{1}{3}\leq\left|\xi\right|\leq3 \] instead of $\widehat{\psi_{1}}\left(\xi\right)\neq0$ for $\xi\in\left[3^{-1},3\right]$. \end{rem} The one remaining limitation of Theorems \ref{thm:NicelySimplifiedAlphaShearletFrameConditions} and \ref{thm:NicelySimplifiedUnconnectedAlphaShearletFrameConditions} is their somewhat strange definition of the convolution $\left(\gamma^{\left[i\right]}\ast f\right)\left(x\right)$. The following lemma makes this definition more concrete, under the \emph{assumption} that we already know $f\in L^{2}\left(\mathbb{R}^{2}\right)$. For general $f\in\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$, this need not be the case, but for suitable values of $p,q,s$, we have $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)\hookrightarrow L^{2}\left(\mathbb{R}^{2}\right)$, as we will see in Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}. \begin{lem} \label{lem:SpecialConvolutionClarification}Let $\left(\varphi_{i}\right)_{i\in I}$ be a regular partition of unity subordinate to some almost structured covering $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}$ of $\mathbb{R}^{d}$. Assume that $\gamma\in L^{1}\left(\mathbb{R}^{d}\right)\cap L^{2}\left(\mathbb{R}^{d}\right)$ with $\widehat{\gamma}\in C^{\infty}\left(\mathbb{R}^{d}\right)$, where all partial derivatives of $\widehat{\gamma}$ are polynomially bounded. Let $f\in L^{2}\left(\mathbb{R}^{d}\right)\hookrightarrow\mathcal{S}'\left(\mathbb{R}^{d}\right)\hookrightarrow Z'\left(\mathbb{R}^{d}\right)$ be arbitrary.
Then we have \[ \sum_{\ell\in I}\mathcal{F}^{-1}\left(\widehat{\gamma}\cdot\varphi_{\ell}\cdot\widehat{f}\right)\left(x\right)=\left\langle f,\,L_{x}\widetilde{\gamma}\right\rangle \qquad\forall x\in\mathbb{R}^{d}, \] where $\widetilde{\gamma}\left(x\right)=\gamma\left(-x\right)$ and where $\left\langle f,g\right\rangle =\int_{\mathbb{R}^{d}}f\left(x\right)\cdot g\left(x\right)\operatorname{d} x$. \end{lem} \begin{proof} In the expression $\mathcal{F}^{-1}\left(\widehat{\gamma}\cdot\varphi_{\ell}\cdot\widehat{f}\right)\left(x\right)$, the inverse Fourier transform has to be understood in the distributional sense, applied to the compactly supported tempered distribution $\widehat{\gamma}\cdot\varphi_{\ell}\cdot\widehat{f}\in\mathcal{S}'\left(\mathbb{R}^{d}\right)$. But by the Paley-Wiener theorem (see e.g.\@ \cite[Theorem 7.23]{RudinFA}), the tempered distribution $\mathcal{F}^{-1}\left(\widehat{\gamma}\cdot\varphi_{\ell}\cdot\widehat{f}\right)$ is given by (integration against) a (uniquely determined) smooth function, whose value at $x\in\mathbb{R}^{d}$ we denote by $\mathcal{F}^{-1}\left(\widehat{\gamma}\cdot\varphi_{\ell}\cdot\widehat{f}\right)\left(x\right)$. Precisely, we have \[ \mathcal{F}^{-1}\left(\widehat{\gamma}\cdot\varphi_{\ell}\cdot\widehat{f}\right)\left(x\right)=\left\langle \widehat{\gamma}\cdot\widehat{f},\,\varphi_{\ell}\cdot e^{2\pi i\left\langle x,\bullet\right\rangle }\right\rangle _{\DistributionSpace{\mathbb{R}^{d}},\TestFunctionSpace{\mathbb{R}^{d}}}=\int_{\mathbb{R}^{d}}\widehat{\gamma}\left(\xi\right)\cdot\widehat{f}\left(\xi\right)\cdot e^{2\pi i\left\langle x,\xi\right\rangle }\cdot\varphi_{\ell}\left(\xi\right)\operatorname{d}\xi. \] But since $\mathcal{Q}$ is an admissible covering of $\mathbb{R}^{d}$ and since $\left(\varphi_{\ell}\right)_{\ell\in I}$ is a regular partition of unity subordinate to $\mathcal{Q}$, we have \begin{align*} \sum_{\ell\in I}\left|\widehat{\gamma}\left(\xi\right)\cdot\widehat{f}\left(\xi\right)\cdot e^{2\pi i\left\langle x,\xi\right\rangle }\cdot\varphi_{\ell}\left(\xi\right)\right| & \leq\left|\widehat{\gamma}\left(\xi\right)\cdot\smash{\widehat{f}}\left(\xi\right)\right|\cdot\sum_{\ell\in I}\left|\varphi_{\ell}\left(\xi\right)\right|\\ & \leq\sup_{\ell\in I}\left\Vert \varphi_{\ell}\right\Vert _{\sup}\cdot\left|\widehat{\gamma}\left(\xi\right)\cdot\smash{\widehat{f}}\left(\xi\right)\right|\cdot\sum_{\ell\in I}{\mathds{1}}_{Q_{\ell}}\left(\xi\right)\\ & \leq N_{\mathcal{Q}}\cdot\sup_{\ell\in I}\left\Vert \varphi_{\ell}\right\Vert _{\sup}\cdot\left|\widehat{\gamma}\left(\xi\right)\cdot\smash{\widehat{f}}\left(\xi\right)\right|\in L^{1}\left(\mathbb{R}^{d}\right), \end{align*} since $\widehat{\gamma},\widehat{f}\in L^{2}\left(\mathbb{R}^{d}\right)$.
Since we also have $\sum_{\ell\in I}\varphi_{\ell}\equiv1$ on $\mathbb{R}^{d}$, we get by the dominated convergence theorem that \begin{align*} \sum_{\ell\in I}\mathcal{F}^{-1}\left(\widehat{\gamma}\cdot\varphi_{\ell}\cdot\widehat{f}\right)\left(x\right) & =\int_{\mathbb{R}^{d}}\widehat{\gamma}\left(\xi\right)\cdot\widehat{f}\left(\xi\right)\cdot e^{2\pi i\left\langle x,\xi\right\rangle }\cdot\sum_{\ell\in I}\varphi_{\ell}\left(\xi\right)\operatorname{d}\xi\\ & =\int_{\mathbb{R}^{d}}\widehat{\gamma}\left(\xi\right)\cdot\widehat{f}\left(\xi\right)\cdot e^{2\pi i\left\langle x,\xi\right\rangle }\operatorname{d}\xi=\mathcal{F}^{-1}\left(\smash{\widehat{\gamma}\cdot\widehat{f}}\right)\left(x\right), \end{align*} where $\mathcal{F}^{-1}\left(\smash{\widehat{\gamma}\cdot\widehat{f}}\right)\in L^{2}\left(\mathbb{R}^{d}\right)\cap C_{0}\left(\mathbb{R}^{d}\right)$ by the Riemann-Lebesgue Lemma and Plancherel's theorem, because of $\widehat{\gamma}\cdot\widehat{f}\in L^{1}\left(\mathbb{R}^{d}\right)\cap L^{2}\left(\mathbb{R}^{d}\right)$. But Young's inequality shows $\gamma\ast f\in L^{2}\left(\mathbb{R}^{d}\right)$, while the convolution theorem yields $\mathcal{F}\left[\gamma\ast f\right]=\widehat{\gamma}\cdot\widehat{f}$. Hence, $\gamma\ast f=\mathcal{F}^{-1}\left(\smash{\widehat{\gamma}\cdot\widehat{f}}\right)$ almost everywhere. But both sides of the identity are continuous functions, since the convolution of two $L^{2}$ functions is continuous. Thus, the equality holds everywhere, so that we finally get \[ \sum_{\ell\in I}\mathcal{F}^{-1}\left(\widehat{\gamma}\cdot\varphi_{\ell}\cdot\widehat{f}\right)\left(x\right)=\mathcal{F}^{-1}\left(\smash{\widehat{\gamma}\cdot\widehat{f}}\right)\left(x\right)=\left(\gamma\ast f\right)\left(x\right)=\int_{\mathbb{R}^{d}}f\left(y\right)\cdot\gamma\left(x-y\right)\operatorname{d} y=\left\langle f,\,L_{x}\widetilde{\gamma}\right\rangle .\qedhere \] \end{proof} We close this section with a theorem that justifies the title of the paper: It formally encodes the fact that \emph{analysis sparsity is equivalent to synthesis sparsity} for (suitable) $\alpha$-shearlet systems. \begin{thm} \label{thm:AnalysisAndSynthesisSparsityAreEquivalent}Let $\alpha\in\left[0,1\right]$, $\varepsilon,p_{0}\in\left(0,1\right]$ and $s^{\left(0\right)}\geq0$ be arbitrary. Assume that $\varphi,\psi\in L^{1}\left(\mathbb{R}^{2}\right)$ satisfy the assumptions of Theorems \ref{thm:NicelySimplifiedUnconnectedAlphaShearletFrameConditions} and \ref{thm:ReallyNiceUnconnectedShearletAtomicDecompositionConditions} with $q_{0}=p_{0}$ and $s_{0}=0$, as well as $s_{1}=s^{\left(0\right)}+\left(1+\alpha\right)\left(p_{0}^{-1}-2^{-1}\right)$. For $\delta>0$, denote by ${\rm SH}_{\alpha}\left(\varphi,\psi;\delta\right)=\left(\gamma^{\left[v,k,\delta\right]}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}$ the $\alpha$-shearlet system generated by $\varphi,\psi$, as in Definition \ref{def:AlphaShearletSystem}. 
Then there is some $\delta_{0}\in\left(0,1\right]$ with the following property: For all $p\in\left[p_{0},2\right]$ and all $s\in\left[0,s^{\left(0\right)}\right]$, we have \begin{align*} \mathscr{S}_{\alpha,s+\left(1+\alpha\right)\left(p^{-1}-2^{-1}\right)}^{p,p}\left(\mathbb{R}^{2}\right) & =\left\{ f\in L^{2}\left(\mathbb{R}^{2}\right)\,\middle|\,\left(u_{v}^{s}\cdot\left\langle f,\,\smash{\gamma^{\left[v,k,\delta\right]}}\right\rangle _{L^{2}}\vphantom{\gamma^{\left[v,k\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\in\ell^{p}\left(V\times\mathbb{Z}^{2}\right)\right\} \\ & =\left\{ \sum_{\left(v,k\right)\in V\times\mathbb{Z}^{2}}c_{k}^{\left(v\right)}\cdot\gamma^{\left[v,k,\delta\right]}\,\middle|\,\left(u_{v}^{s}\cdot\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\in\ell^{p}\left(V\times\mathbb{Z}^{2}\right)\right\} , \end{align*} as long as $0<\delta\leq\delta_{0}$. Here, the weight $u=\left(u_{v}\right)_{v\in V}$ is as in Definition \ref{def:UnconnectedAlphaShearletCovering}, i.e., $u_{n,m,\delta}=2^{n}$ and $u_{0}=1$. In fact, for $f\in\mathscr{S}_{\alpha,s+\left(1+\alpha\right)\left(p^{-1}-2^{-1}\right)}^{p,p}\left(\mathbb{R}^{2}\right)$, we even have a (quasi)-norm equivalence \begin{align*} \left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s+\left(1+\alpha\right)\left(p^{-1}-2^{-1}\right)}^{p,p}} & \asymp\left\Vert \left(u_{v}^{s}\cdot\left\langle f,\,\smash{\gamma^{\left[v,k,\delta\right]}}\right\rangle _{L^{2}}\vphantom{\gamma^{\left[v,k,\delta\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}\\ & \asymp\inf\left\{ \!\left\Vert \left(u_{v}^{s}\cdot\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}\,\middle|\, f=\!\!\!\sum_{\left(v,k\right)\in V\times\mathbb{Z}^{2}}\!c_{k}^{\left(v\right)}\cdot\gamma^{\left[v,k,\delta\right]}\text{ with uncond. conv. in }L^{2}\left(\mathbb{R}^{2}\right)\right\} \!. \end{align*} In particular, $\mathscr{S}_{\alpha,s+\left(1+\alpha\right)\left(p^{-1}-2^{-1}\right)}^{p,p}\left(\mathbb{R}^{2}\right)\hookrightarrow L^{2}\left(\mathbb{R}^{2}\right)$ and ${\rm SH}_{\alpha}\left(\varphi,\psi;\delta\right)$ is a frame for $L^{2}\left(\mathbb{R}^{2}\right)$. \end{thm} \begin{rem*} As one advantage of the decomposition space point of view, we observe that $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ is \emph{easily} seen to be translation invariant, while this is not so easy to see in the characterization via analysis or synthesis sparsity in terms of a discrete $\alpha$-shearlet system. \end{rem*} \begin{proof} We start with a few preparatory definitions and observations. For brevity, we set \begin{equation} \left\Vert f\right\Vert _{\ast,p,s,\delta}:=\inf\left\{ \left\Vert \left(u_{v}^{s}\cdot\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}\,\middle|\, f=\!\!\sum_{\left(v,k\right)\in V\times\mathbb{Z}^{2}}c_{k}^{\left(v\right)}\cdot\gamma^{\left[v,k,\delta\right]}\text{ with uncond. conv. in }L^{2}\left(\mathbb{R}^{2}\right)\right\} \label{eq:AnalysisSynthesisSparsitySpecialNormDefinition} \end{equation} for $f\in\mathscr{S}_{\alpha,s+\left(1+\alpha\right)\left(p^{-1}-2^{-1}\right)}^{p,p}\left(\mathbb{R}^{2}\right)$ and $s\in\left[0,s^{\left(0\right)}\right]$, as well as $p\in\left[p_{0},2\right]$. 
Next, our assumptions entail that $\varphi,\psi$ satisfy the assumptions of Theorem \ref{thm:NicelySimplifiedUnconnectedAlphaShearletFrameConditions} (and thus equation \eqref{eq:ShearletFrameFourierDecayCondition}) for $s_{0}=0$ and $s_{1}=s^{\left(0\right)}+\left(1+\alpha\right)\left(p_{0}^{-1}-2^{-1}\right)\geq0$. But this implies (in the notation of Theorem \ref{thm:NicelySimplifiedAlphaShearletFrameConditions}) that $K,H,M_{2}\geq2+\varepsilon$. Hence, \[ \left(1+\left|\xi_{1}\right|\right)^{-\left(M_{2}+1\right)}\left(1+\left|\xi_{2}\right|\right)^{-\left(K+1\right)}\leq\left[\left(1+\left|\xi_{1}\right|\right)\left(1+\left|\xi_{2}\right|\right)\right]^{-\left(2+\varepsilon\right)}\leq\left(1+\left|\xi\right|\right)^{-\left(2+\varepsilon\right)}\in L^{1}\left(\mathbb{R}^{2}\right). \] Therefore, equation \eqref{eq:ShearletFrameFourierDecayCondition} entails $\widehat{\varphi},\widehat{\psi}\in L^{1}\left(\mathbb{R}^{2}\right)$, so that Fourier inversion yields $\varphi,\psi\in L^{1}\left(\mathbb{R}^{2}\right)\cap C_{0}\left(\mathbb{R}^{2}\right)\hookrightarrow L^{2}\left(\mathbb{R}^{2}\right)$. Consequently, $\gamma^{\left[v\right]}\in L^{1}\left(\mathbb{R}^{2}\right)\cap L^{2}\left(\mathbb{R}^{2}\right)$ for all $v\in V$, which will be important for our application of Lemma \ref{lem:SpecialConvolutionClarification} later in the proof. Finally, for $g:\mathbb{R}^{2}\to\mathbb{C}$, set $g^{\ast}:\mathbb{R}^{2}\to\mathbb{C},x\mapsto\overline{g\left(-x\right)}$. For $g\in L^{1}\left(\mathbb{R}^{2}\right)$, we then have $\widehat{g^{\ast}}\left(\xi\right)=\overline{\widehat{g}\left(\xi\right)}$ for all $\xi\in\mathbb{R}^{2}$. Therefore, in case of $g\in C^{1}\left(\mathbb{R}^{2}\right)$ with $g,\nabla g\in L^{1}\left(\mathbb{R}^{2}\right)\cap L^{\infty}\left(\mathbb{R}^{2}\right)$ and with $\widehat{g}\in C^{\infty}\left(\mathbb{R}^{2}\right)$, this implies that $g^{\ast}$ satisfies the same properties and that $\left|\partial^{\theta}\widehat{g^{\ast}}\right|=\left|\partial^{\theta}\widehat{g}\right|$ for all $\theta\in\mathbb{N}_{0}^{2}$. These considerations easily show that $\varphi^{\ast},\psi^{\ast}$ also satisfy the assumptions of Theorem \ref{thm:NicelySimplifiedUnconnectedAlphaShearletFrameConditions} (with $q_{0}=p_{0}$ and $s_{0}=0$, as well as $s_{1}=s^{\left(0\right)}+\left(1+\alpha\right)\left(p_{0}^{-1}-2^{-1}\right)$), since $\varphi,\psi$ do. Thus, Theorem \ref{thm:NicelySimplifiedUnconnectedAlphaShearletFrameConditions} yields a constant $\delta_{1}\in\left(0,1\right]$ such that the $\alpha$-shearlet system ${\rm SH}_{\alpha}\left(\overline{\varphi},\overline{\psi};\delta\right)={\rm SH}_{\alpha}\left(\smash{\widetilde{\varphi^{\ast}}},\smash{\widetilde{\psi^{\ast}}};\delta\right)$ forms a Banach frame for $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$, for all $p,q\in\left[p_{0},\infty\right]$ and all $s\in\mathbb{R}$ with $0\leq s\leq s^{\left(0\right)}+\left(1+\alpha\right)\left(p_{0}^{-1}-2^{-1}\right)$, as long as $0<\delta\leq\delta_{1}$. Likewise, Theorem \ref{thm:ReallyNiceUnconnectedShearletAtomicDecompositionConditions} yields a constant $\delta_{2}\in\left(0,1\right]$ such that ${\rm SH}_{\alpha}\left(\varphi,\psi;\delta\right)$ yields an atomic decomposition of $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ for the same range of parameters, as long as $0<\delta\leq\delta_{2}$. Now, let us set $\delta_{0}:=\min\left\{ \delta_{1},\delta_{2}\right\} \in\left(0,1\right]$.
\medskip{} Let $p\in\left[p_{0},2\right]$ and $s\in\left[0,s^{\left(0\right)}\right]$ be arbitrary and set $s^{\natural}:=s+\left(1+\alpha\right)\left(p^{-1}-2^{-1}\right)$. It is not hard to see directly from Definition \ref{def:CoefficientSpace}—and because of $\left|\det B_{v}\right|=u_{v}^{1+\alpha}$ for all $v\in V$—that the quasi-norm of the coefficient space $C_{u^{s^{\natural}}}^{p,p}$ satisfies \[ \left\Vert \left(\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,k\in\mathbb{Z}^{2}}\right\Vert _{C_{u^{s^{\natural}}}^{p,p}}=\left\Vert \left(\left|\det B_{v}\right|^{\frac{1}{2}-\frac{1}{p}}\cdot u_{v}^{s^{\natural}}\cdot\left\Vert \left(\smash{c_{k}^{\left(v\right)}}\right)_{k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}\right)_{v\in V}\right\Vert _{\ell^{p}}=\left\Vert \left(u_{v}^{s}\cdot\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}\in\left[0,\infty\right] \] for arbitrary sequences $\left(\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,k\in\mathbb{Z}^{2}}$, and $C_{u^{s^{\natural}}}^{p,p}$ contains exactly those sequences for which this (quasi)-norm is finite. Now, note because of $s\geq0$ and $p\leq2$ that $C_{u^{s^{\natural}}}^{p,p}\hookrightarrow\ell^{2}\left(V\times\mathbb{Z}^{2}\right)$, since $u_{v}\geq1$ for all $v\in V$ and since $\ell^{p}\hookrightarrow\ell^{2}$. Next, note that we have \[ s_{0}=0\leq s\leq s^{\natural}\leq s^{\left(0\right)}+\left(1+\alpha\right)\left(p_{0}^{-1}-2^{-1}\right)=s_{1}, \] so that ${\rm SH}_{\alpha}\left(\varphi,\psi;\delta\right)$ forms an atomic decomposition of $\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$ for all $0<\delta\leq\delta_{0}$. This means that the synthesis operator \[ S^{\left(\delta\right)}:C_{u^{s^{\natural}}}^{p,p}\to\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right),\left(\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\mapsto\sum_{\left(v,k\right)\in V\times\mathbb{Z}^{2}}c_{k}^{\left(v\right)}\cdot\gamma^{\left[v,k,\delta\right]} \] is well-defined and bounded with unconditional convergence of the series in $\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$. This implicitly uses that the synthesis operator $S^{\left(\delta\right)}$ as defined in Theorem \ref{thm:ReallyNiceShearletAtomicDecompositionConditions} is bounded and satisfies $S^{\left(\delta\right)}\left(\delta_{v,k}\right)=\gamma^{\left[v,k,\delta\right]}$ for all $\left(v,k\right)\in V\times\mathbb{Z}^{2}$ and that we have $c=\left(\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}=\sum_{\left(v,k\right)\in V\times\mathbb{Z}^{2}}c_{k}^{\left(v\right)}\cdot\delta_{v,k}$ for all $c\in C_{u^{s^{\natural}}}^{p,p}$, with unconditional convergence in $C_{u^{s^{\natural}}}^{p,p}$, since $p\leq2<\infty$. 
This immediately yields \begin{equation} \Omega_{1}:=\left\{ \sum_{\left(v,k\right)\in V\times\mathbb{Z}^{2}}c_{k}^{\left(v\right)}\cdot\gamma^{\left[v,k,\delta\right]}\,\middle|\,\left(u_{v}^{s}\cdot\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\in\ell^{p}\left(V\times\mathbb{Z}^{2}\right)\right\} ={\rm range}\left(\smash{S^{\left(\delta\right)}}\right)\subset\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right).\label{eq:AnalysisSynthesisSparsityEquivalentOmegaDefinition} \end{equation} Further, if $f\in\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$ and if $c=\left(\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}$ is an arbitrary sequence satisfying $f=\sum_{\left(v,k\right)\in V\times\mathbb{Z}^{2}}c_{k}^{\left(v\right)}\cdot\gamma^{\left[v,k,\delta\right]}$ with unconditional convergence in $L^{2}\left(\mathbb{R}^{2}\right)$, there are two cases: \begin{casenv} \item We have $\left\Vert \left(u_{v}^{s}\cdot\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}=\infty$. In this case, $\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)}\leq\vertiii{\smash{S^{\left(\delta\right)}}}\cdot\left\Vert \left(u_{v}^{s}\cdot\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}$ is trivial. \item We have $\left\Vert \left(u_{v}^{s}\cdot\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}<\infty$. In this case, we get $c\in C_{u^{s^{\natural}}}^{p,p}$ and $f=S^{\left(\delta\right)}c$. Therefore, we see $\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)}\leq\vertiii{\smash{S^{\left(\delta\right)}}}\cdot\left\Vert c\right\Vert _{C_{u^{s^{\natural}}}^{p,p}}=\vertiii{\smash{S^{\left(\delta\right)}}}\cdot\left\Vert \left(u_{v}^{s}\cdot\smash{c_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}$. \end{casenv} All in all, we have thus established \[ \left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)}\leq\vertiii{\smash{S^{\left(\delta\right)}}}\cdot\left\Vert f\right\Vert _{\ast,p,s,\delta}\qquad\forall f\in\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right). \] Next, note that the considerations from the preceding paragraph with the choice $p=2$ and $s=0$ also show that $S^{\left(\delta\right)}:\ell^{2}\left(V\times\mathbb{Z}^{2}\right)\to\mathscr{S}_{\alpha,0}^{2,2}\left(\mathbb{R}^{2}\right)$ is well-defined and bounded. But \cite[Lemma 6.10]{DecompositionEmbedding} yields $\mathscr{S}_{\alpha,0}^{2,2}\left(\mathbb{R}^{2}\right)=L^{2}\left(\mathbb{R}^{2}\right)$ with equivalent norms. Since we saw above that $C_{u^{s^{\natural}}}^{p,p}\hookrightarrow\ell^{2}\left(V\times\mathbb{Z}^{2}\right)$ for all $p\leq2$ and $s\geq0$, this implies in particular that the series defining $S^{\left(\delta\right)}c$ converges unconditionally in $L^{2}\left(\mathbb{R}^{2}\right)$ for arbitrary $c\in C_{u^{s^{\natural}}}^{p,p}$, for arbitrary $s\in\left[0,s^{\left(0\right)}\right]$ and $p\in\left[p_{0},2\right]$. 
But from the atomic decomposition property of ${\rm SH}_{\alpha}\left(\varphi,\psi;\delta\right)$, we also know that there is a bounded coefficient operator $C^{\left(\delta\right)}:\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)\to C_{u^{s^{\natural}}}^{p,p}$ satisfying $S^{\left(\delta\right)}\circ C^{\left(\delta\right)}=\operatorname{id}_{\mathscr{S}_{\alpha,s^{\natural}}^{p,p}}$. Thus, for arbitrary $f\in\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$ and $e=\left(e_{v,k}\right)_{v\in V,k\in\mathbb{Z}^{2}}:=C^{\left(\delta\right)}f\in C_{u^{s^{\natural}}}^{p,p}$, we have $f=S^{\left(\delta\right)}e=\sum_{\left(v,k\right)\in V\times\mathbb{Z}^{2}}e_{k}^{\left(v\right)}\cdot\gamma^{\left[v,k,\delta\right]}\in\Omega_{1}$, where the series converges unconditionally in $L^{2}\left(\mathbb{R}^{2}\right)$ (and in $\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$). In particular, we get \[ \left\Vert f\right\Vert _{\ast,p,s,\delta}\leq\left\Vert \left(u_{v}^{s}\cdot\smash{e_{k}^{\left(v\right)}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}=\left\Vert e\right\Vert _{C_{u^{s^{\natural}}}^{p,p}}\leq\vertiii{\smash{C^{\left(\delta\right)}}}\cdot\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s^{\natural}}^{p,p}}<\infty, \] as well as \begin{align*} \left\Vert f\right\Vert _{L^{2}\left(\mathbb{R}^{2}\right)} & \lesssim\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,0}^{2,2}}\leq\vertiii{\smash{S^{\left(\delta\right)}}}_{\ell^{2}\to\mathscr{S}_{\alpha,0}^{2,2}}\cdot\left\Vert e\right\Vert _{C_{u^{0}}^{2,2}}=\vertiii{\smash{S^{\left(\delta\right)}}}_{\ell^{2}\to\mathscr{S}_{\alpha,0}^{2,2}}\cdot\left\Vert e\right\Vert _{\ell^{2}}\\ & \leq\vertiii{\smash{S^{\left(\delta\right)}}}_{\ell^{2}\to\mathscr{S}_{\alpha,0}^{2,2}}\cdot\left\Vert e\right\Vert _{C_{u^{s^{\natural}}}^{p,p}}\leq\vertiii{\smash{S^{\left(\delta\right)}}}_{\ell^{2}\to\mathscr{S}_{\alpha,0}^{2,2}}\cdot\vertiii{\smash{C^{\left(\delta\right)}}}\cdot\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s^{\natural}}^{p,p}}<\infty \end{align*} for all $f\in\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$. Up to now, we have thus shown $\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)=\Omega_{1}$ (with $\Omega_{1}$ as in equation \eqref{eq:AnalysisSynthesisSparsityEquivalentOmegaDefinition}) and $\left\Vert f\right\Vert _{\ast,p,s,\delta}\asymp\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s^{\natural}}^{p,p}}$ for all $f\in\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$, with $\left\Vert f\right\Vert _{\ast,p,s,\delta}$ as in equation \eqref{eq:AnalysisSynthesisSparsitySpecialNormDefinition}. Finally, we have also shown $\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)\hookrightarrow L^{2}\left(\mathbb{R}^{2}\right)$. 
\medskip{} Thus, it remains to show \[ \Omega_{2}:=\left\{ f\in L^{2}\left(\mathbb{R}^{2}\right)\,\middle|\,\left(u_{v}^{s}\cdot\left\langle f,\,\smash{\gamma^{\left[v,k,\delta\right]}}\right\rangle _{L^{2}}\vphantom{\gamma^{\left[v,k,\delta\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\in\ell^{p}\left(V\times\mathbb{Z}^{2}\right)\right\} \overset{!}{=}\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right), \] as well as $\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s^{\natural}}^{p,p}}\asymp\left\Vert \left(u_{v}^{s}\cdot\left\langle f,\,\smash{\gamma^{\left[v,k,\delta\right]}}\right\rangle _{L^{2}}\vphantom{\gamma^{\left[v,k,\delta\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}$ for $f\in\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$. But Theorem \ref{thm:NicelySimplifiedUnconnectedAlphaShearletFrameConditions} (applied with $\varphi^{\ast},\psi^{\ast}$ instead of $\varphi,\psi$, see above) shows that the analysis operator \[ A^{\left(\delta\right)}:\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)\to C_{u^{s^{\natural}}}^{p,p},f\mapsto\left[\left(\smash{\varrho^{\left[v\right]}}\ast f\right)\left(\delta\cdot B_{v}^{-T}k\right)\right]_{v\in V,\,k\in\mathbb{Z}^{2}} \] is well-defined and bounded, where (cf.\@ Theorem \ref{thm:NicelySimplifiedAlphaShearletFrameConditions}) the family $\left(\varrho^{\left[v\right]}\right)_{v\in V}$ is given by $\varrho^{\left[v\right]}=\left|\det B_{v}\right|^{1/2}\cdot\left(\psi^{\ast}\circ B_{v}^{T}\right)$ for $v\in V_{0}$ and by $\varrho^{\left[0\right]}=\varphi^{\ast}$. Note that this yields $\widetilde{\varrho^{\left[v\right]}}=\overline{\gamma^{\left[v\right]}}$, where the family $\left(\gamma^{\left[v\right]}\right)_{v\in V}$ is as in Definition \ref{def:AlphaShearletSystem}. Now, since we already showed $\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)\hookrightarrow L^{2}\left(\mathbb{R}^{2}\right)$ and since $\varrho^{\left[v\right]}\in L^{1}\left(\mathbb{R}^{2}\right)\cap L^{2}\left(\mathbb{R}^{2}\right)$ for all $v\in V$, as we saw at the start of the proof, Lemma \ref{lem:SpecialConvolutionClarification} yields \[ \left(\smash{\varrho^{\left[v\right]}\ast f}\right)\left(\delta\cdot B_{v}^{-T}k\right)=\left\langle f,\,L_{\delta\cdot B_{v}^{-T}k}\:\widetilde{\varrho^{\left[v\right]}}\right\rangle =\left\langle f,\,L_{\delta\cdot B_{v}^{-T}k}\:\overline{\gamma^{\left[v\right]}}\right\rangle =\left\langle f,\,L_{\delta\cdot B_{v}^{-T}k}\:\gamma^{\left[v\right]}\right\rangle _{L^{2}}=\left\langle f,\,\gamma^{\left[v,k,\delta\right]}\right\rangle _{L^{2}} \] for all $f\in\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$ and $\left(v,k\right)\in V\times\mathbb{Z}^{2}$. We thus see $\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)\subset\Omega_{2}$ and \[ \left\Vert \left(u_{v}^{s}\cdot\left\langle f,\,\smash{\gamma^{\left[v,k,\delta\right]}}\right\rangle _{L^{2}}\vphantom{\gamma^{\left[v,k,\delta\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}=\left\Vert \smash{A^{\left(\delta\right)}}f\right\Vert _{C_{u^{s^{\natural}}}^{p,p}}\leq\vertiii{\smash{A^{\left(\delta\right)}}}\cdot\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s^{\natural}}^{p,p}}\qquad\forall f\in\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right).
\] Conversely, let $f\in\Omega_{2}$ be arbitrary, i.e., $f\in L^{2}\left(\mathbb{R}^{2}\right)$ with $\left(u_{v}^{s}\cdot\left\langle f,\,\smash{\gamma^{\left[v,k,\delta\right]}}\right\rangle _{L^{2}}\vphantom{\gamma^{\left[v,k,\delta\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\in\ell^{p}\left(V\times\mathbb{Z}^{2}\right)$. This means $f\in L^{2}\left(\mathbb{R}^{2}\right)=\mathscr{S}_{\alpha,0}^{2,2}\left(\mathbb{R}^{2}\right)$ and $\left[\left(\varrho^{\left[v\right]}\ast f\right)\left(\delta\cdot B_{v}^{-T}k\right)\right]_{v\in V,\,k\in\mathbb{Z}^{2}}\in C_{u^{s^{\natural}}}^{p,p}$, again by Lemma \ref{lem:SpecialConvolutionClarification}. Thus, the consistency statement of Theorem \ref{thm:NicelySimplifiedAlphaShearletFrameConditions} shows $f\in\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$. Therefore, $f=R^{\left(\delta\right)}A^{\left(\delta\right)}f$ for the reconstruction operator $R^{\left(\delta\right)}:C_{u^{s^{\natural}}}^{p,p}\to\mathscr{S}_{\alpha,s^{\natural}}^{p,p}\left(\mathbb{R}^{2}\right)$ that is provided by Theorem \ref{thm:NicelySimplifiedUnconnectedAlphaShearletFrameConditions} (applied with $\varphi^{\ast},\psi^{\ast}$ instead of $\varphi,\psi$). Thus, \[ \left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s^{\natural}}^{p,p}}\leq\vertiii{\smash{R^{\left(\delta\right)}}}\cdot\left\Vert \smash{A^{\left(\delta\right)}}f\right\Vert _{C_{u^{s^{\natural}}}^{p,p}}=\vertiii{\smash{R^{\left(\delta\right)}}}\cdot\left\Vert \left(u_{v}^{s}\cdot\left\langle f,\,\smash{\gamma^{\left[v,k,\delta\right]}}\right\rangle _{L^{2}}\vphantom{\gamma^{\left[v,k,\delta\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}. \] If we apply the preceding considerations for $s=0$ and $p=2$, we in particular get \[ \left\Vert f\right\Vert _{L^{2}}\asymp\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,0}^{2,2}}\asymp\left\Vert \left(\left\langle f,\,\smash{\gamma^{\left[v,k,\delta\right]}}\right\rangle _{L^{2}}\vphantom{\gamma^{\left[v,k,\delta\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{2}}\qquad\forall f\in L^{2}\left(\mathbb{R}^{2}\right)=\mathscr{S}_{\alpha,0}^{2,2}\left(\mathbb{R}^{2}\right), \] which implies that the $\alpha$-shearlet system ${\rm SH}_{\alpha}\left(\varphi,\psi;\delta\right)=\left(\gamma^{\left[v,k,\delta\right]}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}$ is a frame for $L^{2}\left(\mathbb{R}^{2}\right)$. \end{proof} \section{Approximation of cartoon-like functions using \texorpdfstring{$\alpha$}{α}-shearlets} \label{sec:CartoonLikeApproximation}One of the most celebrated properties of shearlet systems is that they provide (almost) optimal \emph{approximation rates} for the model class $\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right)$ of \textbf{cartoon-like functions}, which we introduce formally in Definition \ref{def:CartoonLikeFunction} below. More precisely, this means (cf.\@ \cite[Theorem 1.3]{CompactlySupportedShearletsAreOptimallySparse} for the case of compactly supported shearlets) that \begin{equation} \left\Vert f-\smash{f^{\left(N\right)}}\right\Vert _{L^{2}}\leq C\cdot N^{-1}\cdot\left(1+\log N\right)^{3/2}\qquad\forall N\in\mathbb{N}\text{ and }f\in\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right),\label{eq:UsualShearletApproximation} \end{equation} where $f^{\left(N\right)}$ is the so-called \textbf{$N$-term approximation of $f$}. 
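To get a rough quantitative feeling for this bound, the following short Python snippet tabulates the right-hand side of equation \eqref{eq:UsualShearletApproximation} for several values of $N$. This is purely illustrative: the constant $C$ is unknown in general and simply set to $1$ here, and we use the natural logarithm, which only affects the constant. The output shows that the logarithmic factor only mildly dampens the dominant decay $N^{-1}$.

\begin{verbatim}
import math

# Evaluate N^(-1) * (1 + log N)^(3/2), i.e., the right-hand side of
# the approximation bound, with the (unknown) constant C set to 1.
for N in (10, 10**2, 10**3, 10**4, 10**5):
    bound = (1 + math.log(N)) ** 1.5 / N
    print(f"N = {N:>6}:  N^(-1) * (1 + log N)^(3/2) = {bound:.3e}")
\end{verbatim}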
The exact interpretation of this $N$-term approximation, however, requires some explanation, as was briefly discussed in the introduction: In general, given a dictionary $\Psi=\left(\psi_{i}\right)_{i\in I}$ in a Hilbert space $\mathcal{H}$ (which is assumed to satisfy $\overline{{\rm span}\left\{ \psi_{i}\,\middle|\, i\in I\right\} }=\mathcal{H}$), we let \begin{equation} \mathcal{H}_{\Psi}^{\left(N\right)}:=\left\{ \sum_{i\in J}\alpha_{i}\psi_{i}\,\middle|\, J\subset I\text{ with }\left|J\right|\leq N\text{ and }\left(\alpha_{i}\right)_{i\in J}\in\mathbb{C}^{J}\right\} \label{eq:NElementsLinearCombinationSpaceDefinition} \end{equation} denote the sub\emph{set} (which is in general \emph{not} a subspace) of $\mathcal{H}$ consisting of linear combinations of (at most) $N$ elements of $\Psi$. The usual definition of a (in general non-unique) best \textbf{$N$-term approximation} to $f\in\mathcal{H}$ is any $f_{\Psi}^{\left(N\right)}\in\mathcal{H}_{\Psi}^{\left(N\right)}$ satisfying \[ \left\Vert f-\smash{f_{\Psi}^{\left(N\right)}}\right\Vert =\inf_{g\in\mathcal{H}_{\Psi}^{\left(N\right)}}\left\Vert f-g\right\Vert . \] This definition is given for example in \cite[Section 3.1]{ShearletsAndOptimallySparseApproximation}. Note, however, that in general, it is not clear whether such a best $N$-term approximation exists. But regardless of whether a best $N$-term approximation exists or not, we can always define the \textbf{$N$-term approximation error} as \begin{equation} \alpha_{\Psi}^{\left(N\right)}\left(f\right):=\inf_{g\in\mathcal{H}_{\Psi}^{\left(N\right)}}\left\Vert f-g\right\Vert .\label{eq:GeneralNTermApproximationErrorDefinition} \end{equation} All in all, the goal of (nonlinear) $N$-term approximations is to approximate an element $f\in\mathcal{H}$ using only a fixed number of elements \emph{from the dictionary $\Psi$}. Thus, when one reads the usual statement that \emph{shearlets provide (almost) optimal $N$-term approximation rates for cartoon-like functions}, one could be tempted to think that equation \eqref{eq:UsualShearletApproximation} has to be understood as \begin{equation} \alpha_{\Psi}^{\left(N\right)}\left(f\right)\leq C\cdot N^{-1}\cdot\left(1+\log N\right)^{3/2}\qquad\forall N\in\mathbb{N}\text{ and }f\in\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right),\label{eq:UsualShearletApproximationFormal} \end{equation} where the dictionary \emph{$\Psi$ is a (suitable) shearlet system}. This, however, is \emph{not} what is shown e.g.\@ in \cite{ShearletsAndOptimallySparseApproximation}. What is shown there, instead, is that if $\widetilde{\Psi}=\left(\smash{\widetilde{\psi_{i}}}\right)_{i\in I}$ denotes \emph{the (canonical) dual frame} (in fact, any dual frame will do) of a suitable shearlet system $\Psi$, then we have \[ \alpha_{\widetilde{\Psi}}^{\left(N\right)}\left(f\right)\leq C\cdot N^{-1}\cdot\left(1+\log N\right)^{3/2}\qquad\forall N\in\mathbb{N}\text{ and }f\in\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right). \] This approximation rate \emph{using the dual frame} $\widetilde{\Psi}$ is not completely satisfactory, since for non-tight shearlet systems $\Psi$, the properties of $\widetilde{\Psi}$ (like smoothness, decay, etc.) are largely unknown. Note that there is no known construction of a \emph{tight, compactly supported} cone-adapted shearlet frame.
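To make the notions from equations \eqref{eq:NElementsLinearCombinationSpaceDefinition} and \eqref{eq:GeneralNTermApproximationErrorDefinition} concrete, the following minimal Python sketch computes a best $N$-term approximation in the simplest possible setting, namely for an \emph{orthonormal} dictionary of $\mathbb{R}^{d}$ (obtained here from a QR decomposition of a random matrix; the choices $d=256$ and $N=20$ are arbitrary). We stress that this setting is chosen purely for illustration: for an orthonormal basis, keeping the $N$ largest analysis coefficients realizes a best $N$-term approximation, whereas for a general (non-tight) frame, this greedy truncation need not be optimal.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, N = 256, 20

# Orthonormal dictionary (psi_i)_{i < d} of R^d: the columns of Q.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
f = rng.standard_normal(d)

coeffs = Q.T @ f                       # analysis coefficients <f, psi_i>
idx = np.argsort(-np.abs(coeffs))[:N]  # indices of the N largest ones
f_N = Q[:, idx] @ coeffs[idx]          # an element of H_Psi^(N)

# For an orthonormal basis, the N-term approximation error
# alpha_Psi^(N)(f) equals the l^2 norm of the discarded coefficients.
print(np.linalg.norm(f - f_N))
print(np.linalg.norm(np.delete(coeffs, idx)))
\end{verbatim}

For a redundant frame $\Psi$, truncating the \emph{analysis} coefficients in this way produces an $N$-term approximation by elements of the \emph{dual} frame $\widetilde{\Psi}$ rather than of $\Psi$ itself, which is exactly the distinction that we discuss next.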
Furthermore, to our knowledge, there is—up to now—nothing nontrivial\footnote{Of course, one knows $\alpha_{\Psi}^{\left(N\right)}\left(f\right)\to0$ as $N\to\infty$, but this holds for every $f\in L^{2}\left(\mathbb{R}^{2}\right)$ and every frame $\Psi$ of $L^{2}\left(\mathbb{R}^{2}\right)$.} known about $\alpha_{\Psi}^{\left(N\right)}\left(f\right)$ for $f\in\mathcal{E}^{2}\left(\mathbb{R}^{2}\right)$ in the case that $\Psi$ is itself a shearlet system, unless $\Psi$ is a \emph{tight} shearlet frame. \medskip{} This difference between approximation using the primal and the dual frame is essentially a difference between analysis and synthesis sparsity: The usual proof strategy to obtain the approximation rate with respect to the \emph{dual} frame is to show that the analysis coefficients $\left(\left\langle f,\,\psi_{i}\right\rangle \right)_{i\in I}$ are \emph{sparse} in the sense that they lie in some (weak) $\ell^{p}$ space. Then one uses the reconstruction formula \[ f=\sum_{i\in I}\left\langle f,\,\psi_{i}\right\rangle \widetilde{\psi_{i}}\qquad\text{ using the dual frame }\widetilde{\Psi}=\left(\smash{\widetilde{\psi_{i}}}\right)_{i\in I} \] and truncates this series to the $N$ terms with the largest coefficients $\left|\left\langle f,\,\psi_{i}\right\rangle \right|$. Using the sparsity of the coefficients, one then obtains the claim. In other words, since the analysis coefficients with respect to $\Psi=\left(\psi_{i}\right)_{i\in I}$ are the synthesis coefficients with respect to $\widetilde{\Psi}$, analysis sparsity with respect to $\Psi$ yields synthesis sparsity with respect to $\widetilde{\Psi}$. Conversely, analysis sparsity with respect to $\widetilde{\Psi}$ yields synthesis sparsity with respect to $\Psi$ itself. But since only limited knowledge about $\widetilde{\Psi}$ is available, this fact is essentially impossible to apply. \medskip{} But our preceding results concerning Banach frames and atomic decompositions for ($\alpha$)-shearlet smoothness spaces show that \emph{analysis sparsity is equivalent to synthesis sparsity} (cf.\@ Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}) for sufficiently nice and sufficiently densely sampled $\alpha$-shearlet frames. Using this fact, we will show in this section that we indeed have \[ \alpha_{\Psi}^{\left(N\right)}\left(f\right)\leq C_{\varepsilon}\cdot N^{-\left(1-\varepsilon\right)}\qquad\forall N\in\mathbb{N}\text{ and }f\in\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right), \] where $\varepsilon\in\left(0,1\right)$ can be chosen arbitrarily and where $\Psi$ is a (suitable) shearlet frame. In fact, we will also obtain a corresponding statement for $\alpha$-shearlet frames. Note though that the approximation rate $N^{-\left(1-\varepsilon\right)}$ is slightly inferior to the rate of decay in equation \eqref{eq:UsualShearletApproximationFormal}. Nevertheless—to the best of our knowledge—this is still the best result on approximating cartoon-like functions \emph{by shearlets} (instead of using the \emph{dual frame} of a shearlet frame) which is known. Our proof strategy is straightforward: The known \emph{analysis-sparsity} results, in conjunction with our results about Banach frames for shearlet smoothness spaces, show that $\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right)$ is a bounded subset of a certain range of shearlet smoothness spaces. Thus, using our results about atomic decompositions for these shearlet smoothness spaces, we get \emph{synthesis sparsity} with respect to the (primal(!))\@ shearlet frame. 
We then truncate this (quickly decaying) series to obtain a good $N$-term approximation. \medskip{} We begin our considerations by recalling the notion of $C^{\beta}$-cartoon-like functions, which were originally introduced (in a preliminary form) in \cite{DonohoSparseComponentsOfImages}.\pagebreak{} \begin{defn} \label{def:CartoonLikeFunction}Fix parameters $0<\varrho_{0}<\varrho_{1}<1$ once and for all. \begin{itemize}[leftmargin=0.6cm] \item For $\nu>0$ and $\beta\in\left(1,2\right]$, the set ${\rm STAR}^{\beta}\left(\nu\right)$ is the family of all subsets $\mathcal{B}\subset\left[0,1\right]^{2}$ for which there is some $x_{0}\in\mathbb{R}^{2}$ and a $2\pi$-periodic function $\varrho:\mathbb{R}\to\left[\varrho_{0},\varrho_{1}\right]$ with $\varrho\in C^{\beta}\left(\mathbb{R}\right)$ such that \[ \mathcal{B}-x_{0}=\left\{ r\cdot\left(\begin{matrix}\cos\phi\\ \sin\phi \end{matrix}\right)\,\middle|\,\phi\in\left[0,2\pi\right]\text{ and }0\leq r\leq\varrho\left(\phi\right)\right\} \] and such that the $\beta-1$ \textbf{Hölder semi-norm} $\left[\varrho'\right]_{\beta-1}=\sup_{\phi,\varphi\in\mathbb{R},\phi\neq\varphi}\smash{\frac{\left|\varrho'\left(\phi\right)-\varrho'\left(\varphi\right)\right|}{\left|\phi-\varphi\right|^{\beta-1}}}\vphantom{\sum^{m}}$ satisfies $\left[\varrho'\right]_{\beta-1}\leq\nu$. \item For $\nu>0$ and $\beta\in\left(1,2\right]$, the class $\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right)$ of \textbf{cartoon-like functions with regularity $\beta$} is defined as \[ \qquad\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right):=\left\{ f_{1}+{\mathds{1}}_{\mathcal{B}}\cdot f_{2}\,\middle|\,\vphantom{C^{\gamma}}\mathcal{B}\in\smash{{\rm STAR}^{\beta}\left(\nu\right)}\text{ and }f_{i}\in\smash{C_{c}^{\beta}}\left(\smash{\left[0,1\right]^{2}}\right)\text{ with }\left\Vert f_{i}\right\Vert _{C^{\beta}}\leq\min\left\{ 1,\nu\right\} \text{ for }i\in\underline{2}\right\} , \] where $\left\Vert f\right\Vert _{C^{\beta}}=\left\Vert f\right\Vert _{\sup}+\left\Vert \nabla f\right\Vert _{\sup}+\left[\nabla f\right]_{\beta-1}$ and $\left[g\right]_{\beta-1}=\sup_{x,y\in\mathbb{R}^{2},x\neq y}\frac{\left|g\left(x\right)-g\left(y\right)\right|}{\left|x-y\right|^{\beta-1}}$ for $g:\mathbb{R}^{2}\to\mathbb{C}^{\ell}$, as well as \[ C_{c}^{\beta}\left(\smash{\left[0,1\right]^{2}}\right)=\left\{ f\in\smash{C^{\left\lfloor \beta\right\rfloor }}\left(\mathbb{R}^{2}\right)\,\middle|\,\operatorname{supp} f\subset\left[0,1\right]^{2}\text{ and }\left\Vert f\right\Vert _{C^{\beta}}<\infty\right\} . \] Finally, we set $\mathcal{E}^{\beta}\left(\mathbb{R}^{2}\right):=\bigcup_{\nu>0}\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right)$.\qedhere \end{itemize} \end{defn} \begin{rem*} The definition of ${\rm STAR}^{\beta}\left(\nu\right)$ given here is slightly more conservative than in \cite[Definition 2.5]{CartoonApproximationWithAlphaCurvelets}, where it is only assumed that $\varrho:\mathbb{R}\to\left[0,\varrho_{1}\right]$ with $0<\varrho_{1}<1$, instead of $\varrho:\mathbb{R}\to\left[\varrho_{0},\varrho_{1}\right]$. We also note that $\left[\varrho'\right]_{\beta-1}=\left\Vert \varrho''\right\Vert _{\sup}$ in case of $\beta=2$. This is a simple consequence of the definition of the derivative and of the mean-value theorem. Hence, in case of $\beta=2$, the definition given here is consistent with (in fact, slightly stronger than) the one used in \cite[Definition 1.1]{ShearletsAndOptimallySparseApproximation}. 
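For concreteness, the following minimal sketch rasterizes one candidate element $f=f_{1}+{\mathds{1}}_{\mathcal{B}}\cdot f_{2}$ of $\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right)$ on a pixel grid; every concrete choice below (the radius function $\varrho$, the two bumps, the resolution) is hypothetical and purely illustrative, and we make no attempt to verify the precise constants $\varrho_{0},\varrho_{1},\nu$ from Definition \ref{def:CartoonLikeFunction}.
\begin{verbatim}
# Minimal sketch: rasterize a candidate cartoon-like function
# f = f1 + 1_B * f2, where B is star-shaped w.r.t. x0 = (0.5, 0.5)
# with a smooth 2*pi-periodic radius function rho (values in [0.3, 0.4]).
import numpy as np

def rho(phi):                       # C^infinity and 2*pi-periodic
    return 0.35 + 0.05 * np.cos(3 * phi)

def bump(x, y, cx, cy, r):          # smooth bump supported in a disk of radius r
    q = ((x - cx) ** 2 + (y - cy) ** 2) / r ** 2
    out = np.zeros_like(q)
    inside = q < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - q[inside]))
    return out

npix = 512
xs = np.linspace(0.0, 1.0, npix)
x, y = np.meshgrid(xs, xs)
r = np.hypot(x - 0.5, y - 0.5)
phi = np.arctan2(y - 0.5, x - 0.5)
ind_B = (r <= rho(phi)).astype(float)        # indicator function of B
f = bump(x, y, 0.5, 0.5, 0.45) + ind_B * bump(x, y, 0.55, 0.5, 0.4)
\end{verbatim}
The jump of $f$ across the smooth boundary curve $\partial\mathcal{B}$ is the feature which makes this class a natural test bed for directional systems such as the $\alpha$-shearlets considered here.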
Further, we note that in \cite[Definition 5.9]{AlphaMolecules}, the class $\mathcal{E}^{\beta}\left(\mathbb{R}^{2}\right)$ is simply defined as \[ \left\{ f_{1}+{\mathds{1}}_{B}\cdot f_{2}\,\middle|\, f_{1},f_{2}\in C_{c}^{\beta}\left(\smash{\left[0,1\right]^{2}}\right),\:B\subset\left[0,1\right]^{2}\text{ Jordan dom. with regular closed piecewise }C^{\beta}\text{ boundary curve}\right\} . \] Even for this—much more general—definition, the authors of \cite{AlphaMolecules} then invoke the results which are derived in \cite{CartoonApproximationWithAlphaCurvelets} under the more restrictive assumptions. This is somewhat unpleasant, but does not need to concern us: In fact, in the following, we will frequently use the notation $\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right)$, but the precise definition of this space is not really used; all that we need to know is that if $\varphi,\psi$ are suitable shearlet generators, then the $\beta$-shearlet coefficients $c=\left(c_{j,k,\varepsilon,m}\right)_{j,k,\varepsilon,m}$ of $f\in\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right)$ satisfy $c\in\ell^{\frac{2}{1+\beta}+\varepsilon}$ for all $\varepsilon>0$, with $\left\Vert c\right\Vert _{\ell^{\frac{2}{1+\beta}+\varepsilon}}\leq C_{\varepsilon,\nu,\beta,\varphi,\psi}$. Below, we will derive this by combining \cite[Theorem 4.2]{CartoonApproximationWithAlphaCurvelets} with \cite[Theorem 5.6]{AlphaMolecules}, where \cite[Theorem 5.6]{AlphaMolecules} does not use the notion of cartoon-like functions at all. \end{rem*} As our first main technical result in this section, we show that the class of $C^{\beta}$-cartoon-like functions is a bounded subset of suitably chosen $\alpha$-shearlet smoothness spaces. Once we have established this property, we obtain the claimed approximation rate by invoking the atomic decomposition results from Theorem \ref{thm:ReallyNiceUnconnectedShearletAtomicDecompositionConditions}. \begin{prop} \label{prop:CartoonLikeFunctionsBoundedInAlphaShearletSmoothness}Let $\nu>0$ and $\beta\in\left(1,2\right]$ be arbitrary and let $p\in\left(2/\left(1+\beta\right),\:2\right]$. Then \[ \mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right)\quad\text{ is a bounded subset of }\quad\mathscr{S}_{\beta^{-1},\left(1+\beta^{-1}\right)\left(p^{-1}-2^{-1}\right)}^{p,p}\left(\mathbb{R}^{2}\right).\qedhere \] \end{prop} \begin{proof} Here, we only give the proof for the case $\beta=2$. For $\beta\in\left(1,2\right)$, the proof is more involved and thus postponed to the appendix (Section \ref{sec:CartoonLikeFunctionsAreBoundedInAlphaShearletSmoothness}). The main reason for the additional complications in case of $\beta\in\left(1,2\right)$ is that our proof essentially requires that we already know that there is some sufficiently nice, cone-adapted $\alpha$-shearlet system with respect to which the $C^{\beta}$-cartoon-like functions are analysis sparse (in a suitable ``almost $\ell^{2/\left(1+\beta\right)}$'' sense). In case of $\beta=2$, this is known, since we then have $\alpha=\beta^{-1}=\frac{1}{2}$, so that the $\alpha$-shearlet systems from Definition \ref{def:AlphaShearletSystem} coincide with the usual cone-adapted shearlets, cf.\@ Remark \ref{rem:AlphaShearletsYieldUsualShearlets}. 
But in case of $\beta\in\left(1,2\right)$, it is only known (cf.\@ \cite[Theorem 5.6]{AlphaMolecules}) that $C^{\beta}$-cartoon-like functions are analysis sparse with respect to suitable \textbf{$\beta$-shearlet systems} (cf.\@ Definition \ref{def:BetaShearletSystem} and note $\beta\notin\left[0,1\right]$, so that the notion of $\beta$-shearlets does not collide with our notion of $\alpha$-shearlets for $\alpha\in\left[0,1\right]$) which are different from, but closely related to the $\beta^{-1}$-shearlet systems from Definition \ref{def:AlphaShearletSystem}. Making this close connection precise is what mainly makes the proof in case of $\beta\in\left(1,2\right)$ more involved, cf.\@ Section \ref{sec:CartoonLikeFunctionsAreBoundedInAlphaShearletSmoothness}. Thus, let us consider the case $\beta=2$. Choose $\phi_{0}\in\TestFunctionSpace{\mathbb{R}}$ with $\phi_{0}\geq0$ and $\phi_{0}\not\equiv0$, so that $\widehat{\phi_{0}}\left(0\right)=\left\Vert \phi_{0}\right\Vert _{L^{1}}>0$. By continuity of $\widehat{\phi_{0}}$, there is thus some $\nu_{0}>0$ with $\widehat{\phi_{0}}\left(\xi\right)\neq0$ on $\left[-\nu_{0},\nu_{0}\right]$. Now, define $\phi_{1}:=\phi_{0}\left(3\bullet/\nu_{0}\right)$ and note that $\phi_{1}\in\TestFunctionSpace{\mathbb{R}}$ with $\widehat{\phi_{1}}\left(\xi\right)=\frac{\nu_{0}}{3}\cdot\widehat{\phi_{0}}\left(\nu_{0}\xi/3\right)\neq0$ for $\xi\in\left[-3,3\right]$. Then, set $\varphi:=\phi_{1}\otimes\phi_{1}\in\TestFunctionSpace{\mathbb{R}^{2}}$ and $\psi_{2}:=\phi_{1}$, as well as $\psi_{1}:=\phi_{1}^{\left(8\right)}$, the $8$th derivative of $\phi_{1}$. By differentiating under the integral sign and by integrating by parts, we get for $0\leq k\leq7$ that \begin{equation} \frac{\operatorname{d}^{k}}{\operatorname{d}\xi^{k}}\bigg|_{\xi=0}\widehat{\psi_{1}}=\frac{\operatorname{d}^{k}}{\operatorname{d}\xi^{k}}\bigg|_{\xi=0}\widehat{\phi_{1}^{\left(8\right)}}=\int_{\mathbb{R}}\phi_{1}^{\left(8\right)}\left(x\right)\cdot\left(-2\pi ix\right)^{k}\operatorname{d} x=\left(-1\right)^{8}\cdot\int_{\mathbb{R}}\phi_{1}\left(x\right)\cdot\frac{\operatorname{d}^{8}\left(-2\pi ix\right)^{k}}{\operatorname{d} x^{8}}\operatorname{d} x=0,\label{eq:CartoonLikeFunctionsBoundedVanishingMoments} \end{equation} since $\frac{\operatorname{d}^{8}\left(-2\pi ix\right)^{k}}{\operatorname{d} x^{8}}\equiv0$ for $0\leq k\leq7$. Next, observe $\widehat{\varphi}\left(\xi\right)=\widehat{\phi_{1}}\left(\xi_{1}\right)\cdot\widehat{\phi_{1}}\left(\xi_{2}\right)\neq0$ for $\xi\in\left[-3,3\right]^{2}\supset\left[-1,1\right]^{2}$, as well as $\widehat{\psi_{2}}\left(\xi\right)\neq0$ for $\xi\in\left[-3,3\right]$ and finally \[ \widehat{\psi_{1}}\left(\xi\right)=\left(2\pi i\xi\right)^{8}\cdot\widehat{\phi_{1}}\left(\xi\right)=\left(2\pi\right)^{8}\cdot\xi^{8}\cdot\widehat{\phi_{1}}\left(\xi\right)\neq0\text{ for }\xi\in\left[-3,3\right]\setminus\left\{ 0\right\} , \] which in particular implies $\widehat{\psi_{1}}\left(\xi\right)\neq0$ for $\frac{1}{3}\leq\left|\xi\right|\leq3$. Now, setting $\psi:=\psi_{1}\otimes\psi_{2}$, we want to verify that $\varphi,\psi$ satisfy the assumptions of Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent} with the choices $\varepsilon=\frac{1}{4}$, $p_{0}=\frac{2}{3}$, $s^{\left(0\right)}=0$ and $\alpha=\frac{1}{2}$. 
Since we have $\varphi\in\TestFunctionSpace{\mathbb{R}^{2}}$ and $\psi_{1},\psi_{2}\in\TestFunctionSpace{\mathbb{R}}$ and since $\widehat{\psi_{2}}\left(\xi\right)\neq0$ for $\xi\in\left[-3,3\right]$ and $\widehat{\psi_{1}}\left(\xi\right)\neq0$ for $\frac{1}{3}\leq\left|\xi\right|\leq3$ and since finally $\widehat{\varphi}\left(\xi\right)\neq0$ for $\xi\in\left[-1,1\right]^{2}$, Remark \ref{rem:NiceTensorConditionsForUnconnectedCovering} and Corollaries \ref{cor:ReallyNiceAlphaShearletTensorAtomicDecompositionConditions} and \ref{cor:ReallyNiceAlphaShearletTensorBanachFrameConditions} show that all we need to check is $\frac{\operatorname{d}^{\ell}}{\operatorname{d}\xi^{\ell}}\big|_{\xi=0}\widehat{\psi_{1}}=0$ for all $\ell=0,\dots,N_{0}+\left\lceil \Lambda_{1}\right\rceil -1$ and all $\ell=0,\dots,N_{0}+\left\lceil M_{1}\right\rceil -1$, where $N_{0}=\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil =\left\lceil 27/8\right\rceil =4$, \begin{align*} \Lambda_{1} & =\varepsilon+p_{0}^{-1}+\max\left\{ 0,\,\left(1+\alpha\right)\left(p_{0}^{-1}-1\right)\right\} =\frac{5}{2}\leq3,\\ \text{and }M_{1} & =\varepsilon+p_{0}^{-1}+\max\left\{ 0,\,\left(1+\alpha\right)\left(p_{0}^{-1}-2^{-1}\right)\right\} =\frac{1}{4}+3\leq4, \end{align*} cf.\@ Theorems \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}, \ref{thm:NicelySimplifiedAlphaShearletFrameConditions}, and \ref{thm:ReallyNiceShearletAtomicDecompositionConditions}. Hence, $N_{0}+\left\lceil \Lambda_{1}\right\rceil -1\leq7$ and $N_{0}+\left\lceil M_{1}\right\rceil -1\leq7$, so that equation \eqref{eq:CartoonLikeFunctionsBoundedVanishingMoments} shows that $\varphi,\psi$ indeed satisfy the assumptions of Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}. That theorem yields because of $\alpha=\frac{1}{2}$ some $\delta_{0}\in\left(0,1\right]$ such that the following hold for all $0<\delta\leq\delta_{0}$: \begin{itemize}[leftmargin=0.6cm] \item The shearlet system ${\rm SH}_{1/2}\left(\varphi,\psi;\delta\right)=\left(\gamma^{\left[v,k,\delta\right]}\right)_{v\in V,k\in\mathbb{Z}^{2}}$ is a frame for $L^{2}\left(\mathbb{R}^{2}\right)$. \item Since $p\in\left(2/\left(1+\beta\right),\:2\right]\subset\left[\frac{2}{3},2\right]=\left[p_{0},2\right]$, we have \[ \qquad\mathscr{S}_{\beta^{-1},\,\left(1+\beta^{-1}\right)\left(\frac{1}{p}-\frac{1}{2}\right)}^{p,p}\left(\mathbb{R}^{2}\right)=\mathscr{S}_{\alpha,\left(1+\alpha\right)\left(\frac{1}{p}-\frac{1}{2}\right)}^{p,p}\left(\mathbb{R}^{2}\right)=\left\{ f\in L^{2}\left(\mathbb{R}^{2}\right)\,\middle|\,\left(\left\langle f,\,\smash{\gamma^{\left[v,k,\delta\right]}}\right\rangle _{L^{2}}\vphantom{\gamma^{\left[v,k,\delta\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\in\ell^{p}\left(V\times\mathbb{Z}^{2}\right)\right\} \] and there is a constant $C_{p}=C_{p}\left(\varphi,\psi,\delta\right)>0$ such that \[ \left\Vert f\right\Vert _{\mathscr{S}_{\beta^{-1},\,\left(1+\beta^{-1}\right)\left(p^{-1}-2^{-1}\right)}^{p,p}}\leq C_{p}\cdot\left\Vert \left(\left\langle f,\,\smash{\gamma^{\left[v,k,\delta\right]}}\right\rangle _{L^{2}}\vphantom{\gamma^{\left[v,k,\delta\right]}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}\qquad\forall f\in\mathscr{S}_{\beta^{-1},\,\left(1+\beta^{-1}\right)\left(p^{-1}-2^{-1}\right)}^{p,p}\left(\mathbb{R}^{2}\right). 
\] \end{itemize} Thus, since we clearly have $\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right)\subset L^{2}\left(\mathbb{R}^{2}\right)$, it suffices to show that there is a constant $C=C\left(p,\nu,\delta,\varphi,\psi\right)>0$ such that $\left\Vert A^{\left(\delta\right)}f\right\Vert _{\ell^{p}}\leq C<\infty$ for all $f\in\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right)$, where $A^{\left(\delta\right)}f:=\left(\left\langle f,\,\gamma^{\left[v,k,\delta\right]}\right\rangle _{L^{2}}\right)_{v\in V,\,k\in\mathbb{Z}^{2}}$. Here, we note that the sequence $A^{\left(\delta\right)}f$ just consists of the shearlet coefficients of $f$ (up to a trivial reordering in the translation variable $k$) with respect to the shearlet frame ${\rm SH}_{\frac{1}{2}}\left(\varphi,\psi;\,\delta\right)={\rm SH}\left(\varphi,\psi,\smash{\theta};\,\delta\right)$ with $\theta\left(x,y\right)=\psi\left(y,x\right)$, cf.\@ Remark \ref{rem:AlphaShearletsYieldUsualShearlets}. Hence, there is hope to derive the estimate $\left\Vert A^{\left(\delta\right)}f\right\Vert _{\ell^{p}}\leq C$ as a consequence of \cite[equation (3)]{CompactlySupportedShearletsAreOptimallySparse}, which states that \begin{equation} \sum_{n>N}\left|\lambda\left(f\right)\right|_{n}^{2}\leq C\cdot N^{-2}\cdot\left(1+\log N\right)^{3}\qquad\forall N\in\mathbb{N}\text{ and }f\in\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right),\label{eq:ShearletCoefficientDecay} \end{equation} where $\left(\left|\lambda\left(f\right)\right|_{n}\right)_{n\in\mathbb{N}}$ are the absolute values of the shearlet coefficients of $f$ with respect to the shearlet frame ${\rm SH}\left(\varphi,\psi,\theta;\,\delta\right)$, ordered nonincreasingly. In particular, $\left\Vert A^{\left(\delta\right)}f\right\Vert _{\ell^{p}}=\left\Vert \left[\left|\lambda\left(f\right)\right|_{n}\right]_{n\in\mathbb{N}}\right\Vert _{\ell^{p}}$. \medskip{} Note though that in order for \cite[equation (3)]{CompactlySupportedShearletsAreOptimallySparse} to be applicable, we need to verify that $\varphi,\psi,\theta$ satisfy the assumptions of \cite[Theorem 1.3]{CompactlySupportedShearletsAreOptimallySparse}, i.e., $\varphi,\psi,\theta$ need to be compactly supported (which is satisfied) and \begin{enumerate} \item $\left|\widehat{\psi}\left(\xi\right)\right|\lesssim\min\left\{ 1,\left|\xi_{1}\right|^{\sigma}\right\} \cdot\min\left\{ 1,\left|\xi_{1}\right|^{-\tau}\right\} \cdot\min\left\{ 1,\left|\xi_{2}\right|^{-\tau}\right\} $ and \item $\left|\frac{\partial}{\partial\xi_{2}}\widehat{\psi}\left(\xi\right)\right|\leq\left|h\left(\xi_{1}\right)\right|\cdot\left(1+\frac{\left|\xi_{2}\right|}{\left|\xi_{1}\right|}\right)^{-\tau}$ for some $h\in L^{1}\left(\mathbb{R}\right)$ \end{enumerate} for certain (arbitrary) $\sigma>5$ and $\tau\geq4$. Furthermore, $\theta$ needs to satisfy the same estimate with interchanged roles of $\xi_{1},\xi_{2}$. But in view of $\theta\left(x,y\right)=\psi\left(y,x\right)$, it suffices to establish the estimates for $\psi$. To this end, recall from above that $\widehat{\psi_{1}}\in C^{\infty}\left(\mathbb{R}\right)$ is analytic with $\frac{\operatorname{d}^{k}}{\operatorname{d}\xi^{k}}\bigg|_{\xi=0}\widehat{\psi_{1}}=0$ for $0\leq k\leq7$. This easily implies $\left|\widehat{\psi_{1}}\left(\xi\right)\right|\lesssim\left|\xi\right|^{8}$ for $\left|\xi\right|\leq1$, see e.g.\@ the proof of Corollary \ref{cor:ReallyNiceAlphaShearletTensorAtomicDecompositionConditions}, in particular equation \eqref{eq:VanishingFourierDerivativesYieldFourierDecayAtOrigin}. 
Furthermore, since $\psi_{1},\psi_{2}\in\TestFunctionSpace{\mathbb{R}}$, we get for arbitrary $K\in\mathbb{N}$ that $\left|\widehat{\psi_{i}}\left(\xi\right)\right|\lesssim\left(1+\left|\xi\right|\right)^{-K}$ for $i\in\left\{ 1,2\right\} $. Altogether, we conclude $\left|\widehat{\psi_{1}}\left(\xi\right)\right|\lesssim\min\left\{ 1,\left|\xi\right|^{8}\right\} \cdot\left(1+\left|\xi\right|\right)^{-8}$ and likewise $\left|\widehat{\psi_{2}}\left(\xi\right)\right|\lesssim\left(1+\left|\xi\right|\right)^{-8}$ for all $\xi\in\mathbb{R}$, so that the first estimate is fulfilled for $\sigma:=8>5$ and $\tau:=8\geq4$. Next, we observe for $\xi\in\mathbb{R}^{2}$ with $\xi_{1}\neq0$ that \[ 1+\frac{\left|\xi_{2}\right|}{\left|\xi_{1}\right|}\leq\left(1+\left|\xi_{2}\right|\right)\cdot\left(1+\left|\xi_{1}\right|^{-1}\right)\leq2\cdot\left(1+\left|\xi_{2}\right|\right)\cdot\max\left\{ 1,\,\left|\xi_{1}\right|^{-1}\right\} \] and thus \[ \left(1+\left|\xi_{2}\right|/\left|\xi_{1}\right|\right)^{-8}\geq2^{-8}\cdot\left(1+\left|\xi_{2}\right|\right)^{-8}\cdot\min\left\{ 1,\,\left|\xi_{1}\right|^{8}\right\} . \] But since we have $\widehat{\psi_{2}}\in\mathcal{S}\left(\mathbb{R}\right)$ and thus $\left|\widehat{\psi_{2}}'\left(\xi\right)\right|\lesssim\left(1+\left|\xi\right|\right)^{-8}$, this implies \begin{align*} \left|\frac{\partial}{\partial\xi_{2}}\widehat{\psi}\left(\xi\right)\right|=\left|\widehat{\psi_{1}}\left(\xi_{1}\right)\right|\cdot\left|\widehat{\psi_{2}}'\left(\xi_{2}\right)\right| & \lesssim\left(1+\left|\xi_{1}\right|\right)^{-8}\cdot\left(1+\left|\xi_{2}\right|\right)^{-8}\cdot\min\left\{ 1,\left|\xi_{1}\right|^{8}\right\} \\ & \lesssim\left(1+\left|\xi_{1}\right|\right)^{-8}\cdot\left(1+\left|\xi_{2}\right|/\left|\xi_{1}\right|\right)^{-8}, \end{align*} so that the second condition from above is satisfied for our choice $\tau=8$, with $h\left(\xi_{1}\right)=\left(1+\left|\xi_{1}\right|\right)^{-8}$. \medskip{} Consequently, we conclude from \cite[equation (3)]{CompactlySupportedShearletsAreOptimallySparse} that equation \eqref{eq:ShearletCoefficientDecay} is satisfied. Now, for arbitrary $M\in\mathbb{N}_{\geq4}$, we apply equation \eqref{eq:ShearletCoefficientDecay} with $N=\left\lceil \frac{M}{2}\right\rceil \geq2$, noting that $\left\lceil \frac{M}{2}\right\rceil \leq\frac{M}{2}+1\leq\frac{M}{2}+\frac{M}{4}=\frac{3}{4}M\leq M$ to deduce \begin{align*} \frac{1}{4}M\cdot\left|\lambda\left(f\right)\right|_{M}^{2}\leq\left|\lambda\left(f\right)\right|_{M}^{2}\cdot\left(M-\left\lceil M/2\right\rceil \right) & \leq\sum_{\left\lceil M/2\right\rceil <n\leq M}\left|\lambda\left(f\right)\right|_{n}^{2}\leq\sum_{n>\left\lceil M/2\right\rceil }\left|\lambda\left(f\right)\right|_{n}^{2}\\ & \leq C\cdot\left\lceil M/2\right\rceil ^{-2}\cdot\left(1+\log\left\lceil M/2\right\rceil \right)^{3}\\ & \leq4C\cdot M^{-2}\cdot\left(1+\log M\right)^{3}, \end{align*} which implies $\left|\lambda\left(f\right)\right|_{M}\leq\sqrt{16C}\cdot\left[M^{-1}\cdot\left(1+\log M\right)\right]^{3/2}$ for $M\in\mathbb{N}_{\geq4}$. 
But since $\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right)\subset L^{2}\left(\mathbb{R}^{2}\right)$ is bounded and since the elements of the shearlet frame ${\rm SH}\left(\varphi,\psi,\smash{\theta};\,\delta\right)$ are $L^{2}$-bounded, we have $\left\Vert \left[\left|\lambda\left(f\right)\right|_{n}\right]_{n\in\mathbb{N}}\right\Vert _{\ell^{\infty}}\lesssim1$, so that we get $\left|\lambda\left(f\right)\right|_{M}\lesssim\left[M^{-1}\cdot\left(1+\log M\right)\right]^{3/2}$ for all $M\in\mathbb{N}$ and all $f\in\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right)$, where the implied constant is independent of the precise choice of $f$. But this easily yields $\left\Vert \left[\left|\lambda\left(f\right)\right|_{M}\right]_{M\in\mathbb{N}}\right\Vert _{\ell^{p}}\lesssim1$, since $p\in\left(\frac{2}{3},2\right]=\left(2/\left(1+\beta\right),\,2\right]$. Here, the implied constant might depend on $\varphi,\psi,\delta,p,\nu$, but not on $f\in\mathcal{E}^{2}\left(\mathbb{R}^{2};\nu\right)$. \end{proof} We can now easily derive the claimed statement about the approximation rate of functions $f\in\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right)$ with respect to $\beta^{-1}$-shearlet systems. \begin{thm} \label{thm:CartoonApproximationWithAlphaShearlets}Let $\beta\in\left(1,2\right]$ be arbitrary. Assume that $\varphi,\psi\in L^{1}\left(\mathbb{R}^{2}\right)$ satisfy the conditions of Theorem \ref{thm:ReallyNiceUnconnectedShearletAtomicDecompositionConditions} for $\alpha=\beta^{-1}$, $p_{0}=q_{0}=\frac{2}{1+\beta}$, $s_{0}=0$ and $s_{1}=\frac{1}{2}\left(1+\beta\right)$ and some $\varepsilon\in\left(0,1\right]$ (see Remark \ref{rem:CartoonApproximationConstantSimplification} for simplified conditions which ensure that these assumptions are satisfied). Then there is some $\delta_{0}=\delta_{0}\left(\varepsilon,\beta,\varphi,\psi\right)>0$ such that for all $0<\delta\leq\delta_{0}$ and arbitrary $f\in\mathcal{E}^{\beta}\left(\mathbb{R}^{2}\right)$ and $N\in\mathbb{N}$, there is a function $f^{\left(N\right)}\in L^{2}\left(\mathbb{R}^{2}\right)$ which is a linear combination of $N$ elements of the $\beta^{-1}$-shearlet frame $\Psi={\rm SH}_{\beta^{-1}}\left(\varphi,\psi;\delta\right)=\left(\gamma^{\left[v,k,\delta\right]}\right)_{v\in V,k\in\mathbb{Z}^{2}}$ such that the following holds: For arbitrary $\sigma,\nu>0$, there is a constant $C=C\left(\beta,\delta,\nu,\sigma,\varphi,\psi\right)>0$ satisfying \[ \left\Vert f-\smash{f^{\left(N\right)}}\right\Vert _{L^{2}}\leq C\cdot N^{-\left(\frac{\beta}{2}-\sigma\right)}\qquad\forall f\in\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right)\text{ and }N\in\mathbb{N}.\qedhere \] \end{thm} \begin{rem*} It was shown in \cite[Theorem 2.8]{CartoonApproximationWithAlphaCurvelets} that \emph{no} dictionary $\Phi$ can achieve an error $\alpha_{\Phi}^{\left(N\right)}\left(f\right)\leq C\cdot N^{-\theta}$ for all $N\in\mathbb{N}$ and $f\in\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right)$ with $\theta>\frac{\beta}{2}$, as long as one insists on a \emph{polynomial depth restriction} for forming the $N$-term approximation. In this sense, the resulting approximation rate is almost optimal. We remark, however, that it is \emph{not immediately clear} whether the $N$-term approximation whose existence is claimed by the theorem above can be chosen to satisfy the polynomial depth search restriction. 
There is a long-standing tradition \cite{CandesDonohoCurvelets,CartoonApproximationWithAlphaCurvelets,OptimallySparseMultidimensionalRepresentationUsingShearlets,AlphaMolecules,CompactlySupportedShearletsAreOptimallySparse} of omitting further considerations concerning this question; we have therefore deferred to Section \ref{sec:PolynomialSearchDepth} the proof that the above approximation rate can also be achieved using a polynomially restricted search depth. For more details on the technical assumption of polynomial depth restriction in $N$-term approximations, we refer to \cite[Section 2.1.1]{CartoonApproximationWithAlphaCurvelets}. \end{rem*} \begin{proof} Set $\alpha:=\beta^{-1}$. Under the given assumptions, Theorem \ref{thm:ReallyNiceUnconnectedShearletAtomicDecompositionConditions} ensures that ${\rm SH}_{\alpha}\left(\varphi,\psi;\,\delta\right)$ forms an atomic decomposition for $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ for all $p\geq p_{0}$, $q\geq q_{0}$ and $s_{0}\leq s\leq s_{1}$, for arbitrary $0<\delta\leq\delta_{0}$, where the constant $\delta_{0}=\delta_{0}\left(\alpha,\varepsilon,p_{0},q_{0},s_{0},s_{1},\varphi,\psi\right)=\delta_{0}\left(\varepsilon,\beta,\varphi,\psi\right)>0$ is provided by Theorem \ref{thm:ReallyNiceUnconnectedShearletAtomicDecompositionConditions}. Fix some $0<\delta\leq\delta_{0}$. Let $S^{\left(\delta\right)}:C_{u^{0}}^{2,2}\to\mathscr{S}_{\alpha,0}^{2,2}\left(\mathbb{R}^{2}\right)$ and $C^{\left(\delta\right)}:\mathscr{S}_{\alpha,0}^{2,2}\left(\mathbb{R}^{2}\right)\to C_{u^{0}}^{2,2}$ be the synthesis map and the coefficient map whose existence and boundedness are guaranteed by Theorem \ref{thm:ReallyNiceUnconnectedShearletAtomicDecompositionConditions}, since $2\geq p_{0}=q_{0}$ and since $s_{0}\leq0\leq s_{1}$. Note directly from Definition \ref{def:CoefficientSpace} that $C_{u^{0}}^{2,2}=\ell^{2}\left(V\times\mathbb{Z}^{2}\right)$ and that $\mathscr{S}_{\alpha,0}^{2,2}\left(\mathbb{R}^{2}\right)=L^{2}\left(\mathbb{R}^{2}\right)$ (cf.\@ \cite[Lemma 6.10]{DecompositionEmbedding}). Now, for arbitrary $f\in\mathcal{E}^{\beta}\left(\mathbb{R}^{2}\right)\subset L^{2}\left(\mathbb{R}^{2}\right)$, let \[ \left(\smash{c_{j}^{\left(f\right)}}\right)_{j\in V\times\mathbb{Z}^{2}}:=c^{\left(f\right)}:=C^{\left(\delta\right)}f\in\ell^{2}\left(V\times\mathbb{Z}^{2}\right). \] Furthermore, for $f\in\mathcal{E}^{\beta}\left(\mathbb{R}^{2}\right)$ and $N\in\mathbb{N}$, choose a set $J_{N}^{\left(f\right)}\subset V\times\mathbb{Z}^{2}$ with $\left|\smash{J_{N}^{\left(f\right)}}\vphantom{J_{N}}\right|=N$ and such that $\left|\smash{c_{j}^{\left(f\right)}}\right|\geq\left|\smash{c_{i}^{\left(f\right)}}\right|$ for all $j\in J_{N}^{\left(f\right)}$ and all $i\in\left(V\times\mathbb{Z}^{2}\right)\setminus J_{N}^{\left(f\right)}$. For a general sequence, such a set need not exist, but since we have $c^{\left(f\right)}\in\ell^{2}$, a moment's thought shows that it does: for each $\varepsilon>0$, there are only finitely many indices $i\in V\times\mathbb{Z}^{2}$ satisfying $\left|\smash{c_{i}^{\left(f\right)}}\right|\geq\varepsilon$. 
Finally, set $f^{\left(N\right)}:=S^{\left(\delta\right)}\left[c^{\left(f\right)}\cdot{\mathds{1}}_{J_{N}^{\left(f\right)}}\right]\in L^{2}\left(\mathbb{R}^{2}\right)$ and note that $f^{\left(N\right)}$ is indeed a linear combination of (at most) $N$ elements of ${\rm SH}_{\beta^{-1}}\left(\varphi,\psi;\delta\right)=\left(\gamma^{\left[v,k,\delta\right]}\right)_{v\in V,k\in\mathbb{Z}^{2}}$, by definition of $S^{\left(\delta\right)}$. Moreover, note that the so-called \textbf{Stechkin lemma} (see e.g.\@ \cite[Lemma 3.3]{StechkinLemma}) shows \begin{equation} \left\Vert c^{\left(f\right)}-{\mathds{1}}_{J_{N}^{\left(f\right)}}\cdot c^{\left(f\right)}\right\Vert _{\ell^{2}}\leq N^{-\left(\frac{1}{p}-\frac{1}{2}\right)}\cdot\left\Vert \smash{c^{\left(f\right)}}\right\Vert _{\ell^{p}}\qquad\forall N\in\mathbb{N}\text{ and }p\in\left(0,2\right]\text{ for which }\left\Vert \smash{c^{\left(f\right)}}\right\Vert _{\ell^{p}}<\infty.\label{eq:StechkinEstimate} \end{equation} It remains to verify that the $f^{\left(N\right)}$ satisfy the stated approximation rate. To show this, let $\sigma,\nu>0$ be arbitrary. Because of $\frac{1}{p}-\frac{1}{2}\to\frac{\beta}{2}$ as $p\downarrow2/\left(1+\beta\right)$, there is some $p\in\left(2/\left(1+\beta\right),\,2\right)$ satisfying $\frac{1}{p}-\frac{1}{2}\geq\frac{\beta}{2}-\sigma$. Set $s:=\left(1+\alpha\right)\left(p^{-1}-2^{-1}\right)$. Observe that $p\geq p_{0}=q_{0}$, as well as \[ s_{0}=0\leq s=\left(1+\alpha\right)\left(p^{-1}-2^{-1}\right)\leq\left(1+\alpha\right)\left(\frac{1+\beta}{2}-\frac{1}{2}\right)=\frac{\beta}{2}\left(1+\beta^{-1}\right)=\frac{1}{2}\left(1+\beta\right)=s_{1}. \] Now, observe $\vphantom{B_{v}^{\left(\alpha\right)}}\left|\det\smash{B_{v}^{\left(\alpha\right)}}\right|=u_{v}^{1+\alpha}$ for all $v\in V$, so that the remark after Definition \ref{def:CoefficientSpace} shows that the coefficient space $C_{u^{s}}^{p,p}$ satisfies $C_{u^{s}}^{p,p}=\ell^{p}\left(V\times\mathbb{Z}^{2}\right)\hookrightarrow\ell^{2}\left(V\times\mathbb{Z}^{2}\right)=C_{u^{0}}^{2,2}$. Therefore, Theorem \ref{thm:ReallyNiceUnconnectedShearletAtomicDecompositionConditions} and the associated remark (and the inclusion $\mathscr{S}_{\alpha,s}^{p,p}\left(\mathbb{R}^{2}\right)\hookrightarrow L^{2}\left(\mathbb{R}^{2}\right)$ from Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}) show that the synthesis map and the coefficient map from above restrict to bounded linear operators \[ S^{\left(\delta\right)}:\ell^{p}\left(V\times\mathbb{Z}^{2}\right)\to\mathscr{S}_{\alpha,s}^{p,p}\left(\mathbb{R}^{2}\right)\quad\text{ and }\quad C^{\left(\delta\right)}:\mathscr{S}_{\alpha,s}^{p,p}\left(\mathbb{R}^{2}\right)\to\ell^{p}\left(V\times\mathbb{Z}^{2}\right). \] Next, Proposition \ref{prop:CartoonLikeFunctionsBoundedInAlphaShearletSmoothness} shows $\mathcal{E}^{\beta}\left(\mathbb{R}^{2}\right)\subset\mathscr{S}_{\alpha,s}^{p,p}\left(\mathbb{R}^{2}\right)$ and even yields a constant $C_{1}=C_{1}\left(\beta,\nu,p\right)>0$ satisfying $\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s}^{p,p}}\leq C_{1}$ for all $f\in\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right)$. 
This implies \begin{equation} \left\Vert \smash{c^{\left(f\right)}}\right\Vert _{\ell^{p}}=\left\Vert \smash{C^{\left(\delta\right)}}f\right\Vert _{\ell^{p}}\leq\vertiii{\smash{C^{\left(\delta\right)}}}_{\mathscr{S}_{\alpha,s}^{p,p}\to\ell^{p}}\cdot\left\Vert f\right\Vert _{\mathscr{S}_{\alpha,s}^{p,p}}\leq C_{1}\cdot\vertiii{\smash{C^{\left(\delta\right)}}}_{\mathscr{S}_{\alpha,s}^{p,p}\to\ell^{p}}<\infty\qquad\forall f\in\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right).\label{eq:NTermApproximationellPEstimate} \end{equation} By putting everything together and recalling $S^{\left(\delta\right)}\circ C^{\left(\delta\right)}=\operatorname{id}_{\mathscr{S}_{\alpha,0}^{2,2}}=\operatorname{id}_{L^{2}}$, we finally arrive at \begin{align*} \left\Vert f-\smash{f^{\left(N\right)}}\right\Vert _{L^{2}} & =\left\Vert S^{\left(\delta\right)}C^{\left(\delta\right)}f-S^{\left(\delta\right)}\left[{\mathds{1}}_{J_{N}^{\left(f\right)}}\cdot c^{\left(f\right)}\right]\right\Vert _{L^{2}}\\ \left({\scriptstyle \text{since }\mathscr{S}_{\alpha,0}^{2,2}\left(\smash{\mathbb{R}^{2}}\right)=L^{2}\left(\smash{\mathbb{R}^{2}}\right)\text{ with equivalent norms}}\right) & \asymp\left\Vert S^{\left(\delta\right)}\left[c^{\left(f\right)}-{\mathds{1}}_{J_{N}^{\left(f\right)}}\cdot c^{\left(f\right)}\right]\right\Vert _{\mathscr{S}_{\alpha,0}^{2,2}}\\ & \leq\vertiii{\smash{S^{\left(\delta\right)}}}_{C_{u^{0}}^{2,2}\to\mathscr{S}_{\alpha,0}^{2,2}}\cdot\left\Vert c^{\left(f\right)}-{\mathds{1}}_{J_{N}^{\left(f\right)}}\cdot c^{\left(f\right)}\right\Vert _{\ell^{2}}\\ \left({\scriptstyle \text{eq. }\eqref{eq:StechkinEstimate}}\right) & \leq\vertiii{\smash{S^{\left(\delta\right)}}}_{C_{u^{0}}^{2,2}\to\mathscr{S}_{\alpha,0}^{2,2}}\cdot\left\Vert \smash{c^{\left(f\right)}}\right\Vert _{\ell^{p}}\cdot N^{-\left(\frac{1}{p}-\frac{1}{2}\right)}\\ \left({\scriptstyle \text{eq. }\eqref{eq:NTermApproximationellPEstimate}}\right) & \leq C_{1}\cdot\vertiii{\smash{C^{\left(\delta\right)}}}_{\mathscr{S}_{\alpha,s}^{p,p}\to\ell^{p}}\cdot\vertiii{\smash{S^{\left(\delta\right)}}}_{C_{u^{0}}^{2,2}\to\mathscr{S}_{\alpha,0}^{2,2}}\cdot N^{-\left(\frac{1}{p}-\frac{1}{2}\right)}\\ \left({\scriptstyle \text{since }\frac{1}{p}-\frac{1}{2}\geq\frac{\beta}{2}-\sigma}\right) & \leq C_{1}\cdot\vertiii{\smash{C^{\left(\delta\right)}}}_{\mathscr{S}_{\alpha,s}^{p,p}\to\ell^{p}}\cdot\vertiii{\smash{S^{\left(\delta\right)}}}_{C_{u^{0}}^{2,2}\to\mathscr{S}_{\alpha,0}^{2,2}}\cdot N^{-\left(\frac{\beta}{2}-\sigma\right)} \end{align*} for all $N\in\mathbb{N}$ and $f\in\mathcal{E}^{\beta}\left(\mathbb{R}^{2};\nu\right)$. Since $p$ only depends on $\sigma,\beta$, this easily yields the desired claim. 
\end{proof} We close this section by making the assumptions of Theorem \ref{thm:CartoonApproximationWithAlphaShearlets} more transparent: \begin{rem} \label{rem:CartoonApproximationConstantSimplification}With the choices of $\alpha,p_{0},q_{0},s_{0},s_{1}$ from Theorem \ref{thm:CartoonApproximationWithAlphaShearlets}, one can choose $\varepsilon=\varepsilon\left(\beta\right)\in\left(0,1\right]$ such that the constants $\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil $ and $\Lambda_{0},\dots,\Lambda_{3}$ from Theorem \ref{thm:ReallyNiceShearletAtomicDecompositionConditions} satisfy $\Lambda_{1}\leq3$, as well as \[ \left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil =\begin{cases} 3, & \text{if }\beta<2,\\ 4, & \text{if }\beta=2, \end{cases}\quad\Lambda_{0}\leq\begin{cases} 11, & \text{if }\beta<2,\\ 12, & \text{if }\beta=2, \end{cases}\quad\Lambda_{2}<\begin{cases} 11, & \text{if }\beta<2,\\ 12, & \text{if }\beta=2 \end{cases}\quad\text{ and }\quad\Lambda_{3}<\begin{cases} 14, & \text{if }\beta<2,\\ 16, & \text{if }\beta=2. \end{cases} \] Thus, in view of Remark \ref{rem:NiceTensorConditionsForUnconnectedCovering} (which refers to Corollary \ref{cor:ReallyNiceAlphaShearletTensorAtomicDecompositionConditions}), it suffices in \emph{every} case to have $\varphi\in C_{c}^{12}\left(\mathbb{R}^{2}\right)$ and $\psi=\psi_{1}\otimes\psi_{2}$ with $\psi_{1}\in C_{c}^{15}\left(\mathbb{R}\right)$ and $\psi_{2}\in C_{c}^{19}\left(\mathbb{R}\right)$ and with the following additional properties: \begin{enumerate}[leftmargin=0.6cm] \item $\widehat{\varphi}\left(\xi\right)\neq0$ for all $\xi\in\left[-1,1\right]^{2}$, \item $\widehat{\psi_{1}}\left(\xi\right)\neq0$ for $\frac{1}{3}\leq\left|\xi\right|\leq3$ and $\widehat{\psi_{2}}\left(\xi\right)\neq0$ for all $\xi\in\left[-3,3\right]$, \item We have $\frac{\operatorname{d}^{\ell}}{\operatorname{d}\xi^{\ell}}\big|_{\xi=0}\widehat{\psi_{1}}=0$ for $0\leq\ell\leq6$. In case of $\beta<2$, it even suffices to have this for $0\leq\ell\leq5$.\qedhere \end{enumerate} \end{rem} \begin{proof} We have $p_{0}^{-1}=\frac{1+\beta}{2}$ and thus $\frac{2}{p_{0}}=1+\beta\in\left(2,3\right)$ in case of $\beta<2$. Hence, $\frac{2+\varepsilon}{p_{0}}\in\left(2,3\right)$ for $\varepsilon=\varepsilon\left(\beta\right)$ sufficiently small. In case of $\beta=2$, we get $\frac{2+\varepsilon}{p_{0}}=3+\frac{\varepsilon}{p_{0}}\in\left(3,4\right)$ for $\varepsilon>0$ sufficiently small. This establishes the claimed identity for $N_{0}:=\left\lceil p_{0}^{-1}\cdot\left(2+\varepsilon\right)\right\rceil $. For the remainder of the proof, we always assume that $\varepsilon$ is chosen small enough for this identity to hold. Next, the constant $\Lambda_{1}$ from Theorem \ref{thm:ReallyNiceShearletAtomicDecompositionConditions} satisfies because of $\alpha=\beta^{-1}$ that \begin{align*} \Lambda_{1} & =\varepsilon+\frac{1}{\min\left\{ p_{0},q_{0}\right\} }+\max\left\{ 0,\,\left(1+\alpha\right)\left(\frac{1}{p_{0}}-1\right)-s_{0}\right\} \\ & =\varepsilon+\frac{1+\beta}{2}+\left(1+\beta^{-1}\right)\left(\frac{1+\beta}{2}-1\right)\\ & =\varepsilon+\frac{1}{2}+\beta-\frac{\beta^{-1}}{2}, \end{align*} which is strictly increasing with respect to $\beta>0$. Therefore, we always have $\Lambda_{1}\leq\varepsilon+\frac{1}{2}+2-\frac{2^{-1}}{2}=2+\frac{1}{4}+\varepsilon\leq3$ for $\varepsilon\leq\frac{3}{4}$. 
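As a concrete sanity check (the sample values $\beta=\frac{3}{2}$ and $\varepsilon=\frac{1}{4}$ are chosen purely for illustration and are not part of the statement), the formula just derived gives \[ \Lambda_{1}=\frac{1}{4}+\frac{1}{2}+\frac{3}{2}-\frac{1}{2}\cdot\frac{2}{3}=\frac{23}{12}<3, \] in accordance with the claimed bound.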
Furthermore, the constant $\Lambda_{0}$ from Theorem \ref{thm:ReallyNiceShearletAtomicDecompositionConditions} is—because of $p_{0}=\frac{2}{1+\beta}<1$—given by \begin{align*} \Lambda_{0} & =2\varepsilon+3+\max\left\{ \frac{1-\alpha}{\min\left\{ p_{0},q_{0}\right\} }+\frac{1-\alpha}{p_{0}}+1+\alpha+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil +s_{1},\,2\right\} \\ & =2\varepsilon+3+\max\left\{ \frac{\left(1-\beta^{-1}\right)\left(1+\beta\right)}{2}+\frac{\left(1-\beta^{-1}\right)\left(1+\beta\right)}{2}+1+\beta^{-1}+N_{0}+\frac{1}{2}\left(1+\beta\right),\,2\right\} \\ & =2\varepsilon+3+\max\left\{ \frac{3}{2}+\frac{3}{2}\beta+N_{0},\,2\right\} \\ & \leq7+N_{0}+\frac{1}{2}+2\varepsilon. \end{align*} Since $N_{0}=3$ for $\beta<2$, this easily yields $\Lambda_{0}\leq11$ for $\varepsilon\leq\frac{1}{4}$. Similarly, we get $\Lambda_{0}\leq12$ for $\beta=2$ and $\varepsilon\leq\frac{1}{4}$. Likewise, the constant $\Lambda_{2}$ from Theorem \ref{thm:ReallyNiceShearletAtomicDecompositionConditions} satisfies \begin{align*} \Lambda_{2} & =\varepsilon+\max\left\{ 2,\,\left(1+\alpha\right)\left(1+\frac{1}{p_{0}}+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil \right)+s_{1}\right\} \\ & =\varepsilon+\max\left\{ 2,\,\left(1+\beta^{-1}\right)\left(1+\frac{1+\beta}{2}+N_{0}\right)+\frac{1+\beta}{2}\right\} \\ & =\varepsilon+\max\left\{ 2,\,\frac{5}{2}+\beta+N_{0}+\frac{3}{2}\beta^{-1}+\beta^{-1}N_{0}\right\} \\ & =\varepsilon+\frac{5}{2}+\beta+N_{0}+\frac{3}{2}\beta^{-1}+\beta^{-1}N_{0}. \end{align*} Hence, in case of $\beta<2$, we thus get $\Lambda_{2}=\varepsilon+\frac{11}{2}+\beta+\frac{9}{2}\beta^{-1}=:\varepsilon+g\left(\beta\right)$, where $g:\left(0,\infty\right)\to\mathbb{R}$ is \emph{strictly} convex with $g\left(1\right)=11$ and $g\left(2\right)=\frac{39}{4}<11$, so that $g\left(\beta\right)<11$ for all $\beta\in\left(1,2\right)$. Thus, $\Lambda_{2}<11$ for sufficiently small $\varepsilon=\varepsilon\left(\beta\right)>0$. Finally, for $\beta=2$, we get $\Lambda_{2}=11+\frac{1}{4}+\varepsilon<12$ for $0<\varepsilon<\frac{3}{4}$. As the final constant, we consider \begin{align*} \Lambda_{3} & =\varepsilon+\max\left\{ \frac{1-\alpha}{\min\left\{ p_{0},q_{0}\right\} }+\frac{3-\alpha}{p_{0}}+2\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil +1+\alpha+s_{1},\,\frac{2}{\min\left\{ p_{0},q_{0}\right\} }+\frac{2}{p_{0}}+\left\lceil \frac{2+\varepsilon}{p_{0}}\right\rceil \right\} \\ & =\varepsilon+\max\left\{ \frac{\left(1-\beta^{-1}\right)\left(1+\beta\right)}{2}+\frac{\left(3-\beta^{-1}\right)\left(1+\beta\right)}{2}+2N_{0}+1+\beta^{-1}+\frac{1+\beta}{2},\,1+\beta+1+\beta+N_{0}\right\} \\ & =\varepsilon+\max\left\{ \frac{5}{2}\beta+\frac{5}{2}+2N_{0},\,2+2\beta+N_{0}\right\} \\ & =\varepsilon+\frac{5}{2}\beta+\frac{5}{2}+2N_{0}. \end{align*} In case of $\beta=2$, this means $\Lambda_{3}=\varepsilon+15+\frac{1}{2}<16$ for $0<\varepsilon<\frac{1}{2}$. Finally, for $\beta<2$, we get $\Lambda_{3}<13+\frac{1}{2}+\varepsilon<14$ for $0<\varepsilon<\frac{1}{2}$. \end{proof} \section{Embeddings between \texorpdfstring{$\alpha$}{α}-shearlet smoothness spaces} \label{sec:EmbeddingsBetweenAlphaShearletSmoothness}In the preceding sections, we saw that the $\alpha$-shearlet smoothness spaces $\mathscr{S}_{\alpha,s}^{p,q}\left(\mathbb{R}^{2}\right)$ simultaneously characterize analysis and synthesis sparsity with respect to (sufficiently nice) $\alpha$-shearlet systems; see in particular Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}. 
Since we have a whole family of $\alpha$-shearlet systems, parametrized by $\alpha\in\left[0,1\right]$, it is natural to ask if the different systems are related in some way, e.g.\@ if $\ell^{p}$-sparsity, $p\in\left(0,2\right)$, with respect to $\alpha_{1}$-shearlet systems implies $\ell^{q}$-sparsity with respect to $\alpha_{2}$-shearlet systems, for some $q\in\left(0,2\right)$. In view of Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}, this is equivalent to asking whether there is an \textbf{embedding} \begin{equation} \mathscr{S}_{\alpha_{1},\left(1+\alpha_{1}\right)\left(p^{-1}-2^{-1}\right)}^{p,p}\left(\mathbb{R}^{2}\right)\hookrightarrow\mathscr{S}_{\alpha_{2},\left(1+\alpha_{2}\right)\left(q^{-1}-2^{-1}\right)}^{q,q}\left(\mathbb{R}^{2}\right).\label{eq:EmbeddingSectionSparsityEmbedding} \end{equation} Note, however, that equation \eqref{eq:EmbeddingSectionSparsityEmbedding} is equivalent to asking whether one can deduce $\ell^{q}$-sparsity with respect to $\alpha_{2}$-shearlets from $\ell^{p}$-sparsity with respect to $\alpha_{1}$-shearlets \emph{without any additional information}. If one \emph{does} have additional information, e.g., if one is only interested in functions $f$ with $\operatorname{supp} f\subset\Omega$, where $\Omega\subset\mathbb{R}^{2}$ is fixed and bounded, then the embedding in equation \eqref{eq:EmbeddingSectionSparsityEmbedding} is a sufficient, but in general \emph{not} a necessary criterion for guaranteeing that $f$ is $\ell^{q}$-sparse with respect to $\alpha_{2}$-shearlets if it is $\ell^{p}$-sparse with respect to $\alpha_{1}$-shearlets. More generally than in equation \eqref{eq:EmbeddingSectionSparsityEmbedding}, we will \emph{completely characterize} the existence of the embedding \begin{equation} \mathscr{S}_{\alpha_{1},s_{1}}^{p_{1},q_{1}}\left(\mathbb{R}^{2}\right)\hookrightarrow\mathscr{S}_{\alpha_{2},s_{2}}^{p_{2},q_{2}}\left(\mathbb{R}^{2}\right)\label{eq:EmbeddingSectionGeneralEmbedding} \end{equation} for arbitrary $p_{1},p_{2},q_{1},q_{2}\in\left(0,\infty\right]$, $\alpha_{1},\alpha_{2}\in\left[0,1\right]$ and $s_{1},s_{2}\in\mathbb{R}$. As an application, we will then see that the embedding \eqref{eq:EmbeddingSectionSparsityEmbedding} is \emph{never} fulfilled for $p,q\in\left(0,2\right)$, but that if one replaces the left-hand side of the embedding \eqref{eq:EmbeddingSectionSparsityEmbedding} by $\mathscr{S}_{\alpha_{1},\varepsilon+\left(1+\alpha_{1}\right)\left(p^{-1}-2^{-1}\right)}^{p,p}\left(\mathbb{R}^{2}\right)$ for some $\varepsilon>0$, then the embedding holds for suitable $p,q\in\left(0,2\right)$. Thus, \emph{without further information}, $\ell^{p}$-sparsity with respect to $\alpha_{1}$-shearlets \emph{never} implies nontrivial $\ell^{q}$-sparsity with respect to $\alpha_{2}$-shearlets; but one can still transfer sparsity in some sense if one has $\ell^{p}$-sparsity with respect to $\alpha_{1}$-shearlets, together with a certain \emph{decay of the $\alpha_{1}$-shearlet coefficients with the scale}. We remark that the results in this section can be seen as a continuation of the work in \cite{AlphaMolecules}: In that paper, the authors develop the framework of \textbf{$\alpha$-molecules} which allows one to transfer (analysis) sparsity results between different systems that employ $\alpha$-parabolic scaling; for example between $\alpha$-shearlets and $\alpha$-curvelets. 
Before \cite[Theorem 4.2]{AlphaMolecules}, the authors note that ``\emph{it might though be very interesting for future research to also let $\alpha$-molecules for different $\alpha$'s interact.}'' In a way, this is precisely what we are doing in this section, although we focus on the special case of $\alpha$-shearlets instead of (more general) $\alpha$-molecules. \medskip{} In order to characterize the embedding \eqref{eq:EmbeddingSectionGeneralEmbedding}, we will invoke the embedding theory for decomposition spaces\cite{DecompositionEmbedding} that was developed by one of the authors; this will greatly simplify the proof, since we do not need to start from scratch. In order for the theory in \cite{DecompositionEmbedding} to be applicable to an embedding $\DecompSp{\mathcal{Q}}{p_{1}}{\ell_{w}^{q_{1}}}{}\hookrightarrow\DecompSp{\mathcal{P}}{p_{2}}{\ell_{v}^{q_{2}}}{}$, the two coverings $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}$ and $\mathcal{P}=\left(P_{j}\right)_{j\in J}$ need to be \emph{compatible} in a certain sense. For this, it suffices if $\mathcal{Q}$ is \textbf{almost subordinate} to $\mathcal{P}$ (or vice versa); roughly speaking, this means that the covering $\mathcal{Q}$ is \emph{finer} than $\mathcal{P}$. Precisely, it means that each set $Q_{i}$ is contained in $P_{j_{i}}^{n\ast}$ for some $j_{i}\in J$, where $n\in\mathbb{N}$ is fixed and where $P_{j_{i}}^{n\ast}=\bigcup_{\ell\in j_{i}^{n\ast}}P_{\ell}$. Here, the sets $j^{n\ast}$ are defined inductively, via $L^{\ast}:=\bigcup_{\ell\in L}\ell^{\ast}$ (with $\ell^{\ast}$ as in Definition \ref{def:AlmostStructuredCovering}) and with $L^{\left(n+1\right)\ast}:=\left(L^{n\ast}\right)^{\ast}$ for $L\subset J$. The following lemma establishes this \emph{compatibility} between different $\alpha$-shearlet coverings. \begin{lem} \label{lem:AlphaShearletCoveringSubordinateness}Let $0\leq\alpha_{1}\leq\alpha_{2}\leq1$. Then $\mathcal{S}^{\left(\alpha_{1}\right)}=\left(\smash{S_{i}^{\left(\alpha_{1}\right)}}\right)_{i\in I^{\left(\alpha_{1}\right)}}$ is almost subordinate to $\mathcal{S}^{\left(\alpha_{2}\right)}=\left(\smash{S_{j}^{\left(\alpha_{2}\right)}}\right)_{j\in I^{\left(\alpha_{2}\right)}}$. \end{lem} \begin{proof} Since we have $\bigcup_{i\in I^{\left(\alpha_{1}\right)}}S_{i}^{\left(\alpha_{1}\right)}=\mathbb{R}^{2}=\bigcup_{j\in I^{\left(\alpha_{2}\right)}}S_{j}^{\left(\alpha_{2}\right)}$ and since all of the sets $S_{i}^{\left(\alpha_{1}\right)}$ and $S_{j}^{\left(\alpha_{2}\right)}$ are open and path-connected, \cite[Corollary 2.13]{DecompositionEmbedding} shows that it suffices to show that $\mathcal{S}^{\left(\alpha_{1}\right)}$ is \textbf{weakly subordinate} to $\mathcal{S}^{\left(\alpha_{2}\right)}$. This means that we have $\sup_{i\in I^{\left(\alpha_{1}\right)}}\left|L_{i}\right|<\infty$, with \[ L_{i}:=\left\{ j\in I^{\left(\alpha_{2}\right)}\,\middle|\, S_{j}^{\left(\alpha_{2}\right)}\cap S_{i}^{\left(\alpha_{1}\right)}\neq\emptyset\right\} \qquad\text{ for }i\in I^{\left(\alpha_{1}\right)}. \] To show this, we first consider only the case $i=\left(n,m,\varepsilon,0\right)\in I_{0}^{\left(\alpha_{1}\right)}$ and let $j\in L_{i}$ be arbitrary. We now distinguish several cases regarding $j$: \textbf{Case 1}: We have $j=\left(k,\ell,\beta,0\right)\in I_{0}^{\left(\alpha_{2}\right)}$. Let $\left(\xi,\eta\right)\in S_{i}^{\left(\alpha_{1}\right)}\cap S_{j}^{\left(\alpha_{2}\right)}$. 
In view of equation \eqref{eq:deltazeroset}, this implies $\xi\in\varepsilon\left(2^{n}/3,3\cdot2^{n}\right)\cap\beta\left(2^{k}/3,3\cdot2^{k}\right)$, so that in particular $\varepsilon=\beta$. Furthermore, we see $2^{k}/3<\left|\xi\right|<3\cdot2^{n}$, which yields $2^{n-k}>\frac{1}{9}>2^{-4}$. Analogously, we get $2^{n}/3<\left|\xi\right|<3\cdot2^{k}$ and thus $2^{n-k}<9<2^{4}$. Together, these considerations imply $\left|n-k\right|\leq3$. Furthermore, since $\left(\xi,\eta\right)\in S_{i}^{\left(\alpha_{1}\right)}\cap S_{j}^{\left(\alpha_{2}\right)}$, equation \eqref{eq:deltazeroset} also shows \[ \frac{\eta}{\xi}=\frac{\varepsilon\eta}{\varepsilon\xi}=\frac{\beta\eta}{\beta\xi}\in2^{n\left(\alpha_{1}-1\right)}\left(m-1,m+1\right)\cap2^{k\left(\alpha_{2}-1\right)}\left(\ell-1,\ell+1\right). \] Hence, we get the two inequalities \[ 2^{n\left(\alpha_{1}-1\right)}\left(m+1\right)>2^{k\left(\alpha_{2}-1\right)}\left(\ell-1\right)\qquad\text{ and }\qquad2^{n\left(\alpha_{1}-1\right)}\left(m-1\right)<2^{k\left(\alpha_{2}-1\right)}\left(\ell+1\right) \] and thus \[ \ell<\left(m+1\right)2^{n\alpha_{1}-k\alpha_{2}+k-n}+1\qquad\text{ and }\qquad\ell>\left(m-1\right)2^{n\alpha_{1}-k\alpha_{2}+k-n}-1. \] In other words, \[ \ell\in\left(\left(m-1\right)2^{n\alpha_{1}-k\alpha_{2}+k-n}-1,\left(m+1\right)2^{n\alpha_{1}-k\alpha_{2}+k-n}+1\right)\cap\mathbb{Z}=:s_{m}^{\left(n,k\right)}. \] But since any interval $I=\left(A,B\right)$ with $A\leq B$ satisfies $\left|I\cap\mathbb{Z}\right|\leq B-A+1$, the cardinality of $s_{m}^{\left(n,k\right)}$ can be estimated by \[ \begin{aligned}\left|s_{m}^{\left(n,k\right)}\right| & \leq\left(m+1\right)2^{n\alpha_{1}-k\alpha_{2}+k-n}+1-\left(m-1\right)2^{n\alpha_{1}-k\alpha_{2}+k-n}+1+1\\ & =3+2\cdot2^{n\alpha_{1}-k\alpha_{2}+k-n}\\ \left({\scriptstyle \text{since }\left|n-k\right|\leq3}\right) & \leq3+2\cdot2^{n\alpha_{1}-(n-3)\alpha_{2}+3}\\ & =3+2^{4}\cdot2^{n(\alpha_{1}-\alpha_{2})}\cdot2^{3\alpha_{2}}\\ \left({\scriptstyle \text{since }\alpha_{1}-\alpha_{2}\leq0\text{ and }\alpha_{2}\leq1}\right) & \leq3+2^{7}=131. \end{aligned} \] Thus, \[ L_{i}^{\left(0\right)}:=\left\{ j=\left(k,\ell,\beta,0\right)\in I_{0}^{\left(\alpha_{2}\right)}\left|S_{j}^{\left(\alpha_{2}\right)}\cap S_{i}^{\left(\alpha_{1}\right)}\neq\emptyset\right.\right\} \subset\bigcup_{t=n-3}^{n+3}\left(\left\{ t\right\} \times s_{m}^{\left(n,t\right)}\times\left\{ \pm1\right\} \times\left\{ 0\right\} \right), \] which is a finite set, with at most $7\cdot131\cdot2=1834$ elements. \medskip{} \textbf{Case 2}: We have $j=(k,\ell,\beta,1)\in I_{0}^{\left(\alpha_{2}\right)}$. Let $\left(\xi,\eta\right)\in S_{i}^{\left(\alpha_{1}\right)}\cap S_{j}^{\left(\alpha_{2}\right)}$. With similar arguments as in the previous case, this implies $\xi\in\varepsilon\left(2^{n}/3,3\cdot2^{n}\right)$, $\eta\in\beta\left(2^{k}/3,3\cdot2^{k}\right)$ and $\frac{\eta}{\xi}\in2^{n\left(\alpha_{1}-1\right)}\left(m-1,m+1\right)$, as well as $\frac{\xi}{\eta}\in2^{k\left(\alpha_{2}-1\right)}\left(\ell-1,\ell+1\right)$. Furthermore since $\left(\xi,\eta\right)\in S_{n,m,\varepsilon,0}^{\left(\alpha_{1}\right)}$ and $\left(\xi,\eta\right)\in S_{k,\ell,\beta,1}^{\left(\alpha_{2}\right)}$, we know from Lemma \ref{lem:AlphaShearletCoveringAuxiliary} that $\left|\eta\right|<3\left|\xi\right|$ and $\left|\xi\right|<3\left|\eta\right|$. Thus, $2^{k}/3<\left|\eta\right|<3\cdot\left|\xi\right|<3\cdot3\cdot2^{n}$ and hence $2^{k-n}<27<2^{5}$. 
Likewise, $2^{n}/3<\left|\xi\right|<3\cdot\left|\eta\right|<3\cdot3\cdot2^{k}$ and hence $2^{n-k}<2^{5}$, so that we get $\left|n-k\right|\leq4$. Now, we distinguish two subcases regarding $\left|\eta/\xi\right|$: \begin{enumerate} \item We have $\left|\eta/\xi\right|>1$. Because of $\left|m\right|\leq\left\lceil 2^{n\left(1-\alpha_{1}\right)}\right\rceil \leq1+2^{n\left(1-\alpha_{1}\right)}$, this implies \[ 1<\left|\frac{\eta}{\xi}\right|<2^{n\left(\alpha_{1}-1\right)}\left(\left|m\right|+1\right)\leq2^{n\left(\alpha_{1}-1\right)}\left(2^{n\left(1-\alpha_{1}\right)}+1+1\right)=1+2\cdot2^{n\left(\alpha_{1}-1\right)} \] and hence \[ \frac{1}{1+2\cdot2^{n\left(\alpha_{1}-1\right)}}<\left|\frac{\xi}{\eta}\right|<1. \] Furthermore, we know $\left|\xi/\eta\right|<2^{k\left(\alpha_{2}-1\right)}\left(\left|\ell\right|+1\right)$, so that we get \[ \frac{1}{1+2\cdot2^{n\left(\alpha_{1}-1\right)}}<\left|\frac{\xi}{\eta}\right|<2^{k\left(\alpha_{2}-1\right)}\left(\left|\ell\right|+1\right)\qquad\text{ and hence }\qquad\left|\ell\right|>\frac{2^{k\left(1-\alpha_{2}\right)}}{1+2\cdot2^{n\left(\alpha_{1}-1\right)}}-1. \] Thus, we have \[ \left|\ell\right|\in\mathbb{Z}\cap\left(\frac{2^{k\left(1-\alpha_{2}\right)}}{1+2\cdot2^{n\left(\alpha_{1}-1\right)}}-1,\left\lceil \smash{2^{k\left(1-\alpha_{2}\right)}}\right\rceil \right]\subset\mathbb{Z}\cap\left(\frac{2^{k\left(1-\alpha_{2}\right)}}{1+2\cdot2^{n\left(\alpha_{1}-1\right)}}-1,2^{k\left(1-\alpha_{2}\right)}+1\right)=:s^{\left(n,k\right)}, \] where as above \[ \begin{aligned}\left|s^{\left(n,k\right)}\right|\leq2^{k\left(1-\alpha_{2}\right)}+1-\frac{2^{k\left(1-\alpha_{2}\right)}}{1+2\cdot2^{n\left(\alpha_{1}-1\right)}}+1+1 & =3+2^{k\left(1-\alpha_{2}\right)}\left(1-\frac{1}{1+2\cdot2^{n\left(\alpha_{1}-1\right)}}\right)\\ & =3+2^{k\left(1-\alpha_{2}\right)}\frac{2\cdot2^{n\left(\alpha_{1}-1\right)}}{1+2\cdot2^{n\left(\alpha_{1}-1\right)}}\\ & \leq3+2\cdot2^{k\left(1-\alpha_{2}\right)-n\left(1-\alpha_{1}\right)}\\ \left({\scriptstyle \text{since }1-\alpha_{2}\geq0\text{ and }\left|n-k\right|\leq4}\right) & \leq3+2\cdot2^{\left(n+4\right)\left(1-\alpha_{2}\right)-n\left(1-\alpha_{1}\right)}\\ & =3+2\cdot2^{4\left(1-\alpha_{2}\right)}2^{n\left(\alpha_{1}-\alpha_{2}\right)}\\ \left({\scriptstyle \text{since }\alpha_{1}-\alpha_{2}\leq0\text{ and }\alpha_{2}\geq0}\right) & \leq3+2\cdot2^{4}=35. \end{aligned} \] Finally, note that $\left|\ell\right|\in s^{\left(n,k\right)}$ implies $\ell\in\pm s^{\left(n,k\right)}$, with $\left|\pm s^{\left(n,k\right)}\right|\leq70$. \item We have $\left|\eta/\xi\right|\leq1$. This yields $1\leq\left|\xi/\eta\right|<2^{k\left(\alpha_{2}-1\right)}\left(\left|\ell\right|+1\right)$ and hence $\left|\ell\right|>2^{k\left(1-\alpha_{2}\right)}-1$. Thus, we have \[ \left|\ell\right|\in\mathbb{Z}\cap\left(2^{k\left(1-\alpha_{2}\right)}-1,\left\lceil \smash{2^{k\left(1-\alpha_{2}\right)}}\right\rceil \right]\subset\mathbb{Z}\cap\left(2^{k\left(1-\alpha_{2}\right)}-1,2^{k\left(1-\alpha_{2}\right)}+1\right)=:\tilde{s}^{\left(n,k\right)}, \] where one easily sees $\left|\tilde{s}^{\left(n,k\right)}\right|\leq3$ and then $\ell\in\pm\tilde{s}^{\left(n,k\right)}$ with $\left|\pm\tilde{s}^{\left(n,k\right)}\right|\leq6$. 
\end{enumerate} All in all, we see \[ L_{i}^{\left(1\right)}:=\left\{ j=\left(k,\ell,\beta,1\right)\in I_{0}^{\left(\alpha_{2}\right)}\left|S_{j}^{\left(\alpha_{2}\right)}\cap S_{i}^{\left(\alpha_{1}\right)}\neq\emptyset\right.\right\} \subset\bigcup_{t=n-4}^{n+4}\left[\left\{ t\right\} \times\left(\left[\pm\smash{s^{\left(n,t\right)}}\right]\cup\left[\pm\smash{\tilde{s}^{\left(n,t\right)}}\right]\right)\times\left\{ \pm1\right\} \times\left\{ 1\right\} \right] \] and hence $\left|\smash{L_{i}^{(1)}}\right|\leq9\cdot\left(70+6\right)\cdot2=1368$. In total, Cases 1 and 2 show because of $L_{i}\subset L_{i}^{(0)}\cup L_{i}^{(1)}\cup\left\{ 0\right\} $ that $\left|L_{i}\right|\leq\left|\smash{L_{i}^{(0)}}\right|+\left|\smash{L_{i}^{(1)}}\right|+\left|\left\{ 0\right\} \right|\leq3203$ for all $i=\left(n,m,\varepsilon,0\right)\in I_{0}^{\left(\alpha_{1}\right)}$. \medskip{} But in case of $i=\left(n,m,\varepsilon,1\right)\in I_{0}^{\left(\alpha_{1}\right)}$, we get the same result. Indeed, if we set $\tilde{\gamma}:=1-\gamma$ for $\gamma\in\left\{ 0,1\right\} $, then \[ \begin{aligned}I_{0}^{\left(\alpha_{2}\right)}\cap L_{\left(n,m,\varepsilon,1\right)} & =\left\{ \left(k,\ell,\beta,\gamma\right)\in I_{0}^{\left(\alpha_{2}\right)}\left|S_{k,\ell,\beta,\gamma}^{\left(\alpha_{2}\right)}\cap S_{n,m,\varepsilon,1}^{\left(\alpha_{1}\right)}\neq\emptyset\right.\right\} \\ & =\left\{ \left(k,\ell,\beta,\gamma\right)\in I_{0}^{\left(\alpha_{2}\right)}\left|RS_{k,\ell,\beta,\tilde{\gamma}}^{\left(\alpha_{2}\right)}\cap RS_{n,m,\varepsilon,0}^{\left(\alpha_{1}\right)}\neq\emptyset\right.\right\} \\ & =\left\{ \left(k,\ell,\beta,\tilde{\gamma}\right)\in I_{0}^{\left(\alpha_{2}\right)}\left|S_{k,\ell,\beta,\gamma}^{\left(\alpha_{2}\right)}\cap S_{n,m,\varepsilon,0}^{\left(\alpha_{1}\right)}\neq\emptyset\right.\right\} \\ & =\left\{ \left(k,\ell,\beta,\tilde{\gamma}\right)\,\middle|\,\left(k,\ell,\beta,\gamma\right)\in I_{0}^{\left(\alpha_{2}\right)}\cap L_{\left(n,m,\varepsilon,0\right)}\right\} , \end{aligned} \] and thus $\left|\smash{I_{0}^{\left(\alpha_{2}\right)}}\cap L_{\left(n,m,\varepsilon,1\right)}\right|=\left|\smash{I_{0}^{\left(\alpha_{2}\right)}}\cap L_{\left(n,m,\varepsilon,0\right)}\right|\leq3202$, so that $\left|L_{\left(n,m,\varepsilon,1\right)}\right|\leq3203$. \medskip{} It remains to consider the case $i=0$. But for $\xi\in S_{0}^{\left(\alpha_{1}\right)}=\left(-1,1\right)^{2}$, we have $1+\left|\xi\right|\leq3$. Conversely, Lemma \ref{lem:AlphaShearletWeightIsModerate} shows $1+\left|\xi\right|\geq\frac{1}{3}\cdot w_{j}=2^{k}/3$ for all $\xi\in S_{j}^{\left(\alpha_{2}\right)}$ and all $j=\left(k,\ell,\beta,\gamma\right)\in I_{0}^{\left(\alpha_{2}\right)}$. Hence, $j\in L_{0}$ can only hold if $2^{k}/3\leq3$, i.e., if $k\leq3$. Since we also have $\left|\ell\right|\leq\left\lceil 2^{k\left(1-\alpha_{2}\right)}\right\rceil \leq2^{k}\leq2^{3}=8$, this implies \[ L_{0}\subset\left\{ 0\right\} \cup\left[\left\{ 0,1,2,3\right\} \times\left\{ -8,\dots,8\right\} \times\left\{ \pm1\right\} \times\left\{ 0,1\right\} \right] \] and hence $\left|L_{0}\right|\leq1+4\cdot17\cdot2\cdot2=273\leq3203$. \medskip{} In total, we have shown $\sup_{i\in I^{\left(\alpha_{1}\right)}}\left|L_{i}\right|\leq3203<\infty$, so that $\mathcal{S}^{\left(\alpha_{1}\right)}$ is weakly subordinate to $\mathcal{S}^{\left(\alpha_{2}\right)}$. As seen at the beginning of the proof, this suffices. 
\end{proof} Now that we have seen that $\mathcal{S}^{\left(\alpha_{1}\right)}$ is almost subordinate to $\mathcal{S}^{\left(\alpha_{2}\right)}$ for $\alpha_{1}\leq\alpha_{2}$, the theory from \cite{DecompositionEmbedding} is applicable. But the resulting conditions simplify greatly if, in addition to the coverings, the employed weights are also compatible in a certain sense. Precisely, for two coverings $\mathcal{Q}=\left(Q_{i}\right)_{i\in I}$ and $\mathcal{P}=\left(P_{j}\right)_{j\in J}$ and for a weight $w=\left(w_{i}\right)_{i\in I}$ on the index set of $\mathcal{Q}$, we say that $w$ is \textbf{relatively $\mathcal{P}$-moderate} if there is a constant $C>0$ with \[ w_{i}\leq C\cdot w_{\ell}\qquad\text{for all }i,\ell\in I\text{ with }Q_{i}\cap P_{j}\neq\emptyset\neq Q_{\ell}\cap P_{j}\text{ for some }j\in J. \] Likewise, the covering $\mathcal{Q}=\left(T_{i}Q_{i}'+b_{i}\right)_{i\in I}$ is called relatively $\mathcal{P}$-moderate if the weight $\left(\left|\det T_{i}\right|\right)_{i\in I}$ is relatively $\mathcal{P}$-moderate. Our next lemma shows that these two conditions are satisfied if $\mathcal{Q}$ and $\mathcal{P}$ are two $\alpha$-shearlet coverings. \begin{lem} \label{lem:AlphaShearletRelativelyModerate}Let $0\leq\alpha_{1}\leq\alpha_{2}\leq1$ and let $\mathcal{S}^{\left(\alpha_{1}\right)}$ and $\mathcal{S}^{\left(\alpha_{2}\right)}$ be the associated $\alpha$-shearlet coverings. Then the following hold: \begin{enumerate} \item $\mathcal{S}^{\left(\alpha_{1}\right)}$ is relatively $\mathcal{S}^{\left(\alpha_{2}\right)}$-moderate. \item For arbitrary $s\in\mathbb{R}$, the weight $w^{s}=\left(w_{i}^{s}\right)_{i\in I^{(\alpha_{1})}}$ with $w=\left(w_{i}\right)_{i\in I^{\left(\alpha_{1}\right)}}$ as in Definition \ref{def:AlphaShearletCovering} (considered as a weight for $\mathcal{S}^{(\alpha_{1})}$) is relatively $\mathcal{S}^{\left(\alpha_{2}\right)}$-moderate. More precisely, we have $39^{-\left|s\right|}\cdot w_{j}^{s}\leq w_{i}^{s}\leq39^{\left|s\right|}\cdot w_{j}^{s}$ for all $i\in I^{(\alpha_{1})}$ and $j\in I^{(\alpha_{2})}$ with $S_{i}^{(\alpha_{1})}\cap S_{j}^{(\alpha_{2})}\neq\emptyset$.\qedhere \end{enumerate} \end{lem} \begin{proof} It is not hard to see $\left|\det\smash{T_{i}^{\left(\alpha_{1}\right)}}\right|=w_{i}^{1+\alpha_{1}}$ for all $i\in I^{\left(\alpha_{1}\right)}$. Thus, the second claim implies the first one. To prove the second one, let $i\in I^{\left(\alpha_{1}\right)}$ and $j\in I^{\left(\alpha_{2}\right)}$ with $S_{i}^{\left(\alpha_{1}\right)}\cap S_{j}^{\left(\alpha_{2}\right)}\neq\emptyset$. Thus, there is some $\xi\in S_{i}^{\left(\alpha_{1}\right)}\cap S_{j}^{\left(\alpha_{2}\right)}$. In view of Lemma \ref{lem:AlphaShearletWeightIsModerate}, this implies\vspace{-0.1cm} \[ \frac{w_{j}}{39}\leq\frac{1+\left|\xi\right|}{13}\leq w_{i}\leq3\cdot\left(1+\left|\xi\right|\right)\leq39\cdot w_{j}, \] from which it easily follows that $39^{-\left|s\right|}\cdot w_{j}^{s}\leq w_{i}^{s}\leq39^{\left|s\right|}\cdot w_{j}^{s}$. This establishes the second part of the second claim of the lemma. But this easily implies that the weight $w^{s}$ is relatively $\mathcal{S}^{\left(\alpha_{2}\right)}$-moderate: Indeed, let $i,\ell\in I^{\left(\alpha_{1}\right)}$ be arbitrary with $S_{i}^{\left(\alpha_{1}\right)}\cap S_{j}^{\left(\alpha_{2}\right)}\neq\emptyset\neq S_{\ell}^{\left(\alpha_{1}\right)}\cap S_{j}^{\left(\alpha_{2}\right)}$ for some $j\in I^{\left(\alpha_{2}\right)}$.
This implies $w_{i}^{s}\leq39^{\left|s\right|}\cdot w_{j}^{s}\leq\left(39^{2}\right)^{\left|s\right|}\cdot w_{\ell}^{s}$, as desired. \end{proof} Now that we have established the strong compatibility of the $\alpha$-shearlet coverings $\mathcal{S}^{\left(\alpha_{1}\right)}$ and $\mathcal{S}^{\left(\alpha_{2}\right)}$ and of the associated weights, we can easily characterize the existence of embeddings between the $\alpha$-shearlet smoothness spaces. \begin{thm} \noindent \label{thm:EmbeddingBetweenAlphaShearlets}Let $\alpha_{1},\alpha_{2}\in\left[0,1\right]$ with $\alpha_{1}\leq\alpha_{2}$. For $s,r\in\mathbb{R}$ and $p_{1},p_{2},q_{1},q_{2}\in\left(0,\infty\right]$, the map \[ \mathscr{S}_{\alpha_{2},r}^{p_{1},q_{1}}\left(\mathbb{R}^{2}\right)\to\mathscr{S}_{\alpha_{1},s}^{p_{2},q_{2}}\left(\mathbb{R}^{2}\right),f\mapsto f \] is well-defined and bounded if and only if we have $p_{1}\leq p_{2}$ and \[ \begin{cases} r>s+\left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)+\left(\alpha_{2}-\alpha_{1}\right)\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}+\left(1-\alpha_{2}\right)\left(\frac{1}{q_{2}}-\frac{1}{q_{1}}\right), & \text{if }q_{2}<q_{1},\\ r\geq s+\left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)+\left(\alpha_{2}-\alpha_{1}\right)\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}, & \text{if }q_{2}\geq q_{1}. \end{cases} \] \noindent Likewise, the map\vspace{-0.05cm} \[ \mathscr{S}_{\alpha_{1},s}^{p_{1},q_{1}}\left(\mathbb{R}^{2}\right)\rightarrow\mathscr{S}_{\alpha_{2},r}^{p_{2},q_{2}}\left(\mathbb{R}^{2}\right),f\mapsto f \] is well-defined and bounded if and only if we have $p_{1}\leq p_{2}$ and \[ \begin{cases} s>r+\left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)+\left(\alpha_{2}-\alpha_{1}\right)\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}+\left(1-\alpha_{2}\right)\left(\frac{1}{q_{2}}-\frac{1}{q_{1}}\right), & \text{if }q_{2}<q_{1},\\ s\geq r+\left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)+\left(\alpha_{2}-\alpha_{1}\right)\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}, & \text{if }q_{2}\geq q_{1}. \end{cases} \] Here, we used the notations\vspace{-0.15cm} \[ p^{\triangledown}:=\min\left\{ p,p'\right\} ,\qquad\text{ and }\qquad\frac{1}{p^{\pm\triangle}}:=\min\left\{ \frac{1}{p},1-\frac{1}{p}\right\} , \] where the \textbf{conjugate exponent} $p'$ is defined as usual for $p\in\left[1,\infty\right]$ and as $p':=\infty$ for $p\in\left(0,1\right)$. \end{thm} \begin{proof} For the first part, we want to invoke part (4) of \cite[Theorem 7.2]{DecompositionEmbedding}, with $\mathcal{Q}=\mathcal{S}^{\left(\alpha_{2}\right)}=\left(\smash{T_{i}^{\left(\alpha_{2}\right)}}Q_{i}'\right)_{i\in I^{\left(\alpha_{2}\right)}}$ and $\mathcal{P}=\mathcal{S}^{\left(\alpha_{1}\right)}=\left(\smash{T_{i}^{\left(\alpha_{1}\right)}}Q_{i}'\right)_{i\in I^{\left(\alpha_{1}\right)}}$ and with $w=\left(w_{i}^{r}\right)_{i\in I^{\left(\alpha_{2}\right)}}$ and $v=\left(w_{i}^{s}\right)_{i\in I^{\left(\alpha_{1}\right)}}$. To this end, we first have to verify that $\mathcal{Q},\mathcal{P},w,v$ satisfy \cite[Assumption 7.1]{DecompositionEmbedding}. But we saw in Lemma \ref{lem:AlphaShearletWeightIsModerate} that $w$ and $v$ are $\mathcal{Q}$-moderate and $\mathcal{P}$-moderate, respectively.
Furthermore, $\mathcal{Q},\mathcal{P}$ are almost structured coverings (cf.\@ Lemma \ref{lem:AlphaShearletCoveringIsAlmostStructured}) and thus also \textbf{semi-structured coverings} (cf.\@ \cite[Definition 2.5]{DecompositionEmbedding}) of $\mathcal{O}=\mathcal{O}'=\mathbb{R}^{2}$. Moreover, since $\left\{ Q_{i}'\,\middle|\, i\in I^{\left(\alpha\right)}\right\} $ is a finite family of nonempty open sets (for arbitrary $\alpha\in\left[0,1\right]$), it is not hard to see that $\mathcal{S}^{\left(\alpha\right)}$ is an open covering of $\mathbb{R}^{2}$ and that there is some $\varepsilon>0$ and for each $i\in I^{\left(\alpha\right)}$ some $\eta_{i}\in\mathbb{R}^{2}$ with $B_{\varepsilon}\left(\eta_{i}\right)\subset Q_{i}'$. Thus, $\mathcal{S}^{\left(\alpha\right)}$ is a \textbf{tight}, open semi-structured covering of $\mathbb{R}^{2}$ for all $\alpha\in\left[0,1\right]$. Hence, so are $\mathcal{Q},\mathcal{P}$. Finally, \cite[Corollary 2.7]{DecompositionIntoSobolev} shows that if $\Phi=\left(\varphi_{i}\right)_{i\in I^{\left(\alpha_{2}\right)}}$ and $\Psi=\left(\psi_{j}\right)_{j\in I^{\left(\alpha_{1}\right)}}$ are regular partitions of unity for $\mathcal{Q},\mathcal{P}$, respectively, then $\Phi,\Psi$ are \textbf{$L^{p}$-BAPUs} (cf.\@ \cite[Definitions 3.5 and 3.6]{DecompositionEmbedding}) for $\mathcal{Q},\mathcal{P}$, simultaneously for all $p\in\left(0,\infty\right]$. Hence, \cite[Assumption 7.1]{DecompositionEmbedding} is satisfied. Next, Lemma \ref{lem:AlphaShearletCoveringSubordinateness} shows that $\mathcal{P}=\mathcal{S}^{\left(\alpha_{1}\right)}$ is almost subordinate to $\mathcal{Q}=\mathcal{S}^{\left(\alpha_{2}\right)}$ and Lemma \ref{lem:AlphaShearletRelativelyModerate} shows that $\mathcal{P}$ and $v$ are relatively $\mathcal{Q}$-moderate, so that all assumptions of \cite[Theorem 7.2, part (4)]{DecompositionEmbedding} are satisfied. Now, let us choose, for each $j\in I^{(\alpha_{2})}$, an arbitrary index $i_{j}\in I^{(\alpha_{1})}$ with $S_{i_{j}}^{(\alpha_{1})}\cap S_{j}^{(\alpha_{2})}\neq\emptyset$.
Then \cite[Theorem 7.2, part (4)]{DecompositionEmbedding} shows that the embedding $\mathscr{S}_{\alpha_{2},r}^{p_{1},q_{1}}\left(\mathbb{R}^{2}\right)\hookrightarrow\mathscr{S}_{\alpha_{1},s}^{p_{2},q_{2}}\left(\mathbb{R}^{2}\right)$ holds if and only if we have $p_{1}\leq p_{2}$ and if furthermore, the following expression (then a constant) is finite: \[ \begin{aligned}K & :=\left\Vert \left(\frac{w_{i_{j}}^{s}}{w_{j}^{r}}\cdot\left|\det T_{j}^{(\alpha_{2})}\right|^{\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}}\;\cdot\;\left|\det T_{i_{j}}^{(\alpha_{1})}\right|^{\frac{1}{p_{1}}-\frac{1}{p_{2}}-\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}}\right)_{j\in I_{0}^{(\alpha_{2})}}\right\Vert _{\ell^{q_{2}\cdot(q_{1}/q_{2})'}}\\ \left({\scriptstyle \text{Lemma }\ref{lem:AlphaShearletRelativelyModerate}}\right) & \asymp\left\Vert \left(\frac{2^{ks}}{2^{kr}}\,\cdot\,2^{k\left(1+\alpha_{2}\right)\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}}\;\cdot\;2^{k\left(1+\alpha_{1}\right)\left[\frac{1}{p_{1}}-\frac{1}{p_{2}}-\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}\right]}\right)_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}\right\Vert _{\ell^{q_{2}\cdot(q_{1}/q_{2})'}}\\ & =\left\Vert \left(\raisebox{-0.2cm}{\ensuremath{2^{k\left(\left(s-r\right)+\left(1+\alpha_{2}\right)\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}+\left(1+\alpha_{1}\right)\left[\frac{1}{p_{1}}-\frac{1}{p_{2}}-\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}\right]\right)}}}\right)_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}\right\Vert _{\ell^{q_{2}\cdot(q_{1}/q_{2})'}}\\ & =\left\Vert \left(\raisebox{-0.2cm}{\ensuremath{2^{k\left(s-r+\left(\alpha_{2}-\alpha_{1}\right)\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}+\left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)\right)}}}\right)_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}\right\Vert _{\ell^{q_{2}\cdot(q_{1}/q_{2})'}}. \end{aligned} \] Note that we only took the norm of the sequence with $j\in I_{0}^{\left(\alpha_{2}\right)}$, omitting the term for $j=0$, in contrast to the definition of $K$ in \cite[Theorem 7.2]{DecompositionEmbedding}. This is justified, since we are only interested in finiteness of the norm, for which the single (finite(!))\@ term for $j=0$ is irrelevant. Now, we distinguish two different cases regarding $q_{1}$ and $q_{2}$: \textbf{Case 1}: We have $q_{2}<q_{1}$. This implies $\varrho:=q_{2}\cdot\left(q_{1}/q_{2}\right)'<\infty$, cf.\@ \cite[Equation (4.3)]{DecompositionEmbedding}. For brevity, let us define $\Theta:=s-r+\left(\alpha_{2}-\alpha_{1}\right)\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}+\left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)$. Then, we get \[ \begin{aligned}K^{\varrho} & \asymp\left\Vert \left(2^{k\Theta}\right)_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}\right\Vert _{\ell^{\varrho}}^{\varrho}=\sum_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}2^{k\cdot\varrho\cdot\Theta}=\sum_{k=0}^{\infty}2^{k\cdot\varrho\cdot\Theta}\sum_{\left|\ell\right|\leq\left\lceil 2^{k\left(1-\alpha_{2}\right)}\right\rceil }\sum_{\beta\in\left\{ \pm1\right\} }\sum_{\gamma\in\left\{ 0,1\right\} }1\\ & =4\cdot\sum_{k=0}^{\infty}2^{k\cdot\varrho\cdot\Theta}\left(1+2\cdot\left\lceil \smash{2^{k\left(1-\alpha_{2}\right)}}\right\rceil \right)\asymp\sum_{k=0}^{\infty}2^{k\left(\varrho\cdot\Theta+1-\alpha_{2}\right)}. 
\end{aligned} \] Now, note from the remark to \cite[Lemma 4.8]{DecompositionEmbedding} that $\frac{1}{p\cdot\left(q/p\right)'}=\left(\frac{1}{p}-\frac{1}{q}\right)_{+}$ for arbitrary $p,q\in\left(0,\infty\right]$. Hence, in the present case, we have $\varrho^{-1}=\left(q_{2}^{-1}-q_{1}^{-1}\right)_{+}=q_{2}^{-1}-q_{1}^{-1}$. Therefore, we see that the last sum from above—and therefore $K$—is finite if and only if $\varrho\cdot\Theta+1-\alpha_{2}<0$. But this is equivalent to \[ s-r+\left(\alpha_{2}-\alpha_{1}\right)\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}+\left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)=\Theta\overset{!}{<}\left(\alpha_{2}-1\right)\cdot\left(q_{2}^{-1}-q_{1}^{-1}\right), \] from which it easily follows that the claimed equivalence from the first part of the theorem holds in case of $q_{2}<q_{1}$. \medskip{} \textbf{Case 2}: We have $q_{2}\geq q_{1}$. This implies $q_{2}\cdot\left(q_{1}/q_{2}\right)'=\infty$, cf.\@ \cite[Equation (4.3)]{DecompositionEmbedding}. Thus, with $\Theta$ as in the previous case, we have \[ K\asymp\sup_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}2^{k\Theta}, \] so that $K$ is finite if and only if $\Theta\leq0$, which is equivalent to \[ r\geq s+(\alpha_{2}-\alpha_{1})\left(\frac{1}{q_{2}}-\frac{1}{p_{1}^{\pm\triangle}}\right)_{+}+\left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right). \] As in the previous case, this shows for $q_{2}\geq q_{1}$ that the claimed equivalence from the first part of the theorem holds. \medskip{} For the second part of the theorem, we make use of part (4) of \cite[Theorem 7.4]{DecompositionEmbedding}, with $\mathcal{Q}=\mathcal{S}^{\left(\alpha_{1}\right)}=\left(\smash{T_{i}^{\left(\alpha_{1}\right)}}Q_{i}'\right)_{i\in I^{\left(\alpha_{1}\right)}}$ and $\mathcal{P}=\mathcal{S}^{\left(\alpha_{2}\right)}=\left(\smash{T_{i}^{\left(\alpha_{2}\right)}}Q_{i}'\right)_{i\in I^{\left(\alpha_{2}\right)}}$ and with $w=\left(w_{i}^{s}\right)_{i\in I^{\left(\alpha_{1}\right)}}$ and $v=\left(w_{i}^{r}\right)_{i\in I^{\left(\alpha_{2}\right)}}$. As above, one sees that the corresponding assumptions are fulfilled. Thus, \cite[Theorem 7.4, part (4)]{DecompositionEmbedding} shows that the embedding $\mathscr{S}_{\alpha_{1},s}^{p_{1},q_{1}}\left(\mathbb{R}^{2}\right)\hookrightarrow\mathscr{S}_{\alpha_{2},r}^{p_{2},q_{2}}\left(\mathbb{R}^{2}\right)$ holds if and only if we have $p_{1}\leq p_{2}$ and if furthermore the following expression (then a constant) is finite: \[ C:=\left\Vert \left(\frac{w_{j}^{r}}{w_{i_{j}}^{s}}\cdot\left|\det T_{j}^{(\alpha_{2})}\right|{}^{\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}}\cdot\left|\det T_{i_{j}}^{(\alpha_{1})}\right|{}^{\frac{1}{p_{1}}-\frac{1}{p_{2}}-\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}}\right)_{j\in I^{(\alpha_{2})}}\right\Vert _{\ell^{q_{2}\cdot(q_{1}/q_{2})'}}, \] where for each $j\in I^{(\alpha_{2})}$ an arbitrary index $i_{j}\in I^{(\alpha_{1})}$ with $S_{i_{j}}^{(\alpha_{1})}\cap S_{j}^{(\alpha_{2})}\neq\emptyset$ is chosen.
But in view of Lemma \ref{lem:AlphaShearletRelativelyModerate}, it is not hard to see that $C$ satisfies \[ \begin{aligned}C & \asymp\left\Vert \left(\frac{2^{kr}}{2^{ks}}\cdot2^{k\left(1+\alpha_{2}\right)\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}}\;\cdot\;2^{k\left(1+\alpha_{1}\right)\left[\frac{1}{p_{1}}-\frac{1}{p_{2}}-\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}\right]}\right)_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}\right\Vert _{\ell^{q_{2}\cdot(q_{1}/q_{2})'}}\\ & =\left\Vert \left(\raisebox{-0.2cm}{\ensuremath{2^{k\left(\left(1+\alpha_{1}\right)\left[\frac{1}{p_{1}}-\frac{1}{p_{2}}-\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}\right]+\left(1+\alpha_{2}\right)\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}-s+r\right)}}}\right)_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}\right\Vert _{\ell^{q_{2}\cdot(q_{1}/q_{2})'}}\\ & =\left\Vert \left(\raisebox{-0.2cm}{\ensuremath{2^{k\left(\left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)+\left(\alpha_{2}-\alpha_{1}\right)\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}-s+r\right)}}}\right)_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}\right\Vert _{\ell^{q_{2}\cdot(q_{1}/q_{2})'}}. \end{aligned} \] As above, we distinguish two cases regarding $q_{1}$ and $q_{2}$: \textbf{Case 1}: We have $q_{2}<q_{1}$, so that $\varrho:=q_{2}\cdot\left(q_{1}/q_{2}\right)'<\infty$. But setting \[ \Gamma:=\left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)+\left(\alpha_{2}-\alpha_{1}\right)\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}-s+r, \] we have \[ \begin{aligned}C^{\varrho} & \asymp\left\Vert \left(2^{k\Gamma}\right)_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}\right\Vert _{\ell^{\varrho}}^{\varrho}=\sum_{(k,\ell,\beta,\text{\ensuremath{\gamma}})\in I_{0}^{(\alpha_{2})}}2^{k\cdot\varrho\cdot\Gamma}\\ & =\sum_{k=0}^{\infty}2^{k\cdot\varrho\cdot\Gamma}\sum_{\left|\ell\right|\leq\left\lceil 2^{k\left(1-\alpha_{2}\right)}\right\rceil }\sum_{\beta\in\left\{ \pm1\right\} }\sum_{\gamma\in\left\{ 0,1\right\} }1\asymp\sum_{k=0}^{\infty}2^{k\left(\varrho\cdot\Gamma+1-\alpha_{2}\right)}. \end{aligned} \] As above, we have $\varrho^{-1}=\left(q_{2}^{-1}-q_{1}^{-1}\right)_{+}=q_{2}^{-1}-q_{1}^{-1}$ and we see that the last sum—and thus $C$—is finite if and only if we have $\varrho\cdot\Gamma+1-\alpha_{2}<0$, which is equivalent to \[ \left(1+\alpha_{1}\right)\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)+\left(\alpha_{2}-\alpha_{1}\right)\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}-s+r=\Gamma\overset{!}{<}\left(\alpha_{2}-1\right)\cdot\left(q_{2}^{-1}-q_{1}^{-1}\right). \] Based on this, it is not hard to see that the equivalence stated in the second part of the theorem is valid for $q_{2}<q_{1}$. \medskip{} \textbf{Case 2}: We have $q_{2}\geq q_{1}$, so that $q_{2}\cdot\left(q_{1}/q_{2}\right)'=\infty$. In this case, we have—with $\Gamma$ as above—that \[ C\asymp\sup_{(k,\ell,\beta,\gamma)\in I_{0}^{(\alpha_{2})}}2^{k\Gamma}, \] which is finite if and only if $\Gamma\leq0$, which is equivalent to \[ s\geq r+(1+\alpha_{1})\left(\frac{1}{p_{1}}-\frac{1}{p_{2}}\right)+(\alpha_{2}-\alpha_{1})\left(\frac{1}{p_{2}^{\triangledown}}-\frac{1}{q_{1}}\right)_{+}. \] This easily shows that the claimed equivalence from the second part of the theorem also holds for $q_{2}\geq q_{1}$. 
\end{proof} With Theorem \ref{thm:EmbeddingBetweenAlphaShearlets}, we have established the characterization of the general embedding from equation \eqref{eq:EmbeddingSectionGeneralEmbedding}. Our main application, however, was to determine under which conditions $\ell^{p}$-sparsity of $f$ with respect to $\alpha_{1}$-shearlet systems implies $\ell^{q}$-sparsity of $f$ with respect to $\alpha_{2}$-shearlet systems, \emph{if one has no additional information}. As discussed around equation \eqref{eq:EmbeddingSectionSparsityEmbedding}, this amounts to an embedding $\mathscr{S}_{\alpha_{1},\left(1+\alpha_{1}\right)\left(p^{-1}-2^{-1}\right)}^{p,p}\left(\mathbb{R}^{2}\right)\hookrightarrow\mathscr{S}_{\alpha_{2},\left(1+\alpha_{2}\right)\left(q^{-1}-2^{-1}\right)}^{q,q}\left(\mathbb{R}^{2}\right)$. Since we are only interested in \emph{nontrivial} sparsity, and since arbitrary $L^{2}$ functions have $\alpha$-shearlet coefficients in $\ell^{2}$, the only interesting case is $p,q\leq2$. This setting is considered in our next lemma: \begin{lem} Let $\alpha_{1},\alpha_{2}\in\left[0,1\right]$ with $\alpha_{1}\neq\alpha_{2}$, let $p,q\in\left(0,2\right]$ and let $\varepsilon\in\left[0,\infty\right)$. The embedding \[ \mathscr{S}_{\alpha_{1},\varepsilon+\left(1+\alpha_{1}\right)\left(p^{-1}-2^{-1}\right)}^{p,p}\left(\mathbb{R}^{2}\right)\hookrightarrow\mathscr{S}_{\alpha_{2},\left(1+\alpha_{2}\right)\left(q^{-1}-2^{-1}\right)}^{q,q}\left(\mathbb{R}^{2}\right) \] holds if and only if we have $p\leq q$ and $q\geq\left(\frac{1}{2}+\frac{\varepsilon}{\left|\alpha_{1}-\alpha_{2}\right|}\right)^{-1}$. \end{lem} \begin{rem*} The case $\varepsilon=0$ corresponds to the embedding which is considered in equation \eqref{eq:EmbeddingSectionSparsityEmbedding}. Here, the preceding lemma shows that the embedding can only hold if $q\geq2$. Since the $\alpha_{2}$-shearlet coefficients of every $L^{2}$ function are $\ell^{2}$-sparse, we see that $\ell^{p}$-sparsity with respect to $\alpha_{1}$-shearlets does not imply any nontrivial $\ell^{q}$-sparsity with respect to $\alpha_{2}$-shearlets for $\alpha_{1}\neq\alpha_{2}$, \emph{if no information other than the $\ell^{p}$-sparsity with respect to $\alpha_{1}$-shearlets is given}. But in conjunction with Theorem \ref{thm:AnalysisAndSynthesisSparsityAreEquivalent}, we see that if the $\alpha_{1}$-shearlet coefficients $\left(\left\langle f,\psi^{\left[\left(j,\ell,\iota\right),k\right]}\right\rangle _{L^{2}}\right)_{\left(j,\ell,\iota\right)\in I^{\left(\alpha_{1}\right)},k\in\mathbb{Z}^{2}}$ satisfy \begin{equation} \left\Vert \left(2^{\varepsilon j}\cdot\left\langle f,\psi^{\left[\left(j,\ell,\iota\right),k\right]}\right\rangle _{L^{2}}\right)_{\left(j,\ell,\iota\right)\in I^{\left(\alpha_{1}\right)},\,k\in\mathbb{Z}^{2}}\right\Vert _{\ell^{p}}<\infty\label{eq:SparsityTransferGoodCondition} \end{equation} for some $\varepsilon>0$, then one can derive $\ell^{q}$-sparsity with respect to $\alpha_{2}$-shearlets for $q\geq\max\left\{ p,\,\left(\frac{1}{2}+\frac{\varepsilon}{\left|\alpha_{1}-\alpha_{2}\right|}\right)^{-1}\right\} $. Observe that equation \eqref{eq:SparsityTransferGoodCondition} combines an $\ell^{p}$-estimate with decay of the coefficients in the scale parameter $j\in\mathbb{N}_{0}$. \end{rem*} \begin{proof} Theorem \ref{thm:EmbeddingBetweenAlphaShearlets} shows that the embedding can only hold if $p\leq q$.
Thus, we only need to show for $0<p\leq q\leq2$ that the stated embedding holds if and only if we have $q\geq\left(\frac{1}{2}+\frac{\varepsilon}{\left|\alpha_{1}-\alpha_{2}\right|}\right)^{-1}$. For brevity, let $s:=\varepsilon+\left(1+\alpha_{1}\right)\left(p^{-1}-2^{-1}\right)$ and $r:=\left(1+\alpha_{2}\right)\left(q^{-1}-2^{-1}\right)$. We start with a few auxiliary observations: Because of $p\leq q\leq2$, we have $q^{\triangledown}=\min\left\{ q,q'\right\} =q$ and $\frac{1}{p^{\pm\triangle}}=\min\left\{ \frac{1}{p},1-\frac{1}{p}\right\} =1-\frac{1}{p}$, as well as $\frac{1}{q^{\triangledown}}-\frac{1}{p}=\frac{1}{q}-\frac{1}{p}\leq0$ and $\frac{1}{p}+\frac{1}{q}\geq1$, so that $\frac{1}{q}-\frac{1}{p^{\pm\triangle}}=\frac{1}{q}-1+\frac{1}{p}\geq0$. Now, let us first consider the case $\alpha_{1}<\alpha_{2}$. Since we assume $p\leq q$, Theorem \ref{thm:EmbeddingBetweenAlphaShearlets} shows that the embedding holds if and only if \[ \begin{aligned} & s\overset{!}{\geq}r+(1+\alpha_{1})\left(\frac{1}{p}-\frac{1}{q}\right)+(\alpha_{2}-\alpha_{1})\left(\frac{1}{q^{\triangledown}}-\frac{1}{p}\right)_{+}\\ \Longleftrightarrow & \left(1+\alpha_{1}\right)\left(p^{-1}-2^{-1}\right)+\varepsilon\overset{!}{\geq}\left(1+\alpha_{2}\right)\left(q^{-1}-2^{-1}\right)+\left(1+\alpha_{1}\right)\left(p^{-1}-q^{-1}\right)\\ \Longleftrightarrow & \varepsilon\overset{!}{\geq}\left(1+\alpha_{2}\right)\left(q^{-1}-2^{-1}\right)+\left(1+\alpha_{1}\right)\left(2^{-1}-q^{-1}\right)=\left(\alpha_{2}-\alpha_{1}\right)\left(q^{-1}-2^{-1}\right)\\ \left({\scriptstyle \text{since }\alpha_{2}-\alpha_{1}>0}\right)\Longleftrightarrow & \frac{\varepsilon}{\alpha_{2}-\alpha_{1}}+\frac{1}{2}\overset{!}{\geq}\frac{1}{q}\\ \Longleftrightarrow & q\overset{!}{\geq}\left(\frac{1}{2}+\frac{\varepsilon}{\alpha_{2}-\alpha_{1}}\right)^{-1}=\left(\frac{1}{2}+\frac{\varepsilon}{\left|\alpha_{2}-\alpha_{1}\right|}\right)^{-1}. \end{aligned} \] Finally, we consider the case $\alpha_{1}>\alpha_{2}$. Again, since $p\leq q$, Theorem \ref{thm:EmbeddingBetweenAlphaShearlets} (with interchanged roles of $\alpha_{1},\alpha_{2}$ and $r,s$) shows that the desired embedding holds if and only if \[ \begin{aligned} & s\overset{!}{\geq}r+\left(1+\alpha_{2}\right)\left(\frac{1}{p}-\frac{1}{q}\right)+\left(\alpha_{1}-\alpha_{2}\right)\left(\frac{1}{q}-\frac{1}{p^{\pm\triangle}}\right)_{+}\\ \left({\scriptstyle \text{since }q^{-1}-1+p^{-1}\geq0}\right)\Longleftrightarrow & \left(1+\alpha_{1}\right)\left(\frac{1}{p}-\frac{1}{2}\right)+\varepsilon\overset{!}{\geq}\left(1+\alpha_{2}\right)\left(\frac{1}{q}-\frac{1}{2}\right)+\left(1+\alpha_{2}\right)\left(\frac{1}{p}-\frac{1}{q}\right)+\left(\alpha_{1}-\alpha_{2}\right)\left(\frac{1}{q}-1+\frac{1}{p}\right)\\ \Longleftrightarrow & \varepsilon\overset{!}{\geq}\left(1+\alpha_{2}\right)\left(p^{-1}-2^{-1}\right)+\left(1+\alpha_{1}\right)\left(2^{-1}-p^{-1}\right)+\left(\alpha_{1}-\alpha_{2}\right)\left(q^{-1}-1+p^{-1}\right)\\ \Longleftrightarrow & \varepsilon\overset{!}{\geq}\left(\alpha_{1}-\alpha_{2}\right)\left(2^{-1}-p^{-1}+q^{-1}-1+p^{-1}\right)=\left(\alpha_{1}-\alpha_{2}\right)\left(q^{-1}-2^{-1}\right)\\ \left({\scriptstyle \text{since }\alpha_{1}-\alpha_{2}>0}\right)\Longleftrightarrow & \frac{\varepsilon}{\alpha_{1}-\alpha_{2}}+\frac{1}{2}\overset{!}{\geq}\frac{1}{q}\\ \Longleftrightarrow & q\overset{!}{\geq}\left(\frac{1}{2}+\frac{\varepsilon}{\alpha_{1}-\alpha_{2}}\right)^{-1}=\left(\frac{1}{2}+\frac{\varepsilon}{\left|\alpha_{1}-\alpha_{2}\right|}\right)^{-1}. 
\end{aligned} \] This completes the proof. \end{proof} \section*{Acknowledgments} We would like to thank Gitta Kutyniok for pushing us to improve the statement and the proof of Lemma \ref{lem:MainShearletLemma} and thus also of Theorems \ref{thm:NicelySimplifiedAlphaShearletFrameConditions}, \ref{thm:ReallyNiceShearletAtomicDecompositionConditions} and Remark \ref{rem:CartoonApproximationConstantSimplification}. Without her positive insistence, the proof of Lemma \ref{lem:MainShearletLemma} would be about $5$ pages longer and Theorem \ref{thm:CartoonApproximationWithAlphaShearlets} concerning the approximation of $C^{2}$-cartoon-like functions with shearlets would require $\approx40$ vanishing moments and generators in $C_{c}^{M}\left(\mathbb{R}^{2}\right)$ with $M\approx150$, while our new improved conditions only require $7$ vanishing moments and generators in $C_{c}^{19}\left(\mathbb{R}^{2}\right)$, cf.\@ Remark \ref{rem:CartoonApproximationConstantSimplification}. FV would like to express warm thanks to Hartmut Führ for several fruitful discussions and suggestions related to the present paper, in particular for suggesting the title ``\emph{analysis vs.\@ synthesis sparsity for shearlets}'' which we adopted nearly unchanged. FV would also like to thank Philipp Petersen for useful discussions related to the topics in this paper and for suggesting some changes in the notation. Both authors would like to thank Jackie Ma for raising the question whether membership in shearlet smoothness spaces can also be characterized using compactly supported shearlets. We also thank Martin Schäfer for checking parts of the introduction related to the paper \cite{RoleOfAlphaScaling} for correctness. Both authors acknowledge support from the European Commission through DEDALE (contract no.\@ 665044) within the H2020 Framework Program. AP also acknowledges partial support by the Lichtenberg Professorship Grant of the Volkswagen Stiftung awarded to Christian Kuehn. \newpage{}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Let $h$ be a non-degenerate, symmetric and $\mathbb{C}$-bilinear form on~$\mathbb{C}^{2m}$. The Grassmannian variety~$M$ of totally isotropic $k$-dimensional subspaces in $\mathbb{C}^{2m}$ is a homogeneous space of a~$|2|$-graded parabolic geometry. We assume throughout the paper that $n:=m-k\ge k\ge2$. We will show that on~$M$ there is a complex of invariant dif\/ferential operators which we call the $k$-\textit{Dirac complex}. The main result of this article is (see Theorem~\ref{theorem formal exactness}) that the complex is formally exact (as explained above in the abstract) in the sense of~\cite{Sp}. This result is crucial for an application in \cite{S} where it is shown that the complex is exact with formal power series at any f\/ixed point and that it descends (as outlined in the recent series \cite{CSaI, CSaII,CSaIII}) to a resolution of the $k$-Dirac operator studied in Clif\/ford analysis (see~\cite{CSSS,SSSL}). As a~potential application of the resolution, there is an open problem of characterizing the domains of monogenicity, i.e., an open set~$\mathcal{U}$ is a domain of monogenicity if there is no open set~$\mathcal{U}'$ with~$\mathcal{U}\subsetneq\mathcal{U}'$ such that each monogenic function\footnote{A monogenic function is a null solution of the $k$-Dirac operator.} in~$\mathcal{U}$ extends to a monogenic function in $\mathcal{U}'$. Recall from \cite[Section~4]{H} that the Dolbeault resolution, together with some $L^2$ estimates, is crucial in a proof of the statement that any pseudoconvex domain is a domain of holomorphy. The $k$-Dirac complexes have singular inf\/initesimal character and so the BGG machinery introduced in~\cite{CaSS} is not available. However, we will show that each $k$-Dirac complex arises as the direct image of a relative BGG sequence (see \cite{CSo,CSoI} for a recent publication on this topic) and so this paper f\/its into the scheme of the Penrose transform (see \cite{BE,WW}). In particular, we will work here only in the setting of complex parabolic geometries. The machinery of the Penrose transform is a main tool used in \cite{B}. The main result of that article is a construction of families of locally exact complexes of invariant dif\/ferential operators on quaternionic manifolds. One of these quaternionic complexes can (see \cite{BAS,BS,CSS}) be identif\/ied with a resolution of the $k$-\textit{Cauchy--Fueter operator} which has been intensively studied in Clif\/ford analysis (see again \cite{CSSS,SSSL}). In order to prove the local exactness of this quaternionic complex, one uses the fact that an almost quaternionic structure is a $|1|$-graded parabolic geometry, together with the theory of constant coef\/f\/icient operators from~\cite{Na}. Unfortunately, the parabolic geometry on $M$ is $|2|$-graded and so there is a canonical 2-step f\/iltration of the tangent bundle of $M$ given by a bracket generating distribution. With such a~structure, it is more natural to work with weighted jets (see~\cite{Mo}) rather than with usual jets, and we use this concept also here, i.e., we prove the formal exactness of the $k$-Dirac complexes with respect to the weighted jets. Nevertheless, we will prove in~\cite{S} that the formal exactness of the $k$-Dirac complex (or more precisely the exactness of~(\ref{les of graded jets II}) for each $\ell+j\ge0$) is enough to conclude that it descends to a resolution of the $k$-Dirac operator.
We consider here only the even case $\mathbb{C}^{2m}$ as, due to the representation theory, the Penrose transform does not work in odd dimension $\mathbb{C}^{2m+1}$ and it seems that this case has to be treated by completely dif\/ferent methods. The assumption $n\ge k$ is (see \cite{CSSS}) called the stable range. This assumption is needed only in Proposition~\ref{thm direct images} where we compute direct images of sheaves that appear in the relative BGG sequences. Hence, it is reasonable to expect that (see also~\cite{K}) the machinery of the Penrose transform provides formally exact complexes also in the unstable range $n<k$. For the application in \cite{S}, we need to show that the $k$-Dirac complexes constructed in this paper give rise to complexes from \cite{TS} which live on the corresponding real parabolic geometries. This turns out to be rather easy since any linear $\mathrm{G}$-invariant operator is determined by a certain $\mathrm{P}$-equivariant homomorphism. As this correspondence works also in the smooth setting, passing from the holomorphic setting to the smooth setting is rather elementary. The abstract approach of the Penrose transform is not very helpful when one is interested in local formulae of dif\/ferential operators. Local formulae of the operators in the $k$-Dirac complexes can be found in~\cite{TS}. Notice that in this article we construct only one half of each complex from~\cite{TS}. This is due to the fact that the complex space of spinors decomposes into two irreducible $\mathfrak{so}(2n,\mathbb{C})$-sub-representations. The other half of each $k$-Dirac complex can (see Remark~\ref{remark PT for other Dirac}) be easily obtained by adapting results from this paper. Finally, let us mention a few more articles which deal with the $k$-Dirac complexes. The null solutions of the f\/irst operator in the $k$-Dirac complex were studied in \cite{TSI,TSII}. The singular Hasse graphs and the corresponding homomorphisms of generalized Verma modules were computed in~\cite{F}. \subsection*{Notation} \begin{itemize}\itemsep=0pt \item $M(n,k,\mathbb{C})$ matrices of size $n\times k$ with complex coef\/f\/icients; \item $M(n,\mathbb{C}):=M(n,n,\mathbb{C})$; \item $A(n,\mathbb{C}):=\{A\in M(n,\mathbb{C})\,|\, A^T=-A\}$; \item $1_n$ identity $n\times n$-matrix; \item $[v_1,\dots,v_\ell]$ linear span of vectors $v_1,\dots,v_\ell$. \end{itemize} \section{Preliminaries}\label{section Preliminaries} In Section \ref{section Preliminaries} we will review some well-known material. Namely, in Section~\ref{section review} we will summarize some theory of complex parabolic geometries. We will recall in Section~\ref{section wdo} the concept of weighted jets on f\/iltered manifolds and in Section~\ref{section ideal sheaf} the def\/inition of the normal bundle associated to an analytic subvariety and of the formal neighborhood. In Section~\ref{section PT} we will give a~short summary of the Penrose transform. See \cite{CS} for a thorough introduction to the theory of parabolic geometries. The concept of weighted jets was originally introduced in the smooth setting by Tohru Morimoto, see for example~\cite{Mo}. Sections~\ref{section ideal sheaf} and~\ref{section PT} were taken mostly from \cite{BE,WW}.
\subsection{Review of parabolic geometries}\label{section review} Let $\mathfrak{g}$ be a complex semi-simple Lie algebra, $\mathfrak{h}$ be a Cartan subalgebra, $\triangle$ be the associated set of roots, $\triangle^+$ be a set of positive roots and $\triangle^0=\{\alpha_1,\dots,\alpha_m\}$ be the associated set of (pairwise distinct) simple roots. We will denote by $\mathfrak{g}_\alpha$ the root space associated to $\alpha\in\triangle$ and we will write $\alpha>0$ if $\alpha\in\triangle^+$ and $\alpha<0$ if $-\alpha\in\triangle^+$. Given $\alpha\in\triangle$, there are unique integers $\lambda_1,\dots,\lambda_m$ such that $\alpha=\lambda_1\alpha_1+\dots+\lambda_m\alpha_m$. If $\Sigma\subset\triangle^0$, then the integer $ht_\Sigma(\alpha):=\sum\limits_{i\colon \alpha_i\in\Sigma}\lambda_i$ is called the $\Sigma$-\textit{height} of $\alpha$. We put $\mathfrak{g}_i:=\bigoplus\limits_{\alpha\colon ht_\Sigma(\alpha)=i}\mathfrak{g}_{\alpha}$ for $i\ne0$ and $\mathfrak{g}_0:=\mathfrak{h}\oplus\bigoplus\limits_{\alpha\colon ht_\Sigma(\alpha)=0}\mathfrak{g}_{\alpha}$. Then there is an integer $k\ge0$ such that $\mathfrak{g}_k\ne\{0\}$, $\mathfrak{g}_\ell=\{0\}$ whenever $|\ell|>k$ and \begin{gather}\label{k-gradation} \mathfrak{g}=\mathfrak{g}_{-k}\oplus\mathfrak{g}_{-k+1}\oplus\dots\oplus\mathfrak{g}_{k-1}\oplus\mathfrak{g}_k. \end{gather} The direct sum decomposition (\ref{k-gradation}) is the $|k|$-\textit{grading} on $\mathfrak{g}$ associated to $\Sigma$. Since $[\mathfrak{g}_\alpha,\mathfrak{g}_\beta]\subset\mathfrak{g}_{\alpha+\beta}$ for every $\alpha,\beta\in\triangle$, it follows that $[\mathfrak{g}_i,\mathfrak{g}_j]\subset\mathfrak{g}_{i+j}$ for each \smash{$i,j\in\mathbb{Z}$}. In particular, it follows that $\mathfrak{g}_0$ is a subalgebra and it can be shown that it is always reductive, i.e., $\mathfrak{g}_0=\mathfrak{g}_0^{ss}\oplus\mathfrak z(\mathfrak{g}_0)$ where $\mathfrak{g}_0^{ss}:=[\mathfrak{g}_0,\mathfrak{g}_0]$ is semi-simple and $\mathfrak z(\mathfrak{g}_0)$ is the center (see \cite[Corollary~2.1.6]{CS}). Moreover, each subspace $\mathfrak{g}_i$ is $\mathfrak{g}_0$-invariant and the Killing form of $\mathfrak{g}$ induces an isomorphism $\mathfrak{g}_{-i}\cong\mathfrak{g}_i^\ast$ of $\mathfrak{g}_0$-modules where $ ^\ast$ denotes the dual module. We put \begin{gather*} \mathfrak{g}^i:=\bigoplus_{j\ge i} \mathfrak{g}_j, \qquad \mathfrak{g}_-:=\mathfrak{g}_{-k}\oplus\dots\oplus\mathfrak{g}_{-1}, \qquad \mathfrak{p}_\Sigma:=\mathfrak{g}^0\qquad \mathrm{and}\qquad \mathfrak{p}_+:=\mathfrak{g}^1. \end{gather*} Then $\mathfrak{p}_\Sigma$ is the \textit{parabolic subalgebra} associated to the $|k|$-grading and $\mathfrak{p}_\Sigma=\mathfrak{g}_0\oplus\mathfrak{p}_+$ is known as the Levi decomposition (see \cite[Section~2.2]{BE}). This means that $\mathfrak{p}_+$ is the nilradical\footnote{Recall that the nilradical is a maximal nilpotent ideal and that it is unique.} and that~$\mathfrak{g}_0$ is a maximal reductive subalgebra called the \textit{Levi factor}. It is clear that each subspace~$\mathfrak{g}^i$ is $\mathfrak{p}_\Sigma$-invariant and that $\mathfrak{g}_-$ is a nilpotent subalgebra. Moreover, it can be shown that, as a~Lie algebra, $\mathfrak{g}_-$ is generated by~$\mathfrak{g}_{-1}$. The algebra $\mathfrak{b}:=\mathfrak{p}_{\triangle^0}$ is called the \textit{standard Borel subalgebra}. A subalgebra of $\mathfrak{g}$ is called \textit{standard parabolic} if it contains $\mathfrak{b}$; in particular, $\mathfrak{p}_\Sigma$ is such an algebra.
More generally, a~subalgebra of $\mathfrak{g}$ is called a \textit{Borel subalgebra} and a \textit{parabolic subalgebra} if it is conjugate to the standard Borel subalgebra and to a standard parabolic subalgebra, respectively. We will for brevity sometimes write~$\mathfrak{p}$ instead of~$\mathfrak{p}_\Sigma$. Let $s_i$ be the simple ref\/lection associated to $\alpha_i$, $i=1,2,\dots,m$, $W_\mathfrak{g}$ be the Weyl group of $\mathfrak{g}$ and $W_\mathfrak{p}$ be the subgroup of $W_\mathfrak{g}$ generated by $\{s_i\colon \alpha_i\not\in\Sigma\}$. Then $W_\mathfrak{p}$ is isomorphic to the Weyl group of $\mathfrak{g}_0^{ss}$ and the directed graph that encodes the Bruhat order on $W_\mathfrak{g}$ contains a subgraph called the \textit{Hasse diagram $W^\mathfrak{p}$ attached to $\mathfrak{p}$} (see \cite[Section 4.3]{BE}). The vertices of $W^\mathfrak{p}$ consist of those $w\in W_\mathfrak{g}$ such that $w.\lambda$ is $\mathfrak{p}$-dominant for any $\mathfrak{g}$-dominant weight $\lambda$ where the dot stands for the af\/f\/ine action, namely, $w.\lambda=w(\lambda+\rho)-\rho$ where $\rho$ is the lowest form. It turns out that each right coset of $W_\mathfrak{p}$ in $W_\mathfrak{g}$ contains a unique element from $W^\mathfrak{p}$ and it can be shown that this is the element of minimal length (see \cite[Lemma 4.3.3]{BE}). This identif\/ies $W^\mathfrak{p}$ with $W_\mathfrak{p}\backslash W_\mathfrak{g}$. We will also need the relative case. Assume that $\Sigma'\subset\triangle^0$ and put $\mathfrak{r}:=\mathfrak{p}_{\Sigma'}$. Then $\mathfrak{q}:=\mathfrak{r}\cap\mathfrak{p}=\mathfrak{p}_{\Sigma\cup\Sigma'}$ is a standard parabolic subalgebra and $\mathfrak l:=\mathfrak{g}_0^{ss}\cap\mathfrak{q}$ is a parabolic subalgebra of $\mathfrak{g}_0^{ss}$ (see \cite[Section 2.4]{BE}). The def\/inition of the Hasse diagram attached to $\mathfrak{p}$ applies also to the pair $(\mathfrak{g}_0^{ss},\mathfrak l)$, namely, an element $w\in W_\mathfrak{p}$ belongs (as in \cite[Section 4.4]{BE}) to the \textit{relative Hasse diagram} $W^\mathfrak{q}_\mathfrak{p}$ if it is the element of minimal length in its right coset of $W_\mathfrak{q}$ in $W_\mathfrak{p}$. Hence, $W^\mathfrak{q}_\mathfrak{p}$ is a subset of $W_\mathfrak{g}$ which can be naturally identif\/ied with $W_\mathfrak{q}\backslash W_\mathfrak{p}$. There is (up to isomorphism) a unique connected and simply connected complex Lie group~$\mathrm{G}$ with Lie algebra $\mathfrak{g}$. Assume that $\Sigma=\{\alpha_{i_1},\dots,\alpha_{i_j}\}$. Let $\omega_1,\dots,\omega_m$ be the fundamental weights associated to the simple roots and $\mathbb{V}$ be an irreducible $\mathfrak{g}$-module with highest weight $\lambda:=\omega_{i_1}+\dots+\omega_{i_j}$. Since any $\mathfrak{g}$-representation integrates to $\mathrm{G}$, $\mathbb{V}$ is also a $\mathrm{G}$-module. The action descends to the projective space $\mathbb P(\mathbb{V})$ and the stabilizer of the line spanned by a non-zero highest weight vector $v$ is the associated \textit{parabolic subgroup} $\mathrm{P}$. This is by def\/inition a closed subgroup of $\mathrm{G}$ and its Lie algebra is $\mathfrak{p}$. The homogeneous space $\mathrm{G}/\mathrm{P}$ is biholomorphic to the $\mathrm{G}$-orbit of $[v]\in\mathbb P(\mathbb{V})$ and since it is completely determined by~$\Sigma$, we denote it by crossing the simple roots from $\Sigma$ in the Dynkin diagram of $\mathfrak{g}$. We will for brevity put $M:=\mathrm{G}/\mathrm{P}$ and denote by $\textbf{p}\colon \mathrm{G}\rightarrow M$ the canonical projection.
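To illustrate this construction in the situation relevant for this paper (the following standard example is included only for orientation), take $\mathfrak{g}=\mathfrak{so}(2m,\mathbb{C})$ with the notation of Section \ref{section lie algebra} below and $\Sigma=\{\alpha_k\}$, where $2\le k\le m-2$ holds by the standing assumption $n\ge k\ge2$ of the introduction. Then $\lambda=\omega_k=\epsilon_1+\dots+\epsilon_k$ and one may take $\mathbb{V}=\Lambda^k\mathbb{C}^{2m}$ with highest weight vector $v=e_1\wedge\dots\wedge e_k$. The $\mathrm{G}$-orbit of $[v]$ consists of the lines $[u_1\wedge\dots\wedge u_k]$ for which $[u_1,\dots,u_k]$ is a totally isotropic $k$-dimensional subspace of $\mathbb{C}^{2m}$, and so $M=\mathrm{G}/\mathrm{P}$ is in this case the isotropic Grassmannian from the introduction.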
On $\mathrm{G}$ lives a tautological $\mathfrak{g}$-valued 1-form $\omega$ which is known as the \textit{Maurer--Cartan form}. This form is $\mathrm{P}$-equivariant in the sense that for each $p\in\mathrm{P}\colon (r^p)^\ast\omega=\operatorname{Ad}(p^{-1})\circ\omega$ where $\operatorname{Ad}$ is the adjoint representation and $r^p$ is the principal action of $p$. If $\mathbb{V}$ is a subspace of $\mathfrak{g}$ and $g\in\mathrm{G}$, then $\omega^{-1}_g(\mathbb{V})$ is a subspace of~$T_g\mathrm{G}$ and the disjoint union $\bigcup_{g\in\mathrm{G}}\omega_g^{-1}(\mathbb{V})$ determines a distribution on~$\mathrm{G}$ which we for brevity denote by $\omega^{-1}(\mathbb{V})$. Since $T\textbf{p}=T\textbf{p}\circ Tr^p$, it follows that $T\textbf{p}(\omega^{-1}(\mathbb{V}))$ is a well-def\/ined distribution on $M$ provided that $\mathbb{V}$ is $\mathrm{P}$-invariant. In particular, this applies to~$\mathfrak{g}^i$ and we put $F_{i}:=T\textbf{p}(\omega^{-1}(\mathfrak{g}^{i}))$. Since $\ker(T\textbf{p})=\omega^{-1}(\mathfrak{p})$, it follows that the f\/iltration $\mathfrak{g}^{-k}/\mathfrak{p}\supset\dots\supset\mathfrak{g}^{-1}/\mathfrak{p}\supset\mathfrak{g}^0/\mathfrak{p}$ gives a~f\/iltration $TM=F_{-k}\supset F_{-k+1}\supset\dots\supset F_{-1}\supset F_0=\{0\}$ of the tangent bundle $TM$ where~$\{0\}$ is the zero section. The \textit{graded tangent bundle} associated to the f\/iltration $\{F_{-i}\colon i=k,\dots,0\}$ is $gr(TM):=\bigoplus_{i=-k}^{-1}gr_i(TM)$ where $gr_i(TM):=F_{i}/F_{i+1}$. Since $M$ is the homogeneous model, we have the following: \begin{itemize}\itemsep=0pt \item the f\/iltration is compatible\footnote{Filtrations which satisfy this property are called \textit{regular}.} with the Lie bracket of vector f\/ields in the sense that the commutator of a section of $F_i$ and a section of $F_j$ is a section of $F_{i+j}$, \item the Lie bracket descends to a vector bundle map $\mathcal{L}\colon \Lambda^2gr(TM)\rightarrow gr(TM)$, called the \textit{Levi form}, which is homogeneous of degree zero and \item $(gr(T_xM),\mathcal{L}_x)$, $x\in M$ is a nilpotent Lie algebra isomorphic to $\mathfrak{g}_-$. \end{itemize} Hence, $(gr(TM),\mathcal{L})$ is a locally trivial bundle of nilpotent Lie algebras with typical f\/iber $\mathfrak{g}_-$ and it follows that $F_{-1}$ is a bracket generating distribution. We denote by $T^\ast M=\Lambda^{1,0}T^\ast M$ the vector bundle dual to $TM$, i.e., the f\/iber over $x\in M$ is the space of $\mathbb{C}$-linear maps $T_x M\rightarrow\mathbb{C}$. The f\/iltration of $TM$ induces a f\/iltration $T^\ast M=F_1\supset F_2\supset\dots\supset F_{k+1}=\{0\}$ where $F_{i}:=F_{-i+1}^\perp$ is the annihilator of $F_{-i+1}$. We put $gr_i(T^\ast M):= F_i/F_{i+1}$, $i=1,2,\dots,k$ so that $gr(T^\ast M)=\bigoplus_{i=1}^kgr_i(T^\ast M)$ is the associated graded vector bundle and $gr_i(T^\ast M)$ is isomorphic to the dual of~$gr_i(TM)$. \subsection{Weighted dif\/ferential operators}\label{section wdo} Let $M$ be the homogeneous space with the regular f\/iltration $\{F_{-j}\colon j=0,\dots,k\}$ as in Section~\ref{section review}. As $M$ is a complex manifold, $TM_\mathbb{C}:=TM\otimes\mathbb{C}=T^{1,0}M\oplus T^{0,1}M$ where the f\/irst and the second summand are the holomorphic and the anti-holomorphic part\footnote{The holomorphic and anti-holomorphic part is the $-i$ and the $+i$-eigenspace, respectively, for the canonical almost complex structure on~$TM_\mathbb{C}$.}, respectively.
As each vector bundle~$F_{-j}$ is a~holomorphic sub-bundle of $TM$, we have $F_{-j}\otimes\mathbb{C}=F^{1,0}_{-j}\oplus F^{0,1}_{-j}$ as above. Let $\mathcal{U}$ be an open subset of $M$ and $X$ be a holomorphic vector f\/ield on $\mathcal{U}$. The \textit{weighted order} $\mathfrak{wo}(X)$ of~$X$ is the smallest integer $j$ such that $X\in\Gamma(F_{-j}^{1,0}|_\mathcal{U})$. A dif\/ferential operator $D$ acting on the space $\mathcal{O}(\mathcal{U})$ of holomorphic functions on $\mathcal{U}$ is called a \textit{differential operator of weighted order at most} $r$ if for each $x\in \mathcal{U}$ there is an open neighborhood $\mathcal{U}_x$ of $x$ with a local framing\footnote{This means that the holomorphic vector f\/ields $X_1,\dots,X_p$ trivialize $T^{1,0}M$ over $\mathcal{U}_x$.} $\{X_1,\dots,X_p\}$ by holomorphic vector f\/ields such that \begin{gather*} D|_{\mathcal{U}_x}=\sum_{a\in\mathbb N_{0}^p} f_a X_1^{a_1}\cdots X_p^{a_p}, \end{gather*} where $\mathbb N_0^p:=\{a=(a_1,\dots,a_p)\colon a_i\in\mathbb{Z},\ a_i\ge0,\ i=1,\dots,p\}$, $f_a\in\mathcal{O}(\mathcal{U}_x)$ and for all $a$ in the sum with $f_a$ non-zero: $\sum\limits_{i=1}^pa_i\cdot\mathfrak{wo}(X_i)\le r$. We write $\mathfrak{wo}(D)\le r$. Let $\mathcal{O}_x$ be the space of germs of holomorphic functions at $x$. We denote by $\mathfrak{F}_x^{i}$ the space of those germs $f\in\mathcal{O}_x$ such that $Df(x)=0$ for every dif\/ferential operator $D$ which is def\/ined on an open neighborhood of $x$ and $\mathfrak{wo}(D)\le i$. We put $\mathfrak{J}^i_x:=\mathcal{O}_x/\mathfrak{F}_{x}^{i+1}$, denote by $\mathfrak{j}^i_xf\in\mathfrak{J}^i_x$ the class of $f$ and call it the $i$-\textit{th weighted jet of} $f$. Then the disjoint union $\mathfrak{J}^i:=\cup_{x\in M}\mathfrak{J}_x^i$ is naturally a~holomorphic vector bundle over~$M$, the canonical vector bundle map $\mathfrak{J}^i\xrightarrow{\pi_i}\mathfrak{J}^{i-1}$ has constant rank and thus, its kernel $\mathfrak{gr}^i$ is again a~holomorphic vector bundle with f\/iber $\mathfrak{gr}^i_x$ over $x$. Notice that for each integer $i\ge0$ there is a short exact sequence $0\rightarrow\mathfrak{F}^{i+1}_x\rightarrow\mathfrak{F}^{i}_x\rightarrow\mathfrak{gr}^{i+1}_x\rightarrow0$ of vector spaces. Assume that $V$ is a holomorphic vector bundle over $M$. We denote by $V^\ast$ the dual bundle, by $\langle-,-\rangle$ the canonical pairing between $V$ and $V^\ast$ and f\/inally, by $\mathcal{O}(V)_x$ the space of germs of holomorphic sections of $V$ at $x$. We def\/ine $\mathfrak{F}_{x}^iV$ as the space of germs $s\in\mathcal{O}(V)_x$ such that $\langle\lambda,s\rangle\in\mathfrak{F}_x^i$ for each $\lambda\in\mathcal{O}(V^\ast)_x$. We put $\mathfrak{J}^i_xV:=\mathcal{O}(V)_x/\mathfrak{F}_{x}^{i+1}V$, denote by $\mathfrak{j}^i_xs\in\mathfrak{J}^i_xV$ the equivalence class of $s$ and call it the $i$-\textit{th weighted jet of} $s$. Then the disjoint union $\mathfrak{J}^i V:=\bigcup_{x\in M}\mathfrak{J}^i_xV$ is naturally a holomorphic vector bundle over $M$, the canonical bundle map $\mathfrak{J}^iV\xrightarrow{\pi_i}\mathfrak{J}^{i-1}V$ has constant rank and thus, its kernel $\mathfrak{gr}^i V$ is again a holomorphic vector bundle and we denote by $\mathfrak{gr}^i_xV$ its f\/iber over $x$. As above, there is for each integer $i\ge0$ a short exact sequence $0\rightarrow\mathfrak{F}^{i+1}_x V\rightarrow\mathfrak{F}^{i}_xV\rightarrow\mathfrak{gr}^{i+1}_xV\rightarrow0$ and just as in the smooth case, there is a canonical linear isomorphism $\mathfrak{gr}^i_x\otimes V_x\rightarrow\mathfrak{gr}^i_xV$.
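To illustrate the dif\/ference between weighted and usual jets, suppose that the f\/iltration has depth two, i.e., $TM=F_{-2}\supset F_{-1}$, which is the case for the $|2|$-graded geometry studied later; the following observation is included only for the reader's convenience. A holomorphic vector f\/ield which is a section of $F^{1,0}_{-1}$ has weighted order one, while a generic holomorphic vector f\/ield has weighted order two. Hence, a germ $s\in\mathcal{O}(V)_x$ satisf\/ies $\mathfrak{j}^2_xs=0$ if and only if for each $\lambda\in\mathcal{O}(V^\ast)_x$ the function $f:=\langle\lambda,s\rangle$ satisf\/ies $f(x)=0$, $(Zf)(x)=0$ for every holomorphic vector f\/ield $Z$ def\/ined near $x$ and $(XYf)(x)=0$ for all sections $X,Y$ of $F^{1,0}_{-1}$. Schematically, \[ \mathfrak{gr}^2_xV\cong\left(S^2gr_1(T^\ast M)_x\oplus gr_2(T^\ast M)_x\right)\otimes V_x, \] i.e., a weighted $2$-jet records second derivatives along the distribution $F_{-1}$ but only f\/irst derivatives in the remaining directions.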
\begin{Remark}\label{remark usual jets} If the f\/iltration is trivial, i.e., $F_{-1}=TM$, then the concept of weighted jets agrees with that of usual jets. In this case we will use calligraphic letters instead of Gothic letters, i.e., we write $\mathcal F^i$ and $\mathcal J^i$ and $gr^i$ and $j^i_xf$ instead of $\mathfrak{F}^i$ and $\mathfrak{J}^i$ and $\mathfrak{gr}^i$ and $\mathfrak{j}^i_xf$, respectively. The vector bundle $gr^i$ is canonically isomorphic to the $i$-th symmetric power $S^iT^\ast M$. \end{Remark} Assume that there is a $\mathrm{P}$-module $\mathbb{V}$ such that $V$ is isomorphic to the $\mathrm{G}$-homogeneous vector bundle $\mathrm{G}\times_\mathrm{P}\mathbb{V}$. Let $e$ be the identity element of $\mathrm{G}$. Then we call the point $x_0:=e\mathrm{P}$ the origin of $M$ and we put \begin{gather} \label{notation weighted jets over origin} \mathfrak{J}^i\mathbb{V}:=\mathfrak{J}^i_{x_0}V \qquad \mathrm{and}\qquad \mathfrak{gr}^i\mathbb{V}:=\mathfrak{gr}^i_{x_0}V. \end{gather} There are linear isomorphisms \begin{gather}\label{isom of graded jets over origin} \mathfrak{gr}^r\mathbb{V}\cong\mathfrak{gr}^{r}_{x_0}\otimes\mathbb{V} \cong\bigoplus_{i_1+2i_2+\dots+ki_k=r}S^{i_1}\mathfrak{g}_1\otimes S^{i_2}\mathfrak{g}_2\otimes\dots \otimes S^{i_k}\mathfrak{g}_k\otimes\mathbb{V}. \end{gather} We will be interested in the sub-bundle $ S^i gr_1(T^\ast M)\otimes V$ of $\mathfrak{gr}^iV$. Notice that the f\/iber of this sub-bundle over $x\in M$ is $\{\mathfrak{j}^i_xf\colon f\in\mathcal{O}(V)_x,\ j^{i-1}_xf=0\}$, i.e., the vector space of all weighted $i$-th jets of germs of holomorphic sections at $x$ whose usual $(i-1)$-th jet vanishes. The f\/iber of this bundle over $x_0$ is isomorphic to $S^i\mathfrak{g}_1\otimes\mathbb{V}$ and we denote it for brevity by $gr^i\mathbb{V}$. Suppose that $\mathbb{W}$ is another $\mathrm{P}$-module and let $W:=\mathrm{G}\times_\mathrm{P}\mathbb{W}$ be the associated homogeneous vector bundle. We say that the \textit{weighted order} of a linear dif\/ferential operator $D\colon \Gamma(V)\rightarrow\Gamma(W)$ is at most $r$ if for each $x\in M$, $ s\in\mathcal{O}(V)_x\colon \mathfrak{j}^r_xs=0\Rightarrow Ds(x)=0$. It is well known (see~\cite{Mo}) that $D$ induces for each $i\ge0$ a vector bundle map $\mathfrak{gr}^{i}V\rightarrow\mathfrak{gr}^{i-r}W$ where we agree that $\mathfrak{gr}^\ell W=0$ if $\ell <0$. The restriction of this map to the f\/ibers over the origin is a linear map \begin{gather*} \mathfrak{gr} D\colon \ \mathfrak{gr}^{i}\mathbb{V}\rightarrow\mathfrak{gr}^{i-r}\mathbb{W}. \end{gather*} \subsection{Ideal sheaf of an analytic subvariety}\label{section ideal sheaf} Let us f\/irst recall some basics from the theory of sheaves (see for example~\cite{WW}). Suppose that~$\mathcal{F}$ and~$\mathcal{G}$ are sheaves on topological spaces $X$ and $Y$, respectively, and that $\iota\colon X\rightarrow Y$ is a~continuous map. We denote by~$\mathcal{F}_x$ the stalk of $\mathcal{F}$ at $x\in X$ and by $\mathcal{F}(\mathcal{U})$ or by $\Gamma(\mathcal{U},\mathcal{F})$ the space of sections of $\mathcal{F}$ over an open set $\mathcal{U}$. Then the pullback sheaf $\iota^{-1}\mathcal{G}$ is a sheaf on $X$ and the direct image $\iota_\ast\mathcal{F}$ is a sheaf on~$Y$. The $q$-th direct image $\iota^q_\ast\mathcal{F}$ is a sheaf on $Y$; it is def\/ined as the sheaf\/if\/ication of the pre-sheaf $\mathcal{V}\mapsto H^q(\iota^{-1}(\mathcal{V}),\mathcal{F})$ where $\mathcal{V}$ is open in~$Y$.
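For instance, if $Y$ is a single point, then the $q$-th direct image $\iota^q_\ast\mathcal{F}$ is simply the cohomology $H^q(X,\mathcal{F})$, viewed as a sheaf over the point. Heuristically, $\iota^q_\ast\mathcal{F}$ thus collects the $q$-th cohomologies of $\mathcal{F}$ along the f\/ibers of $\iota$; this is the mechanism behind the computation of direct images in the Penrose transform, cf.\@ Section \ref{section PT}.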
Suppose now that $X$ and $Y$ are complex manifolds with structure sheaves of holomorphic functions $\mathcal{O}_X$ and $\mathcal{O}_Y$, respectively, that $\iota$ is holomorphic and that $\mathcal{G}$ is a sheaf of $\mathcal{O}_Y$-modules. Then $\iota^{-1}\mathcal{G}$ is in general not a sheaf of $\mathcal{O}_X$-modules. To f\/ix this problem, we use the natural morphism of sheaves of rings $\iota^{-1}\mathcal{O}_Y\rightarrow\mathcal{O}_X$ and def\/ine a new sheaf $\iota^\ast\mathcal{G}:=\mathcal{O}_X\otimes_{\iota^{-1}\mathcal{O}_Y}\iota^{-1}\mathcal{G}$. Then $\iota^\ast\mathcal{G}$ is by construction a sheaf of $\mathcal{O}_X$-modules. Now we can continue with the def\/inition of the ideal sheaf. Suppose now that the holomorphic map $\iota$ is an embedding. The restriction $TY|_X$ contains the tangent bundle $TX$ of $X$. The normal bundle $N_X$ of $X$ in $Y$ is simply the quotient bundle, i.e., it f\/its into the short exact sequence $0\rightarrow TX\rightarrow TY|_X\rightarrow N_X\rightarrow0$ of holomorphic vector bundles. Dually, the co-normal bundle $N^\ast$ f\/its into the short exact sequence $0\rightarrow N^\ast \rightarrow T^\ast Y|_X\rightarrow T^\ast X\rightarrow0$. The structure sheaf $\mathcal{O}_Y$ contains a sub-sheaf called the \textit{ideal sheaf} $\mathcal{I}_X$. If $\mathcal{V}$ is an open subset of $Y$, then $\mathcal{I}_X(\mathcal{V})=\{f\in\mathcal{O}_Y(\mathcal{V})\colon f=0$ on $\mathcal{V}\cap X\}$. Notice that $\mathcal{I}_X(\mathcal{V})$ is an ideal in the ring~$\mathcal{O}_Y(\mathcal{V})$ and hence, for each positive integer $i$ there is the sheaf $\mathcal{I}^i_X$ whose space of sections over $\mathcal{V}$ is $(\mathcal{I}_X(\mathcal{V}))^i$. Then there are short exact sequences of sheaves \begin{gather*} 0\rightarrow\mathcal{I}_X\rightarrow\mathcal{O}_Y\rightarrow \iota_\ast\mathcal{O}_X\rightarrow0 \end{gather*} and \begin{gather*} 0\rightarrow\mathcal{I}^{i+1}_X\rightarrow\mathcal{I}^i_X\rightarrow\iota_\ast \mathcal{O}(S^i N^\ast)\rightarrow0, \end{gather*} where $S^iN^\ast$ is the $i$-th symmetric power of $N^\ast$ and we agree that $\mathcal{I}^0_X=\mathcal{O}_Y$. We put $\mathcal{F}^i_X:=\iota^{-1}\mathcal{I}_X^i$. As $\iota^{-1}$ is an exact functor, we get short exact sequences \begin{gather*} 0\rightarrow\mathcal{F}_{X}\rightarrow\iota^{-1}\mathcal{O}_Y\rightarrow \mathcal{O}_X\rightarrow0\label{ses with pullback ideal sheaf} \end{gather*} and \begin{gather*} 0\rightarrow\mathcal{F}^{i+1}_X\rightarrow\mathcal{F}^i_X\rightarrow\mathcal{O}(S^i N^\ast)\rightarrow0 \end{gather*} of sheaves on $X$. Here we use that the adjunction morphism $\iota^{-1}\iota_\ast\mathcal{F}\rightarrow\mathcal{F}$ is an isomorphism when $\mathcal{F}=\mathcal{O}_X$ or $\mathcal{O}(S^iN^\ast)$. Put $\mathcal{O}_X^{(i)}:=\mathcal{O}_Y/\mathcal{I}^{i+1}_X$. The pair $(X,\mathcal{O}^{(i)}_X)$ is called the $i$-\textit{th formal neighborhood of $X$ in} $Y$. Then $\iota^{-1}\mathcal{O}_X^{(i)}\cong\iota^{-1}\mathcal{O}_Y/\mathcal{F}^{i+1}_X$ and since the support of $\mathcal{O}_X^{(i)}$ is contained in $X$, the sheaf $\iota^{-1}\mathcal{O}_X^{(i)}$ contains basically the same information as the sheaf $\mathcal{O}_X^{(i)}$. These sheaves will be crucial in this article. \begin{Remark} The stalk of $\mathcal{F}^i_X$ at $x\in X$ is equal to $\{f\in\mathcal{F}_x\colon j^i_xf=0\}$. Hence, if $X=x$ is a~point, the stalk of $\mathcal{F}^i_X$ at $x$ is $\{f\in(\mathcal{O}_Y)_x\colon j^i_xf=0\}$.
Since any sheaf over a point is completely determined by its stalk, there is no risk of confusion with the notation set in Remark~\ref{remark usual jets}. \end{Remark} \subsection{The Penrose transform}\label{section PT} Let us f\/irst set notation. Suppose that $\lambda\in\mathfrak{h}^\ast$ is a $\mathfrak{g}$-integral and $\mathfrak{p}$-dominant weight. Then there is (see \cite[Remark 3.1.6]{BE}) an irreducible $\mathrm{P}$-module $\mathbb{V}_\lambda$ with lowest weight $-\lambda$. We denote by $V_\lambda:=\mathrm{G}\times_\mathrm{P}\mathbb{V}_\lambda$ the induced vector bundle and by $\mathcal{O}_\mathfrak{p}(\lambda)$ the associated sheaf of holomorphic sections. Suppose that $\mathfrak{p}$, $\mathfrak{r}$ are standard parabolic subalgebras. Then $\mathfrak{q}:=\mathfrak{r}\cap\mathfrak{p}$ is also a standard parabolic subalgebra and we denote by $\mathrm{P}$ and $\mathrm{R}$ and $\mathrm{Q}$ the associated parabolic subgroups with Lie algebras $\mathfrak{p}$ and $\mathfrak{r}$ and $\mathfrak{q}$, respectively, as explained in Section \ref{section review}. Then $\mathrm{Q}=\mathrm{R}\cap\mathrm{P}$ and there is a double f\/ibration diagram \begin{gather*} \label{double fibration diagram} \xymatrix{&\ar[dl]^\eta \mathrm{G}/\mathrm{Q}\ar[dr]^\tau&\\ \mathrm{G}/\mathrm{R}&&\mathrm{G}/\mathrm{P},} \end{gather*} where $\eta$ and $\tau$ are the canonical projections. The space $\mathrm{G}/\mathrm{R}$ is called the \textit{twistor space~$TS$} and $\mathrm{G}/\mathrm{Q}$ the \textit{correspondence space~$CS$}. Such a diagram is a starting point for the Penrose transform. Next we need to f\/ix an $\mathfrak{r}$-dominant and integral weight $\lambda\in\mathfrak{h}^\ast$. Then there is a relative BGG sequence $\blacktriangle^\ast(\lambda)$, which is an exact sequence of sheaves of holomorphic sections of associated vector bundles over $CS$ with linear $\mathrm{G}$-invariant dif\/ferential operators as dif\/ferentials, such that $\eta^{-1}\mathcal{O}_\mathfrak{r}(\lambda)$ is the kernel sheaf of the f\/irst operator in the sequence. In other words, there is a long exact sequence of sheaves \begin{gather*} 0\rightarrow\eta^{-1}\mathcal{O}_\mathfrak{r}(\lambda)\rightarrow\blacktriangle^\ast(\lambda). \end{gather*} The upshot of this is that although the pullback sheaf $\eta^{-1}\mathcal{O}_\mathfrak{r}(\lambda)$ is not a sheaf of holomorphic sections of an associated vector bundle over $CS$, it is naturally a sub-sheaf of $\mathcal{O}_\mathfrak{q}(\lambda)$ which is cut out by an invariant dif\/ferential equation. Moreover, the graph of the relative BGG sequence is (see \cite[Section~8.7]{BE}) completely determined by the $W^\mathfrak{q}_\mathfrak{r}$-orbit of~$\lambda$. Then we push down the relative BGG sequence by the direct image functor $\tau_\ast$. Computing higher direct images of sheaves in the relative BGG sequence is completely algorithmic and algebraic (see \cite[Section~5.3]{BE}). On the other hand, there is no general algorithm which computes direct images of dif\/ferential operators and it seems that this has to be treated in each case separately. Nevertheless, in this way one obtains a complex of operators on~$\mathrm{G}/\mathrm{P}$. \section{Lie theory}\label{section LT} In Section \ref{section LT} we will provide the algebraic background which is needed in the construction of the $k$-Dirac complexes via the Penrose transform. We will work with complex parabolic geometries which are associated to gradings on the simple Lie algebra $\mathfrak{g}=\mathfrak{so}(2m,\mathbb{C})$.
Section \ref{section LT} is organized as follows: in Section \ref{section lie algebra} we will set notation and study the gradings on $\mathfrak{g}$. In Section \ref{section the relative Weyl group} we will compute the relative Hasse diagram $W^\mathfrak{q}_\mathfrak{r}$.

\subsection{Lie algebra $\mathfrak{g}$ and parabolic subalgebras}\label{section lie algebra}

Let\label{standard basis} $\{e_1,\dots,e_m,e^\ast_{k+1},\dots,e^\ast_m,e_1^\ast,\dots,e_k^\ast\}$ be the standard basis of $\mathbb{C}^{2m}$, $\delta$ be the Kronecker delta and $h$ be the complex bilinear form that satisf\/ies $h(e_i,e^\ast_j)=\delta_{ij}$, $h(e_i,e_j)=h(e^\ast_i,e^\ast_j)=0$ for all $i,j=1,\dots,m$. A matrix belongs to the associated Lie algebra $\mathfrak{g}:=\mathfrak{so}(h)\cong\mathfrak{so}(2m,\mathbb{C})$ if and only if it is of the form
\begin{gather}\label{lie algebra g}
\left(
\begin{matrix}
A&Z_1&Z_2&W\\
X_1&B&D&-Z_2^T\\
X_2&C&-B^T&-Z_1^T\\
Y&-X_2^T&-X_1^T&-A^T
\end{matrix}
\right),
\end{gather}
where $A\in M(k,\mathbb{C})$, $B\in M(n,\mathbb{C})$, $C,D\in A(n,\mathbb{C})$, $X_1,X_2,Z_1^T,Z_2^T\in M(n,k,\mathbb{C})$, $W,Y\in A(k,\mathbb{C})$.

The subspace $\mathfrak{h}$ of diagonal matrices is a Cartan subalgebra of $\mathfrak{g}$. We denote by $\epsilon_i$ the linear form on $\mathfrak{h}$ def\/ined by $\epsilon_i\colon H=(h_{kl})\mapsto h_{ii}$. Then $\{\epsilon_1,\dots,\epsilon_m\}$ is a basis of $\mathfrak{h}^\ast$ and $\triangle=\{\pm\epsilon_i\pm\epsilon_j\colon 1\le i<j\le m\}$. If we choose $\triangle^+=\{\epsilon_i\pm\epsilon_j\colon 1\le i<j\le m\}$ as positive roots, then the simple roots are $\alpha_i:=\epsilon_i-\epsilon_{i+1}$, $i=1,\dots,m-1$ and $\alpha_m:=\epsilon_{m-1}+\epsilon_m$. The associated fundamental weights are $\omega_i=\epsilon_1+\dots+\epsilon_i$, $i=1,\dots,m-2$, $ \omega_{m-1}=\frac{1}{2}(\epsilon_1+\dots+\epsilon_{m-1}-\epsilon_m)$ and $\omega_m=\frac{1}{2}(\epsilon_1+\dots+\epsilon_m)$. The \textit{lowest form} $\rho$ is equal to $\frac{1}{2}\sum\limits_{\alpha\in\triangle^+}\alpha=\omega_1+\dots+\omega_m=(m-1,\dots,1,0)$. If $\lambda=\sum\limits_{i=1}^m\lambda_i\epsilon_i$ where $\lambda_i\in\mathbb{C}$, then we will also write $\lambda=(\lambda_1,\dots,\lambda_m)$. The simple ref\/lection $s_i\in W_\mathfrak{g}$ associated to $\alpha_i$ acts on $\mathfrak{h}^\ast$ by
\begin{gather}\label{simple reflections}
s_i(\lambda)=(\lambda_1,\dots,\lambda_{i-1},\lambda_{i+1},\lambda_{i},\lambda_{i+2},\dots,\lambda_m),\qquad i=1,\dots,m-1
\end{gather}
and
\begin{gather*}
s_m(\lambda)=(\lambda_1,\dots,\lambda_{m-2},-\lambda_m,-\lambda_{m-1}).
\end{gather*}

We will be interested in the double f\/ibration diagram
\begin{gather}\label{double fibration diagram I}
\begin{matrix}
{\dynkin\root{}\link\dots\link\noroot{}\link\dots\link\root{}\norootupright{} \rootdownright{}\enddynkin}\\
\swarrow\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \searrow\\
\\
{\dynkin \root{}\link\dots\link\root{}\norootupright{}\rootdownright{}\enddynkin}\ \ \ \ \ \ \ \ \ \ {\dynkin \root{}\link\dots\link\noroot{}\link\dots\link\root{}\rootupright{}\rootdownright{}\enddynkin}
\end{matrix}
\end{gather}
where, going from left to right, the sets of crossed simple roots are $\{\alpha_m\}$, $\{\alpha_k,\alpha_m\}$ and $\{\alpha_k\}$ and the associated gradings are
\begin{gather*}
\mathfrak{r}_{-1}\oplus\mathfrak{r}_0\oplus\mathfrak{r}_1, \qquad \mathfrak{q}_{-3}\oplus\mathfrak{q}_{-2}\oplus\dots\oplus\mathfrak{q}_3\qquad \mathrm{and}\qquad \mathfrak{g}_{-2}\oplus\dots\oplus\mathfrak{g}_2,
\end{gather*}
respectively.
With respect to the block decomposition from (\ref{lie algebra g}), we have\footnote{Here we mean that $\mathfrak{q}_0$ is the subspace of block diagonal matrices, $\mathfrak{q}_1$ is the subspace of those block matrices where only the matrices $Z_1$, $D$ are non-zero, etc.}
\begin{gather*}
\hspace*{17mm}\left(
\begin{matrix}
\mathfrak{q}_0&\mathfrak{q}_1&\mathfrak{q}_2&\mathfrak{q}_3\\
\mathfrak{q}_{-1}&\mathfrak{q}_0&\mathfrak{q}_1&\mathfrak{q}_2\\
\mathfrak{q}_{-2}&\mathfrak{q}_{-1}&\mathfrak{q}_0&\mathfrak{q}_1\\
\mathfrak{q}_{-3}&\mathfrak{q}_{-2}&\mathfrak{q}_{-1}&\mathfrak{q}_0
\end{matrix}
\right) \\
\hspace*{25mm} \swarrow\ \ \ \ \ \ \ \ \ \ \ \ \ \searrow \nonumber\\
\left(
\begin{matrix}
\mathfrak{r}_0&\mathfrak{r}_0&\mathfrak{r}_1&\mathfrak{r}_1\\
\mathfrak{r}_0&\mathfrak{r}_0&\mathfrak{r}_1&\mathfrak{r}_1\\
\mathfrak{r}_{-1}&\mathfrak{r}_{-1}&\mathfrak{r}_0&\mathfrak{r}_0\\
\mathfrak{r}_{-1}&\mathfrak{r}_{-1}&\mathfrak{r}_0&\mathfrak{r}_0\\
\end{matrix}
\right)\ \ \ \left(
\begin{matrix}
\mathfrak{g}_0&\mathfrak{g}_1&\mathfrak{g}_1&\mathfrak{g}_2\\
\mathfrak{g}_{-1}&\mathfrak{g}_0&\mathfrak{g}_0&\mathfrak{g}_1\\
\mathfrak{g}_{-1}&\mathfrak{g}_0&\mathfrak{g}_0&\mathfrak{g}_1\\
\mathfrak{g}_{-2}&\mathfrak{g}_{-1}&\mathfrak{g}_{-1}&\mathfrak{g}_0\\
\end{matrix}
\right).
\end{gather*}
The associated standard parabolic subalgebras are
\begin{gather*}
\mathfrak{r}=\mathfrak{r}_0\oplus\mathfrak{r}_1, \qquad \mathfrak{q}=\mathfrak{q}_0\oplus\mathfrak{q}_1\oplus\mathfrak{q}_2\oplus\mathfrak{q}_3\qquad \mathrm{and} \qquad \mathfrak{p}=\mathfrak{g}_0\oplus\mathfrak{g}_1\oplus\mathfrak{g}_2,
\end{gather*}
respectively, and we have the following isomorphisms
\begin{gather*}
\mathfrak{r}_0\cong M(m,\mathbb{C}), \qquad \mathfrak{q}_0\cong M(k,\mathbb{C})\oplus M(n,\mathbb{C})\qquad \mathrm{and} \qquad \mathfrak{g}_0\cong M(k,\mathbb{C})\oplus\mathfrak{so}(2n,\mathbb{C}).
\end{gather*}
For brevity, we put
\begin{gather}
\mathbb{C}^k:=[e_1,\dots,e_k], \qquad \mathbb{C}^{k\ast}:=[e_1^\ast,\dots,e_k^\ast],\nonumber\\
\mathbb{C}^n:=[e_{k+1},\dots,e_m],\qquad \mathbb{C}^{n\ast}:=[e_{k+1}^\ast,\dots,e_m^\ast],\nonumber\\
\mathbb{C}^{2n}:=\mathbb{C}^n\oplus\mathbb{C}^{n\ast}\qquad \mathrm{and}\qquad \mathbb{C}^m:=\mathbb{C}^k\oplus\mathbb{C}^n.\label{subspace of C2m}
\end{gather}
Notice that the bilinear form $h$ induces dualities between $\mathbb{C}^k$ and $\mathbb{C}^{k\ast}$ and between $\mathbb{C}^n$ and $\mathbb{C}^{n\ast}$, which justif\/ies the notation. Moreover, $\mathbb{C}^m$ is a maximal, totally isotropic and $\mathfrak{r}_0$-invariant subspace; the subspaces $\mathbb{C}^k$, $\mathbb{C}^{k\ast}$, $\mathbb{C}^n$ and $\mathbb{C}^{n\ast}$ are $\mathfrak{q}_0$-invariant; the subspaces $\mathbb{C}^k$, $\mathbb{C}^{2n}$ and $\mathbb{C}^{k\ast}$ are $\mathfrak{g}_0$-invariant; and f\/inally, $h|_{\mathbb{C}^{2n}}$ is a non-degenerate, symmetric and $\mathfrak{g}_0$-invariant bilinear form. We will for brevity write only $h$ instead of $h|_{\mathbb{C}^{2n}}$ as it will always be clear from the context what is meant.

Let us now consider the associated nilpotent subalgebras
\begin{gather*}
\mathfrak{r}_-=\mathfrak{r}_{-1},\qquad \mathfrak{q}_-=\mathfrak{q}_{-3}\oplus\mathfrak{q}_{-2}\oplus\mathfrak{q}_{-1}\qquad \mathrm{and} \qquad \mathfrak{g}_-=\mathfrak{g}_{-2}\oplus\mathfrak{g}_{-1}.
\end{gather*}
By the Jacobi identity, the Lie bracket is equivariant with respect to the adjoint action of the corresponding Levi factor and, by the grading property following equation~(\ref{k-gradation}), it is homogeneous of degree zero.
Hence, we can consider the Lie bracket in each homogeneity separately. The f\/irst algebra $\mathfrak{r}_-$ is abelian and so there is nothing to add. On the other hand, $\mathfrak{q}_{-}$ is 3-graded and, as $\mathfrak{q}_0$-modules, we have $\mathfrak{q}_{-1}\cong\mathbb{E}\oplus\mathbb{F}$, $ \mathfrak{q}_{-2}\cong\mathbb{C}^{k\ast}\otimes\mathbb{C}^{n\ast}$, $\mathfrak{q}_{-3}\cong\Lambda^2\mathbb{C}^{k\ast}$ where we put $\mathbb{E}:=\mathbb{C}^{k\ast}\otimes\mathbb{C}^n$ and $\mathbb{F}:=\Lambda^2\mathbb{C}^{n\ast}$. Using these isomorphisms, the Lie brackets in homogeneity~$-2$ and~$-3$ are the compositions of the canonical projections
\begin{gather}\label{lie bracket in homogeneity -2}
\Lambda^2\mathfrak{q}_{-1}\rightarrow\mathbb{E}\otimes\mathbb{F}=\big(\mathbb{C}^{k\ast}\otimes\mathbb{C}^n\big) \otimes\Lambda^2\mathbb{C}^{n\ast}\rightarrow\mathbb{C}^{k\ast}\otimes\mathbb{C}^{n\ast}=\mathfrak{q}_{-2}
\end{gather}
and
\begin{gather*}
\mathfrak{q}_{-1}\otimes\mathfrak{q}_{-2}\rightarrow\mathbb{E}\otimes\mathfrak{q}_{-2}= \big(\mathbb{C}^{k\ast}\otimes\mathbb{C}^n\big)\otimes\big(\mathbb{C}^{k\ast}\otimes\mathbb{C}^{n\ast}\big)\rightarrow\Lambda^2\mathbb{C}^{k\ast} =\mathfrak{q}_{-3},
\end{gather*}
respectively. Here we use the canonical pairing $\mathbb{C}^n\otimes\mathbb{C}^{n\ast}\rightarrow\mathbb{C}$. Notice that $\Lambda^2\mathbb{E}\oplus\Lambda^2\mathbb{F}$ is contained in the kernel of~(\ref{lie bracket in homogeneity -2}).

In order to understand the Lie bracket on $\mathfrak{g}_-$, f\/irst notice that there are isomorphisms $\mathfrak{g}_{-1}\cong \mathbb{C}^{k\ast}\otimes\mathbb{C}^{2n}$ and $ \mathfrak{g}_{-2}\cong\Lambda^2\mathbb{C}^{k\ast}\otimes\mathbb{C}$ of irreducible $\mathfrak{g}_0$-modules where $\mathbb{C}$ is the trivial representation of $\mathfrak{so}(2n,\mathbb{C})$. As $\mathfrak{g}_-$ is 2-graded, the Lie bracket is non-zero only in homogeneity~$-2$. It is given by
\begin{gather*}
\Lambda^2\mathfrak{g}_{-1}=\Lambda^2\big(\mathbb{C}^{k\ast}\otimes\mathbb{C}^{2n}\big)\rightarrow\Lambda^2\mathbb{C}^{k\ast}\otimes S^2\mathbb{C}^{2n}\rightarrow\Lambda^2\mathbb{C}^{k\ast}\otimes\mathbb{C}=\mathfrak{g}_{-2},
\end{gather*}
where in the last map we take the trace with respect to $h$.

In the table below we specify when $\lambda=(\lambda_1,\dots,\lambda_m)\in\mathfrak{h}^\ast$ is dominant for each parabolic subalgebra $\mathfrak{p}$, $\mathfrak{q}$ and $\mathfrak{r}$. We put $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$.

\begin{table}[h!]\centering
\caption{Dominant weights.}\label{table dominant weights}
\begin{tabular}{|c|c|}
\hline algebra&dominant and integral weights \\
\hline$\mathfrak{p}$&$\lambda_i-\lambda_{i+1}\in\mathbb{N}_0,\ i\ne k,\ 2\lambda_m\in\mathbb{Z},\ \lambda_{m-1}\ge|\lambda_m|$\\
\hline$\mathfrak{r}$&$\lambda_i-\lambda_{i+1}\in\mathbb{N}_0$\\
\hline$\mathfrak{q}$&$\lambda_i-\lambda_{i+1}\in\mathbb{N}_0,\ i\ne k$\\
\hline
\end{tabular}
\end{table}

\subsection[Relative Hasse diagram $W_\mathfrak{r}^\mathfrak{q}$]{Relative Hasse diagram $\boldsymbol{W_\mathfrak{r}^\mathfrak{q}}$}\label{section the relative Weyl group}

Let us f\/irst set notation. By a \textit{partition} we will mean an element of $\mathbb{N}^{k,n}_{++}:=\{(a_1,\dots,a_k)\colon a_i\in\mathbb{Z}$, $n\ge a_1\ge a_2\ge\dots\ge a_k\ge0\}$. For two partitions $a=(a_1,\dots,a_k)$ and $a'=(a_1',\dots,a'_k)$ we write $a\le a'$ if $a_i\le a'_i$ for all $i=1,\dots,k$ and $a<a'$ if $a\le a'$ and $a\ne a'$. If $a<a'$ does not hold, then we write $a\nless a'$.
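To illustrate the notation just introduced, notice that $\le$ is merely a partial order: for instance, assuming $k=n=2$, the partitions
\begin{gather*}
(2,0),(1,1)\in\mathbb{N}^{2,2}_{++}\qquad \mathrm{satisfy}\qquad (2,0)\nless(1,1)\qquad \mathrm{and}\qquad (1,1)\nless(2,0),
\end{gather*}
i.e., they are incomparable.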
We put
\begin{gather}\label{abbreviations}
|a|:=a_1+\dots+a_k,\qquad d(a):=\max\{i\colon a_i\ge i\},\\
q(a):=\sum_{i=1}^{k}\max\{ a_i-i,0\}\qquad \mathrm{and}\qquad r(a):=d(a)+q(a),\nonumber
\end{gather}
where we agree that $d(a):=0$ if the set $\{i\colon a_i\ge i\}$ is empty, i.e., if $a$ is the empty partition $(0,\dots,0)$. To the partition $a$ we associate the \textit{Young diagram} (or the \textit{Ferrers diagram}) $\mathrm{Y}$ consisting of~$k$ left-justif\/ied rows with $a_i$ boxes in the $i$-th row. Let $b_i$ be the number of boxes in the $i$-th column of $\mathrm{Y}$. Then we call $b=(b_1,\dots,b_n)\in\mathbb{N}^{n,k}_{++}$ the \textit{partition conjugated to} $a$ and we say that $a$ is \textit{symmetric} if $a_i=b_i$, $i=1,\dots,k$ and $b_{k+1}=\dots=b_n=0$. As we assume $n\ge k$, the set of symmetric partitions in $\mathbb{N}^{k,n}_{++}$ depends only on $k$, and thus, we denote it for simplicity by $S^k$ and put $S^k_j:=\{a\in S^k\colon r(a)=j\}$.

\begin{Example}\label{ex YDI}\quad
\begin{enumerate}\itemsep=0pt
\item[(1)] The empty partition is by def\/inition always symmetric.
\item[(2)] The Young diagram of $a=(4,3,1,0,0)\in\mathbb{N}^{5,6}_{++}$ is
\begin{gather}\label{YD}
\Yctj
\end{gather}
and we f\/ind that $d(a)=2$, $q(a)=4$ and $r(a)=6$. The conjugated partition is $b=(3,2,2,1,0,0)$ with $d(b)=2$, $q(b)=2$ and $r(b)=4$. The Young diagram of $b$ is
\begin{gather*}
\Ytddj.
\end{gather*}
We see that the partition is not symmetric.
\end{enumerate}
\end{Example}

Notice that $d(a)$ and $q(a)$ are equal to the number of boxes in the associated Young diagram that are on and above the main diagonal, respectively, and that a partition is symmetric if and only if its Young diagram is symmetric with respect to the ref\/lection along the main diagonal.

We can now continue by investigating the relative Hasse graph $W_\mathfrak{r}^\mathfrak{q}$. The group $W_\mathfrak{r}$ is generated by $s_1,\dots,s_{m-1}$ while $W_\mathfrak{q}$ is generated by the elements $s_1,\dots,s_{k-1},s_{k+1},\dots,s_{m-1}$. By (\ref{simple reflections}), it follows that $W_\mathfrak{r}$ is the permutation group $S_m$ on $\{1,\dots,m\}$ and that $W_\mathfrak{q}\cong S_k\times S_n$ is the stabilizer of $\{1,\dots,k\}$. Recall from Section~\ref{section review} that in each left coset of $W_\mathfrak{q}$ in $W_\mathfrak{r}$ there is a unique element of minimal length and that we denote the set of all such distinguished representatives by $W_\mathfrak{r}^\mathfrak{q}$. Moreover, the Bruhat order on $W_\mathfrak{g}$ descends to a partial order on $W_\mathfrak{r}$ and on $W^\mathfrak{q}_\mathfrak{r}$. We will now show that there is an isomorphism $\mathbb{N}^{k,n}_{++}\rightarrow W^\mathfrak{q}_\mathfrak{r}$ of partially ordered sets.

Let $a=(a_1,\dots,a_k)\in\mathbb{N}^{k,n}_{++}$ and $\mathrm{Y}$ be the associated Young diagram. We will call the box in the $i$-th row and the $j$-th column of $\mathrm{Y}$ an $(i,j)$\textit{-box} and we write into this box the number $\sharp(i,j):=k-i+j$. Notice that $1\le\sharp(i,j)\le m$. Then the set of boxes in $\mathrm{Y}$ is indexed by $\Xi_a:=\{(i,j)\colon i=1,\dots,k,\ j=1,\dots,a_i\}$ and we order this set lexicographically, i.e., $(i,j)<(i',j')$ if $i<i'$ or $i=i'$ and $j<j'$. Then
\begin{gather}\label{definition of permutation}
w_a:=s_{\sharp(\Psi(1))}s_{\sharp(\Psi(2))}\dots s_{\sharp(\Psi(|a|))}\in S_m,
\end{gather}
where $\Psi\colon \{1,2,\dots,|a|\}\rightarrow\Xi_a$ is the unique isomorphism of ordered sets. Let us now look at an example.
\begin{Example}
The Young diagram from (\ref{YD}) is f\/illed as
\begin{gather*}
\begin{matrix}
k&k+1&k+2&k+3\\
k-1&k&k+1&\\
k-2&&&\\
\end{matrix}
\end{gather*}
and so $w_a=s_ks_{k+1}s_{k+2}s_{k+3}s_{k-1}s_ks_{k+1}s_{k-2}$.
\end{Example}

We have the following preliminary observation.

\begin{Lemma}
Let $a=(a_1,\dots,a_k)$ be as above and $b=(b_1,\dots,b_n)$ be the conjugated partition. Then the permutation $w_a\in S_m$ from~\eqref{definition of permutation} satisfies
\begin{gather}\label{permutation from YD}
w_a(k-i+1+a_i)=k-i+1\qquad \mathrm{and}\qquad w_a(k+j-b_j)=k+j
\end{gather}
for each $i=1,\dots,k$ and $j=1,\dots,n$.
\end{Lemma}

\begin{proof}
Fix $i=1,\dots,k$. If $a_i>0$, there is $r_i:=k-i+1$ written in the $(i,1)$-box and $r^i:=k-i+a_i=r_i+a_i-1$ in the $(i,a_i)$-box. We put $r^i+1=r_i:=k-i+1$ if $a_i=0$. Similarly, if $j=1,\dots,n$ and $b_j>0$, then there is $c_j:=k+j-1$ in the $(1,j)$-box and $c^j:=k+j-b_j=c_j-b_j+1$ in the $(b_j,j)$-box. We put $c^j-1=c_j:=k+j-1$ if $b_j=0$. Then it is easy to check that $w_a(r^i+1)=r_i$ and $w_a(c^j)=c_j+1$, which completes the proof.
\end{proof}

Notice that the sets $\{k-i+1+a_i\colon i=1,\dots,k\}$ and $\{ k+j-b_j\colon j=1,\dots,n\}$ are disjoint and that their union is $\{1,2,\dots,m\}$. By~(\ref{permutation from YD}), it follows that
\begin{gather}\label{action on lowest form}
w_a\rho=\rho+(-a_k,\dots,-a_1\,|\,b_1,\dots,b_n),
\end{gather}
where $\rho=(m-1,\dots,1,0)$ is the lowest form of $\mathfrak{g}$ and, for clarity, we separate the f\/irst $k$ and the last $n$ coef\/f\/icients by~$|$. Comparing this with Table~\ref{table dominant weights}, we see that $w_a\rho$ is $\mathfrak{q}$-dominant. As the same computation applies to any $\mathfrak{r}$-dominant weight in place of~$\rho$, it follows that $w_a\in W^\mathfrak{q}_\mathfrak{r}$.

\begin{Lemma}\label{lemma relative Weyl group}
The map $\mathbb{N}^{k,n}_{++}\rightarrow W^\mathfrak{q}_\mathfrak{r}$, $a\mapsto w_a$ is an isomorphism of partially ordered sets.
\end{Lemma}

\begin{proof}
The map $a\mapsto w_a$ is by (\ref{action on lowest form}) clearly injective. To show surjectivity, f\/ix $w\in W^\mathfrak{q}_\mathfrak{r}$. Then the sequence $c_1,\dots,c_k$, where $c_i=w^{-1}(i)$, $i=1,\dots,k$, is increasing. By \cite[Proposition~3.2.16]{CS}, the map $w\in W^\mathfrak{q}_\mathfrak{r}\mapsto w^{-1}\omega_k$ is injective. It follows that $w^{-1}\omega_k$ is uniquely determined by the sequence $c_1,\dots,c_k$. Put $a:=(a_1,\dots,a_k)\in\mathbb{N}^{k,n}_{++}$ where $a_i:=c_{k-i+1}+i-k-1$, $i=1,\dots,k$. From~(\ref{permutation from YD}), it follows that $w_a^{-1}(k-i+1)=k-i+1+a_i=c_{k-i+1}$, $i=1,\dots,k$. This shows that $w_a^{-1}\omega_k=w^{-1}\omega_k$ and thus, $w=w_a$.

Now it remains to show that the map is compatible with the orders. Assume that $a=(a_1,\dots,a_k)$, $a'=(a'_1,\dots,a'_k)\in\mathbb{N}^{k,n}_{++}$ satisfy $|a'|=|a|+1$ and $a<a'$. Then there is a unique integer $i\le k$ such that $a'_i=a_i+1$ and so $w_{a'}=w_as_{k-i+a'_i}$. By~(\ref{permutation from YD}), we have that $w_a\alpha_{k-i+a'_i}>0$ and thus by \cite[Proposition~3.2.16]{CS}, there is an arrow $w_a\rightarrow w_{a'}$ in $W^\mathfrak{q}_\mathfrak{r}$. On the other hand, suppose that $a''=(a_1'',\dots,a_k'')\in\mathbb{N}^{k,n}_{++}$ satisf\/ies $a'\nless a''$. In order to complete the proof, it is enough to show that there is no arrow $w_{a'}\rightarrow w_{a''}$. By assumption, there is $j$ such that $a'_1\le a_1'',\dots,a_{j-1}'\le a''_{j-1}$ and $a'_j>a_j''$. Without loss of generality we may assume that $i=j$.
Then $w_{a'}^{-1}(w_{a}\alpha_{k-i+a'_i})=s_{k-i+a'_i}\alpha_{k-i+a'_i}<0$. On the other hand, by~(\ref{permutation from YD}) it follows that $w_{a''}^{-1}(w_a\alpha_{k-i+a'_i})>0$. This proves that $\Phi_{w_{a'}}\not\subset\Phi_{w_{a''}}$ and thus by \cite[Proposition~3.2.17]{CS}, there cannot be any arrow $w_{a'}\rightarrow w_{a''}$.
\end{proof}

We will later need the following two observations. A permutation $w\in S_m$ is $k$-\textit{balanced} if the following is true: if $w(k-i)>k$ for some $i=0,\dots,k-1$, then $ w(k+i+1)\le k$.

\begin{Lemma}\label{lemma symmetric YD}
The permutation $w_a$ associated to $a\in\mathbb{N}^{k,n}_{++}$ is $k$-balanced if and only if $a\in S^k$.
\end{Lemma}

\begin{proof}
Let $b=(b_1,\dots,b_n)$ be the partition conjugated to $a=(a_1,\dots,a_k)$. First notice that if $w_a(k-i)=k+j>k$, then by~(\ref{permutation from YD}) we have $i=b_j-j$. If $a\in S^k$, then $w_a(k+i+1)=w_a(k-j+b_j+1)=w_a(k-j+a_j+1)=k-j+1\le k$ and so $w_a$ is $k$-balanced. If $a\not\in S^k$, then there is $j$ such that $a_1=b_1,\dots,a_{j-1}= b_{j-1}$ and $a_j\ne b_j$. It follows that $b_j\ge j$ and so $i:=b_j-j\ge0$. Then $w_a(k-i)=w_a(k-b_j+j)=k+j>k$. If $a_j>b_j$, then $w_a(k+i+1)=k+b_j+1>k$. If $a_j<b_j$, then $w_a(k+i+1)=k+b_j>k$.
\end{proof}

Recall from \cite{BE} that given $w\in S_m$, there exists a minimal integer $\ell(w)$, called the \textit{length} of~$w$, such that~$w$ can be expressed as a product of $\ell(w)$ simple ref\/lections $s_1,\dots,s_{m-1}$. It is well known that $\ell(w)$ is equal to the number of pairs $1\le i<j\le m$ such that $w(i)>w(j)$.

\begin{Lemma}\label{lemma length of permutation}
Let $w_a\in S_m$. Then $\ell(w_a)=|a|$.
\end{Lemma}

\begin{proof}
By the def\/inition of $w_a$, it follows that $\ell(w_a)\le |a|$. On the other hand, if $a<a'$, then $w_a\rightarrow w_{a'}$ and thus also $\ell(w_a)<\ell(w_{a'})$. By induction on $|a|$, we have that $\ell(w_a)\ge|a|$.
\end{proof}

\section{Geometric structures attached to (\ref{double fibration diagram I})}\label{section geometry behind PT}

In Section \ref{section geometry behind PT} we will consider dif\/ferent geometric structures associated to~(\ref{double fibration diagram I}). Namely, we will consider in Section~\ref{section complex grassmannian} the associated homogeneous spaces, in Section~\ref{section filtration of TM and TCS} the f\/iltrations of the tangent bundles of these parabolic geometries and in Section \ref{section projections} the projections $\eta$ and $\tau$.

\subsection{Homogeneous spaces}\label{section complex grassmannian}

The connected and simply connected Lie group $\mathrm{G}$ with Lie algebra $\mathfrak{g}$ is isomorphic to $\mathrm{Spin}(2m,\mathbb{C})$. Let $\mathrm{R}$, $\mathrm{Q}$ and $\mathrm{P}$ be the parabolic subgroups of~$\mathrm{G}$ with Lie algebras $\mathfrak{r}$, $\mathfrak{q}$ and~$\mathfrak{p}$ that are associated to $\{\alpha_m\}$, $\{\alpha_k,\alpha_m\}$ and~$\{\alpha_k\}$, respectively, as explained in Section~\ref{section review}. For brevity, we put $TS:=\mathrm{G}/\mathrm{R}$, $CS:=\mathrm{G}/\mathrm{Q}$ and $M:=\mathrm{G}/\mathrm{P}$. Recall from Section~\ref{section PT} that we call $TS$ the twistor space and~$CS$ the correspondence space.

\textit{The twistor space $TS$}. Let us f\/irst recall (see \cite[Section 6]{GW}) some well-known facts about spinors. Recall from (\ref{subspace of C2m}) that $\mathbb{W}:=\mathbb{C}^{m}$ is a maximal totally isotropic subspace of $\mathbb{C}^{2m}$.
We can (via $h$) identify the dual space $\mathbb{W}^\ast$ with the subspace $[e_1^\ast,\dots,e_m^\ast]$. Put $\mathbb{S}:=\bigoplus_{i=0}^m\Lambda^i\mathbb{W}^\ast$. There is a canonical linear map $\mathbb{C}^{2m}\rightarrow\operatorname{End}(\mathbb{S})$ which is determined by $w\cdot\psi=i_w\psi$ and $w^\ast\cdot\psi=w^\ast\wedge\psi$ where $w\in\mathbb{W}$, $w^\ast\in\mathbb{W}^\ast$, $\psi\in\mathbb{S}$ and $i_w$ stands for the contraction by $w$. If $\psi\in\mathbb{S}$, then we put $T_\psi:=\{v\in\mathbb{C}^{2m}\colon v\cdot\psi=0\}$. If $\psi\ne0$, then~$T_\psi$ is a totally isotropic subspace and we call $\psi$ a~\textit{pure spinor} if $\dim T_\psi=m$ (which is equivalent to saying that $T_\psi$ is a~maximal totally isotropic subspace).

The standard linear isomorphism $\Lambda^2\mathbb{C}^{2m}\cong\mathfrak{g}$ gives an injective linear map $\mathfrak{g}\rightarrow\operatorname{End}(\mathbb{S})$. It is straightforward to verify that this map is a homomorphism of Lie algebras, where the Lie bracket on the associative algebra $\operatorname{End}(\mathbb{S})$ is the standard commutator. Hence, $\mathfrak{g}$ is a Lie subalgebra of $\operatorname{End}(\mathbb{S})$. The $\mathfrak{g}$-module $\mathbb{S}$ is not irreducible but it decomposes as $\mathbb{S}_+\oplus\mathbb{S}_-$ where $\mathbb{S}_+:=\bigoplus_{i=0}^m\Lambda^{2i}\mathbb{W}^\ast$ and $\mathbb{S}_-:=\bigoplus_{i=0}^m\Lambda^{2i+1}\mathbb{W}^\ast$. Then $\mathbb{S}_+$ and $\mathbb{S}_-$ are irreducible non-isomorphic complex spinor representations of $\mathfrak{g}$ with highest weights $\omega_m$ and $\omega_{m-1}$, respectively. It is well known that any pure spinor belongs to $\mathbb{S}_+$ or to $\mathbb{S}_-$ (which explains why the Grassmannian of maximal totally isotropic subspaces in~$\mathbb{C}^{2m}$ has two connected components).

Now we can easily describe the twistor space. The spinor $1\in\mathbb{S}_+$ is annihilated by all positive root spaces in $\mathfrak{g}$ and hence it is a highest weight vector. Recall from Section~\ref{section review} that the line spanned by~1 is invariant under~$\mathrm{R}$ and since $T_1=\mathbb{W}$, we f\/ind that~$\mathrm{R}$ is the stabilizer of $\mathbb{W}$ inside $\mathrm{G}$. As~$\mathrm{G}$ is connected, we conclude that~$TS$ is the connected component of~$\mathbb{W}$ in the Grassmannian of maximal totally isotropic subspaces in~$\mathbb{C}^{2m}$.

\textit{The isotropic Grassmannian $M$.} The irreducible $\mathfrak{g}$-module with highest weight $\omega_k$ is isomorphic to $\Lambda^k\mathbb{C}^{2m}$. Then $e_1\wedge e_2\wedge\dots\wedge e_k$ is clearly a highest weight vector and the corresponding point in $\mathbb P(\Lambda^k\mathbb{C}^{2m})$ can be viewed as the totally isotropic subspace $x_0:=\mathbb{C}^k$. We see that $M$ is the Grassmannian of totally isotropic $k$-dimensional subspaces in $\mathbb{C}^{2m}$. We denote by $\textbf{p}\colon \mathrm{G}\rightarrow M$ the canonical projection.

\textit{The correspondence space $CS$}. The correspondence space $CS$ is the generalized f\/lag manifold of nested subspaces $\{(z,x)\colon z\in TS,\ x\in M,\ x\subset z\}$ and~$\mathrm{Q}$ is the stabilizer of $(\mathbb{W},x_0)$. Let $\textbf{q}\colon \mathrm{G}\rightarrow CS$ be the canonical projection.
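Let us add an elementary example which, in particular, verif\/ies the equality $T_1=\mathbb{W}$ used above.

\begin{Example}
Consider the spinor $\psi=1\in\Lambda^0\mathbb{W}^\ast\subset\mathbb{S}_+$. Any $v\in\mathbb{C}^{2m}$ decomposes as $v=w+w^\ast$ with $w\in\mathbb{W}$ and $w^\ast\in\mathbb{W}^\ast$, and then $v\cdot1=i_w1+w^\ast\wedge1=w^\ast$. Hence $v\cdot1=0$ if and only if $v\in\mathbb{W}$, i.e., $T_1=\mathbb{W}$ and $1$ is a pure spinor.
\end{Example}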
\subsection[Filtrations of the tangent bundles of $M$ and $CS$]{Filtrations of the tangent bundles of $\boldsymbol{M}$ and $\boldsymbol{CS}$}\label{section filtration of TM and TCS}

Recall from Section \ref{section review} that the $|2|$-grading $\mathfrak{g}=\mathfrak{g}_{-2}\oplus\mathfrak{g}_{-1}\oplus\dots\oplus\mathfrak{g}_2$ associated to $\{\alpha_k\}$ determines a 2-step f\/iltration $\{0\}=F_{0}^M\subset F_{-1}^M\subset F_{-2}^M=TM$ of the tangent bundle of $M$ where $\{0\}$ is the zero section. We put $G_{i}^M:=F_{i}^M/F_{i+1}^M$, $i=-2,-1$ so that the associated graded bundle $gr(TM)=G_{-2}^M\oplus G_{-1}^M$ is a locally trivial bundle of graded nilpotent Lie algebras with typical f\/iber $\mathfrak{g}_-$. Dually, there is a f\/iltration $T^\ast M=F_1^M\supset F_2^M\supset F_3^M=\{0\}$ where $F_i^M\cong (F^M_{-i+1})^\perp$. We put $G_i^M:=F_i^M/F_{i+1}^M$ so that $G_i^M\cong(G_{-i}^M)^\ast$. There are linear isomorphisms
\begin{gather}\label{linear isomorphisms over x0}
\mathfrak{g}_{i}\cong \big(G^M_i\big)_{x_0},\qquad i=-2,-1,1,2.
\end{gather}

Recall from Section \ref{section wdo} that $\mathfrak{gr}^r_{x_0}$ denotes the vector space of weighted $r$-jets of germs of holomorphic functions at $x_0$ whose weighted $(r-1)$-jet vanishes. Then the isomorphisms from~(\ref{isom of graded jets over origin}) are
\begin{gather*}
\mathfrak{gr}^{1}_{x_0}\cong \mathfrak{g}_1,\qquad \mathfrak{gr}^{2}_{x_0}\cong S^2\mathfrak{g}_1\oplus\mathfrak{g}_2,\qquad \mathfrak{gr}^{3}_{x_0}\cong S^3\mathfrak{g}_1\oplus\mathfrak{g}_1\otimes\mathfrak{g}_2, \qquad \dots
\end{gather*}
for small $r$ and in general
\begin{gather}\label{weighted jets at x_0}
\mathfrak{gr}^{r}_{x_0}\cong\bigoplus_{i+2j=r}S^i\mathfrak{g}_1\otimes S^j\mathfrak{g}_2.
\end{gather}

The $|3|$-grading $\mathfrak{g}=\bigoplus_{i=-3}^3\mathfrak{q}_i$ determined by $\{\alpha_k,\alpha_m\}$ induces a 3-step f\/iltration $TCS= F_{-3}^{CS}\supset F_{-2}^{CS}\supset F_{-1}^{CS}\supset F_0^{CS}=\{0\}$. We put $G_i^{CS}:= F_i^{CS}/F^{CS}_{i+1}$ so that $gr(TCS)=\bigoplus_{i=-3}^{-1}G_i^{CS}$ is a locally trivial vector bundle of graded nilpotent Lie algebras with typical f\/iber $\mathfrak{q}_-$. Dually, we get a f\/iltration $T^\ast CS=F_1^{CS}\supset F_2^{CS}\supset F_3^{CS}\supset F_4^{CS}=\{0\}$ where $F_i^{CS}:=(F^{CS}_{-i+1})^\perp$. The associated graded vector bundle is $gr(T^\ast CS)=\bigoplus_{i=1}^3G_i^{CS}$ where we put $G_i^{CS}:=F_{i}^{CS}/F^{CS}_{i+1}$. Then as above, $G_i^{CS}\cong (G_{-i}^{CS})^\ast$.

The $\mathrm{Q}$-invariant subspaces $\mathbb{E}\oplus\mathfrak{q}$ and $\mathbb{F}\oplus\mathfrak{q}$ give a f\/iner f\/iltration of the tangent bundle, namely $F_{-1}^{CS}=E^{CS}\oplus F^{CS}$. Since the Lie bracket $\Lambda^2\mathfrak{q}_{-1}\rightarrow\mathfrak{q}_{-2}$ vanishes on $\Lambda^2\mathbb{E}\oplus\Lambda^2\mathbb{F}$, it follows that $E^{CS}$ and $F^{CS}$ are integrable distributions. This can also be deduced from the short exact sequences
\begin{gather}\label{E and F as kernels of tangent maps}
0\rightarrow E^{CS}\rightarrow TCS\xrightarrow{T\eta}T(TS)\rightarrow0\qquad \mathrm{and}\qquad 0\rightarrow F^{CS}\rightarrow TCS\xrightarrow{T\tau}TM\rightarrow0,
\end{gather}
i.e., $E^{CS}=\ker(T\eta)$ and $F^{CS}=\ker(T\tau)$. Notice that $(T\tau)^{-1}(F_{-1}^M)=F^{CS}_{-2}$.
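The following dimension count, which follows directly from the isomorphisms recalled in Section~\ref{section lie algebra}, may serve as a useful cross-check:
\begin{gather*}
\operatorname{rank} G^M_{-1}=\dim\mathfrak{g}_{-1}=2nk,\qquad \operatorname{rank} G^M_{-2}=\dim\mathfrak{g}_{-2}={k\choose2},\\
\operatorname{rank} G^{CS}_{-1}=\dim\mathfrak{q}_{-1}=nk+{n\choose2},\qquad \operatorname{rank} G^{CS}_{-2}=nk,\qquad \operatorname{rank} G^{CS}_{-3}={k\choose2}.
\end{gather*}
In particular, $\dim M=2nk+{k\choose2}$ and $\dim CS=\dim M+{n\choose2}$, in agreement with the af\/f\/ine coordinates constructed in Section~\ref{section projections} below.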
\subsection[Projections $\tau$ and $\eta$]{Projections $\boldsymbol{\tau}$ and $\boldsymbol{\eta}$}\label{section projections}

Recall from (\ref{subspace of C2m}) that $\mathbb{C}^{2n}:=[e_{k+1},\dots,e_m,e_{k+1}^\ast,\dots,e_m^\ast]$ and $\mathbb{C}^n:=[e_{k+1},\dots,e_m]$, i.e., we view~$\mathbb{C}^{2n}$ and~$\mathbb{C}^n$ as subspaces of~$\mathbb{C}^{2m}$. On~$\mathbb{C}^{2n}$ we consider the non-degenerate bilinear form~$h|_{\mathbb{C}^{2n}}$ which for brevity we denote by~$h$. Then $\mathbb{C}^n$ is a maximal totally isotropic subspace of~$\mathbb{C}^{2n}$. The f\/ibers of $\tau$ and $\eta$ are homogeneous spaces of parabolic geometries which (see~\cite{BE}) can be recovered from the Dynkin diagrams given in~(\ref{double fibration diagram I}).

\begin{Lemma}\label{lemma fibers}\quad
\begin{enumerate}\itemsep=0pt
\item[$(a)$] The fibers of $\eta$ are biholomorphic to the Grassmannian of $k$-dimensional subspaces in $\mathbb{C}^{n+k}$.
\item [$(b)$] The fibers of $\tau$ are biholomorphic to the connected component $\mathrm{Gr}^+_h(n,n)$ of $\mathbb{C}^n$ in the Grassmannian of maximal totally isotropic subspaces in~$\mathbb{C}^{2n}$.
\end{enumerate}
\end{Lemma}

\begin{proof}As the f\/ibers over distinct points are biholomorphic, it suf\/f\/ices to look at the f\/ibers of~$\eta$ and~$\tau$ over $\mathbb{W}$ and~$x_0$, respectively.

(a) By def\/inition, $\eta^{-1}(\mathbb{W})$ is the set of $k$-dimensional totally isotropic subspaces in $\mathbb{W}$. As $\mathbb{W}$ is already totally isotropic, the f\/irst claim follows.

(b) Notice that $x_0^\bot=x_0\oplus \mathbb{C}^{2n}$. Then it is easy to see that $y\in\mathrm{Gr}^+_h(n,n)\mapsto(x_0\oplus y,x_0)\in\tau^{-1}(x_0)$ is a~biholomorphism.
\end{proof}

We will use the following notation. Assume that $X\in M(2m,k,\mathbb{C})$ and $Y\in M(2m,n,\mathbb{C})$ have maximal rank. Then we denote by $[X]$ the $k$-dimensional subspace of $\mathbb{C}^{2m}$ that is spanned by the columns of the matrix and by $[X|Y]$ the f\/lag of nested subspaces $[X]\subset[X]\oplus[Y]$. It is straightforward to verify that
\begin{gather}\label{affine subset of M}
(\textbf{p}\circ\exp)\colon \ \mathfrak{g}_- \rightarrow M,\\
(\textbf{p}\circ\exp)\left(
\begin{matrix}
0&0&0&0\\
X_1&0&0&0\\
X_2&0&0&0\\
Y&-X_2^T&-X_1^T&0\\
\end{matrix}
\right) = \left[
\begin{matrix}
1_k\\
X_1\\
X_2\\
Y-\frac{1}{2}(X_1^TX_2+X^T_2X_1)\\
\end{matrix}
\right].\nonumber
\end{gather}
We see that $\mathcal{X}:=\textbf{p}\circ\exp(\mathfrak{g}_-)$ is an open, dense and af\/f\/ine subset of $M$ and that any $(z,x)\in\tau^{-1}(\mathcal{X})$ can be represented by
\begin{gather}\label{coordinates on tau^{-1}(U)}
\left[
\begin{array}{c|c}
1_k&0\\
X_1&A\\
X_2&B\\
Y-\frac{1}{2}(X_1^TX_2+X_2^TX_1)&C\\
\end{array}
\right],
\end{gather}
where $A,B\in M(n,\mathbb{C})$, $C\in M(k,n,\mathbb{C})$ are such that $\left[
\begin{matrix}
A\\
B\\
\end{matrix}
\right]\in\mathrm{Gr}^+_{h}(n,n)$ and $C=-(X_1^TB+X_2^TA)$. We immediately get the following observation.

\begin{Lemma}\label{lemma set tau^{-1}(U)}
The set $\tau^{-1}(\mathcal{X})$ is biholomorphic to $\mathcal{X}\times\tau^{-1}(x_0)$. The restriction of $\tau$ to this set is then the projection onto the first factor.
\end{Lemma}

The set $\tau^{-1}(\mathcal{X})$ is not af\/f\/ine as dif\/ferent choices of $A$ and $B$ might lead to the same element in $\tau^{-1}(\mathcal{X})$. Let $\mathcal{Y}$ be the subset of $\tau^{-1}(\mathcal{X})$ of those nested f\/lags $x\subset z$ which can be represented by a matrix as above with $A$ regular.
In that case we may assume that $A=1_n$, which uniquely determines~$B$. It is straightforward to f\/ind that $B=-B^T$ and conversely, any skew-symmetric $n\times n$ matrix determines a totally isotropic $n$-dimensional subspace in~$\mathbb{C}^{2n}$. We see that $\mathcal{Y}$ is an open and af\/f\/ine set which is biholomorphic to $\mathfrak{g}_-\times A(n,\mathbb{C})$. In order to also write down $\eta$ as a canonical projection $\mathbb{C}^{{m\choose2}+nk}\rightarrow\mathbb{C}^{m\choose2}$, it will be convenient to choose a dif\/ferent coordinate system on $\mathcal{Y}$.

\begin{Lemma}\label{lemma projection eta}
Let $\mathcal{Y}$ be as above and put $\mathcal{Z}:=\eta(\mathcal{Y})$. Then $\mathcal{Y}$ and $\mathcal{Z}$ are open affine sets and there is a commutative diagram of holomorphic maps
\begin{gather}\label{other coordinates on CS}\begin{split}
& \xymatrix{\mathcal{Y}\ar[d]^{\eta|_\mathcal{Y}}\ar[r]&A(m,\mathbb{C})\times M(n,k,\mathbb{C})\ar[d]^{pr_1}\\
\mathcal{Z}\ar[r]&A(m,\mathbb{C}),}\end{split}
\end{gather}
where $pr_1$ is the canonical projection and the horizontal arrows are biholomorphisms.
\end{Lemma}

\begin{proof}
Let $(z,x)$ be the nested f\/lag corresponding to (\ref{coordinates on tau^{-1}(U)}) where $A=1_n$ so that $B=-B^T$. Put for brevity $Y':=Y-\frac{1}{2}(X_1^TX_2+X_2^TX_1)$ and $C:=-X_2^T-X_1^TB$. The map in the f\/irst row in (\ref{other coordinates on CS}) is $(z,x)\mapsto(W,Z)$ where
\begin{gather*}\label{coordinates on W}
W=\left(
\begin{matrix}
W_1&W_2\\
W_0&-W_1^T
\end{matrix}
\right)
\end{gather*}
and $ Z:=X_1$, $W_1:=X_2-BX_1$, $W_0:=Y'-CX_1$, $W_2:=B$. Then $W=-W^T$ and the map $\mathcal{Y}\rightarrow A(m,\mathbb{C})\times M(n,k,\mathbb{C})$ is clearly a biholomorphism.

In order to have a geometric interpretation of the map, consider the following. Using Gaussian elimination on the columns of the matrix~(\ref{coordinates on tau^{-1}(U)}), we can eliminate the $X_1$-block and get a new matrix
\begin{gather*}
\left(
\begin{matrix}
1_k&0\\
0&1_n\\
X_2-BX_1&B\\
Y'-CX_1&C
\end{matrix}
\right)=\left(
\begin{matrix}
1_m\\
W
\end{matrix}
\right).
\end{gather*}
The columns of the matrix span the same totally isotropic subspace $z$ as the original matrix. Moreover, it is clear that $z$ admits a unique basis of this form. From this we easily see that $\mathcal{Z}$ is indeed an open af\/f\/ine subset of~$TS$ which is biholomorphic to~$A(m,\mathbb{C})$. In these coordinate systems, the restriction of $\eta$ is the projection onto the f\/irst factor.
\end{proof}

\section[The Penrose transform for the $k$-Dirac complexes]{The Penrose transform for the $\boldsymbol{k}$-Dirac complexes}\label{section computing PT}

In Section \ref{section computing PT} we will consider the relative BGG sequence associated to a particular $\mathfrak{r}$-dominant and integral weight as explained in Section \ref{section PT}. More explicitly, we will def\/ine in Section \ref{section relative double complex} for each $p,q\ge0$ a sheaf of relative $(p,q)$-forms, obtaining a Dolbeault-like double complex. Then we will show (see Section~\ref{section relative de Rham}) that this double complex contains a relative holomorphic de Rham complex. Next, in Section~\ref{section relative twisted double complex}, we will twist each sheaf of relative $(p,q)$-forms as well as the holomorphic de Rham complex by a certain pullback sheaf. Using some elementary representation theory, we will turn (see Section~\ref{section relative BGG sequence}) the twisted relative de Rham complex into the relative BGG sequence.
In Section~\ref{section direct images} we will compute direct images of sheaves in the relative BGG sequence.

We will use the following notation. We denote by $\mathcal{O}_\mathfrak{q}$ and $\mathcal{E}^{p,q}_\mathfrak{q}$ the structure sheaf and the sheaf of smooth $(p,q)$-forms, respectively, over~$CS$. We denote the corresponding sheaves over~$TS$ by the subscript~$\mathfrak{r}$. If $W$ is a~holomorphic vector bundle over $CS$, then we denote by $\mathcal{O}_\mathfrak{q}(W)$ the sheaf of holomorphic sections of~$W$ and by $\mathcal{E}^{p,q}_\mathfrak{q}(W)$ the sheaf of smooth $(p,q)$-forms with values in $W$. For brevity, we put $\mathcal{E}_\ast:=\mathcal{E}^{0,0}_\ast$ and $\mathcal{E}^{p,q}_\ast(\mathcal{U}):=\Gamma(\mathcal{U},\mathcal{E}_\ast^{p,q})$ where $\mathcal{U}$ is an open set and $\ast=\mathfrak{q}$ or $\mathfrak{r}$. Moreover, we put $\eta^\ast\mathcal{E}^{p,q}_\mathfrak{r}:= \mathcal{E}_\mathfrak{q}\otimes_{\eta^{-1}\mathcal{E}_\mathfrak{r}}\eta^{-1}\mathcal{E}^{p,q}_\mathfrak{r}$ where we use that $\eta^{-1}\mathcal{E}_\mathfrak{r}$ is naturally a sub-sheaf of~$\mathcal{E}_\mathfrak{q}$.

\subsection{Double complex of relative forms}\label{section relative double complex}

Recall from Lemma \ref{lemma projection eta} that $\mathcal{Y}$ is biholomorphic to $A(m,\mathbb{C})\times M(n,k,\mathbb{C})$, that $\eta(\mathcal{Y})=\mathcal{Z}$ is biholomorphic to $A(m,\mathbb{C})$ and that the canonical map $\mathcal{Y}\rightarrow\mathcal{Z}$ is the projection onto the f\/irst factor. In this way we can use matrix coef\/f\/icients on $M(n,k,\mathbb{C})$ as coordinates on the f\/ibers of $\eta$. We will write $Z=(z_{\alpha i})\in M(n,k,\mathbb{C})$ and if $I=((\alpha_1,i_1),\dots,(\alpha_p,i_p))$ is a multi-index where $(\alpha_j,i_j)\in I(n,k):=\{(\alpha,i)\colon \alpha=1,\dots,n,\ i=1,\dots,k\}$, $j=1,\dots,p$, then we put $dz_I:=d z_{(\alpha_1,i_1)}\wedge\dots\wedge d z_{(\alpha_p,i_p)}$ and $|I|=p$.

We call
\begin{gather*}
\mathcal{E}^{p+1,q}_\eta:=\mathcal{E}^{p+1,q}_\mathfrak{q}/(\eta^\ast\mathcal{E}^{1,0}_\mathfrak{r}\wedge\mathcal{E}_\mathfrak{q}^{p,q}),\qquad p,q\ge0
\end{gather*}
the \textit{sheaf of relative $(p+1,q)$-forms}. By homogeneity, it is clearly enough to understand the space of sections of this sheaf over the open set $\mathcal{Y}$ from Section~\ref{section projections}. Given $\omega\in\mathcal{E}^{p,q}_\eta(\mathcal{Y})$, it is easy to see that $\omega$ has a unique representative which can be written in the form
\begin{gather}\label{relative form in coordinates}
\sideset{}{'}\sum_{|I|=p} dz_I\wedge \omega_I,
\end{gather}
where $\Sigma'$ denotes that the summation is performed only over strictly increasing multi-indices\footnote{We order the set $I(n,k)$ lexicographically, i.e., $(\alpha,i)<(\alpha',i')$ if $\alpha<\alpha'$ or $\alpha=\alpha'$ and $i<i'$.} and each $\omega_I\in\mathcal{E}^{0,q}_\mathfrak{q}(\mathcal{Y})$. As $\eta$ is holomorphic, $\partial$ and $\bar\partial$ commute with the pullback map $\eta^\ast$.
We see that $\partial(\eta^\ast\mathcal{E}^{1,0}_\mathfrak{r}\wedge\mathcal{E}_\mathfrak{q}^{p,q})\subset \eta^\ast\mathcal{E}^{1,0}_\mathfrak{r}\wedge\mathcal{E}_\mathfrak{q}^{p+1,q}$ and $\bar\partial(\eta^\ast\mathcal{E}^{1,0}_\mathfrak{r}\wedge\mathcal{E}_\mathfrak{q}^{p,q})\subset \eta^\ast\mathcal{E}^{1,0}_\mathfrak{r}\wedge\mathcal{E}_\mathfrak{q}^{p,q+1}$ and thus, $\partial$ and $\bar\partial$ descend to dif\/ferential operators
\begin{gather*}
\partial_\eta\colon \ \mathcal{E}^{p,q}_\eta\rightarrow\mathcal{E}^{p+1,q}_\eta\qquad \mathrm{and}\qquad \bar\partial\colon \ \mathcal{E}^{p,q}_\eta\rightarrow\mathcal{E}^{p,q+1}_\eta,
\end{gather*}
respectively. From the def\/initions it easily follows that
\begin{gather}\label{properties of relative dbar}
\partial_\eta(\omega\wedge\omega')=(\partial_\eta\omega)\wedge\omega'+(-1)^{p+q}\omega\wedge\partial_\eta\omega',\\
\partial_\eta f=\sum_{(\alpha,i)\in I(n,k)}\frac{\partial f}{\partial z_{\alpha i}}dz_{\alpha i},\nonumber
\end{gather}
where $\omega\in\mathcal{E}^{p,q}_\eta(\mathcal{Y})$, $\omega'\in\mathcal{E}^{p',q'}_\eta(\mathcal{Y})$ and $f\in\mathcal{E}_\mathfrak{q}(\mathcal{Y})$. Recall from Section \ref{section filtration of TM and TCS} that $\partial_{z_{\alpha i}}\in\Gamma(E^{1,0}|_{\mathcal{Y}})$ and thus, $\partial_\eta f(x)$, $x\in\mathcal{Y}$, depends only on the f\/irst weighted jet~$\mathfrak{j}^1_xf$ of $f$ at~$x$ (see Section~\ref{section wdo}). Recall from (\ref{E and F as kernels of tangent maps}) that the distribution $E^{CS}$ is equal to $\ker(T\eta)$.

\begin{Proposition}\label{thm double complex relative forms}\quad
\begin{enumerate}\itemsep=0pt
\item[$(i)$] The sheaf $\mathcal{E}^{p,q}_\eta$ is naturally isomorphic to the sheaf $\mathcal{E}^{0,q}_\mathfrak{q}(\Lambda^{p}E^{CS\ast})$ of smooth $(0,q)$-forms with values in the vector bundle $\Lambda^p E^{CS\ast}$ and $(\mathcal{E}^{p,\ast}_\eta,\bar\partial)$ is a resolution of $\mathcal{O}_\mathfrak{q}(\Lambda^p E^{CS\ast})$ by fine sheaves.
\item [$(ii)$] $\partial_\eta$ is a linear $\mathrm{G}$-invariant differential operator of weighted order one and the sequence of sheaves $(\mathcal{E}^{\ast,q}_\eta,\partial_\eta)$, $q\ge0$, is exact. \label{lemma local exactness of relative d}
\item [$(iii)$] The data $(\mathcal{E}^{p,q}_\eta,(-1)^p\bar\partial,\partial_\eta)$ define a double complex of fine sheaves with exact rows and columns.
\end{enumerate}
\end{Proposition}

\begin{proof}
$(i)$ By def\/inition, the sequence of vector bundles $0\rightarrow E^{CS\bot}\rightarrow T^\ast CS\rightarrow E^{CS\ast}\rightarrow0$ is short exact. Hence, also the sequence $0\rightarrow E^{CS\bot}\wedge\Lambda^{p}T^\ast CS\rightarrow\Lambda^{p+1} T^\ast CS\rightarrow \Lambda^{p+1}E^{CS\ast}\rightarrow0$, $p\ge1$, is short exact. In view of the isomorphism $\mathcal{E}^{p+1,q}_\mathfrak{q}\cong\mathcal{E}^{0,q}_\mathfrak{q}(\Lambda^{p+1}T^\ast CS)$, it is enough to show that $\eta^\ast\mathcal{E}^{1,0}_\mathfrak{r}\wedge\mathcal{E}_\mathfrak{q}^{p,q}$ is isomorphic to $\mathcal{E}^{0,q}_\mathfrak{q}(E^{CS\bot}\wedge\Lambda^{p}T^\ast CS)\cong\mathcal{E}_\mathfrak{q}(E^{CS\bot})\wedge\mathcal{E}^{0,q}_\mathfrak{q}(\Lambda^{p}T^\ast CS)$. Now $\eta^\ast\mathcal{E}^{1,0}_\mathfrak{r}$ is a sub-sheaf of $\mathcal{E}_\mathfrak{q}^{1,0}=\mathcal{E}_\mathfrak{q}(T^\ast CS)$ and since $\ker(T\eta)=E^{CS}$, it is contained in~$\mathcal{E}_\mathfrak{q}(E^{CS\bot})$.
Using that $\ker(T\eta)=E^{CS}$ again, it is easy to see that the map $\eta^\ast\mathcal{E}^{1,0}_\mathfrak{r}\rightarrow\mathcal{E}_\mathfrak{q}(E^{CS\bot})$ induces an isomorphism of stalks at any point. Hence, $\eta^\ast\mathcal{E}^{1,0}_\mathfrak{r}\cong\mathcal{E}_\mathfrak{q}(E^{CS\bot})$ and the proof of the f\/irst claim is complete. The second claim is clear.

(ii) It is clear that $\partial_\eta$ is $\mathbb{C}$-linear. It is $\mathrm{G}$-invariant as $\partial$ commutes with the pullback by any holomorphic map and $\eta$ is $\mathrm{G}$-equivariant. We already observed above that $\partial_\eta f(x)$ depends only on $\mathfrak{j}^1_xf$ when $x\in\mathcal{Y}$; by the $\mathrm{G}$-invariance of $\partial_\eta$, the same holds on all of $CS$ and thus, $\partial_\eta$ is a dif\/ferential operator of weighted order one.

It remains to check the exactness of the complex and, using the $\mathrm{G}$-invariance, it is enough to do this at points $x\in\mathcal{Y}$. By Lemma~\ref{lemma projection eta}, $\mathcal{Y}$ is biholomorphic to $\mathbb{C}^\ell$ where $\ell={m\choose2}+nk$. Hence, we can view the standard coordinates $w_1,\dots,w_\ell$ on $\mathbb{C}^\ell$ as coordinates on $\mathcal{Y}$. If $J=(j_1,\dots,j_q)$ where $j_1,\dots,j_q\in\{1,\dots,\ell\}$, then we put $d\bar w_J=d\bar w_{j_1}\wedge\cdots\wedge d\bar w_{j_q}$ and $|J|=q$. Let $\omega=\sideset{}{'}\sum\limits_{|I|=p}dz_I\wedge\omega_I\in\mathcal{E}^{p,q}_\eta(\mathcal{Y})$ be the relative form as in~(\ref{relative form in coordinates}). Then there are unique functions $f_{I,J}\in\mathcal{E}_\mathfrak{q}(\mathcal{Y})$ so that $\omega_I=\sideset{}{'}\sum\limits_{|J|=q}f_{I,J}d\bar w_J$. Assume that $\partial_\eta\omega=0$ on some open neighborhood $\mathcal{U}_x$ of $x$. This is equivalent to saying that for each increasing multi-index $J$ we have $\partial_\eta\sigma_J=0$ on $\mathcal{U}_x$, where $\sigma_J:=\sideset{}{'}\sum_{I}f_{I,J}dz_I$. Now using the same arguments as in the proof of the Dolbeault lemma, see \cite[Theorem~2.3.3]{H}, we can for each $J$ f\/ind a $(p-1,0)$-form $\phi_J$ such that $\partial_\eta\phi_J=\sigma_J$ on some open neighborhood of~$x$. Then $\partial_\eta\big(\sideset{}{'}\sum_J\phi_J\wedge d\bar w _J\big)=\sideset{}{'}\sum_J\sigma_J\wedge d\bar w_J=\omega$ on some open neighborhood of~$x$ and the proof is complete.

(iii) This follows from $[\bar\partial,\partial_\eta]=0$ and the observations made above.
\end{proof}

\subsection{Relative de Rham complex}\label{section relative de Rham}

By def\/inition, $\Omega^\ast_\eta:=\mathcal{E}^{\ast,0}_\eta\cap\ker\bar\partial$ is a~sheaf of holomorphic sections of the bundles $\Lambda^\ast E^{CS\ast}$. Since $[\bar\partial,\partial_\eta]=0$, there is a~complex of sheaves $(\Omega^\ast_\eta,\partial_\eta)$ and we call it the \textit{relative de Rham complex}.

\begin{Proposition}\label{thm relative de Rham}\quad
\begin{enumerate}\itemsep=0pt
\item [$(i)$] The relative de Rham complex is an exact sequence of sheaves which resolves the sheaf $\eta^{-1}\mathcal{O}_\mathfrak{r}$.
\item [$(ii)$] The relative de Rham complex induces for each $r:=\ell+j\ge0$ a long exact sequence of vector bundles
\begin{gather}
\mathfrak{gr}^{\ell+j}\xrightarrow{\mathfrak{gr} \partial_\eta}E^{CS\ast}\otimes\mathfrak{gr}^{\ell+j-1} \rightarrow\cdots\nonumber\\
\hphantom{\mathfrak{gr}^{\ell+j}}{} \rightarrow\Lambda^jE^{CS\ast}\otimes\mathfrak{gr}^{\ell} \xrightarrow{\mathfrak{gr}\partial_\eta}\Lambda^{j+1}E^{CS\ast}\otimes\mathfrak{gr}^{\ell-1} \rightarrow\cdots.
\label{first exact formal complex of relative de Rham}
\end{gather}
Let $s_0>0$, $s_1,s_2,s_3\ge0$ be integers such that $s_0+s_1+2s_2+3s_3=r$. Then the sequence~\eqref{first exact formal complex of relative de Rham} contains a long exact subsequence
\begin{gather}\label{second exact formal complex of relative de Rham}
0\rightarrow {S}^{s_0}E^{CS\ast}\otimes S^{s_1,s_2,s_3}\rightarrow E^{CS\ast}\otimes {S}^{s_0-1}E^{CS\ast}\otimes S^{s_1,s_2,s_3}\rightarrow\cdots \\
\hphantom{0} \rightarrow\Lambda^jE^{CS\ast}\otimes {S}^{s_0-j}E^{CS\ast}\otimes S^{s_1,s_2,s_3}\!\rightarrow \Lambda^{j+1}E^{CS\ast}\otimes {S}^{s_0-j-1}E^{CS\ast}\otimes S^{s_1,s_2,s_3}\!\rightarrow\cdots,\nonumber
\end{gather}
where ${S}^{s_1,s_2,s_3}:={S}^{s_1}F^{CS\ast}\otimes {S}^{s_2}G^{CS}_2\otimes {S}^{s_3}G^{CS}_3$.
\item [$(iii)$] The kernel of the first map in \eqref{first exact formal complex of relative de Rham} is $\bigoplus_{s_1+2s_2+3s_3=r}{S}^{s_1,s_2,s_3}$.
\end{enumerate}
\end{Proposition}

\begin{proof}
(i) Since $[\partial_\eta,\bar\partial]=0$, the relative de Rham complex is a sub-complex of the zero-th row $(\mathcal{E}_\eta^{\ast,0},\partial_\eta)$ of the double complex from Proposition~\ref{thm double complex relative forms}. By diagram chasing and using the exactness of columns and rows in the double complex, one easily proves the exactness of the relative de Rham complex. By~(\ref{properties of relative dbar}), it easily follows that $\eta^{-1}\mathcal{O}_\mathfrak{r}=\mathcal{E}^{0,0}_\eta\cap \ker(\partial_\eta)\cap\ker(\bar\partial)$.

(ii) The standard de Rham complex induces the Spencer complex (see \cite{Sp}), which is known to be exact. As the complex $(\Omega^\ast_\eta,\partial_\eta)$ is just a relative version of the (holomorphic) de Rham complex and $\partial_\eta$ satisf\/ies the usual properties of $\partial$, it is clear that the relative de Rham complex induces for each ${s_0}>0$, $s_1$, $s_2$ and $s_3$ the long exact sequence~(\ref{second exact formal complex of relative de Rham}). The sequence~(\ref{first exact formal complex of relative de Rham}) is the direct sum of all such sequences as ${s_0}$, $s_1$, $s_2$ and $s_3$ range over all quadruples of non-negative integers satisfying $r={s_0}+s_1+2s_2+3s_3$.

(iii) This readily follows from part (ii).
\end{proof}

\subsection{Twisted relative de Rham complex}\label{section relative twisted double complex}

The weight $\lambda:=(1-2n)\omega_m$ is $\mathfrak{g}$-integral and $\mathfrak{r}$-dominant. Hence, there is an irreducible $\mathrm{R}$-module $\mathbb{W}_\lambda$ with lowest weight $-\lambda$. Since $\mathfrak{r}$ is associated to $\{\alpha_m\}$, it follows that $\dim\mathbb{W}_\lambda=1$ and so $\mathbb{W}_\lambda$ is also an irreducible $\mathrm{Q}$-module. We will denote by $\mathcal{E}_\mathfrak{q}(\lambda)$ and $\mathcal{O}_\mathfrak{q}(\lambda)$ the sheaves of smooth and holomorphic sections of $W_\lambda^{CS}:=\mathrm{G}\times_\mathrm{Q}\mathbb{W}_\lambda$, respectively. If $W$ is a vector bundle over $CS$, then we denote $W(\lambda):=W\otimes W_\lambda^{CS}$, i.e., we twist $W$ by tensoring with the line bundle~$W_\lambda^{CS}$. It is not hard to see that $\eta^\ast\mathcal{E}_\mathfrak{r}(\lambda)\cong\mathcal{E}_\mathfrak{q}(\lambda)$ and $\eta^\ast\mathcal{O}_\mathfrak{r}(\lambda)\cong\mathcal{O}_\mathfrak{q}(\lambda)$ where we denote by the subscript $\mathfrak{r}$ the corresponding sheaves over~$TS$.
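In terms of the basis $\{\epsilon_1,\dots,\epsilon_m\}$ from Section~\ref{section lie algebra}, the weight $\lambda$ is simply
\begin{gather*}
\lambda=(1-2n)\omega_m=\bigg(\frac{1-2n}{2},\dots,\frac{1-2n}{2}\bigg),
\end{gather*}
so $\lambda_i-\lambda_{i+1}=0\in\mathbb{N}_0$ for all $i=1,\dots,m-1$ and the $\mathfrak{r}$-dominance claimed above is immediate from Table~\ref{table dominant weights}.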
We call $\mathcal{E}^{p,q}_\eta(\lambda):=\mathcal{E}^{p,q}_\eta\otimes_{\eta^{-1}\mathcal{E}_\mathfrak{r}}\eta^{-1}\mathcal{E}_\mathfrak{r}(\lambda)$ the \textit{sheaf of twisted relative $(p,q)$-forms}. Consider the following sequence of canonical isomorphisms:
\begin{gather}
\mathcal{E}^{p,q}_\eta(\lambda)\rightarrow\mathcal{E}^{p,q}_\eta\otimes_{\mathcal{E}_\mathfrak{q}} \mathcal{E}_\mathfrak{q}\otimes_{\eta^{-1}\mathcal{E}_\mathfrak{r}}\eta^{-1}\mathcal{E}_\mathfrak{r} (\lambda)\rightarrow\mathcal{E}^{p,q}_\eta\otimes_{\mathcal{E}_\mathfrak{q}}\eta^\ast\mathcal{E}_\mathfrak{r}(\lambda)\nonumber\\
\hphantom{\mathcal{E}^{p,q}_\eta(\lambda)}{} \rightarrow\mathcal{E}^{p,q}_\eta\otimes_{\mathcal{E}_\mathfrak{q}}\mathcal{E}_\mathfrak{q}(\lambda) \rightarrow\mathcal{E}^{0,q}_\mathfrak{q}\otimes_{\mathcal{E}_\mathfrak{q}}\big(\mathcal{E}^{p,0}_\eta \otimes_{\mathcal{E}_\mathfrak{q}}\mathcal{E}_\mathfrak{q}(\lambda)\big) \rightarrow\mathcal{E}^{0,q}_\mathfrak{q}\big(\Lambda^pE^{CS\ast}(\lambda)\big).\label{formula18-18}
\end{gather}
We see that $\mathcal{E}^{p,q}_\eta(\lambda)$ is isomorphic to the sheaf of smooth $(0,q)$-forms with values in $\Lambda^pE^{CS\ast}(\lambda)$. Hence, the Dolbeault dif\/ferential induces a dif\/ferential $\bar\partial\colon \mathcal{E}^{p,q}_\eta(\lambda)\rightarrow\mathcal{E}^{p,q+1}_\eta(\lambda)$ and a complex $(\mathcal{E}^{p,\ast}_\eta(\lambda),\bar\partial)$.

A section of $\mathcal{E}^{p,q}_\eta(\lambda)$ is by def\/inition a f\/inite sum of decomposable elements $\omega\otimes v$ where $\omega$ and $v$ are sections of $\mathcal{E}^{p,q}_\eta$ and $\eta^{-1}\mathcal{E}_\mathfrak{r}(\lambda)$, respectively. As any section of $\eta^{-1}\mathcal{E}_\mathfrak{r}$, and in particular any transition function between local sections of $\eta^{-1}\mathcal{E}_\mathfrak{r}(\lambda)$, belongs to $\ker(\partial_\eta)$, it follows that there is a unique linear dif\/ferential operator
\begin{gather*}\label{twisted relative de Rham operator}
\mathcal{E}^{p,q}_\eta(\lambda)\rightarrow\mathcal{E}^{p+1,q}_\eta(\lambda),
\end{gather*}
which satisf\/ies $\omega\otimes v\mapsto\partial_\eta\omega\otimes v$. We denote this operator also by $\partial_\eta$ as there is no risk of confusion. It is clear that $\partial_\eta$ is a linear $\mathrm{G}$-invariant dif\/ferential operator of weighted order one.

\begin{Proposition}\label{thm double complex I}
Let $p,q\ge0$ be integers.
\begin{enumerate}\itemsep=0pt
\item[$(i)$] The sequence of sheaves $(\mathcal{E}^{p,\ast}_\eta(\lambda),\bar\partial)$ is exact.
\item [$(ii)$] The sequence of sheaves $(\mathcal{E}^{\ast,q}_\eta(\lambda),\partial_\eta)$ is exact.
\item [$(iii)$] There is a double complex $(\mathcal{E}_\eta^{p,q}(\lambda),\partial_\eta,(-1)^p\bar{\partial})$ of fine sheaves with exact rows and columns.
\end{enumerate}
\end{Proposition}

\begin{proof}
(i) By construction, the sequence is a Dolbeault complex and the claim follows.

(ii) The exactness follows immediately from Proposition \ref{thm double complex relative forms}(ii).

(iii) We need to verify that $[\bar\partial,\partial_\eta]=0$. To see this, notice that a section of $\mathcal{E}^{p,q}_\eta(\lambda)$ can be locally written as a f\/inite sum of elements as above with $v$ holomorphic. The claim then easily follows from Proposition~\ref{thm double complex relative forms}(iii).
\end{proof}

Put $\Omega^{\ast}_\eta(\lambda):=\mathcal{E}^{\ast,0}_\eta(\lambda)\cap\ker(\bar\partial)$.
The complex $(\mathcal{E}^{\ast,0}_\eta(\lambda),\partial_\eta)$ contains a sub-complex $(\Omega^\ast_\eta(\lambda),\partial_\eta)$ which we call the \textit{twisted relative de Rham complex}. As in Proposition~\ref{thm relative de Rham}, one can easily see that $(\Omega_\eta^\ast(\lambda),\partial_\eta)$ is an exact sequence of sheaves of holomorphic sections. Following the proof of Proposition~\ref{thm relative de Rham}, we obtain the following:

\begin{Proposition}\label{thm relative twisted de Rham}
The twisted relative de Rham complex $(\Omega_\eta^\ast(\lambda),\partial_\eta)$ induces for each $r\ge0$ a long exact sequence of vector bundles
\begin{gather}\label{first exact formal complex of twisted relative de Rham}
\big(\Lambda^\bullet E^{CS\ast}\otimes\mathfrak{gr}^{r-\bullet}(\lambda),\mathfrak{gr}\partial_\eta\big).
\end{gather}
Let $s_0>0$, $s_1,s_2,s_3\ge0$ be integers such that $s_0+s_1+2s_2+3s_3=r$. Then the sequence~\eqref{first exact formal complex of twisted relative de Rham} contains a long exact subsequence
\begin{gather}\label{second exact formal complex of twisted relative de Rham}
\big(\Lambda^\bullet E^{CS\ast}\otimes {S}^{s_0-\bullet}E^{CS\ast}\otimes {S}^{s_1,s_2,s_3}(\lambda),\mathfrak{gr}\partial_\eta\big),
\end{gather}
where ${S}^{s_1,s_2,s_3}$ is defined in Proposition~{\rm \ref{thm relative de Rham}}. The kernel of the first map in~\eqref{first exact formal complex of twisted relative de Rham} is the bundle $\bigoplus_{s_1+2s_2+3s_3=r}{S}^{s_1,s_2,s_3}(\lambda)$.
\end{Proposition}

\subsection{Relative BGG sequence}\label{section relative BGG sequence}

We know that $\Omega^p_\eta(\lambda)$ is isomorphic to the sheaf of holomorphic sections of $\Lambda^pE^{CS\ast}(\lambda)=\mathrm{G}\times_\mathrm{Q}(\Lambda^p\mathbb{E}^{\ast}\otimes\mathbb{W}_\lambda)$. The $\mathrm{Q}$-module $\Lambda^p\mathbb{E}^{\ast}$ is not irreducible. Decomposing this module into irreducible $\mathrm{Q}$-modules, we obtain from the twisted relative de Rham complex a relative BGG sequence and this will be crucial in the construction of the $k$-Dirac complexes. We will use the notation from Section~\ref{section the relative Weyl group}.

\begin{Proposition}\label{thm relative bgg sequence}
Let $a\in\mathbb{N}^{k,n}_{++}$ and $w_a\in W^\mathfrak{q}_\mathfrak{r}$ be as in Section~{\rm \ref{section the relative Weyl group}}. Then
\begin{gather}\label{decomposition of skew symmetric powers}
\Lambda^p\mathbb{E}^\ast\otimes\mathbb{W}_\lambda= \!\!\bigoplus_{a\in\mathbb{N}^{k,n}_{++}:|a|=p} \!\!\mathbb{W}_{\lambda_a}\qquad \text{and thus}\qquad \Omega^{p}_\eta(\lambda)= \!\!\bigoplus_{a\in\mathbb{N}^{k,n}_{++}:|a|=p} \!\!\mathcal{O}_\mathfrak{q}(\lambda_a),
\end{gather}
where $\mathbb{W}_{\lambda_a}$ is an irreducible $\mathrm{Q}$-module with lowest weight $-\lambda_a:=-w_a.\lambda$. If $|a|=p$ and $a'\in\mathbb{N}^{k,n}_{++}$ satisfies $|a'|=p+1$, then there is a linear $\mathrm{G}$-invariant differential operator
\begin{gather}\label{dif op in BGG}
\partial_{a'}^a\colon \ \mathcal{O}_\mathfrak{q}(\lambda_a)\rightarrow\Omega^p_\eta(\lambda)\xrightarrow{\partial_\eta}\Omega^{p+1}_\eta(\lambda)\rightarrow\mathcal{O}_\mathfrak{q}(\lambda_{a'})
\end{gather}
where the first map is the canonical inclusion and the last map is the canonical projection. If $a\nless a'$, then $\partial_{a'}^a=0$.
\end{Proposition}

\begin{proof}
Recall from Section \ref{section lie algebra} that the semi-simple part $\mathfrak{r}_0^{ss}$ of $\mathfrak{r}_0$ is isomorphic to $\mathfrak{sl}(m,\mathbb{C})$ and that $\mathfrak{r}_0^{ss}\cap\mathfrak{q}$ is a parabolic subalgebra of~$\mathfrak{r}_{0}^{ss}$.
The direct sum decomposition from~(\ref{decomposition of skew symmetric powers}) then follows at once from Kostant's version of the Bott--Borel--Weil theorem (see \cite[Theorem~3.3.5]{CS}) applied to $\mathbb{W}_\lambda$ and $(\mathfrak{r}_0^{ss},\mathfrak{r}_0^{ss}\cap\mathfrak{q})$ and the identity $\ell(w_a)=|a|$ from Lemma~\ref{lemma length of permutation}. Recall from \cite[Section~8.7]{BE} that the graph of the relative BGG sequence coincides with the relative Hasse graph~$W^\mathfrak{q}_\mathfrak{r}$. The last claim then follows from Lemma~\ref{lemma relative Weyl group}.
\end{proof}

\begin{Remark}\label{remark weight}
Let $a=(a_1,\dots,a_k)\in\mathbb{N}^{k,n}_{++}$ and $b=(b_1,\dots,b_n)\in\mathbb{N}^{n,k}_{++}$ be the conjugated partition. In order to compute $\lambda_a$ from Proposition~\ref{thm relative bgg sequence}, notice that
\begin{gather}\label{shifted lambda}
\lambda+\rho=\bigg(\frac{2k-1}{2},\dots,\frac{3}{2},\frac{1}{2}\,\bigg|\,-\frac{1}{2},-\frac{3}{2},\dots,\frac{-2n+1}{2}\bigg).
\end{gather}
Since $w_a(\omega_m)=\omega_m$, we have $w_a(\lambda)=\lambda$ and thus $\lambda_a=w_a(\lambda+\rho)-\rho=\lambda+w_a\rho-\rho$. By~(\ref{action on lowest form}), it follows that
\begin{gather*}
\lambda_a=\lambda+(-a_k,\dots,-a_1\,|\, b_1,b_2,\dots,b_n).
\end{gather*}
\end{Remark}

\subsection{Direct image of the relative BGG sequence}\label{section direct images}

Recall from \cite[Section 5.3]{BE} that given a $\mathfrak{g}$-integral and $\mathfrak{q}$-dominant weight $\nu$, there is at most one $\mathfrak{p}$-dominant weight in the $W^\mathfrak{q}_\mathfrak{p}$-orbit of $\nu$. If there is no $\mathfrak{p}$-dominant weight, then all direct images of $\mathcal{O}_\mathfrak{q}(\nu)$ vanish. If there is a $\mathfrak{p}$-dominant weight, say $\mu=w.\nu$ where $w\in W^\mathfrak{q}_\mathfrak{p}$, then $\tau^{\ell(w)}_\ast\mathcal{O}_\mathfrak{q}(\nu)\cong\mathcal{O}_\mathfrak{p}(\mu)$ is the unique non-zero direct image of $\mathcal{O}_\mathfrak{q}(\nu)$.

\begin{Proposition}\label{thm direct images}
Let $n\ge k\ge 2$ and $a=(a_1,\dots,a_k)\in\mathbb{N}^{k,n}_{++}$. Put $\mu^\pm:=\frac{1}{2}(-2n+1,\dots,-2n+1\,|\,1,\dots,1,\pm1)$ and $\mu_a:=\mu^\ast-(a_k,\dots,a_1\,|\,0,\dots,0)$ where $\ast=+$ if $d(a)\equiv n \pmod 2$ and $\ast=-$ otherwise. Then
\begin{gather*}
\tau_\ast^q(\mathcal{O}_\mathfrak{q}(\lambda_a)) =
\begin{cases}
\mathcal{O}_\mathfrak{p}(\mu_a),&\mathrm{if} \ a\in S^k,\ q=\ell(a):={n\choose2}-q(a),\\
\{0\},& \mathrm{otherwise}.
\end{cases}
\end{gather*}
\end{Proposition}

\begin{proof}By def\/inition, each $w\in W^\mathfrak{q}_\mathfrak{p}$ f\/ixes the f\/irst $k$ coef\/f\/icients of $\lambda_a$ and so it is enough to look at the last $n$ coef\/f\/icients. By Remark~\ref{remark weight}, it follows that
\begin{gather*}
w_a(\lambda+\rho)=\lambda_a+\rho=\bigg(\dots,i-a_i-\frac{1}{2},\dots,\frac{1}{2}-a_1\,\bigg|\, b_1-\frac{1}{2},\dots,b_j-j+\frac{1}{2},\dots\bigg),
\end{gather*}
where $b=(b_1,\dots,b_n)$ is the conjugated partition. Put $c:=(c_1,\dots,c_n)$ where $c_j:=\big|b_j-j+\frac{1}{2}\big|$. By~(\ref{simple reflections}) and Table~\ref{table dominant weights}, if $c_i= c_j$ for some $i\ne j$, then there cannot be a~$\mathfrak{p}$-dominant weight in the $W^\mathfrak{q}_\mathfrak{p}$-orbit of~$\lambda_a$. If $a\not\in S^k$, then by Lemma~\ref{lemma symmetric YD} there is $s\in\{0,\dots,k-1\}$ such that\footnote{Notice that at this point we need that $n\ge k$.} $w_a(k-s)=k+i> k$ and $w_a(k+s+1)=k+j> k$ for some distinct positive integers~$i$ and~$j$.
By~(\ref{shifted lambda}), it follows that $b_i-i+\frac{1}{2}=\frac{1}{2}+s$, $b_j-j+\frac{1}{2}=-\frac{1}{2}-s$ and thus $c_i=c_j$. Hence, all direct images of $\mathcal{O}_\mathfrak{q}(\lambda_a)$ are zero. We may now suppose that $a\in S^k$. By the definition of $d(a)$, we have $b_{d(a)}-d(a)+\frac{1}{2}>0>b_{d(a)+1}-d(a)-\frac{1}{2}$. By Lemma \ref{lemma symmetric YD} and~(\ref{shifted lambda}), it follows that $(c_1,\dots,c_n)$ is a permutation of $\big(\frac{2n-1}{2},\dots,\frac{3}{2},\frac{1}{2}\big)$. As $\lambda_a$ is $\mathfrak{q}$-dominant, we know that the sequence $\big(b_1-\frac{1}{2},\dots,b_j-j+\frac{1}{2},\dots\big)$ is decreasing. Thus for each integer $i=1,\dots,d(a)$, the set $\{j\colon i<j\le n,\ c_i>c_j\}$ contains precisely $b_i-i$ distinct elements. Altogether, there are precisely $\sum\limits_{\ell=1}^{d(a)}(b_\ell-\ell)=q(a)$ pairs $i<j$ such that $c_i>c_j$. Equivalently, there are $\ell(a)$ pairs $i<j$ such that $c_i<c_j$. It follows that the length of the permutation that maps $(c_1,\dots,c_n)$ to $\big(\frac{2n-1}{2},\dots,\frac{3}{2},\frac{1}{2}\big)$ is precisely $\ell(a)$. Now it is easy to see (recall~(\ref{simple reflections})) that there is $w\in W^\mathfrak{q}_\mathfrak{p}$ such that $w.\lambda_a$ is $\mathfrak{p}$-dominant and $\ell(w)=\ell(a)$. As there are $n-d(a)$ negative numbers in the sequence $\big(b_1-\frac{1}{2},\dots,b_j-j+\frac{1}{2},\dots\big)$, the last claim about the sign of the last coefficient of $w.\lambda_a$ also follows. This completes the proof.\end{proof} \begin{Remark}\label{remark PT for other Dirac} In Proposition~\ref{thm direct images} we recovered the $W^\mathfrak{p}$-orbit of the singular weight $\mu^+$ if $n$ is even and of $\mu^-$ if $n$ is odd, which was computed in \cite{F}. There is an automorphism of $\mathfrak{g}$ which swaps $\alpha_m$ and $\alpha_{m-1}$ and hence also the associated parabolic subalgebras. If we cross in~(\ref{double fibration diagram I}) the simple root $\alpha_{m-1}$ instead of $\alpha_m$, take $(1-2n)\omega_{m-1}$ as $\lambda$ and follow the computations given above, we will get the~$W^\mathfrak{p}$-orbit of $\mu^+$ if~$n$ is odd and of $\mu^-$ if $n$ is even. As all other arguments presented in this paper also work for the other case, we will obtain the other ``half'' of the $k$-Dirac complex from \cite{TS}, as mentioned in the Introduction. \end{Remark} \subsection{Double complex of relative forms II}\label{section relative double complex II} The direct sum decomposition from Proposition \ref{thm relative bgg sequence} together with the isomorphism in~\eqref{formula18-18} gives a direct sum decomposition $\mathcal{E}^{p,q}_\eta(\lambda)=\bigoplus_{a\in\mathbb{N}^{k,n}_{++}:|a|=p}\mathcal{E}^{0,q}_\mathfrak{q}(\lambda_a)$. Let $a,a'\in\mathbb{N}^{k,n}_{++}$ be such that $p=|a|=|a'|-1$. Then there is a linear differential operator \begin{gather}\label{relative differential in double complex} \partial_{a'}^a\colon \ \mathcal{E}^{0,q}_\mathfrak{q}(\lambda_a)\rightarrow\mathcal{E}^{p,q}_\eta(\lambda)\xrightarrow{\partial_\eta}\mathcal{E}^{p+1,q}_\eta(\lambda)\rightarrow\mathcal{E}^{0,q}_\mathfrak{q}(\lambda_{a'}), \end{gather} where the first map is the canonical inclusion and the last map is the canonical projection as in (\ref{dif op in BGG}). We denote this differential operator by $\partial_{a'}^a$ as in (\ref{dif op in BGG}), as there is no risk of confusion. Recall from Proposition \ref{thm relative bgg sequence} that $\partial^a_{a'}=0$ if $a\nless a'$.
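To fix ideas, let us illustrate this numerology on a small example; everything below follows directly from Remark~\ref{remark weight} and Proposition~\ref{thm direct images}. Take $k=2$, $n=3$ and the symmetric partition $a=(2,1)\in S^2$ with conjugated partition $b=(2,1,0)$. Then $|a|=3$, $d(a)=1$, $q(a)=b_1-1=1$ and $\ell(a)={3\choose2}-q(a)=2$, and since $d(a)\equiv n\mod 2$, we get \begin{gather*} \lambda_a=\lambda+(-1,-2\,|\,2,1,0)\qquad\text{and}\qquad \mu_a=\mu^+-(1,2\,|\,0,0,0)=\big(-\tfrac{7}{2},-\tfrac{9}{2}\,\big|\,\tfrac{1}{2},\tfrac{1}{2},\tfrac{1}{2}\big). \end{gather*} Hence $\tau^{2}_\ast(\mathcal{O}_\mathfrak{q}(\lambda_a))\cong\mathcal{O}_\mathfrak{p}(\mu_a)$ is the unique non-zero direct image of $\mathcal{O}_\mathfrak{q}(\lambda_a)$.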
Suppose that $\mathcal U$ is an open, contractible and Stein subset of $M$. We put $\mathcal{E}^{p,q}_\eta(\tau^{-1}(\mathcal U),\lambda):=\Gamma(\tau^{-1}(\mathcal U),\mathcal{E}^{p,q}_\eta(\lambda))$, i.e., this is the space of sections of the sheaf $\mathcal{E}^{p,q}_\eta(\lambda)$ over $\tau^{-1}(\mathcal U)$. Then there is a double complex \begin{gather}\label{double complex II}\begin{split} \xymatrix{&&&\\ \cdots\ar[r]& \mathcal{E}^{p,q+1}_\eta(\tau^{-1}(\mathcal U),\lambda)\ar[r]^{D''}\ar[u] &\mathcal{E}^{p+1,q+1}_\eta(\tau^{-1}(\mathcal U),\lambda) \ar[u]\ar[r]&\cdots\\ \cdots\ar[r]&\mathcal{E}^{p,q}_\eta(\tau^{-1}(\mathcal U),\lambda)\ar[r]^{D''}\ar[u]^{D'}&\mathcal{E}^{p+1,q}_\eta(\tau^{-1}(\mathcal U),\lambda)\ar[u]^{D'}\ar[r]&\cdots,\\ &\ar[u]&\ar[u]&}\end{split} \end{gather} where $D'=(-1)^p\bar\partial$ and $D''=\partial_\eta$. Put $T^i(\mathcal U):=\bigoplus_{p+q=i}\mathcal{E}^{p,q}_\eta(\tau^{-1}(\mathcal U),\lambda)$. We obtain three complexes $(T^{\ast}(\mathcal U),D')$, $(T^{\ast}(\mathcal U),D'')$ and $(T^\ast(\mathcal U),D'+D'')$. \begin{Lemma}\label{lemma sections over stein set} Let $a\in\mathbb{N}^{k,n}_{++}$ and let $\mathcal U$ be the open, contractible and Stein subset of $M$ as above. Then \begin{gather}\label{sections over stein set} H^{q}\big(\tau^{-1}(\mathcal U),\mathcal{O}_\mathfrak{q}(\lambda_a)\big)= \begin{cases} \Gamma(\mathcal U,\mathcal{O}_\mathfrak{p}(\mu_a)),& a\in S^k, \ q=\ell(a),\\ \{0\}, & \mathrm{otherwise}, \end{cases} \end{gather} and thus also \begin{gather*} H^{{n\choose2}+j}(T^\ast(\mathcal U),D')=\bigoplus_{a\in S^k_j}H^{\ell(a)}\big(\tau^{-1}(\mathcal U),\mathcal{O}_\mathfrak{q}(\lambda_a)\big). \end{gather*} \end{Lemma} \begin{proof}The first claim follows from Proposition \ref{thm direct images} and an application of the Leray spectral sequence as explained in \cite{BE}. For the second claim, recall from \cite[Theorem~3.20]{W} that the sheaf cohomology is equal to the Dolbeault cohomology, i.e., there is an isomorphism \begin{gather}\label{computing sheaf cohomology groups} H^{q}\big(\tau^{-1}(\mathcal U),\mathcal{O}_\mathfrak{q}(\lambda_a)\big)\cong H^{q}\big(\mathcal{E}^{0,\ast}_\mathfrak{q}\big(\tau^{-1}(\mathcal U),\lambda_a\big),\bar\partial\big). \end{gather} The cohomology group appears on the $(|a|+\ell(a))=(d(a)+2q(a)+{n\choose2}-q(a))=({n\choose2}+r(a))$-th diagonal of the double complex. Here, see Proposition \ref{thm direct images}, we use that $\ell(a)={n\choose2}-q(a)$, the notation from~(\ref{abbreviations}) and $S^k_j=\{a\in S^k\colon r(a)=j\}$. \end{proof} \section[$k$-Dirac complexes]{$\boldsymbol{k}$-Dirac complexes}\label{section complex} In Section \ref{section complex} we will give the definition of the differential operators in the $k$-Dirac complexes. It will be clear from the construction that the operators are linear, local and $\mathrm{G}$-invariant. Later, in Lemma~\ref{lemma diff op on graded jets}, we will show that each operator is indeed a differential operator and we will give an upper bound on its weighted order. The operators naturally form a sequence and we will prove in Theorem~\ref{theorem complex} that they form a complex which we call the $k$-Dirac complex. Recall from Section \ref{section relative double complex II} that $\mathcal{E}^{0,q}_\mathfrak{q}(\tau^{-1}(\mathcal U),\lambda_a)$ is the space of sections of the sheaf $\mathcal{E}^{0,q}_\mathfrak{q}(\lambda_a)$ over~$\tau^{-1}(\mathcal U)$.
If $\alpha\in\mathcal{E}^{0,q}_\mathfrak{q}(\tau^{-1}(\mathcal U),\lambda_a)$ is $\bar\partial$-closed, then we will denote by $[\alpha]\in H^{q}(\tau^{-1}(\mathcal U),\mathcal{O}_\mathfrak{q}(\lambda_a))$ the corresponding cohomology class. \begin{Lemma}\label{lemma diff op} Let $j\ge0$, $a\in S^k_j$, $a'\in S^k_{j+1}$ be such that $a<a'$ and let $\mathcal{U}$ be the Stein set as above. Then there is a~linear, local and $\mathrm{G}$-invariant operator \begin{gather*} D_{a'}^a\colon \ \Gamma(\mathcal U,\mathcal{O}_\mathfrak{p}(\mu_a))\rightarrow\Gamma(\mathcal U,\mathcal{O}_\mathfrak{p}(\mu_{a'})). \end{gather*} \end{Lemma} \begin{proof} Let us for a moment put $\mathcal{V}:=\tau^{-1}(\mathcal U)$. Using the isomorphisms from~(\ref{sections over stein set}), it is enough to define a map $H^{\ell(a)}(\mathcal{V},\mathcal{O}_\mathfrak{q}(\lambda_a))\rightarrow H^{\ell(a')}(\mathcal{V},\mathcal{O}_\mathfrak{q}(\lambda_{a'}))$ which has the right properties. By assumption, we have $|a'|-|a|\in\{1,2\}$. Let us first consider $|a'|-|a|=1$. Then $q:=\ell(a')=\ell(a)$ and by (\ref{relative differential in double complex}), we have the map $\partial_{a'}^a\colon \mathcal{E}^{0,q}_\mathfrak{q}(\mathcal{V},\lambda_a)\rightarrow\mathcal{E}^{0,q}_\mathfrak{q}(\mathcal{V},\lambda_{a'})$ in the double complex (\ref{double complex II}). The induced map on cohomology is $D_{a'}^a$. If $|a'|-|a|=2$, then $q:=\ell(a)=\ell(a')+1$ and we find that there are precisely two non-symmetric partitions $b,c\in\mathbb{N}^{k,n}_{++}$ such that $a<b<a'$ and $ a<c<a'$. Then there is a diagram \begin{gather}\label{def of second order operator}\begin{split} & \xymatrix{\mathcal{E}^{0,q}_\mathfrak{q}(\mathcal{V},\lambda_a)\ar[r]^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!(\partial^a_b,\partial_c^a)}&\mathcal{E}^{0,q}_\mathfrak{q}(\mathcal{V},\lambda_b)\oplus\mathcal{E}^{0,q}_\mathfrak{q}(\mathcal{V},\lambda_c)&\\ &\mathcal{E}^{0,q-1}_\mathfrak{q}(\mathcal{V},\lambda_b)\oplus\mathcal{E}^{0,q-1}_\mathfrak{q}(\mathcal{V},\lambda_c)\ar[u]^{(-1)^p\bar\partial}\ar[r]^{ \ \ \ \ \ \ \ \ \ \partial_{a'}^b+\partial_{a'}^c}&\mathcal{E}^{0,q-1}_\mathfrak{q}(\mathcal{V},\lambda_{a'}),}\end{split} \end{gather} which lives in the double complex (\ref{double complex II}). Let $\alpha\in\mathcal{E}^{0,q}_\mathfrak{q}(\mathcal{V},\lambda_a)$ be $\bar\partial$-closed. Then $\partial_{b}^a\alpha$ and $\partial_{c}^a\alpha$ are also $\bar\partial$-closed and thus, by Lemma \ref{lemma sections over stein set} and the isomorphism~(\ref{computing sheaf cohomology groups}), we can find $\beta$ and $\gamma$ such that $\partial_{b}^a\alpha=(-1)^p\bar\partial\beta$ and $\partial_c^a\alpha=(-1)^p\bar\partial\gamma$ where $p=|a|+1$. Since the relative BGG sequence is a complex, we have \begin{gather*} \bar\partial\big(\partial_{a'}^b\beta+\partial_{a'}^c\gamma\big)=(-1)^p(\partial_{a'}^b\partial_b^a+\partial_{a'}^c\partial_c^a)\alpha=0, \end{gather*} which shows that $\partial_{a'}^b\beta+\partial_{a'}^c\gamma$ is a cocycle. Of course, this element depends on the choices made, but we claim that $[\partial_{a'}^b\beta+\partial_{a'}^c\gamma]$ depends only on $[\alpha]$. It is easy to see that $[\partial_{a'}^b\beta+\partial_{a'}^c\gamma]$ does not depend on the choices of $\beta$ and $\gamma$. If $[\alpha]=0$, say $\alpha=(-1)^{p-1}\bar\partial\varrho$, then we may put $\beta=-\partial_b^a\varrho$ and $\gamma=-\partial_c^a\varrho$ and thus $\partial_{a'}^b\beta+\partial_{a'}^c\gamma=-(\partial_{a'}^b \partial^a_b+\partial_{a'}^c \partial_c^a)\varrho=0$.
Hence, we can put $D_{a'}^a[\alpha]:=[\partial_{a'}^b\beta+\partial_{a'}^c\gamma]$. From the construction it is clear that $D_{a'}^a$ is linear. The locality follows from the fact that $D_{a'}^a$ is compatible with restrictions to smaller Stein subsets of $\mathcal{U}$. As the operators in the double complex (\ref{double complex II}) are $\mathrm{G}$-invariant, it is easy to verify that each operator $D_{a'}^a$ is $\mathrm{G}$-invariant. \end{proof} Put $\mathcal{O}_j:=\bigoplus_{a\in S^k_{j}}\mathcal{O}_\mathfrak{p}(\mu_a)$ and $\mathcal{O}_j(\mathcal{U}):=\Gamma(\mathcal{U},\mathcal{O}_j)$. If $a\in S^k_j$ and $s\in\mathcal{O}_j(\mathcal{U})$, then we denote by $s_a$ the $a$-th component of $s$, so that we may write $s=(s_a)_{a\in S^k_j}$. We call the following complex~(\ref{sequence of diff ope}) the $k$-\textit{Dirac complex}. \begin{Theorem}\label{theorem complex} With the notation set above, there is a complex \begin{gather}\label{sequence of diff ope} \mathcal{O}_0(\mathcal{U})\xrightarrow{D_0}\mathcal{O}_1(\mathcal{U})\rightarrow\cdots \rightarrow\mathcal{O}_j(\mathcal{U})\xrightarrow{D_j}\mathcal{O}_{j+1}(\mathcal{U})\rightarrow\cdots \end{gather} of linear $\mathrm{G}$-invariant operators where \begin{gather*} (D_js)_{a'}=\sum_{a<a'}D_{a'}^as_a. \end{gather*} \end{Theorem} \begin{proof} Let $a,a'\in S^k$ be such that $a<a'$ and $r(a)=r(a')-2$. We need to verify that $\sum\limits_{a''\in S^k\colon a<a''<a'}D_{a'}^{a''}D_{a''}^{a}=0$. Observe that $|a'|-|a|\in\{3,4\}$. Let us first assume that $|a'|-3=|a|$. Then there are at most two symmetric partitions $a''$ such that $a<a''<a'$. If there is only one such symmetric partition $a''$ then, since the relative BGG sequence is a~complex, it follows easily that $D_{a'}^{a''}D_{a''}^{a}=0$. So we can assume that there are two symmetric partitions, say $a''_1$, $a''_2$. Consider for example \begin{gather*}\xymatrix{&b=\sYtj\ar[rd]\ar[r]&c=\sYtd\ar[rd]&\\ a=\sYdj\ar[ru]\ar[r]\ar[rd]&a''_1=\sYdd\ar[ru]\ar[rd]&a''_2=\sYtjj\ar[r]&a'=\sYtdj.\\ &b'=\sYdjj\ar[r]\ar[ru]&c'=\sYddj\ar[ru].&} \end{gather*} Let $\alpha\in\mathcal{E}^{0,\ell(a)}_\mathfrak{q}(\tau^{-1}(\mathcal{U}),\lambda_a)$ be a $\bar\partial$-closed form representing a class in $H^{\ell(a)}(\tau^{-1}(\mathcal{U}),\mathcal{O}_\mathfrak{q}(\lambda_a))$. Then we can find $\beta$ and $\beta'$ so that $\partial_b^a\alpha=(-1)^{p}\bar\partial\beta$ and $\partial_{b'}^a\alpha=(-1)^{p}\bar\partial\beta'$ where $p=|b|=|b'|$. Then $[D_{a_2''}^a\alpha]=[\partial_{a''_2}^b\beta+\partial_{a''_2}^{b'}\beta']$, which implies \begin{gather*} (-1)^{p+1}\bar\partial\big(\partial_c^b\beta\big)=(-1)^{p+1}\partial_c^b\bar\partial\beta=-\partial_c^b \partial_b^a\alpha=\partial_{c}^{a_1''} \partial_{a_1''}^a\alpha \end{gather*} and similarly $(-1)^{p+1}\bar\partial(\partial_{c'}^{b'}\beta')=\partial_{c'}^{a_1''}\partial_{a''_1}^a\alpha$. Hence, we conclude that \begin{gather*} D^{a_1''}_{a'} D^{a}_{a_1''}[\alpha]=\big[\partial_{a'}^{c}\partial_c^b\beta+\partial_{a'}^{c'}\partial_{c'}^{b'}\beta'\big]\end{gather*} and thus \begin{gather*} D_{a'}^{a_2''} D_{a_2''}^a[\alpha]=\big[\partial_{a'}^{a_2''}\big(\partial_{a_2''}^b\beta+\partial_{a_2''}^{b'}\beta'\big)\big] =-\big[\partial_{a'}^{c}\partial_c^b\beta+\partial_{a'}^{c'}\partial_{c'}^{b'}\beta'\big]=-D^{a_1''}_{a'} D^{a}_{a_1''}[\alpha]. \end{gather*} This completes the proof when $|a'|=|a|+3$, and so we may now assume $|a'|=|a|+4$. We put $A'':=\{a''\in S^k\,|\, a<a''<a'\}$, $B:=\{b\in\mathbb{N}^{k,n}_{++}\,|\, \exists\, a''\in A''\colon a<b<a''\}$, $B':=\{b'\in\mathbb{N}^{k,n}_{++}\,|\, \exists\, a''\in A''\colon a''<b'<a'\}$ and finally $C:=\{c\in\mathbb{N}^{k,n}_{++}\setminus S^k\,|\, \exists \, b\in B,\ \exists\, b'\in B'\colon b<c<b'\}$.
Consider for example the diagram \begin{gather*}\xymatrix{ &b_1=\sYjj\ar[r]\ar[rd]&c_1=\sYjjj\ar[r]&b'_1=\sYdjj\ar[rd]\\ a=\sYj\ar[rd]\ar[ru]&&a''=\sYdj\ar[rd]\ar[ru]&&a'=\sYtjj,\\ &b_2=\sYd\ar[r]\ar[ru]&c_2=\sYt\ar[r]&b_2'=\sYtj\ar[ru]&} \end{gather*} where $A''=\{a''\}$, $B=\{b_1,b_2\}$, $B'=\{b_1',b_2'\}$ and $C=\{c_1,c_2\}$. As above, the set $A''$ contains at most two elements, but we will not need that. Now we can proceed as above. There are $\beta_i$ such that $(-1)^{p}\bar\partial\beta_i=\partial_{b_i}^a\alpha$ where $p=|b_i|=|a|+1$, and so $[D_{a''_j}^{a}\alpha]=\big[\sum\limits_{b_i\in B}\partial_{a''_j}^{b_i}\beta_i\big]$ for every $a''_j\in A''$. As the relative BGG sequence is a~complex, we have for each $c_\ell\in C$: \begin{gather*} \bar\partial\bigg(\sum_{b_i\in B}\partial_{c_\ell}^{b_i}\beta_i\bigg)=\sum_{b_i\in B}\partial_{c_\ell}^{b_i}\bar\partial\beta_i=(-1)^{p}\sum_{b_i\in B}\partial_{c_\ell}^{b_i}\partial_{b_i}^a\alpha=0. \end{gather*} As above, there is $\gamma_\ell$ such that $(-1)^{p+1}\bar\partial\gamma_\ell=\sum\limits_{b_i\in B}\partial_{c_\ell}^{b_i}\beta_i$. Then for $b'_s\in B'$: \begin{gather*} (-1)^{p+2}\bar\partial\bigg(\sum_{c_\ell\in C}\partial_{b'_s}^{c_\ell}\gamma_\ell\bigg) =-\sum_{c_\ell\in C}\partial_{b'_s}^{c_\ell}((-1)^{p+1}\bar\partial\gamma_\ell) =-\sum_{c_\ell\in C,\ b_i\in B}\partial_{b'_s}^{c_\ell}\partial_{c_\ell}^{b_i}\beta_i\\ \hphantom{(-1)^{p+2}\bar\partial\bigg(\sum_{c_\ell\in C}\partial_{b'_s}^{c_\ell}\gamma_\ell\bigg)}{} =\sum_{b_i\in B,\ a''_j\in A''\colon b_i<a''_j<b'_{s}}\partial_{b'_s}^{a''_j}\partial_{a''_j}^{b_i}\beta_i =\sum_{a''_j\in A''\colon a''_j<b'_s}\partial_{b'_s}^{a''_j}\bigg(\sum_{b_i\in B}\partial_{a''_j}^{b_i}\beta_i\bigg)\\ \hphantom{(-1)^{p+2}\bar\partial\bigg(\sum_{c_\ell\in C}\partial_{b'_s}^{c_\ell}\gamma_\ell\bigg)}{} =\sum_{a''_j\in A''\colon a''_j<b'_s}\partial_{b'_s}^{a''_j}(D_{a''_j}^a\alpha). \end{gather*} This implies that $\sum\limits_{a''_j\in A''}D_{a'}^{a''_j}D_{a''_j}^a([\alpha])$ is the cohomology class of \begin{gather*} \sum_{a''_j\in A''}\bigg(\sum_{b'_s\in B'\colon a''_j<b'_s}\partial_{a'}^{b'_s}\bigg(\sum_{c_\ell\in C}\partial_{b'_s}^{c_\ell}\gamma_\ell\bigg)\bigg) = \sum_{b'_s\in B'}\sum_{c_\ell\in C}\partial_{a'}^{b'_s}\partial_{b'_s}^{c_\ell}\gamma_\ell \\ \hphantom{\sum_{a''_j\in A''}\bigg(\sum_{b'_s\in B'\colon a''_j<b'_s}\partial_{a'}^{b'_s}\bigg(\sum_{c_\ell\in C}\partial_{b'_s}^{c_\ell}\gamma_\ell\bigg)\bigg)}{} =\sum_{c_\ell\in C}\sum_{b'_s\in B'}\partial_{a'}^{b'_s}\partial_{b'_s}^{c_\ell}\gamma_\ell=\sum_{c_\ell\in C}0=0. \end{gather*} In the first equality we use the fact that, given $b'_s\in B'$, there is only one $a''_j\in A''$ such that $a''_j<b'_s$, and in the third equality we use once more that the relative BGG sequence is a complex. \end{proof} \section[Formal exactness of $k$-Dirac complexes]{Formal exactness of $\boldsymbol{k}$-Dirac complexes}\label{section formal exactness} We will proceed in Section \ref{section formal exactness} as follows. In Section~\ref{section formal neighborhood of X_0} we will recall the definition of the normal bundle of the analytic subvariety $X_0:=\tau^{-1}(x_0)$ and give the definition of the weighted formal neighborhood of $X_0$. In Section~\ref{section formal double complex} we will consider the double complex of twisted relative forms from Section~\ref{section computing PT} and restrict it to the weighted formal neighborhood of~$X_0$.
In Section \ref{section les of weighted jets} we will prove that the operators defined in Section \ref{section complex} are differential operators and finally, in Theorem~\ref{theorem formal exactness}, we will prove that the $k$-Dirac complexes are formally exact. \subsection[Formal neighborhood of $\tau^{-1}(x_0)$]{Formal neighborhood of $\boldsymbol{\tau^{-1}(x_0)}$}\label{section formal neighborhood of X_0} Let us first recall the notation from Section \ref{section filtration of TM and TCS}. There is the 2-step filtration $\{0\}=F_0^M\subset F_{-1}^M\subset F_{-2}^M=TM$ and the 3-step filtration $\{0\}=F_0^{CS}\subset F^{CS}_{-1}\subset F^{CS}_{-2}\subset F^{CS}_{-3}=TCS$. Moreover, $F_{-1}^{CS}$ decomposes as $E^{CS}\oplus F^{CS}$ where $E^{CS}=\ker(T\eta)$ and $F^{CS}=\ker(T\tau)$. Being kernels of tangent maps of fibrations, $E^{CS}$ and~$F^{CS}$ are integrable distributions. Dually, there are filtrations $T^\ast M=F_1^M\supset F_2^M\supset F_3^M=\{0\}$ and $T^\ast CS=F_1^{CS}\supset F_2^{CS}\supset F_3^{CS}\supset F_4^{CS}=\{0\}$ where $F_i^M$ is the annihilator of $F_{-i+1}^M$ and similarly for $F_i^{CS}$. We put $G_i^M:=F_i^M/F_{i+1}^M$ and $G_i^{CS}:=F_i^{CS}/F_{i+1}^{CS}$ so that \begin{gather*} gr(TM)=G_{-2}^M\oplus G_{-1}^M,\qquad gr(T^\ast M)= G_1^M\oplus G_2^M,\\ gr(TCS)= G_{-3}^{CS}\oplus G_{-2}^{CS}\oplus G_{-1}^{CS},\qquad gr(T^\ast CS)=G_1^{CS}\oplus G_2^{CS}\oplus G_3^{CS},\\ G_i^M\cong \big(G_{-i}^M\big)^\ast,\quad i=1,2 \qquad \mathrm{and} \qquad G_i^{CS}\cong\big(G_{-i}^{CS}\big)^\ast,\quad i=1,2,3. \end{gather*} Let us now briefly recall Section~\ref{section ideal sheaf}. If $X$ is an analytic subvariety of a complex manifold $Y$, then the normal bundle $N_X$ of $X$ in $Y$ is the quotient $(TY|_X)/TX$ and the co-normal bundle $N^\ast_X$ is the annihilator of $TX$ inside $T^\ast Y|_X$. In particular, the origin $x_0$ can be viewed as an analytic subvariety of $M$ with local defining equations $X_1=0$, $X_2=0$ and $ Y=0$ where the matrices are those in~(\ref{affine subset of M}). For each $i\ge1$ there is the associated ($i$-th power of the) ideal sheaf $\mathcal{I}_{x_0}^i$. This is a sheaf of $\mathcal{O}_M$-modules such that \begin{gather*} (\mathcal{I}_{x_0})^i_x= \begin{cases} (\mathcal{O}_M)_x, & x\ne x_0,\\ \mathcal{F}^i_{x_0}, & x=x_0, \end{cases} \end{gather*} where $\mathcal{O}_M$ is the structure sheaf on $M$, $\mathcal{F}^i_{x}=\{f\in(\mathcal{O}_M)_x\colon j^i_xf=0\}$ and the subscript $x$ stands for the stalk at $x\in M$ of the corresponding sheaf. Also recall from Section~\ref{section wdo} the definition of weighted jets. For each $i\ge0$, there is a short exact sequence of vector spaces \begin{gather}\label{ses of weighted jets} 0\rightarrow\mathfrak{F}^{i+1}_{x_0}\rightarrow\mathfrak{F}^i_{x_0}\rightarrow\mathfrak{gr}^{i+1}_{x_0}\rightarrow0, \end{gather} where $\mathfrak{F}^i_{x_0}:=\{f\in\mathcal{O}_{x_0}\colon \mathfrak{j}^i_{x_0}f=0\}$. We will view~(\ref{ses of weighted jets}) also as a short exact sequence of sheaves over~$\{x_0\}$. Put $X_0:=\tau^{-1}(x_0)$. Recall from Lemma~\ref{lemma fibers} that $X_0$ is a complex manifold which is biholomorphic to the connected component $\mathrm{Gr}^+_h(n,n)$ of $\mathbb{C}^{n}$ in the Grassmannian of maximal totally isotropic subspaces in~$\mathbb{C}^{2n}$. \begin{Remark}\label{remark notation over X_0} If $V^{CS}$ is a holomorphic vector bundle over $CS$, we will for brevity put $V:=V^{CS}|_{X_0}$. We also put $\tau_0:=\tau|_{X_0}$.
\end{Remark} \begin{Lemma}\label{lemma analytic subvariety X_0}\quad \begin{enumerate}\itemsep=0pt \item [$(i)$] $X_0$ is a closed analytic subvariety of $CS$ and there is an isomorphism of sheaves $\mathcal{I}_{X_0}\cong\tau^\ast\mathcal{I}_{x_0}$. \item [$(ii)$] There is an isomorphism of vector bundles\footnote{Here we use notation set in Remark \ref{remark notation over X_0}.} $TX_0\cong F$. \item [$(iii)$] The normal bundle $N$ of $X_0$ in $CS$ is isomorphic to $\tau_0^{\ast} T_{x_0}M$. In particular, $N$ is a trivial holomorphic vector bundle. \end{enumerate} \end{Lemma} \begin{proof}\label{lemma X0} (i) By Lemma \ref{lemma set tau^{-1}(U)}, $\tau^{-1}(\mathcal{X})=\mathcal{X}\times X_0$ where $\mathcal{X}=(\textbf{p}\circ\exp)(\mathfrak{g}_-)$. From this the claim easily follows. (ii) As $X_0=\tau^{-1}(x_0)$, it is clear that $TX_0=\ker(T\tau)|_{X_0}$. But we know that $\ker(T\tau)=F$ and the claim follows. (iii) By definition, $\tau^\ast_0 T_{x_0}M=\{(x,v)\,|\, x\in X_0,\ v\in T_{x_0}M\}$. Hence, there is an obvious projection $TCS|_{X_0}\rightarrow\tau^\ast_0 T_{x_0}M$, $(x,v)\mapsto (x,T_x\tau(v))$, which descends to an isomorphism $N\rightarrow\tau^\ast_0 T_{x_0}M$. \end{proof} Recall now the linear isomorphisms $\mathfrak{g}_{i}\cong (G_i^M)_{x_0}$, $i=-2,-1,1,2$, from~(\ref{linear isomorphisms over x0}). In particular, we can view $\mathfrak{g}_i$ as the fiber of $G_i^M$ over $\{x_0\}$ and thus also as a vector bundle over $\{x_0\}$. We use this point of view in the following definition. \begin{Definition}\label{df weighted co-normal neigh} Put $N^\ast_i:=\tau^\ast_0 \mathfrak{g}_i$, $i=1,2$ and $ \mathfrak{S}^\ell N^\ast:=\tau^\ast_0\mathfrak{gr}^{\ell}$, $\ell=0,1,2,\dots$. \end{Definition} Notice that $N^\ast_i$, $i=1,2$ and $\mathfrak{S}^\ell N^\ast$, $\ell\ge0$ are by definition trivial holomorphic vector bundles over~$X_0$. Recall from the end of Section~\ref{section wdo} that $gr^\ell_{x_0}$ is the subspace of~$\mathfrak{gr}^\ell_{x_0}$ that is isomorphic to~$ S^\ell\mathfrak{g}_1$. \begin{Lemma} The co-normal bundle $N^\ast $ of $X_0$ in $CS$ is isomorphic to $\tau^\ast_0 T^\ast_{x_0}M$ and the bundle~$N^\ast_2$ is isomorphic to~$G_3$. There are short exact sequences of vector bundles \begin{gather}\label{ses with conormal bdle} 0\rightarrow N^\ast_2\rightarrow N^\ast\rightarrow N^\ast_1\rightarrow0\qquad \text{and}\qquad 0\rightarrow G_2\rightarrow N^\ast_1\rightarrow E^{\ast}\rightarrow0 \end{gather} over $X_0$. Moreover, for each $\ell\ge0$ there are isomorphisms of vector bundles \begin{gather}\label{weighted formal neighborhood} \mathfrak{S}^\ell N^\ast=\bigoplus_{\ell_1+2\ell_2=\ell } {S}^{\ell_1}N^\ast_1\otimes {S}^{\ell_2}N^\ast_2 \qquad \text{and}\qquad S^\ell N^\ast_1=\tau^\ast_0 gr^\ell_{x_0}. \end{gather} \end{Lemma} \begin{proof} There is a canonical injective vector bundle map $\tau^\ast_0 T^\ast_{x_0}M\rightarrow T^\ast CS$ and a moment of thought shows that its image is contained in $N^\ast$. By comparing the dimensions of both vector bundles, we have $\tau^\ast_0 T^\ast_{x_0}M\cong N^\ast$ and thus the first claim. It is clear that $N_2^\ast=\tau^\ast_0\mathfrak{g}_2$ is the annihilator of $F_{-2}=(T\tau)^{-1}(\mathfrak{g}_{-1})$ and since $G_3=F_{-2}^\perp$, the second claim follows. The first sequence in (\ref{ses with conormal bdle}) is the pullback of the short exact sequence $0\rightarrow \mathfrak{g}_2\rightarrow T^\ast_{x_0}M\rightarrow \mathfrak{g}_1\rightarrow0$ and thus, it is short exact.
The exactness of the latter sequence follows from the exactness of $0\rightarrow G^{CS}_2\rightarrow F^\perp/G^{CS}_3\rightarrow E^{CS\ast}\rightarrow0$ and the isomorphisms $N^\ast\cong F^\perp$, $G_3\cong N_2^\ast$ and $N_1^\ast\cong N^\ast/N_2^\ast$. The isomorphisms in (\ref{weighted formal neighborhood}) follow immediately from the definitions and the isomorphism (\ref{weighted jets at x_0}). \end{proof} We know that $\mathfrak{S}^\ell N^\ast$ is a trivial holomorphic vector bundle over the compact base $X_0$. It follows that any global holomorphic section of $\mathfrak{S}^\ell N^\ast$ is a constant $\mathfrak{gr}^{\ell}$-valued function on $X_0$ and that $\mathfrak{S}^\ell N^\ast$ is trivialized by such sections. The same is obviously true also for $S^\ell N^\ast_1$. Let us formulate this as a lemma. \begin{Lemma}\label{lemma global hol sections over X_0} The holomorphic vector bundles $\mathfrak{S}^\ell N^\ast$ and $ S^\ell N^\ast_1$ are trivial and there are canonical isomorphisms $\mathfrak{gr}^{\ell}_{x_0}\rightarrow\Gamma(\mathcal{O}(\mathfrak{S}^\ell N^\ast))$ and $gr^\ell_{x_0}\rightarrow\Gamma(\mathcal{O}({S}^\ell N^\ast_1))$ of finite-dimensional vector spaces. \end{Lemma} Let us finish this section by recalling the concept of formal neighborhoods (see \cite{B,WW}). Let $\iota_0\colon X_0\hookrightarrow CS$ be the inclusion. Then $\mathcal{F}_{X_0}:=\iota^{-1}_0\mathcal{I}_{X_0}$ is a sheaf of $\mathcal{O}_{X_0}$-modules whose stalk at $x\in X_0$ is the space of germs of holomorphic functions which are defined on some open neighborhood $\mathcal{V}$ of $x$ in $CS$ and which vanish on $\mathcal{V}\cap X_0$. Let us now view the vector space $\mathcal{F}_{x_0}=\{f\in(\mathcal{O}_M)_{x_0}\colon f(x_0)=0\}$ also as a sheaf over $\{x_0\}$. Then (recall Lemma~\ref{lemma analytic subvariety X_0}) it is easy to see that $\mathcal{F}_{X_0}=\tau^\ast_0\mathcal{F}_{x_0}$. Observe that $\Gamma(\mathcal{F}_{X_0})$ is the space of equivalence classes of holomorphic functions which are defined on an open neighborhood of $X_0$ in $CS$, where two such functions belong to the same equivalence class if they agree on some possibly smaller open neighborhood of~$X_0$. The infinite-dimensional vector spaces from (\ref{ses of weighted jets}) form a decreasing filtration $\cdots\subset\mathfrak{F}^{i+1}_{x_0}\subset\mathfrak{F}^{i}_{x_0}\subset\cdots$ of $\mathcal{F}_{x_0}=\mathfrak{F}^0_{x_0}$. Then $\mathfrak{F}^i_{X_0}:=\tau^\ast_0\mathfrak{F}^i_{x_0}$ is a sheaf of $\mathcal{O}_{X_0}$-modules which is naturally a~sub-sheaf of $\mathcal{F}_{X_0}$. This induces a filtration $\cdots\subset\mathfrak{F}^{i+1}_{X_0}\subset\mathfrak{F}^{i}_{X_0}\subset\cdots$ of $\mathcal{F}_{X_0}=\mathfrak{F}^0_{X_0}$. Arguing as in Section \ref{section ideal sheaf}, one can show that for each $i\ge0$ there is a short exact sequence of sheaves \begin{gather*} 0\rightarrow\mathfrak{F}^{i+1}_{X_0}\rightarrow\mathfrak{F}^i_{X_0}\rightarrow\mathcal{O}\big(\mathfrak{S}^{i+1}N^\ast\big)\rightarrow0 \end{gather*} and thus, the graded sheaf associated to the filtration of $\mathcal{F}_{X_0}$ is isomorphic to $\bigoplus_{i\ge1}\mathcal{O}(\mathfrak{S}^iN^\ast)$. Using the analogy with the classical formal neighborhood, we will call the pair $(X_0,\mathcal{O}^{(i)}_{X_0})$, where $\mathcal{O}_{X_0}^{(i)}:=\iota_0^{-1}\mathcal{O}_{CS}/\mathfrak{F}^{i+1}_{X_0}$, the $i$\textit{-th weighted formal neighborhood of} $X_0$.
Notice that the filtration $\{\mathfrak{F}^i_{X_0}\colon i=0,1,2,\dots\}$ descends to a filtration of $\mathcal{O}_{X_0}^{(i)}$ and that the associated graded sheaf is isomorphic to $\bigoplus_{j=0}^{i+1}\mathcal{O}(\mathfrak{S}^jN^\ast)$. \subsection[The double complex on the formal neighborhood of $\tau^{-1}(x_0)$]{The double complex on the formal neighborhood of $\boldsymbol{\tau^{-1}(x_0)}$}\label{section formal double complex} Recall from Section~\ref{section relative BGG sequence} that for each $a\in\mathbb{N}^{k,n}_{++}$ there is a $\mathrm{Q}$-dominant and integral weight~$\lambda_a$, an irreducible $\mathrm{Q}$-module $\mathbb{W}_{\lambda_a}$ with lowest weight $-\lambda_a$ and an associated vector bundle $W_{\lambda_a}^{CS}=\mathrm{G}\times_\mathrm{Q}\mathbb{W}_{\lambda_a}$. We will denote by $W_{\lambda_a}$ the restriction of $W_{\lambda_a}^{CS}$ to $X_0$, by $\mathcal{O}(\lambda_a)$ the sheaf of holomorphic sections of $W_{\lambda_a}$, by $\mathcal{E}^{p,q}$ the sheaf of smooth $(p,q)$-forms over $X_0$ and by $\mathcal{E}^{p,q}(\lambda_a)$ the sheaf of $(p,q)$-forms with values in $W_{\lambda_a}$. If $V$ is another vector bundle over $X_0$, then we denote by $V(\lambda_a)$ the tensor product of~$V$ with~$W_{\lambda_a}$. We will use the notation set in~(\ref{notation weighted jets over origin}) and~(\ref{isom of graded jets over origin}). \begin{Lemma}\label{lemma les of formal neighborhoods} For each $r:=\ell+j\ge0$ there is a long exact sequence of vector bundles over $X_0$: \begin{gather}\label{les of formal neighs I} \mathfrak{S}^{r}N^\ast(\lambda)\xrightarrow{\mathfrak{d}_0} E^\ast\otimes \mathfrak{S}^{r-1}N^\ast(\lambda)\xrightarrow{\mathfrak{d}_1}\Lambda^2E^\ast\otimes \mathfrak{S}^{r-2}N^\ast(\lambda)\xrightarrow{\mathfrak{d}_2}\cdots. \end{gather} This sequence contains a long exact subsequence \begin{gather}\label{les of formal neighs II} {S}^{r}N^\ast_1 (\lambda)\xrightarrow{\delta_0}E^\ast\otimes {S}^{r-1}N^\ast_1(\lambda)\xrightarrow{\delta_1}\Lambda^2E^\ast\otimes {S}^{r-2}N^\ast_1(\lambda)\xrightarrow{\delta_2}\cdots. \end{gather} \end{Lemma} \begin{proof} In order to obtain the sequence (\ref{les of formal neighs I}), take the direct sum of all long exact sequences from~(\ref{second exact formal complex of twisted relative de Rham}) indexed by $s_0$, $s_1$, $s_2$ and $s_3$ where $s_0+s_2+2s_3=\ell+j$, $s_1=0$, and restrict it to~$X_0$. The subsequence~(\ref{les of formal neighs II}) is obtained similarly; we only add one more condition $s_3=0$. \end{proof} Recall that each long exact sequence from (\ref{second exact formal complex of twisted relative de Rham}) is induced by the relative twisted de Rham complex by restricting to weighted jets. Hence, also (\ref{les of formal neighs I}) and (\ref{les of formal neighs II}) are naturally induced by this complex. \begin{Remark}\label{remark sheaves of vector valued forms over X_0 I} Let $\mathcal{E}^{0,q}(\Lambda^j E^\ast\otimes\mathfrak{S}^\ell N^\ast(\lambda))$ be the sheaf of smooth $(0,q)$-forms with values in the corresponding vector bundle over $X_0$. The vector bundle map $\mathfrak{d}_j$ induces a map of sheaves \begin{gather}\label{horizontal differential in first double complex with FN} \mathcal{E}^{0,q}\big(\Lambda^j E^\ast\otimes\mathfrak{S}^\ell N^\ast(\lambda)\big)\rightarrow \mathcal{E}^{0,q}\big(\Lambda^{j+1} E^\ast\otimes\mathfrak{S}^{\ell-1}N^\ast(\lambda)\big), \end{gather} which we also denote by $\mathfrak{d}_j$, as there is no risk of confusion.
Recall from (\ref{decomposition of skew symmetric powers}) that $\Lambda^j\mathbb{E}^\ast\otimes\mathbb{W}_\lambda=\bigoplus_{a\in\mathbb{N}^{k,n}_{++}\colon |a|=j}\mathbb{W}_{\lambda_a}$, which gives the direct sum decomposition $\mathcal{E}^{0,q}(\Lambda^j E^\ast\otimes\mathfrak{S}^\ell N^\ast(\lambda))=\bigoplus_{a\in\mathbb{N}^{k,n}_{++}\colon |a|=j}\mathcal{E}^{0,q}(\mathfrak{S}^{\ell}N^\ast(\lambda_a))$. We see that if $a, a'\in\mathbb{N}^{k,n}_{++}$ are such that $|a|=|a'|-1=j$, then $\mathfrak{d}_j$ induces \begin{gather}\label{graded differential I} \mathfrak{d}_{a'}^a\colon \ \mathcal{E}^{0,q}\big(\mathfrak{S}^{\ell}N^\ast(\lambda_a)\big)\rightarrow\mathcal{E}^{0,q}\big(\mathfrak{S}^{\ell-1}N^\ast(\lambda_{a'})\big) \end{gather} in the same way as $\partial_\eta$ induces in (\ref{dif op in BGG}) the operator $\partial_{a'}^{a}$ in the relative BGG sequence. By Proposition \ref{thm relative bgg sequence}, $\mathfrak{d}_{a'}^{a}=0$ if $a\nless a'$. \end{Remark} \begin{Remark}\label{remark sheaves of vector valued forms over X_0 II} Replacing (\ref{les of formal neighs I}) by (\ref{les of formal neighs II}) in Remark \ref{remark sheaves of vector valued forms over X_0 I}, we get a map of sheaves \begin{gather}\label{horizontal differential in second double complex with FN} \delta_j\colon \ \mathcal{E}^{0,q}\big(\Lambda^j E^\ast\otimes S^\ell N_1^\ast(\lambda)\big)\rightarrow \mathcal{E}^{0,q}\big(\Lambda^{j+1} E^\ast\otimes S^{\ell-1}N_1^\ast(\lambda)\big). \end{gather} If $a,a'$ are as above, then there is a map \begin{gather*} \delta_{a'}^a\colon \ \mathcal{E}^{0,q}\big(S^{\ell}N^\ast_1(\lambda_a)\big)\rightarrow\mathcal{E}^{0,q}\big(S^{\ell-1}N^\ast_1(\lambda_{a'})\big), \end{gather*} which is induced in the same way as $\mathfrak{d}_j$ induces $\mathfrak{d}_{a'}^a$. \end{Remark} Even though the proof of Lemma \ref{lemma sheaf cohomology groups over origin} is trivial, it will be crucial later on. \begin{Lemma}\label{lemma sheaf cohomology groups over origin} Let $a\in\mathbb{N}^{k,n}_{++}$. Then \begin{gather}\label{first isom} (\tau_0)^q_\ast\big(\mathcal{O}\big(\mathfrak{S}^\ell N^\ast (\lambda_a)\big)\big)=H^q\big(X_0,\mathcal{O}\big(\mathfrak{S}^\ell N^\ast (\lambda_a)\big)\big)= \begin{cases} \mathfrak{gr}^{\ell}\mathbb{V}_{\mu_a}, \\ \{0\} \end{cases} \end{gather} and \begin{gather}\label{second isom} (\tau_0)^q_\ast\big(\mathcal{O}({S}^\ell N^\ast_1(\lambda_a))\big)=H^q\big(X_0,\mathcal{O}({S}^\ell N_1^\ast (\lambda_a))\big)= \begin{cases} gr^\ell\mathbb{V}_{\mu_a}, \\ \{0\}, \end{cases} \end{gather} where\footnote{As above, we identify a sheaf over $\{x_0\}$ with its stalk.} in \eqref{first isom} and \eqref{second isom} the first possibility holds if and only if $a\in S^k$ and $q=\ell(a)$. \end{Lemma} \begin{proof}The first equality in (\ref{first isom}) is just the definition of $(\tau_0)_\ast^q$. The sheaf cohomology group in the middle is equal to the cohomology of the Dolbeault complex. In view of Lemma~\ref{lemma global hol sections over X_0}, $\Gamma(\mathcal{E}^{0,q}(\mathfrak{S}^\ell N^\ast(\lambda_a)))\cong\mathfrak{gr}^{\ell}_{x_0}\otimes\Gamma(\mathcal{E}^{0,q}(\lambda_a))$ and thus, the sheaf cohomology group is isomorphic to $\mathfrak{gr}^{\ell}_{x_0}\otimes H^q(X_0,\mathcal{O}(\lambda_a))$. By the Bott--Borel--Weil theorem, $H^q(X_0,\mathcal{O}(\lambda_a))\cong\mathbb{V}_{\mu_a}$ if $a\in S^k$, $q=\ell(a)$ and vanishes otherwise.
The second equality in~(\ref{first isom}) then follows from the isomorphism $\mathfrak{gr}^{\ell}_{x_0}\otimes\mathbb{V}_{\mu_a}\rightarrow\mathfrak{gr}^\ell\mathbb{V}_{\mu_a}$ from (\ref{isom of graded jets over origin}). The isomorphism in (\ref{second isom}) is proved similarly. We only use the other isomorphism \begin{gather*} \Gamma\big(\mathcal{O}\big({S}^\ell N^\ast_1\big)\big)\rightarrow gr^\ell_{x_0} \end{gather*} from Lemma \ref{lemma global hol sections over X_0} and the isomorphism $gr^{\ell}_{x_0}\otimes\mathbb{V}_{\mu_a}\rightarrow gr^\ell\mathbb{V}_{\mu_a}$. \end{proof} For each non-negative integer $r$ there is a certain double complex whose horizontal differential is~(\ref{horizontal differential in first double complex with FN}) and whose vertical differential is (up to sign) the Dolbeault differential. This is the double complex from Proposition \ref{thm double complex I} restricted to the weighted formal neighborhood of $X_0$. \begin{Proposition}\label{thm first double complex on formal neigh} Let $r\ge0$ be an integer. Then there is a double complex $(\mathfrak{E}^{p,q}(r),d',d'')$ where: \begin{itemize}\itemsep=0pt \item $\mathfrak{E}^{p,q}(r)=\Gamma(\mathcal{E}^{0,q}(\Lambda^p E^\ast\otimes\mathfrak{S}^{r-p}N^\ast(\lambda)))$, \item the vertical differential $d'$ is $(-1)^p\bar\partial$ where $\bar\partial$ is the standard Dolbeault differential and \item the horizontal differential $d''$ is $\mathfrak{d}_p$ from \eqref{horizontal differential in first double complex with FN}. \end{itemize} Moreover, we claim that: \begin{enumerate}\itemsep=0pt \item [$(i)$] $H^j(T^\ast(r),d'+d'')=0$ if $j>{n\choose2}$ where $T^i(r):=\bigoplus_{p+q=i}\mathfrak{E}^{p,q}(r)$; \item [$(ii)$] the first page of the spectral sequence associated to the filtration by columns is \begin{gather*} \mathfrak{E}^{p,q}_1(r)=\bigoplus_{a\in S^k\colon |a|=p,\ \ell(a)=q}\mathfrak{gr}^{r-p}\mathbb{V}_{\mu_a}; \end{gather*} \item [$(iii)$] the spectral sequence degenerates on the second page. \end{enumerate} \end{Proposition} \begin{proof}Recall from the proof of Proposition \ref{thm relative de Rham} that $\mathfrak{gr}\partial_\eta$ is induced from $\partial_\eta$ by passing to weighted jets (as explained at the end of Section~\ref{section wdo}) and, see Lemma~\ref{lemma les of formal neighborhoods}, that $\mathfrak{d}=\mathfrak{d}_p$ is the restriction of the map $\mathfrak{gr}\partial_\eta$ to the sub-complex~(\ref{les of formal neighs I}). Since $[\partial_\eta,\bar\partial]=0$, we have that $[\mathfrak{d},\bar\partial]=0$ and thus also $d'd''=-d''d'$. This shows the first claim. (i) The rows of the double complex are exact as the sequence~(\ref{les of formal neighs I}) is exact. Since $\dim X_0={n\choose2}$, it follows that $\mathfrak{E}^{p,q}_1(r)=0$ whenever $q>{n\choose2}$. This proves the claim. (ii) By definition, $\mathfrak{E}^{p,q}_1(r)$ is the $d'$-cohomology group in the $p$-th column and $q$-th row. The claim then follows from the direct sum decomposition from Remark~\ref{remark sheaves of vector valued forms over X_0 I} and Lemma~\ref{lemma sheaf cohomology groups over origin}. (iii) The space $\mathfrak{gr}^{r-p}\mathbb{V}_{\mu_a}$ lives on the $|a|$-th vertical line and the $\ell(a)=({n\choose 2}-q(a))$-th horizontal line of the first page of the spectral sequence and thus, on the $(|a|+{n\choose2}-q(a))=(2q(a)+d(a)+{n\choose2}-q(a))=(r(a)+{n\choose2})$-th diagonal. Choose $a'\in S^k$ such that $\mathfrak{gr}^{r-|a'|}\mathbb{V}_{\mu_{a'}}$ lives on the next diagonal and $a<a'$.
This means that $r(a')=r(a)+1$ and so $q(a)=q(a')$ or $q(a')=q(a)+1$. In the first case, $\mathfrak{gr}^{r-|a'|}\mathbb{V}_{\mu_{a'}}$ lives on the $\ell(a)$-th row. In the second case, it lives on the $(\ell(a)-1)$-th row. As $\mathfrak{d}_{a'}^a=0$ if $a\nless a'$, it follows from the definition that the differential on the $i$-th page is zero if $i>2$. \end{proof} If we use the exactness of (\ref{les of formal neighs II}) instead of (\ref{les of formal neighs I}) and the isomorphism (\ref{second isom}) instead of~(\ref{first isom}), the proof of Proposition~\ref{thm first double complex on formal neigh} gives the following. \begin{Proposition}\label{thm second double complex on formal neigh} The double complex from Proposition {\rm \ref{thm first double complex on formal neigh}} contains a double complex \linebreak $(F^{p,q}(r),d',d'')$ where $F^{p,q}(r):=\Gamma(\mathcal{E}^{0,q}(\Lambda^pE^\ast\otimes{S}^{r-p}N^\ast_1(\lambda)))$. Moreover, we claim that: \begin{enumerate}\itemsep=0pt \item [$(i)$] $H^j(T^\ast(r),d'+d'')=0$ if $j>{n\choose2}$ where $T^i(r):=\bigoplus_{p+q=i}F^{p,q}(r)$; \item [$(ii)$] the first page of the spectral sequence associated to the filtration by columns is \begin{gather*} F^{p,q}_1(r):=\bigoplus_{a\in S^k\colon |a|=p,\ \ell(a)=q}gr^{r-p}\mathbb{V}_{\mu_a}; \end{gather*} \item [$(iii)$] the spectral sequence degenerates on the second page. \end{enumerate} \end{Proposition} \subsection{Long exact sequence of weighted jets}\label{section les of weighted jets} Let $a\in S^k$ and let $\mathbb{V}_{\mu_a}$ be an irreducible $\mathrm{P}$-module with lowest weight $-\mu_a$, see Proposition \ref{thm direct images}. Now we are ready to show that the linear operators defined in Lemma \ref{lemma diff op} are differential operators, and we give an upper bound on their weighted order. \begin{Lemma}\label{lemma diff op on graded jets} Let $a,a'\in S^k$ be such that $a<a'$ and $ r(a')=r(a)+1$. Then the operator $D^a_{a'}$ from Lemma~{\rm \ref{lemma diff op}} is a differential operator of weighted order at most $s:=|a'|-|a|$. Hence, $D^{a}_{a'}$ induces for each $i\ge 0$ a linear map \begin{gather}\label{map of weighted jets I} \mathfrak{gr} D_{a'}^a\colon \ \mathfrak{gr}^i\mathbb{V}_{\mu_a}\rightarrow\mathfrak{gr}^{i-s}\mathbb{V}_{\mu_{a'}}, \end{gather} which restricts to a linear map \begin{gather}\label{map of weighted jets II} gr D_{a'}^a\colon \ gr^i\mathbb{V}_{\mu_a}\rightarrow gr^{i-s}\mathbb{V}_{\mu_{a'}}. \end{gather} \end{Lemma} \begin{proof} Let us make a few preliminary observations. Let $v\in\mathcal{O}_\mathfrak{p}(\mu_a)_{x_0}$. By the $\mathrm{G}$-invariance of $D_{a'}^a$, it is obviously enough to show that $(D_{a'}^{a}v)(x_0)$ depends only on $\mathfrak{j}_{x_0}^sv$. We may assume that $v$ is defined on the Stein set $\mathcal{U}$ from Section~\ref{section complex}, and so we can view $v$ as a cohomology class $[\alpha]\in H^{\ell(a)}(\tau^{-1}(\mathcal{U}),\mathcal{O}_\mathfrak{q}(\lambda_a))$. A choice of Weyl structure (see~\cite{CS}) and the isomorphisms~(\ref{first isom}) give for each integer $i\ge0$ isomorphisms \begin{gather*} \mathfrak{J}^i\mathbb{V}_{\mu_a}\rightarrow\bigoplus_{j=0}^i\mathfrak{gr}^j\mathbb{V}_{\mu_a}\rightarrow \bigoplus_{j=0}^iH^{\ell(a)}\big(X_0,\mathfrak{S}^jN^\ast(\lambda_a)\big).
\end{gather*} Hence, the Taylor series of $v$ at $x_0$ determines an infinite\footnote{We will at this point avoid a discussion of the convergence of the sum, as we will not need it.} sum $\sum\limits_{j=0}^\infty [v_j]$ where each $[v_j]$ belongs to $H^{\ell(a)}(X_0,\mathfrak{S}^jN^\ast(\lambda_a))$. Now we can proceed with the proof. By assumption, $s\in\{1,2\}$. If $s=1$, then $\ell(a)=\ell(a')$. By definition, $D_{a'}^av$ corresponds to $[\partial_{a'}^a\alpha]\in H^{\ell(a')}(\tau^{-1}(\mathcal{U}),\mathcal{O}_\mathfrak{q}(\lambda_{a'}))$ and $\mathfrak{j}^i_{x_0}(D_{a'}^av)$ can be viewed as $\sum\limits_{j=0}^i[\mathfrak{d}_{a'}^{a} v_{j+1}]$. But since $[\mathfrak{d}_{a'}^a (v_j)]\in H^{\ell(a')}(X_0,\mathfrak{S}^{j-1}N^\ast(\lambda_{a'}))$, it is clear that $D_{a'}^a(v)(x_0)=0$ if $\mathfrak{j}^1_{x_0}v=0$. This completes the proof when $s=1$. Notice that the linear map $\mathfrak{gr} D_{a'}^a$ fits into a commutative diagram \begin{gather}\label{com diagram with delta}\begin{split} \xymatrix{ \Gamma(\mathcal{E}^{0,\ell(a)}\big(\mathfrak{S}^iN^\ast(\lambda_a))\big)\cap\ker\bar\partial\ar[d]\ar[r]^{\mathfrak{d}_{a'}^a}& \Gamma\big(\mathcal{E}^{0,\ell(a')}\big(\mathfrak{S}^{i-1}N^\ast(\lambda_{a'})\big)\big)\cap\ker\bar\partial\ar[d]\\ H^{\ell(a)}\big(X_0,\mathfrak{S}^iN^\ast(\lambda_a)\big)\ar[r]&H^{\ell(a')}\big(X_0,\mathfrak{S}^{i-1}N^\ast(\lambda_{a'})\big)\\ \mathfrak{gr}^i\mathbb{V}_{\mu_a}\ar[u]\ar[r]^{\mathfrak{gr} D_{a'}^a}&\mathfrak{gr}^{i-1}\mathbb{V}_{\mu_{a'}}\ar[u] }\end{split} \end{gather} where the lower vertical arrows are the isomorphisms from Lemma~\ref{lemma sheaf cohomology groups over origin}, the upper vertical arrows are the canonical projections and the map $\mathfrak{d}_{a'}^a$ is the one from~(\ref{graded differential I}). Let us now assume $s=2$. In view of the diagram (\ref{def of second order operator}), we have to replace in (\ref{com diagram with delta}) the map~$\mathfrak{d}_{a'}^a$ by the diagram \begin{gather*} \xymatrix{\Gamma(\mathcal{E}^{0,q}(\mathfrak{S}^iN^\ast(\lambda_a)))\cap \operatorname{Ker}(\bar\partial)\ar[r]^{(\mathfrak{d}_b^{a})\oplus(\mathfrak{d}_c^{a})}&\ \ \ \ \Gamma(\mathcal{E}^{0,q}(\mathfrak{S}^{i-1}N^\ast({\lambda_b}\oplus {\lambda_c})))\\ \Gamma(\mathcal{E}^{0,q-1}(\mathfrak{S}^{i-1}N^\ast({\lambda_b}\oplus {\lambda_c})))\ar[ur]^{\bar\partial}\ar[r]^{(\mathfrak{d}_{a'}^{b})+(\mathfrak{d}_{a'}^{c})}&\ \ \ \ \ \Gamma(\mathcal{E}^{0,q-1}(\mathfrak{S}^{i-2}N^\ast(\lambda_{a'})))\cap \operatorname{Ker}(\bar\partial),} \end{gather*} where for brevity we put $\mathfrak{S}^\bullet N^\ast(\lambda_b\oplus\lambda_c):=\mathfrak{S}^\bullet N^\ast(\lambda_b)\oplus\mathfrak{S}^\bullet N^\ast(\lambda_c)$. Following the same line of arguments as in the case $s=1$, we easily find that $D_{a'}^a(v)(x_0)=0$ whenever $\mathfrak{j}^2_{x_0}v=0$. In order to prove the claim about $gr D_{a'}^a$, we need to replace everywhere $\mathfrak{d}_{a'}^{a}$ by its restriction~$\delta_{a'}^a$ and use~(\ref{second isom}) instead of~(\ref{first isom}). \end{proof} In order to get rid of the factor $s$ in (\ref{map of weighted jets I}) and (\ref{map of weighted jets II}), we shift the gradings by introducing $\mathfrak{gr}^i\mathbb{V}_{\mu_a}[\uparrow]:=\mathfrak{gr}^{i-q(a)}\mathbb{V}_{\mu_a}$ and $gr^i\mathbb{V}_{\mu_a}[\uparrow]:= gr^{i-q(a)}\mathbb{V}_{\mu_a}$.
We can now rewrite the maps from~(\ref{map of weighted jets I}) and~(\ref{map of weighted jets II}) as \begin{gather*} \mathfrak{gr} D_{a'}^a\colon \ \mathfrak{gr}^\ell\mathbb{V}_{\mu_a}[\uparrow]\rightarrow\mathfrak{gr}^{\ell-1}\mathbb{V}_{\mu_{a'}}[\uparrow]\qquad \mathrm{and}\qquad gr D_{a'}^a\colon \ gr^\ell\mathbb{V}_{\mu_a}[\uparrow]\rightarrow gr^{\ell-1}\mathbb{V}_{\mu_{a'}}[\uparrow], \end{gather*} respectively, where $\ell\ge0$ is the corresponding integer. We also put \begin{gather*} \mathfrak{gr}^\ell\mathbb{V}_{j,i}[\uparrow]= \!\!\bigoplus_{a\in S^k_j\colon q(a)=i} \!\!\mathfrak{gr}^\ell\mathbb{V}_{\mu_a}[\uparrow],\qquad \mathfrak{gr}^\ell\mathbb{V}_j[\uparrow]=\bigoplus_{i=0}^{j}\mathfrak{gr}^\ell\mathbb{V}_{j,i}[\uparrow] \end{gather*} and \begin{gather*} gr^\ell\mathbb{V}_{j,i}[\uparrow]= \!\!\bigoplus_{a\in S^k_j\colon q(a)=i}\!\! gr^\ell\mathbb{V}_{\mu_a}[\uparrow],\qquad gr^\ell\mathbb{V}_j[\uparrow]=\bigoplus_{i=0}^{j}gr^\ell\mathbb{V}_{j,i}[\uparrow]. \end{gather*} We view $\mathfrak{gr} D_{a'}^a$ also as a map $\mathfrak{gr}^\ell\mathbb{V}_j[\uparrow]\rightarrow\mathfrak{gr}^{\ell-1}\mathbb{V}_{j+1}[\uparrow]$ by extending it from $\mathfrak{gr}^\ell\mathbb{V}_{\mu_a}[\uparrow]$ by zero to all the other summands. We put \begin{gather*} \mathfrak{gr} D_j:= \!\!\sum_{a\in S^k_j,\ a'\in S^k_{j+1}\colon a<a'} \!\! \mathfrak{gr} D_{a'}^a\colon \ \mathfrak{gr}^\ell\mathbb{V}_j[\uparrow]\rightarrow\mathfrak{gr}^{\ell-1}\mathbb{V}_{j+1}[\uparrow] \end{gather*} and \begin{gather*} \mathfrak{gr}( D_j)_{i'}^i\colon \ \mathfrak{gr}^\ell\mathbb{V}_{j,i}[\uparrow]\rightarrow\mathfrak{gr}^\ell\mathbb{V}_j[\uparrow]\xrightarrow{\mathfrak{gr} D_j}\mathfrak{gr}^{\ell-1}\mathbb{V}_{j+1}[\uparrow]\rightarrow \mathfrak{gr}^{\ell-1}\mathbb{V}_{j+1,i'}[\uparrow], \end{gather*} where the first map is the canonical inclusion and the last map is the canonical projection. Recall from Section \ref{section the relative Weyl group} that if $a<a'$, $a\in S^k_j$, $a'\in S^k_{j+1}$, then $q(a')\le q(a)+1$. This implies that $\mathfrak{gr}(D_j)_{i'}^{i}=0$ unless $i'=i$ or $i'=i+1$. Then $\mathfrak{gr} D_j$ is \begin{gather*} \begin{matrix} \left(\begin{matrix} \mathfrak{gr}^\ell\mathbb{V}_{j,0}[\uparrow]\\ \oplus \\ \mathfrak{gr}^\ell\mathbb{V}_{j,1}[\uparrow]\\ \oplus\\ \dots\\ \end{matrix}\right) \begin{matrix} \longrightarrow\\ \searrow\\ \longrightarrow\\ \searrow\\ \dots \end{matrix} \left( \begin{matrix} \mathfrak{gr}^{\ell-1}\mathbb{V}_{j+1,0}[\uparrow]\\ \oplus\\ \mathfrak{gr}^{\ell-1}\mathbb{V}_{j+1,1}[\uparrow]\\ \oplus\\ \dots\\ \end{matrix} \right) \end{matrix}, \end{gather*} where the horizontal arrows and the diagonal arrows are $\mathfrak{gr}(D_j)^i_i$ and $\mathfrak{gr}(D_j)^i_{i+1}$, respectively. We similarly define linear maps $gr D_j\colon gr^\ell\mathbb{V}_j[\uparrow]\rightarrow gr^{\ell-1}\mathbb{V}_{j+1}[\uparrow]$ and $gr (D_j)_{i'}^{i}\colon gr^\ell\mathbb{V}_{j,i}[\uparrow]\rightarrow gr^{\ell-1}\mathbb{V}_{j+1,i'}[\uparrow]$. \begin{Remark}\label{remark dif op and ss} Notice that \begin{gather*} \mathfrak{gr}^\ell\mathbb{V}_{j,i}[\uparrow] =\bigoplus_{a\in S^k_j\colon q(a)=i}\mathfrak{gr}^\ell\mathbb{V}_{\mu_a}[\uparrow]=\bigoplus_{a\in S^k\colon r(a)=j,\ q(a)=i}\mathfrak{gr}^{\ell-q(a)}\mathbb{V}_{\mu_a}\\ \hphantom{\mathfrak{gr}^\ell\mathbb{V}_{j,i}[\uparrow]}{} =\bigoplus_{a\in S^k\colon |a|=i+j,\ \ell(a)={n\choose2}-i}\mathfrak{gr}^{\ell-i}\mathbb{V}_{\mu_a}=\mathfrak{E}_1^{i+j,{n\choose2}-i}(\ell+j). \end{gather*} Put $p:=i+j$, $q:={n\choose2}-i$ and $r:=\ell+j$.
Then we can view $\mathfrak{gr}(D_j)_{i}^i$ and $\mathfrak{gr}(D_j)_{i+1}^i$ as maps \begin{gather*} \mathfrak{E}^{p,q}_1(r)\rightarrow \mathfrak{E}_1^{p+1,q}(r)\qquad \mathrm{and}\qquad \mathfrak{E}^{p,q}_1(r)\rightarrow \mathfrak{E}_1^{p+2,q-1}(r), \end{gather*} respectively. By the definition of $\mathfrak{gr}(D_j)_{i}^i$ from Lemma~\ref{lemma diff op on graded jets}, it follows that we can view it as the differential~$d_1$ on the first page of the spectral sequence from Proposition~\ref{thm first double complex on formal neigh}. Suppose that $v\in \mathfrak{E}^{p,q}_1(r)$ satisfies $d_1(v)=0$. Then we can apply the differential $d_2$ living on the second page to $v+\operatorname{im}(d_1)$ and, comparing this with the definition of~$\mathfrak{gr}(D_j)_{i+1}^i$ from Lemma~\ref{lemma diff op on graded jets}, we find that \begin{gather} \label{d_2 as differential operator} d_2(v+\operatorname{im}(d_1))=\mathfrak{gr}(D_j)_{i+1}^i(v)+\operatorname{im}(d_1). \end{gather} Similarly, we find that $gr^\ell\mathbb{V}_{j,i}[\uparrow]=F_1^{p,q}(r)$ where $p$, $q$ and $r$ are as above. Moreover, we can view $gr (D_j)_{i}^i$ and $gr (D_j)_{i+1}^i$ as maps \begin{gather*} F_1^{p,q}(r)\rightarrow F_1^{p+1,q}(r)\qquad \mathrm{and}\qquad F_1^{p,q}(r)\rightarrow F_1^{p+2,q-1}(r), \end{gather*} respectively. As the double complex from Proposition \ref{thm second double complex on formal neigh} is a sub-complex of the double complex from Proposition~\ref{thm first double complex on formal neigh} and $gr (D_j)_{i'}^i$ is the restriction of~$\mathfrak{gr} (D_j)_{i'}^i$ to the corresponding subspace, we see that $gr (D_j)_{i}^i$ coincides with the differential on the first page of the spectral sequence from Proposition~\ref{thm second double complex on formal neigh} and that $gr(D_j)^i_{i+1}$ is related to the differential on the second page just as $\mathfrak{gr}(D_j)^i_{i+1}$ is related to $d_2$. \end{Remark} The exactness of the complex (\ref{les of graded jetsI}) for each $\ell+j\ge0$ implies (see \cite{S}) the exactness of the $k$-Dirac complex at the level of infinite weighted jets at any fixed point. Following \cite{Sp}, we say that the $k$-Dirac complex is formally exact. Notice that for the application in \cite{S}, the exactness of the sub-complex~(\ref{les of graded jets II}) for each $\ell+j\ge0$ is a crucial point in the proof of the local exactness of the descended complex and thus, in constructing the resolution of the $k$-Dirac operator. \begin{Theorem}\label{theorem formal exactness} The $k$-Dirac complex induces for each $\ell+j\ge0$ a long exact sequence \begin{gather}\label{les of graded jetsI} \mathfrak{gr}^{\ell+j}\mathbb{V}_0[\uparrow]\xrightarrow{\mathfrak{gr} D_0}\mathfrak{gr}^{\ell+j-1}\mathbb{V}_1[\uparrow]\rightarrow\dots\rightarrow\mathfrak{gr}^\ell\mathbb{V}_j[\uparrow]\xrightarrow{\mathfrak{gr} D_j}\mathfrak{gr}^{\ell-1}\mathbb{V}_{j+1}[\uparrow]\rightarrow\cdots \end{gather} of finite-dimensional vector spaces. This complex contains a sub-complex \begin{gather}\label{les of graded jets II} gr^{\ell+j}\mathbb{V}_0[\uparrow]\xrightarrow{gr D_0}gr^{\ell+j-1}\mathbb{V}_1[\uparrow]\rightarrow\dots\rightarrow gr^\ell\mathbb{V}_j[\uparrow]\xrightarrow{gr D_j}gr^{\ell-1}\mathbb{V}_{j+1}[\uparrow]\rightarrow\cdots, \end{gather} which is also exact. \end{Theorem} \begin{proof} Let $v\in\mathfrak{gr}^\ell\mathbb{V}_j[\uparrow]$, $j\ge1$, be such that $\mathfrak{gr} D_j(v)=0$.
Write $v=(v_0,\dots,v_j)$ with respect to the decomposition given above, i.e., $v_i\in\mathfrak{gr}^\ell\mathbb{V}_{j,i}[\uparrow]$. Assume that $v_0=v_1=\dots=v_{i-1}=0$ and that $v_i\ne0$. We have that $\mathfrak{gr}(D_j)_{i}^{i}(v_i)=0$ and $\mathfrak{gr}(D_j)_{i+1}^{i}(v_i)+\mathfrak{gr}(D_j)_{i+1}^{i+1}(v_{i+1})=0$. If we view $v_i$ as an element of $\mathfrak{E}_1^{p,q}(r)$ as in Remark~\ref{remark dif op and ss}, we see that $d_1(v_i)=0$ and, by~(\ref{d_2 as differential operator}), we find that $d_2(v_i)=0$. By Proposition~\ref{thm first double complex on formal neigh}, the spectral sequence $\mathfrak{E}^{p,q}(r)$ collapses on the second page and, by part~(i), we have that $\ker(d_2)=\operatorname{im}(d_2)$ beyond the ${n\choose2}$-th diagonal. By Remark \ref{remark dif op and ss} again, $\mathfrak{gr}^\ell\mathbb{V}_{j,i}[\uparrow]$ lives on the $({n\choose2}+j)$-th diagonal. We see that there are $t_{i-1}\in\mathfrak{gr}^{\ell+1}\mathbb{V}_{j-1,i-1}[\uparrow]$ and $t_{i}\in\mathfrak{gr}^{\ell+1}\mathbb{V}_{j-1,i}[\uparrow]$ such that $\mathfrak{gr}(D_{j-1})_{i-1}^{i-1}(t_{i-1})=0$ and $\mathfrak{gr}(D_{j-1})_{i}^{i-1}(t_{i-1})+\mathfrak{gr}(D_{j-1})_{i}^{i}(t_{i})=v_i$. Hence, we can kill the lowest non-zero component of $v$ and, repeating this argument finitely many times, we see that there is $t\in\mathfrak{gr}^{\ell+1}\mathbb{V}_{j-1}[\uparrow]$ such that $v=\mathfrak{gr} D_{j-1}(t)$. The proof of the exactness of the second sequence (\ref{les of graded jets II}) proceeds similarly. We only replace $\mathfrak{gr}^\ell\mathbb{V}_{j}[\uparrow]$ by $gr^\ell\mathbb{V}_{j}[\uparrow]$ and $\mathfrak{gr}^\ell\mathbb{V}_{j,i}[\uparrow]$ by $gr^\ell\mathbb{V}_{j,i}[\uparrow]$, and use that, by Proposition~\ref{thm second double complex on formal neigh} and the end of Remark~\ref{remark dif op and ss}, the second spectral sequence has the same key properties as the spectral sequence from Proposition~\ref{thm first double complex on formal neigh}. \end{proof} \subsection*{Acknowledgements} The author is grateful to Vladim\'{\i}r Sou\v{c}ek for his support and many useful conversations. The author would also like to thank Luk\'a\v{s} Krump for the possibility of using his package for Young diagrams. The author wishes to thank the anonymous referees for many helpful suggestions which considerably improved this article. The research was partially supported by the grant 17-01171S of the Grant Agency of the Czech Republic.
\section{Introduction} \label{sec:intro} The recent spectacular advancements in the manipulation and control of interacting quantum systems at the level of a single object, both in equilibrium and far-from-equilibrium conditions, have opened up a wealth of unprecedented possibilities in the realm of modern quantum physics~\cite{BulutaNori}. On one hand, they paved the way to the discovery of unconventional states of quantum matter. On the other hand, they made it possible to exploit quantum mechanics to speed up classical computation, through the implementation of quantum computation or quantum simulation protocols. In the latter context, one of the most widely known approaches is the so-called quantum annealing (QA)~\cite{Finnila_CPL94,Kadowaki_PRE98,Brooke_SCI99,Santoro_SCI02}, {\em alias} adiabatic quantum computation~\cite{Farhi_SCI01}. Due to the realisation of {\it ad-hoc} quantum hardware implementations, mainly based on superconducting flux qubits, QA is nowadays becoming a field of quite intense research~\cite{Harris_PRB10,Johnson_Nat11, Denchev_2016, Lanting_2014, Boixo_2013, Dickson_2013, Boixo_2014}. Its basic strategy works as follows. Assume that the solution of a given problem is encoded in the ground state of a suitable Hamiltonian. The goal of the protocol is to find such a state by performing an adiabatic connection (if possible) with another Hamiltonian, typically describing a much simpler physical system. Starting from the basic idea rooted in the adiabatic theorem of quantum mechanics~\cite{Messiah:book}, a number of different situations that enable a considerable speedup induced by quantum fluctuations have been extensively analysed in the context of Hamiltonian complexity theory, in the closed-system setting~\cite{Albash_RMP18}. Nonetheless, a good description of the physics emerging from the above-mentioned experimental devices cannot neglect the role of dissipation, and the ensuing open-system quantum dynamics~\cite{Weiss:book}. In the absence of dissipation, the adiabatic unitary dynamics suggests that a slower annealing will lead to a smaller density of defects generated in the process. It is a well-established fact that, during any non-trivial QA dynamics, one inevitably encounters some kind of phase transition, be it a second-order critical point or a first-order transition, where the gap protecting the ground state --- in principle non-zero, for a finite system --- vanishes as the number of system sites/spins $N$ goes to infinity~\cite{Santoro_JPA06,Zamponi_QA:review, Wauters_PRA17}. This results in a density of defects $n_{\mathrm{def}}(\tau)$ that decreases more or less slowly as the annealing time-scale $\tau$ is increased. In the second-order case, the predicted decrease of $n_{\mathrm{def}}(\tau)$ is a power-law $\tau^{-\alpha}$, with the exponent $\alpha$ determined by the equilibrium critical-point exponents: this is often referred to as the {\em Kibble-Zurek} (KZ) scaling~\cite{kibble80, zurek96, Polkovnikov_RMP11, Sondhi_2012}. Although only marginally considered up to now, the presence of dissipation modifies this scenario considerably. One can argue that an environment will likely have an opposite effect on the density of defects~\cite{Fubini_NJP07}: given enough time, it would lead to an increase of $n_{\mathrm{def}}(\tau)$ towards, eventually, a full thermalization in the limit $\tau\to \infty$.
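To be concrete, for a linear ramp across an isolated second-order critical point the standard KZ argument (a brief recap of textbook material, see, e.g., Ref.~\cite{Polkovnikov_RMP11}) predicts \begin{equation*} n_{\mathrm{def}}(\tau) \sim \tau^{-\alpha} , \qquad \alpha = \frac{d\nu}{z\nu+1} , \end{equation*} where $d$ is the spatial dimension, and $\nu$ and $z$ are the equilibrium correlation-length and dynamical critical exponents. For the transverse-field Ising chain considered below, $d=1$ and $\nu=z=1$, hence $\alpha=1/2$: this is the coherent-evolution benchmark $n_{\mathrm{def}}(\tau)\sim\tau^{-1/2}$ shown in Fig.~\ref{fig:ndef_tau_intro}.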
The competing effects of a quantum adiabatic driving in the presence of a dissipative environment might therefore lead to interesting non-monotonicities in the curve $n_{\mathrm{def}}(\tau)$: the increase of $n_{\mathrm{def}}(\tau)$ for larger $\tau$ has been referred to as {\em anti-Kibble-Zurek} (AKZ)~\cite{DelCampo_PRL16}. This is, in turn, intrinsically linked to the presence of a {\em minimum} of $n_{\mathrm{def}}(\tau)$ at some intermediate $\tau_{\mathrm{opt}}$, known as the {\em optimal working point} (OWP). It is worth mentioning that the term ``anti-Kibble-Zurek'' appeared for the first time in a completely classical setting, the adiabatic dynamics of multiferroic hexagonal manganites~\cite{Griffin_PRX12}, with the crucial difference that the deviation from the expected KZ scenario is there seen as an unexplained decrease of $n_{\mathrm{def}}(\tau)$ for {\em fast} annealings, i.e., for $\tau\to 0$. This opposite trend leads to a {\em maximum} of $n_{\mathrm{def}}(\tau)$ for intermediate $\tau$ and, as far as we understand, has nothing to do with the AKZ effect, which we will address here in our quantum mechanical framework. Returning to the quantum case, there have been a few studies on how dissipation affects the QA performance on a quantum Ising chain in a transverse field, where the annealing is performed by slowly switching off the transverse field. Some studies have employed a classical Markovian noise superimposed on the driving field~\cite{Fubini_NJP07, DelCampo_PRL16, Gao_PRB17} or a Lindblad master equation with suitable dissipators~\cite{Gong_SciRep16, Keck_NJP17}; others have considered the effect of one or several bosonic baths coupled to each spin along the transverse direction~\cite{Patane_PRL08,Patane_PRB09,Nalbach_PRB15,Smelyanskiy_PRL17}. The general common feature that emerges from these studies is that the density of defects stops following the KZ scaling after a certain $\tau$, and starts to increase again. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{Figure1.pdf} \caption{Density of defects \textit{vs} annealing time for a quantum Ising chain weakly coupled to an Ohmic bath, at different bath temperatures $T$, compared to the ideal coherent evolution (KZ) behaviour, $n_{\mathrm{def}}(\tau) \sim \tau^{-1/2}$. The plot highlights the three distinct behaviours we have found: $n_{\mathrm{def}}(\tau)$ can {\it i}) display a global minimum (green triangles), {\it ii}) display only a local minimum (blue circles), or {\it iii}) converge monotonically towards a large-$\tau$ thermal plateau (red squares). Here the system-bath coupling constant is kept fixed at $\alpha=10^{-2}$.} \label{fig:ndef_tau_intro} \end{figure} In this work, we reconsider these issues, concentrating our attention on the benchmark case of a transverse-field Ising chain. Remarkably, we find that the OWP disappears below a certain temperature, which depends on the system-bath coupling. The possible situations we encounter are outlined in Fig.~\ref{fig:ndef_tau_intro}, where the density of defects is plotted as a function of the annealing time $\tau$, for various temperatures at fixed system-bath coupling strength. Three different situations may emerge: $n_{\mathrm{def}}(\tau)$ may show a global or a local minimum at some $\tau_{\mathrm{opt}}$ (\textit{i.e.} the global/local OWP), or it may deviate from the simple coherent-dynamics KZ scaling while still decreasing monotonically, in which case no OWP is found.
Quite remarkably, as we shall discuss later on, the range of temperatures relevant for current quantum annealers is such that one would predict the \textit{absence} of an OWP. We will further comment on the validity of the often-used assumption that the density of defects can be computed as a simple sum of two contributions~\cite{Patane_PRL08, Patane_PRB09, DelCampo_PRL16, Nalbach_PRB15, Keck_NJP17}: one given by the purely coherent dynamics, the other coming from a time-evolution governed only by dissipators, \textit{i.e.} neglecting the coherent part. Since we also consider regimes for which relaxation processes after the critical point are important, we will provide evidence that this additivity assumption breaks down for large enough annealing times. The structure of the paper is as follows: in Sec.~\ref{sec:model} we introduce the dissipative quantum transverse-field Ising chain under investigation, and discuss the Bloch-Redfield quantum master equation (QME) approach we use to work out the dissipative time-evolution of the system. Our numerical results are illustrated in Sec.~\ref{sec:results}: we first analyse the conditions for the emergence of an OWP, also sketching a phase diagram as a function of temperature and system-bath coupling strength. Next, we investigate the issue of the additivity of the coherent and incoherent contributions to the density of defects in different regimes. Finally, in Sec.~\ref{sec:conclusions} we summarize our findings and discuss their relevance. The two appendices are devoted to technical issues related to the QME we have used, and to the approach towards thermal equilibrium in the presence of a bath. \section{Model and methods} \label{sec:model} The model we study is described by the following Hamiltonian: \begin{equation} \label{H_tot} \widehat{H}_{\mathrm{tot}}(t) = \widehat{H}_{\mathrm{sys}}(t) + \widehat{H}_{\mathrm{\scriptscriptstyle SB}} + \widehat{H}_{\mathrm{bath}}, \end{equation} where $\widehat{H}_{\mathrm{sys}}(t)$ is the time-dependent system Hamiltonian, $\widehat{H}_{\mathrm{\scriptscriptstyle SB}}$ is the system-bath interaction term, and $\widehat{H}_{\mathrm{bath}}$ is a harmonic oscillator bath Hamiltonian. The system is taken to be a quantum spin-$1/2$ Ising chain in a transverse field~\cite{Sachdev:book}: \begin{equation} \label{H_XY} \widehat{H}_{\mathrm{sys}}(t) = -J \sum_{i=1}^N \Big[ \hat{\sigma}_i^x \hat{\sigma}_{i+1}^x + h(t) \hat{\sigma}_i^z \Big], \end{equation} where $\hat{\boldsymbol{\sigma}}_i \equiv \big( \hat{\sigma}^x_i, \hat{\sigma}^y_i, \hat{\sigma}^z_i \big)$ are the usual Pauli matrices on the $i^\mathrm{th}$ site, $N$ the number of sites, $J>0$ the ferromagnetic coupling strength, and $h(t)\geq 0$ the external (driving) field, which is turned off during the evolution. Periodic boundary conditions (PBC) are assumed, \textit{i.e.} $\hat{\boldsymbol{\sigma}}_{N+1} = \hat{\boldsymbol{\sigma}}_1$. The interaction Hamiltonian we consider is written as \begin{subequations} \begin{align} \widehat{H}_{\mathrm{\scriptscriptstyle SB}} &= -\frac{1}{2} \hat{X} \otimes \sum_{i=1}^N \hat{\sigma}_i^z ,\\ \hat{X} &= \sum_l \lambda_l (\opbdag{l} + \opb{l}) , \end{align} \end{subequations} where the $\opb{l}$ are bosonic annihilation operators, and $\lambda_l$ are the system-bath coupling constants. The bath Hamiltonian is taken, as usual, as $\widehat{H}_{\mathrm{bath}} = \sum_l \hbar \omega_l \, \opbdag{l} \opb{l}$, where $\omega_l$ are the harmonic oscillator frequencies.
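For illustration purposes only (this sketch is ours, not part of the paper's numerics), the Hamiltonian of Eq.~\eqref{H_XY} can be constructed explicitly for a small chain to verify the two limits of the annealing schedule: a gapped paramagnet for $h \gg 1$ and a doubly degenerate ferromagnet at $h = 0$. The chain size and all names below are our own choices:

```python
import numpy as np

# Minimal sketch (our own, not from the paper): build H_sys of the
# transverse-field Ising chain with PBC for a small N and J = 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def op_on_site(op, i, N):
    """Tensor product placing `op` on site i of an N-site chain."""
    mats = [op if j == i else id2 for j in range(N)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def H_sys(h, N=8, J=1.0):
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        H -= J * op_on_site(sx, i, N) @ op_on_site(sx, (i + 1) % N, N)
        H -= J * h * op_on_site(sz, i, N)
    return H

# Sanity checks on the two limits of the annealing schedule:
E_para = np.linalg.eigvalsh(H_sys(h=10.0))   # unique paramagnetic GS, large gap
E_ferro = np.linalg.eigvalsh(H_sys(h=0.0))   # two degenerate ferromagnetic GSs
print(E_para[:2])
print(E_ferro[:2])
```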
The coupling between the system and the environment is captured by the spectral function~\cite{Leggett_RMP87,Weiss:book} $J(\omega) = \sum_l \lambda_l^2 \delta(\omega - \omega_l)$. We will focus on Ohmic dissipation: for a continuum of frequencies $\omega_l$, we define $J(\omega) = 2\alpha \hbar^2 \omega e^{-\omega/\omega_c}$, where $\alpha$ quantifies the system-bath coupling strength and $\omega_c$ is a cutoff frequency. Notice that here we consider a {\em single} (common) bath, which is coupled to all the spins along the $z$-direction, as done in Ref.~\onlinecite{Nalbach_PRB15}. This is, essentially, the quantum version of a noise term acting on the transverse field, whose classical counterpart was treated in Refs.~\onlinecite{Fubini_NJP07, DelCampo_PRL16}. This choice of system-bath coupling, which generates infinite-range correlations between all the spins in the chain, will allow us to proceed, after further simplifying assumptions, with a simple perturbative QME, as will be clear in a short while. For the closed-system case (\textit{i.e.} without the coupling to the oscillator bath), the problem can be analytically tackled by means of a standard Jordan-Wigner transformation, followed by a Fourier transform~\cite{Lieb_AP61, Pfeuty_AP70}, which allows one to rewrite Eq.~\eqref{H_XY} in terms of spinless fermion operators $\opc{k}$ in momentum space \begin{equation} \label{eqn:HsysF} \widehat{H}_{\mathrm{sys}}^F = \sum_{k>0} \Big[ \xi_k(t) (\opcdag{k}\opc{k} - \opc{-k}\opcdag{-k} ) + \Delta_k (\opcdag{k} \opcdag{-k} + \mathrm{H.c.}) \Big] \end{equation} where $\xi_k(t) = 2J \big( h(t) - \cos k \big)$ and $\Delta_k = 2J\sin k$. The $k$ values in the sum depend on the considered fermionic sector, since $\widehat{H}_{\mathrm{sys}}^F$ commutes with the fermion parity~\cite{Dziarmaga_PRL05}. The initial ground state belongs to the even-parity sector for any value of the transverse field, hence the time-evolving state always belongs to that sector. Due to this fact, one can restrict the choice of the $k$ values to $k=\pi (2n-1)/N$, with $n=1,\ldots, N/2$, thus fixing anti-periodic boundary conditions (ABC) for the fermions. In the presence of the system-bath interaction, it is in general not possible to write Eq.~\eqref{H_tot} as a sum of independent terms for each given $k$. However, the fact that the single bath operator $\hat{X}$ couples to a translationally invariant term, $\sum_i \hat{\sigma}_i^z = -2\sum_{k>0} (\opcdag{k}\opc{k} - \opc{-k}\opcdag{-k})$, ensures momentum conservation for the fermions. As argued in Ref.~\onlinecite{Nalbach_PRB15}, this in turn implies that the self-energy for the one-body fermionic Green's function is $k$-diagonal, and all fermionic momenta connected to the external lines have momentum $k$. Different momenta $k'\neq k$ appear only in closed internal loops. At the lowest order (second order) and within the usual Markovian approximation, the tadpole diagram, which contains a loop, simply provides a shift of the energy levels and can be neglected, while the only relevant self-energy diagram has momentum $k$ in the fermionic internal line~\cite{Patane_PRB09}.
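As a quick check (again our own sketch, with $J=1$), the single-particle spectrum $E_k = \sqrt{\xi_k^2 + \Delta_k^2}$ over the ABC momenta shows the gap closing at $k\to 0$ when $h\to h_c=1$, which is the origin of the KZ defect production:

```python
import numpy as np

# Our illustration: dispersion of Eq. (4) for ABC momenta k = pi(2n-1)/N.
# At h = 1 the smallest gap 2*E_k scales as ~ 4*pi*J/N (closing for N -> inf).
J, N = 1.0, 1000
k = np.pi * (2 * np.arange(1, N // 2 + 1) - 1) / N

def E_k(h):
    xi = 2 * J * (h - np.cos(k))
    Delta = 2 * J * np.sin(k)
    return np.sqrt(xi**2 + Delta**2)

for h in (10.0, 1.0, 0.0):
    print(h, 2 * E_k(h).min())  # minimal excitation gap vs transverse field
```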
This suggests that, at least at weak coupling and within a Born-Markov approximation, it is legitimate to assume that each momentum $k$ does not interact with other momenta $k'\neq k$, and that we could write the coupling to the bath as: \begin{subequations} \begin{align} \widehat{H}_{\mathrm{\scriptscriptstyle SB}} &= \sum_{k>0} \hat{X}_k \otimes (\opcdag{k}\opc{k} - \opc{-k}\opcdag{-k}), \\ \hat{X}_k &= \sum_l \lambda_l (\opbdag{l,k} + \opb{l,k}) , \end{align} \end{subequations} and $\widehat{H}_{\mathrm{bath}} = \sum_{k>0} \sum_l \hbar \omega_l \, \opbdag{l,k} \opb{l,k}$, where we have effectively ``split'' the original unique bath into $N/2$ identical copies, one for each fermionic $k$-value, all with identical $J(\omega)$. This choice greatly simplifies the problem, since the total Hamiltonian can be written as a sum in $k$-space: \begin{equation} \widehat{H}_{\mathrm{tot}}(t) = \sum_{k>0} \widehat{H}_{k}(t) . \end{equation} This automatically leads to an ensemble of independent dissipative two-level systems. Indeed, it is convenient to map the even-parity fermionic Hilbert space to a collection of pseudo-spin-1/2 quasiparticles, one for each $k>0$, with the identification $|\!\!\!\uparrow_k\rangle\equiv \opcdag{k}\opcdag{-k}|0\rangle$, and $|\!\!\downarrow_k\rangle\equiv|0\rangle$. Introducing the pseudo-spin Pauli matrices $\hat{\boldsymbol{\tau}}_k \equiv \big( \hat{\tau}^x_k, \hat{\tau}^y_k, \hat{\tau}^z_k \big)$ to represent such a two-dimensional space, the Hamiltonian for each $k$ mode reads: \begin{equation} \widehat{H}_{k}(t) = \big( \xi_k(t) + \hat{X}_k \big) \, \hat{\tau}^z_k + \Delta_k \, \hat{\tau}^x_k + \sum_l \hbar \omega_l \, \opbdag{l,k} \opb{l,k} . \end{equation} Hence, as anticipated, each driven two-level system is coupled to its own bath of harmonic oscillators through a $\hat{\tau}^z_k$ term. It is worth stressing that this simplifying assumption of $k$-decoupled baths does not modify the thermal steady state that the system reaches at long times, as we will explicitly show in a short while. Summarizing the previous discussion, for our specific choice of the system-bath coupling, the dissipative dynamics of a translationally invariant quantum Ising chain can be computed by studying the time evolution of $N/2$ two-level systems in momentum space, each coupled to an {\em independent identical} bath, described by a Gibbs density matrix at temperature $T_b = (k_B \beta_b)^{-1}$, where $k_B$ is Boltzmann's constant: \begin{equation} \hat \rho_{\mathrm{bath}} = \frac{\textrm{e}^{- \beta_b \widehat{H}_{\mathrm{bath}}}}{{\mathrm{Tr}}\big\{ \textrm{e}^{-\beta_b \widehat{H}_{\mathrm{bath}}} \big\} } \;. \end{equation} We address the dissipative dynamics of each two-level system by means of a standard perturbative Bloch-Redfield QME~\cite{Cohen:book,Gaspard_JCP99a}: \begin{equation} \label{Bloch-Redfield} \frac{d}{dt} \hat{\rho}_{\mathrm{sys}}^{(k)} = -\frac{i}{\hbar} \Big[ \widehat{H}_{\mathrm{sys}}^{(k)}, \hat \rho_{\mathrm{sys}}^{(k)} \Big] - \Big( \Big[ \hat{\tau}^z_k, \hat{S}_k(t) \, \hat{\rho}_{\mathrm{sys}}^{(k)} \Big] + \mathrm{H.c.} \Big) \,, \end{equation} where $\widehat{H}_{\mathrm{sys}}^{(k)}(t) = \xi_k(t) \hat{\tau}^z_k + \Delta_k \hat{\tau}^x_k$ is the two-level system Hamiltonian, and we assume a weak system-bath coupling and the usual Born-Markov approximation~\cite{Gaspard_JCP99a,Yamaguchi_PRE17}.
Here the first term on the right hand side represents the unitary coherent evolution, while the second term contains the dissipative effect of the bath on the system dynamics. The operator $\hat{S}_k(t)$ expresses the interaction of the bosonic bath with the time-evolving system~\cite{Yamaguchi_PRE17,Arceci_PRB17} in terms of the bath correlation function $C(t) \equiv C_k(t)={\mathrm{Tr}}_{\mathrm{bath}} \Big\{ \hat \rho_{\mathrm{bath}} \, e^{i\widehat{H}_{\mathrm{bath}}t/\hbar} \, \hat{X}_k \, e^{-i\widehat{H}_{\mathrm{bath}}t/\hbar} \, \hat{X}_k \Big\}$: \begin{equation} \label{S_operator} \hat{S}_k(t) \approx \frac{1}{\hbar^2} \! \int_{0}^{t} \mathrm{d} t' \, C(t') \, \hat{U}_{0,k}^{\phantom \dagger}(t,t-t') \, \hat{\tau}^z_k \, \hat{U}_{0,k}^{\dagger}(t,t-t') \;. \end{equation} The time-dependence of $\hat{S}_k(t)$ is due to the unperturbed time-evolution operator $\hat{U}_{0,k}(t,t-t')$ of the system, which changes with $t$ since $\widehat{H}_{\mathrm{sys}}^{(k)}(t)$ is driven: \begin{equation} \label{eq:Ut_evol} \hat{U}_{0,k}(t,t-t') = \overrightarrow{\rm Texp} \left[ -\frac{i}{\hbar} \int_{t-t'}^t \mathrm{d} s \, \widehat{H}_{\mathrm{sys}}^{(k)}(s) \right] \,. \end{equation} Assuming that $C(t)$ decays fast with respect to the time scales of the evolving system, and that $\widehat{H}_{\mathrm{sys}}^{(k)}(t)$ is approximately constant on the decay time scale of $C(t)$, the expression in Eq.~\eqref{eq:Ut_evol} can be drastically simplified. This allows us to write the explicit differential equations we solve for the $\hat{\rho}_{\mathrm{sys}}^{(k)}(t)$ of each single two-level system, as detailed in Ref.~\onlinecite{Arceci_PRB17}: we report them in App.~\ref{appA} for the reader's convenience. One might wonder how reasonable our rather special choice of bath is in representing the dissipative dynamics of an Ising chain. To answer this question, we have looked at the relaxation towards equilibrium at fixed values of the transverse field. Any reasonable weakly coupled bath at temperature $T_b$ should allow the system to reach thermal equilibrium values for the operators one wants to measure. This is indeed what the Bloch-Redfield equation~\eqref{Bloch-Redfield} does, but the equilibrium temperature $T$ that the system reaches is actually given by $T=T_b/2$, for a reason which is discussed in some detail in App.~\ref{appB}. In essence, a peculiarity of our bath-coupling is that {\em only the even-parity fermionic sector} is involved in the dissipative dynamics: the odd-parity part of the Hilbert space, which would correspond to different wave-vectors $k$, is completely decoupled from the bath and neglected altogether. This does not do justice to the equilibrium thermodynamics, which takes into account {\em all states} in the Hilbert space, and not only a dynamically conserved subspace. It turns out, however, that accounting for such a part of the Hilbert space simply amounts to having a temperature $T=T_b/2$. Hence, in all the plots we always indicate the effective temperature $T=T_b/2$ that the system would reach at thermodynamic equilibrium, rather than the bath temperature $T_b$ used in our simulations. \section{Numerical results} \label{sec:results} Before presenting our results, we introduce the QA protocol we are going to simulate, and the figure of merit we will use to quantify its performance.
Namely, we choose to vary the external field $h(t)$ in Eq.~\eqref{H_XY} in the time interval $t \in [0,\tau]$, where $\tau$ denotes the total annealing time, and implement a standard linear schedule $h(t) = \left(1- t/\tau \right) h_0$, where $h_0$ is the initial value of the field. In this way, the annealing crosses the zero-temperature critical point of the quantum Ising chain, $h_c=1$, separating a paramagnetic phase ($h>h_c$) from a ferromagnetically ordered phase ($h<h_c$) in the $\hat{\sigma}^x$ direction. In all the numerical calculations, we fix the number of sites at $N = 1000$. Concerning the bath, we choose $\omega_c = 10J$ as the cutoff frequency in the Ohmic spectral function. The initial condition $\hat{\rho}_{\mathrm{sys}}^{(k)}(0)$ is chosen to be the ground state of $\widehat{H}_{\mathrm{sys}}^{(k)}(0)$ for $h(0)=h_0 \gg 1$ (we fix $h_0=10$). The time evolution of $\hat{\rho}_{\mathrm{sys}}^{(k)}(t)$ is then calculated by integrating the corresponding equations of motion by means of a standard fourth-order Runge-Kutta method. To assess the quality of the annealing, we compute the average density of defects~\cite{Dziarmaga_PRL05,Caneva_PRB07} over the classical ferromagnetic Ising state. In the original spin language, the operator counting such defects reads: \begin{equation} \label{ndef_spin} \hat{n}_{\mathrm{def}} = \frac{1}{2 N} \sum_{i=1}^N \big( 1 - \hat{\sigma}_i^x \hat{\sigma}_{i+1}^x \big) \;. \end{equation} Translating it into fermions and pseudo-spins, we can write the desired average as: \begin{equation} \label{ndef_formula} n_{\mathrm{def}}(t) = \frac{1}{N} \sum_{k>0} {\mathrm{Tr}}\big\{ \hat{n}_{\mathrm{def}}^{(k)} \, \hat{\rho}_{\mathrm{sys}}^{(k)}(t) \big\} \end{equation} where $\hat{n}_{\mathrm{def}}^{(k)} = \mathbb{1} - \hat{\tau}^z_k \cos k + \hat{\tau}^x_k \sin k$. In the following, we discuss the dependence of the final density of defects $n_{\mathrm{def}}(t=\tau)$ on the annealing time $\tau$, for different system-bath coupling strengths $\alpha$ and temperatures $T=T_b/2$. In particular, we characterise the regimes for which an OWP is present or not, and study how the defect density approaches thermal values for long annealing times. We also analyse the conditions under which the processes of coherent and incoherent defect production can be regarded as independent, and highlight regimes in which this assumption fails. \subsection{The optimal working point issue} \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Figure2.pdf} \caption{Density of defects \textit{vs} annealing time $\tau$ for (a) $\alpha = 10^{-3}$ and (b) $\alpha = 10^{-2}$, at different effective temperatures $T$, as indicated in the legend. The arrows indicate the direction of increasing temperatures. The trend for high $T$ is of AKZ type, with an emergent OWP. At lower $T$ and/or higher $\alpha$ values a monotonic trend smoothly appears, with no OWP.} \label{fig:ndef_tau} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Figure3.pdf} \caption{Density of defects \textit{vs} annealing time $\tau$ for $k_BT = J$, at different coupling strengths $\alpha$. Note that, for large enough annealing times, $n_{\mathrm{def}}(\tau)$ converges towards the thermal value.} \label{fig:ndef_tau_T} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Figure4.pdf} \caption{(a) Dependence of $n_{\mathrm{opt}}$ on $T$, for various values of $\alpha$.
Each curve defines an upper value $T_{\mathrm{up}}(\alpha)$ at which $n_{\mathrm{opt}}(T_{\mathrm{up}})=n_{\infty}(T_{\mathrm{up}})$ (marked by stars), and a lower $T_{\mathrm{low}}(\alpha)$ at which the local minimum defining $n_{\mathrm{opt}}$ disappears. \newline (b) Phase diagram in the $T-\alpha$ plane with $T_{\mathrm{up}}(\alpha)$ and $T_{\mathrm{low}}(\alpha)$. A proper OWP only exists for $T> T_{\mathrm{up}}(\alpha)$. The shaded area indicates the typical range of temperatures of interest for the D-Wave$^{\mbox{\tiny \textregistered}}$ hardware~\cite{Harris_PRB10,Johnson_Nat11}, with $k_BT\simeq 12$ mK and $J\gtrsim 80$ mK. } \label{fig:phasediag} \end{figure} Let us start by looking at the behaviour of the final density of defects $n_{\mathrm{def}}(\tau)$ as a function of the annealing time $\tau$. In Fig.~\ref{fig:ndef_tau} we consider $\alpha = 10^{-3}$ and $10^{-2}$, for which our perturbative approach is reliable~\cite{Arceci_PRB17}, and different values of $T$. For sufficiently high temperatures, we observe a clear AKZ trend: after the initial decrease, $n_{\mathrm{def}}(\tau)$ attains an absolute minimum at some value $n_{\mathrm{opt}}=n_{\mathrm{def}}(\tau_{\mathrm{opt}})$ --- corresponding to the OWP $\tau_{\mathrm{opt}}$ --- and then starts to increase again towards a large-$\tau$ plateau at $n_{\infty}=n_{\mathrm{def}}(\tau\to\infty)$. By decreasing $T$, however, the plateau value $n_{\infty}$ can become smaller than $n_{\mathrm{opt}}$, hence $\tau_{\mathrm{opt}}$ would correspond to a {\em local minimum} and should be called, strictly speaking, a ``{\em local} optimal working point''. A further reduction of $T$ leads to the disappearance of the local minimum at $\tau_{\mathrm{opt}}$, with a monotonic decrease of $n_{\mathrm{def}}(\tau)$ as $\tau$ grows. By comparing the two plots, it is clear that the presence of an OWP is determined by the interplay between the temperature $T$ and the system-bath coupling strength $\alpha$. Conversely, Fig.~\ref{fig:ndef_tau_T} displays the final density of defects for a fixed temperature, $k_BT=J$, while scanning $\alpha$ in the range $[10^{-4}, 10^{-1}]$: we see that very weak couplings favour an AKZ behaviour, while stronger couplings tend to suppress the OWP. Moreover, it clearly appears that $n_{\mathrm{def}}(\tau)$ converges, for large $\tau$, towards a value $n_{\infty}(T)$ which depends {\em only} on the temperature $T$. We have verified that such a limiting value coincides with the final ($h=0$) thermal value $n_{\mathrm{therm}}(T)\equiv n_{\mathrm{def}}^{T}(h=0)$, indicated by a horizontal dashed line in Fig.~\ref{fig:ndef_tau_T} and calculated from the equilibrium average: \begin{equation} \label{eqn:thermal_t} n_{\mathrm{def}}^{T}(h) = \frac{1}{N} \sum_{k>0} {\mathrm{Tr}}\big\{ \hat{n}_{\mathrm{def}}^{(k)} \, \hat{\rho}_{\mathrm{sys}}^{T}(h) \big\} \;, \end{equation} where $\hat{\rho}_{\mathrm{sys}}^{T}(h)$ is the system thermal state at bath temperature $T_b=2T$, when the transverse field is $h$. The explicit calculation of $n_{\mathrm{therm}}(T)$, following App.~\ref{appB}, gives: \begin{equation} n_{\infty}(T) \equiv n_{\mathrm{therm}}(T) = \frac{1}{2} \big( 1-\tanh{(\beta J)} \big) \;. \end{equation} Figure~\ref{fig:phasediag}(a) summarizes the values obtained for $n_{\mathrm{opt}}(T)$ versus $T$, for various $\alpha$.
The stars mark the temperatures $T_{\mathrm{up}}(\alpha)$ where $n_{\mathrm{opt}}(T)$ crosses the (infinite-time limit) thermal value $n_{\mathrm{therm}}(T)$: given $\alpha$, only for $T> T_{\mathrm{up}}$ is the minimum at $\tau_{\mathrm{opt}}$ an absolute minimum of $n_{\mathrm{def}}(\tau)$. For $T<T_{\mathrm{low}}(\alpha)$ the minimum disappears completely --- $n_{\mathrm{def}}(\tau)$ is a monotonically decreasing function of $\tau$. For $T_{\mathrm{low}}(\alpha)<T<T_{\mathrm{up}}(\alpha)$, $n_{\mathrm{opt}}$ survives only as a local minimum. Summarizing, for the range of $\alpha$ we have investigated (the weak-coupling region $\alpha<10^{-1}$) one can construct two characteristic temperature curves, $T_{\mathrm{low}}(\alpha)<T_{\mathrm{up}}(\alpha)$, and a phase diagram, sketched in Fig.~\ref{fig:phasediag}(b). Notice that the two curves are difficult to extrapolate from the data for $\alpha\to 0$, because the simulations would require exceedingly long time-scales to observe the presence or absence of the local minimum in $n_{\mathrm{opt}}$. We can however argue, on rather simple grounds, that $T_{\mathrm{up}}(\alpha\to 0)$ should drop to zero as $\sim 1/\log(1/\alpha)$. Indeed, as seen from Fig.~\ref{fig:phasediag}(a), $n_{\mathrm{opt}}(T,\alpha)$ appears to be roughly linear in $T$ in the region where it crosses the thermal curve, $n_{\mathrm{opt}}(T,\alpha) \simeq A_{\alpha} T$, with a slope $A_{\alpha}$ which, as we have verified, depends on $\alpha$ in a power-law fashion. Since $n_{\mathrm{therm}}(T)\sim \textrm{e}^{-2J/k_BT}$ for small $T$, we can write the implicit relationship: \begin{equation} n_{\mathrm{opt}}(T_{\mathrm{up}},\alpha) \simeq A_{\alpha} T_{\mathrm{up}} \simeq \textrm{e}^{-2J/k_BT_{\mathrm{up}}} \;. \end{equation} Assuming a power-law for $A_{\alpha}$ we get, up to sub-leading corrections, \begin{equation} T_{\mathrm{up}}(\alpha) \sim \frac{\mathrm{const}}{\log{\frac{1}{\alpha}} + O\left( \log \log \frac{1}{\alpha} \right)} \;. \end{equation} This functional form fits our numerically determined data remarkably well. The behaviour of the $T_{\mathrm{low}}(\alpha)$ curve is considerably less trivial. On the practical side, it is computationally harder to obtain information on the temperature below which a local OWP ceases to exist. Our data suggest that there might be a critical value $\alpha_c\approx 5.5 \cdot 10^{-4}$ below which a local OWP exists even at the smallest temperatures, but this might be an artefact of some of the approximations involved in our weak-coupling QME. All in all, the phase diagram is quite clear --- at least for weak-to-moderate values of $\alpha$ --- in predicting the presence of a true OWP only for relatively large temperatures $T$. We will discuss this in the concluding section. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figure5.pdf} \caption{Density of defects \textit{vs} time for $\tau = 10^5$, $k_BT = J$ and different system-bath coupling strengths. The arrow at $t_c$ marks the value of $t$ at which the transverse field crosses the critical value, $h(t_c)=h_c$. For $\alpha = 10^{-2}$, where the defect density has fully converged (see Fig.~\ref{fig:ndef_tau_T}), $n_{\mathrm{def}}(t)$ is almost superimposed on the exact instantaneous thermal value computed from Eq.~\eqref{eqn:thermal_t}. } \label{fig:ndeftime_T2} \end{figure} The fact that the system converges to a thermal state for long annealing times is quite reasonable, and perhaps expected.
Indeed, if the thermalization time-scale becomes smaller than the annealing time-scale, one would expect that the system's state remains close to the instantaneous thermal equilibrium state at every time during the whole dynamics. Figure~\ref{fig:ndeftime_T2}, where we plot $n_{\mathrm{def}}(t)$ \textit{vs} time at fixed $k_BT = J$ and fixed annealing time $\tau = 10^5$, confirms this expectation. In Fig.~\ref{fig:ndeftime_T2} the dashed line indicates, as a guide, the ``instantaneous'' exact thermal value $n_{\mathrm{def}}^{T}(h(t))$ computed according to Eq.~\eqref{eqn:thermal_t} (see App.~\ref{appB} for details), while the arrow at $t_c$ marks the value of $t$ where the transverse field $h(t)$ crosses the critical point, $h(t_c)=h_c=1$. We observe that, for increasing couplings $\alpha$, the curves tend to be closer and closer to the instantaneous thermal one, since the thermalization time-scale decreases. \subsection{Interplay between coherent and incoherent defects production} As mentioned above, in the absence of dissipation, the defects produced are due to violations of adiabaticity in the coherent dynamics \begin{equation} \label{Bloch-Redfield-coh} \frac{d}{dt} \hat{\rho}_{\mathrm{coh}}^{(k)}(t) = -\frac{i}{\hbar} \Big[ \widehat{H}_{\mathrm{sys}}^{(k)}(t), \hat \rho_{\mathrm{coh}}^{(k)}(t) \Big] \;, \end{equation} and would be given by: \begin{equation} n_{\mathrm{def}}^{\mathrm{coh}}(t) = \frac{1}{N} \sum_{k>0} {\mathrm{Tr}}\big\{ \hat{n}_{\mathrm{def}}^{(k)} \, \hat{\rho}_{\mathrm{coh}}^{{(k)}}(t) \big\} \;. \end{equation} As is well known, $n_{\mathrm{def}}^{\mathrm{coh}}(t=\tau)$ obeys the usual KZ scaling~\cite{Polkovnikov_RMP11}. In the present case, for the Ising chain, $n_{\mathrm{def}}^{\mathrm{coh}}(\tau)\sim \tau^{-1/2}$. In the literature related to dissipative QA, it is often found that the density of defects can be regarded as the sum of two independent contributions \begin{equation} \label{eqn:additivity} n_{\mathrm{def}}(t) \approx n_{\mathrm{def}}^{\mathrm{coh}}(t) + n_{\mathrm{def}}^{\mathrm{diss}}(t) \;. \end{equation} The second contribution, $n_{\mathrm{def}}^{\mathrm{diss}}(t)$, should be due to a {\em purely dissipative} time-evolution of the system state: \begin{subequations} \label{Bloch-Redfield-diss} \begin{align} \frac{d}{dt} \hat{\rho}_{\mathrm{diss}}^{(k)} &= - \Big( \big[ \hat{\tau}^z_k, \hat{S}_k(t) \, \hat{\rho}_{\mathrm{diss}}^{(k)} \big] + \mathrm{H.c.} \Big), \\ n_{\mathrm{def}}^{\mathrm{diss}}(t) &= \frac{1}{N} \sum_{k>0} {\mathrm{Tr}}\big\{ \hat{n}_{\mathrm{def}}^{(k)} \, \hat{\rho}_{\mathrm{diss}}^{{(k)}}(t) \big\} \,. \end{align} \end{subequations} Notice that the time evolution of the system Hamiltonian enters here only through the bath-convoluted system operator $\hat{S}_k(t)$. In particular, based on this ``additivity'' assumption, Refs.~\onlinecite{Patane_PRL08, Patane_PRB09, Nalbach_PRB15} have computed scaling laws for the defect density in the presence of dissipation due to one or more thermal bosonic baths.
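To make the coherent benchmark concrete before turning to the caveats below, the KZ scaling above can be reproduced by integrating Eq.~\eqref{Bloch-Redfield-coh} mode by mode; the following minimal sketch (our own illustration, with $\hbar=J=1$, a small $N$, and pure states, which the coherent evolution preserves) is not the paper's production code:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Our illustration: closed-system evolution of the decoupled pseudo-spin modes
# under H_k(t) = xi_k(t) tau^z + Delta_k tau^x, linear ramp h(t) = (1-t/tau)*h0.
N, h0 = 200, 10.0
ks = np.pi * (2.0 * np.arange(1, N // 2 + 1) - 1.0) / N

def n_def_coh(tau):
    n = 0.0
    for k in ks:
        Delta = 2.0 * np.sin(k)
        xi = lambda t: 2.0 * ((1.0 - t / tau) * h0 - np.cos(k))
        H = lambda t: np.array([[xi(t), Delta], [Delta, -xi(t)]])
        psi0 = np.linalg.eigh(H(0.0))[1][:, 0].astype(complex)  # GS at h = h0
        psi = solve_ivp(lambda t, y: -1j * (H(t) @ y), (0.0, tau), psi0,
                        rtol=1e-6, atol=1e-9).y[:, -1]
        tz = abs(psi[0])**2 - abs(psi[1])**2            # <tau^z>
        tx = 2.0 * (psi[0].conjugate() * psi[1]).real   # <tau^x>
        n += 1.0 - tz * np.cos(k) + tx * np.sin(k)      # n_def^(k) expectation
    return n / N

for tau in (10.0, 40.0, 160.0):
    print(tau, n_def_coh(tau))  # roughly halves per 4x in tau => tau^(-1/2)
```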
However, a crucial requirement for these scaling laws to hold is that thermalization effects after the critical point has been crossed must be negligible: indeed, in Refs.~\onlinecite{Patane_PRL08, Patane_PRB09}, the adiabatic sweep is stopped {\em at the critical point} or immediately below it, thus giving the system no time to ``feel'' the thermal environment; in Ref.~\onlinecite{Nalbach_PRB15}, the analysis is carried out for a very small system-bath coupling $\alpha$, so that the thermalization time is extremely long, much longer than the characteristic annealing time scale. As a consequence, after the critical point crossing, the system is very weakly affected by the bath, and the additivity assumption still holds. Here we are considering an annealing protocol that can leave the system enough time to thermalize after the critical point is crossed: indeed, the quantum critical point is crossed when $h(t_c)=(1-t_c/\tau)h_0=1$, in our units, hence $t_c = (1-1/h_0) \tau= 0.9\tau$, for $h_0=10$. This means that, after the critical point, the system has a time $t_{\mathrm{avail}} = 0.1\tau$ to relax towards the thermal state, \textit{i.e.} a time proportional to the annealing time $\tau$. Therefore, for all the $\tau$ values for which $t_{\mathrm{avail}}$ is comparable to or larger than the bath thermalization time, the effect of the bath after the quantum critical point will no longer be negligible. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figure6.pdf} \caption{Test of the additivity assumption Eq.~\eqref{eqn:additivity} for the formation of defects. We compare $n_{\mathrm{def}}(t=\tau)$, calculated with the full Bloch-Redfield evolution in Eq.~\eqref{Bloch-Redfield} (continuous curves, filled symbols), to the sum of $n_{\mathrm{def}}^{\mathrm{coh}}(\tau)$ plus the purely-dissipative evolution contribution $n_{\mathrm{def}}^{\mathrm{diss}}(\tau)$ from Eq.~\eqref{Bloch-Redfield-diss} (dashed curves, empty symbols).} \label{fig:additivity} \end{figure} Figure~\ref{fig:additivity} shows a test of the additivity assumption for four different bath temperatures at fixed coupling $\alpha = 10^{-2}$; for each temperature, we compare the defect density obtained via Eq.~\eqref{Bloch-Redfield} with that obtained by the sum of $n_{\mathrm{def}}^{\mathrm{coh}}$ and $n_{\mathrm{def}}^{\mathrm{diss}}$. For $\tau$ small enough, the additivity assumption always holds, since $t_{\mathrm{avail}}$ is too short, \textit{i.e.} there is not enough time to feel the effect of the bath after the critical point is crossed. However, for longer annealing times additivity starts to fail: the lower the temperature, the more severe the breakdown. In particular, we see that additivity would {\em always} predict the presence of an OWP, but in some regimes the interplay between coherent and dissipative effects is non-trivial and the two contributions cannot be considered separately. Note also that for $k_BT = 5J$ the additivity assumption seems to hold for every annealing time, even after $n_{\mathrm{def}}$ has converged to its thermal value. However, this is probably because both values tend towards the maximal density of defects, where additivity trivially holds better. \section{Discussion and conclusions} \label{sec:conclusions} In the present paper, we have revisited some of the issues related to QA in the presence of dissipation.
In particular, we investigated under which conditions it is possible to find an ``optimal'' annealing time, the optimal working point (OWP), that minimizes the number of defects, and therefore maximizes the annealing performance. We have tackled these issues in the benchmark case of a transverse-field Ising chain QA, by studying its open-system quantum dynamics with a Markovian QME, as appropriate for a dissipative environment modelled by a standard Caldeira-Leggett Ohmic bath, weakly coupled in a uniform way to the transverse magnetization. Of course, such a choice of the system-bath coupling is rather peculiar and very specific. However, we have tested that it provides the correct steady-state thermalization for a chain evolving at fixed transverse field; hence we expect that our results should retain some general validity, at least qualitatively, for other forms of thermal bosonic baths. Interestingly, a proper OWP can be seen essentially only in a high-temperature regime, $k_BT\gtrsim 0.5 J$. For temperatures which might be relevant for current~\cite{Harris_PRB10,Johnson_Nat11}, and presumably future, quantum annealers, $k_BT \ll J$, schematically sketched by a shaded area in the phase diagram of Fig.~\ref{fig:phasediag}(b), we found that $n_{\mathrm{def}}(\tau)$ would be monotonically decreasing (hence without an OWP), except for very weak bath couplings, $\alpha\lesssim 10^{-3}$. In the intermediate temperature regime, $n_{\mathrm{def}}(\tau)$ displays a {\em local minimum} at finite $\tau$, but the actual global minimum is attained as a $\tau\to\infty$ thermal plateau. Obviously, the previous considerations apply to experimental realizations where the coupling to the environment can be considered weak and Ohmic, which apparently is not the case for the D-Wave$^{\mbox{\tiny \textregistered}}$ hardware~\cite{Harris_PRB10,Johnson_Nat11}, where $1/f$ noise seems to play an important role~\cite{Boixo_2016}. The extension of our study to cases where the bath spectral density has different low-frequency behaviours, such as sub-Ohmic or with $1/f$ components, is a very interesting open issue which we leave to future work. Previous related studies~\cite{Patane_PRL08, Patane_PRB09, Nalbach_PRB15} on the same model did not detect all these different behaviours, because they either stopped the annealing close to the critical point~\cite{Patane_PRL08, Patane_PRB09} --- to highlight some universal aspects of the story, which survive in the presence of the environment --- or considered an extremely small, $\alpha\sim 10^{-6}$, system-bath coupling~\cite{Nalbach_PRB15}: this amounts, in some sense, to effectively disregarding thermalization/relaxation processes occurring after the critical point has been crossed. A second issue we have considered is the additivity {\em Ansatz} for the density of defects, Eq.~\eqref{eqn:additivity}, i.e., the assumption that it is a simple sum of the density of defects coming from the coherent dynamics and that originating from the time evolution due to dissipators only: we found that additivity breaks down as soon as the bath thermalization time is effectively shorter than the characteristic time-scale for the system dynamics; for our annealing protocol, this happens at long enough annealing times $\tau$, as shown in Fig.~\ref{fig:additivity}.
In conclusion, we believe that QA protocols realized with quantum annealers for which thermal effects are sufficiently weak, at sufficiently low temperatures, should not show any OWP, but rather a monotonic decrease of the error towards a thermal plateau at large running times. Furthermore, it would be tempting to move away from the reference quantum Ising chain toy-model, and explore the effects of dissipation in more sophisticated models. The use of quantum trajectories~\cite{Daley_AP14} or of tensor-network approaches, recently extended to deal with open quantum systems~\cite{Verstraete_04, Zwolak_04}, could help in addressing generic one-dimensional (or quasi-one-dimensional) systems, which would hardly be tractable analytically. \section*{ACKNOWLEDGMENTS} We acknowledge fruitful discussions with A. Silva, R. Fazio and G. Falci. Research was partly supported by EU FP7 under ERC-MODPHYSFRICT, Grant Agreement No. 320796.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Methodology} \label{sec:methodology intro} \begin{figure*}[tb] \small \centering \includegraphics[width=0.94\linewidth]{diagrams/method_pipeline3_comp.pdf} \caption{Proposed two-stage navigation scheme. \textbf{First stage} -- \textit{Workspace definition}: (a) recognize the target and compute its pose based on visual markers, (b) navigate close to the target based on navigational sensors and visual markers, (c) generate a probabilistic map with stereo imagery and Dispnet; the RGB-D camera based probabilistic map is displayed for reference. \textbf{Second stage} -- \textit{Optimized localization}: (f)-(g) multimodal localization inputs which are incorporated into a final Kalman filter-based localization estimate. An image quality assessment ({IQA}) is introduced (h) to validate the reliability of the extended localization inputs and boost the accuracy of the estimates given by the baseline inputs (see Sec.~\ref{sec: adaptive_navigation_scheme}).} \label{fig:overview_states_localization} \end{figure*} Figure~\ref{fig:overview_states_localization} shows the proposed two-stage navigation scheme: \newline \textbf{First stage} -- \emph{Workspace definition with loose localization} \begin{description} \item[1.1.] Approach the target (oil\&gas panel) until its global 3D pose is confidently determined based on a priori knowledge; see Sec.~\ref{sec: knowledge-enable localization}, Fig.~\ref{fig:overview_states_localization}(a). \item[1.2.] Navigate close to the target using odometry from navigation sensors and visual landmarks (\emph{baseline localization}); see Sec.~\ref{sec:marker localization} and Fig.~\ref{fig:overview_states_localization}(a)(b). \item[1.3.] Compute a probabilistic map of the target object/area from stereo input while navigating, based on the odometry uncertainty; see examples in Fig.~\ref{fig:overview_states_localization}(c). \end{description} \textbf{Second stage} -- \emph{Optimized localization} \begin{description} \item[2.1] Evaluate the reliability of the visual input, i.e., stereo image quality (Sec.~\ref{sec: adaptive_navigation_scheme}, Fig.~\ref{fig:overview_states_localization}(h)), and determine which of the following VO modalities to use: \item[2.2.a] Extract planes (Sec.~\ref{sec: plane extraction}) from dense pointclouds (Sec.~\ref{sec: depth map computation}), filtered using the probabilistic map computed in the first stage to prevent large drifts and noise artifacts. See Fig.~\ref{fig:overview_states_localization}(g) and Fig.~\ref{fig:second_stage_pc_comet}. \item[2.2.b] Extract and track robust 2D features from imagery; see Sec.~\ref{sec: feature-based method}, Fig.~\ref{fig:overview_states_localization}(f). \item[2.3.] Compute visual odometry either from plane registration or feature tracking (Sec.~\ref{sec: plane-based method},~\ref{sec: feature-based method}), depending on the image quality assessment (IQA), and integrate the results into the localization filter. \end{description} The objective of the first stage is to compute a probabilistic map (octomap~\cite{Hornung2013}) of the expected ROV workspace area. A coarse 3D representation of the scene can be obtained given a few samples. Figure~\ref{fig:overview_states_localization}(c) illustrates this by comparing the octomap generated with a simulated RGB-D camera (reference) and the one generated from pointclouds computed from stereo imagery.
High precision is not crucial here, since the map is not the final goal but a means to filter spurious 3D pointclouds, e.g., from dynamic objects like fish or from processing artifacts, as shown in Fig.~\ref{fig:second_stage_pc_comet}. We will now describe in detail each component of the second stage, \emph{optimized localization}. \begin{figure}[tb] \small \centering \includegraphics[width=0.98\linewidth]{fig/localization/second_stage/pc_comet.png} \caption{Comet-like artifacts (right) produced in 3D pointclouds by noisy depth maps (left). These are filtered by the probabilistic map generated in stage 1 (Fig.~\ref{fig:overview_states_localization}(c)) before planes are extracted to obtain visual odometry (Fig.~\ref{fig:overview_states_localization}(g)). } \label{fig:second_stage_pc_comet} \end{figure} \subsection{Visual odometry} \label{sec:visual odometry} The visual markers attached to the panel (Sec.~\ref{sec:marker localization}) are not always observable. Therefore, further methods are beneficial to aid navigation. Here we adapt plane-based and feature-based odometry methods to our scenario to exploit structures and features found in the environment. \subsubsection{Odometry from plane registration} \label{sec: plane-based method} After plane segmentation (Sec.~\ref{sec: plane extraction}), the plane normal equals the eigenvector with the smallest eigenvalue of the matrix $\mathbf{A}$: \begin{equation} \small \mathbf{A} = \begin{pmatrix} \Gamma_n(x,x) & \Gamma_n(x,y) & \Gamma_n(x,z)\\ \Gamma_n(y,x) & \Gamma_n(y,y) & \Gamma_n(y,z)\\ \Gamma_n(z,x) & \Gamma_n(z,y) & \Gamma_n(z,z) \end{pmatrix} \end{equation} where $\Gamma_n(\alpha,\beta) = \sum_{j}^{n}(\alpha_j - m_{\alpha})(\beta_j - m_{\beta}), \alpha, \beta \in \{x, y, z\}$ and $m$ is the mass center of the points in the plane set $\mathbb{P}_i$. To update the matrix $\mathbf{A}$, and hence the plane normal representation, as fast as possible when new points are considered, the sum of orthogonal distances $\Gamma_{l}(\alpha,\beta)$ is updated with a new point $p_{l+1}$ as follows: \begin{equation} \small \begin{aligned} \Gamma_{l+1}(\alpha,\beta) = & \Gamma_l(\alpha,\beta) + \alpha_{l+1}\beta_{l+1} \\ & -m_{\alpha}(l+1)(\sum_{j=1}^{l+1}p_j + m_{\alpha}(l+1)) \\ & +m_{\alpha}(l)(\sum_{j=1}^{l}p_j + m_{\alpha}(l)) \end{aligned} \end{equation} Then the relative pose ${^C_R\mathbf{T}_{rel}}$ of the camera frame between times $t$ and $t+1$ can be calculated from the extracted planes. Here we exploit the plane registration method of \cite{pathak2010fast} to estimate rotation only: as shown in the experiment of Sec.~\ref{exp: dense maps}, in our deep-sea scenario we commonly encountered between 3 and 5 planes per frame, whereas at least 4 plane correspondences are necessary to estimate translation. Let the planes extracted at times $t$ and $t+1$ be $\varPi_1 = \{{^1\pi_i} | i = 1,...,M\} $ and $\varPi_2 = \{{^2\pi_j} | j = 1,...,N\} $, respectively; then the $M \times N$ candidate pairs $({^1\pi_i}, {^2\pi_j})$ are filtered and registered by the following tests from~\cite{pathak2010fast}, adapted to our deep-sea setup: \begin{itemize} \item Size-Similarity Test: The Hessian matrix $H$ of the plane parameters derived from plane extraction is proportional to the number of points in the plane $\pi$.
Thus, the Hessians of two matched planes should be similar, i.e., \begin{equation} \small \lvert \log{\lvert ^1H_i \rvert_+} - \log{\lvert ^2H_j \rvert_+} \rvert < \bar{L}_{det} \end{equation} where $\lvert H_i \rvert_+$ is the product of the singular values of $H_i$ and $\bar{L}_{det}$ is the similarity threshold. \item Cross-Angle Test: The angle between two planes $({^1\pi_{i_1}}, {^1\pi_{i_2}})$ in frame ${^1\mathcal{F}}$ should be approximately the same as the angle between the corresponding two planes $({^2\pi_{j_1}}, {^2\pi_{j_2}})$ in frame ${^2\mathcal{F}}$, described as \begin{equation} \small {^1\hat{n}^{\top}_{i_1}}{^1\hat{n}_{i_2}} \approx {^2\hat{n}^{\top}_{j_1}}{^2\hat{n}_{j_2}} \end{equation} where $\hat{n}_{k}$ is the normal to the plane $\pi_k$. \item Parallel Consistency Test: Two plane pairs $({^1\pi_{i_1}}, {^2\pi_{j_1}})$ and $({^1\pi_{i_2}}, {^2\pi_{j_2}})$ from the frames ${^1\mathcal{F}}$ and ${^2\mathcal{F}}$ are consistently parallel if their normals satisfy ${^1\hat{n}^{\top}_{i_1}}{^1\hat{n}_{i_2}}\approx 1$ and ${^2\hat{n}^{\top}_{j_1}}{^2\hat{n}_{j_2}} \approx 1$, or ${^1\hat{n}^{\top}_{i_1}}{^1\hat{n}_{i_2}}\approx -1$ and ${^2\hat{n}^{\top}_{j_1}}{^2\hat{n}_{j_2}} \approx -1$. \end{itemize} If only one plane is extracted from the current frame, it is tested only by the Size-Similarity test, because the others require at least two plane correspondences. The filtered plane pairs are then used to calculate the rotation ${^1_2R}$ between frames ${^1\mathcal{F}}$ and ${^2\mathcal{F}}$ by the equation: \begin{equation} \small \max_{^1_2R} \zeta_r = \text{constant} + \sum_{i=1}^{S}\omega_i{^1\hat{n}_i}\cdot({^1_2R} {^2\hat{n}_i}) \label{eq: cal_rotation} \end{equation} which can be solved by Davenport's q-method; here the $\omega_i$ are weights inversely proportional to the rotational uncertainty. If the rotation ${^1_2R}$ is represented as the quaternion ${^1_2\hat{q}}$, Eq.~\eqref{eq: cal_rotation} can be written as: \begin{equation} \small \begin{aligned} \max_{^1_2\hat{q}} \zeta_r & = {^1_2\hat{q}}^{\top} K \, {^1_2\hat{q}} \end{aligned} \end{equation} where the $4\times4$ matrix $K$ accumulates the weighted plane-normal pairs. Then the covariance ${^1_2\mathbf{C}_{\hat{q}\hat{q}}}$ of the quaternion ${^1_2\hat{q}}$ is \begin{subequations} \small \begin{equation} \small {^1_2\mathbf{H}_{\hat{q}\hat{q}}} = 2(K - \mu_{max}(K)I) \end{equation} \begin{equation} \small {^1_2\mathbf{C}_{\hat{q}\hat{q}}} = - {^1_2\mathbf{H}_{\hat{q}\hat{q}}}^{+} \end{equation} \end{subequations} where $\mu_{max}(K)$ is the maximum eigenvalue of $K$, derived from Davenport's q-method. The covariance ${^1_2\mathbf{C}_{\hat{q}\hat{q}}}$ and rotation ${^1_2\hat{q}}$ are used as input for our navigation filter. \subsubsection{Feature-based tracking} \label{sec: feature-based method} Whenever sufficient distinctive 2D texture features are present in the environment, feature-based methods provide a reliable and fast way to compute odometry. Here, ORB-SLAM2 \cite{mur2017orb} is used. It consists of three main threads: tracking, local mapping, and loop closing. Considering our application, we briefly describe the tracking process. When a new stereo image arrives, an initial guess of the pose ${^C_R\mathbf{T}'}$ is estimated from the features tracked in the last received image. Afterwards, the pose ${^C_R\mathbf{T}}$ can be improved by conducting bundle adjustment on a memorized local map $\mathbf{M}_i$. Moreover, the tracking thread also decides whether the stereo image should become a keyframe or not. When tracking fails, new images are matched with stored keyframes to re-localize the robot.
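For orientation, the kind of frame-to-frame feature matching that such a tracking front end builds on can be sketched with OpenCV's ORB implementation. This is a rough stand-in for illustration, not ORB-SLAM2's actual code, and the feature cap is our own assumption; the normalized match count is also the feature-based term reused by the IQA of Sec.~\ref{sec: adaptive_navigation_scheme}:

```python
import cv2

# Rough stand-in (not ORB-SLAM2 itself): frame-to-frame ORB matching.
MAX_FEATURES = 1000  # assumed cap, also used to normalize the IQA term

orb = cv2.ORB_create(nfeatures=MAX_FEATURES)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def track(prev_gray, curr_gray):
    """Match ORB features between consecutive grayscale frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return [], 0.0  # textureless frame: nothing to track
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    # Normalized match count: the feature-based contribution m_I(t) to the IQA.
    m_feat = min(len(matches) / MAX_FEATURES, 1.0)
    return matches, m_feat
```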
ORB-SLAM2 was chosen because direct VO methods (DSO, LSD-SLAM) assume brightness constancy throughout image regions~\cite{Park2017_slam}, which seldom holds underwater due to light backscatter. Likewise, visual-inertial SLAM methods (VINS, OKVIS) require precise synchronization between camera and IMU~\cite{Buyval2016_slam}, but by hardware design all sensors are loosely coupled in our application. \subsection{Adaptive image quality based navigation} \label{sec: adaptive_navigation_scheme} At the end of the localization pipeline, the EKF can integrate all the inputs based on their measurement confidence, i.e., their covariance matrices. For efficiency, it is preferable to filter out low-confidence odometry values before using them in the EKF. This could be done by examining the covariance matrix after the vision processing, but computation time is an important factor in our real-time application. We hence use decision criteria on the sensor (image) quality to skip visual odometry computations that are likely to generate low-confidence results. We introduced multiple visual odometry modalities in Sec.~\ref{sec:visual odometry}; see Fig.~\ref{fig:overview_states_localization}(e)(f)(g). The visual marker-based odometry, as part of the baseline inputs, is not filtered out due to its high reliability and precision. Feature tracking ORB-SLAM localization is highly dependent on image quality; textureless regions and low contrast significantly reduce its accuracy. In contrast, plane-based odometry copes well with textureless environments provided there is an underlying structure, but it is very computationally demanding due to dense depth estimation and plane extraction (Sec.~\ref{sec: dense depth mapping and computation}). Based on this, we propose an image quality assessment (IQA) to reason about which visual cues to use in the localization pipeline. We aggregate a non-reference image quality measure based on the Minkowski Distance (MDM)~\cite{Ziaei2018_MDMIQA} and the number of tracked ORB features between consecutive frames. The MDM provides three values in the $\left[ 0,1 \right]$ range describing the contrast distortion in the image; accordingly, the number of ORB features is normalized by the predefined maximum number of features to track. If each of these IQA values is denoted $m_I(t)$, the final measurement for each timestamp $t$ is their average $\overline{m}_I(t)$. Experiments in Sec.~\ref{exp:visual odometry performance} show the performance of these IQA measurements. \section{Introduction} \label{introduction} The marine environment is challenging for automation technologies. Yet, oceans are one of the main forces driving commerce, employment, economic revenue and natural resource exploitation, which in turn triggers profound interest in the development of new technologies to facilitate intervention tasks, e.g., in oil\&gas production (OGP). Remotely Operated Vehicles (ROVs) are the current work-horses for these tasks, which include the inspection of ships and submerged structures, and valve manipulation. In particular, manipulation tasks are extremely challenging without stable and accurate robot localization. In general, global positioning-based navigation is desirable to correct measurements from inertial navigation systems (INS) in a \emph{tightly-coupled} approach~\cite{Tal2017_uwnavigation}.
However, such data has to be transmitted acoustically through ultra-short/long baseline (USBL/LBL) beacons, which suffer from low bandwidth~\cite{Stutters2008_uwnavigation}, signal delays and deployment constraints. Additional sensors, i.e., Doppler velocity logs (DVLs) and digital compasses, can improve the localization accuracy, but still not to the standard required to perform \emph{floating-base} manipulation. We present a navigation scheme that uses visual odometry (VO) methods based on stereo camera imagery and an initial probabilistic map of the working space to boost localization accuracy in challenging conditions. The application scenario is the monitoring and dexterous manipulation of an OGP panel (Fig.~\ref{fig:dexrov_in_action}) within the EU-project DexROV~\cite{Mueller2018_DexROVSIL,Birk2018}. \begin{figure} \small \centering \subfigure[]{\label{fig:rov_and_panel} \includegraphics[width=0.95\linewidth]{fig/dexrov/dexrov_rov_panel_3rdview.png}} \subfigure[]{\label{fig:rov_stereo_camera} \includegraphics[width=0.395\linewidth]{fig/dexrov/dexrov_rov.png}} \subfigure[]{\label{fig:panel_analog_camera} \includegraphics[width=0.555\linewidth]{fig/dexrov/dexrov_panel_grabbing.png}} \caption{\subref{fig:rov_and_panel} ROV performing oil\&gas valve manipulation tasks. \subref{fig:rov_stereo_camera} ROV stereo camera system and manipulation arms. \subref{fig:panel_analog_camera} ROV first-person view while approaching the oil\&gas panel.} \label{fig:dexrov_in_action} \end{figure} To address the challenges of underwater vision, we combine plane registration and feature tracking methods. 3D planes are extracted from dense point cloud (DPC) generators, which produce complete disparity maps at the cost of depth accuracy; density is the key factor for finding reliable 3D planes. This is particularly useful in structured or man-made environments, which predominantly contain planar surfaces, like the installations used in OGP. Furthermore, a decision-making strategy based on image quality is introduced: it allows selecting the visual odometry method to use, in order to obtain reliable measurements and improve computation times. In summary, our contributions are: \begin{itemize} \item Development of different visual odometry (VO) modalities based on: knowledge-enabled landmarks, 3D plane registration and feature tracking. \item Integration of the multimodal VO into an underwater localization filter that adapts its inputs based on image quality assessments. \item A two-step navigation scheme for structured environments. Initial suboptimal localization measurements are used to compute a coarse probabilistic 3D map of the workspace. In turn, this map is used to filter noise and optimize the integration of further measurements. \item Validation of the presented scheme in realistic field conditions with data from sea trials off the coast of Marseille, France. \end{itemize} \subsection{Image quality based navigation performance} \label{exp:visual odometry performance} Based on the previous experiment, we choose Dispnet as the dense point cloud generator. To analyze the strengths and weaknesses of the VO methods, the ROV circles the panel while acquiring very diverse stereo imagery. The panel has three sides with distinct purposes: valve manipulation (side 1), dexterous grasping (side 2) and a textureless side (side 3); see Fig.~\ref{fig:panel_in_simulation}.
This last panel side helps evaluate how the methods cope with scarce image features, and how our image quality measure $\overline{m}_I(t)$ from Sec.~\ref{sec: adaptive_navigation_scheme} captures this. \begin{figure}[!b] \centering \subfigure[Side 1]{\label{fig:panel_side_1} \includegraphics[width=0.3\linewidth]{fig/panel/panel_side_1.png}} \subfigure[Side 2]{\label{fig:panel_side_2} \includegraphics[width=0.3\linewidth]{fig/panel/panel_side_2.png}} \subfigure[Side 3]{\label{fig:panel_side_3} \includegraphics[width=0.33\linewidth]{fig/panel/panel_side_3.png}} \caption{Oil\&gas panel sides for \subref{fig:panel_side_1} valve manipulation, \subref{fig:panel_side_2} dexterous grasping, \subref{fig:panel_side_3} and textureless region visualization.} \label{fig:panel_in_simulation} \end{figure} The pose error is defined as the difference between the ground-truth robot pose in simulation $\ensuremath{\trfFr{\frameRobot}{\frameOdom}{_{S}}}$ and the robot pose determined from visual odometry $\ensuremath{\trfFr{\frameRobot}{\frameOdom}{_{VO}}}$: \begin{equation} \small \begin{aligned} \small m(\mathcal{E})= & \robotPoseError{S}{VO} \\ = & \langle \robotPositionError{S}{VO} , \robotOrientationError{S}{VO} \rangle \end{aligned} \label{eq:pose_error_measure} \end{equation} where $\robotPositionError{S}{VO}$ is the Euclidean distance between positions and $\robotOrientationError{S}{VO}$ is the minimal geodesic distance between orientations. For our experiments, we also compute the \emph{lag-one autocorrelation} $m_{A}=\sum_{t}\ensuremath{\trfFr{\frameRobot}{\frameOdom}{_{F}}} (t)\ensuremath{\trfFr{\frameRobot}{\frameOdom}{_{F}}} (t-1)$ on the EKF-predicted poses $\ensuremath{\trfFr{\frameRobot}{\frameOdom}{_{F}}}$; $m_{A}$ is a measure of trajectory smoothness, which is important to prevent the robot from performing sudden jumps. \subsubsection{Visual odometry accuracy} \label{exp: vo accuracy} First, we evaluate the accuracy of the proposed VO methods. Fig.~\ref{fig:visual odometry simulation tests}(top) shows the results; the time axis has been normalized and the error is logarithmically scaled for better readability. The orange horizontal lines indicate the times when no visual marker is detected, and the vertical red lines show when the ROV transitions to another side of the panel. As expected, the orientation derived from the markers is the most accurate, but it also presents the outliers with the largest errors, e.g., close to \SI{0.1}{s} and \SI{0.7}{s} in Fig.~\ref{fig:visual odometry simulation tests}. The feature tracking ORB-SLAM2 method presents the greatest error but with the least variance. When there are very scarce features to track, such as on panel side 3, the error abruptly increases until a stored keyframe from panel side 1 is seen again, close to time \SI{1.0}{s}. \begin{figure}[!b] \centering \resizebox{\linewidth}{!}{\inputpgf{fig/image_quality}{exp-4-2_.pgf}} \caption{(Top) Orientation error for different visual odometry methods. No markers are detected at the sampling times marked \textcolor{orange}{orange}; changes of panel side are marked with a \textcolor{red}{red} line. (Bottom) Image quality measurement $\overline{m}_I(t)$ per stereo pair.} \label{fig:visual odometry simulation tests} \end{figure} The plane registration method has accuracy similar to the visual marker-based odometry, with outliers only when the ROV transitions between panel sides. During these periods the corner of the panel is seen, which is not a planar but a cylindrical surface.
Depending on the viewpoint, it is therefore represented by different planes. These results are complemented by Table~\ref{table:localization_all_vs_adaptive}(a), which shows the higher computational cost of the plane-based VO. During field trials, the overall ROV perception$+$manipulation control system and its graphical interface have reported peaks of $92\%$ GPU RAM usage; hence, using Dispnet can lead to GPU overuse and spurious 3D maps. Likewise, the slow update times of the plane-based VO might limit the ROV velocity. For these reasons, VO based on feature tracking is given preference when the image quality is good. \begin{table}[!b] \footnotesize \captionsetup{justification=centering} \caption{Image-quality-based navigation performance} \begin{tabularx}{.45\linewidth}{@{}XYY@{}} \toprule ~ & VO-ORB & VO-planes \\ \midrule CPU $\left[ \% \right]$ & 3.2 & 6.8 \\ GPU $\left[ \% \right]$ & 0.1 & 17.6 \\ Time $\left[ s \right]$ & 0.145 & 3.151 \\ \bottomrule \end{tabularx} \hfill \begin{tabularx}{.5\linewidth}{@{}XYY@{}} \toprule ~ & EKF-all & EKF-adaptive \\ \midrule $\bar{m}_{M,F}(\langle \bar{\mathbf{p}} \rangle )$ $\mathrm{[m]}$ & 0.73 \phantom{a}$\pm$0.38 & 0.61 \phantom{a}$\pm$0.14 \\ $\bar{m}_{M,F}(\langle \bar{\mathbf{q}} \rangle )$ $\mathrm{[deg]}$ & 8.93 \phantom{a}$\pm$4.22 & 3.02 \phantom{a}$\pm$1.06 \\ $m_{A}$ & 0.92 & 0.95 \\ \bottomrule \end{tabularx} \label{table:localization_all_vs_adaptive} \vspace{0.1cm} \begin{flushleft} (a) Computation performance \hspace{0.7cm}(b) Pose error and traj. autocorrelation \end{flushleft} \end{table} \subsubsection{Image quality assessment} \label{exp:iqa} In this experiment, we validate that the proposed image quality measure $\overline{m}_I(t)$ detects when an image has low contrast and/or large uniform texture regions. This is shown in Fig.~\ref{fig:visual odometry simulation tests}(bottom); $\overline{m}_I(t)$ is lowest when panel side 3 is in view and when the ROV navigates around the corners. As can be seen, $\overline{m}_I(t)$ mostly shows a behavior inverse to the VO accuracy of ORB-SLAM2. Based on these simulations, we set a threshold $(\approx0.45)$ for $\overline{m}_I(t)$ to trigger the computationally expensive plane-based VO only when the image quality is poor. When using the IQA to decide which VO inputs to integrate into the localization filter (\emph{EKF-adaptive}), we reduce the pose error and increase the smoothness of the followed trajectory, see Table~\ref{table:localization_all_vs_adaptive}(b). Simply integrating all odometry inputs (\emph{EKF-all}) does not boost performance, as the Kalman filter does not reason about the quality of the sensor data beyond examining the inputs' covariance matrices; a sketch of this gating is given below.
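To make this gating concrete, the following is a minimal sketch of how the image quality measure and the adaptive input selection could be realized; the patch-contrast measure is only a plausible stand-in for the actual definition of $\overline{m}_I(t)$ in Sec.~\ref{sec: adaptive_navigation_scheme}, and all function names are illustrative assumptions.
\begin{verbatim}
import numpy as np

def image_quality(gray, patch=32):
    # Illustrative stand-in for m_I(t): mean local contrast over
    # patches; low-contrast or uniformly textured images pull the
    # score towards 0.
    h, w = gray.shape
    stds = [gray[y:y + patch, x:x + patch].std()
            for y in range(0, h - patch + 1, patch)
            for x in range(0, w - patch + 1, patch)]
    # Normalize the std of 8-bit intensities to roughly [0, 1].
    return float(np.clip(np.mean(stds) / 64.0, 0.0, 1.0))

def select_vo_inputs(gray, orb_vo, plane_vo, threshold=0.45):
    # Good texture: cheap feature tracking suffices.
    if image_quality(gray) >= threshold:
        return [orb_vo()]
    # Poor texture: trigger the expensive plane-based VO instead.
    return [plane_vo()]
\end{verbatim}
Here \texttt{orb\_vo} and \texttt{plane\_vo} stand for hypothetical callables returning the respective odometry estimates that are then fed to the EKF.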
\subsection{Dense depth mapping and plane extraction} \label{sec: dense depth mapping and computation} \subsubsection{Depth map computation} \label{sec: depth map computation} This is a preprocessing step for the plane-based odometry computation (Sec.~\ref{sec: plane-based method}). We consider two state-of-the-art real-time dense point cloud (DPC) generators: Efficient Large-Scale Stereo Matching (ELAS)~\cite{Geiger2011_ELAS} and Dispnet~\cite{Mayer2016_Dispnet}. ELAS is geometry-based and relies on a probabilistic model built from sparse keypoint matches and a triangulated mesh, which is used to compute the remaining ambiguous disparities. Since the probabilistic prior is piecewise linear, the method is robust against uniformly textured areas. Dispnet is a data-driven approach that uses a convolutional neural network for disparity regression. We compare these methods because their distinct underlying approaches offer different advantages and disadvantages. For example, ELAS has faster processing times on the CPU and outputs more precise but incomplete maps when specular reflections or very large textureless areas occur. Dispnet produces a complete map even in the presence of image distortions, but smooths the disparity changes around object boundaries. On top of the depth estimation, it is important to include techniques that reduce the noise-induced artifacts commonly present in outdoor stereo reconstruction. Fig.~\ref{fig:second_stage_pc_comet} shows an example where object borders produce comet-like streaks fading away from the camera position. We also encountered this artifact when running the system with Dispnet during sea field trials: when the GPU RAM gets overloaded, some layers of the Dispnet neural network produce spurious results. In order to mitigate these artifacts, the incoming point cloud is filtered by rejecting points which do not intersect with the octomap~\cite{Hornung2013} representing the workspace obtained from the first navigation stage (\emph{loose localization}). For an efficient nearest-neighbor search between point cloud and octomap voxels, a \emph{kd}-tree is used to represent the workspace octomap. Consequently, a substantial number of points is discarded (Fig.~\ref{fig:overview_states_localization}(g) -- depth map point cloud generation), which also reduces computation cost and memory usage. \subsubsection{Plane extraction} \label{sec: plane extraction} Due to the noisy nature of stereo-generated point clouds, we use a region-growing technique for plane extraction~\cite{poppinga2008fast}. It also outputs a covariance matrix that describes the planarity of the found planes, which can then be integrated for a better estimation of the localization uncertainty (Fig.~\ref{fig:overview_states_localization}(g) -- plane extraction-based visual odometry). Moreover, it efficiently represents not only the planes but also the holes found in them as polygons, which allows reasoning about the quality of the data. In summary, point clouds are segmented into several planes $\mathbb{\varPi} = \{\pi_i | i = 1,...,N\} $. Initially, an arbitrary point $p_0$ and its nearest neighbor $p_1$ form an initial set $\mathbb{P}_i$ representing the plane $\pi_i$. The set $\mathbb{P}_i$ then grows by finding its nearest neighbors $p_j$ and adding $p_j$ to $\mathbb{P}_i$ if $p_j$ belongs to $\pi_i$ (plane test), and the growing stops when no more points can be added. The process then continues with new seed points until all points are either assigned to one of the plane sets or considered noise.
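The following sketch shows the basic grow-and-test loop of such a region-growing segmentation; the plane test uses a point-to-plane distance with an SVD plane fit, while the covariance and hole-polygon outputs of~\cite{poppinga2008fast} are omitted, and the parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def fit_plane(pts):
    # Least-squares plane: unit normal n and offset d with n.x = d.
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return normal, float(centroid @ normal)

def grow_planes(points, dist=0.02, k=8, min_size=50):
    # Segment an (N, 3) point cloud into plane sets by region growing.
    tree = cKDTree(points)
    unassigned = set(range(len(points)))
    planes = []
    while unassigned:
        region = [unassigned.pop()]        # arbitrary seed point p_0
        frontier = list(region)
        normal, d = None, 0.0
        while frontier:
            _, neighbors = tree.query(points[frontier.pop()], k=k)
            for j in neighbors:
                if j not in unassigned:
                    continue
                # Plane test: p_j must lie close to the current fit.
                if normal is None or abs(points[j] @ normal - d) < dist:
                    unassigned.discard(j)
                    region.append(j)
                    frontier.append(j)
            if len(region) >= 3:           # refit as the region grows
                normal, d = fit_plane(points[region])
        if len(region) >= min_size:
            planes.append(region)          # small regions count as noise
    return planes
\end{verbatim}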
\subsection{Plane segmentation from dense depth maps} \label{exp: dense maps} In this first experiment, we investigate how the plane extraction and registration algorithms perform with different dense point cloud generators. Table~\ref{tab:depth_map_planes} shows the experiment results. We use as a baseline the simulated RGB-D camera available in the Gazebo simulation engine, which provides ground-truth depth/disparity maps. To measure the accuracy of the stereo disparities (second column), we follow the same principle as the 2015 KITTI stereo benchmark~\cite{Menze2015_sceneflow}: all disparity differences greater than 3 pixels or $5\%$ of the ground truth are considered erroneous. The coverage score (third column) counts how many image pixels have a valid associated disparity value; textureless regions reduce this value. Furthermore, we also count the number of extracted planes and the holes within them (fourth and fifth columns) using the method from Sec.~\ref{sec: plane extraction}. Finally, the last column of Table~\ref{tab:depth_map_planes} shows the orientation error computed from the plane registration, Sec.~\ref{sec: plane-based method}. \begin{table}[!b] \footnotesize \centering \captionsetup{justification=centering} \caption{Dense map, plane extraction and orientation measures on simulated stereo sequences} \begin{tabularx}{\linewidth}{@{}XYYzzY@{}} \toprule \textbf{Method} & \textbf{Accuracy} & \textbf{Coverage} & \textbf{Planes} & \textbf{Holes} & \textbf{Error} $[^\circ]$ \\ \midrule RGB-D & 1.0 & 1.0 & 1620 & 456 & $11.7\pm3.3$ \\ ELAS & 0.781 & 0.579 & 2708 & 713 & $16.2\pm7.3$ \\ Dispnet & 0.713 & 0.943 & 1987 & 123 & $19.4\pm8.5$ \\ ELAS+Filter & 0.854 & 0.468 & 2061 & 204 & $12.1\pm6.4$ \\ Dispnet+Filter & 0.798 & 0.833 & 1254 & 18 & $9.3\pm2.1$ \\ \bottomrule \end{tabularx} \label{tab:depth_map_planes} \end{table} It is important to note that the tests are performed one more time using the probabilistic map generated from the \emph{first stage} of our methodology (Fig.~\ref{fig:overview_states_localization}(c)) as a filter. We draw the following conclusions from this experiment: \begin{enumerate} \item[1)] ELAS depth maps have more accurate 3D information at the cost of incomplete maps, which produce a higher number of planes and holes due to the inability to connect regions corresponding to the same planes. \item[2)] Since these redundant planes are still accurate representations of the space, they produce better orientation estimates than the complete but inaccurate Dispnet maps. \item[3)] Filtering point clouds with the probabilistic map boosts the accuracy and reduces the coverage of both methods, which validates the efficiency of our two-stage navigation scheme. \item[4)] The Dispnet$+$Filter orientation accuracy is even greater than the one based on the simulated RGB-D camera. From our observations, Dispnet$+$Filter generates fewer planes than the RGB-D depth maps; the very high accuracy of the RGB-D data produces planes even for small objects, such as the panel's valves, which add ambiguity. \item[5)] Thus, highly accurate point clouds (overfitting) negatively affect plane registration, i.e., the likelihood of incorrectly registering nearby small planes increases. \item[6)] Based on the 367 analyzed image frames, the mean number of planes generated with Dispnet$+$Filter per frame is $3$ or $4$. \end{enumerate} \section{Experimental Results} \label{sec:experiments} The first two experiments are performed in simulation to analyze the algorithmic behavior of our methods. The simulator's ambient lighting parameters are adjusted to match the conditions from the sea trials. The simulated ROV navigates around the OGP panel while keeping a distance of $\approx$\SI{1.5}{m}, which was found to be optimal for our stereo camera baseline of \SI{30}{cm}; a constant $z$-axis value (depth) is also kept. First, we evaluate the impact of the dense point cloud generators, ELAS and Dispnet, on the plane extraction, registration and orientation computation (Sec.~\ref{sec: plane extraction},~\ref{sec: plane-based method}). Furthermore, we study how the filter based on the probabilistic map generated from the \emph{first stage} of our navigation scheme improves performance.
The second experiment assesses the accuracy of the VO approaches, i.e., our plane registration and feature tracking (ORB-SLAM2), with different types of imagery. The last experiment tests our complete pipeline with real-world data from the DexROV field trials in the sea near Marseille, France (see Fig.~\ref{fig:dexrov_in_action}). The cameras used are Point Grey Grasshopper2 cameras with a resolution of $688 \times 516$ pixels at a \SI{10}{Hz} frame rate; both are in underwater housings with flat panels, which allows for image rectification using the Pinax model~\cite{Luczynski2017_pinax}. \section{Conclusion} \label{sec:conclusion} Underwater operations are harsh due to the dynamic environment and the limited access to the system. However, the commercial demand to develop these technologies increases every year. One of the many challenges to tackle, and commonly the first in the work pipeline, is the achievement of robust, reliable and precise localization. For this reason, we investigate the use of visual odometry in structured underwater scenarios, namely a plane-based method adapted for underwater use with stereo processing and a standard feature-based method. Furthermore, an image quality assessment is introduced that enables the decision to exclude computationally expensive visual processing when it is likely to lead to results with high uncertainty. The approach is validated in simulation and, importantly, also in challenging field trials. \section{Related Work} \label{sec:related work} A great number of theoretical approaches to localization filters for marine robotics have been proposed in the literature. In recent years, this also includes increasing efforts to address practical issues such as multi-rate sampling, sensor glitches and dynamic environmental conditions. In \cite{Paull2014_uwnavigation}, a review of the state of the art in underwater localization is presented and classified into three main classes: inertial/dead reckoning, acoustic and geophysical. The surveyed methods show a clear shift from technologies like USBL/LBL positioning systems~\cite{Morgado2013_usbl} towards two research areas: first, dynamic multi-agent systems, which include a surface vehicle that complements the underwater vehicle's position with GPS data~\cite{Campos2016_multivehicle}; and second, the integration of visual navigation techniques, i.e., visual odometry~\cite{Sukvichai2016_auvvo} and SLAM~\cite{Fallon2013_sonarslam}, into marine systems. We also integrate inertial data from DVL and IMU with vision-based techniques, using standard 2D features and, in addition, 3D plane registration. The work in~\cite{Proenca2017_rgbdodometry} shows that the combination of standard visual features with geometric visual primitives increases odometry robustness in low-texture regions, which are highly frequent in underwater scenes. Three methods are commonly used for plane primitive extraction: RANSAC, the Hough transform and Region Growing (RG). State-of-the-art methods~\cite{poppinga2008fast,Feng2014_PEAC,Proenca2018_CAPE} often use RG because it exploits the connectivity information of the 3D points and thus achieves more consistent results in the presence of noise. These methods are better suited for our application since the input point cloud for the plane extraction algorithm is not directly generated from an RGB-D camera but from a stereo image processing pipeline. We compare some of these stereo pipelines to investigate their impact on the overall localization accuracy (see Sec.~\ref{exp: dense maps}).
Finally, to test the complete framework, we used the \emph{continuous system integration and validation} (CSI) architecture proposed in our previous work~\cite{Fromm2017}. With this architecture, parts of the development stack can be synchronized with real-world data from field trials to close the discrepancy between simulation and the real world; this process is inspired by the \emph{simulation in the loop} (SIL) methodology~\cite{Mueller2018_DexROVSIL}. Based on this, we first compute the accuracy of our approach in an optimized simulation environment reflecting light conditions similar to those observed in underwater trials. Then, its effectiveness is validated on field trial data featuring real-world environmental conditions. \subsection{Knowledge-enabled localization} \label{sec: knowledge-enable localization} Underwater missions are cost-intensive and high-risk; thus, prior knowledge about the mission reduces the risk of failures and increases safety. Especially in visual inspection or manipulation tasks on man-made structures, prior knowledge can be exploited. Therefore, we built a \emph{knowledge base} that contains properties of task-related objects. Along with offline information, like CAD models and kinematic descriptions of the robot and the OGP panel, the knowledge base is updated with current information gathered during the execution of a task, e.g., panel valve poses. \subsubsection{Panel Pose Estimation} \label{sec:method:panel_det} The panel pose estimation is the basis for projecting the panel model and its kinematic properties into the world model. This further enables reliable task benchmarking in simulation and real operations, i.e., manipulation of valves and handles. Our approach incorporates offline knowledge such as the panel CAD model and visual markers placed at predefined locations, see Fig.~\ref{fig:overview_states_localization}(a). Based on this augmentation of the panel with markers, we exploit the panel as a fixed landmark and infer the robot pose whenever a visual marker is in the camera view, as described in Sec.~\ref{sec:marker localization}. The panel pose in the odometry frame \trfFr{P}{O} can be reliably estimated using the detected marker pose w.r.t.~the camera frame \trfFr{M}{C}, the camera pose in the robot frame \trfFr{C}{R}, the panel pose in the marker frame \trfFr{P}{M}, and the current robot pose in the odometry frame \trfFr{R}{O}, see Fig.~\ref{fig:overview_states_localization}(e): \begin{equation} \trfFr{P}{O} = \trfFr{R}{O} \trfFr{C}{R} \trfFr{M}{C} \trfFr{P}{M} \end{equation} When $n$ different markers are consistently detected during $k$ image frames $I$, $n$ pose estimates $\trfFr{P}{O}$ are extracted. These are used to compute the mean position and orientation, the latter determined by \emph{spherical linear interpolation} (Slerp). \subsubsection{Visual Landmark-Based Odometry} \label{sec:marker localization} Once the panel pose has been estimated and fixed, the robot pose can be inferred every time there is a visual marker observation, and it is used as an input modality of the Extended Kalman Filter (EKF). Fig.~\ref{fig:overview_states_localization}(e) shows a sample pose estimate of a visual marker; note that the panel is only partially observed, but the marker is used to infer the panel pose through the chain of space transformations.
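As a small illustration, this transform chain can be composed directly as matrix products, assuming all poses are expressed as $4\times4$ homogeneous transforms; the function names are ours.
\begin{verbatim}
import numpy as np

def panel_in_odom(T_R_O, T_C_R, T_M_C, T_P_M):
    # Panel pose in the odometry frame, as in the equation above.
    return T_R_O @ T_C_R @ T_M_C @ T_P_M

def robot_in_odom(T_P_O, T_M_P, T_C_M, T_R_C):
    # Inverse use of the chain: with the panel pose fixed, a marker
    # observation yields the robot pose (see the following equation).
    return T_P_O @ T_M_P @ T_C_M @ T_R_C
\end{verbatim}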
Since the panel pose is fixed, the robot pose \trfFr{R}{O} can be estimated as follows: \begin{equation} \trfFr{R}{O} = \trfFr{P}{O} \trfFr{M}{P} \trfFr{C}{M} \trfFr{R}{C} \end{equation} where \trfFr{P}{O} is the panel pose in the odometry frame, \trfFr{M}{P} is one marker pose in the panel frame, \trfFr{C}{M} is the camera pose w.r.t.\ the marker, and \trfFr{R}{C} is the fixed robot pose w.r.t.\ the camera. Furthermore, the mean robot position $\posFrMean{R}{O}$ and orientation $\quatFrMean{R}{O}$ w.r.t.\ the odometry frame are estimated from multiple marker detections using \emph{Slerp}. In addition, a covariance matrix \covFr{R}{O} for the robot pose is computed: \begin{equation} \covFr{R}{O}=\mathrm{diag}(\sigma^2_{\pos_{x}},\sigma^2_{\pos_{y}},\sigma^2_{\pos_{z}},\sigma^2_{\quat{q}_{\phi}},\sigma^2_{\quat{q}_{\theta}},\sigma^2_{\quat{q}_{\psi}}). \end{equation} The full robot pose estimate $\trfFr{R}{O} = \langle \posFrMean{R}{O},\quatFrMean{R}{O} \rangle$, along with the respective covariance matrix \covFr{R}{O}, is then taken as an input for the localization filter in the final setup. Alternatively, it can be used as a ground-truth value to optimize parameters, i.e., sensor biases and associated covariances, since it is difficult to acquire absolute global ground truth underwater. For more details refer to our work in~\cite{Mueller2018_DexROVSIL}. \subsection{Field trials localization} \label{exp:field trials localization} In the following experiments, we use the pose estimates from the visual landmarks (markers) $\ensuremath{\trfFr{\frameRobot}{\frameOdom}{_{M}}}$ as ground truth, since they are quite accurate and the global robot pose can be obtained from them (Sec.~\ref{sec:marker localization}). We perform three different tests $\mathcal{T}_{Li}$, explained in Table~\ref{table:localization tests description}; the corresponding results are shown in Table~\ref{table:localization_measures} and Fig.~\ref{fig:results_localization_real_trajectory}. In these tests we compute the measure $m_{M,F}(\mathcal{T}_{Li})=\robotPoseError{M}{F}$ as defined in Eq.~\ref{eq:pose_error_measure}, plus the \emph{lag-one autocorrelation} $m_{A}(\mathcal{T}_{Li})$. \begin{table}[!b] \centering \footnotesize \captionsetup{justification=centering} \caption{Description of localization tests $\mathcal{T}_{Li}$} \label{table:localization tests description} \begin{adjustbox}{max width=.95\linewidth} \begin{tabularx}{\linewidth}{lX} \toprule \textbf{Test} & \textbf{Description} \\ \midrule $\mathcal{T}_{L1}$ & EKF with real-world data, using navigation sensors and visual markers.\\ $\mathcal{T}_{L2}$ & $\mathcal{T}_{L1}$ plus visual odometry from plane registration (Sec.~\ref{sec: plane-based method}) and ORB-SLAM2 feature tracking (Sec.~\ref{sec: feature-based method}); selectively used based on IQA (Sec.~\ref{sec: adaptive_navigation_scheme}).
\\ $\mathcal{T}_{L3}$ & $\mathcal{T}_{L2}$ minus the odometry from visual markers, i.e., only navigation sensors and image-quality-based VO inputs.\\ \bottomrule \end{tabularx} \end{adjustbox} \end{table} \begin{table}[!b] \centering \footnotesize \captionsetup{justification=centering} \caption{Tests $\mathcal{T}_{Li}$ measure results for position/orientation error and trajectory autocorrelation} \begin{tabularx}{\linewidth}{@{}XYYY@{}} \toprule ~ & $\mathcal{T}_{L1}$ & $\mathcal{T}_{L2}$ & $\mathcal{T}_{L3}$ \\ \midrule $\bar{m}_{M,F}(\mathcal{T}_{Li}\langle \bar{\mathbf{p}} \rangle )\mathrm{[m]}$ & 0.65 $\pm$ 0.58 & 0.31 $\pm$ 0.11 & 0.85 $\pm$ 0.22 \\ $\bar{m}_{M,F}(\mathcal{T}_{Li}\langle \bar{\mathbf{q}} \rangle )\mathrm{[deg]}$ & 14.65 $\pm$ 8.42 & 7.21 $\pm$ 2.10 & 11.89 $\pm$ 4.55 \\ $m_{A}(\mathcal{T}_{Li})$ & 0.88 & 0.94 & 0.91 \\ \bottomrule \end{tabularx} \label{table:localization_measures} \end{table} The use of visual landmarks has been shown to substantially improve the accuracy of the localization filter~\cite{Mueller2018_DexROVSIL} compared to using only navigation sensors. With data from the DexROV sea trials, we first evaluate this method in $\mathcal{T}_{L1}$ and use it as a reference. Fig.~\ref{fig:results_localization_real_markers} shows that the majority of the largest errors occur when the robot is close to the panel's corners, because markers are observed from highly skewed perspectives, or when markers are not in view for a long period of time, e.g., at \emph{reference point 1} in Fig.~\ref{fig:results_localization_real_markers}. Of course, there can be other sources of error, like spurious DVL measurements, that affect the overall accuracy. In test $\mathcal{T}_{L2}$, we use our navigation scheme based on IQA. Table~\ref{table:localization_measures} and Fig.~\ref{fig:results_localization_real_markers_planes} show a large reduction in the position/orientation error $(\approx 50\%)$ and an increase in the autocorrelation measure, i.e., smoother trajectories. Moreover, errors at the panel's corners decrease, e.g., at \emph{reference point 1}. However, the largest errors still happen at these locations; after all, fewer features are observable there, and the cylindrical corners (see Fig.~\ref{fig:rov_and_panel}) are imperfectly modeled by planes. \input{sections/field_trials_localization_plot} Finally, in test $\mathcal{T}_{L3}$, we analyze the performance of our method without the use of visual landmarks. The objective is to strive towards a more general localization filter that can function without fiducial landmarks. Table~\ref{table:localization_measures} shows that although the position and orientation errors increase, they are not far from the $\mathcal{T}_{L1}$ results. Furthermore, the error variance is significantly lower; in Fig.~\ref{fig:results_localization_real_plane} the circles representing the pose errors have a more uniform size. This is more suitable for control algorithms, i.e., waypoint navigation and manipulation, which need a certain response time to converge to the desired states. Highly variable measurements may cause controllers to not converge. The same advantage holds for the high autocorrelation values of $\mathcal{T}_{L2}$ and $\mathcal{T}_{L3}$. In contrast, the $\mathcal{T}_{L1}$ variances are more than $50\%$ of the mean error.
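For reproducibility, the two measures used in these tests can be computed as in the following numpy sketch; we assume unit quaternions for the orientations and use a normalized variant of the lag-one autocorrelation, applied per scalar component of the filtered trajectory.
\begin{verbatim}
import numpy as np

def pose_error(p_a, q_a, p_b, q_b):
    # Euclidean distance between positions and minimal geodesic
    # distance between unit quaternions (pose error measure above).
    pos_err = np.linalg.norm(p_a - p_b)
    # |<q_a, q_b>| handles the double cover (q and -q coincide).
    dot = np.clip(abs(np.dot(q_a, q_b)), -1.0, 1.0)
    return pos_err, np.degrees(2.0 * np.arccos(dot))

def lag_one_autocorrelation(x):
    # m_A on a scalar trajectory signal: values near 1 indicate a
    # smooth trajectory without sudden jumps.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.sum(x[1:] * x[:-1]) / np.sum(x * x))
\end{verbatim}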
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Covering algorithms}\label{sec:algorithm} We can now identify feasible executions through a block simply by checking if the reachability variable associated with this block evaluates to $\true$ in a satisfying valuation of the reachability verification condition. Further, due to the single static assignment performed before generating the formula, we can identify the initial values for each variable that are needed to force the execution of this path. That is, a valuation $\val$ satisfying $\vc$ can serve as a test case for a block associated with a reachability variable $\rvar$ if $\val(\rvar)=\true$. \begin{definition}[Test Case] Given a reachability verification condition $\vc$ of a program, let $B$ be a block in this program, and $\rvar$ be the reachability variable associated with this block. A \emph{test case} for the block $B$ is a valuation $\val$ of $\vc$ such that $\val\models\vc$ and $\val(\rvar)$ is $\true$. \end{definition} In the following we present two algorithms to compute test cases for loop-free programs. The first algorithm computes a set of test cases that covers all feasible control-flow paths; the second one computes a more compact set that only covers all feasible statements. \paragraph{Path Coverage Algorithm.} To efficiently generate a set of test cases that covers all feasible control-flow paths, we need an algorithm that checks which combinations of reachability variables in a reachability verification condition can be set to $\true$. That is, after finding one satisfying valuation for a reachability verification condition, this algorithm has to modify the next query in a way that ensures that the same valuation is not computed again. This procedure has to be repeated until no further satisfying valuation can be found. \input{alg_maxsat1} Algorithm $\algoA$, given in Algorithm~\ref{alg:maxsat1}, uses \emph{blocking clauses} to guarantee that every valuation is returned only once. The blocking clause is the negated conjunction of all assignments to reachability variables in a valuation $\val$. The algorithm uses the oracle function \texttt{checksat} (see lines~4 and~16), which has to be provided by a theorem prover. The function takes a first-order logic formula as input and returns a satisfying assignment for this formula in the form of a set of variable--value pairs, one for each free variable in that formula. If the formula is not satisfiable, \texttt{checksat} returns the empty set. The algorithm uses a local copy $\psi$ of the reachability verification condition $\vc$. As long as \texttt{checksat} is able to compute a satisfying valuation $\val$ for $\psi$, the algorithm adds this valuation to the set of test cases $\tcs$ (line~6), and then builds a blocking clause consisting of the disjunction of the negated reachability variables which are assigned to $\true$ in $\val$ (line~8). The formula $\psi$ is conjoined with this blocking clause (line~13), and the algorithm starts over by checking if there is a satisfying valuation for the new formula (line~14). The algorithm terminates when $\psi$ becomes unsatisfiable. \begin{theorem}[Correctness of \algoA]\label{thm:alg1} Given a loop-free and passive program $P$ with verification condition $\vc$, let $\rvars$ be the set of reachability variables used in $\vc$. Algorithm $\algoA$, started with the arguments $\vc$ and $\rvars$, terminates and returns a set $\tcs$. For any feasible and complete path $\pi$ there is a test case in $\tcs$ for this path.
\end{theorem} \begin{proof} There are only finitely many solutions for the variables $\rvars$ that satisfy the formula $\vc$. Due to the introduction of the blocking clauses, every solution will be found only once. Hence, after finitely many iterations the formula $\psi$ must be unsatisfiable and the algorithm terminates. If $\pi$ is a feasible and complete path, then by Theorem~\ref{thm:vc} there is a valuation $\val$ with $\val(\rvar)=\true$ for every block visited by $\pi$. Such a valuation must be found by the algorithm before a corresponding blocking clause is inserted into $\psi$. The corresponding test case is then inserted into $\tcs$ and is a test case for $\pi$. \end{proof} Note that $\algoA$ is complete for loop-free programs. For arbitrary programs that have been transformed using the steps from Section~\ref{sec:loopunwinding}, the algorithm still produces only feasible test cases due to the soundness of the abstraction. The advantage of using blocking clauses is that $\algoA$ does not restrict the oracle \texttt{checksat} in how it should explore the feasible paths encoded in the reachability verification condition. The drawback of $\algoA$ is that, for each explored path, a blocking clause is added to the formula; thus, the increasing size of the formula might slow down the \texttt{checksat} queries if many paths are explored. This limits the scalability of our algorithm. In Section~\ref{sec:experiments} we evaluate how the performance of $\algoA$ changes with an increasing size of the input program. \paragraph{Statement Coverage Algorithm.} In some cases one might only be interested in covering all feasible statements. To avoid exercising all feasible paths, we present a second algorithm, $\algoB$, in Algorithm~\ref{alg:maxsat2}, which computes a compact set of test cases that covers all feasible statements. Instead of blocking clauses, this algorithm uses \emph{enabling clauses} to prevent the oracle from computing the same valuation twice. An enabling clause is the disjunction of all reachability variables that have not been assigned to $\true$ by a previous satisfying valuation of the reachability verification condition. \input{alg_maxsat2} The algorithm takes as input a reachability verification condition $\vc$ and the set of all reachability variables $\rvars$ used in this formula. Like $\algoA$, $\algoB$ uses the oracle function \texttt{checksat}. First, it checks if there exists any satisfying valuation $\val$ for $\vc$. If so, the algorithm adds $\val$ to the set of test cases (line~5). Then, the algorithm removes from the set $\rvars$ all reachability variables which are assigned to $\true$ in $\val$ (line~8). While removing these reachability variables, the algorithm also has to check whether a removed variable corresponds to a block created during loop unwinding. In that case, all clones of this block are removed from $\rvars$ as well, using the helper function $\removedouble{}$ (line~9). After that, the algorithm computes a new enabling clause $\formula$ that equals the disjunction of the remaining reachability variables in $\rvars$ (line~13) and starts over by checking if $\vc$ in conjunction with $\formula$ is satisfiable (line~16). That is, the conjunction $\vc\wedge\formula$ restricts the feasible executions in $\vc$ to those where at least one reachability variable in $\rvars$ is set to $\true$.
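To make the two covering loops concrete, the following sketch implements both algorithms on top of the Z3 SMT solver, which plays the role of the \texttt{checksat} oracle; the construction of $\vc$ and the clone bookkeeping of $\removedouble{}$ are elided, and all names are ours.
\begin{verbatim}
from z3 import Solver, Or, Not, sat, is_true

def cover_paths(vc, rvars):
    # Algorithm A: block each satisfying valuation so that every
    # feasible control-flow path is enumerated exactly once.
    solver, test_cases = Solver(), []
    solver.add(vc)
    while solver.check() == sat:
        model = solver.model()
        test_cases.append(model)
        reached = [r for r in rvars
                   if is_true(model.eval(r, model_completion=True))]
        # Blocking clause; `reached` is never empty, since the entry
        # block's variable is true in every satisfying valuation.
        solver.add(Or([Not(r) for r in reached]))
    return test_cases

def cover_statements(vc, rvars):
    # Algorithm B: an enabling clause forces each new valuation to
    # pass at least one block that has not been covered so far.
    solver, test_cases = Solver(), []
    solver.add(vc)
    remaining = list(rvars)
    while remaining:
        solver.push()                  # enabling clause is temporary
        solver.add(Or(remaining))
        if solver.check() != sat:
            solver.pop()
            break                      # leftovers are infeasible code
        model = solver.model()
        test_cases.append(model)
        remaining = [r for r in remaining
                     if not is_true(model.eval(r,
                                    model_completion=True))]
        solver.pop()
    return test_cases
\end{verbatim}
Keeping the enabling clause on the solver stack via \texttt{push}/\texttt{pop} mirrors the observation from Section~\ref{sec:introduction} that the reachability verification condition can make good use of the theorem prover stack.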
Note that, if the set $\rvars$ is empty, the enabling clause $\formula$ becomes $\false$, and thus the conjunction with $\vc$ becomes unsatisfiable. That is, the algorithm terminates once all blocks have been visited, or when there is no feasible execution passing through the remaining blocks. \begin{theorem}[Correctness of \algoB]\label{thm:alg2} Given a loop-free and passive program $P$ with reachability verification condition $\vc$, let $\rvars$ be the set of reachability variables used in $\vc$. Algorithm $\algoB$, started with the arguments $\vc$ and $\rvars$, terminates and returns a set $\tcs$. For any block in the program there exists a feasible path $\pi$ passing this block if and only if there exists a test case $\val\in\tcs$ that passes this block. \end{theorem} \begin{proof} In every iteration of the loop at least one variable of the set $\rvars$ will be removed. This is because the formula $\phi$ only allows valuations such that for at least one $\rvar\in\rvars$ the valuation $\val(\rvar)$ is $\true$. Since $\rvars$ contains only finitely many variables, the algorithm must terminate. If $\pi$ is a feasible path visiting the block associated with the variable $\rvar$, then there is a valuation $\val$ that satisfies $\vc$ with $\val(\rvar) = \true$. Such a valuation must eventually be found, since $\vc\land\phi$ is only unsatisfiable if $\rvar \notin\rvars$. The valuation is added to the set of test cases $\tcs$. \end{proof} The benefit of $\algoB$ compared to $\algoA$ is that it will produce at most $|\rvars|$ test cases, as each iteration of the loop generates only one test case and removes at least one element from $\rvars$. That is, the resulting set $\tcs$ can be used more efficiently if only statement coverage is needed. However, the enabling clause might cause the theorem prover which realizes \texttt{checksat} to take detours or throw away information which could be reused. It is not obvious which of the two algorithms will perform better in terms of computation time. Therefore, in the following, we carry out some experiments to evaluate the performance of both algorithms. Note that, like $\algoA$, $\algoB$ is complete for loop-free programs and sound for arbitrary programs. That is, any block that is not covered by these algorithms is unreachable code (in the loop-free program). \section{Conclusion}\label{sec:conclusion} We have presented two algorithms to compute test cases that cover those statements and control-flow paths, respectively, which have feasible executions within a certain number of loop unwindings. The algorithms compute a set of test cases in a fully automatic way without requiring any user-provided information about the environment of the program. The algorithms guarantee that these executions also exist in the original program (with loops). We have further presented a fully automatic way to compute procedure summaries, which gives our algorithm the potential to scale even to larger programs. If no procedure summaries are used, the presented algorithms cover \emph{all} statements/paths with feasible executions within the selected number of unwindings. That is, besides returning test cases for the feasible statements/paths, one major result is that all statements that are not covered cannot be covered by \emph{any} execution and thus are dead code. The experiments show that the preliminary implementation is already able to outperform existing approaches that perform similar tasks.
The experiments also show that, with the oracle used, computing a feasible path cover is almost as efficient as computing a feasible statement cover, even for procedures of up to 300 lines of code. Due to the early stage of development there are still some limitations which prevent us from reporting on the practical use of the proposed algorithms. So far, we do not have a proper translation from a high-level programming language into our intermediate format. Current translations into unstructured intermediate verification languages such as Boogie~\cite{Barnett06boogie} are built to preserve all failing executions of a program for the purpose of proving partial correctness. However, these translations add feasible executions to the program during translation, which breaks our notion of soundness. Further, our language does not support assertions. Runtime errors are guarded using conditional choice to give the test case generation the possibility to generate test cases that provoke runtime errors. A reasonable translation which only under-approximates feasible executions is still part of our future research. Another problem is our oracle. Theorem provers are limited in their ability to find satisfying valuations for verification conditions. If the program contains, e.g., non-linear arithmetic, a theorem prover will not be able to find a valuation in every case. This does not affect the soundness of our approach, but it will prevent the algorithm from covering all feasible paths (i.e., the approach is no longer complete). To make these algorithms applicable to real-world programs, a combination with dynamic analysis might be required to identify feasible executions for those parts where the code is not available, or where the theorem prover is inconclusive. \paragraph{Future Work.} Our future work encompasses the development of a proper translation from Java into our unstructured language. This step is essential to evaluate the practical use of the proposed method and to extend its use to other applications. One problem when analyzing real programs is posed by intellectual property boundaries and the availability of the code of third-party libraries. We plan to develop a combination of this approach with random testing (e.g., \cite{DBLP:conf/oopsla/PachecoE07}), where random testing is used to compute procedure summaries for library procedures whose code we cannot access. The proposed procedure summaries have to be recomputed if the available summaries for a procedure do not represent any feasible execution in the current calling context. Therefore, we plan to develop a refinement loop which stores summaries more efficiently. Another application would be to change the reachability variables in a way that they are only $\true$ if an assertion inside a block fails, rather than if the block is reached. This would allow us to identify \emph{all} paths that violate assertions in the loop-free program. Encoding failing assertions this way can be seen as an extension of the work of Leino et al.\ in \cite{Leino:2005:GET:1065095.1065103}. In the theorem prover, further optimizations could be made to improve the performance of $\algoB$. Implementing a strategy to find new valuations that, e.g., change as many reachability variables as possible compared to the last valuation could lead to a much faster computation of a feasible statement cover. In the future, we plan to implement a variation of the algorithm $\algoVSTTE$~\cite{vstte12} inside the theorem prover.
We believe that the presented method can be a powerful extension to dynamic program analysis by providing information about which parts of a program can be executed within the given unwinding, which valuation is needed to execute them, and which parts can never be executed. The major benefit of this kind of program analysis is that it is \emph{user-friendly}, in the sense that it does not require any input besides the program and that any output refers to a real execution of the program. That is, it can be used without any extra work and without any expert knowledge. However, more work is required to find practical evidence for the usefulness of the presented ideas. \section{Experiments}\label{sec:experiments} We have implemented a prototype of the presented algorithms. As this prototype is still in a very early stage of development, the goal of these experiments is only to evaluate the computation time of the queries needed to cover all feasible statements in comparison to similar approaches. Other experiments, such as evaluating the applicability to real-world software, remain part of future work. We compare the algorithms from Section~\ref{sec:algorithm} with two other approaches that compute a covering set of feasible executions: a worst-case optimal approach $\algoFM$ from \cite{Hoenicke:2009:DWP:1693345.1693374} and a query-optimal approach $\algoVSTTE$ from \cite{vstte12}. The worst-case optimal approach checks if there exists a feasible control-flow path passing each \emph{minimal} block. A block is minimal if there exists no block that is passed by a strict subset of the executions passing this block~\cite{doomedjournal,Bertolino:1993:UEA:156928.156932}. The implementation uses helper variables to build queries that ask the theorem prover for the existence of a path passing through one particular block. The query-optimal approach~\cite{vstte12} uses helper variables to count how many minimal elements occur on one feasible execution and then applies a greedy strategy to cover as many minimal elements as possible with one valuation of the formula. Note that the purpose of $\algoFM$ and $\algoVSTTE$ is slightly different from the purpose of the algorithms in this paper. Both $\algoFM$ and $\algoVSTTE$ use a loop-free abstraction of the input program that over-approximates the set of feasible executions of the original program (see \cite{doomedjournal}). On this abstraction they prove the existence of blocks which cannot be part of any terminating execution. To be comparable, we use the same abstraction for all algorithms. That is, we use $\algoFM$ and $\algoVSTTE$ to check for the existence of statements that do not occur on feasible executions. Since both algorithms are complete for loop-free programs, they return the same result as $\algoB$. Although the results and purposes of the algorithms differ slightly, all of them use a theorem prover as an oracle to identify executions that cover all feasible statements in a program. For now, our implementation works only for the simple language from Section~\ref{sec:preliminaries}. An implementation for a high-level language is not yet available. Hence, the purpose of the experiments is only to measure the efficiency of the queries. Therefore, we decided to use randomly generated programs as input data. Generated programs have several benefits.
We can control the size and shape of the programs, we can generate an arbitrary number of different programs that share some property (e.g., the number of control-flow diamonds), and such programs often have many infeasible control-flow paths. We are aware that randomly generated input is a controversial issue when evaluating research results, but we believe that, as we want to evaluate the performance of the algorithms and not their detection rate or practical use, they are a good choice. A more technical discussion of this issue follows in the threats to validity. \paragraph{Experimental Setup.} As experimental data, we use 80 randomly generated unstructured programs. Each program has between 2 and 9 control-flow diamond shapes, and each diamond shape has 2 levels of nested if-then-else blocks (i.e., there are 4 distinct paths through each diamond). A block has 3 statements, which are either assignments of (linear arithmetic) expressions to unbounded integer variables or assumptions guarding the conditional choice. Each program has between 90 and 350 lines of code and modifies between 10 and 20 different variables. For each number of control-flow diamonds, we generated 10 different random programs and computed the average run-time of the algorithms. This is necessary to get an estimate of the performance of each algorithm, as the computation time strongly depends on the overall number of feasible executions in the analyzed program. For a fair comparison, we use the theorem prover SMTInterpol\footnote{\url{http://ultimate.informatik.uni-freiburg.de/smtinterpol/}} in all four algorithms. For each algorithm, we record how often the theorem prover is asked to check the satisfiability of a formula, and we record the time it takes until the theorem prover returns with a result. All experiments are carried out on a standard desktop computer with an ample amount of memory. \input{exptable1} \paragraph{Discussion.} Table~\ref{tbl:results} shows the summary of the results for all algorithms after analyzing the 80 benchmark programs. Figure~\ref{fig:time} gives a more detailed view of the computation time per program; the x-axis ranges over the number of control-flow diamonds, from 2 to 9. Figure~\ref{fig:queries} gives a detailed view of the number of queries; as before, the x-axis ranges over the number of control-flow diamonds. The algorithms $\algoA$ and $\algoB$ are clearly faster than $\algoFM$ and $\algoVSTTE$. Overall, $\algoB$ tends to be the fastest one. Figure~\ref{fig:compare} shows the computation time for $\algoA$ and $\algoB$ at a higher resolution. \begin{figure}[h] \centering \epsfig{file=data/c01,width=0.90\linewidth,clip=} \caption{Runtime comparison of the algorithms proposed in Section~\ref{sec:algorithm}. The ticks on the x-axis represent the number of control-flow diamonds in the randomly generated programs. \label{fig:compare}} \captionspace \end{figure} It turns out that the difference between the computation times of $\algoA$ and $\algoB$ tends to grow with the program size. As expected, $\algoB$ works a bit more efficiently, as the size of the formula is always bounded, while $\algoA$ asserts one new term for every counterexample found. However, comparing the number of theorem prover calls, there is a huge difference between $\algoA$ and $\algoB$: while $\algoB$ never exceeds a total of 20 queries per program, the number of queries of $\algoA$ skyrockets already for small programs.
For a program with 10 control-flow diamonds, $\algoA$ uses more than 2000 theorem prover calls, whereas $\algoB$ only needs 10. Still, $\algoB$ is only $0.03$ seconds faster on this example ($<10\%$). \begin{figure}[h] \centering \epsfig{file=data/tall,width=.95\linewidth,clip=} \caption{Computation time for each algorithm. The ticks on the x-axis represent the number of control-flow diamonds in the randomly generated programs. \label{fig:time}} \captionspace \end{figure} \begin{figure}[h] \centering \epsfig{file=data/qall,width=.95\linewidth,clip=} \caption{Number of calls to the theorem prover for each algorithm. The ticks on the x-axis represent the number of control-flow diamonds in the randomly generated programs. \label{fig:queries}} \captionspace \end{figure} These results show that, even though $\algoB$ might be slightly more efficient than $\algoA$, the number of queries is not an important factor for the computation time. In fact, internally, the theorem prover tries to find a new counterexample by changing as few variables as possible, which is very close to the idea of $\algoA$. $\algoB$, which queries if there exists a counterexample through a block that has not been visited so far, will internally perform the same steps as $\algoA$; thus, the performance gain is only rooted in the smaller formulas and the reduced communication between application and prover. However, the results also show that, when using a theorem prover, computing a path cover with $\algoA$ is not significantly more expensive than computing only a statement cover with $\algoB$. The computation times for $\algoFM$ and $\algoVSTTE$ are significantly higher than those of the presented algorithms. For $\algoFM$, some queries, and thus some computation time, could be saved by utilizing the counterexamples to avoid redundant queries. However, the number of queries cannot become better than that of $\algoB$ due to the kind of queries posed. The most significant benefit of $\algoA$ and $\algoB$ over $\algoFM$ is that they do not have to inject helper variables into the program. In fact, $\algoA$ and $\algoB$ also use one variable per block to encode reachability, but this variable is added to the formula and not to the program. Thus, it is not considered during single static assignment, which would otherwise create multiple copies of each variable. For the query-optimal algorithm $\algoVSTTE$, the computation time becomes extremely large on our random programs. This is due to the fact that $\algoVSTTE$ tries to find the best possible counterexample (that is, the one with the most previously uncovered blocks) with each query. Internally, the theorem prover will exercise several counterexamples and discard them until the best one is found. The procedure is similar to the one used in $\algoA$ and $\algoB$: the theorem prover computes a counterexample, ensures that this example cannot be found again, and then starts over. But in contrast to $\algoVSTTE$, our algorithms do not force the theorem prover to find a path that satisfies additional constraints, hence relaxing the problem that has to be solved by the theorem prover. Even though one might find benchmarks where $\algoVSTTE$ is significantly faster than $\algoFM$, the algorithms $\algoA$ and $\algoB$ will always be more efficient since they pose easier (and, hence, faster) queries to the theorem prover. The presented results should not be interpreted as an argument against a query-optimal algorithm.
We rather conclude that the place for such optimizations is inside the theorem prover. Modifying the way the theorem prover finds a new counterexample can lead to tremendous performance improvements. However, such changes have to consider the structure of verification conditions and thus exceed the functionality of a general theorem prover. \paragraph{Threats to validity.} We emphasize that the purpose of the experiments is only to evaluate the performance of $\algoA$ and $\algoB$. These experiments are not suitable for reasoning about the practical use or scalability of the method. We report several internal threats to validity. The experiments only used a very restricted background theory. However, the path reasoning described in this paper prunes the search space for the theorem prover even if we use richer logics including arrays or quantifiers. As shown in our experiments, the algorithms proposed in this paper pose easier problems to a theorem prover. This will not change if we switch to richer logics, since our algorithms only require the theorem prover to reason about feasible paths, while all other algorithms pose additional constraints on such a path. If we use richer logics, we only limit the number of paths. Still, it remains easier to just find a path than to find one that satisfies some additional condition. We have chosen randomly generated programs as input for two reasons. First, we wanted to be able to scale the number of paths and use the most difficult shape of control structure for our techniques; hence, we had to scale the number of diamonds in the control-flow graph. Second, we did not implement a parser for a specific language. Existing translations from high-level languages into unstructured languages are not suitable for our algorithms as they over-approximate the set of infeasible executions to retain soundness w.r.t.\ partial correctness proofs. These translations might both over- and under-approximate the set of feasible executions of a program and thus violate our notion of soundness. However, for the purpose of comparing the performance of the different algorithms, the experiments are still valid. In our experiments we only used SMTInterpol to answer the queries. For the comparison of $\algoA$ with the other algorithms, the choice of the theorem prover can make a significant difference. SMTInterpol tries to find a valuation for a formula by making as few changes as possible to the previous valuation. If a theorem prover chooses a different strategy, in particular $\algoB$ might become much faster. However, we are not aware of any theorem prover that uses this kind of strategy. \section{Extended Weakest-Liberal Precondition}\label{sec:exvc} That is, in a satisfying valuation of the VC, each block variable $B_{\texttt{label\_i}}$ is assigned to $\true$ if, for any pre-state, there exists a terminating execution starting in $\texttt{label\_i}$. However, it is not enough to find one $\true$ evaluation for each $B_{\texttt{label\_i}}$ to cover the entire graph. To efficiently find valuations which cover the whole CFG, we want to have helper variables which are $\true$ if and only if there is a normal terminating execution of that block starting in the initial state of the program. If we have a satisfying valuation in which $B_{\texttt{label\_i}}$ is set to $\true$, we only know that there is a normal terminating execution starting at $B_{\texttt{label\_i}}$. It does not say that this execution can be extended to one starting in the initial block.
Even if we find a valuation where $B_{\texttt{label\_i}}$ is $\true$, the block might still be unreachable. Hence, to avoid encoding all paths simultaneously (and thus adding redundancy again), we need helper variables: we invert the direction of the edges in the CFG and, for each block $$\texttt{label} : S; \textbf{comefrom} (\texttt{label\_1}\ldots\texttt{label\_n}), $$ we generate an auxiliary variable $R_{\texttt{label}}$ and a formula $$ RTerm_\texttt{label} : R_{\texttt{label}} \equiv B_{\texttt{label}} \wedge \big(\bigvee_{1\leq i\leq n} R_{\texttt{label\_i}}\big). $$ Our new verification condition looks as follows: $$ EXVC := (\wedge_{i} RTerm_\texttt{label\_i}) \wedge (\wedge_{i} Term_\texttt{label\_i}) \wedge \neg B_{\texttt{label\_0}} $$ \begin{theorem} Given $VC$ and $EXVC$ as above, a valuation $\val$ satisfies $EXVC$ if and only if $\val$ also satisfies $VC$ and, for each $R_{\texttt{label}}$ with $\val(R_{\texttt{label}})=\true$, there exists a normal terminating execution of the program that passes the block $\texttt{label}$. \end{theorem} Now, the problem of finding a set of valuations of this formula that quickly covers the entire CFG corresponds to the problem of finding $\true$ valuations for all $R_{\texttt{label}}$ as fast as possible. \section{Introduction}\label{sec:introduction} Using static analysis to find feasible executions of a program that pass a particular subset of program statements is an interesting problem. Even though it is in general not decidable, there is an ongoing research effort to develop algorithms and tools that are able to solve this problem for a reasonably large number of cases. Such tools can be used, e.g., to automatically generate test cases that cover large portions of a program's source code and trigger rare behavior, or to identify program fragments for which no suitable test case can be found. The latter case is sometimes referred to as \emph{infeasible code detection}~\cite{vstte12}. Code is considered to be infeasible if no terminating execution can be found for it. Infeasible code can be seen as a superset of unreachable code, as there might be executions reaching a piece of infeasible code which, however, fail later during their execution. In particular, a counterexample for the infeasibility of a piece of code is a terminating execution that executes this code. That is, finding a set of test cases that covers all statements in a program is equivalent to proving the absence of infeasible code. Existing approaches to detect infeasible code do not yet exploit the fact that counterexamples for infeasibility might constitute feasible test cases. In this paper, we discuss a bounded approach towards infeasible code detection that generates test cases covering all statements which have feasible executions within a given (bounded) number of loop unwindings. The interesting aspect of bounded infeasible code detection over existing (unbounded) approaches is that counterexamples for infeasibility are likely to represent actual executions of the program, as compared to the unbounded case, where these counterexamples might be introduced by the necessary over-approximation of the feasible executions. The paper proposes two novel ideas: the concept of the \emph{reachability verification condition}, which is a formula representation of the program that, similar to the weakest-liberal precondition or strongest postcondition, models all feasible executions of a program.
But in contrast to existing concepts, a satisfying assignment of the reachability verification condition can be mapped directly to an execution of the program from source to sink. For example, a valuation of the \emph{wlp} can represent a feasible execution starting from any point in a program, but this does not yet imply that this point is actually reachable from the initial states of the program. Certainly, there are ways to encode the desired property using \emph{wlp} or \emph{sp} by adding helper variables to the program (see, e.g.,~\cite{doomedjournal,vstte12}); however, we claim that the proposed reachability verification condition provides a better formal basis to show the absence of infeasible code, as it can, e.g., make better use of the theorem prover stack, which results in a more efficient and scalable solution. We suggest two algorithms to compute feasible executions of a program based on the reachability verification condition. One uses so-called \emph{blocking clauses} to prevent the theorem prover from exercising the same path twice; the other algorithm uses \emph{enabling clauses} to urge the theorem prover to consider a solution that passes program fragments that have not been accessed before. Both algorithms return a set of feasible executions of the bounded program. Further, both algorithms guarantee that any statement not executed by these test cases is infeasible within the given bounds. We carry out a preliminary evaluation of our algorithms against existing algorithms that detect infeasible code. Based on the reachability verification condition, as a second novelty, we propose a technique to compute procedure summaries for bounded infeasible code detection. As the presented algorithms return a set of feasible executions, we can extract pairs of input and output values for each execution to construct procedure summaries. The summaries are a strict under-approximation of the possible executions of the summarized procedure. Therefore, the computed summaries are sound for showing the presence of feasible executions, but unsound for showing their absence. To overcome this gap, we suggest an on-demand computation of summaries if no feasible execution can be found with the given summary. Within the scope of this paper, we do not evaluate the concept of summaries, as more implementation effort is required until viable results can be presented. In Section~\ref{sec:loopunwinding} we explain how we address the problem of computing the weakest-liberal precondition for general programs. In Section~\ref{sec:vc} we show how a feasible execution that visits certain blocks can be expressed efficiently as a formula, and we introduce the concept of the reachability verification condition. In Section~\ref{sec:algorithm} we present two different algorithms to address the problem of generating test cases with optimal coverage. In Section~\ref{sec:procsum} we show how procedure summaries can be computed with our test case generation algorithm. We present an experimental evaluation of our algorithms in Section~\ref{sec:experiments}. \section{Program Transformation}\label{sec:loopunwinding} As the weakest-liberal precondition cannot be computed for programs with loops in the general case, an abstraction is needed. Depending on the purpose of the analysis, different information about the possible executions of the program has to be preserved to retain \emph{soundness}.
E.g., when proving partial correctness~\cite{Barnett06boogie,barnett2005} of a program, the set of all executions that fail has to be preserved (or might be over-approximated), while terminating or blocking executions might be omitted or added. For our purpose of identifying a set of executions containing all feasible statements, such an abstraction, which over-approximates the executions of a program, is not suitable, as we might report executions which do not exist in the original program. Instead, we need a loop unwinding which does not add any (feasible) executions.
\paragraph{Loop Unwinding.} Our loop unwinding technique is sketched in Figure~\ref{fig:unwinding}. As we assume (w.l.o.g.) that the control-flow graph of our input program is reducible, we can identify one unique entry point for each loop, the \emph{loop header} $B_h$, and a \emph{loop body} $B$. The loop header contains only a transition to the loop body and the \emph{loop exit} $B_e$.
\begin{figure}
\centering
\begin{minipage}{.73\linewidth}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=2cm, semithick]
\tikzstyle{every state}=[fill=none,draw,text=black]
\tikzstyle{lbody}=[line width=2mm,join=round,fill=black!10]
\node[state] (A) {$B_h$};
\node[lbody] (B) [above right of=A] {$B$};
\node[state] (C) [below right of=B] {$B_e$};
\path (A) edge [bend left] node {} (B) edge node {} (C) (B) edge [bend left] node {} (A);
\end{tikzpicture}
\end{minipage}
\vspace{0.5cm}
\begin{minipage}{.73\linewidth}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick]
\tikzstyle{every state}=[fill=none,draw,text=black]
\tikzstyle{lbody}=[line width=2mm,join=round,fill=black!10]
\tikzstyle{dottedstate}=[circle,fill=none,draw,text=black,dashed]
\tikzstyle{dottedlbody}=[line width=2mm,join=round,fill=black!5, dashed]
\node[state] (A) {$B_h$};
\node[lbody] (B) [above right of=A] {$B$};
\node[state] (C) [right of=B] {$B_h$};
\node[dottedlbody] (D) [above right of=C] {$B$};
\node[dottedstate] (E) [right of=D] {$B_h$};
\node[state] (F) [below right of=D, right of=C] {$B_e$};
\path (A) edge [bend left] node {} (B) edge node {} (F) (B) edge node {} (C) (C) edge [bend left] node {} (F);
\path[dashed] (C) edge [bend left] node {} (D) (D) edge node {} (E) (E) edge [bend left] node {} (F);
\end{tikzpicture}
\end{minipage}
\caption[Loop abstraction]{Finite loop unwinding}
\label{fig:unwinding}
\end{figure}
We can now unwind the loop once by simply redirecting the target of the back-edge that goes from $B$ to $B_h$, to $B_e$ (and thus transforming the loop into an if-then-else). To unwind the loop $k$ times, for each unwinding, we have to create a copy of $B$ and $B_h$, and redirect the outgoing edge of the $B$ introduced in the previous unwinding to the newly introduced $B_h$. That is, the loop is transformed into an if-then-else tree of depth $k$. This abstraction is limited to finding executions that reach statements within at most $k$ loop iterations; however, as the abstraction never adds a feasible execution, we have the guarantee that every execution found really exists.
\begin{lemma}\label{thrm:soundness} Given a program $\prg$ and a program $\prg'$ which is generated from $\prg$ by $k$-times loop unwinding. Any feasible execution of $\prg'$ is also a feasible execution of $\prg$. \end{lemma}
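To make the transformation concrete, the following Python sketch performs the $k$-times unwinding on an explicit control-flow graph of the simple shape shown in Figure~\ref{fig:unwinding}. The dict-based CFG encoding, the \texttt{\#i} naming of block copies, and the restriction to a single-block loop body are our own illustrative choices and not part of our implementation; we assume $k\geq 1$.
\begin{verbatim}
# Sketch: k-times unwinding of the loop from the figure above.
# cfg maps each block to its list of successors; h is the loop
# header, b the (single-block) loop body, e the loop exit.
def unwind_loop(cfg, h, b, e, k):
    cfg = {blk: list(succ) for blk, succ in cfg.items()}
    last_body = b
    for i in range(2, k + 1):        # k-1 fresh copies of h and b
        h_i, b_i = h + "#" + str(i), b + "#" + str(i)
        cfg[h_i] = [b_i, e]          # new header: iterate again or exit
        cfg[last_body] = [h_i]       # redirect the previous back-edge
        last_body = b_i
    cfg[last_body] = [e]             # final back-edge leaves the loop
    return cfg

cfg = {"h": ["b", "e"], "b": ["h"], "e": []}
print(unwind_loop(cfg, "h", "b", "e", 2))
# {'h': ['b', 'e'], 'b': ['h#2'], 'e': [],
#  'h#2': ['b#2', 'e'], 'b#2': ['e']}
\end{verbatim}
For $k=1$ no copies are created and the back-edge is simply redirected to the loop exit, exactly as described above.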
\paragraph{Procedure Calls.} Procedure calls are another problem when computing the weakest (liberal) precondition. First, they can introduce looping control flow via recursion, and second, inlining each procedure call might dramatically increase the size of the program that has to be considered. For recursive procedure calls, we can apply the same loop unwinding used for normal loops. To inline a procedure, we split the block at the location of the procedure call into two blocks and add all blocks of the body of the called procedure in between (renaming variables and labels if necessary). Then, we add additional assignments to map the parameters of the called procedure to the arguments used in the procedure call, and the variable carrying the return value of the procedure to those receiving it in the calling procedure. If inlining all procedure calls is not feasible due to the size of the program, the call has to be replaced by a summary of the procedure body instead. In Section~\ref{sec:procsum}, we propose a technique for this that retains the soundness from Lemma~\ref{thrm:soundness}.
\paragraph{Single Static Assignment.} For the resulting loop-free program, we perform a single static assignment transformation~\cite{Cytron:1991:ECS:115372.115320} which introduces auxiliary variables to ensure that each program variable is assigned at most once on each execution path~\cite{Flanagan:2001:AEE:360204.360220}. For convenience we use the following notation: given a program variable $v$, the single static assignment transformation transforms an assignment $v := v + 1$ into $v_{i+1} := v_{i} + 1$, where $v_{i+1}$ and $v_{i}$ are auxiliary variables (and the index represents the incarnation of $v$). In the resulting program, each variable is written at most once. Hence, we can replace all assignments by assumptions without altering the feasible executions of the program. In that sense, the transformed program is passive, as it does not change the values of variables. As single static assignment is used frequently in verification, we refer to the related work for more details (e.g., \cite{Flanagan:2001:AEE:360204.360220,Leino:2005:EWP:1066417.1066421,barnett2005}).
\section{Preliminaries}\label{sec:preliminaries}
For simplicity, we consider only simple unstructured programs written in the language given in Figure~\ref{fig:language}.
\begin{figure}[htbp]
\begin{center}
\begin{align*} \textit{Program} ::= & \; \textit{Procedure}^+ \\ \textit{Procedure} ::= & \; \textbf{proc} \, \textit{ProcName} \textbf{(}\textit{VarId}^*\textbf{)} \, [ \, \textbf{returns} \, \textit{VarId} \, ] \, \textbf{\{} \, \textit{Block}^+ \, \textbf{\}} \\ \textit{Block} ::= & \; \textit{label} : \; \textit{Stmt}^* \, [ \, \textbf{goto} \, \textit{label}^+\textbf{;} \, ]\\ \textit{Stmt} ::= & \; \textit{VarId} := \textit{Expr}\textbf{;} \; | \; \textbf{assume} \, \textit{Expr}\textbf{;} \\ & \mid \textit{VarId} := \textbf{call}\, \textit{ProcName}\textbf{(}\textit{Expr}^*\textbf{)}\textbf{;} \end{align*}
\end{center}
\caption{Simple (unstructured) Language \label{fig:language}}
\end{figure}
Expressions are sorted first-order logic terms of the appropriate sort. The expression after an \textbf{assume} statement has Boolean sort. A program is given by a set of \textit{Procedures}, each with a unique name. The special procedure named ``main'' is the entry point of a program. Every procedure contains at least one block of code. A block consists of a label, a (possibly empty) sequence of statements, and a non-deterministic $\goto$ statement that lists transitions to successor blocks.
The $\goto$ statement is omitted for blocks that have no successors. A statement can either be an assignment of a term to a variable, an assumption, or a procedure call. A call to a procedure is indicated by the \textbf{call} keyword, followed by the name of the procedure to call and the (possibly empty) list of arguments. A procedure can return a value by writing into the variable mentioned in the \textbf{returns} declaration. If this declaration is omitted, the procedure cannot return a value.
\begin{figure}[htbp]
\begin{verbatim}
proc foo(x, y) returns z {
  l0: goto l1, l2;
  l1: assume y > 0;
      z := x + y;
      goto l3;
  l2: assume y <= 0;
      z := x - y;
      goto l3;
  l3:
}
proc main() {
  l0: r := call foo(0, 1);
}
\end{verbatim}
\caption{\label{fig:languageexample}Example of our Simple Language}
\end{figure}
If the conditional of an assumption evaluates to $\false$, the execution blocks. Figure~\ref{fig:languageexample} shows a small example of our simple language. We assume that every procedure contains a unique \emph{initial block} $Block_0$ and a unique \emph{final block} that has no successor. A procedure \emph{terminates} if it reaches the end of the final block. A program \emph{terminates} if the ``main'' procedure terminates. We further assume that the directed graph given by the transitions between the blocks is reducible. The presented language is simple, yet expressive enough to encode high-level programming languages such as \texttt{C}~\cite{Cohen:2009:VPS:1616077.1616080}. In this paper we do not address the problems that can arise during this translation and refer to related work instead. The weakest-liberal precondition~\cite{nla.cat-vn2681671,barnett2005} semantics of our language is defined in the standard way:
\begin{center}
\begin{tabular}{ c | c } $\mathit{\st}$ & $\wlp ( \mathit{\st}, Q)$ \\ \hline $\textbf{assume}\; E $ & $E \implies Q$ \\ $ \textit{VarId} := \textit{Expr} $ & $Q[\textit{VarId}/ \textit{Expr}]$ \\ $S;T$ & $\wlp(S,\wlp(T,Q))$ \\ \end{tabular}
\end{center}
A sequence of statements $\st$ in our language has a \emph{feasible execution} if and only if there exists an initial valuation $\val$ of the program variables such that, in the execution of $\st$, all \textbf{assume} statements are satisfied.
\begin{theorem} A sequence of statements $\st$ has a feasible execution if and only if there exists a valuation $\val$ of the program variables such that $\val \not\models \wlp(\st, \false)$. \end{theorem}
Hence, the initial state of a feasible execution of $\st$ can be derived from a counterexample to the formula representation of the weakest-liberal precondition $\wlp(\st, \false)$.
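As an illustration of these rules, the following Python sketch computes $\wlp$ over a sequence of statements, encoded as \texttt{z3} terms, and checks feasibility as in the theorem above. The tuple encoding of statements and all names are our own illustrative choices; we assume the \texttt{z3-solver} package.
\begin{verbatim}
# Sketch: wlp of a statement sequence, computed right to left,
# using wlp(S;T, Q) = wlp(S, wlp(T, Q)).
from z3 import Int, Implies, Not, BoolVal, substitute, Solver, sat

def wlp(stmts, post):
    for kind, *args in reversed(stmts):
        if kind == "assume":          # wlp(assume E, Q) = E => Q
            post = Implies(args[0], post)
        else:                         # wlp(v := E, Q) = Q[v/E]
            var, expr = args
            post = substitute(post, (var, expr))
    return post

x, y, z = Int("x"), Int("y"), Int("z")
# block l1 of the example program: assume y > 0; z := x + y;
block = [("assume", y > 0), ("assign", z, x + y)]
s = Solver()
s.add(Not(wlp(block, BoolVal(False))))
assert s.check() == sat               # feasible, e.g. for x = 0, y = 1
\end{verbatim}
Expanding $\wlp$ statement by statement in this naive fashion can blow up exponentially for branching programs; Section~\ref{sec:vc} shows how we avoid this when encoding entire procedures.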
A path in a program is a sequence of blocks $\pi = \textit{Block}_0 \ldots \textit{Block}_n$ such that there is a transition from any $\textit{Block}_i$ to $\textit{Block}_{i+1}$ for $0\leq i<n$. We extend the definition of feasible executions from statements to paths by concatenating the statements of each block. We say that a path $\pi$ is a \emph{complete path} if it starts in the initial block and ends in the final block. In the following, we always refer to complete paths unless explicitly stated differently. A path is \emph{feasible} if there exists a feasible execution for that path.
\begin{theorem} Given a path $\pi = \textit{Block}_0 \ldots \textit{Block}_n$ in a program $\prg$, where $\st_i$ represents the statements of $\textit{Block}_i$. The path $\pi$ is feasible if and only if there exists a valuation $\val$ of the program variables such that $\val \not\models \wlp(\st_0; \ldots; \st_n, \false)$. \end{theorem}
Note that our simple language does \emph{not} support assertions. For the weakest-liberal precondition, assertions are treated in the same way as assumptions. That is, we might render a path infeasible because its execution fails, even though this path might be executable. As our goal is to execute all possible control-flow paths, we encode assertions as a conditional choice. This allows us later on to check if there exist test cases that violate an assertion.
\section{Procedure Summaries}\label{sec:procsum}
For large programs, inlining all procedure calls as proposed in Section~\ref{sec:loopunwinding} might not be feasible. However, replacing them using assume-guarantee reasoning, as is done, e.g., in static checking~\cite{Barnett06boogie}, is not a feasible solution either. Using contracts requires the necessary expertise from the programmer to write proper pre- and postconditions, and thus it would violate our goal of having a fully automatic tool. If trivial contracts are generated automatically (e.g., \cite{doomedjournal}), they will introduce feasible executions that do not exist in the original program. This would break the \emph{soundness} requirement from Lemma~\ref{thrm:soundness} that each of the test cases returned by the algorithms $\algoA$ and $\algoB$ must represent a feasible path in the (loop-free) program. Instead of inlining each procedure call, we propose to replace it by a \emph{summary} of the original procedure which represents \emph{some} feasible executions of the procedure. The summary can be obtained directly by applying $\algoA$ or $\algoB$ to the body of the called procedure. Each valuation $\val$ in the set $\tcs$ returned by these algorithms contains values for all incarnations of the variables used in the procedure body on one feasible execution. In particular, for a variable $v$ with first incarnation $v_0$ and last incarnation $v_n$, $\val(v_0)$ represents one feasible input value for the considered procedure, and $\val(v_n)$ represents the value of $v$ after this procedure returns. That is, given a procedure $P$ with verification condition $\vc$ and reachability variables $\rvars$, let $\tcs = \algoA(\vc,\rvars)$ or $\tcs = \algoB(\vc,\rvars)$. Furthermore, let $V$ be the set of variables which are visible to the outside of $P$, that is, parameters and global variables. The summary $Sum$ of $P$ is expressed by the formula
\[ Sum := \bigvee_{\val\in\tcs} \Bigl( \bigwedge_{v\in V}(v_0=\val(v_0)) \wedge \bigwedge_{v\in V}(v_n=\val(v_n)) \Bigr), \]
where $n$ refers to the maximum incarnation of the particular variable $v$. The summary can be interpreted as encoding each feasible path of $P$ by the condition that, if the initial values for each variable are set appropriately, the post-state of this execution is established. As procedure summary, we need an under-approximation of the feasible executions of the procedure. Therefore, we encode the summary of the previously computed paths and let the theorem prover choose the right path. In practice, in particular when using $\algoA$, it can be useful to consider only a subset of $\tcs$ for the summary construction, as a formula representing all paths might outgrow the actual verification condition of the procedure.
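To illustrate the construction, the following Python sketch assembles $Sum$ from a set of test-case valuations over \texttt{z3} terms. The SSA naming (\texttt{a\_0}, \texttt{c\_1}, \ldots) and the dict encoding of valuations are our own conventions; the concrete values match the \verb|foo| example discussed below.
\begin{verbatim}
# Sketch: building Sum from the valuations in tcs.
from z3 import Int, And, Or

def build_summary(tcs, first_incs, last_incs):
    # first_incs/last_incs: SSA names of the first/last
    # incarnations of the variables visible outside of P.
    disjuncts = []
    for val in tcs:
        eqs = [Int(n) == val[n] for n in first_incs + last_incs]
        disjuncts.append(And(eqs))
    return Or(disjuncts)

# Two feasible paths of foo, one recorded valuation each:
tcs = [{"a_0": 0, "b_0": 0, "c_1": -1},
       {"a_0": 0, "b_0": 1, "c_1": 1}]
Sum = build_summary(tcs, ["a_0", "b_0"], ["c_1"])
\end{verbatim}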
On the caller side, we can now replace the call to a procedure $P$ by an assumption $\assume Sum$, where $Sum$ is the procedure summary of $P$. We further have to add some framing assignments to map the input and output variables of the called procedure to those of the calling procedure. We illustrate this step using the following example program:
\begin{verbatim}
proc foo(a, b) returns c {
  l1: goto l2, l3;
  l2: assume b > 0;
      c := a + 1;
      goto l4;
  l3: assume b <= 0;
      c := a - 1;
      goto l4;
  l4:
}
proc bar(x) returns z {
  l1: z := call foo(x,1);
}
\end{verbatim}
Applying the algorithm $\algoA$ to the procedure \verb|foo| will result in a summary like
\[ Sum := \begin{array}[c]{r} (a_0=0\wedge b_0=0)\wedge(c_1=a_0-1) \\ \vee (a_0=0\wedge b_0=1)\wedge(c_1=a_0+1) \end{array} \]
This summary can be used to replace the call statement in \verb|bar|, after the single static assignment has been performed, as follows:
\begin{verbatim}
proc bar(x) returns z {
  l1: assume a0=x;
      assume b0=1;
      assume Sum;
      assume z1=c1;
}
\end{verbatim}
Note that, to avoid recomputing the single static assignment, when reaching a call statement, we increment the incarnation count for each variable that might be modified by this procedure and the incarnation count of each global variable. Therefore, we have to add frame conditions if a global variable is not changed by the summary (in this example this is not necessary, as there are no global variables). A procedure summary can be seen as a switch case over possible input values. That is, the summary provides the return values for a particular set of input values to the called procedure. Any execution that calls the procedure with other input values becomes infeasible. In that sense, using procedure summaries is an under-approximation of the set of feasible executions and thus sound for our purpose.
\begin{lemma}[Soundness]\label{lm:inlinesound} Given a loop-free procedure $P$ which calls another loop-free procedure $P'$. Let $P^\#$ be the version of procedure $P$ where all calls to $P'$ have been replaced by the summary of $P'$. Any feasible execution of $P^\#$ is also a feasible execution of $P$. \end{lemma}
Using these summaries is a very strong abstraction: only a very limited number of possible input values is considered, as the set of feasible executions of the called procedure is reduced to one per control-flow path (or even fewer, if algorithm $\algoB$ is used). In particular, this causes problems if a procedure is called with constant values as arguments. In the example above, inlining only works if the theorem prover picks the same constant when computing the summary that is used on the caller side (which is the case here). If the constants do not match, the summary might provide no feasible path through the procedure, which is still sound but not useful. In that case, a new summary has to be computed where the constant values from the caller side are used as a precondition for the procedure (e.g., by adding an appropriate assume statement to the first block of the called procedure) before re-applying algorithm $\algoA$ or $\algoB$. The benefit of this summary computation is that it is fully automatic and the computation of the summary is relatively cheap, because the called procedure has to be analyzed at least once anyway. However, it is not a silver bullet, and its practical value has to be evaluated in future work. We do not consider procedure summaries an optimization for efficiency; rather, they are a necessary abstraction to keep our method scalable.
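As a concrete illustration of the caller-side substitution above, the substituted body of \verb|bar| corresponds to the following satisfiability check, sketched with the \texttt{z3} bindings and the same illustrative SSA naming as before (this shows the shape of the query sent to the theorem prover, not our implementation):
\begin{verbatim}
# Sketch: feasibility of bar after substituting the summary of foo.
from z3 import Int, And, Or, Solver, sat

a0, b0, c1 = Int("a_0"), Int("b_0"), Int("c_1")
x0, z1 = Int("x_0"), Int("z_1")

Sum = Or(And(a0 == 0, b0 == 0, c1 == a0 - 1),
         And(a0 == 0, b0 == 1, c1 == a0 + 1))

s = Solver()
# assume a0=x; assume b0=1; assume Sum; assume z1=c1;
s.add(a0 == x0, b0 == 1, Sum, z1 == c1)
assert s.check() == sat   # the summarized call remains feasible
print(s.model())          # e.g. x_0 = 0, z_1 = 1
\end{verbatim}
Had \verb|bar| passed a constant other than the values $0$ and $1$ recorded in $Sum$, the check would come back unsatisfiable, triggering the on-demand recomputation described above.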
\section{Related Work}\label{sec:related}
Automatic test case generation is a wide field, ranging from purely random generation of input values (e.g., \cite{PachecoLET2007}) to complex static analysis. The presented algorithms can best be compared to tools that provide automatic white-box test case generation. Probably the most notable tools in this field are PREfix~\cite{Bush:2000:SAF:348422.348428} and Pex~\cite{DBLP:conf/sigsoft/TillmannS05a}. Both tools use symbolic execution to generate test cases that provoke a particular behavior. Pex further allows the specification of parameterized unit tests. Symbolic execution analyzes a program path by path and then uses constraint solving to identify adequate input to execute each path. In contrast, our approach encodes all paths into one first-order formula and then calls a theorem prover to return any path and the input values needed to execute this path. In a way, symbolic execution selects a path and then searches for feasible input values for this path, while our approach just asks the theorem prover for \emph{any} path which is feasible. One advantage of our approach is that it might be more efficient to ask the theorem prover for a feasible path than to check for each path whether it is feasible. Many other approaches to static analysis-based automatic test case generation and bounded model checking exist but, due to the early stage of the development of the proposed ideas, a detailed comparison is subject to future work. In \cite{Engel07generatingunit}, test cases are generated from interactive correctness proofs. The approach of using techniques from verification to identify feasible control-flow paths for test case generation is similar to ours. However, they generate test cases from a correctness proof, which might contain an over-approximation of the feasible executions. This can result in non-executable test cases. Our approach under-approximates the set of feasible executions, and thus any of the generated test cases can be executed. Using a first-order formula representation of a program and a theorem prover to identify particular paths in that program goes back to, e.g., ESC~\cite{Flanagan:2002:ESC:543552.512558} and, more recently, Boogie~\cite{Leino:2005:EWP:1066417.1066421,Flanagan:2001:AEE:360204.360220,Barnett06boogie}. These approaches use similar program transformation steps to generate the formula representation of a program. However, the purpose of these approaches is to show the absence of failing executions. Therefore, their formula represents an approximation of the weakest precondition of the program with postcondition $\true$. In contrast, we use the negated $\wlp$ with postcondition $\false$. Showing the absence of failing executions is a more complicated task and requires a user-provided specification of the intended behavior of the program. In \cite{Grigore:2009:SPU:1557898.1557904}, Grigore et al.\ propose to use the strongest postcondition instead of the weakest precondition. This would also be possible for our approach. As mentioned in Section~\ref{sec:vc}, the reachability variables are used to avoid encoding the complete strongest postcondition. However, it would be possible to use $sp$ and modify the reachability variables to encode $\wlp$. Recently there has been some research on $\wlp$-based program analysis: in \cite{1292319}, an algorithm to detect unreachable code is presented. This algorithm can be seen as a variation of $\algoB$. However, it does not return test cases.
The algorithms $\algoFM$~\cite{Hoenicke:2009:DWP:1693345.1693374,doomedjournal} and $\algoVSTTE$~\cite{vstte12} detect code which never occurs on feasible executions. While $\algoFM$ detects \emph{doomed program points}, i.e., control locations, $\algoVSTTE$ detects statements, i.e., edges in the CFG. If a piece of code cannot be proved doomed/infeasible, a counterexample is obtained which represents a normally terminating execution. The main difference to our approach is that their formula is satisfied by all executions that either block \emph{or fail}. We do not consider that an execution might fail and leave this to the execution of the test case. There are several strategies to cover control-flow graphs. The most related to this work is \cite{vstte12}, which has already been explained above. Other algorithms, such as \cite{Bertolino:1994:AGP:203102.203103,Bertolino:1993:UEA:156928.156932,Forgacs:1997:FTP:267896.267922}, present strategies to compute feasible path covers efficiently. These algorithms use dynamic analysis and are therefore not complete. Lahiri et al.~\cite{DBLP:conf/cav/LahiriNO06} used a procedure similar to one of our proposed algorithms to efficiently compute predicate abstraction. They used an AllSMT loop over a set of \textsl{important} predicates. One of our algorithms, $\algoA$, lifts this idea to the context of test case generation and path coverage. Our second algorithm, $\algoB$, cannot be used in their context, since the authors of that paper need to get all satisfying assignments for the set of predicates. In contrast, we are only interested in the set of predicates that are satisfied in at least one model of the SMT solver.
\section{Reachability Verification Condition}\label{sec:vc}
This section explains how to find a formula $\vc$ (the reachability verification condition) such that every satisfying valuation $\val$ corresponds to a terminating execution of the program. Moreover, it is possible to determine from the valuation which blocks of the program were reached by this execution. For this purpose, $\vc$ contains an auxiliary variable $\rvar_i$ for each block that is true if the block is visited by the execution. From such an execution we can derive a test case by looking at the initial valuation of the variables. A test case of a program can be found using the weakest (liberal) precondition. If a state satisfies the weakest precondition $wp(S,\true)$ of a program $S$, it will produce a non-failing run. However, it may still block in an \texttt{assume} statement. Since we desire to find non-blocking test cases, we follow~\cite{vstte12} and use the weakest liberal precondition of $\false$. A state satisfies $\wlp(S, \false)$ if and only if it does not terminate. Hence, we can use $\lnot \wlp(S,\false)$ to find terminating runs of $S$. For a loop-free program, computing the weakest (liberal) precondition is straightforward and has been discussed in many previous articles (e.g., \cite{barnett2005,Leino:2005:EWP:1066417.1066421,Flanagan:2002:ESC:543552.512558,Grigore:2009:SPU:1557898.1557904}). To avoid an exponential explosion of the formula size, for each block
\[ \textit{Block}_i ::= \, i : S_i; \goto Succ_i \]
we introduce an auxiliary variable $B_i$ that represents the formula $\lnot \wlp(Block_i, \false)$, where $Block_i$ is the program fragment starting at label $i$ and continuing to the termination point of the program.
These variables can be defined as
\begin{align*} \WLP :\; \bigwedge_{0\leq i< n} \Bigl( B_i &\equiv \lnot \wlp\bigl(S_i, \bigwedge_{j \in Succ_i} \lnot B_j\bigr) \Bigr) \\ {}\land \Bigl( B_n &\equiv \lnot\wlp(S_n, \false) \Bigr). \end{align*}
Introducing the auxiliary variables avoids copying the $\wlp$ of the successor blocks. If we are interested in a terminating execution that starts in the initial location $0$, we can find a satisfying valuation for
$$ \WLP \land B_0. $$
\begin{lemma}\label{lemma:wlp} There is a satisfying valuation $\val$ for the formula $\WLP$ with $\val(B_i)= \true$ if and only if there is a terminating execution for the program fragment starting at the block $Block_i$. \end{lemma}
The proof is given in \cite{vstte12}. Thus, a satisfying valuation $\val$ of $\WLP \land B_0$ corresponds to a terminating execution of the whole program. Moreover, if $\val(B_i)$ is $\true$, the same valuation also corresponds to a terminating execution starting at the block $Block_i$. However, this does not mean that there is an execution that starts in the initial state, visits the block $Block_i$, and then terminates. This is because the formula does not encode that $Block_i$ is reachable from the initial state. To overcome this problem, one may use the strongest postcondition to compute the states for which $Block_i$ is reachable. This roughly doubles the size of the formula. In our case there is a simpler check for reachability. Again, we introduce an auxiliary variable $\rvar_{i}$ for every block label $i$ that holds if the execution reaches $Block_i$ from the initial state and terminates. Let $Pre_i$ be the set of predecessors of $Block_i$, i.e., the set of all $j$ such that the final $\goto$ instruction of $Block_j$ may jump to $Block_i$. Then we can fix the auxiliary variables $\rvar_i$ using $\WLP$ as follows:
\[ \vc:\; \WLP \land (\rvar_0 \equiv B_0) \land \bigwedge_{1\leq i\leq n} \biggl(\rvar_i \equiv B_i \land \bigvee_{j\in Pre_i} \rvar_j\biggr). \]
That is, the reachability variable of the initial block is set to true if the run is terminating. The reachability variable of any other block is set to true if the current valuation describes a normally terminating execution starting at this block and at least one predecessor has its reachability variable set to true.
\begin{theorem}\label{thm:vc} There is a valuation $\val$ that satisfies $\vc$ with $\val(\rvar_0)=\true$ if and only if the corresponding initial state leads to a feasible complete path $\pi$ for the procedure. Moreover, the value of the reachability variable $\val(\rvar_i)$ is $\true$ if and only if there is a path $\pi$ starting in this initial state that visits block $Block_i$. \end{theorem}
\begin{proof} Let there be a feasible path $\pi$ and let $\val$ be the corresponding valuation for the initial variables. If one sets the value of each of the auxiliary variables $B_i$ and $\rvar_i$ according to its definition in $\vc$, then $\vc$ is satisfied by $\val$. Moreover, the $B_i$ variables for every visited block must be $\true$ by Lemma~\ref{lemma:wlp}. Then also $\val(\rvar_0)$ must be $\true$, i.e., the reachability variable for the initial block must be $\true$. By induction one can see that $\val(\rvar_i)$ must also be $\true$ for every visited block $Block_i$. For the other direction, let $\val$ be a satisfying valuation for $\vc$ with $\val(\rvar_0)=\true$. Then also $\val(B_0)=\true$ holds. Hence, by Lemma~\ref{lemma:wlp}, this valuation corresponds to a feasible path $\pi$. Let $\val(\rvar_i)=\true$ for some block.
If $i=0$, then this is the initial block, which is visited by the feasible path $\pi$. For $i\neq 0$, there is some predecessor $j\in Pre_i$ with $\val(\rvar_j) = \true$. By induction over the order of the blocks (note that the code is loop-free), one can assume that there is a feasible path starting in this initial state that visits $Block_j$. Since $Block_j$ ends with a non-deterministic $\goto$ that can jump to $Block_i$, the latter block is reachable. Moreover, since $\rvar_i$ is $\true$, $B_i$ must also be $\true$, and by Lemma~\ref{lemma:wlp} the valuation corresponds to a terminating run starting at block $Block_i$. Thus, there is a run that starts at the initial state, reaches block $Block_i$, and terminates. \end{proof}
Thus, $\vc$ is the reachability verification condition that can be used to generate test cases of the program that reach certain blocks. To cover all statements by test cases, one needs to find a set of valuations for $\vc$ such that each variable $\rvar_i$ is $\true$ in at least one valuation. The following section will tackle this problem.
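Before turning to those algorithms, we illustrate the construction on a tiny passive program. The following Python sketch builds $\vc$ and extracts a covering set of test inputs; the toy program, the \texttt{z3} encoding, and the coverage loop (a simplified variant in the spirit of the enabling-clause algorithm $\algoB$ of Section~\ref{sec:algorithm}) are our own illustrative choices. For assume-only blocks, $\lnot\wlp(S_i,\bigwedge_{j}\lnot B_j)$ simplifies to the conjunction of the assumed expressions conjoined with $\bigvee_j B_j$.
\begin{verbatim}
# Sketch: vc for   b0: goto b1, b2;   b1: assume x > 0; goto b3;
#                  b2: assume x < 0; goto b3;   b3: (final block)
from z3 import Int, Bools, And, Or, Solver, is_true, sat

x = Int("x")
B0, B1, B2, B3 = Bools("B0 B1 B2 B3")
R0, R1, R2, R3 = Bools("R0 R1 R2 R3")

WLP = And(B3 == True,                 # final block always terminates
          B1 == And(x > 0, B3),
          B2 == And(x < 0, B3),
          B0 == Or(B1, B2))
VC = And(WLP, R0 == B0,
         R1 == And(B1, R0), R2 == And(B2, R0),
         R3 == And(B3, Or(R1, R2)))

s = Solver()
s.add(VC, R0)
uncovered, tests = [R0, R1, R2, R3], []
while uncovered:
    s.push()
    s.add(Or(uncovered))              # ask for a run covering new code
    if s.check() != sat:
        s.pop()
        break                         # remaining blocks are infeasible
    m = s.model()
    tests.append(m[x])                # the initial valuation is the test
    uncovered = [r for r in uncovered if not is_true(m.eval(r))]
    s.pop()
print(tests)                          # e.g. [1, -1]: blocks b0..b3 covered
\end{verbatim}
Each returned input exercises one feasible complete path, and once the loop terminates, every block whose reachability variable was never set to $\true$ is provably infeasible within the given bounds.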
\section{Introduction}
Communication complexity of Boolean functions has a long and rich past, stemming from the paper of Yao in 1979~\cite{yao79}, whose motivation was to study the area of VLSI circuits. In the years that followed, tremendous progress has been made in developing a rich array of lower bound techniques for various models of communication complexity (see e.g.~\cite{KN97}). From the physics side, the question of studying how much communication is needed to simulate distributions arising from physical phenomena, such as measuring bipartite quantum states, was posed in 1992 by Maudlin, a philosopher of science, who wanted to quantify the non-locality inherent to these systems~\cite{maudlin92}. Maudlin, and the authors who followed~\cite{bct99,steiner00,tb03,cgmp05,dlr07} (some independently of his work, and of each other), progressively improved upper bounds on simulating correlations of the 2-qubit singlet state. In a recent breakthrough, Regev and Toner~\cite{rt07} proved that two bits of communication suffice to simulate the correlations arising from two-outcome measurements of arbitrary-dimension bipartite quantum states. In the more general case of non-binary outcomes, Shi and Zhu gave a protocol to approximate quantum distributions within constant error, using constant communication~\cite{shi05}. No non-trivial lower bounds are known for this problem. In this paper, we consider the more general framework of simulating non-signaling distributions. These are distributions of the form $p(a,b|x,y)$, where Alice gets input $x$ and produces an output $a$, and Bob gets input $y$ and outputs $b$. The non-signaling condition is a fundamental property of bipartite physical systems, which states that the players gain no information on the other player's input. In particular, distributions arising from quantum measurements on shared bipartite states are non-signaling, and Boolean functions may be reduced to extremal non-signaling distributions with Boolean outcomes and uniform marginals. Outside of the realm of Boolean functions, a very limited number of tools are available to analyse the communication complexity of distributed tasks, especially for quantum distributions with non-uniform marginals. In such cases, the distributions live in a larger-dimensional space and cannot be cast as communication matrices, so standard techniques do not apply. The structure of non-signaling distributions has been the object of much study in the quantum information community, yet outside the case of distributions with Boolean inputs or outcomes~\cite{jones05,barrettpironio05}, or with uniform marginal distributions, much remains to be understood. Our main contribution is a new method for handling all non-signaling distributions, including the case of non-Boolean outcomes and non-uniform marginals, based on affine combinations of lower-complexity distributions, which we use to obtain both upper and lower bounds on communication. We use the elegant geometric structure of the non-signaling distributions to analyse the communication complexity of Boolean functions, but also non-Boolean or partial functions. Although they are formulated, and proven, in quite a different way, our lower bounds turn out to subsume Linial and Shraibman's factorization norm lower bounds~\cite{ls07}, in the restricted case of Boolean functions.
Similarly, our upper bounds extend the upper bounds of Shi and Zhu for approximating quantum distributions~\cite{shi05} to all non-signaling distributions (in particular, distributions obtained by protocols using entanglement \emph{and} quantum communication). Our complexity measures can be expressed as linear (or semidefinite) programs, and when we consider the dual of our lower bound expressions, these turn out to correspond precisely to maximal Bell inequality violations in the case of classical communication, and Tsirelson inequality violations for quantum communication. Hence, we have made formal the intuition that large Bell inequality violations should lead to large lower bounds on communication complexity. We also show that there cannot be a large gap between the classical and quantum expressions. This was previously known only in the case of distributions with Boolean outcomes and uniform marginals, where it followed from Tsirelson's theorem and Grothendieck's inequality, neither of which is known to extend beyond this special case. This also shows that our method, as was already the case for Linial and Shraibman's bounds, cannot hope to prove large gaps between classical and quantum communication complexity. While this is a negative result, it also sheds some light on the relationship between the Linial and Shraibman family of lower bound techniques and the information-theoretic methods, such as the recent subdistribution bound~\cite{jkn08}, one of the few lower bound techniques not known to follow from Linial and Shraibman. We give an example of a problem~\cite{bct99} for which rectangle size gives an exponentially better lower bound than our method.
\vspace{2mm}
\paragraph{Summary of results} The paper is organized as follows. In Section~\ref{sec:preliminaries}, we give the required definitions and models of communication complexity and characterizations of the classes of distributions we consider. In Section~\ref{sec:lower-bounds}, we prove our lower bound on classical and quantum communication, and show that it coincides with Linial and Shraibman's method in the special case of Boolean functions ({\bf Theorem~\ref{thm:quasi-lhv}}).
Our lower bounds are linear programs (respectively, SDPs in the quantum case), and in Section~\ref{sec:dual}, we show that the dual linear programs (resp.\ SDPs) have a natural interpretation in quantum information, as they coincide with Bell (resp.\ Tsirelson) inequality violations ({\bf Theorem~\ref{thm:lp-bell}}). We also give a dual expression which has a natural interpretation as the maximum winning probability of an associated XOR game ({\bf Corollary~\ref{cor:LS-game}}). The primal form is also the multiplicative inverse of the maximum winning probability of the associated XOR game, where all inputs have the same winning probability. In Section~\ref{sec:gamma-vs-nu}, we compare the two methods and show that the quantum and classical lower bound expressions can differ by at most a factor that is linear in the number of outcomes ({\bf Theorem~\ref{thm:nu-gamma2}}). Finally, in Section~\ref{sec:smp}, we give upper bounds on simultaneous messages complexity in terms of our lower bound expression ({\bf Theorem~\ref{thm:smp}}). We use fingerprinting methods~\cite{bcwdw01, yaofinger03, shi05, gkr06} to give very simple proofs that classical communication with shared randomness, or quantum communication with shared entanglement, can be simulated in the simultaneous messages model, with an exponential blowup in communication, and in particular that any quantum distribution can be approximated with constant communication.
\addforqip{ \paragraph{Summary of results} We define a measure of complexity related to the factorization norm~\cite{ls07}. Let $\L$, ${\cal Q}$, and ${\cal C}$ be the set of local, quantum, and non-signaling distributions, respectively.
\noindent{\bf Definition.} For any non-signaling distribution $\mathbf{p}$,
\begin{myitemize} \item $\tilde{\nu}(\mathbf{p}) = \min \{ \sum_i \abs{q_i} : \exists \mathbf{p}_i\in\L, q_i\in \mathbb R, \mathbf{p} = \sum_i q_i \mathbf{p}_i \}$, and $\tilde{\nu}^\epsilon(\mathbf{p}) = \min \{ \tilde{\nu}(\mathbf{p}') : \delta(\mathbf{p},\mathbf{p}') \leq \epsilon \}$, \item $\tilde{\gamma}_2(\mathbf{p}) = \min \{ \sum_i\abs{q_i} : \exists \mathbf{p}_i\in {\cal Q}, q_i\in \mathbb R, \mathbf{p} = \sum_i q_i \mathbf{p}_i \}$, and $\tilde{\gamma}_2^\epsilon(\mathbf{p}) = \min \{ \tilde{\gamma}_2(\mathbf{p}') : \delta(\mathbf{p},\mathbf{p}')\leq \epsilon \}$. \end{myitemize}
We show the following general lower bounds for classical and quantum communication complexity.
\noindent{\bf Theorem.} For any non-signaling distribution $\mathbf{p}$,
\begin{myenumerate} \item $R_0^{\mathrm{pub}}(\mathbf{p}) \geq \log(\tilde{\nu}(\mathbf{p}))-1$, and $R_\epsilon^{\mathrm{pub}}(\mathbf{p})\geq \log(\tilde{\nu}^{\epsilon}(\mathbf{p}))-1$. \item $Q_0^{\mathrm{ent}}(\mathbf{p}) \geq \frac{1}{2}\log(\tilde{\gamma}_2(\mathbf{p}))-1$, and $Q_\epsilon^{\mathrm{ent}}(\mathbf{p})\geq \frac{1}{2}\log(\tilde{\gamma}_2^{\epsilon}(\mathbf{p}))-1$. \end{myenumerate}
The quantities $\tilde{\gamma}_2$ and $\tilde{\nu}$ amount to finding maximum normalized Bell or Tsirelson inequality violations.
\noindent{\bf Theorem.} For any non-signaling distribution $\mathbf{p}$,
\begin{myenumerate} \item $\tilde{\nu}(\mathbf{p})=\max \{ B(\mathbf{p}): \forall \mathbf{p}'\in \L,\ |B(\mathbf{p}')|\leq 1 \}$, and \item $\tilde{\gamma}_2(\mathbf{p})=\max \{ B(\mathbf{p}): \forall \mathbf{p}'\in {\cal Q},\ |B(\mathbf{p}')| \leq 1 \}$, \end{myenumerate}
where the maximization is over linear functionals $B:\mathsf{aff}({\cal C})\mapsto\mathbb R$.
The $\tilde{\nu}$ and $\tilde{\gamma}_2$ methods are shown to be closely related.
\noindent{\bf Theorem.} For any non-signaling distribution $\mathbf{p}$ with inputs in $\mathcal{X} \times \mathcal{Y}$ and outcomes in $\mathcal{A} \times \mathcal{B}$, where $A=|\mathcal{A}|$ and $B=|\mathcal{B}|$, $\tilde{\nu}(\mathbf{p})\leq[2AB(K_G+1)-1]\,\tilde{\gamma}_2(\mathbf{p})$.
Finally, we show upper bounds for the simultaneous messages model in terms of $\tilde{\nu}$.
\noindent{\bf Theorem.} For any non-signaling distribution $\mathbf{p}$ with inputs in $\mathcal{X} \times \mathcal{Y}$, where $|\mathcal{X} \times \mathcal{Y}| \leq 2^n$, and outcomes in $\mathcal{A} \times \mathcal{B}$, where $A=|\mathcal{A}|$ and $B=|\mathcal{B}|$, and any $\epsilon, \delta < 1/2$,
\begin{myenumerate} \item $R_{\epsilon + \delta}^{\parallel,\mathrm{pub}}(\mathbf{p}) \leq 16 \left[\frac{AB\tilde{\nu}^\epsilon(\mathbf{p})}{\delta}\right]^2 \ln\left[\frac{4AB}{\delta}\right] \log(AB) $, and \item $Q_{\epsilon+\delta}^{\parallel}(\mathbf{p}) \leq O\left((AB)^5\left[\frac{\tilde{\nu}^\epsilon(\mathbf{p})}{\delta}\right]^4\ln\left[\frac{AB}{\delta}\right]\log(n)\right) $. \end{myenumerate} }
\paragraph{Related work} The use of affine combinations for non-signaling distributions has roots in the quantum logic community, where quantum non-locality has been studied within the setting of more general probability theories~\cite{foulisrandall81, randallfoulis81, klayrandallfoulis87, wilce92}. Until recently, this line of work was largely unknown in the quantum information theory community~\cite{barrett07, bblw}. The structure of the non-signaling polytope has been the object of much study. A complete characterization of the vertices has been obtained in some, but not all, cases: for two players, the case of binary inputs~\cite{barrettlinden05} and the case of binary outputs~\cite{barrettpironio05,jones05} are known, and for $n$ players, the case of Boolean inputs and outputs is known~\cite{barrettpironio05}. The work on simulating quantum distributions has focused mainly on providing upper bounds, and most results apply to simulating the correlations only. A few results address the simulation of quantum distributions with non-uniform marginals. Bacon and Toner give an upper bound of 2 bits for non-maximally entangled qubit pairs~\cite{tb03}. Shi and Zhu~\cite{shi05} show a constant upper bound for approximating any quantum distribution (including the marginals) to within a constant. Pironio gives a general lower bound technique based on Bell-like inequalities~\cite{pironio03}. There are a few ad hoc lower bounds on simulating quantum distributions, including a linear lower bound for a distribution based on the Deutsch-Jozsa problem~\cite{bct99}, and a recent lower bound of Gavinsky~\cite{gavinsky09}. The $\gamma_2$ method was first introduced as a measure of the complexity of matrices~\cite{lmss07}. It was shown to be a lower bound on communication complexity~\cite{ls07}, and to generalize many previously known methods. Lee \textit{et al.}\ use it to establish direct product theorems and relate the dual norm of $\gamma_2$ to the value of XOR games~\cite{lss08}. Lee and Shraibman~\cite{les07} use a multidimensional generalization of a related quantity $\mu$ (where the norm-1 ball consists of cylinder intersections) to prove a lower bound in the multiparty number-on-the-forehead model, for the disjointness function.
\addforqip{ \paragraph{What is new since QIP 08.} We have presented a preliminary version of this work at QIP 2008.
All the results in this submission are either new or have been significantly generalized, and the proofs have been simplified. More specifically, we presented the lower bound and connection to Bell inequalities in the case of binary outcomes and uniform marginals, for classical communication only. The relation between $\tilde{\gamma}_2$ and $\tilde{\nu}$ is new, as are the upper bounds. }
\section{Preliminaries}\label{sec:preliminaries}
In this paper, we extend the framework of communication complexity to non-signaling distributions. This framework encompasses the standard models of communication complexity of Boolean functions, but also total and partial non-Boolean functions and relations, as well as distributions arising from measurements of bipartite quantum states. Most results we present also extend to the multipartite setting.
\subsection{Non-signaling distributions}
Non-signaling, a fundamental postulate of physics, states that any observation on part of a system cannot instantaneously affect a remote part of the system, or, similarly, that no signal can travel instantaneously. We consider distributions $p(a,b|x,y)$, where $x\in \mathcal{X}, y\in \mathcal{Y}$ are the inputs of the players, and they are required to each produce an outcome $a\in \mathcal{A}, b\in \mathcal{B}$, distributed according to $p(a,b|x,y)$. We restrict ourselves to distributions where each player's outcome does not depend on the other player's input. Mathematically, non-signaling (also called causality) is defined as follows.
\noindent
\begin{defn}[Non-signaling distributions]\label{def:non-signaling} A bipartite, conditional distribution $\mathbf p$ is non-signaling if \begin{eqnarray*} \forall a,x,y,y',& \sum_b p(a,b|x,y) = \sum_b p(a,b|x,y') ,\\ \forall b,x, x',y,& \sum_a p(a,b|x,y) = \sum_a p(a,b|x',y). \end{eqnarray*} \end{defn}
For any non-signaling distribution, the marginal distribution on Alice's output $p(a|x,y) = \sum_b p(a,b|x,y)$ does not depend on $y$, so we write $p(a|x)$, and similarly $p(b|y)$ for the marginal distribution on Bob's output. We denote by ${\cal C}$ the set of all non-signaling distributions. In the case of binary outcomes, more specifically $\mathcal{A}=\mathcal{B}=\{\pm1\}$, it is known that a non-signaling distribution is uniquely determined by the (expected) correlations, defined as $C(x,y)= E(a\cdot b|x,y)$, and the (expected) marginals, defined as $M_A(x)=E(a|x), M_B(y)=E(b|y)$.
\noindent
\begin{proposition}\label{prop:representation} For any functions $C:\mathcal{X}\times \mathcal{Y} \rightarrow [-1,1]$, $M_A:\mathcal{X}\rightarrow [-1,1]$, $M_B:\mathcal{Y}\rightarrow [-1,1]$ satisfying $1+ a\cdot b\; C(x,y) + a M_A(x) + b M_B(y)\geq 0$ for all $(x,y)\in \mathcal{X}\times \mathcal{Y}$ and $a,b\in\{\pm1\}$, there is a unique non-signaling distribution $\mathbf{p}$ such that $\forall\ x,y$, $E(a\cdot b|x,y)=C(x,y)$, $E(a|x)=M_A(x)$, and $E(b|y)=M_B(y)$, where $a,b$ are distributed according to $\mathbf{p}$. \end{proposition}
\begin{proof} Fix $x,y$. $C, M_A, M_B$ are obtained from $\mathbf p$ by the following full-rank system of equations:
$$ \left( \begin{array}{rrrr} 1 & -1 & -1 & 1\\ 1 & 1 & -1 & -1\\ 1 & -1 & 1 & -1\\ 1 & 1 & 1 & 1 \end{array} \right) \left( \begin{array}{r} p(+1,+1|x,y)\\ p(+1,-1|x,y)\\ p(-1,+1|x,y)\\ p(-1,-1|x,y) \end{array} \right) = \left( \begin{array}{c} C(x,y)\\ M_{A}(x)\\ M_{B}(y)\\ 1 \end{array} \right). $$
Computing the inverse yields $p(a,b|x,y) = \frac{1}{4}(1+ a\cdot b\; C(x,y) + a M_A(x) + b M_B(y))$.
\end{proof}
We will write ${\mathbf p}=(C, M_A, M_B)$ and use both notations interchangeably when considering distributions over binary outcomes. We also denote by ${\cal C}_0$ the set of non-signaling distributions with uniform marginals, that is, ${\mathbf p}=(C, 0, 0)$, and write $C\in {\cal C}_0$, omitting the marginals when there is no ambiguity.
\subsubsection{Boolean functions}
The communication complexity of Boolean functions is a special case of the problem of simulating non-signaling distributions. As we shall see in Section~\ref{subsec:characterization-classical}, it happens that the associated distributions are extremal points of the non-signaling polytope. If the distribution stipulates that the product of the players' outputs equals some function $f: \mathcal{X} \times \mathcal{Y} \rightarrow \{\pm1\}$, then this corresponds to the standard model of communication complexity (up to an additional bit of communication, for Bob to output $f(x,y)$). If we further require that Alice's output be $+1$ or $-1$ with equal probability, and likewise for Bob, then the distribution is non-signaling and has the following form:
\begin{defn}\label{def:boolean-functions} For a function $f:\mathcal{X}\times \mathcal{Y} \rightarrow \{-1, 1\}$, denote by $\mathbf{p}_f$ the distribution defined by $p_f(a,b|x,y) = \frac{1}{2}$ if $f(x,y)=a\cdot b$ and 0 otherwise. Equivalently, $\mathbf{p}_f=(C_f,0,0)$, where $C_f(x,y)=f(x,y)$. \end{defn}
In the case of randomized communication complexity, a protocol that simulates a Boolean function with error probability $\epsilon$ corresponds to simulating correlations $C'$ that agree in sign with $C_f$ and are scaled down by no more than a factor $1-2\epsilon$, that is, $\forall x,y$, $\mathrm{sgn}(C'(x,y))=C_f(x,y)$ and $\abs{C'(x,y)} \geq 1-2\epsilon$. While we will not consider these cases in full detail, non-Boolean functions, partial functions, and some classes of relations may be handled in a similar fashion, hence our techniques can be used to show lower bounds in these settings as well.
\subsubsection{Quantum distributions}
Of particular interest in the study of quantum non-locality are the distributions arising from measuring bipartite quantum states.
We will use the following definition:
\noindent
\begin{defn}\label{def:quantum-distribution} A distribution $\mathbf{p}$ is {\em quantum} if there exists a quantum state $\ket{\psi}$ in a Hilbert space ${\mathcal{H}}$ and measurement operators $\{E_a(x):a\in \mathcal{A},x\in \mathcal{X}\}$ and $\{E_b(y):b\in \mathcal{B}, y\in \mathcal{Y}\}$ such that $p(a,b|x,y)=\bra{\psi}E_a(x) E_b(y)\ket{\psi}$, with the measurement operators satisfying \begin{enumerate} \item $E_a(x)^\dagger=E_a(x)$ and $E_b(y)^\dagger=E_b(y)$, \item $E_a(x)\cdot E_{a'}(x)=\delta_{aa'}E_a(x)$ and $E_b(y)\cdot E_{b'}(y)=\delta_{bb'}E_b(y)$, \item $\sum_a E_a(x)=\mathbbm 1$ and $\sum_b E_b(y)=\mathbbm 1$, where $\mathbbm 1$ is the identity operator on ${\mathcal{H}}$, \item $E_a(x)\cdot E_b(y)=E_b(y)\cdot E_a(x)$. \end{enumerate} \end{defn}
Note that a more standard definition would be to replace the last condition on the measurement operators (commutativity) by the stronger condition that the operators $E_a(x)$ act non-trivially on a subspace ${\mathcal{H}}_A$ only, and that the operators $E_b(y)$ act non-trivially on a subspace ${\mathcal{H}}_B$ only, with ${\mathcal{H}}={\mathcal{H}}_A\otimes{\mathcal{H}}_B$. If we restrict the Hilbert space ${\mathcal{H}}$ to be finite-dimensional, these two definitions are equivalent, but whether this also holds in full generality is still unknown. We use this less standard definition because it will allow us to use the results from~\cite{npa08} (see this reference for a discussion of the different definitions). We denote by ${\cal Q}$ the set of all quantum distributions. In the restricted case of binary outcomes with uniform marginals, we let ${\cal Q}_0$ be the set of all quantum correlations. The communication complexity of simulating traceless binary measurements on maximally entangled states has been settled by Regev and Toner using two bits of communication, since in this case the marginals are uniform~\cite{rt07}. Their technique also handles general binary measurements on any entangled state, but in this case they only simulate the correlations. The complexity of simulating the full joint distribution exactly when the marginals are non-uniform remains open.
\subsection{Models of communication complexity}
We consider the following model of communication complexity of non-signaling distributions $\mathbf{p}$. Alice gets input $x$, Bob gets input $y$, and after exchanging bits or qubits, Alice has to output $a$ and Bob $b$, so that the joint distribution is $p(a,b|x,y)$. $R_0(\mathbf{p})$ denotes the communication complexity of simulating $\mathbf{p}$ exactly, using private randomness and classical communication. $Q_0(\mathbf{p})$ denotes the communication complexity of simulating $\mathbf{p}$ exactly, using quantum communication. We use superscripts ``$\mathrm{pub}$'' and ``$\mathrm{ent}$'' in the case where the players share random bits or quantum entanglement. For $R_\epsilon(\mathbf{p})$, we are only required to simulate some distribution $\mathbf{p}'$ such that $\delta(\mathbf{p},\mathbf{p}')\leq\epsilon$, where $\delta(\mathbf{p},\mathbf{p}')=\max\{|p(\mathcal{E}|x,y)-p'(\mathcal{E}|x,y)| : (x,y)\in\mathcal{X}\times\mathcal{Y},\ \mathcal{E}\subseteq\mathcal{A}\times\mathcal{B}\}$ is the total variation distance (or statistical distance) between two distributions. For distributions with binary outcomes, we write $R_\epsilon(C,M_A,M_B)$ and $Q_\epsilon(C,M_A,M_B)$.
In the case of Boolean functions, $R_\epsilon(C)=R_\epsilon(C,0,0)$ corresponds to the usual notion of computing $f$ with probability at least $1-\epsilon$, where $C$ is the $\pm 1$ communication matrix of $f$. From the point of view of communication, distributions with uniform marginals are the easiest to simulate. Suppose we have a protocol that simulates correlations $C$ with arbitrary marginals. By using just one additional shared random bit, both players can flip their outcome whenever the shared random bit is 1. Since each player's marginal outcome is now an unbiased coin flip, this protocol simulates the distribution $(C, 0,0)$.
\begin{proposition} \label{prop:uniform-marginals} For any Boolean non-signaling distribution $(C, M_A, M_B)$, we have $R_\epsilon^{\mathrm{pub}}(C, 0, 0)\leq R_\epsilon^{\mathrm{pub}}(C, M_A, M_B)$ and $Q_\epsilon^{\mathrm{ent}}(C, 0, 0)\leq Q_\epsilon^{\mathrm{ent}}(C, M_A, M_B)$. \end{proposition}
\subsection{Characterization of the sets of local and non-signaling distributions} \label{subsec:characterization-classical}
In the quantum information literature, the distributions that can be simulated with shared randomness and no communication (also called a local hidden variable model) are called local distributions.
\noindent
\begin{defn}\label{def:local} {\em Local deterministic distributions} are of the form $p(a,b|x,y)=\delta_{a=\lambda_A(x)} \cdot \delta_{b=\lambda_B(y)}$, where $\lambda_A:\mathcal{X}\rightarrow \mathcal{A}$ and $\lambda_B:\mathcal{Y} \rightarrow \mathcal{B}$, and $\delta$ is the Kronecker delta. A distribution is {\em local} if it can be written as a convex combination of local deterministic distributions. \end{defn}
We denote by $\Lambda$ the index set of the local deterministic distributions $\{{\mathbf p}^{\lambda}\}_{\lambda\in \Lambda}$ and by $\L$ the set of local distributions. Let $\mathsf{conv}(A)$ denote the convex hull of $A$. In the case of binary outcomes, we have
\begin{proposition} $\L = \mathsf{conv}(\{ (uv^T,u,v) : u\in \{\pm1\}^\mathcal{X}, v \in \{\pm1\}^\mathcal{Y}\})$. \end{proposition}
We also denote by~$\L_0$ the set of local correlations over binary outcomes with uniform marginals. The quantum information literature reveals a great deal of insight into the structure of the classical, quantum, and non-signaling distributions. It is well known that $\L$ and ${\cal C}$ are polytopes. While the extremal points of $\L$ are simply the local deterministic distributions, the non-signaling polytope ${\cal C}$ has a more complex structure~\cite{jones05, barrettpironio05}. ${\cal C}_0$ is the convex hull of the distributions obtained from Boolean functions.
\begin{proposition} ${\cal C}_0 = \mathsf{conv}(\{ (C_f,0,0) : C_f\in \{\pm 1\}^{\mathcal{X}\times\mathcal{Y}} \})$. \end{proposition}
We show that ${\cal C}$ is the affine hull of the local polytope (restricted to the positive orthant, since all probabilities $p(a,b|x,y)$ must be nonnegative). We give a simple proof for the case of binary outcomes, but this carries over to the general case. This was shown independently of us, on a few occasions in different communities~\cite{randallfoulis81, foulisrandall81, klayrandallfoulis87, wilce92, barrett07}.
\begin{thm} \label{thm:qlhv} ${\cal C} = \mathsf{aff}^+(\L)$, where $\mathsf{aff}^+(\L)$ is the restriction to the positive orthant of the affine hull of $\L$, and $\dim{\cal C}=\dim\L=|\mathcal{X}|\times|\mathcal{Y}|+|\mathcal{X}|+|\mathcal{Y}|$. \end{thm}
\begin{proof} We show that $\mathsf{aff}({\cal C})=\mathsf{aff}(\L)$. The theorem then follows by restricting to the positive orthant and using the fact that ${\cal C}=\mathsf{aff}^+({\cal C})$. [{\bf $\mathsf{aff}(\L)\subseteq \mathsf{aff}({\cal C})$}] Since any local distribution satisfies the (linear) non-signaling constraints of Def.~\ref{def:non-signaling}, this is also true for any affine combination of local distributions. [{\bf $\mathsf{aff}({\cal C})\subseteq \mathsf{aff}(\L)$}] For any $(\sigma,\pi)\in \mathcal{X}\times \mathcal{Y}$, we define the distribution $\mathbf{p}_{\sigma\pi}=(C_{\sigma\pi},u_{\sigma\pi},v_{\sigma\pi})$ with correlations $C_{\sigma\pi}(x,y)=\delta_{x=\sigma}\delta_{y=\pi}$ and marginals $u_{\sigma\pi}(x)=0$, $v_{\sigma\pi}(y)=0$. Similarly, we define for any $\sigma\in \mathcal{X}$ the distribution $\mathbf{p}_{\sigma\cdot}=(C_{\sigma\cdot},u_{\sigma\cdot},v_{\sigma\cdot})$ with $C_{\sigma\cdot}(x,y)=0$, $u_{\sigma\cdot}(x)=\delta_{x=\sigma}$, $v_{\sigma\cdot}(y)=0$, and for any $\pi\in \mathcal{Y}$ the distribution $\mathbf{p}_{\cdot\pi}=(C_{\cdot\pi},u_{\cdot\pi},v_{\cdot\pi})$ with $C_{\cdot\pi}(x,y)=0$, $u_{\cdot\pi}(x)=0$, $v_{\cdot\pi}(y)=\delta_{y=\pi}$. It is straightforward to check that these $|\mathcal{X}|\times|\mathcal{Y}|+|\mathcal{X}|+|\mathcal{Y}|$ distributions are local, and that they constitute a basis for the vector space embedding $\mathsf{aff}({\cal C})$, which consists of vectors of the form $(C,u,v)$. \end{proof}
This implies that while local distributions are \emph{convex} combinations of the local deterministic distributions ${\mathbf p}^\lambda$, non-signaling distributions are \emph{affine} combinations of these distributions.
\begin{cor}[Affine model] A distribution ${\mathbf p}\in{\cal C}$ if and only if there exist $q_\lambda\in\mathbb R$ with ${\mathbf p} =\sum_{\lambda \in \Lambda} q_\lambda {\mathbf p}^\lambda$. \end{cor}
Note that since ${\mathbf p}$ is a distribution, this implies $\sum_{\lambda \in \Lambda} q_\lambda =1$. Since the weights in an affine combination may be negative but still sum up to one, this may be interpreted as a \emph{quasi-mixture} of local distributions, some distributions being used with possibly ``negative probability''. Surprisingly, this is not a new notion; see for example Groenewold~\cite{groenewold85}, who gave an affine model for quantum distributions, or a discussion of ``negative probability'' by Feynman~\cite{feynman86}.
\subsection{Characterization of the set of quantum distributions} \label{subsec:characterization-quantum}
As for the set of quantum distributions ${\cal Q}$, it is known to be convex, but it is not a polytope. Although no simple characterization of ${\cal Q}$ is known, Navascues, Pironio and Acin have given a characterization of a hierarchy of sets $\{{\cal Q}^n: n\in\mathbb{N}_0\}$ such that ${\cal Q}^n\subseteq {\cal Q}^{n-1}$ and ${\cal Q}^n\to {\cal Q}$ as $n\to\infty$~\cite{npa08}. We briefly introduce this hierarchy because it will be useful in Section~\ref{sec:dual}, but we refer the reader to~\cite{npa08} for full details.
Let $\mathcal{S}_n$ be the set of all monomials of degree up to $n$ in the measurement operators $E_a(x)$ and $E_b(y)$ (for example, $\mathbbm 1$, $E_a(x)$ and $E_a(x)E_{a'}(x)E_b(y)$ are monomials of degree 0, 1 and 3, respectively). Due to the conditions in Definition~\ref{def:quantum-distribution}, the operators in $\mathcal{S}_n$ (and their Hermitian conjugates) satisfy linear equations such as $E_a(x)^\dagger-E_a(x)=0$, $\sum_a E_a(x)-\mathbbm 1=0$, or higher order equations such as $E_a(x)^\dagger E_a(x)E_b(y)-E_b(y)^\dagger E_a(x)=0$. Let us suppose that we have $m(n)$ linearly independent equations for the operators in $\mathcal{S}_n$. These equations may be written as $\sum_{S,T\in\mathcal{S}_n} (F_k)_{S,T} S^\dagger T=0$, where, for all $k\in[m(n)]$, $F_k$ is a matrix whose rows and columns are labelled by the elements of $\mathcal{S}_n$. We are now ready to define the set of distributions ${\cal Q}^n$. \begin{defn}[Quantum hierarchy] A distribution $\mathbf{p}$ is in ${\cal Q}^n$ if and only if there exists a positive-semidefinite matrix $\Gamma\succcurlyeq 0$, whose rows and columns are labelled by the elements of $\mathcal{S}_n$, satisfying \begin{enumerate} \item $\Gamma_{\mathbbm 1,\mathbbm 1}=1$, \item $\Gamma_{E_a(x),E_b(y)}=p(a,b|x,y)$ for all $(a,b,x,y)\in\mathcal{A}\times\mathcal{B}\times\mathcal{X}\times\mathcal{Y}$, \item ${\rm tr}(F_k ^\dagger\Gamma)=0$ for all $k\in[m(n)]$. \end{enumerate} \end{defn} Comparing with Definition~\ref{def:quantum-distribution} for ${\cal Q}$, we immediately get that ${\cal Q}\subseteq{\cal Q}^n$ by setting $\Gamma_{S,T}=\bra{\psi}S^\dagger T\ket{\psi}$. The proof that ${\cal Q}^n$ converges to ${\cal Q}$ is much more involved and is given in~\cite{npa08}. In the special case of binary outcomes with uniform marginals, the hierarchy collapses at the first level, that is, ${\cal Q}_0^1={\cal Q}_0$. This was known before the hierarchy was introduced, as a consequence of the following theorem of Tsirelson. \begin{thm}[\cite{tsi85}]\label{thm:tsirelson} Let $\mathbb{S}_n$ be the set of unit vectors in $\mathbb{R}^{n}$, and $\mathcal{H}^d$ be a $d$-dimensional Hilbert space. \noindent \begin{enumerate} \item If $(C,M_A,M_B)\in{\cal Q}$ is a probability distribution obtained by performing binary measurements on a quantum state $\ket{\psi} \in \mathcal{H}^d \otimes \mathcal{H}^d$, then there exist vectors $\vec{a}(x),\vec{b}(y)\in\mathbb{S}_{2d^2}$ such that $C(x,y)=\vec{a}(x)\cdot\vec{b}(y)$. \item If $\vec{a}(x),\vec{b}(y)$ are unit vectors in $\mathbb{S}_n$, then there exists a probability distribution $(C,0,0)\in{\cal Q}$ obtained by performing binary measurements on a maximally entangled state $\ket{\psi} \in \mathcal{H}^{2^{\floor{\frac{n}{2}}}} \otimes \mathcal{H}^{2^{\floor{\frac{n}{2}}}}$ such that $C(x,y)=\vec{a}(x)\cdot\vec{b}(y)$. \end{enumerate} \end{thm}
\begin{cor} ${\cal Q}_0 = \{ C : C(x,y)= \vec{a}(x)\cdot\vec{b}(y), \norm{\vec{a}(x)}=\norm{\vec{b}(y)} = 1 \,\forall x,y\}$. \end{cor} Clearly, $\L\subseteq {\cal Q} \subseteq {\cal C}$. The existence of Grothendieck's constant (see e.g.~\cite{an06}) implies the following statement. \noindent \begin{proposition} \label{prop:grothendieck} $\L_0 \subseteq {\cal Q}_0 \subseteq K_G \L_0$, where $K_G$ is Grothendieck's constant. \end{proposition} \section{Lower bounds for non-signaling distributions}\label{sec:lower-bounds} We extend Linial and Shraibman's factorization norm ($\gamma_2$) and nuclear norm ($\nu$) lower bound methods~\cite{ls07} to the simulation of arbitrary non-signaling distributions. The proof we give is simple, especially in the setting studied by Linial and Shraibman, that of Boolean functions, which corresponds in our setting to binary outputs and uniform marginal distributions. The main intuition is that $c$ bits of communication can increase correlations by at most a factor of~$2^c$. \subsection{Communication vs scaled-down distribution} We first show that if a distribution $\mathbf{p}$ may be simulated with $t$ bits of communication (or $q$ qubits of quantum communication), then a scaled-down version of this distribution is local (or quantum). From this local (or quantum) distribution, we derive an affine model for $\mathbf{p}$ (Theorem~\ref{thm:quasi-lhv}), which gives the lower bound on communication. \begin{lemma} \label{lm:decreased-correlations-pi} Let $\mathbf p$ be a non-signaling distribution over $\mathcal A \times \mathcal B$ with input set $\mathcal X \times \mathcal Y$. \begin{enumerate} \item Assume that $R_0^{\mathrm{pub}}(\mathbf p) \leq t$. Then there exist two marginal distributions $p_A(a|x)$ and $p_B(b|y)$ such that the distribution $ p_l(a,b|x,y)=\frac{1}{2^t}p(a,b|x,y)+(1-\frac{1}{2^t})p_A(a|x)p_B(b|y) $ is local. \item Assume that $Q_0^{\mathrm{ent}}(\mathbf p)\leq q$. Then there exist two marginal distributions $p_A(a|x)$ and $p_B(b|y)$ such that the distribution $ p_l(a,b|x,y)=\frac{1}{2^{2q}}p(a,b|x,y)+(1-\frac{1}{2^{2q}})p_A(a|x)p_B(b|y) $ is quantum. \item Assume that $\mathbf p=(C,0,0)$ and $Q_0^{\mathrm{ent}}(C) \leq q$. Then $C / {2^q} \in {\cal Q}_0$. \end{enumerate} \end{lemma} \begin{proof} We assume that the length of the transcript is exactly $t$ bits for each execution of the protocol, adding dummy bits if necessary. We now fix some notation. In the original protocol, the players pick a random string $\lambda$ and exchange some communication whose transcript is denoted $T(x,y,\lambda)$. Alice then outputs some value $a$ according to a probability distribution $p_P(a|x, \lambda, T)$.
Similarly, Bob outputs some value $b$ according to a probability distribution $p_P(b|y, \lambda, T)$. From Alice's point of view, on input $x$ and shared randomness $\lambda$, only a subset of the set of all $t$-bit transcripts can be produced: the transcripts $S\in\{0,1\}^t$ for which there exists a $y$ such that $S=T(x,y,\lambda)$. We will call these transcripts the set of valid transcripts for $(x,\lambda)$. The set of valid transcripts for Bob is defined similarly. We denote these sets respectively $U_{x,\lambda}$ and $V_{y,\lambda}$. We now define a local protocol for the distribution $p_l(a,b|x,y)$: \begin{itemize} \item As in the original protocol, Alice and Bob initially share some random string $\lambda$. \item Using additional shared randomness, Alice and Bob choose a transcript $T$ uniformly at random in $\{0,1\}^t$. \item If $T$ is a valid transcript for $(x,\lambda)$, Alice outputs $a$ according to the distribution $p_P(a|x,\lambda,T)$. If it is not, she outputs $a$ according to a distribution $p_A(a|x)$ which we will define later. \item Bob does the same. We will also define the distribution $p_B(b|y)$ later. \end{itemize} Let $\mu$ be the distribution over the randomness and the $t$-bit strings in the local protocol. By definition, the distribution produced by this protocol is \begin{eqnarray*} p_l(a,b|x,y)&=& \sum_\lambda \mu(\lambda) \left[ \sum_{T\in U_{x,\lambda} \cap V_{y,\lambda}} \mu(T) p_P(a|x,\lambda, T) p_P(b|y,\lambda, T) + p_B(b|y) \sum_{T\in U_{x,\lambda} \cap \bar V_{y,\lambda}} \mu(T) p_P(a|x,\lambda, T) \right.\\ &+&\left. p_A(a|x) \sum_{T\in \bar U_{x,\lambda} \cap V_{y,\lambda}} \mu(T) p_P(b|y,\lambda, T) + p_B(b|y) p_A(a|x) \sum_{T\in \bar U_{x,\lambda} \cap \bar V_{y,\lambda}} \mu(T) \right] \end{eqnarray*} We now analyze each term separately. For fixed inputs $x,y$ and shared randomness $\lambda$, there is exactly one transcript, namely $T(x,y,\lambda)$, which is valid for both Alice and Bob, and when this transcript is chosen they output according to the distribution $\mathbf p$. Therefore, we have $$ \sum_\lambda \mu(\lambda) \sum_{T\in U_{x,\lambda} \cap V_{y,\lambda}} \mu(T) p_P(a|x,\lambda, T) p_P(b|y,\lambda, T) = \frac 1 {2^t} p(a,b|x,y).$$ Let $A_x$ be the event that Alice's transcript is valid for $x$ (over random $\lambda,T$), and $\bar A_x$ its negation (similarly $B_y$ and $\bar B_y$ for Bob). We denote $$p_P(a|x, A_x \cap \bar B_y)= \frac {\sum_\lambda \mu(\lambda) \sum_{T \in U_{x,\lambda} \cap \bar V_{y,\lambda}} \mu(T) p_P(a|x, \lambda, T)} {\mu(A_x \cap \bar B_y)},$$ where, by definition, we have $\mu(A_x\cap \bar B_y)=\sum_\lambda \mu(\lambda) \sum_{T\in U_{x,\lambda} \cap \bar V_{y,\lambda}} \mu(T)$. We will show that this distribution is independent of $y$ and that the corresponding distribution $p_P(b|y, \bar A_x \cap B_y)$ for Bob is independent of $x$.
Using these distributions, we may write $p_l(a,b|x,y)$ as \begin{eqnarray*} p_l(a,b|x,y)&=& \frac 1 {2^t} p(a,b|x,y) + \mu(A_x \cap \bar B_y) p_B(b|y) p_P(a|x,A_x\cap \bar B_y)\\ &+& \mu(\bar A_x \cap B_y) p_A(a|x) p_P(b|y,\bar A_x\cap B_y) + \mu(\bar A_x \cap \bar B_y) p_B(b|y) p_A(a|x). \end{eqnarray*} Summing over $b$, and using the fact that $\mathbf p_l$ and $\mathbf p$ are non-signaling, we have \begin{eqnarray*} p_l(a|x)&=&\frac{1}{2^t}p(a|x)+\mu(A_x\cap \bar{B}_y) p_P(a|x, A_x\cap \bar{B}_y)\\ &+&\mu(\bar{A}_x\cap {B}_y) p_A(a|x)+\mu(\bar{A}_x\cap \bar{B}_y)p_A(a|x)\\ &=&\frac{1}{2^t}p(a|x)+\mu(A_x \cap \bar{B}_y) p_P(a|x, A_x \cap \bar{B}_y)+\mu(\bar{A}_x)p_A(a|x). \end{eqnarray*} Note that by definition, $\mu(A_x)=\sum_\lambda \mu(\lambda) \sum_{T\in U_{x,\lambda}} \mu(T)$ is independent of $y$, and therefore so is $\mu(A_x \cap \bar B_y) = \mu(A_x) - \mu(A_x \cap B_y)= \mu(A_x) -\frac 1 {2^t}$. From the expression for $p_l(a|x)$, we can conclude that $p_P(a|x,A_x \cap \bar B_y)$ is independent of $y$ and can be evaluated by Alice (and similarly for the analogous distribution for Bob). We now set \begin{eqnarray*} p_A(a|x)&=&p_P(a|x,A_x\cap \bar{B}_y),\\ p_B(b|y)&=&p_P(b|y, \bar{A}_x\cap {B}_y). \end{eqnarray*} Therefore, the final distribution obtained from the local protocol may be written as \begin{eqnarray*} p_l(a,b|x,y)&=&\frac{1}{2^t}p(a,b|x,y)+\mu(A_x\cap \bar{B}_y)p_A(a|x)p_B(b|y)\\ &+& \mu(\bar{A}_x\cap {B}_y)p_A(a|x)p_B(b|y)+\mu(\bar{A}_x\cap\bar{B}_y)p_A(a|x)p_B(b|y)\\ &=&\frac{1}{2^t}p(a,b|x,y)+(1-\frac{1}{2^t})p_A(a|x)p_B(b|y). \end{eqnarray*} For quantum protocols, we first simulate quantum communication using shared entanglement and teleportation, which uses 2 bits of classical communication for each qubit. Starting with this protocol using $2q$ bits of classical communication, we may use the same idea as in the classical case, that is, choosing a random $2q$-bit string interpreted as the transcript, and replacing the players' respective outputs by independent random outputs chosen according to $p_A$ and $p_B$ if the random transcript does not match the bits they would have sent in the original protocol. In the case of binary outputs with uniform marginals, that is, $\mathbf{p}=(C,0,0)$, we may improve the exponent of the scaling-down coefficient $2^{2q}$ by a factor of $2$ using a more involved analysis and a variation of a result from~\cite{kremer, yao, ls07} (the proof is given in Appendix~\ref{appendix:quantum-communication} for completeness). \begin{lemma}[\cite{kremer, yao, ls07}] \label{lemma:quantum-communication} Let $(C, M_A, M_B)$ be a distribution simulated by a quantum protocol with shared entanglement using $q_A$ qubits of communication from Alice to Bob and $q_B$ qubits from Bob to Alice. There exist vectors $\vec{a}(x),\vec{b}(y)$ with $\norm{\vec{a}(x)}\leq 2^{q_B}$ and $\norm{\vec{b}(y)} \leq 2^{q_A}$ such that $C(x,y) = \vec{a}(x)\cdot\vec{b}(y)$. \end{lemma} The fact that $C/2^q \in{\cal Q}_0$ then follows from Theorem~\ref{thm:tsirelson}, part~2. \end{proof} \subsection{Communication vs affine models} By Theorem~\ref{thm:qlhv}, we know that any non-signaling distribution can be written as an affine combination of local distributions, which we call an affine model.
In this section, we show that, using Lemma~\ref{lm:decreased-correlations-pi}, an explicit affine model can be derived from a (classical or quantum) communication protocol for $\mathbf{p}$, which gives us a lower bound technique for communication complexity in terms of how ``good'' the affine model is. Let us define the following quantities, which, as we will see, may be considered as extensions of the $\nu$ and $\gamma_2$ quantities of~\cite{ls07} (defined below) to distributions. \begin{defn}\label{def:lp-quasi-probas} \begin{myitemize} \item $\tilde{\nu}(\mathbf{p}) = \min \{ \sum_i \abs{q_i} : \exists \mathbf{p}_i\in\L,q_i\in \mathbb R, \mathbf{p} = \sum_i q_i \mathbf{p}_i \} $, \item $\tilde{\gamma}_2(\mathbf{p}) = \min \{ \sum_i\abs{q_i} : \exists \mathbf{p}_i\in {\cal Q}, q_i\in \mathbb R, \mathbf{p} = \sum_i q_i \mathbf{p}_i \} $, \item $\tilde{\nu}^\epsilon(\mathbf{p}) = \min \{ \tilde{\nu}(\mathbf{p}') : \delta(\mathbf{p},\mathbf{p}') \leq \epsilon \} $, \item $\tilde{\gamma}_2^\epsilon(\mathbf{p}) = \min \{ \tilde{\gamma}_2(\mathbf{p}') : \delta(\mathbf{p},\mathbf{p}')\leq \epsilon \} $. \end{myitemize} \end{defn} The quantities $\tilde{\nu}(\mathbf{p})$ and $\tilde{\gamma}_2(\mathbf{p})$ measure how well $\mathbf{p}$ may be represented as an affine combination of local or quantum distributions, a \emph{good} affine combination being one where the sum of the absolute values of the coefficients $q_i$ is as low as possible. For a local distribution, we may take all coefficients $q_i$ positive, and therefore obtain the minimum possible value $\tilde{\nu}(\mathbf{p})=1$ (note that $\sum_i q_i \mathbf{p}_i=\mathbf{p}$ implies in particular $\sum_i q_i=1$), and similarly for quantum distributions, so that \begin{lemma}\label{lem:charact-locality-quanticity} $\mathbf{p}\in\L\Longleftrightarrow \tilde{\nu}(\mathbf{p})=1$, and $\mathbf{p}\in{\cal Q}\Longleftrightarrow \tilde{\gamma}_2(\mathbf{p})=1$. \end{lemma} In other words, the set of local distributions $\L$ forms the unit sphere of $\tilde{\nu}$, and similarly the set of quantum distributions ${\cal Q}$ forms the unit sphere of $\tilde{\gamma}_2$. In the binary case, observe that by Proposition~\ref{prop:uniform-marginals}, we have $\tilde{\gamma}_2(C) \leq \tilde{\gamma}_2(C,u,v)$ and $\tilde{\nu}(C) \leq \tilde{\nu}(C,u,v)$. By Proposition~\ref{prop:grothendieck}, $\tilde{\gamma}_2(C) \leq \tilde{\nu}(C) \leq K_G \tilde{\gamma}_2(C)$. Similar properties hold for the approximate versions $\tilde{\nu}^\epsilon(C)$ and $\tilde{\gamma}_2^\epsilon(C)$. We have shown (Lemma~\ref{lm:decreased-correlations-pi}) that distributions scaled down exponentially in the communication are local; from these local protocols we can build an affine model for the original distribution, in order to establish the lower bound. \begin{thm}\label{thm:quasi-lhv} Let $\mathbf{p}$ be a non-signaling distribution over $\mathcal{A}\times\mathcal{B}$ with input set $\mathcal{X}\times\mathcal{Y}$, and let $C:\mathcal{X}\times\mathcal{Y}\to[-1, 1]$ be a correlation matrix. \begin{enumerate} \item If $R_0^{\mathrm{pub}}(\mathbf{p})\leq t$, then $\tilde{\nu}(\mathbf{p}) \leq 2^{t+1}-1.$ \item If $R_0^{\mathrm{pub}}(C)\leq t$, then $\tilde{\nu}(C)\leq 2^{t}$. \item If $Q_0^{\mathrm{ent}}(\mathbf{p})\leq q$, then $\tilde{\gamma}_2(\mathbf{p}) \leq 2^{2q+1}-1.$ \item If $Q_0^{\mathrm{ent}}(C)\leq q$, then $\tilde{\gamma}_2(C)\leq 2^{q}$. \end{enumerate} \end{thm} \begin{proof} We give a proof for the classical case; the quantum case follows by using teleportation.
Let $t$ be the number of bits exchanged. From Lemma~\ref{lm:decreased-correlations-pi}, we know that there exist marginal distributions $p_A(a|x)$ and $p_B(b|y)$ such that $p_l(a,b|x,y)=\frac{1}{2^t}p(a,b|x,y)+(1-\frac{1}{2^t})p_A(a|x)p_B(b|y)$ is local. This gives an affine model for $p(a,b|x,y)$, as the following combination of two local distributions: $$p(a,b|x,y)=2^t p_l(a,b|x,y) + (1-2^t) p_A(a|x) p_B(b|y).$$ Then $\tilde{\nu}(\mathbf{p}) \leq 2^{t+1} -1$. In the case of binary outputs with uniform marginals, $\mathbf{p}_l=( C/ 2^t,0,0)$, and Lemma~\ref{lm:decreased-correlations-pi} implies that $ C/2^t \in \L_0$. By following the local protocol for $ C/2^t$ and letting Alice flip her output, we also get a local protocol for $- C/2^t$, so $-C/2^t \in \L_0$ as well. Notice that we may build an affine model for $C$ as a combination of $C/2^t$ and $-C/2^t$: $$C = \frac{2^t+1}{2}\cdot\frac{C}{2^t} - \frac{2^t-1}{2}\cdot\left(-\frac{C}{2^t}\right).$$ Then, $\tilde{\nu}(C) \leq {2^t}$. \end{proof} This implies the following lower bounds on classical and quantum communication complexity: \begin{cor}\label{cor:lower-bound-quasi} For any non-signaling distribution $\mathbf{p}$ and correlation matrix $C$, \begin{enumerate} \item $R_0^{\mathrm{pub}}(\mathbf{p}) \geq \log(\tilde{\nu}(\mathbf{p}))-1$, and $R_\epsilon^{\mathrm{pub}}(\mathbf{p})\geq \log(\tilde{\nu}^{\epsilon}(\mathbf{p}))-1$. \item $Q_0^{\mathrm{ent}}(\mathbf{p}) \geq \frac{1}{2}\log(\tilde{\gamma}_2(\mathbf{p}))-1$, and $Q_\epsilon^{\mathrm{ent}}(\mathbf{p})\geq \frac{1}{2}\log(\tilde{\gamma}_2^{\epsilon}(\mathbf{p}))-1$. \item $Q_0^{\mathrm{ent}}(C) \geq \log(\tilde{\gamma}_2(C))$, and $Q_\epsilon^{\mathrm{ent}}(C)\geq \log(\tilde{\gamma}_2^{\epsilon}(C))$. \end{enumerate} \end{cor} \subsection{Factorization norm and related measures} In the special case of distributions over binary variables with uniform marginals, the quantities~$\tilde{\nu}$ and~$\tilde{\gamma}_2$ become equivalent to the original quantities defined in~\cite{lmss07,ls07} (at least for the interesting case of non-local correlations, that is, correlations with non-zero communication complexity). When the marginals are uniform, we omit them and write $\tilde{\nu}(C)$ and $\tilde{\gamma}_2(C)$. The following are reformulations, as Minkowski functionals, of the definitions appearing in~\cite{lmss07,ls07}. \begin{defn} \begin{itemize} \item $\nu(C) = \min \{ \Lambda>0 :\frac{1}{\Lambda}C \in \L_0\} $, \item $\gamma_2(C) = \min \{ \Lambda>0 :\frac{1}{\Lambda}C \in {\cal Q}_0\} $, \item $\nu^\alpha(C) = \min \{ \nu(C'): 1\leq C(x,y)C'(x,y)\leq \alpha,\ \forall x,y\in\mathcal{X}\times\mathcal{Y}\} $, \item $\gamma_2^\alpha(C) = \min \{ \gamma_2(C'): 1\leq C(x,y)C'(x,y)\leq \alpha,\ \forall x,y\in\mathcal{X}\times\mathcal{Y}\} $. \end{itemize} \end{defn} \begin{lemma} \label{lemma:cmargvscquasi} For any correlation matrix $C:\mathcal{X}\times \mathcal{Y} \rightarrow [-1,1]$, \begin{enumerate} \item $\tilde{\nu}(C) =1$ iff $\nu(C)\leq 1$, and $\tilde{\gamma}_2(C) =1$ iff $\gamma_2(C)\leq 1$, \item $\tilde{\nu}(C)>1\Longrightarrow \nu(C)=\tilde{\nu}(C)$, \item $\tilde{\gamma}_2(C)>1\Longrightarrow \gamma_2(C)=\tilde{\gamma}_2(C)$. \end{enumerate} \end{lemma} \begin{proof} The first item follows by definition of $\nu$ and $\gamma_2$. For the next items, we give the proof for $\nu$; the proof for $\gamma_2$ is similar. The key to the proof is that if $C\in\L_0$, then $-C\in\L_0$ (it suffices for one of the players to flip his output).
[$\tilde{\nu}(C)\leq \nu(C)$] If $\tilde{\nu}(C)>1$, then $\Lambda=\nu(C) > 1$. Let $C^+=\frac{C}{\Lambda}$ and $C^-=-\frac{C}{\Lambda}$. By definition of $\nu(C)$, both $C^+$ and $C^-$ are in $\L_0$. Furthermore, let $q_+=\frac{1+\Lambda}{2}\geq 0$ and $q_-=\frac{1-\Lambda}{2}\leq 0$. Since $C=q_+C^++q_-C^-$, this determines an affine model for $C$ with $|q_+|+|q_-|=\Lambda$, so that $\tilde{\nu}(C)\leq\nu(C)$. [$\tilde{\nu}(C)\geq \nu(C)$] Let $\Lambda=\tilde{\nu}(C)$. By definition of $\tilde{\nu}(C)$, there exist $C_i$ and $q_i$ such that $C=\sum_i q_i C_i$ and $\Lambda=\sum_i|q_i|$. Let $\tilde{C}_i=\mathrm{sgn}(q_i)C_i$ and $p_i=\frac{|q_i|}{\Lambda}$. Then $\frac{C}{\Lambda}=\sum_i p_i \tilde{C}_i$, and therefore $\frac{1}{\Lambda}C\in \L_0$ since $\tilde{C}_i\in \L_0$, so that $\nu(C)\leq\Lambda=\tilde{\nu}(C)$. \end{proof} In the special case of sign matrices (corresponding to Boolean functions, as shown above), we also have the following correspondence between $\tilde{\nu}^\epsilon,\tilde{\gamma}_2^\epsilon$, and $\nu^\alpha,\gamma_2^\alpha$. \begin{lemma} \label{lem:epsilon-alpha} Let $0\leq\epsilon< 1/2$ and $\alpha=\frac{1}{1-2\epsilon}$. For any sign matrix $C:\mathcal{X}\times \mathcal{Y} \rightarrow \{-1,1\}$, \begin{enumerate} \item $\tilde{\nu}^\epsilon(C)>1\Longrightarrow \nu^\alpha(C)=\frac{\tilde{\nu}^\epsilon(C)}{1-2\epsilon}$, \item $\tilde{\gamma}_2^\epsilon(C)>1\Longrightarrow \gamma_2^\alpha(C)=\frac{\tilde{\gamma}_2^\epsilon(C)}{1-2\epsilon}$. \end{enumerate} \end{lemma} \begin{proof} We give the proof for $\nu^\alpha$; the proof for $\gamma_2^\alpha$ is similar. [$\nu^\alpha(C)\leq\frac{\tilde{\nu}^\epsilon(C)}{1-2\epsilon}$] By definition of $\tilde{\nu}^\epsilon(C)$, there exists a correlation matrix $C'$ such that $\tilde{\nu}(C')=\tilde{\nu}^\epsilon(C)$ and $|C(x,y)-C'(x,y)|\leq 2\epsilon$ for all $x,y\in\mathcal{X}\times\mathcal{Y}$. Since $C$ is a sign matrix and $C'$ is a correlation matrix, $\mathrm{sgn}(C'(x,y))=C(x,y)$ and $1-2\epsilon \leq |C'(x,y)|\leq 1$. Hence $1\leq C(x,y)\frac{C'(x,y)}{1-2\epsilon}\leq\frac{1}{1-2\epsilon}= \alpha$. This implies that $\nu^\alpha(C)\leq\nu(\frac{C'}{1-2\epsilon})= \frac{\nu(C')}{1-2\epsilon}=\frac{\tilde{\nu}(C')}{1-2\epsilon}$, where we used the fact that $\nu(C')=\tilde{\nu}(C')$ since $\tilde{\nu}(C')>1$. [$\nu^\alpha(C)\geq\frac{\tilde{\nu}^\epsilon(C)}{1-2\epsilon}$] By definition of $\nu^\alpha(C)$, there exists a (not necessarily correlation) matrix $C'$ such that $\nu(C')=\nu^\alpha(C)$ and $1\leq C(x,y)C'(x,y)\leq \alpha$ for all $x,y$. Since $C$ is a sign matrix, this implies $\mathrm{sgn}(C'(x,y))=C(x,y)$ and $1-2\epsilon \leq |\frac{C'(x,y)}{\alpha}|\leq 1$. Therefore, $|C(x,y)-\frac{C'(x,y)}{\alpha}|\leq 2\epsilon$ for all $x,y$. This implies that $\tilde{\nu}^\epsilon(C)\leq\tilde{\nu}(\frac{C'}{\alpha}) =\nu(\frac{C'}{\alpha})=(1-2\epsilon)\nu(C')$, where we have used the fact that $\tilde{\nu}(\frac{C'}{\alpha}) =\nu(\frac{C'}{\alpha})$ since $\tilde{\nu}(\frac{C'}{\alpha})\geq\tilde{\nu}^\epsilon(C)>1$. \end{proof} Just as its special case $\nu(C)$, the quantity $\tilde{\nu}(\mathbf{p})$ may be expressed as a linear program. However, while $\gamma_2(C)$ can be expressed as a semidefinite program, this may not be true in general for $\tilde{\gamma}_2(\mathbf{p})$. Nevertheless, using the hierarchy $\{{\cal Q}^n: n\in\mathbb{N}_0\}$ introduced in~\cite{npa08}, it admits SDP relaxations $\{\tilde{\gamma}_2^n(\mathbf{p}): n\in\mathbb{N}_0\}$.
\begin{defn}\label{def:sdp-relaxation} $\tilde{\gamma}_2^n(\mathbf{p}) = \min \{ \sum_i\abs{q_i} : \exists \mathbf{p}_i\in {\cal Q}^n, q_i\in \mathbb R, \mathbf{p} = \sum_i q_i \mathbf{p}_i \} $. \end{defn} The fact that ${\cal Q}^n\subseteq {\cal Q}^{n-1}$ implies $\tilde{\gamma}_2^n(\mathbf{p})\geq \tilde{\gamma}_2^{n-1}(\mathbf{p})$, and by continuity of the minimization function, $\tilde{\gamma}_2^n(\mathbf{p})\to\tilde{\gamma}_2(\mathbf{p})$ as $n\to\infty$. Lemmas~\ref{lemma:cmargvscquasi} and~\ref{lem:epsilon-alpha} establish that Corollary~\ref{cor:lower-bound-quasi} is a generalization of Linial and Shraibman's factorization norm lower bound technique. Note that Linial and Shraibman use $\gamma_2^\alpha$ to derive a lower bound not only on the quantum communication complexity $Q_\epsilon^{\mathrm{ent}}$, but also on the classical complexity $R_\epsilon^{\mathrm{pub}}$. In the case of binary outcomes with uniform marginals (which includes Boolean functions, studied by Linial and Shraibman, as a special case), we obtain a similar result by combining our bound for $Q_\epsilon^{\mathrm{ent}}(C)$ with the fact that $Q_\epsilon^{\mathrm{ent}}(C)\leq \lceil{\smfrac{1}{2}R_\epsilon^{\mathrm{pub}}(C)}\rceil$, which follows from superdense coding. This implies $R_\epsilon^{\mathrm{pub}}(C) \geq 2\log(\tilde{\gamma}_2^{\epsilon}(C))-1$. In the general case, however, we can only prove that $R_\epsilon^{\mathrm{pub}}(\mathbf{p}) \geq \log(\tilde{\gamma}_2^{\epsilon}(\mathbf{p}))-1$. This may be due to the fact that the result holds in the much more general setting of non-signaling distributions with arbitrary outcomes and marginals. Because of Proposition~\ref{prop:grothendieck}, we know that $\nu(C) \leq K_G \gamma_2(C)$ for correlations. Note also that although $\gamma_2$ and $\nu$ are matrix norms, this fails to be the case for $\tilde{\gamma}_2$ and $\tilde{\nu}$, even in the case of correlations. Nevertheless, it is still possible to formulate dual quantities, which turn out to have sufficient structure, as we show in the next section.
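Before turning to the dual formulation, we note that the linear program defining $\tilde{\nu}$ is easily evaluated numerically on small instances. The following minimal sketch (our illustration only, assuming Python with \texttt{numpy} and \texttt{scipy}) computes $\nu(C)$, which by Lemma~\ref{lemma:cmargvscquasi} coincides with $\tilde{\nu}(C)$ whenever the value exceeds $1$, by minimizing $\sum_i |q_i|$ over affine combinations of the deterministic correlation matrices $u^Tv$. On the PR-box correlations it returns $2$, so that Theorem~\ref{thm:quasi-lhv} certifies that one bit of communication is necessary (and, in this case, also sufficient) to simulate the PR box exactly.
\begin{verbatim}
import numpy as np
from itertools import product
from scipy.optimize import linprog

def nu(C):
    # nu(C) = min sum_i |q_i|  subject to  sum_i q_i u_i^T v_i = C,
    # over all deterministic sign vectors u_i, v_i (binary outcomes,
    # uniform marginals).
    nx, ny = C.shape
    D = np.array([np.outer(u, v).ravel()
                  for u in product([-1, 1], repeat=nx)
                  for v in product([-1, 1], repeat=ny)]).T
    k = D.shape[1]
    # Split q = qp - qm with qp, qm >= 0 and minimize sum(qp) + sum(qm).
    res = linprog(np.ones(2 * k), A_eq=np.hstack([D, -D]),
                  b_eq=C.ravel().astype(float), bounds=(0, None))
    return res.fun

C_PR = np.array([[1, 1], [1, -1]])  # PR box: C(x,y) = (-1)^{xy}
print(nu(C_PR))                     # 2.0, hence R_0(C_PR) >= 1 bit
\end{verbatim}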
\section{Duality, Bell inequalities, and XOR games}\label{sec:dual} In their primal formulation, the $\tilde{\gamma}_2$ and $\tilde{\nu}$ methods are difficult to apply since they are formulated as minimization problems. Passing to the dual space not only turns the method into a maximization problem; it also gives it a very natural, well-understood interpretation, since the dual quantities coincide with maximal violations of Bell and Tsirelson inequalities. This is particularly relevant to physics, since it formalizes in very precise terms the intuition that distributions with large Bell inequality violations should require more communication to simulate. Recall that for any norm $\norm{\cdot}$ on a vector space $V$, the dual norm is $\norm{B}^*= \max_{v\in V:\norm{v}\leq 1} B(v)$, where $B$ is a linear functional on $V$. \subsection{Bell and Tsirelson inequalities} Bell inequalities were first introduced by Bell~\cite{bell64} as bounds on the correlations that can be achieved by any \emph{local} physical theory. He showed that quantum correlations can violate these inequalities and therefore exhibit non-locality. Tsirelson later proved that quantum correlations must also satisfy some bound (now known as the Tsirelson bound), giving a first example of a ``Tsirelson-like'' inequality for quantum distributions~\cite{tsirelson80}. Since the set of non-signaling distributions ${\cal C}$ lies in an affine space $\mathsf{aff}({\cal C})$, we may consider the dual space of linear functionals over this space. The dual quantity $\tilde{\nu}^*$ (technically not a dual norm, since $\tilde{\nu}$ itself is not a norm in the general case) is the maximum value of a linear functional in the dual space over local distributions, and $\tilde{\gamma}_2^*$ is the maximum value of a linear functional over quantum distributions. These are exactly what is captured by Bell and Tsirelson inequalities. \begin{defn}[Bell and Tsirelson inequalities] \label{defn:bell} Let $B:\mathsf{aff}({\cal C})\to\mathbb R$ be a linear functional on the (affine hull of the) set of non-signaling distributions, $B(\mathbf{p})=\sum_{a,b,x,y} B_{abxy} p(a,b|x,y)$.
Define $\tilde{\nu}^*(B)=\max_{\mathbf{p}\in\L}B(\mathbf{p})$ and $\tilde{\gamma}_2^*(B)=\max_{\mathbf{p}\in{\cal Q}}B(\mathbf{p})$. A Bell inequality is a linear inequality satisfied by any local distribution, $B(\mathbf{p})\leq \tilde{\nu}^*(B)\ (\forall\ \mathbf{p}\in\L)$, and a Tsirelson inequality is a linear inequality satisfied by any quantum distribution, $B(\mathbf{p})\leq \tilde{\gamma}_2^*(B)\ (\forall\ \mathbf{p}\in{\cal Q})$. \end{defn} By linearity (Proposition~\ref{prop:representation}), in the case of binary outputs and uniform marginals, Bell inequalities are often expressed as linear functionals over the correlations. Finally, $\tilde{\gamma}_2$ and $\tilde{\nu}$ amount to finding a maximum violation of a (normalized) Bell or Tsirelson inequality. \begin{thm}\label{thm:lp-bell} For any distribution $\mathbf{p}\in \mathcal{C}$, \begin{enumerate} \item $\tilde{\nu}(\mathbf{p})=\max \{ B(\mathbf{p}): \forall \mathbf{p}'\in \L,\ |B(\mathbf{p}')|\leq 1 \}$, and \item $\tilde{\gamma}_2(\mathbf{p})=\max \{ B(\mathbf{p}): \forall \mathbf{p}'\in {\cal Q},\ |B(\mathbf{p}')| \leq 1 \}$, \end{enumerate} where the maximization is over linear functionals $B:\mathsf{aff}({\cal C})\to\mathbb R$. \end{thm} \begin{proof} \begin{enumerate} \item This follows by LP duality from the definition of $\tilde{\nu}$. \item We use the SDP relaxation $\tilde{\gamma}_2^n(\mathbf{p})$, which may be expressed as \[ \tilde{\gamma}_2^n(\mathbf{p})=\min \{ q_+ + q_- : \exists \mathbf{p}_+,\mathbf{p}_-\in {\cal Q}^n, q_+,q_-\geq 0, \mathbf{p} = q_+ \mathbf{p}_+-q_-\mathbf{p}_-\}, \] and define \[ \beta^n(\mathbf{p})=\max \{ B(\mathbf{p}): \forall \mathbf{p}'\in {\cal Q}^n,\ |B(\mathbf{p}')| \leq 1 \}. \] We now show that $\beta^n(\mathbf{p})=\tilde{\gamma}_2^n(\mathbf{p})$, which proves our statement by taking the limit $n\to\infty$.
[$\beta^n(\mathbf{p})\leq\tilde{\gamma}_2^n(\mathbf{p})$] Let $\tilde{\gamma}_2^n(\mathbf{p})=q_+ + q_-$, where $q_+,q_-\geq 0$ and $\mathbf{p} = q_+ \mathbf{p}_+-q_-\mathbf{p}_-$ for some $\mathbf{p}_+,\mathbf{p}_-\in {\cal Q}^n$. Similarly, let $ \beta^n(\mathbf{p})= B(\mathbf{p})$, where $|B(\mathbf{p}')| \leq 1$ for all $\mathbf{p}'\in {\cal Q}^n$. It then follows that \[ B(\mathbf{p})=q_+ B(\mathbf{p}_+)-q_-B(\mathbf{p}_-) \leq q_+ |B(\mathbf{p}_+)|+q_-|B(\mathbf{p}_-)|\leq q_+ + q_-. \] [$\beta^n(\mathbf{p})\geq\tilde{\gamma}_2^n(\mathbf{p})$] In order to use SDP duality, we first express $\tilde{\gamma}_2^n(\mathbf{p})$ in standard SDP form. Using the definition of ${\cal Q}^n$, \begin{eqnarray*} \lefteqn{\tilde{\gamma}_2^n(\mathbf{p})=\min \Gamma_{\mathbbm 1,\mathbbm 1}^++\Gamma_{\mathbbm 1,\mathbbm 1}^- }\\ &\textrm{subject to}& \Gamma^+,\Gamma^-\succcurlyeq 0,\\ &&\Gamma_{E_a(x),E_b(y)}^+-\Gamma_{E_a(x),E_b(y)}^-=p(a,b|x,y),\\ &&{\rm tr}(F_k ^\dagger\Gamma^+)={\rm tr}(F_k ^\dagger\Gamma^-)=0\quad \forall k\in[m(n)]. \end{eqnarray*} The dual SDP then reads \begin{eqnarray*} \lefteqn{\delta^n(\mathbf{p})=\max \sum_{a,b,x,y} B_{abxy} p(a,b|x,y)}\\ &\textrm{subject to}& \sum_{a,b,x,y} B_{abxy} \Gamma_{E_a(x),E_b(y)} \geq -[\Gamma_{\mathbbm 1,\mathbbm 1}+\sum_{k\in[m(n)]}B_k^-{\rm tr}(F_k ^\dagger\Gamma)] \quad\forall\ \Gamma\succcurlyeq 0,\\ &&\sum_{a,b,x,y} B_{abxy} \Gamma_{E_a(x),E_b(y)} \leq \Gamma_{\mathbbm 1,\mathbbm 1}+\sum_{k\in[m(n)]}B_k^+{\rm tr}(F_k ^\dagger\Gamma) \quad\forall\ \Gamma\succcurlyeq 0. \end{eqnarray*} It may be shown that the dual is strictly feasible, so that strong duality holds and $\delta^n(\mathbf{p})=\tilde{\gamma}_2^n(\mathbf{p})$ (see~\cite{VB96}). Together with the definition of ${\cal Q}^n$, this shows that a feasible solution for $\delta^n(\mathbf{p})$ implies a feasible solution for $\beta^n(\mathbf{p})$, so that $\beta^n(\mathbf{p})\geq\delta^n(\mathbf{p})$. \end{enumerate} \end{proof} \subsection{XOR games} In this section, we consider distributions over binary variables with uniform marginals, $\mathbf{p}=(C,0,0)$, and furthermore restrict to the case of sign matrices $C\in \{\pm 1\}^{\mathcal{X}\times\mathcal{Y}}$. As we have seen before, this corresponds to the standard framework of communication complexity of Boolean functions, and we have $\tilde{\nu}(C,0,0)=\nu(C)$. We show a close relation between $\nu(C)$, XOR games, and Bell inequalities. In an XOR game, Alice is given an input $x$ and Bob an input $y$, and they should output $a=\pm 1$ and $b=\pm 1$. They win if $a\cdot b$ equals some $\pm1$ function $G(x,y)$. Since they are not allowed to communicate, their strategy may be represented as a local correlation matrix $S\in\L_0$. We consider the distributional version of this game, where $\mu$ is a distribution on the inputs. The winning bias of a strategy $S$ with respect to $\mu$ is $\epsilon_\mu(G{\parallel}S) = \sum_{x,y} \mu(x,y) G(x,y) S(x,y)$, and $\epsilon_\mu^{\mathrm{pub}}(G) = \max_{S\in \L_0} \epsilon_\mu(G{\parallel}S)$ is the maximum winning bias of any local (classical) strategy. (For convenience, we consider the bias instead of the game value $\omega_\mu^{\mathrm{pub}}(G)=(1+\epsilon_\mu^{\mathrm{pub}}(G))/2$.) Define $\epsilon_\mu^\mathrm{ent}(G)$ similarly for quantum strategies. When the input distribution is not fixed, we define the game biases as $\epsilon^{\mathrm{pub}}(G)=\min_\mu\epsilon_\mu^{\mathrm{pub}}(G)$ and $\epsilon^\mathrm{ent}(G)=\min_\mu\epsilon^\mathrm{ent}_\mu(G)$.
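The canonical example is the CHSH game: $\mathcal{X}=\mathcal{Y}=\{0,1\}$, $G(x,y)=(-1)^{xy}$, and $\mu$ uniform, so that the players win if $a\cdot b=(-1)^{xy}$. Enumerating the deterministic strategies shows that $\epsilon_\mu^{\mathrm{pub}}(G)=1/2$ (winning probability $3/4$), while the Tsirelson bound gives $\epsilon_\mu^{\mathrm{ent}}(G)=1/\sqrt{2}$, attained by suitable measurements on a maximally entangled pair of qubits. Note that $G$ coincides with the PR-box correlations discussed earlier, for which $\nu(C_{\mathrm{PR}})=2=1/\epsilon_\mu^{\mathrm{pub}}(G)$.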
\begin{lemma} There is a bijection between XOR games $(G,\mu)$ and normalized correlation Bell inequalities. \end{lemma} \begin{proof} An XOR game $(G,\mu)$ determines a linear functional $G{{\circ}}\mu\,(C)= \epsilon_\mu(G{\parallel}C) $ on the set of correlation matrices, where ${\circ}$ is the Hadamard (entrywise) product. By Definition~\ref{defn:bell}, $\nu^*(G{\circ} \mu)=\epsilon_\mu^{\mathrm{pub}}(G)$, and $\epsilon_\mu(G{\parallel}C) \leq\epsilon_\mu^{\mathrm{pub}}(G)$ is a Bell inequality satisfied by any local correlation matrix $C$. Similarly, when the players are allowed to use entanglement, we get a Tsirelson inequality on quantum correlations, $ \epsilon_\mu(G{\parallel}C) \leq \epsilon^\mathrm{ent}_\mu(G)$ (the quantum bias is also equivalent to a dual norm, $\epsilon^\mathrm{ent}_\mu(G)=\gamma_2^*(G{\circ}\mu)$). Conversely, consider a general linear functional $B(C)=\sum_{x,y}B_{xy}C(x,y)$ on $\mathsf{aff}({\cal C}_0)$, defining a correlation Bell inequality $B(C)\leq \nu^*(B)\ \forall\ C\in \L_0$. Dividing this Bell inequality by $N=\sum_{x,y}|B_{xy}|$, we see that it determines an XOR game specified by a sign matrix $G(x,y)=\mathrm{sgn}(B_{xy})$ and an input distribution $\mu_{xy}=\frac{|B_{xy}|}{N}$, with game bias $\epsilon_\mu^{\mathrm{pub}}(G)=\frac{\nu^*(B)}{N}$. \end{proof} By Theorem~\ref{thm:lp-bell} and the previous bijection (see also Lee \textit{et al.}~\cite{lss08}): \begin{cor}\label{cor:LS-game} \begin{enumerate} \item $\nu(C)=\max_{\mu,G}\frac{\epsilon_\mu(G{\parallel}C)}{\epsilon_\mu^{\mathrm{pub}}(G)}$, where the maximum is over XOR games $(G,\mu)$. \item $\nu(C)\geq\frac{1}{\epsilon^{\mathrm{pub}}(C)}$. \end{enumerate} \end{cor} The second part follows by letting $G=C$. Even though playing correlations $C$ for the game $G=C$ allows the players to win with probability one, there are cases where some other game $G\neq C$ yields a larger ratio. In these cases, we have $\nu(C)>\frac{1}{\epsilon^{\mathrm{pub}}(C)}$, so that $\nu$ gives a stronger lower bound on communication complexity than the game value (which has been shown to be equivalent to the discrepancy method~\cite{lss08}). We can characterize when the inequality is tight. Let $\epsilon^{\mathrm{pub}}_=(C)= \max_{S\in\L_0} \{ \beta : \forall x,y, C(x,y)S(x,y){=}\beta\} $; that is, we only consider strategies that win the game with the same bias on every input pair. For the sake of comparison, the game bias may also be expressed as~\cite{vonneumann28}: $$\epsilon^{\mathrm{pub}}(C) =\max_{S\in \L_0} \{ \beta : \forall x,y, C(x,y)S(x,y){\geq}\beta\} = \max_{S\in \L_0} \min_{x,y} C(x,y)S(x,y).$$ \begin{lemma} ${\nu(C)} = \frac{1}{\epsilon^{\mathrm{pub}}_=(C)} $. \end{lemma} We can also relate the game value to $\nu^\alpha(C)$: it was shown in~\cite{lss08} that for $\alpha\to\infty$, $\nu^\infty(C)$ is exactly the inverse of the game bias, $\frac{1}{\epsilon^{\mathrm{pub}}(C)}$.
We show that this holds as soon as $\alpha=\frac{1}{1-2\epsilon}$ is large enough for $C$ to be local up to an error $\epsilon$, completing the picture given in Lemma~\ref{lem:epsilon-alpha}. \begin{lemma}\label{lem:gamma2-infinity} Let $0\leq\epsilon< 1/2$ and $\alpha=\frac{1}{1-2\epsilon}$. For any sign matrix $C:\mathcal{X}\times \mathcal{Y} \rightarrow \{-1,1\}$, \begin{enumerate} \item $\tilde{\nu}^\epsilon(C)=1 \Longleftrightarrow \epsilon \geq 1 - \omega^{\mathrm{pub}}(C) \Longleftrightarrow\alpha\geq\frac{1}{\epsilon^{\mathrm{pub}}(C)} \Longleftrightarrow\nu^\alpha(C)=\nu^\infty(C)=\frac{1}{\epsilon^{\mathrm{pub}}(C)}$, \item $\tilde{\gamma}_2^\epsilon(C)=1 \Longleftrightarrow \epsilon \geq 1 - \omega^\mathrm{ent}(C) \Longleftrightarrow\alpha\geq\frac{1}{\epsilon^\mathrm{ent}(C)} \Longleftrightarrow\gamma_2^\alpha(C)=\gamma_2^\infty(C)=\frac{1}{\epsilon^\mathrm{ent}(C)}$. \end{enumerate} \end{lemma} \begin{proof}[Proof] By von Neumann's minmax principle~\cite{vonneumann28}, \begin{eqnarray*} \epsilon^{\mathrm{pub}}(C)& =& \max_{S\in \L_0} \min_{x,y} C(x,y)S(x,y) \\ & =& \max_{S\in \L_0} \min_{x,y} 1- |C(x,y) - S(x,y)|, \end{eqnarray*} where we used the fact that $C$ is a sign matrix. This implies that $\tilde{\nu}^\epsilon(C)=1\Leftrightarrow\epsilon\geq \frac{1-\epsilon^{\mathrm{pub}}(C)}{2}\Leftrightarrow\alpha\geq\frac{1}{\epsilon^{\mathrm{pub}}(C)}$. By Lemma~\ref{lem:epsilon-alpha}, this in turn implies that $\nu^\alpha(C)=\frac{\tilde{\nu}^\epsilon(C)}{1-2\epsilon}$ for all $\epsilon<\frac{1-\epsilon^{\mathrm{pub}}(C)}{2}$. By continuity, taking the limit $\epsilon\to\frac{1-\epsilon^{\mathrm{pub}}(C)}{2}$ yields $\nu^\alpha(C)=\frac{1}{\epsilon^{\mathrm{pub}}(C)}$ for $\alpha=\frac{1}{\epsilon^{\mathrm{pub}}(C)}$. From~\cite{lss08}, $\nu^\infty(C)=\frac{1}{\epsilon^{\mathrm{pub}}(C)}$, and the lemma follows by the monotonicity of $\nu^\alpha(C)$ as a function of $\alpha$. \end{proof} \section{Comparing $\tilde{\gamma}_2$ and $\tilde{\nu}$} \label{sec:gamma-vs-nu} It is known that, because of Grothendieck's inequality, $\gamma_2$ and $\nu$ differ by at most a constant factor. Although these quantities are only defined in the Boolean setting with uniform marginals, we show in this section that, surprisingly, the same relation extends to arbitrary non-signaling distributions. \begin{thm}\label{thm:nu-gamma2} For any distribution $\mathbf{p}\in{\cal C}$, with inputs in $\mathcal{X} \times \mathcal{Y}$ and outcomes in $\mathcal{A} \times \mathcal{B}$ with $A=|\mathcal{A}|, B=|\mathcal{B}|$, \begin{enumerate} \item $\tilde{\nu}(\mathbf{p})\leq (2K_G+1)\tilde{\gamma}_2(\mathbf{p})$ when $A=B=2$, \item $\tilde{\nu}(\mathbf{p})\leq[2AB(K_G+1)-1]\tilde{\gamma}_2(\mathbf{p})$ for any $A,B$. \end{enumerate} \end{thm} The negative consequence of this is that one cannot hope to prove separations between classical and quantum communication using this method, except in the case where the number of outcomes is large. For binary outcomes at least, this says that arguments based on analysing only the distance to the quantum set, without taking into account the particular structure of the distribution, will not suffice to prove large separations; other techniques, such as information-theoretic arguments, may be necessary. For example, Brassard \textit{et al.}~\cite{bct99} give a (promise) distribution based on the Deutsch-Jozsa problem, which can be obtained exactly with entanglement and no communication, but which requires linear communication to simulate exactly.
The lower bound is proven using a corruption bound~\cite{bcw98}, which is closely related to the information-theoretic subdistribution bound~\cite{jkn08}. For this problem, $\mathcal{X}=\mathcal{Y}=\{0,1\}^n$ and $\mathcal{A}=\mathcal{B}=[n]$; therefore, our method can only prove a lower bound logarithmic in $n$. This is the first example of a problem for which the corruption bound gives an exponentially better lower bound than the Linial and Shraibman family of methods. On the positive side, this is very interesting for quantum information, since (by Theorem~\ref{thm:lp-bell}) it tells us that the set of quantum distributions cannot be much larger than the local polytope, for any number of inputs and outcomes. For binary correlations, this follows from the theorems of Tsirelson (Theorem~\ref{thm:tsirelson}) and Grothendieck (Proposition~\ref{prop:grothendieck}), but no extensions of these results are known in the more general setting. The proof will use two rather straightforward lemmas. \begin{lemma}\label{lem:composition} If $\mathbf{p}=\sum_{i\in[I]}q_i\mathbf{p}_i$, where $\mathbf{p}_i\in{\cal C}$ and $q_i\in\mathbb R$ for all $i\in[I]$, then $\tilde{\nu}(\mathbf{p})\leq \sum_{i\in[I]}|q_i|\tilde{\nu}(\mathbf{p}_i)$. \end{lemma} \begin{proof} By definition, for each $\mathbf{p}_i$, there exist $\mathbf{p}_i^+,\mathbf{p}_i^-\in\L$ and $q_i^+,q_i^-\geq 0$ such that $\mathbf{p}_i=q_i^+\mathbf{p}_i^+-q_i^-\mathbf{p}_i^-$, and $q_i^++q_i^-=\tilde{\nu}(\mathbf{p}_i)$. Therefore, $\mathbf{p}=\sum_{i\in[I]}q_i(q_i^+\mathbf{p}_i^+-q_i^-\mathbf{p}_i^-)$ and $\sum_{i\in[I]}(|q_iq_i^+|+|q_iq_i^-|)=\sum_i|q_i|(q_i^++q_i^-)=\sum_i|q_i|\tilde{\nu}(\mathbf{p}_i)$. \end{proof} \begin{lemma}\label{lem:extension} Let $\mathbf{p},\mathbf{p}'\in{\cal C}$ be non-signaling distributions with inputs in $\mathcal{X} \times \mathcal{Y}$ for both distributions, outcomes in $\mathcal{A} \times \mathcal{B}$ for $\mathbf{p}$, and outcomes in $\mathcal{A}' \times \mathcal{B}'$ for $\mathbf{p}'$, such that $\mathcal{A}\subseteq\mathcal{A}'$ and $\mathcal{B}\subseteq\mathcal{B}'$. If, for any $(a,b)\in\mathcal{A}\times\mathcal{B}$, $p'(a,b|x,y)=p(a,b|x,y)$, then $\tilde{\nu}(\mathbf{p}')=\tilde{\nu}(\mathbf{p})$. \end{lemma} \begin{proof} Let $\mathcal{E}=(\mathcal{A}'\times\mathcal{B}')\setminus(\mathcal{A}\times\mathcal{B})$. First, note that since $p'(a,b|x,y)=p(a,b|x,y)$ for any $(a,b)\in\mathcal{A}\times\mathcal{B}$, we have, by normalization of $\mathbf{p}$, $p'(a,b|x,y)=0$ for any $(a,b)\in\mathcal{E}$. [$\tilde{\nu}(\mathbf{p}')\leq\tilde{\nu}(\mathbf{p})$] Let $\mathbf{p}=q_+\mathbf{p}^+-q_-\mathbf{p}^-$ be an affine model for $\mathbf{p}$. Clearly, this implies an affine model for $\mathbf{p}'$, by extending the local distributions $\mathbf{p}^+,\mathbf{p}^-$ from $\mathcal{A} \times \mathcal{B}$ to $\mathcal{A}' \times \mathcal{B}'$ and setting $p^+(a,b|x,y)=p^-(a,b|x,y)=0$ for any $(a,b)\in\mathcal{E}$, so $\tilde{\nu}(\mathbf{p}')\leq\tilde{\nu}(\mathbf{p})$. [$\tilde{\nu}(\mathbf{p}')\geq\tilde{\nu}(\mathbf{p})$] Let $\mathbf{p}'=q_+\mathbf{p}'^+-q_-\mathbf{p}'^-$ be an affine model for $\mathbf{p}'$. We may not immediately derive an affine model for $\mathbf{p}$, since it could be the case that $p'^+(a,b|x,y)$ or $p'^-(a,b|x,y)$ is non-zero for some $(a,b)\in\mathcal{E}$.
However, we have $q_+p'^+(a,b|x,y)-q_-p'^-(a,b|x,y)=p'(a,b|x,y)=0$ for any $(a,b)\in\mathcal{E}$, so we may define an affine model $\mathbf{p}=q_+\mathbf{p}^+-q_-\mathbf{p}^-$, where $\mathbf{p}^+$ and $\mathbf{p}^-$ are distributions on $\mathcal{A}\times\mathcal{B}$ such that $$ p^+(a,b|x,y)=p'^+(a,b|x,y) +\frac{1}{A}\sum_{a'\notin\mathcal{A}}p'^+(a',b|x,y) +\frac{1}{B}\sum_{b'\notin\mathcal{B}}p'^+(a,b'|x,y) +\frac{1}{AB}\sum_{a'\notin\mathcal{A},b'\notin\mathcal{B}}p'^+(a',b'|x,y), $$ and similarly for $\mathbf{p}^-$. These are local since it suffices for Alice and Bob to use the local protocol for $\mathbf{p}'^+$ or $\mathbf{p}'^-$ and for Alice to replace any output $a\notin\mathcal{A}$ by a uniformly random output $a'\in\mathcal{A}$ (and similarly for Bob). Therefore, we also have $\tilde{\nu}(\mathbf{p}')\geq\tilde{\nu}(\mathbf{p})$. \end{proof} Before proving Theorem~\ref{thm:nu-gamma2}, we first consider the special case of quantum distributions, for which $\tilde{\gamma}_2(\mathbf{p})=1$. As we shall see in Section~\ref{sec:smp}, this special case implies the constant upper bound of Shi and Zhu on approximating any quantum distribution~\cite{shi05}, which they prove using diamond norms. This also immediately gives an upper bound on maximum Bell inequality violations for quantum distributions, by Theorem~\ref{thm:lp-bell}, which may be of independent interest in quantum information theory. \begin{proposition}\label{prop:nu-quantum-dist} For any quantum distribution $\mathbf{p}\in{\cal Q}$, with inputs in $\mathcal{X} \times \mathcal{Y}$ and outcomes in $\mathcal{A} \times \mathcal{B}$ with $A=|\mathcal{A}|, B=|\mathcal{B}|$, \begin{enumerate} \item $\tilde{\nu}(\mathbf{p})\leq 2K_G+1$ when $A=B=2$, \item $\tilde{\nu}(\mathbf{p})\leq2AB(K_G+1)-1$ for any $A,B$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item Since $A=B=2$, we may write the distribution as correlations and marginals, $\mathbf{p}=(C,M_A,M_B)$. Since $(C,M_A,M_B)\in{\cal Q}$, we also have $(C,0,0)\in{\cal Q}$, and by Grothendieck's inequality (Proposition~\ref{prop:grothendieck}), $(C/K_G,0,0)\in\L$. Moreover, it is immediate that $(M_AM_B,M_A,M_B),(M_AM_B,0,0)$ and $(0,0,0)$ are local distributions as well, so that we have the following affine model for $(C,M_A,M_B)$: $$ (C,M_A,M_B)=K_G(C/K_G,0,0)+(M_AM_B,M_A,M_B)-(M_AM_B,0,0)-(K_G-1)(0,0,0). $$ This implies that $\tilde{\nu}(C,M_A,M_B)\leq 2K_G+1$. \item For the general case, we will reduce it to the binary case. Let us introduce an additional output $\varnothing$, and set $\mathcal{A}'=\mathcal{A}\cup\{\varnothing\}$ and $\mathcal{B}'=\mathcal{B}\cup\{\varnothing\}$. We first extend the distribution $\mathbf{p}$ to a distribution $\mathbf{p}'$ on $\mathcal{A}'\times\mathcal{B}'$ by setting $p'(a,b|x,y)=p(a,b|x,y)$ for any $(a,b)\in\mathcal{A}\times\mathcal{B}$, and $p'(a,b|x,y)=0$ otherwise. By Lemma~\ref{lem:extension}, we have $\tilde{\nu}(\mathbf{p})=\tilde{\nu}(\mathbf{p}')$. For each $(\alpha,\beta)\in\mathcal{A}\times\mathcal{B}$, we also define a probability distribution $\mathbf{p}_{\alpha\beta}$ on $\mathcal{A}'\times\mathcal{B}'$: $$ p_{\alpha\beta}(a,b|x,y)= \begin{cases} p(\alpha , \beta|x,y) & \textrm{if } (a,b)=(\alpha,\beta),\\ p(\alpha|x)-p(\alpha,\beta|x,y) & \textrm{if } (a,b)=(\alpha, \varnothing),\\ p(\beta|y) -p(\alpha,\beta|x,y) & \textrm{if } (a,b)=(\varnothing, \beta),\\ 1-p(\alpha|x)-p(\beta|y)+p(\alpha, \beta|x,y) & \textrm{if } (a,b)=(\varnothing,\varnothing),\\ 0 & \textrm{otherwise}.
\end{cases} $$ Notice that $p_{\alpha \beta} \in {\cal Q}$, since a protocol for $p_{\alpha \beta}$ can be obtained from a protocol for $p$: Alice outputs $\varnothing$ whenever her outcome is not $\alpha$, and similarly for Bob. Let $\mathcal{A}_\alpha=\{\alpha,\varnothing\}$ and $\mathcal{B}_\beta=\{\beta,\varnothing\}$. Since $p_{\alpha\beta}(a,b|x,y)=0$ when $(a,b)\notin\mathcal{A}_\alpha\times\mathcal{B}_\beta$, we may define distributions $\mathbf{p}'_{\alpha\beta}$ on $\mathcal{A}_\alpha\times\mathcal{B}_\beta$ such that $p_{\alpha\beta}'(a,b|x,y)=p_{\alpha\beta}(a,b|x,y)$ for all $(a,b)\in\mathcal{A}_\alpha\times\mathcal{B}_\beta$. By Lemma~\ref{lem:extension}, these are such that $\tilde{\nu}(\mathbf{p}'_{\alpha\beta})=\tilde{\nu}(\mathbf{p}_{\alpha\beta})$, and since these are binary distributions, $\tilde{\nu}(\mathbf{p}'_{\alpha\beta})\leq 2K_G+1$. Let us define three distributions $\mathbf{p_A}, \mathbf{p_B},\mathbf{p}_\varnothing$ on $\mathcal{A}'\times\mathcal{B}'$ as follows. We let $\mathbf{p_A}(a,\varnothing|x,y)=p(a|x)$, $\mathbf{p_B}(\varnothing,b|x,y)=p(b|y)$, and 0 everywhere else; and $p_\varnothing(a,b|x,y)=1$ if $(a,b)=(\varnothing,\varnothing)$, and $0$ otherwise. These are product distributions, so $\mathbf{p_A},\mathbf{p_B},\mathbf{p}_\varnothing\in\L$ and $\tilde{\nu}=1$ for all three distributions. We may now build the following affine model for $\mathbf{p}'$: $$ \mathbf{p}'= \sum_{(\alpha,\beta)\in\mathcal{A}\times\mathcal{B}} \mathbf{p}'_{\alpha\beta} -(B{-}1)\mathbf{p_A}- (A{-}1)\mathbf{p_B} -(AB{-}A{-}B{+}1)\mathbf{p}_\varnothing. $$ From Lemma~\ref{lem:composition}, we conclude that $\tilde{\nu}(\mathbf{p}')\leq AB(2K_G+1)+(AB-1)=2AB(K_G+1)-1$, and since $\tilde{\nu}(\mathbf{p})=\tilde{\nu}(\mathbf{p}')$, the claim follows. \end{enumerate} \end{proof} The proof of Theorem~\ref{thm:nu-gamma2} immediately follows. \begin{proof}[Proof of Theorem~\ref{thm:nu-gamma2}] By definition of $\tilde{\gamma}_2(\mathbf{p})$, there exist $\mathbf{p}^+,\mathbf{p}^-\in{\cal Q}$ and $q_+,q_-\geq 0$ such that $\mathbf{p}=q_+\mathbf{p}^+-q_-\mathbf{p}^-$ and $q_++q_-=\tilde{\gamma}_2(\mathbf{p})$. From Lemma~\ref{lem:composition}, $\tilde{\nu}(\mathbf{p})\leq q_+\tilde{\nu}(\mathbf{p}^+)+q_- \tilde{\nu}(\mathbf{p}^-)$, and Proposition~\ref{prop:nu-quantum-dist} immediately concludes the proof. \end{proof} \section{Upper bounds for non-signaling distributions} \label{sec:smp} We have seen that if a distribution can be simulated using $t$ bits of communication, then it may be represented by an affine model with coefficients exponential in $t$ (Theorem~\ref{thm:quasi-lhv}). In this section, we consider the converse: how much communication is sufficient to simulate a distribution, given an affine model? This approach allows us to show that any (shared randomness or entanglement-assisted) communication protocol can be simulated with simultaneous messages, at an exponential cost, which was previously known only in the case of Boolean functions~\cite{yaofinger03,shi05,gkr06}. Our results imply for example that for any quantum distribution $\mathbf{p}\in {\cal Q}$, $Q_\varepsilon^\parallel(\mathbf{p})= O(\log(n))$, where $n$ is the input size. This in effect replaces arbitrary entanglement in the state being measured with logarithmic quantum communication (using no additional resources such as shared randomness).
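All the simulations in this section follow the same pattern: the players run the protocols of an affine model $\mathbf{p}=q_+\mathbf{p}^+-q_-\mathbf{p}^-$ repeatedly and send their outcomes to the referee, who recombines the empirical frequencies using the affine weights, clamps negative estimates, renormalizes, and samples from the result (Lemma~\ref{lem:estimated-distribution} below). As a minimal sketch of the referee's side, under the assumption that the outcome samples are produced by hypothetical black-box local protocols, one may picture the following (Python with \texttt{numpy}; this is an illustration of the construction, not part of the proofs):
\begin{verbatim}
import numpy as np

def referee_sample(samples_plus, samples_minus, q_plus, q_minus, V, rng):
    # samples_*: outcome indices in range(V) from T runs of each local
    # protocol of the affine model p = q_plus * p+  -  q_minus * p-.
    T = len(samples_plus)
    f_plus = np.bincount(samples_plus, minlength=V) / T
    f_minus = np.bincount(samples_minus, minlength=V) / T
    est = np.maximum(q_plus * f_plus - q_minus * f_minus, 0.0)
    s = est.sum()
    if s > 1.0:
        probs = np.append(est / s, 0.0)   # renormalize
    else:
        probs = np.append(est, 1.0 - s)   # leftover mass on dummy outcome
    return rng.choice(V + 1, p=probs)     # index V is the dummy outcome
\end{verbatim}
Hoeffding's inequality then bounds, in terms of the number of runs $T$ and of $\Lambda=q_++q_-$, the statistical distance between the simulated distribution and $\mathbf{p}$.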
We use the superscript $\parallel$ to indicate the simultaneous messages model, where Alice and Bob each send a message to the referee, who, without knowing the inputs, outputs the value of the function, or more generally, outputs $a,b$ with the correct probability distribution conditioned on the inputs $x,y$. \begin{thm}\label{thm:smp} For any distribution $\mathbf{p}\in {\cal C}$ with inputs in $\mathcal{X} \times \mathcal{Y}$, where $|\mathcal{X} \times \mathcal{Y}| \leq 2^n$, outcomes in $\mathcal{A} \times \mathcal{B}$ with $A=|\mathcal{A}|, B=|\mathcal{B}|$, and any $\epsilon, \delta < 1/2$, \begin{enumerate} \item $R_{\epsilon + \delta}^{\parallel,\mathrm{pub}}(\mathbf{p}) \leq 16 \left[\frac{AB\tilde{\nu}^\epsilon(\mathbf{p})}{\delta}\right]^2 \ln\left[\frac{4AB}{\delta}\right] \log(AB) $, \item $Q_{\epsilon+\delta}^{\parallel}(\mathbf{p}) \leq O\left((AB)^5\left[\frac{\tilde{\nu}^\epsilon(\mathbf{p})}{\delta}\right]^4\ln\left[\frac{AB}{\delta}\right]\log(n)\right) $. \end{enumerate} \end{thm} The proof relies on Hoeffding's inequality~\cite{mcdiarmid}. \begin{proposition}[Hoeffding's inequality] \label{prop:hoeffding} Let $X$ be a random variable with values in $[a,b]$. Let $X_t$ be the $t$-th of $T$ independent trials of $X$, and $S=\frac{1}{T}\sum_{t=1}^T X_t$. Then, $\Pr[S-E(X) \geq \beta]\leq e^{-\frac{2T\beta^2}{(b-a)^2}}$ and $\Pr[E(X)-S \geq \beta]\leq e^{-\frac{2T\beta^2}{(b-a)^2}}$, for any $\beta\geq0$. \end{proposition} We will also use the following lemma. \begin{lemma}\label{lem:estimated-distribution} Let $\mathbf{p}$ be a probability distribution on $\mathcal{V}$ with $V=|\mathcal{V}|$, and let $e:\mathbb R^+\to\mathbb R^+$. For each $v\in \mathcal{V}$, let $Q_v$ be a random variable such that $\forall \beta\geq 0$, $\Pr[{Q}_v\geq p(v) + \beta ]\leq e(\beta)$ and $\Pr[{Q}_v \leq p(v) - \beta ]\leq e(\beta)$. Then, given samples $\{Q_v:v\in \mathcal{V}\}$, and without knowing $\mathbf{p}$, we may simulate a probability distribution $\mathbf{p'}$ such that $\delta(\mathbf{p'},\mathbf{p})\leq 2V[\beta+e(\beta)]$ for any $\beta\geq 0$. \end{lemma} \begin{proof} In order to use the variables $Q_v$ as estimates for $p(v)$, we must first make them non-negative, and then renormalize them so that they sum up to $1$. Let $R_v = \max\{0,Q_v\}$. Then we may easily verify that \begin{eqnarray*} \Pr[R_v \geq p(v) + \beta ]&\leq& e(\beta),\\ \Pr[R_v \leq p(v) - \beta ]&\leq& e(\beta). \end{eqnarray*}
For any subset $\mathcal{E}\subseteq\mathcal{V}$ of size $E=|\mathcal{E}|$, we also define the estimates $R_{\mathcal{E}}=\sum_{v\in \mathcal{E}} R_v$ for $p(\mathcal{E})$. By summing, \begin{eqnarray*} \Pr[R_\mathcal{E} \geq p(\mathcal{E}) + E\beta ]&\leq& Ee(\beta),\\ \Pr[R_\mathcal{E} \leq p(\mathcal{E}) - E\beta ]&\leq& Ee(\beta). \end{eqnarray*} In order to renormalize the estimated probabilities, let $R_{\mathcal{V}}=\sum_{v\in{\mathcal{V}}} R_{v}$. If $R_{\mathcal{V}}>1$, we use as final estimates $S_{v}=R_{v}/R_{\mathcal{V}}$. On the other hand, if $R_{\mathcal{V}}\leq 1$, we keep $S_{v}=R_{v}$ and introduce a dummy output $\varnothing\notin\mathcal{V}$ with estimated probability $S_\varnothing=1-R_{\mathcal{V}}$ (we extend the original distribution to $\mathcal{V}\cup\{\varnothing\}$, setting $p(\varnothing)=0$). By outputting $v$ with probability $S_{v}$, we then simulate some distribution $p'(v)=E(S_{v})$, and it suffices to show that $|E(S_{\mathcal{E}})-p(\mathcal{E})|\leq2V[\beta+e(\beta)]$ for any $\mathcal{E}\subseteq\mathcal{V}\cup\{\varnothing\}$. We first upper bound $E(S_\mathcal{E})$ for $\mathcal{E}\subseteq\mathcal{V}$. Since $S_\mathcal{E}\leq R_\mathcal{E}$, we obtain from the bounds on $R_\mathcal{E}$ that $\Pr[S_\mathcal{E}\geq p(\mathcal{E}) + E\beta ]\leq Ee(\beta)$. Therefore, we have $S_\mathcal{E}<p(\mathcal{E})+E\beta$ with probability at least $1-Ee(\beta)$, while in the remaining cases, which occur with probability at most $Ee(\beta)$, we still have $S_\mathcal{E}\leq 1$. This implies that $E(S_\mathcal{E})\leq p(\mathcal{E})+E\left[\beta+e(\beta)\right]$. To lower bound $E(S_\mathcal{E})$, we note that with probability at least $1-Ee(\beta)$, we have $R_\mathcal{E}>p(\mathcal{E})-E\beta$, and with probability at least $1-Ve(\beta)$, we have $R_{\mathcal{V}}<1+V\beta$. Therefore, with probability at least $1-(E+V)e(\beta)$, both these events happen at the same time, so that $S_\mathcal{E}=R_\mathcal{E}/R_{\mathcal{V}}>(p(\mathcal{E})-E\beta)(1-V\beta)\geq p(\mathcal{E})-(E+V)\beta$. This implies that $E(S_\mathcal{E})\geq p(\mathcal{E})-(E+V)\left[\beta+e(\beta)\right]$. Since $S_\varnothing=1-S_{\mathcal{V}}$, this also implies that $E(S_\varnothing)\leq 2V\left[\beta+e(\beta)\right]$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:smp}] \noindent { 1.} Let $\Lambda=\tilde{\nu}(\mathbf{p})$, $\mathbf{p}= q_+ \mathbf{p}^+ - q_- \mathbf{p}^-$, with $q_+, q_-\geq 0$, $q_+ + q_- = \Lambda$ and $\mathbf{p}^+, \mathbf{p}^- \in \L$. Let $P^+,P^-$ be protocols for $\mathbf{p}^+$ and $\mathbf{p}^-$, respectively. These protocols use shared randomness but no communication. To simulate $\mathbf{p}$, Alice and Bob make $T$ independent runs of $P^+$, where we label the outcome of the $t$-th run $(a_t^+,b_t^+)$. Similarly, let $(a_t^-,b_t^-)$ be the outcome of the $t$-th run of $P^-$. They send the list of outcomes to the referee. The idea is for the referee to estimate $p(a,b|x,y)$ based on the $2T$ samples, and output according to the estimated distribution. Let $P^+_{t,a,b}$ be an indicator variable which equals 1 if $a_t^+=a$ and $b_t^+=b$, and 0 otherwise. Define $P^-_{t,a,b}$ similarly. Furthermore, let $P_{t,a,b}=q_+P^+_{t,a,b}-q_-P^-_{t,a,b}$. Then $E(P_{t,a,b})=p(a,b|x,y)$ and $P_{t,a,b}\in [-q_-,q_+]$.
Let $P_{a,b}= \frac{1}{T}\sum_{t=1}^{T}P_{t,a,b}$ be the referee's estimate for $p(a,b|x,y)$. By Hoeffding's inequality, \begin{eqnarray*} \Pr[P_{a,b}\geq p(a,b|x,y) + \beta ]&\leq& e^{-\frac{2T\beta^2}{\Lambda^2}},\\ \Pr[P_{a,b} \leq p(a,b|x,y)-\beta ]&\leq& e^{-\frac{2T\beta^2}{\Lambda^2}}. \end{eqnarray*} Lemma~\ref{lem:estimated-distribution} with $\mathcal{V}=\mathcal{A}\times \mathcal{B}$, ${Q}_{a,b}=P_{a,b}$ and $e(\beta)=e^{-\frac{2T\beta^2}{\Lambda^2}}$ then implies that the referee may simulate a probability distribution $\mathbf{p}'$ such that $\delta(\mathbf{p}',\mathbf{p})\leq 2AB(\beta+e^{-\frac{2T\beta^2}{\Lambda^2}})$. It then suffices to set $\beta=\frac{\delta}{4AB}$ and $T=8 \left[\frac{AB\Lambda}{\delta}\right]^2\ln\left[\frac{4AB}{\delta}\right]$ to conclude the proof, since Alice sends $2T\log A$ and Bob sends $2T\log B$ bits to the referee. For $\tilde{\nu}^\epsilon$, apply this proof to the distribution $\mathbf{p}''$ with statistical distance $\delta(\mathbf{p},\mathbf{p}'')\leq \epsilon$ and $\tilde{\nu}(\mathbf{p}'')=\tilde{\nu}^\epsilon(\mathbf{p})$. Note that the same proof gives an upper bound on $R_{\epsilon + \delta}^{\parallel,\mathrm{ent}}$ in terms of $\tilde{\gamma}_2$. \noindent { 2.} If shared randomness is not available but quantum messages are, then we can use quantum fingerprinting~\cite{bcwdw01,yaofinger03} to send the results of the repeated protocol to the referee. Let $(a^+(r),b^+(r))$ be the outcomes of $P^+$ using $r$ as shared randomness. We use the random variable $A^+_{a}(r)$ as an indicator variable for $a^+(r)=a$; similarly $B^+_{b}$, and $P^+_{\mathcal{E}}=\sum_{(a,b)\in\mathcal{E}}A^+_{a}B^+_{b}$. We can easily adapt the proof of Newman's Theorem~\cite{newman91} to show that there exists a set of $L$ random strings ${\cal{R}} = \{r_1,\ldots r_L\}$ such that $\forall x,y, \abs{E_{r_i\in {\cal R}}(\tilde{P}_\mathcal{E}^+(r_i)) - {E(P_\mathcal{E}^+)} } \leq \alpha $ provided $L \geq \frac{4n}{\alpha^2}$, where $n$ is the input length, and $\tilde{P}_\mathcal{E}^+$ is the random variable where the randomness is taken from $\cal R$. In other words, by taking the randomness from $\cal R$, we may simulate a probability distribution $\tilde{\mathbf{p}}^+$ such that $\delta(\tilde{\mathbf{p}}^+,\mathbf{p}^+)\leq\alpha$. For each $(a,b)\in \mathcal{A}\times \mathcal{B}$, Alice and Bob send $T$ copies of the states $\ket{\phi_a^+}=\frac{1}{\sqrt{L}} \sum_{1\leq i\leq L} \ket{A^+_a(r_i)}\ket{1}\ket{i}$ and $\ket{\phi_b^+} =\frac{1}{\sqrt{L}} \sum_{1\leq i\leq L} \ket{1}\ket{B^+_b(r_i)}\ket{i}$ to the referee. The inner product is $$\braket{\phi_a^+}{\phi_b^+}=\frac{1}{{L}} \sum_{1\leq i\leq L} \braket{A^+_a(r_i)}{1}\braket{1}{B^+_b(r_i)} = \tilde{p}^+(a,b|x,y).$$ The referee then uses inner product estimation~\cite{bcwdw01}: for each copy, he performs a measurement on $\ket{\phi_a^+}\otimes\ket{\phi_b^+}$ to obtain a random variable $Z^+_{t,a,b}\in\{0,1\}$ such that $\Pr[Z^+_{t,a,b}=1]=\frac{1-\abs{\braket{\phi_b^+}{\phi_a^+}}^2}{2}$, and then sets $Z^+_{a,b}=\frac{1}{T}\sum_{t=1}^TZ^+_{t,a,b}$. Let ${Q}^+_{a,b}=\sqrt{1-2Z^+_{a,b}}$ if $Z^+_{a,b}\leq 1/2$ and ${Q}^+_{a,b}=0$ otherwise.
This serves as an approximation for $\tilde{p}^+(a,b|x,y)=\abs{\braket{\phi_b^+}{\phi_a^+}}$, and Hoeffding's inequality then yields \begin{eqnarray*} \Pr[Q^+_{a,b} \geq \tilde{p}^+(a,b|x,y) + \beta ]&\leq& e^{-\frac{T\beta^4}{2}},\\ \Pr[ Q^+_{a,b} \leq \tilde{p}^+(a,b|x,y)-\beta ]&\leq& e^{-\frac{T\beta^4}{2}}. \end{eqnarray*} Let $Q^-_{a,b}$ be an estimate for $\tilde{p}^-(a,b|x,y)$ obtained using the same method. The referee then obtains an estimate for $\tilde{p}(a,b|x,y)=q_+\tilde{p}^+(a,b|x,y)-q_-\tilde{p}^-(a,b|x,y)$, by setting $Q_{a,b}=q_+Q^+_{a,b}-q_-Q^-_{a,b}$, such that \begin{eqnarray*} \Pr[Q_{a,b}\geq \tilde{p}(a,b|x,y) + \beta ]&\leq& 2e^{-\frac{T\beta^4}{2\Lambda^4}},\\ \Pr[ Q_{a,b} \leq \tilde{p}(a,b|x,y)- \beta ]&\leq& 2e^{-\frac{T\beta^4}{2\Lambda^4}}. \end{eqnarray*} Lemma~\ref{lem:estimated-distribution} with $e(\beta)=2e^{-\frac{T\beta^4}{2\Lambda^4}}$ then implies that the referee may simulate a probability distribution $\mathbf{p}^s$ such that $\delta(\mathbf{p}^s,\tilde{\mathbf{p}})\leq 2AB(\beta+2e^{-\frac{T\beta^4}{2\Lambda^4}})$. Since $\delta(\tilde{\mathbf{p}},\mathbf{p})\leq\Lambda\alpha$, we need to pick $T$ and $L=\frac{4n}{\alpha^2}$ large enough so that $\Lambda\alpha+2AB\left[\beta+2e^{-T\beta^4/2\Lambda^4}\right]\leq \delta$. Setting $\alpha=\frac{\delta}{2\Lambda}$, $\beta=\frac{\delta}{8AB}$, $T=2\frac{\Lambda^4}{\beta^4}\ln(\frac{16AB}{\delta}) = 2^{13}\left[\frac{AB\Lambda}{\delta}\right]^4\ln(\frac{16AB}{\delta})$ and $L=\frac{4n}{\alpha^2}= \frac{16n\Lambda^2}{\delta^2}$, the total complexity of the protocol is $4ABT(\log(L)+2) = O((AB)^5\left[\frac{\Lambda}{\delta}\right]^4\ln\left[\frac{AB}{\delta}\right]\log(n))$. (We may assume that $\frac{\Lambda}{\delta}\leq n^{1/4}$, otherwise this protocol performs worse than the trivial protocol.) \end{proof} In the case of Boolean functions, corresponding to correlations $C_f(x,y)\in\{\pm 1\}$ (see Def.~\ref{def:boolean-functions}), the referee's job is made easier by the fact that he only needs to determine the sign of the correlation with probability $1-\delta$. This allows for some improvements in the upper bounds. Similar improvements can be obtained for other types of promises on the distribution. \begin{thm} \label{thm:smp-boolean} Let $f:\{0,1\}^n \times \{0,1\}^n \rightarrow \{0,1\}$, with associated sign matrix $C_f$, and $\epsilon, \delta < 1/2$. \begin{enumerate} \item $R_{\delta}^{\parallel,\mathrm{pub}}(f) \leq 4\left[\frac{\tilde{\nu}^\epsilon(C_f)}{1-2\epsilon}\right]^2\ln(\frac{1}{\delta})$, \item $Q_{\delta}^{\parallel}(f) \leq O\left(\log(n) \left[\frac{\tilde{\nu}^\epsilon(C_f)}{1-2\epsilon}\right]^4\ln(\frac{1}{\delta})\right)$. \end{enumerate} \end{thm} From Lemmas~\ref{lem:epsilon-alpha} and~\ref{lem:gamma2-infinity}, these bounds may also be expressed in terms of $\gamma_2^\alpha$, and the best upper bounds are obtained from $\gamma_2^\infty(C_f)=\frac{1}{\epsilon^\mathrm{ent}(C_f)}$. The first item then coincides with the upper bound of~\cite{ls07}.
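The classical simultaneous-messages protocol from the proof of Theorem~\ref{thm:smp} (part 1) can be illustrated with the following self-contained Python sketch, in which the referee forms the signed empirical estimates, clips them at zero, and renormalizes exactly as in Lemma~\ref{lem:estimated-distribution}. The toy affine model and all names are our own, and we work with a single fixed input pair for brevity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Toy affine model p = q+ p+ - q- p- for one fixed input pair (x, y).
# p+ and p- are assumed to be local, i.e. samplable from shared randomness
# with no communication; here we simply sample them directly.
p       = np.array([[0.4, 0.1], [0.2, 0.3]])
p_minus = np.full((2, 2), 0.25)
q_plus, q_minus = 2.0, 1.0                     # nu-like weight: q+ + q- = 3
p_plus  = (p + q_minus * p_minus) / q_plus     # consistent p+ summing to 1
assert np.isclose(p_plus.sum(), 1.0) and (p_plus >= 0).all()

def sample_runs(q, T):
    """T independent outcomes (a, b) of a protocol with outcome law q."""
    idx = rng.choice(q.size, size=T, p=q.ravel())
    return [divmod(i, q.shape[1]) for i in idx]

def referee(runs_plus, runs_minus, shape=(2, 2)):
    """Estimate p(a,b), clip at 0, renormalize (cf. the Lemma), and output
    one sample; None stands for the dummy outcome."""
    T = len(runs_plus)
    R = np.zeros(shape)
    for a, b in runs_plus:  R[a, b] += q_plus / T
    for a, b in runs_minus: R[a, b] -= q_minus / T
    R = np.clip(R, 0.0, None)                  # R_v = max(0, Q_v)
    if R.sum() > 1:
        S, s_dummy = R / R.sum(), 0.0
    else:
        S, s_dummy = R, 1.0 - R.sum()
    probs = np.append(S.ravel(), s_dummy)
    i = rng.choice(probs.size, p=probs)
    return None if i == probs.size - 1 else divmod(i, shape[1])

T = 4000
outputs = [referee(sample_runs(p_plus, T), sample_runs(p_minus, T))
           for _ in range(1000)]               # empirically close to p
\end{verbatim}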
Together with the bound between $\tilde{\nu}$ and $\tilde{\gamma}_2$ from Section~\ref{sec:gamma-vs-nu}, and the lower bounds on communication complexity from Section~\ref{sec:lower-bounds}, Theorems~\ref{thm:smp} and~\ref{thm:smp-boolean} immediately imply the following corollaries. \begin{cor} \label{cor:smp} Let $f:\{0,1\}^n \times \{0,1\}^n \rightarrow \{0,1\}$. For any $\epsilon, \delta < 1/2$, if $Q_\epsilon^\mathrm{ent}(f) \leq q$, then \begin{enumerate} \item $R_{\delta}^{\parallel,\mathrm{pub}}(f) \leq K_G^2 \cdot 2^{2q+2} \ln(\frac{1}{\delta})\frac{1}{(1{-}2\epsilon)^2} $, \item $Q_{\delta}^{\parallel}(f)\leq O\left(\log(n) 2^{4q}\ln(\frac{1}{\delta})\frac{1}{(1{-}2\epsilon)^4}\right)$. \end{enumerate} Let $\mathbf{p}\in {\cal C}$ be a distribution with inputs in $\mathcal{X} \times \mathcal{Y}$ with $|\mathcal{X} \times \mathcal{Y}| \leq 2^n$, and outcomes in $\mathcal{A} \times \mathcal{B}$ with $A=|\mathcal{A}|, B=|\mathcal{B}|$. For any $\epsilon, \delta < 1/2$, if $Q_\epsilon^{\mathrm{ent}}(\mathbf{p}) \leq q$, then \begin{enumerate} \addtocounter{enumi}{2} \item $R_{\epsilon + \delta}^{\parallel,\mathrm{pub}}(\mathbf{p}) \leq O\left(2^{4q}\frac{(AB)^4}{\delta^2} \ln^2\left[\frac{AB}{\delta}\right] \right)$, \item $Q_{\epsilon+\delta}^{\parallel}(\mathbf{p}) \leq O\left(2^{8q}\ \frac{(AB)^9}{\delta^4}\ln\left[\frac{AB}{\delta}\right]\log(n)\right)$. \end{enumerate} \end{cor} The first two items can be compared to results of Yao, Shi and Zhu, and Gavinsky \textit{et al.}~\cite{yaofinger03,shi05,gkr06}, who show how to simulate any (logarithmic) communication protocol for Boolean functions in the simultaneous messages model, with an exponential blowup in communication. The last two items extend these results to arbitrary non-signaling distributions. In particular, Item~3 gives, in the special case $q=0$, that is, $\mathbf{p}\in {\cal Q}$, a much simpler proof of the constant upper bound on approximating quantum distributions, which Shi and Zhu prove using sophisticated techniques based on diamond norms~\cite{shi05}. Moreover, Item~3 is much more general, as it also allows one to simulate protocols requiring quantum communication in addition to entanglement. As for Item~4, it also has new interesting consequences.
For example, it implies that quantum distributions ($q=0$) can be approximated with logarithmic quantum communication in the simultaneous messages model, using no additional resources such as shared randomness, and regardless of the amount of entanglement in the bipartite state measured by the two parties. \section{Conclusion and open problems} By studying communication complexity in the framework provided by the study of quantum non-locality (and beyond), we have given very natural and intuitive interpretations of the otherwise very abstract lower bounds of Linial and Shraibman. Conversely, bridging this gap has allowed us to port these very strong and mathematically elegant lower bound methods to the much more general problem of simulating non-signaling distributions. Since many communication problems may be reduced to the task of simulating a non-signaling distribution, we hope to see applications of this lower bound method to concrete problems for which standard techniques do not apply, in particular for problems that are not Boolean functions, such as non-Boolean functions, partial functions, or relations. Let us also note that our method can be generalized to multipartite non-signaling distributions, and will hopefully lead to applications in the number-on-the-forehead model, for which quantum lower bounds seem hard to prove. In the case of binary distributions with uniform marginals (which includes in particular Boolean functions), Tsirelson's theorem (Theorem~\ref{thm:tsirelson}) and the existence of Grothendieck's constant (Proposition~\ref{prop:grothendieck}) imply that there is at most a constant gap between $\nu$ and $\gamma_2$. For this reason, it was known that Linial and Shraibman's factorization norm lower bound technique gives lower bounds of the same order for classical and quantum communication (note that this is also true for the related discrepancy method). Despite the fact that Tsirelson's theorem and Grothendieck's inequality are not known to extend beyond the case of Boolean outcomes with uniform marginals, we have shown that in the general case of distributions, there is also a constant gap between $\tilde{\nu}$ and $\tilde{\gamma}_2$. While this may be seen as a negative result, this also reveals interesting information about the structure of the sets of local and quantum distributions. In particular, this could have interesting consequences for the study of non-local games. \section*{Acknowledgements} We are grateful to Benjamin Toner for pointing us towards the existing literature on non-signaling distributions as well as very useful discussions of the Linial and Shraibman lower bound on communication complexity. We also thank Peter H{\o}yer, Troy Lee, Oded Regev, Mario Szegedy, and Dieter van Melkebeek, with whom we had many stimulating discussions. Part of this work was done while J. Roland was affiliated with FNRS Belgium and U.C. Berkeley. The research was supported by the EU 5th framework program QAP, the ANR Blanc AlgoQP and the ANR D\'efis QRAC.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} According to an ancient Roman author, Pliny the Elder, \textit{the very art of painting originates from trailing the edge of shadow}. If art can be defined as creating visual or auditory elements that express the author's imaginative or technical skill, then shadow art represents those skills in play with shadows. Most of us have seen or at least heard of "someone" making "something" interesting out of shadows. However, it is usually limited to people playing around with their fingers in front of a lamp, making shadows of rabbits or horses on the wall. In this work, we show how differentiable rendering can be used to generate some amazing 3D sculptures which cast mind-boggling shadows when lit from different directions. Figure \ref{fig:examples} (a) shows the cover of the book \textit{Gödel, Escher, Bach} by \textit{Douglas Hofstadter} that features blocks casting shadows of different letters when seen from different sides. \textit{Kumi Yamashita} - one of the most prominent contemporary artists - demonstrated that seemingly simple objects arranged in a certain pattern cast startling silhouettes when lit from just the right direction. An exclamation point becomes a question mark when lit from its side (Figure \ref{fig:examples} (b)) and a bunch of aluminum numbers placed in a 30-story office add up to an image of a girl overlooking the crowd below (Figure \ref{fig:examples} (c)). All of these, and other pieces made by \textit{Kumi Yamashita}, not only please our eyes, but also inspire emotion and pose intriguing questions. \textit{Tim Noble} and \textit{Sue Webster} have been presenting this type of artwork since 1997, creating projected shadows of people in various positions (Figure \ref{fig:examples} (d)). This specifically arranged ensemble shows how readily available objects can cast the clearest of illusions of clearly recognizable scenes (Figure \ref{fig:examples} (e)). Figure \ref{fig:examples} (f) shows the aquarium of floating characters by \textit{Shigeo Fukuda} where the shadows of the fish reveal their names in kanji characters. Even with such fascinating examples, the current state of shadow art seems to be well described by Louisa May Alcott, who says \textit{"Some people seemed to get all sunshine, and some all shadow…"}. Shadow art was first introduced to the vision and graphics community by \cite{mitra2009shadow}, who formally addressed the problem in an optimization framework. Since then, no significant progress has been observed in this direction. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{images/examples.pdf} \caption{Examples of shadow art sculptures by (a) \textit{Douglas Hofstadter}, (b, c) \textit{Kumi Yamashita}, (d, e) \textit{Tim Noble} and \textit{Sue Webster}, and (f) \textit{Shigeo Fukuda}.} \label{fig:examples} \end{figure} \textbf{The question:} Can we develop a method that learns or optimizes 3D sculptures that can generate such artistic effects through their shadows? In this work, we attempt to answer this question through the use of \textit{Differentiable Rendering}. Here, instead of trying to realistically render a scene of our creation, we try to reconstruct a representation of a scene from one or more images of it \cite{kato2020differentiable}. Our work is mostly inspired by examples of shadow art shown in Figure \ref{fig:examples}.
Specifically, our objective is to generate 3D shadow art sculptures that cast different shadows (of some recognizable objects) when lit from different directions using a differentiable renderer. \textbf{Why differentiable rendering?} Most learning-based methods for 3D reconstruction require accurate 3D ground truths as supervision for training. However, in our case, all we have is a set of desired shadow images. Differentiable rendering based methods require only 2D supervision in the form of single or multi-view images to estimate the underlying 3D shape, thus eliminating the need for any 3D data collection and annotation. \textbf{Contributions.} The following are the major contributions of this work. \begin{itemize} \item We introduce the creation of 3D shadow art sculptures that cast different shadows when lit from different directions using differentiable rendering, solely from the input shadow images and the corresponding projection information. \item We demonstrate the efficacy of deploying a differentiable rendering pipeline over voxel and mesh based representations to generate shadow art sculptures. \item We show that the proposed framework can create artistic effects that go beyond those seen in contemporary art pieces by generating 3D sculptures using half-toned face images and sketches drawn from multiple viewpoints. \item To the best of our knowledge, ours is the first work to address shadow art using differentiable rendering. \end{itemize} \textbf{Organization.} We start by reviewing the relevant related work in Section \ref{sec:related_work}. We discuss the problem statement more formally and describe both voxel and mesh based differentiable rendering pipelines in Section \ref{sec:method}. Section \ref{sec:loss_function} describes the loss functions deployed for optimization. In Section \ref{sec:experiments}, we perform a qualitative and quantitative analysis of the results and compare the performance of the proposed framework with that of the state-of-the-art method. Section \ref{sec:applications} describes interesting artistic effects and applications of shadow art before we conclude the work in Section \ref{sec:conclusion}. \section{Related Work} \label{sec:related_work} Shadows play an essential role in the way we perceive the world and have been central in capturing the imagination of many artists, including stage performers. Several artists have used manual, trial-and-error style approaches to create 3D shadow sculptures. However, with the advent of digital design technology, the need for an automated framework is evident. \textbf{Shadow Art.} Shadows in many computer graphics and computer vision applications have been studied from both perceptual (artist's) and mathematical (programmer's) points of view. It started with studying the effect of shadow quality on the perception of spatial relationships in a computer generated image \cite{wanger1992effect, wanger1992perceiving}. Pellacini \emph{et al.} developed an interface for interactive cinematic shadow design that allows the user to modify the positions of light sources and shadow blockers by specifying constraints on the desired shadows \cite{pellacini2002user}. The idea of computing the shadow volume from a set of shadow images evolved after that. This is similar to the construction of a visual hull used for 3D reconstruction.
The visual hull is the intersection volume of a set of generalized cones constructed from silhouette images and the corresponding camera locations \cite{laurentini1994visual}. Sinha and Pollefeys \cite{sinha2005multi} studied the reconstruction of closed continuous surfaces from multiple calibrated images using min-cuts with strict silhouette constraints. \textbf{Relation with the state-of-the-art method.} The work closest to ours is by Mitra \emph{et al.} \cite{mitra2009shadow}. They described shadow art more formally, introducing a voxel-based optimization framework that recovers the 3D shape from arbitrary input images, deforming the input shadow images to handle inherent image inconsistencies. In this work, we demonstrate the potential of differentiable rendering in generating 3D shadow sculptures solely from arbitrary shadow images, without any explicit input image deformation. Although the associated 3D object might not exist in the real world, the method still creates shadow sculptures that go beyond those seen in contemporary art pieces, casting physically realizable shadows when lit from appropriate directions. \textbf{Differentiable Rendering.} We briefly review methods that learn the 3D geometry via differentiable rendering. These methods are categorized based on the underlying representation of 3D geometry: point clouds, voxels, meshes, or neural implicit representations. In this work, we primarily focus on voxel and mesh based representations. Several methods operate on voxel grids \cite{lombardi2019neural, nguyen2018rendernet, paschalidou2019superquadrics, tulsiani2017multi}. Paschalidou \emph{et al.} \cite{paschalidou2019superquadrics} and Tulsiani \emph{et al.} \cite{tulsiani2017multi} propose a probabilistic ray potential formulation. Although they provide a solid mathematical framework, all intermediate evaluations need to be saved for backpropagation. This limits these approaches to relatively small-resolution voxel grids. On one hand, Sitzmann \emph{et al.} \cite{sitzmann2019scene} have inferred implicit scene representations from RGB images via an LSTM-based differentiable renderer, and Liu \emph{et al.} \cite{liu2019learning} perform max-pooling over the intersections of rays with a sparse number of supporting regions from multi-view silhouettes. On the other hand, \cite{niemeyer2020differentiable} show that volumetric rendering is inherently differentiable for implicit representations and hence, no intermediate results need to be saved for the backward pass. OpenDR \cite{loper2014opendr} roughly approximates the backward pass of the traditional mesh-based graphics pipeline. Liu \emph{et al.} \cite{liu2019soft} proposed Soft Rasterizer, which replaces the rasterization step with a soft version to make it differentiable, uses a deformable template mesh for training, and yields compelling results in reconstruction tasks. We deploy this in our mesh-based differentiable rendering pipeline for rasterization. Both voxel and mesh based representations have their own strengths and weaknesses.
In this work, we describe the differentiable rendering optimization framework for both these 3D representations and discuss which model fits best in different scenarios for creating plausible shadow art sculptures. \begin{figure*}[ht] \centering \includegraphics[width=0.9\linewidth]{images/block_diagram.png} \caption{Information flow in the proposed mesh-based differentiable rendering pipeline.} \label{fig:block_diagram} \end{figure*} \section{Method} \label{sec:method} \subsection{Problem Formulation} \label{prob_formulation} The key idea of our work is to generate an artistic 3D sculpture $\mathcal{S}$ that casts $N$ different shadows when lit from $N$ different directions using a differentiable rendering based optimization pipeline. The prime focus here is to create interesting shadow art effects using the 3D sculpture $\mathcal{S}$. The input to the pipeline is a set $\mathcal{X} = \{X_{1}, X_{2}, ..., X_{N}\}$ of shadow configurations $X_{i} = (I_{i}, P_{i})$. $I_{i}$ represents the target shadow image and $P_{i}$ is the corresponding projection information. The shadow of an object can be regarded as its projection on a planar surface. Assuming directional lighting, this projection is an \textit{orthographic projection} when the surface is perpendicular to the lighting direction and a \textit{perspective projection} otherwise \cite{Abbott1971}. Obtaining the shadow of an object is equivalent to finding the corresponding silhouette captured by a camera pointing in the same direction as the light source. Therefore, the shadow image $I_{i}$ is essentially a silhouette. From here on, we shall use the terms silhouette image and shadow image interchangeably. The shadow art problem is similar to a multi-view 3D reconstruction problem \cite{lee2003silhouette, mulayim2003silhouette}, where we try to estimate the 3D structure of an object given its $N$ silhouette views. However, the key differences in shadow art are: (i) the $N$ views can correspond to arbitrary silhouettes (not necessarily of the same object) and (ii) the learned 3D sculpture may bear no resemblance to any real-world object and may just be an abstract art piece that casts the desired shadows when lit from appropriate directions. Undoubtedly, there exist multiple 3D shapes that can cast the same set of shadows. However, our concern is just to learn one such 3D sculpture that can create the desired artistic effects through its shadows. \subsection{System Overview} Given the shadow configuration $\mathcal{X} = \{X_{i} = (I_{i}, P_{i})| i=1,2,..., N\}$ as input to the pipeline, the objective is to learn the underlying 3D sculpture $\mathcal{S}$, as described earlier. The projection information $P_{i}$ corresponds to the camera position (and hence the light source position) associated with the $i^{th}$ shadow image $I_{i}$, such that $P_{i} = (\mathbf{R}_{i}, \mathbf{t}_{i})$. Here, $\mathbf{R}_{i}$ and $\mathbf{t}_{i}$ are the 3D rotation and translation of the camera, respectively. We start by initialising $\mathcal{S}$ with a standard geometry, which is further optimized by minimizing image-based losses such that the rendered silhouette images satisfy $\widetilde{I}_{i} = I_{i}$ for all $i = 1, 2, ..., N$. The prime reason for using differentiable rendering is that it allows gradient flow directly from the images back to the parameters of $\mathcal{S}$ to optimize it in an iterative fashion. In other words, it does not require any explicit 3D supervision and optimizes the 3D shape solely from image based losses.
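For illustration, the following PyTorch sketch (ours; the actual pipelines described below are built on the differentiable renderers of PyTorch3D \cite{ravi2020pytorch3d}) shows the bare-bones optimization loop implied by this overview: a sigmoid-activated density grid is optimized so that simple orthographic silhouette projections along the three coordinate axes match three target shadow images, using image-based losses only. The transmittance-style projection stands in for a full differentiable volume renderer, and the random targets are placeholders.
\begin{verbatim}
import torch

G = 64
V = torch.zeros(G, G, G, requires_grad=True)        # learnable raw densities
targets = [(torch.rand(G, G) > 0.5).float() for _ in range(3)]  # placeholders

def silhouette(density, axis):
    # A pixel is shadowed if any voxel along its ray is dense; expressed
    # smoothly (and differentiably) as 1 - prod(1 - density) along the ray.
    return 1.0 - torch.prod(1.0 - density.clamp(1e-6, 1 - 1e-6), dim=axis)

opt = torch.optim.Adam([V], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    density = torch.sigmoid(V)                      # restrict values to [0, 1]
    loss = 0.0
    for axis, target in enumerate(targets):
        pred = silhouette(density, axis)
        loss = loss + 10.0 * (pred - target).abs().mean() \
                    + 10.0 * ((pred - target) ** 2).mean()  # L1 + L2 image loss
    loss.backward()                                 # gradients flow back to V
    opt.step()
\end{verbatim}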
For further simplicity, let the set of target shadow images and the associated projection information be denoted as $\mathcal{I}$ and $\mathcal{P}$, respectively, such that $\mathcal{I} = \{I_{1}, I_{2},..., I_{N}\}$ and $\mathcal{P} = \{P_{1}, P_{2},..., P_{N}\}$. Further, let $\widetilde{\mathcal{I}} = \{\widetilde{I}_{1}, \widetilde{I}_{2},..., \widetilde{I}_{N}\}$ be the set of shadow images obtained from the learned 3D sculpture $\mathcal{S}$ as per the projections $\mathcal{P}$. In this work, we consider two common representations for 3D shapes, i.e., \textit{voxel} and \textit{mesh} based representations. In the following sections, we elaborate on the optimization pipelines for voxel and mesh based representations of the 3D object to create visually plausible shadow art using differentiable rendering. \subsection{Voxel Based Optimization} In this section, we look at a differentiable rendering pipeline that uses voxels to represent the 3D geometry. A voxel is a unit cube representation of a 3D space. The 3D space is quantized into a grid of such unit cubes, parameterized by an $N$-dimensional vector containing information about the volume occupied in 3D space. Additionally, it encodes occupancy, transparency, color, and material information. Even though occupancy and transparency probabilities (in the range $[0,1]$) are different, they can be interpreted in the same way in order to maintain differentiability during the ray marching operation \cite{kato2020differentiable}. A typical rendering process involves collecting and aggregating the voxels located along a ray and assigning a specific color to each pixel based on the transparency or the density value. All the voxels that are located along a ray projecting to a pixel are taken into account when rendering that pixel. However, our objective is to do the inverse, i.e., to find the 3D geometry associated with silhouettes corresponding to different directions. We assume that the 3D object $\mathcal{S}$ is enclosed in a 3D cube of known size centered at the origin. Hence, $\mathcal{S}$ can be defined by a learnable 3D tensor $V$ that stores the density values for each voxel. We initialize $V$ with all ones. The color value for each voxel is set to 1 and is kept fixed in the form of a color tensor $C$. Next, we render $\mathcal{S}$ using the differentiable volumetric rendering method described in \cite{ravi2020pytorch3d}. To restrict the voxel density values to the range $[0,1]$, $V$ is passed through a sigmoid activation function $(\sigma)$ to obtain $\widetilde{V}$, as described in Equation \ref{eq:sigmoid}. \begin{equation}\label{eq:sigmoid} \centering \widetilde{V} = \sigma(V) \end{equation} We then pass $\widetilde{V}$ through the differentiable volume renderer $\mathcal{R}_{vol}$, along with the fixed color tensor $C$ and the associated projection information $\mathcal{P}$, to obtain the set of corresponding rendered images $\widetilde{\mathcal{I}}$, as described in Equation \ref{eq:voxrender}. \begin{equation} \label{eq:voxrender} \centering \widetilde{\mathcal{I}} = \mathcal{R}_{vol}(\widetilde{V}, C, \mathcal{P}) \end{equation} The voxel densities $V$ are optimized by minimizing the image level loss between the set of rendered shadow images $\widetilde{\mathcal{I}}$ and the corresponding target shadows in $\mathcal{I}$. The image level loss $\mathcal{L}_{img}$ is a weighted combination of $L_{1}$ and $L_{2}$ losses, as described in Equation \ref{eq:imgloss}.
\begin{equation}\label{eq:imgloss} \mathcal{L}_{img} = \lambda_{1}\mathcal{L}_{L_{1}} + \lambda_{2}\mathcal{L}_{L_{2}} \end{equation} Here, $\lambda_{1} = 10.0$ and $\lambda_{2} = 10.0$ are the weights associated with the $L_{1}$ and $L_{2}$ losses, respectively. The resulting voxel based representation of $\mathcal{S}$ can finally be converted to a 3D mesh, making it suitable for 3D printing. One simple way to achieve this is by creating faces around each voxel having density greater than a certain threshold value (as described in \cite{ravi2020pytorch3d}). \subsection{Mesh Based Optimization} In this section, we also propose to use mesh based differentiable rendering to meet our objective. The entire workflow is described in Figure \ref{fig:block_diagram}. The 3D object $\mathcal{S}$ can be represented as a mesh $\mathcal{M}(V, F)$. Here, $V$ is a set of vertices connected by a set of triangular faces $F$ that define the surface of $\mathcal{S}$. We start by initializing a source mesh $\mathcal{S}_{src}$ = $\mathcal{M}(V_{src}, F_{src})$ with an icosphere consisting of $|V_{src}|$ vertices and $|F_{src}|$ faces. The idea is to learn the per-vertex displacements $V_{d}$ that deform $\mathcal{S}_{src}$ into the final desired mesh, which casts the desired shadows (silhouettes) when lit from appropriate directions. This is achieved by rendering the deformed mesh $\mathcal{S}_{def} = \mathcal{M}(V_{def}, F_{def})$ through a mesh-based differentiable silhouette renderer $\mathcal{R}_{silh}$ (as described in \cite{ravi2020pytorch3d}) from the associated projection $\mathcal{P}$ such that, \begin{equation} \centering \begin{split} & V_{def} = V_{src} + V_{d} \\ & F_{def} = F_{src} \\ & \widetilde{\mathcal{I}} = \mathcal{R}_{silh}(\mathcal{S}_{def}, \mathcal{P}) \end{split} \label{eq:silhrender} \end{equation} \subsubsection{Loss Function}\label{sec:loss_function} The source mesh is optimized by minimizing the image-level loss $\mathcal{L}_{img}$ (described in Equation~\ref{eq:imgloss}) and the normal consistency loss, and by imposing Laplacian and edge-length regularisation.\\ \\ \textit{\textbf{Normal consistency.}} We use the normal consistency loss to ensure smoothness in the resulting 3D sculpture. For a mesh $\mathcal{M}(V,F)$, let $e = (\mathbf{v}_{x}, \mathbf{v}_{y})$ be the connecting edge of two neighboring faces $f_{x} = (\mathbf{v}_{x}, \mathbf{v}_{y}, \mathbf{a})$ and $f_{y} = (\mathbf{v}_{x}, \mathbf{v}_{y}, \mathbf{b})$, such that $f_{x}, f_{y} \in F$ with normal vectors $\mathbf{n}_{x}$ and $\mathbf{n}_{y}$, respectively. If $\widetilde{\mathcal{E}}$ is the set of all such connecting edges $e$ and $|F|$ is the total number of faces in the mesh, the normal consistency over all such neighbouring faces $f_{x}$ and $f_{y}$ is given as per Equation \ref{eq:normal_cons}. \begin{equation}\label{eq:normal_cons} \centering \mathcal{L}_{norm} = \frac{1}{|F|}\sum_{e \in \widetilde{\mathcal{E}}}(1 - \cos(\mathbf{n}_{x}, \mathbf{n}_{y})) \end{equation} where \begin{equation*} \centering \begin{split} & \mathbf{n}_{x} = (\mathbf{v}_{y} - \mathbf{v}_{x}) \times (\mathbf{a} - \mathbf{v}_{x})\\ & \mathbf{n}_{y} = (\mathbf{b} - \mathbf{v}_{x}) \times (\mathbf{v}_{y} - \mathbf{v}_{x}). \end{split} \end{equation*} \\ \textit{\textbf{Laplacian regularisation.}} In order to prevent the model from generating large deformations, we impose uniform Laplacian smoothing \cite{nealen2006laplacian}, as described by Equation \ref{eq:laplacian}.
\begin{equation}\label{eq:laplacian} \centering \mathcal{L}_{lap} = \frac{1}{|V|}\sum_{i=1}^{|V|}\left(\bigg\lVert\sum_{\mathbf{v}_{j} \in \mathcal{N}(\mathbf{v}_{i})}w_{ij}\mathbf{v}_{j} - \mathbf{v}_{i}\bigg\rVert_{1}\right) \end{equation} Here, $|V|$ is the number of vertices in the mesh $\mathcal{M}$ and $\mathcal{N}(\mathbf{v}_{i})$ is the neighbourhood of vertex $\mathbf{v}_{i}$. \begin{equation*} \centering w_{ij} = \frac{\omega_{ij}}{\sum_{k \in \mathcal{N}(i)} \omega_{ik}} \end{equation*} For uniform Laplacian smoothing, $\omega_{ij} = 1$ if $(\mathbf{v}_{i}, \mathbf{v}_{j})$ form an edge, $\omega_{ij} = -1$ if $i = j$, and $\omega_{ij} = 0$ otherwise.\\ \\ \textit{\textbf{Edge length regularisation.}} Edge-length regularisation is included to prevent the model from generating flying vertices and is given by Equation \ref{eq:edge_len}. \begin{equation}\label{eq:edge_len} \centering \mathcal{L}_{edge} = \sum_{i=1}^{|V|}\sum_{\mathbf{v}_{j} \in \mathcal{N}(\mathbf{v}_{i})}\parallel\mathbf{v}_{i} - \mathbf{v}_{j}\parallel_{2}^{2} \end{equation}\\ Finally, the overall loss function is as described in Equation \ref{eq:overall_loss}. \begin{equation}\label{eq:overall_loss} \centering \mathcal{L}_{total} = \lambda_{a}\mathcal{L}_{img} + \lambda_{b}\mathcal{L}_{norm} + \lambda_{c}\mathcal{L}_{lap} + \lambda_{d}\mathcal{L}_{edge} \end{equation} Here, $\lambda_{a} = 1.6$, $\lambda_{b} = 2.1$, $\lambda_{c} = 0.9$, and $\lambda_{d} = 1.8$ are the weights associated with the losses $\mathcal{L}_{img}$, $\mathcal{L}_{norm}$, $\mathcal{L}_{lap}$, and $\mathcal{L}_{edge}$, respectively. \subsection{Implementation Details} The aforementioned differentiable rendering pipelines are implemented using Pytorch3D \cite{ravi2020pytorch3d}. As an initialisation for the mesh, we use a level 4 icosphere composed of 2,562 vertices and 5,120 faces. For the voxel based rendering pipeline, we assume that the object is inside a cube (a grid of $128 \times 128 \times 128$ voxels) centered at the origin with a side length of 1.7 world units. We train the voxel-based optimization pipeline with custom silhouette images of size $128 \times 128$ for 2000 epochs with a learning rate of $1 \times 10^{-4}$; for the mesh-based pipeline, we set the learning rate to $1 \times 10^{-2}$ and train for 500 epochs. The training is performed on an NVIDIA Quadro RTX 5000 GPU with 16 GB of memory. \section{Experimental Analysis}\label{sec:experiments} In this section, we perform an extensive analysis of the results obtained using the voxel and mesh based differentiable rendering pipelines to create plausible shadow art effects. We start by discussing the evaluation metrics and perform ablation studies to understand the effect of the various loss terms in the design. \subsection{Evaluation Metrics} Following our discussion in Section \ref{prob_formulation}, we assess the quality of the silhouettes (shadow images) obtained through the 3D sculpture $\mathcal{S}$ as per the projections $\mathcal{P}$. To compare the rendered silhouette images with the target silhouette images (representing shadows), we use Intersection over Union (IoU) and Dice Score (DS). Additionally, we need to quantify the quality of the 3D sculpture $\mathcal{S}$ obtained after optimization. Since we do not have any ground truth for the 3D shape and this is an optimization framework, we need a "no-reference" quality metric. Therefore, we decided to use normal consistency evaluated over $\mathcal{S}$ to assess the quality of the mesh.
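As a concrete reference for the regularisation terms above, the following sketch (ours; PyTorch3D provides equivalent built-in mesh losses) evaluates the edge-length and normal-consistency terms for a small triangle mesh. The brute-force adjacency search is for illustration only, and consistently oriented face windings are assumed.
\begin{verbatim}
import torch

def edge_length_loss(verts, faces):
    """Sum of squared edge lengths (cf. Equation for L_edge); shared
    edges are counted once per incident face, as in the double sum."""
    e = torch.cat([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    v0, v1 = verts[e[:, 0]], verts[e[:, 1]]
    return ((v0 - v1) ** 2).sum(dim=1).sum()

def normal_consistency_loss(verts, faces):
    """Mean of 1 - cos(n_x, n_y) over face pairs sharing an edge (cf. L_norm)."""
    n = torch.cross(verts[faces[:, 1]] - verts[faces[:, 0]],
                    verts[faces[:, 2]] - verts[faces[:, 0]], dim=1)
    n = torch.nn.functional.normalize(n, dim=1)
    loss = verts.new_zeros(())
    for i in range(len(faces)):
        for j in range(i + 1, len(faces)):
            if len(set(faces[i].tolist()) & set(faces[j].tolist())) == 2:
                loss = loss + 1.0 - (n[i] * n[j]).sum()  # faces share an edge
    return loss / max(len(faces), 1)

# Toy usage on a tetrahedron; gradients flow back to the vertex positions.
verts = torch.tensor([[0., 0., 0.], [1., 0., 0.],
                      [0., 1., 0.], [0., 0., 1.]], requires_grad=True)
faces = torch.tensor([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
(edge_length_loss(verts, faces) + normal_consistency_loss(verts, faces)).backward()
\end{verbatim}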
\begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{images/res.pdf} \caption{Qualitative and quantitative results on (a) two views (b,c) three orthogonal views, and (d, e) three non-orthogonal views using voxel and mesh-based rendering for shadow art.} \label{fig:res} \end{figure*} \subsection{Ablation Study} Figure \ref{fig:ablation} depicts the qualitative effect of the different loss terms used in the optimization pipeline. The underlying mesh in this figure corresponds to the arrangement shown in Figure \ref{fig:res} (c). The image based loss $\mathcal{L}_{img}$ alone is not sufficient for generating plausible 3D sculptures, as the resulting sculptures suffer from distortions due to flying vertices (spike-like structures in Figure \ref{fig:ablation} (a)) or large deformations. Since we do not have any ground truth for explicit 3D supervision, we examine the effect of including regularisation in the objective function. Figure \ref{fig:ablation} (b) shows that the spikes are reduced by introducing edge-length regularisation. Further, as shown in Figure \ref{fig:ablation} (c), Laplacian smoothing prevents the sculpture from undergoing excessively large deformations. Finally, the normal consistency loss ensures further smoothness in the optimized surface. Figure \ref{fig:ablation} (d) shows the result obtained by applying all the aforementioned regularization terms along with the image based loss. The resulting quality of the mesh validates our choice of loss terms. \begin{figure}[h] \centering \includegraphics[width=\linewidth, height=2.5cm]{images/ablation.pdf} \caption{Qualitative analysis of effect of various loss terms. (a) $\mathcal{L}_{img}$, (b) $\mathcal{L}_{img} + \mathcal{L}_{edge}$, (c) $\mathcal{L}_{img} + \mathcal{L}_{edge} + \mathcal{L}_{lap}$, and (d) $\mathcal{L}_{img} + \mathcal{L}_{edge} + \mathcal{L}_{lap} + \mathcal{L}_{norm}$. } \label{fig:ablation} \end{figure} \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{images/comp.pdf} \caption{Qualitative evaluation of results obtained through (A) shadow art tool in \cite{mitra2009shadow} and (B) our voxel based rendering pipeline. The inconsistencies are highlighted in orange color.} \label{fig:comparison} \end{figure*} \subsection{Qualitative and Quantitative Analysis} In this section, we perform a qualitative and quantitative evaluation on a wide variety of shadow images, including those used in \cite{mitra2009shadow}, to illustrate the versatility of our approach in generating 3D shadow art sculptures represented using both voxels and meshes. For every result in Figure \ref{fig:res} (a)-(d), we show the learned 3D sculptures (voxel and mesh based) along with the respective shadows cast from the specified directions. We could not include the optimized 3D sculpture from \cite{mitra2009shadow} as the associated object file was not downloadable through their optimization tool. We have been able to incorporate both orthogonal (Figure \ref{fig:res} (a, b, c)) and non-orthogonal views (Figure \ref{fig:res} (d) and Figure \ref{fig:teaser} (b)) to obtain shadows that are consistent with the desired target shadow images. For a quantitative comparison, we also report the IoU and Dice score. As depicted in Figure \ref{fig:res}, the IoU and Dice Score are comparable for both voxel and mesh based renderings. However, the corresponding voxel based 3D sculptures are less smooth (lower normal consistency values) than the mesh based 3D sculptures.
It is important to note that the underlying voxel representation has been converted to a mesh representation to compute the normal consistency values. While \cite{mitra2009shadow} have focused only on foreground inconsistencies (marked in orange color), we also show the background inconsistencies (marked in blue color) that appear in some of the rendered shadow images. Ours is an end-to-end optimization approach without any additional editing tool to prune the generated 3D sculpture. In some cases, the mesh based approach is found to produce certain discontinuities near non-convex regions (Figure \ref{fig:res} (b,d)) for at least one view. This is mainly attributed to the inability of the icosphere to handle sharp discontinuities in the desired shape, especially when regularisation has been imposed (Equation \ref{eq:overall_loss}). The voxel based approaches may contain a few outliers (voxels outside the desired 3D shape, as marked in blue in Figure \ref{fig:res} (d)), which is generally not the case with mesh based approaches. However, the mesh based differentiable rendering method lags in handling sharp discontinuities and holes present in the shadow images. While these shortcomings are handled effectively by voxel based methods, they tend to generate discretized 3D sculptures and are often associated with high memory and computational requirements. Overall, the differentiable rendering based optimization for both approaches has been able to generate plausible 3D shadow art sculptures and is observed to outperform \cite{mitra2009shadow} in handling shadow inconsistencies to a large extent, without having to explicitly deform the desired shadow images. \begin{figure*}[ht] \centering \includegraphics[width=\linewidth]{images/faces.pdf} \caption{A seemingly random voxel soup creates three distinct shadow images of (a) \textit{Albert Einstein}, \textit{Nikola Tesla}, and \textit{APJ Abdul Kalam}, (b) \textit{Minions}, and (c) \textit{Ironman}.} \label{fig:faces} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=\linewidth]{images/3d_recon.pdf} \caption{3D reconstruction of (a) flower vase, (b) pen-stand, and (c) coffee mug using the associated hand drawn sketches from three different views.} \label{fig:3D_recon} \end{figure*} \subsection{Comparison with the State-of-the-art method} We show a qualitative comparison of the results obtained using our voxel based differentiable rendering pipeline and the voxel based optimization tool presented in \cite{mitra2009shadow}, without any deformation to the target shadow images. In Figure \ref{fig:comparison}, we observe that the shadows rendered using the proposed pipeline are highly consistent with the desired target shadows, compared to those produced by \cite{mitra2009shadow}. The authors of \cite{mitra2009shadow} argue that finding a consistent configuration for a given choice of input images might be impossible and hence propose to introduce deformations in the input images so as to achieve consistency of the rendered shadow images with the desired ones. However, the differentiable rendering based optimization can handle inconsistencies without causing any explicit change in the target shadow images. \section{Applications}\label{sec:applications} In this section, we show some additional artistic shadow art creations and an extension to yet another application that can also benefit from our optimization approach.
Figure \ref{fig:faces} depicts the creation of the faces of well-known scientists around the world and of movie characters like \textit{Minions} and \textit{Ironman}, demonstrating the strength of the differentiable rendering based optimization approach in handling complex objects or scenes consistently. In addition to the binary silhouette images, half-toned images can also be used to generate 3D shadow art sculptures, as shown in Figure \ref{fig:teaser}. Another interesting extension is towards sketch-based modeling \cite{olsen2009sketch}, where we use hand-drawn sketches of a shape from different viewpoints to automatically create the underlying 3D object. We demonstrate the creation of a flower vase (Figure \ref{fig:3D_recon} (a)), a pen-stand (Figure \ref{fig:3D_recon} (b)), and a coffee mug (Figure \ref{fig:teaser} (c)) solely from hand-drawn sketches from three different views. \section{Conclusion}\label{sec:conclusion} We have introduced an optimization framework for generating 3D shadow art sculptures from a set of shadow images and the associated projection information. The key idea is to explore the strength of differentiable rendering in creating visually plausible and consistent shadows of rigid objects, faces, and animated movie characters by generating the associated 3D sculpture. We have discussed both voxel and mesh-based rendering pipelines and have identified the benefits of each of them for the task at hand. Additionally, we have demonstrated the applicability of the proposed framework in reconstructing 3D shapes using their sketches drawn from three different viewpoints. At present, we have primarily considered shadows associated with static sculptures, which are hence themselves static in nature. Dynamic shadow art can also be explored in the near future. \newpage {\small \bibliographystyle{ieee_fullname}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In the standard framework of structure formation in a $\Lambda$CDM universe, galaxies are predicted to form and evolve in dark matter halos \citep{1978White}. To extract cosmological information and understand galaxy formation from observed galaxy clustering statistics, it is critical to correctly model the connection between galaxies and their underlying dark matter halos. The most popular and efficient model of the galaxy-halo connection for cosmological studies is the Halo Occupation Distribution model \citep[HOD; e.g.][]{2000Peacock, 2001Scoccimarro, 2002Berlind, 2005Zheng, 2007bZheng}. The HOD is an empirical model that makes the assumption that all galaxies live inside dark matter halos, and links galaxy occupation to specific halo properties. The most popular form of the HOD assumes that galaxy occupation is determined {\it solely} by halo mass, an assumption that rests on the long-standing and widely accepted theoretical prediction that halo mass is the attribute that most strongly correlates with the halo abundance and halo clustering as well as the properties of the galaxies residing in it \citep[][]{1978White, 1984Blumenthal}. However, there is mounting evidence that the mass-only HOD model is insufficient in accurately capturing the observed galaxy clustering on moderate to small scales (around and less than $10 h^{-1}$Mpc). A series of studies employing hydrodynamical simulations and semi-analytic models have found clear evidence that galaxy occupation correlates with secondary halo properties beyond just halo mass \citep[e.g.][]{2006Zhu, 2018Artale, 2018Zehavi, 2019Bose, 2019Contreras, 2020Hadzhiyska, 2020Xu}. This phenomenon is commonly known as \textit{Galaxy Assembly Bias}, or just assembly bias hereafter. In \citet{2021Yuan}, we present observational evidence for galaxy assembly bias by analyzing the full-shape redshift-space clustering of BOSS CMASS galaxies \citep[][]{2011Eisenstein, 2013Dawson}. We found that by including assembly bias and a secondary dependence on the local environment, the HOD model makes a significantly better prediction of the observed redshift-space clustering. We also found that the predicted galaxy-galaxy lensing signal also becomes significantly more consistent with data, thus potentially alleviating the ``lensing is low'' tension, where the observed lensing signal was consistently lower than model predictions by $30-40\%$ \citep[][]{2017Leauthaud, 2019bLange}. All these studies combine to refute the mass-only ansatz of the basic HOD model, demanding a set of robust physically-motivated extensions to improve the HOD's predictive power. As shown in \citet{2021Yuan}, a key challenge with extended HODs is computational efficiency. The secondary dependencies significantly increase the complexity of the model. Additionally, in order to produce more physical galaxy distributions, we adopt a particle-based approach, where we directly link galaxies to the dark matter particles, as opposed to estimating galaxy positions according to analytical models. However, due to the large number of particles ($>100$ times that of the halos) in a high-resolution simulation box, a particle-based approach also significantly increases the computational cost. The combination of using particles and introducing secondary dependencies can make the HOD code too slow for comprehensive parameter space explorations. 
Indeed, a shortcoming of the \citet{2021Yuan} analysis is that instead of sampling the full extended HOD posterior space, we opted for a much cheaper global optimization routine, which could potentially miss interesting regions of parameter space, particularly in high-dimensional spaces. Moreover, future cosmological applications of the extended HODs will likely require HOD sampling at a range of different cosmologies, further increasing the computational cost. Thus, performance is of high priority for a robust extended HOD code. With the advent of a new generation of cosmological surveys with much greater depth, such as the extended Baryon Oscillation Spectroscopic Survey \citep[eBOSS;][]{2016Dawson} and the Dark Energy Spectroscopic Instrument \citep[DESI;][]{2016DESI}, there arises a new opportunity to simultaneously utilize multiple galaxy tracer types to probe structure, the so-called multi-tracer analysis \citep[e.g.][]{2020Alam, 2021Zhao}. There are three types of galaxies that are most relevant in current and upcoming cosmological surveys: luminous red galaxies (LRGs), which tend to be massive, spheroidal, and quenched; emission line galaxies (ELGs), which tend to be less massive, disk-like, and star-forming; and quasi stellar objects (QSOs), whose emissions are dominated by their active galactic nuclei (AGNs). Multi-tracer studies can not only bring additional statistical power to cosmology studies, but also leverage the potential difference in the clustering of different galaxy types to constrain the physics of galaxy formation. To enable such multi-tracer analyses, it is extremely helpful to devise a multi-tracer HOD model, where we simultaneously assign multiple types of galaxies to each halo. In this paper, we introduce the \textsc{AbacusHOD}\ framework and the accompanying \textsc{Python} code module. This HOD framework is extended, efficient, and multi-tracer. It is developed in conjunction with the state-of-the-art \textsc{AbacusSummit} simulations \citep{2021Maksimova} and designed specifically with DESI in mind. However, the model framework is also broadly applicable to other simulations and surveys. This paper also presents two applications of the \textsc{AbacusHOD}\ framework and the \textsc{AbacusSummit} simulations to observational data, demonstrating its effectiveness in modeling observed galaxy clustering down to highly non-linear scales. We should also mention that there have been several other important HOD analysis frameworks aimed at efficiently deriving HOD fits. One is the so-called emulator approach \citep[e.g.][]{2019DeRose, 2019Zhai, 2019Wibking, 2019bWibking}, where a surrogate model is used to approximate the complex clustering predictions. This approach saves computation by training the surrogate model on a modest number of HOD realizations. However, the success of an emulator model relies delicately on the choice of the surrogate model, while the accuracy often falls off drastically outside the training range. Another approach that avoids emulation is often referred to as tabulation (first described in \citet{2016Zheng}; a popular example is \textsc{TabCorr}\footnote{\url{https://github.com/johannesulf/TabCorr}}), which focuses on minimizing the cost of re-computing clustering measurements. These approaches pre-tabulate the halo pair counts and then compute each HOD evaluation as a re-weighted sum of the tabulated halo pair counts. While tabulation can make HOD evaluations quite fast, it is limited to the types of statistics and the binning chosen beforehand.
HOD extensions and introducing particles also significantly increase the complexity of tabulated approaches. The \textsc{AbacusHOD}\ framework aims to bypass the limits of emulators and tabulation by directly optimizing the HOD evaluation and clustering calculation, ensuring maximum flexibility in the HOD model itself. The paper is outlined as follows: In Section~\ref{sec:model}, we describe the theory behind the \textsc{AbacusHOD}\ framework. In Section~\ref{sec:simulation}, we briefly describe the \textsc{AbacusSummit} simulations that \textsc{AbacusHOD}\ is currently built on. In Section~\ref{sec:algorithm}, we present the core algorithm and its many optimizations. In Section~\ref{sec:application}, we showcase the first example application of \textsc{AbacusHOD}, specifically to model the full-shape redshift-space clustering of CMASS galaxies. In Section~\ref{sec:eboss}, we showcase the second example application, where we model the clustering of the multi-tracer eBOSS sample. In Section~\ref{sec:discuss}, we discuss some interesting aspects of our analyses, especially with regard to the ``lensing is low'' issue, and compare to previous analyses. Finally, we conclude in Section~\ref{sec:conclude}. \section{The extended HOD framework} \label{sec:model} In this section, we introduce the extended multi-tracer HOD framework, starting with the baseline HOD model for the three dark-time tracers expected for DESI: LRG, ELG, and QSO. Then we describe extensions to the baseline model, including satellite profile variations, velocity bias, assembly bias, and environment-based secondary bias. \subsection{The baseline HOD model} The baseline HOD for LRGs comes from the 5-parameter model described in \citet{2007bZheng}, which gives the mean expected number of central and satellite galaxies per halo given the halo mass: \begin{align} \bar{n}_{\mathrm{cent}}^{\mathrm{LRG}}(M) & = \frac{1}{2}\mathrm{erfc} \left[\frac{\log_{10}(M_{\mathrm{cut}}/M)}{\sqrt{2}\sigma}\right], \label{equ:zheng_hod_cent}\\ \bar{n}_{\mathrm{sat}}^{\mathrm{LRG}}(M) & = \left[\frac{M-\kappa M_{\mathrm{cut}}}{M_1}\right]^{\alpha}\bar{n}_{\mathrm{cent}}^{\mathrm{LRG}}(M), \label{equ:zheng_hod_sat} \end{align} where the five parameters characterizing the model are $M_{\mathrm{cut}}, M_1, \sigma, \alpha, \kappa$. $M_{\mathrm{cut}}$ characterizes the minimum halo mass to host a central galaxy. $M_1$ characterizes the typical halo mass that hosts one satellite galaxy. $\sigma$ describes the steepness of the transition from 0 to 1 in the number of central galaxies. $\alpha$ is the power law index on the number of satellite galaxies. $\kappa M_\mathrm{cut}$ gives the minimum halo mass to host a satellite galaxy. We have added a modulation term $\bar{n}_{\mathrm{cent}}^{\mathrm{LRG}}(M)$ to the satellite occupation function to remove satellites from halos without centrals. In the baseline implementation, the actual number of central galaxies per halo is drawn from a Bernoulli distribution with mean equal to $\bar{n}^{\mathrm{LRG}}_{\mathrm{cent}}(M)$, and the actual number of satellite galaxies is drawn from a Poisson distribution with mean equal to $\bar{n}^{\mathrm{LRG}}_{\mathrm{sat}}(M)$. The central galaxy is assigned to the center of mass of the largest sub-halo, with its velocity vector set to that of the same center of mass. Satellite galaxies are assigned to particles of the halo with equal weights.
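As an illustration of this baseline model, a minimal Python sketch of Equations~\ref{equ:zheng_hod_cent} and~\ref{equ:zheng_hod_sat}, together with the Bernoulli/Poisson sampling step, is given below; the parameter values are purely illustrative and are not fits from this work.
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def n_cent_lrg(M, M_cut, sigma):
    """Mean central occupation (Equation equ:zheng_hod_cent)."""
    return 0.5 * erfc(np.log10(M_cut / M) / (np.sqrt(2) * sigma))

def n_sat_lrg(M, M_cut, M1, sigma, alpha, kappa):
    """Mean satellite occupation (Equation equ:zheng_hod_sat); halos
    below kappa*M_cut host no satellites (clipped to zero)."""
    base = np.clip(M - kappa * M_cut, 0.0, None) / M1
    return base ** alpha * n_cent_lrg(M, M_cut, sigma)

rng = np.random.default_rng(42)
M = np.logspace(12.5, 15.0, 6)       # halo masses [Msun/h], illustrative
pars = dict(M_cut=10**12.8, M1=10**13.9, sigma=0.3, alpha=1.0, kappa=0.5)

has_central = rng.random(M.size) < n_cent_lrg(M, pars['M_cut'], pars['sigma'])
n_satellites = rng.poisson(n_sat_lrg(M, **pars))    # Poisson draw per halo
\end{verbatim}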
Similarly, the baseline HOD for ELGs is based on the parametric model described in \citet{2020Alam}:
\begin{align}
    \bar{n}_{\mathrm{cent}}^{\mathrm{ELG}}(M) &= 2 A \phi(M) \Phi(\gamma M) \nonumber \\
    &\quad + \frac{1}{2Q} \left[1+\mathrm{erf}\left(\frac{\log_{10}{M_h}-\log_{10}{M_{\mathrm{cut}}}}{0.01}\right) \right],
    \label{eq:NHMQ}
\end{align}
where
\begin{align}
    \phi(x) &=\mathcal{N}(\log_{10}{ M_{\mathrm{cut}}},\sigma_M),
    \label{eq:NHMQ-phi}\\
    \Phi(x) &= \int_{-\infty}^x \phi(t) \, dt = \frac{1}{2} \left[ 1+\mathrm{erf} \left(\frac{x}{\sqrt{2}} \right) \right],
    \label{eq:NHMQ-Phi}\\
    A &= p_{\rm max} - 1/Q,
    \label{eq:alam_hod_elg}
\end{align}
whereas the satellite occupation continues to adopt the power-law form of Equation~\ref{equ:zheng_hod_sat}. Compared to Equations~9--12 in \citet{2020Alam}, we modified the definition of $A$ by skipping the denominator for ease of computation; we do not notice any significant change to the functional form of the central occupation. The baseline QSO HOD adopts the same functional form as the LRGs, referring back to Equations~\ref{equ:zheng_hod_cent}--\ref{equ:zheng_hod_sat}. We also implement an overall amplitude parameter for each tracer to account for incompleteness.

In the current version of \textsc{AbacusHOD}, we enforce that each halo can host at most one central galaxy, which can be of any tracer type. Similarly, each particle can also host at most one satellite galaxy, which can be of any tracer type. We do not enforce any type of central-satellite conformity or 2-halo conformity. In the following subsections, we introduce our framework for extending the baseline HODs with physical generalizations.

\subsection{The satellite profile generalizations}

In the baseline implementation, we assume that the 1-halo distribution of satellites tracks the halo density profile, as we assign satellites to halo particles with equal weight. \citet{2019Bose} used hydrodynamical simulations to show that this is a reasonable assumption for mass-selected galaxy samples. In this section, we relax that assumption by introducing several physically motivated generalizations that allow the satellite profile to deviate from that of the dark matter halo. Our generalizations are based on re-weighting existing particles in the halo instead of simply moving galaxies, thus preserving Newtonian physics. These generalizations correspond to the parameters $s$, $s_p$, and $s_v$, which were previously introduced in Section~3.2 of \citet{2018Yuan} and Section~2.2 of \citet{2021Yuan}. We summarize the key ideas here.

For instance, the $s\in [-1, 1]$ parameter biases the satellite profile radially, where $s>0$ corresponds to the distribution of satellites being puffier than the halo profile, and $s<0$ to a more concentrated distribution. The $s$ parameter works by first ranking all particles within a halo by their radial distance to the halo center, and then assigning to each particle a weight that depends linearly on rank. Specifically, the probability for the $i$-th ranked particle to host a satellite galaxy is given by
\begin{equation}
    p_i = \frac{\bar{n}_\mathrm{sat}}{N_p}\left[1+s\left(1 - \frac{2r_i}{N_p-1}\right)\right],\ \ \ \ (i = 0, 1, 2, ..., N_p - 1)
    \label{equ:pi_s}
\end{equation}
where $N_p$ is the number of particles in the halo, and $r_i$ is the rank of the $i$-th particle. Similarly, we introduce the $s_v$ parameter, which biases the satellite profile based on particle peculiar velocity, and the $s_p$ parameter, which is based on particle perihelion distance.
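To make the rank-based re-weighting concrete, the following toy sketch evaluates Equation~\ref{equ:pi_s} for a single halo. Here we rank particles from the outside in, so that $s>0$ up-weights the outskirts, matching the sign convention stated above; this is an illustration, not the package code:
\begin{verbatim}
import numpy as np

def satellite_probs(r_particles, nbar_sat, s):
    # Per-particle probability of hosting a satellite.
    # Rank 0 = outermost particle, so s > 0 up-weights the outskirts
    # (puffier profile) and s < 0 the core (more concentrated).
    Np = len(r_particles)
    ranks = np.argsort(np.argsort(-r_particles))
    return nbar_sat / Np * (1.0 + s * (1.0 - 2.0 * ranks / (Np - 1.0)))

rng = np.random.default_rng(42)
r = rng.random(1000)                   # toy radial distances
p = satellite_probs(r, 5.0, 0.3)
has_sat = rng.random(1000) < p         # one Bernoulli draw per particle
print(p.sum(), has_sat.sum())          # both close to nbar_sat = 5
\end{verbatim}
Note that the probabilities sum to $\bar{n}_\mathrm{sat}$ by construction, so the re-weighting changes the satellite profile without changing the mean satellite occupation.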
A detailed description of the $s_v$ and $s_p$ parameters, including how to estimate the perihelion distance, can be found in Sections~3.3 and 3.4 of \citet{2018Yuan}. There are several motivations for including these satellite profile generalization parameters. Baryonic processes can bias the concentration of baryons within the dark matter potential well \citep[e.g.][]{2010Duffy, 2010Abadi, 2017Chua, 2017Peirani, 2020Amodeo}. Splashback and infall can introduce biases for satellites with eccentric orbits \citep{2014Behroozi, 2014Diemer, 2014Adhikari, 2015More}. In our previous analyses, we have also used the $s_v$ parameter as an alternative but more physical model for satellite velocity bias. We discuss this approach further in the following subsection.

\subsection{Velocity bias}

While velocity measurements are not relevant for the study of galaxy positions in real space, velocities do become entangled with the line-of-sight (LOS) positions in redshift space due to redshift-space distortions \citep{1987Kaiser}. Thus, to model the redshift-space clustering of galaxies with high fidelity, we need to introduce a more flexible model of galaxy velocities. In the baseline implementation, we assume the velocity of the central galaxy to be the bulk velocity of the largest subhalo, and the velocities of the satellite galaxies to be the same as those of their host particles. Following observational evidence presented in \citet{2014Reid} and \citet{2015aGuo}, we introduce a velocity bias model that allows for deviations in central and satellite velocities from the underlying dark matter.

First, we add an additional Gaussian scatter to the LOS component of the central galaxy velocity, with width equal to the halo particle velocity dispersion. The central galaxy velocity along the LOS is thus given by
\begin{equation}
    v_\mathrm{cent, z} = v_\mathrm{L2, z} + \alpha_c \delta v(\sigma_{\mathrm{LOS}}),
    \label{equ:alphac}
\end{equation}
where $v_\mathrm{L2, z}$ denotes the line-of-sight component of the subhalo velocity, $\delta v(\sigma_{\mathrm{LOS}})$ denotes the Gaussian scatter, and $\alpha_c$ is the central velocity bias parameter, which modulates the amplitude of the central velocity bias effect. By definition, $\alpha_c$ is non-negative, and $\alpha_c = 0$ corresponds to no central velocity bias. For the satellite galaxies, we allow their peculiar velocities to be systematically higher or lower than their host particle velocities. This satellite velocity bias effect is given by
\begin{equation}
    v_\mathrm{sat, z} = v_\mathrm{L2, z} + \alpha_s (v_\mathrm{p, z} - v_\mathrm{L2, z}),
    \label{equ:alpha_s}
\end{equation}
where $v_\mathrm{p, z}$ denotes the line-of-sight component of the particle velocity, and $\alpha_s$ is the satellite velocity bias parameter. $\alpha_s = 1$ corresponds to no satellite velocity bias.

While the $(\alpha_c, \alpha_s)$ model presented here is the most common implementation of velocity bias, it does break Newtonian physics by modifying the satellite velocity without modifying its position. In \citet{2021Yuan}, we used a different implementation of velocity bias, where we replaced the $\alpha_s$ parameter with the $s_v$ parameter. The $s_v$ parameter simultaneously modulates the radial distribution and peculiar velocity of the satellites by preferentially assigning satellites to particles with higher or lower peculiar velocities, thus ensuring Newtonian orbits.
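As an illustration, Equations~\ref{equ:alphac} and \ref{equ:alpha_s} amount to the following toy \textsc{Python} sketch (not the package implementation):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def vel_central_z(v_L2_z, sigma_los, alpha_c):
    # Add Gaussian LOS scatter of width sigma_los, scaled by alpha_c;
    # alpha_c = 0 recovers the unbiased subhalo velocity.
    return v_L2_z + alpha_c * rng.normal(0.0, sigma_los)

def vel_satellite_z(v_p_z, v_L2_z, alpha_s):
    # Rescale the particle velocity about the halo bulk velocity;
    # alpha_s = 1 recovers the unbiased host particle velocity.
    return v_L2_z + alpha_s * (v_p_z - v_L2_z)

print(vel_central_z(120.0, 200.0, 0.2))
print(vel_satellite_z(np.array([350.0, -120.0]), 80.0, 1.0))
\end{verbatim}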
However, the observed velocity bias is not necessarily due exclusively to halo physics; it could also be due to redshift systematics, which decouple the satellite velocities from the satellite positions. In this analysis, we use the more common $(\alpha_c, \alpha_s)$ prescription to marginalize over such observational systematics.

\subsection{Secondary biases}

Following lessons learned in \citet{2021Yuan}, we extend the standard HOD with two secondary dependencies, one on halo concentration (assembly bias), and one on the local environment. The concentration dependency describes the classical galaxy assembly bias effect, where the HOD model depends on the assembly history of the halo (encoded by halo concentration) in addition to halo mass. The local environment dependency is more novel, but it was found to be a necessary tracer of secondary biases in both simulations \citep{2020Hadzhiyska, 2020Xu} and observations \citep{2021Yuan}.

There are multiple frameworks for introducing these secondary dependencies. \citet{2016Hearin} and \citet{2021Yuan} both adopt a ``galaxy swap'' approach, where galaxies are swapped between halos of different secondary properties. This approach naturally conserves the total number density of galaxies, but it tends to be computationally expensive. In the \textsc{AbacusHOD}\ package, we adopt a different but computationally cheaper approach, where we analytically mix the secondary property with halo mass. This approach was used previously in both \citet{2019Walsh} and \citet{2020Xu}. We specifically follow the analytic prescription of \citet{2020Xu}, where the secondary halo property is directly tied to the two mass parameters in the baseline HOD, $M_{\mathrm{cut}}$ and $M_1$:
\begin{align}
    \log_{10} M_{\mathrm{cut}}^{\mathrm{mod}} & = \log_{10} M_{\mathrm{cut}} + A_\mathrm{cent}(c^{\mathrm{rank}} - 0.5) + B_\mathrm{cent}(\delta^{\mathrm{rank}} - 0.5), \\
    \log_{10} M_{1}^{\mathrm{mod}} & = \log_{10} M_{1} + A_\mathrm{sat}(c^{\mathrm{rank}} - 0.5) + B_\mathrm{sat}(\delta^{\mathrm{rank}} - 0.5),
    \label{equ:AB}
\end{align}
where $c$ and $\delta$ are the halo concentration and local overdensity, respectively. These secondary properties are ranked within narrow halo mass bins, and the resulting ranks $c^{\mathrm{rank}}$ and $\delta^{\mathrm{rank}}$ are normalized to range from 0 to 1, with 0.5 corresponding to the median. For example, $c^{\mathrm{rank}} > 0.5$ corresponds to a halo with above-median concentration, and $c^{\mathrm{rank}} < 0.5$ to a halo with below-median concentration; the same logic applies to the environment rank $\delta^{\mathrm{rank}}$. The tetrad $(A_\mathrm{cent}, B_\mathrm{cent}, A_\mathrm{sat}, B_\mathrm{sat})$ forms the four parameters describing the amplitude of secondary biases in our HOD model. No secondary bias corresponds to all four parameters being equal to zero.

We should also point out that in this prescription, the sign of the secondary bias parameters goes in the opposite direction to that of the secondary bias parameters in \citet{2021Yuan}. For example, in this new prescription, a positive $A_\mathrm{cent}$ would increase $\log M_\mathrm{cut}$ for the more concentrated ($c^{\mathrm{rank}} > 0.5$) halos, which reduces the number of galaxies in more concentrated halos at fixed halo mass (refer to Equation~\ref{equ:zheng_hod_cent} and Equation~\ref{eq:NHMQ}). In the model used in \citet{2021Yuan}, by contrast, a positive $A$ increases the number of galaxies in more concentrated halos (see the sketch below).
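For concreteness, the following toy sketch implements the rank normalization and the mass modulation of Equation~\ref{equ:AB}; the parameter values and the toy concentration and environment arrays are illustrative only:
\begin{verbatim}
import numpy as np

def normalized_rank(x):
    # Rank values and normalize to [0, 1]; 0.5 marks the median.
    # In the package this ranking is done within narrow halo mass bins.
    return np.argsort(np.argsort(x)) / (len(x) - 1.0)

def modulated_masses(logM_cut, logM1, c_rank, d_rank,
                     A_cen, B_cen, A_sat, B_sat):
    # Tie the concentration and environment ranks to the two
    # mass parameters of the baseline HOD.
    logM_cut_mod = logM_cut + A_cen * (c_rank - 0.5) \
                           + B_cen * (d_rank - 0.5)
    logM1_mod = logM1 + A_sat * (c_rank - 0.5) \
                      + B_sat * (d_rank - 0.5)
    return logM_cut_mod, logM1_mod

# Toy halos in one narrow mass bin, with illustrative parameters
rng = np.random.default_rng(3)
conc = rng.lognormal(1.5, 0.3, 500)
dens = rng.lognormal(0.0, 0.5, 500)
logM_cut_mod, logM1_mod = modulated_masses(
    13.3, 14.3, normalized_rank(conc), normalized_rank(dens),
    A_cen=-0.4, B_cen=-0.04, A_sat=0.2, B_sat=-0.17)
\end{verbatim}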
The same logic applies to the environment-based bias, where we switch $A$ with $B$. To summarize, a positive $A$ ($B$) means that high-concentration (high-density environment) halos host fewer galaxies and low-concentration (low-density environment) halos host more galaxies at fixed mass.

The concentration $c$ is defined as the ratio $c = r_{90}/r_{25}$, where $r_x$ refers to the radius that encloses $x\%$ of the total halo mass. The local overdensity $\delta$ is calculated in a very similar fashion to \citet{2021Yuan}. First, for each halo, we compute the total enclosed mass of the neighboring halos, where a neighbor halo is defined to be within $5h^{-1}$Mpc but beyond the halo radius $r_{98}$. Then we divide the enclosed mass by the average enclosed mass to obtain the local overdensity. Mathematically, we express this definition as
\begin{equation}
    \delta = \frac{M(r_{98} < r < 5h^{-1}\mathrm{Mpc})}{\langle M(r_{98} < r < 5h^{-1}\mathrm{Mpc})\rangle} - 1,
\end{equation}
where $M$ denotes the enclosed mass of the neighboring halos.

To gain intuition on how the secondary biases impact galaxy clustering, we show the derivatives of the projected galaxy clustering function $w_p$ (see Equation~\ref{equ:wp_def}) against each of the four secondary bias parameters in Figure~\ref{fig:derivs}. The top panel shows the derivatives against the assembly bias parameters, whereas the bottom panel shows the derivatives against the environment-based secondary bias parameters. The assembly bias parameters are most important at very small scales, $r_p < 1h^{-1}$Mpc, while their effect diminishes at larger scales. At $r_p < 1h^{-1}$Mpc, the clustering is dominated by the 1-halo term, i.e. central-satellite clustering and satellite-satellite clustering. It makes sense that both the central and satellite derivatives are positive in this regime. Specifically, for a positive assembly bias parameter, the more concentrated halos ($c^{\mathrm{rank}} > 0.5$) correspond to higher $M_{\mathrm{cut}}^{\mathrm{mod}}$ and $M_{1}^{\mathrm{mod}}$, and higher $M_{\mathrm{cut}}^{\mathrm{mod}}$ and $M_{1}^{\mathrm{mod}}$ mean fewer centrals and satellites for those halos. Thus, positive assembly bias parameters disfavor more concentrated halos and favor less concentrated halos. By putting more centrals and satellites into less concentrated halos, positive assembly bias boosts the central-satellite and satellite-satellite pair counts at the halo-size scale, which is typically $0.1-1h^{-1}$Mpc, thus boosting the clustering at those scales.

\begin{figure}
    \centering
    \begin{subfigure}[]{0.4\textwidth}
        \hspace*{-0.8cm}
        \includegraphics[width = 3.4in]{./plot_deriv_A.pdf}
        \vspace{-0.8cm}
        \caption{Derivative against assembly bias parameters $A_\mathrm{cent}$ and $A_\mathrm{sat}$}
        \label{fig:derivA}
    \end{subfigure}
    \begin{subfigure}[]{0.4\textwidth}
        \hspace*{-1.1cm}
        \includegraphics[width = 3.5in]{./plot_deriv_B.pdf}
        \vspace{-0.8cm}
        \caption{Derivative against environment-based bias parameters $B_\mathrm{cent}$ and $B_\mathrm{sat}$}
        \label{fig:derivB}
    \end{subfigure}
    \caption{Derivatives of the projected galaxy correlation function $w_p$ (Equation~\ref{equ:wp_def}) against the secondary bias parameters, shown to help the reader gain intuition on how the four secondary bias parameters impact the predicted galaxy clustering.
The top panel shows the derivative against the assembly bias parameters $A_\mathrm{cent}$ and $A_\mathrm{sat}$, whereas the bottom panel shows the derivatives against the environment-based secondary bias parameters $B_\mathrm{cent}$ and $B_\mathrm{sat}$.}
    \label{fig:derivs}
\end{figure}

The bottom panel of Figure~\ref{fig:derivs} shows that both environment-based bias parameters have negative derivatives in the projected galaxy clustering. This makes sense because for positive environment-based bias parameters, the $M_{\mathrm{cut}}^{\mathrm{mod}}$ and $M_{1}^{\mathrm{mod}}$ parameters for halos in denser environments are increased. Thus, positive environment bias parameters favor halos in less dense environments to host galaxies, which leads to lower galaxy pair counts and thus a lower clustering amplitude. The effect is strongest at around $r_p\sim 3h^{-1}$Mpc, the characteristic scale at which our environment is defined. Compared to the concentration-based assembly bias parameters, it is clear that while assembly bias mostly impacts clustering in the 1-halo term, environment-based secondary bias mostly affects the 2-halo term and extends out to much larger scales.

The clustering signature of these secondary biases is ultimately the combined effect of occupational biases such as the ones modeled in this section, and of how halo clustering depends on secondary properties, an effect known as halo assembly bias \citep[e.g.][]{2005Gao, 2007Croton}. Specifically, at fixed halo mass, high-environment and high-concentration halos tend to be more clustered, so when one varies the galaxy occupation as a function of these secondary properties, one also changes the galaxy clustering by favoring more or less clustered halos. We refer the readers to \citet{2021bHadzhiyska} for a detailed presentation of the interaction between galaxy occupational variation and halo assembly bias.

\section{The \textsc{AbacusSummit} simulations}
\label{sec:simulation}

In principle, our extended model is not simulation-specific, as long as the simulation outputs a halo and particle catalog. Currently, the \textsc{AbacusHOD}\ code package is specifically set up for the \textsc{AbacusSummit} simulation suite, a set of large, high-accuracy cosmological N-body simulations using the \textsc{Abacus} N-body code \citep{2019Garrison, 2021bGarrison}, designed to meet the Cosmological Simulation Requirements of the Dark Energy Spectroscopic Instrument (DESI) survey \citep{2013arXiv1308.0847L}. \textsc{AbacusSummit} consists of over 150 simulations, containing approximately 60 trillion particles at 97 different cosmologies. A typical base simulation box contains $6912^3$ particles within a $(2h^{-1}$Gpc$)^3$ volume, which yields a particle mass of $2.1 \times 10^9 h^{-1}M_\odot$.\footnote{For more details, see \url{https://abacussummit.readthedocs.io/en/latest/abacussummit.html}}

The example fits presented in this paper primarily use the $z = 0.5$ slice of the \verb+AbacusSummit_base_c000_ph000+ box, which is $(2h^{-1}$Gpc$)^3$ in volume and adopts the Planck 2018 $\Lambda$CDM cosmology ($\Omega_c h^2 = 0.1200$, $\Omega_b h^2 = 0.02237$, $\sigma_8 = 0.811355$, $n_s = 0.9649$, $h = 0.6736$, $w_0 = -1$, and $w_a = 0$).

The {\sc CompaSO} halo finder is a highly efficient on-the-fly group finder specifically designed for the \textsc{AbacusSummit} simulations \citep{2021Hadzhiyska}.
{\sc CompaSO} builds on the existing spherical overdensity (SO) algorithm by taking into account the tidal radius around a smaller halo before competitively assigning halo membership to the particles, in an effort to deblend halos more effectively. Among other features, the {\sc CompaSO} finder also allows for the formation of new halos on the outskirts of growing halos, which alleviates a known issue with configuration-space halo finders, namely their failure to identify halos close to the centers of larger halos. We also run a post-processing ``cleaning'' procedure that leverages the halo merger trees to ``re-merge'' a subset of halos. This is done both to remove over-deblended halos from the spherical overdensity finder, and to intentionally merge physically associated halos that have merged and then separated again. An example of such dissociation is what is known as splashback \citep[e.g.][]{2014Diemer, 2015bMore, 2016More}, where halos that were once part of a larger halo have since exited following at least one orbital passage within their former hosts. In \citet{2021Bose}, we find that re-merging such halos significantly improves the fidelity of the halo catalog, and the resulting ``cleaned'' halo catalog achieves significantly better fits to data in an HOD analysis. The fits presented in later sections of this paper are carried out with the cleaned halo catalogs.

\section{\textsc{AbacusHOD}: core algorithm and optimizations}
\label{sec:algorithm}

The \textsc{AbacusHOD}\ module loads the halo and particle catalogs from the \textsc{AbacusSummit} simulations and outputs multi-tracer mock galaxy catalogs. The code is designed particularly for efficient HOD parameter searches, in which many HOD parameter sets are requested in quick succession. In this section, we describe the core algorithm and some key optimizations implemented to maximize efficiency. The mock galaxy generation is divided into two stages, a preparation stage and an HOD stage. The preparation stage needs to be run first: it processes the raw halo and particle files, front-loading all the expensive I/O and optimizing the simulation data for the second, much faster, HOD evaluation stage.

\subsection{The preparation stage}

One key objective of the preparation stage is to downsample the halos and particles from the simulation box. This is because the speed of evaluating an HOD scales roughly linearly with the number of halos and particles passed to the HOD code, barring a small amount of overhead. Thus, by optimally downsampling the halos and particles in the preparation stage, we can substantially increase the efficiency of each evaluation in the HOD stage. To this end, we implement a mass-dependent downsampling of the halos and particles. Specifically, we use a sigmoid function to aggressively downsample low-mass halos while preserving most halos at high mass, where the turnoff mass and the turnoff rate depend on the tracer type. For the particles, we apply a uniform downsampling at all masses in addition to the sigmoid turnoff. The goal is to reduce the number of particles until the number of particles per halo is only 10--100 times the expected number of satellites. By default, we implement two sets of downsampling filters, one designed for CMASS LRGs, and the other designed for ELGs and QSOs. The second filter goes to substantially lower halo mass and thus retains a significantly larger number of halos, resulting in lower performance in the HOD stage.
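As a schematic illustration, the default LRG filters (written out explicitly in Equation~\ref{equ:downsampling}) are simple sigmoid functions of halo mass and can be applied stochastically, as in the following sketch. This assumes base-10 log halo masses in $h^{-1}M_\odot$ and is not the package code:
\begin{verbatim}
import numpy as np

def f_halos(logM):
    # Halo keep-fraction: sigmoid that aggressively downsamples
    # low-mass halos (the default LRG filter used in this paper).
    return 1.0 / (1.0 + 0.1 * np.exp(-(logM - 13.3) * 25.0))

def f_particles(logM):
    # Particle keep-fraction: uniform downsampling plus a sigmoid
    # turnoff in host halo mass.
    return 4.0 / (200.0 + np.exp(-(logM - 13.7) * 8.0))

rng = np.random.default_rng(0)
logM = rng.uniform(11.0, 15.0, 10**6)          # toy halo masses
keep = rng.random(logM.size) < f_halos(logM)   # stochastic thinning
print(keep.mean())                             # surviving fraction
\end{verbatim}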
The user should treat these downsampling filters as a point of reference and customize the downsampling function as needed. The default filter for LRGs is shown in Equation~\ref{equ:downsampling}.

Another key objective of the preparation stage is to precompute all the halo and particle properties necessary for the HOD model and concatenate them into a large contiguous array. Along with the relevant halo and particle properties, the code also marks each halo and particle with a random number. The random numbers are used for drawing from the central Bernoulli distribution and the satellite Poisson distribution. Pre-generating random numbers for all halos and particles not only reduces the computational cost when running HOD chains, but also carries the additional benefits of removing realization noise and making the mocks reproducible. The removal of realization noise also allows for the calculation of more accurate derivatives of summary statistics with respect to HOD parameters.

\subsection{The HOD stage}

The HOD stage centers around the \textsc{AbacusHOD} class object, which loads the downsampled halo and particle catalogs from the preparation stage into memory when initialized, and then takes input HOD parameters and returns mock galaxy catalogs. In each HOD call, the centrals and satellites are generated separately and then concatenated into one unified output dictionary.

To efficiently generate mocks given an input HOD, we adopt a multi-threaded, two-pass, memory-in-place algorithm accelerated with \textsc{numba}. To maximize the efficiency of multi-threading, the first pass serves to determine the exact galaxy-generation workload, evenly partition that workload across all threads, and pre-allocate the memory needed by each thread. It does so by looping through the halos (particles) and calculating the number of centrals (satellites) to be generated for each halo, by comparing the mean number of centrals (satellites) of that halo to its corresponding pre-generated random numbers. Then, it allocates an empty array for all galaxies to be generated, including their properties (position, velocity, etc.). The galaxy array is then evenly partitioned by the number of threads and each partition is assigned to a thread. Finally, on the second pass, each thread loops through its assigned halos and particles and fills out the galaxy array. The two-pass approach achieves significantly better performance than a typical brute-force approach by pre-allocating all the needed memory, avoiding the costly operation of copying the entire galaxy array every time new galaxies need to be appended. We further accelerate the halo and particle for-loops with the \textsc{numba} no-python compiler, which compiles the slower \textsc{Python} code into faster machine code, and we take advantage of all available processor cores by multi-threading over the halo and particle loops.

\subsection{Utility functions}

As part of the \textsc{AbacusHOD}\ package, we also provide additional utility functions that are commonly needed for HOD analyses. These include several 2-point correlation function (2PCF) calculators, a galaxy-galaxy lensing calculator, and sampling scripts. The provided 2PCF calculators are based on the highly optimized, grid-based \textsc{Corrfunc} code \citep{2020Sinha}. We further optimize the performance of \textsc{Corrfunc} to match that of the HOD code. The galaxy-galaxy lensing calculator is highly optimized compared to the \textsc{halotools} lensing calculator \citep{2017Hearin}.
It takes advantage of the fact that the g-g lensing measurement $\Delta \Sigma$ is a linear combination of the $\Delta\Sigma$ at each galaxy position. It pre-computes and saves the $\Delta\Sigma$ at each halo and particle position; for each HOD evaluation, it then simply conducts a weighted sum over the halos and particles, with galaxy weights given by the HOD. This lensing calculator is suited for fitting lensing measurements rather than making a single lensing prediction, due to the high up-front cost of pre-computing the halo and particle $\Delta\Sigma$.

We provide two popular methods of HOD sampling with \textsc{AbacusHOD}. The first is an MCMC-based script using \textsc{emcee} \citep{2013Foreman}, and the second is a nested sampling script using \textsc{dynesty} \citep{2018Speagle}. We recommend the nested sampling script for its much higher sampling efficiency and its natural calculation of the model evidence, an essential metric for model comparisons. These scripts can be found at \texttt{abacusutils/scripts/hod/}. We provide a case study using the nested sampler in later sections.

\subsection{Performance}

A key characteristic of the \textsc{AbacusHOD}\ code is its high efficiency. In this subsection, we offer a performance benchmark of the code. Our test system consists of two Intel Xeon Gold 5218 CPUs clocked at 2.30~GHz, for a total of 32 cores on a single node, and 256~GB of DDR4 RAM. We use \textsc{Python} version 3.7.9, \textsc{Numpy} version 1.19.2, and \textsc{numba} version 0.51.2.

For the test runs, we process a single \textsc{AbacusSummit} base simulation box at Planck cosmology, specifically the \texttt{AbacusSummit\_base\_c000\_ph000} box. We load the cleaned \textsc{CompaSO} halos at the $z = 0.5$ snapshot and downsample the halos and particles using the default filters provided in the package, as shown in Equation~\ref{equ:downsampling}:
\begin{align}
    f_\mathrm{halos} & = \frac{1}{1+0.1\exp(-(\log M_h - 13.3)\times 25)}, \nonumber\\
    f_\mathrm{particles} & = \frac{4}{200+\exp(-(\log M_h - 13.7)\times 8)}.
    \label{equ:downsampling}
\end{align}
The downsampling reduces the total number of halos from $3.99\times 10^8$ down to $6.18\times 10^6$, and the total number of subsample particles from $3.15\times 10^8$ down to $1.77\times 10^7$. We note that, given the small number of satellites produced, it is likely possible to further downsample the particles. However, we do not further optimize the particle sample for this analysis, since generating satellites is not the performance bottleneck in our tests.

We pick a fiducial baseline HOD prescription of $\log_{10} M_\mathrm{cut} = 12.8$, $\log_{10}M_1 = 14.0$, $\sigma = 0.5$, $\alpha = 1.0$, and $\kappa = 0.5$, roughly resembling a CMASS-like sample. The construction of the \textsc{AbacusHOD}\ object, i.e. the loading of the halo and particle subsamples into memory, takes approximately $10$ seconds. We then run the HOD code once to compile it with \textsc{numba}, which takes around $10$ seconds. Finally, we repeat each HOD run 20 times and take the average run time. Here we showcase how the run times of the mock generation and the 2PCF calculator depend on the number of threads and on the number density of galaxies. Specifically for the 2PCF calculator, we compute $\xi(r_p, \pi)$ with 8 logarithmic bins in $r_p$ between $0.169h^{-1}$Mpc and 30$h^{-1}$Mpc, and 6 linear bins in $\pi$ between 0 and 30$h^{-1}$Mpc.
Figure~\ref{fig:timing_thread} shows how the timing of an HOD evaluation and the 2PCF calculation scales with the number of threads, with the galaxy number density fixed at the BOSS CMASS average density, $3\times 10^{-4}h^3$Mpc$^{-3}$. Both calculations are highly scalable up to $N_\mathrm{thread} = 32$, above which we start to lose per-thread efficiency because hyper-threading, i.e. running more than one thread per core, provides little to no gain. The best-case timing for the HOD code (mock generation) is 0.17 seconds with 32 threads. For the 2PCF calculator, the best timing is 0.18 seconds with 64 threads.

\begin{figure}
    \centering
    \hspace*{-0.6cm}
    \includegraphics[width = 3.2in]{./plot_timing_thread.pdf}
    \vspace{-0.3cm}
    \caption{Timing of the HOD evaluation and the 2PCF calculator as a function of the number of threads when running on a 32-core node, at fixed galaxy number density (the CMASS average density). Both calculations are scalable. The dashed line shows the minimum total timing, at just below 0.4 seconds per HOD call. The timing plateaus above 32 threads, where running multiple threads per core (hyper-threading) provides little to no performance gain.}
    \label{fig:timing_thread}
\end{figure}

\begin{figure}
    \centering
    \hspace*{-0.6cm}
    \includegraphics[width = 3.2in]{./plot_timing_density.pdf}
    \vspace{-0.3cm}
    \caption{Timing of the HOD code and the 2PCF calculator as a function of galaxy number density at $N_\mathrm{thread} = 32$. The timing of the HOD code is largely independent of number density, whereas the 2PCF timing scales roughly linearly with number density. }
    \label{fig:timing_density}
\end{figure}

Figure~\ref{fig:timing_density} shows how the timing scales with the galaxy number density at a fixed number of threads ($N_\mathrm{thread} = 32$). The dashed line denotes the CMASS average density. The timing of the HOD code is largely independent of the galaxy number density, since it depends on the number of halos and particles considered, not on the number of galaxies produced. The 2PCF code scales roughly linearly with the number density, which suggests that the 2PCF timing is likely dominated by linear overheads, such as gridding. We have optimized the 2PCF calculator by choosing the optimal grid size; the user may find further tuning of the grid size necessary depending on the simulation size and sample density.

The performance of \textsc{AbacusHOD}\ in a real-world application is highly dependent on the hardware, the simulations, and the summary statistics. Our tests are conducted on a single-node system with generous memory and relatively fast processors. On a cluster system such as Cori at NERSC, the user might also benefit from chain-level parallelization instead of focusing on the timing of a single HOD evaluation. We also did not present timings for lower-mass tracers such as ELGs, but internal testing shows that evaluating an eBOSS-like ELG HOD is approximately 2--3 times slower than evaluating a CMASS-like LRG HOD. An important limiting factor in the performance is the summary statistics calculator. While we provide fast calculators for the 2PCF and galaxy-galaxy lensing, anything beyond these statistics remains the responsibility of the user for now.

Compared to other existing HOD implementations, \textsc{AbacusHOD}\ is $\sim 100$ times faster than other particle-based algorithms, including the \textsc{GRAND-HOD} code we developed in \citet{2018Yuan} and the \textsc{halotools} implementation in \citet{2013Behroozi}.
The tabulated HOD approach (e.g. \textsc{TabCorr}) can achieve similar or better performance by pre-computing all the halo clustering and then convolving the halo occupation with the pre-computed clustering \citep{2016Zheng}. However, the tabulated HOD approach achieves this performance by sacrificing model flexibility: extending a tabulated HOD model with additional features can add significant complexity to the pre-tabulation and to the computation of the convolution.

\section{Application to BOSS LRG clustering}
\label{sec:application}

In this section, we apply the \textsc{AbacusHOD}\ package to fitting the BOSS CMASS and LOWZ LRG clustering. Besides serving as an example application of the \textsc{AbacusHOD}\ package, we also compare the constraining power of the projected clustering and the redshift-space clustering. \red{We test the necessity of various HOD extensions for this galaxy sample, which has implications for the galaxy-galaxy lensing tension \citep{2017Leauthaud} (discussed in Section~\ref{subsec:lensing}).} For brevity, we lead with a detailed analysis of CMASS and only summarize the key results of the LOWZ analysis.

\subsection{BOSS galaxy sample}
\label{sbsec:cmass}

The Baryon Oscillation Spectroscopic Survey \citep[BOSS; ][]{2012Bolton, 2013Dawson} is part of the SDSS-III programme \citep{2011Eisenstein}. BOSS Data Release 12 (DR12) provides redshifts for 1.5 million galaxies over an effective area of 9329 square degrees, divided into two samples: LOWZ and CMASS. The LOWZ galaxies are selected to be the brightest and reddest of the low-redshift galaxy population at $z < 0.4$, whereas the CMASS sample is designed to isolate galaxies of approximately constant mass at higher redshift ($z > 0.4$), most of which are also luminous red galaxies \citep[LRGs;][]{2016Reid, 2016Torres}. The survey footprint is divided into chunks which are covered by overlapping plates of radius $\sim 1.49$ degrees. Each plate can house up to 1000 fibres, but due to the finite size of the fibre housing, no two fibres can be placed closer than $62$ arcsec, referred to as the fibre collision scale \citep{2012Guo}.

For the CMASS analysis, we limit our measurements to the galaxy sample in the redshift range $0.46 < z < 0.6$ in DR12, whereas for LOWZ, we adopt the redshift range $0.15 < z < 0.4$. We choose these moderate redshift ranges for completeness and to minimize the systematics due to redshift evolution. Applying these redshift ranges to both the north and south galactic caps gives a total of approximately 600,000 galaxies in our CMASS sample, and just under 400,000 galaxies in our LOWZ sample. The average galaxy number density is $n_\mathrm{CMASS} = (3.01\pm 0.03)\times 10^{-4} h^{3}$Mpc$^{-3}$ for CMASS and $n_\mathrm{LOWZ} = (3.26\pm 0.03)\times 10^{-4} h^{3}$Mpc$^{-3}$ for LOWZ.

We consider two key 2-point statistics of the data. The first is the redshift-space 2PCF $\xi(r_p, \pi)$, which can be computed using the \citet{1993Landy} estimator:
\begin{equation}
    \xi(r_p, \pi) = \frac{DD - 2DR + RR}{RR},
    \label{equ:xi_def}
\end{equation}
where $DD$, $DR$, and $RR$ are the normalized numbers of data-data, data-random, and random-random pair counts in each bin of $(r_p, \pi)$, and $r_p$ and $\pi$ are the transverse and line-of-sight (LOS) separations in comoving units.
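Schematically, once normalized pair counts are in hand, Equation~\ref{equ:xi_def} and the line-of-sight integration that defines the projected statistic $w_p$ (Equation~\ref{equ:wp_def} below) reduce to simple array arithmetic. The following toy sketch, with made-up pair counts, illustrates this; it is not the optimized \textsc{Corrfunc}-based calculator shipped with the package:
\begin{verbatim}
import numpy as np

def landy_szalay(DD, DR, RR):
    # xi = (DD - 2 DR + RR) / RR with *normalized* pair counts
    # tabulated on a common (r_p, pi) grid.
    return (DD - 2.0 * DR + RR) / RR

def wp_from_xi(xi, dpi):
    # Projected 2PCF: 2 * integral of xi(r_p, pi) along the LOS,
    # approximated as a sum over linear pi bins of width dpi.
    return 2.0 * np.sum(xi, axis=1) * dpi

# Made-up normalized pair counts: 2 r_p bins x 2 pi bins
DD = np.array([[0.012, 0.008], [0.020, 0.015]])
DR = np.array([[0.010, 0.007], [0.018, 0.014]])
RR = np.array([[0.009, 0.007], [0.017, 0.013]])
print(wp_from_xi(landy_szalay(DD, DR, RR), dpi=5.0))
\end{verbatim}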
For this paper, we choose a coarse binning to ensure reasonable accuracy in the covariance matrix, with 8 logarithmically-spaced bins between 0.169$h^{-1}$Mpc and 30$h^{-1}$Mpc in the transverse direction, and 6 linearly-spaced bins between 0 and 30$h^{-1}$Mpc along the line-of-sight direction. The same binning is used for both the CMASS and LOWZ samples.

The second statistic is the projected galaxy 2PCF, commonly referred to as $w_p$. It is simply defined as the line-of-sight integral of the redshift-space $\xi(r_p, \pi)$,
\begin{equation}
    w_p(r_p) = 2\int_0^{\pi_{\mathrm{max}}} \xi(r_p, \pi)d\pi,
    \label{equ:wp_def}
\end{equation}
where $\pi_{\mathrm{max}} = 30 h^{-1}$Mpc. We use a finer binning for $w_p$, with a total of 18 bins between 0.169$h^{-1}$Mpc and 30$h^{-1}$Mpc.

We have corrected the data for fibre collision effects following the method of \cite{2012Guo}, by separating galaxies into collided and decollided populations and assuming that those collided galaxies with measured redshifts in the plate-overlap regions are representative of the overall collided population. The final corrected correlation function is obtained by summing up the contributions from the two populations. However, scales below 0.5$h^{-1}$Mpc likely still suffer from systematics even after the correction, and they show a turn-off that is qualitatively inconsistent with theoretical expectations and simulations. Thus, we remove the first three bins in $w_p$, and the first column of $\xi(r_p, \pi)$, from the fit. In fact, in our tests, we find that the removal of these bins yields a significantly better fit in terms of $\chi^2$/d.o.f. The covariance matrix is calculated from jackknife sub-samples and is described in detail in Section~3.1 of \citet{2021Yuan}.

\subsection{CMASS $w_p$ fit}
\label{subsec:wpfit}

To fit the observed CMASS projected galaxy 2PCF $w_p$, we start with our \textsc{AbacusSummit} base box at Planck cosmology. For this analysis, we use the cleaned \textsc{CompaSO} halos and their corresponding particles at the $z = 0.5$ snapshot.

\begin{table}
    \centering
    \begin{tabular}{ c | c c c c}
        \hline
        Parameter name & $\mu_{\mathrm{prior}}$ & $\sigma_{\mathrm{prior}}$ & best-fit & posterior median\\
        \hline
        $\log_{10}(M_{\mathrm{cut}}/h^{-1}M_\odot)$ & 13.3 & 0.5 & 12.9 & 13.1 \\
        $\log_{10}(M_1/h^{-1}M_\odot)$ & 14.3 & 0.5 & 14.2 & 14.3 \\
        $\sigma$ & 0.5 & 0.2 & 2.7$\times 10^{-3}$ & 0.26\\
        $\alpha$ & 1.0 & 0.3 & 1.2 & 1.0\\
        $\kappa$ & 0.5 & 0.2 & 0.08 & 0.45\\
        \hline
    \end{tabular}
    \caption{The assumed priors, the maximum-likelihood values, and the posterior medians of the baseline HOD model, when constrained on $w_p$. We choose the priors to be Gaussians with broad, non-informative widths. }
    \label{tab:abacushod_bestfits}
\end{table}

\begin{figure*}
    \centering
    \hspace*{-0.6cm}
    \includegraphics[width = 5.5in]{./cornerplot_wp_base_cleaned_skip3bins.pdf}
    \vspace{-0.3cm}
    \caption{The 1D and 2D marginalized posterior constraints on the baseline HOD parameters from the $w_p$ fit. The contours shown correspond to 1, 2, and 3$\sigma$ uncertainties. The vertical and horizontal lines show the maximum-likelihood values for reference. The values displayed above the 1D marginals are posterior medians with the upper/lower bounds associated with the 0.025 and 0.975 quantiles.}
    \label{fig:corner_wp}
\end{figure*}

We assume a Gaussian likelihood and express the log-likelihood in terms of the chi-squared, $\chi^2$.
The $\chi^2$ is given in two parts, corresponding to errors on the projected 2PCF and errors on the galaxy number density:
\begin{equation}
    \chi^2 = \chi^2_{w_p} + \chi^2_{n_g},
    \label{equ:logL_wp}
\end{equation}
where
\begin{equation}
    \chi^2_{w_p} = (w_{p,\mathrm{mock}} - w_{p,\mathrm{data}})^T \boldsymbol{C}^{-1}(w_{p,\mathrm{mock}} - w_{p,\mathrm{data}}),
    \label{equ:chi2wp}
\end{equation}
and
\begin{equation}
    \chi^2_{n_g} =
    \begin{cases}
        \left(\frac{n_{\mathrm{mock}} - n_{\mathrm{data}}}{\sigma_{n}}\right)^2 & (n_{\mathrm{mock}} < n_{\mathrm{data}}) \\
        0 & (n_{\mathrm{mock}} \geq n_{\mathrm{data}}),
    \end{cases}
    \label{equ:chi2ng}
\end{equation}
where $\boldsymbol{C}$ is the jackknife covariance matrix of $w_p$, and $\sigma_n$ is the uncertainty on the galaxy number density. The $\chi^2_{n_g}$ term is an asymmetric normal around the observed number density $n_\mathrm{data}$. When the mock number density is less than the data number density $(n_{\mathrm{mock}} < n_{\mathrm{data}})$, we apply a Gaussian-type penalty on the difference between $n_{\mathrm{mock}}$ and $n_{\mathrm{data}}$. When the mock number density is higher than the data number density $(n_{\mathrm{mock}} \geq n_{\mathrm{data}})$, we invoke an incompleteness fraction $f_{\mathrm{ic}}$ that uniformly downsamples the mock galaxies to match the data number density, and we impose no penalty. This definition of $\chi^2_{n_g}$ allows for incompleteness in the observed galaxy sample while penalizing HOD models that produce an insufficient galaxy number density. For the rest of this paper, we set $n_\mathrm{data} = 3.0\times 10^{-4} h^{3}$Mpc$^{-3}$ and a rather lenient $\sigma_n = 4.0\times 10^{-5} h^{3}$Mpc$^{-3}$.

We sample the baseline HOD parameter space, without any extensions, using the \textsc{dynesty} nested sampler \citep{2018Speagle, 2019Speagle}. Besides sampling the posterior space more efficiently than a Markov Chain Monte Carlo sampler, nested sampling codes such as \textsc{dynesty} also compute the Bayesian evidence,
\begin{equation}
    \mathcal{Z} = P(D|M) = \int_{\Omega_{\Theta}} P(D|\Theta, M)P(\Theta|M)d\Theta,
    \label{equ:evidence}
\end{equation}
where $M$ represents the model, $D$ represents the data, and $\Theta$ represents the model parameters. The evidence can simply be interpreted as the marginal likelihood of the data given the model, and serves as an important metric in Bayesian model comparisons. Our tests show that, with a sufficiently high number of live points, the nested sampling runs are also able to accurately identify the maximum-likelihood point in high-dimensional spaces. In our \textsc{dynesty} runs, we use 500 live points and a uniform sampler, with the stopping criterion set to $d\log\mathcal{Z} < 0.01$. We assume broad Gaussian priors for all 5 baseline HOD parameters, as summarized in Table~\ref{tab:abacushod_bestfits}.

\red{The best-fit $\chi^2 = 11$ (DoF = 10), and the best-fit HOD parameters are summarized in Table~\ref{tab:abacushod_bestfits}. Figure~\ref{fig:corner_wp} shows the $1,2,3\sigma$ posterior constraints. The best fit corresponds to a galaxy number density of $n_\mathrm{fit} = 5.0\times 10^{-4} h^{3}$Mpc$^{-3}$ and a satellite fraction of 9.6$\%$. The best-fit parameters are largely within the expected range. The small $\sigma$ value corresponds to a sharp mass cut-off for the central galaxies, which is reasonable given the constant-mass selection cuts of CMASS galaxies.
However, referring to the 2D marginalized posteriors, the typical value of $\sigma$ in the fit is closer to 0.3, and the maximum-likelihood mode might be a relative outlier.}

It is also apparent from the posterior constraints that the HOD parameters are degenerate and not well constrained. The positive correlation between $\log M_\mathrm{cut}$ and $\log M_1$ and the negative correlation between $\log M_\mathrm{cut}$ and $\alpha$ suggest a well-constrained satellite fraction. The positive correlation between $\log M_\mathrm{cut}$ and $\sigma$ suggests a well-constrained average bias for the centrals. It is possible that using the average bias and the satellite fraction would result in a more orthogonal HOD parameter basis.

\begin{figure*}
    \centering
    \hspace*{-0.6cm}
    \includegraphics[width = 7in]{./cornerplot_xi_base_velbias_nlive500_cleaned_logsigma_skipcol.pdf}
    \vspace{-0.3cm}
    \caption{The 1D and 2D marginalized posterior constraints on the baseline HOD parameters and velocity bias parameters from the $\xi(r_p, \pi)$ fit. The contours shown correspond to 1, 2, and 3$\sigma$ uncertainties. The vertical and horizontal lines show the maximum-likelihood values for reference. The values displayed above the 1D marginals are posterior medians with the upper/lower bounds associated with the 0.025 and 0.975 quantiles, or approximately the $2\sigma$ interval. Compared to the projected clustering $w_p$ constraints shown in Figure~\ref{fig:corner_wp}, the full-shape redshift-space clustering gives much tighter constraints on the parameters and breaks multiple parameter degeneracies. }
    \label{fig:corner_xi_base_velbias}
\end{figure*}

\subsection{CMASS $\xi(r_p, \pi)$ fit}
\label{subsec:xifit}

In \citet{2021Yuan}, we found that the redshift-space 2PCF, specifically in the form of $\xi(r_p, \pi)$, offers significantly more constraining power on the HOD and assembly bias than the projected 2PCF. In this section, we present \textsc{AbacusHOD}\ fits to the BOSS CMASS $\xi(r_p, \pi)$, and discuss the evidence for various HOD extensions in the \textsc{AbacusHOD}\ framework.

We follow the same routine as outlined for the projected 2PCF $w_p$, using the same simulation box and the same redshift snapshot ($z = 0.5$). We again assume a Gaussian likelihood, where the covariance matrix is computed from 400 jackknife samples of the data. We additionally apply corrections to the covariance matrix for the limited simulation volume, as well as the Hartlap correction \citep{2007Hartlap}. Sampling is performed with \textsc{dynesty}, with the same settings as before.

At a bare minimum, we need to extend the baseline HOD model by including velocity bias. In \citet{2021Yuan}, we used a novel, physically motivated implementation of satellite velocity bias, encoded by the parameter $s_v$. Here we use the more canonical $(\alpha_c, \alpha_s)$ model of velocity bias to also account for observational systematics. Figure~\ref{fig:corner_xi_base_velbias} showcases the posterior constraints, and the best-fit values are summarized in the third column of Table~\ref{tab:abacushod_bestfits_xi}, tagged ``Baseline'' at the top. With the $\xi(r_p, \pi)$ fit, we recover reasonable HOD parameter values. Unlike in the $w_p$ fit, we do not see strong degeneracies in the marginalized posteriors. As a result, the fit yields much tighter constraints on the HOD parameters $\log M_\mathrm{cut}$, $\log M_1$, and $\alpha$. For example, the 1$\sigma$ interval on $\log M_\mathrm{cut}$ is approximately 15 times tighter than when constrained only on $w_p$.
Similarly, the $1\sigma$ interval is 5 times tighter in $\log M_1$ and 3 times tighter in $\alpha$. This showcases the extra information contained in the redshift-space 2PCF. In terms of the velocity bias, we find a non-zero central velocity bias at 5$\sigma$ significance, and a satellite velocity bias consistent with 1. This measurement is consistent with the results of \citet{2015aGuo}, where the authors also found $\alpha_c \approx 0.2$ and $\alpha_s \approx 1$, albeit with lower signal-to-noise.

In \citet{2021Yuan}, we found a similar $\alpha_c \approx 0.2$, but we also found a significantly non-zero satellite velocity bias $s_v = 0.5-0.8$, which would suggest that the satellite velocity dispersion is larger than that of the dark matter. While the model difference can be partly responsible for this discrepancy, we also find the fit to be simulation dependent. \citet{2021Yuan} used the \textsc{AbacusCosmos} simulations, which have lower resolution and a slightly different cosmology. The two simulation suites also use different halo finders, as detailed in \citet{2021Hadzhiyska} and \citet{2021Bose}. We find that if we fit $\xi(r_p, \pi)$ with the same HOD + $(\alpha_c, s_v)$ model as in \citet{2021Yuan}, but using the new \textsc{AbacusSummit} simulations, we recover a much smaller best-fit value $s_v \approx 0.2\pm 0.4$, statistically consistent with no satellite velocity bias.

\begin{table*}
    \centering
    \begin{tabular}{ c | c c c c c}
        \hline
        Parameter name & $\mu_\mathrm{prior}\pm\sigma_\mathrm{prior}$ & Baseline & $B_\mathrm{cent}$, $B_\mathrm{sat}$ & $s, B_\mathrm{cent}, B_\mathrm{sat}$ & $A_\mathrm{cent}$, $A_\mathrm{sat}$ \\
        \hline \\[-1em]
        $\log_{10}M_{\mathrm{cut}}$ & 13.3$\pm$0.5 & $12.86^{+0.01}_{-0.01}$ & $12.80^{+0.02}_{-0.02}$ & $12.78^{+0.02}_{-0.02}$ & $12.88^{+0.02}_{-0.01}$ \\ \\[-1em]
        $\log_{10}M_1$ & 14.3$\pm$0.5 & $14.10^{+0.02}_{-0.02}$ & $14.00^{+0.04}_{-0.04}$ & $13.88^{+0.07}_{-0.05}$ & $14.17^{+0.03}_{-0.02}$\\ \\[-1em]
        $\log_{10} \sigma$ & $-1\pm$1 & $-2.8^{+0.4}_{-0.7}$ & $-2.9^{+0.4}_{-0.7}$ & $-2.9^{+0.4}_{-0.7}$ & $-2.2^{+0.4}_{-0.7}$ \\ \\[-1em]
        $\alpha$ & 1.0$\pm$0.3 & $1.12^{+0.04}_{-0.04}$ & $1.03^{+0.04}_{-0.04}$ & $1.05^{+0.04}_{-0.04}$ & $1.09^{+0.04}_{-0.04}$ \\ \\[-1em]
        $\kappa$ & 0.5$\pm$0.2 & $0.2^{+0.2}_{-0.1}$ & $0.3^{+0.2}_{-0.2}$ & $0.5^{+0.2}_{-0.2}$ & $0.15^{+0.17}_{-0.15}$ \\ \\[-1em]
        \hline \\[-1em]
        $\alpha_c$ & 0.3$\pm$0.2 & $0.22^{+0.02}_{-0.02}$ & $0.18^{+0.03}_{-0.04}$ & $0.10^{+0.04}_{-0.05}$ & $0.22^{+0.02}_{-0.02}$ \\ \\[-1em]
        $\alpha_s$ & 1.0$\pm$0.3 & $0.98^{+0.03}_{-0.04}$ & $1.00^{+0.03}_{-0.03}$ & $0.84^{+0.07}_{-0.05}$ & $0.98^{+0.03}_{-0.03}$ \\ \\[-1em]
        $s$ & 0.0$\pm$0.3 & / & / & $-0.63^{+0.2}_{-0.1}$ & / \\ \\[-1em]
        $A_\mathrm{cent}$ & 0.0$\pm$0.3 & / & / & / & $-0.40^{+0.09}_{-0.17}$\\ \\[-1em]
        $A_\mathrm{sat}$ & 0.0$\pm$0.3 & / & / & / & $0.2^{+0.2}_{-0.3}$ \\ \\[-1em]
        $B_\mathrm{cent}$ & 0.0$\pm$0.3 & / & $-0.04^{+0.02}_{-0.02}$ & $-0.04^{+0.03}_{-0.03}$ & / \\ \\[-1em]
        $B_\mathrm{sat}$ & 0.0$\pm$0.3 & / & $-0.17^{+0.11}_{-0.12}$ & $-0.15^{+0.09}_{-0.10}$ & / \\ \\[-1em]
        \hline
        $f_{\mathrm{ic}}$ & / & 0.58 & 0.46 & 0.43 & 0.58 \\
        $f_{\mathrm{sat}}$ & / & 0.11 & 0.13 & 0.15 & 0.11 \\
        \hline
        $\chi^2$ (DoF) & / & 60 (35) & 42 (33) & 33 (32) & 54 (33)\\
        $\log \mathcal{Z}$ & / & $-62$ & $-52$ & $-51$ & $-59$ \\
        \hline
    \end{tabular}
    \caption{Summary of the key HOD fits to the CMASS redshift-space 2PCF $\xi(r_p, \pi)$.
The first column lists the HOD parameters, the incompleteness factor $f_{\mathrm{ic}}$, the satellite fraction $f_{\mathrm{sat}}$, the final $\chi^2$, the degrees of freedom (DoF), and the marginalized Bayesian evidence. The second column shows the priors. The third column summarizes the best-fit parameter values of the baseline CMASS redshift-space 2PCF fit with the baseline HOD + velocity bias ($\alpha_c, \alpha_s$). The following columns list the best-fit parameters when we introduce the additional parameters shown in the top row. The errors shown are $1\sigma$ marginalized errors. The fourth column shows that the addition of the environment-based secondary bias parameters substantially improves the fit and that their inclusion is strongly preferred by the data. The negative $B_\mathrm{cent}$ and $B_\mathrm{sat}$ values suggest that galaxies preferentially occupy less massive halos in denser environments. Figure~\ref{fig:corner_B} shows the 2D posteriors of $B_\mathrm{cent}$ and $B_\mathrm{sat}$, showcasing a $>3\sigma$ detection. The next column shows that the satellite profile parameter $s$ further improves the fit, preferring a less concentrated satellite profile relative to the halo. The last column shows that the concentration-based assembly bias parameters moderately improve the fit. Figure~\ref{fig:corner_A} shows the 2D posteriors of the assembly bias parameters, showing a weaker detection at just above $2\sigma$. The best-fit values suggest that centrals preferentially occupy more concentrated (older) halos whereas satellites occupy less concentrated (younger) halos, which aligns with theoretical intuition. }
    \label{tab:abacushod_bestfits_xi}
\end{table*}

\subsection{Introducing secondary biases}
\label{subsec:abfit}

\begin{figure}
    \centering
    \hspace*{-0.6cm}
    \includegraphics[width = 3.6in]{./cornerplot_xi_base_velbias_Bold_nlive500_cleaned_logsigma_skipcol_Boldonly_overlap.pdf}
    \vspace{-0.3cm}
    \caption{The 1D and 2D marginalized posterior constraints on the environment-based secondary bias parameters $B_\mathrm{cent}$ and $B_\mathrm{sat}$ from the $\xi(r_p, \pi)$ fit. The contours shown correspond to 1, 2, and 3$\sigma$ uncertainties. The vertical and horizontal lines mark the zeros. The values displayed above the 1D marginals are posterior medians with the upper/lower bounds associated with the 0.025 and 0.975 quantiles, or approximately the $2\sigma$ interval. The blue contours show the constraints when we include $B_\mathrm{cent}$ and $B_\mathrm{sat}$ in addition to the baseline HOD + velocity bias model. The magenta contours show the constraints when we also include the satellite profile parameter $s$. While the marginalized 1D posteriors do not show significant detections, the 2D posterior shows that the preference for nonzero $B_\mathrm{cent}$ and $B_\mathrm{sat}$ is strong. }
    \label{fig:corner_B}
\end{figure}

\begin{figure}
    \centering
    \hspace*{-0.6cm}
    \includegraphics[width = 3.6in]{./cornerplot_xi_base_velbias_A_nlive500_cleaned_logsigma_skipcol_Aonly_overlap.pdf}
    \vspace{-0.3cm}
    \caption{The 1D and 2D marginalized posterior constraints on the assembly bias parameters $A_\mathrm{cent}$ and $A_\mathrm{sat}$ from the $\xi(r_p, \pi)$ fit. The contours shown correspond to 1 and 2$\sigma$ uncertainties. The 3$\sigma$ contour is less constrained, and we omit it for better visualization. The vertical and horizontal lines mark the zeros.
The values displayed above the 1D marginals are posterior medians with the upper/lower bounds associated with the 0.025 and 0.975 quantiles, or approximately the $2\sigma$ interval. The blue contours correspond to the constraints when only the assembly bias parameters are added to the baseline HOD + velocity bias model, whereas the magenta contours correspond to when we also add the satellite profile parameter $s$ to the model. We find a $2\sigma$ detection of central assembly bias. The detection of satellite assembly bias is weak, especially when we include the satellite profile parameter $s$. }
    \label{fig:corner_A}
\end{figure}

In this section, we further extend the baseline + velocity bias model with the environment-based secondary bias and the concentration-based assembly bias. First, we extend the baseline + velocity bias model with the following parameters one at a time: $s$, $s_v$, $s_p$, ($A_\mathrm{cent}$, $A_\mathrm{sat}$), and ($B_\mathrm{cent}$, $B_\mathrm{sat}$), where each pair of secondary bias parameters is considered one ``parameter''. We find that $s$, ($A_\mathrm{cent}$, $A_\mathrm{sat}$), and ($B_\mathrm{cent}$, $B_\mathrm{sat}$) significantly improve the best-fit $\chi^2$, with $\Delta \chi^2 = -7.1$, $-6.1$, and $-18.2$, respectively. The other parameters do not significantly improve the fit. The finding that assembly bias and environment-based secondary bias improve the fit to the redshift-space 2PCF is qualitatively consistent with our findings in \citet{2021Yuan}.

Since the environment-based secondary bias brings the largest improvement to the fit, we first introduce $B_\mathrm{cent}$ and $B_\mathrm{sat}$ and use \textsc{dynesty} to compute the model evidence and sample the posterior space. This model thus includes the 5 baseline HOD parameters, velocity bias ($\alpha_c, \alpha_s$), and environment-based secondary bias ($B_\mathrm{cent}, B_\mathrm{sat}$). The fourth column of Table~\ref{tab:abacushod_bestfits_xi} summarizes the resulting best-fit parameter values. The model yields a best-fit $\chi^2 = 42$, a significant improvement over the baseline + velocity bias model. The marginalized evidence also improves significantly, suggesting that the observed redshift-space 2PCF strongly prefers the inclusion of environment-based secondary bias in the model.

The blue contours in Figure~\ref{fig:corner_B} show the posterior constraints on $B_\mathrm{cent}$ and $B_\mathrm{sat}$ from this fit. While the marginalized 1D posteriors do not show significant detections, the 2D posterior shows that the preference for nonzero $B_\mathrm{cent}$ and $B_\mathrm{sat}$ is quite significant. The negative values of $B_\mathrm{cent}$ and $B_\mathrm{sat}$ are consistent with the positive $A_e$ value reported in \citet{2021Yuan}, given the definitional differences. In both analyses, we find that the data preferentially put galaxies in halos in denser environments. It is also worth noting that we find the environment-based bias for the satellite galaxies to be stronger than that for the central galaxies. This shows the need for separate secondary bias prescriptions for the centrals and satellites, as opposed to the unified prescription used in \citet{2021Yuan}.

Revisiting the other parameters in this fit, we continue to find strong evidence for central velocity bias, but no evidence for satellite velocity bias. Interestingly, the baseline HOD parameter fits are sensitive to the inclusion of environment-based secondary bias.
Specifically, we see a decrease in $M_\mathrm{cut}$, $M_1$, and $\alpha$ compared to the fit without the secondary bias. These decreases translate to moving both centrals and satellites to lower-mass halos. This is the same preference we saw in \citet{2021Yuan}, which in turn decreases the predicted weak lensing signal. We revisit the lensing discussion in the following section.

The fifth column of Table~\ref{tab:abacushod_bestfits_xi} summarizes the best-fit parameters when we also include the satellite profile parameter $s$ in addition to the environment-based secondary bias parameters ($B_\mathrm{cent}, B_\mathrm{sat}$). We see a further improvement to the best-fit $\chi^2$, down to 33 with 32 degrees of freedom. The model evidence also sees a further improvement. The introduction of $s$ does not significantly bias the best-fit values of the environment-based secondary bias parameters, but it does affect the best-fit values of the baseline parameters and the velocity bias parameters. Specifically, we see a further decrease in the halo mass of central and satellite galaxies. The decrease in $M_1$ while $\alpha$ remains the same results in an increase in the inferred satellite fraction. The negative $s$ itself implies a less concentrated satellite galaxy distribution relative to the halo profile, preferring the outer regions of the halo over the halo core. The magenta contours in Figure~\ref{fig:corner_B} show that the inclusion of $s$ does not alter the posterior constraints on the environment-based secondary bias parameters.

\red{Figure~\ref{fig:xifit} showcases our best fit with environment-based secondary bias, corresponding to the fifth column of Table~\ref{tab:abacushod_bestfits_xi}. Specifically, the left-hand side shows the target data vector of our analysis, i.e. the CMASS redshift-space $\xi(r_p, \pi)$ measurement. The right-hand side shows the difference between our best fit and the data vector, normalized by the data error bars, which we compute from the diagonal of the data covariance matrix. We achieve a good fit in most bins, within $1-2\sigma$, with the exception of a few bins at $3-5h^{-1}$Mpc in the transverse direction and large $\pi$. However, note that these bins at larger $r_p$ and $\pi$ tend to be covariant, so the diagonal errors quoted underestimate the true level of uncertainty in the data, and the discrepancy between the data and the best fit is less statistically significant than it appears. There is, however, a trend in the transverse direction, where the model tends to overestimate at small $r_p$ but underestimate at larger $r_p$. This suggests that there is still a small residual signal that our model has not fully accounted for.}

\begin{figure*}
    \centering
    \hspace*{-0.6cm}
    \includegraphics[width = 7in]{./plot_bestfit_compare_A_B.pdf}
    \vspace{-0.3cm}
    \caption{\red{The CMASS $\xi(r_p, \pi)$ that we fit to in this section (left panel) and our best fit with environment-based secondary biases (right panel). The right-hand panel specifically shows the difference between the best fit and the data, normalized by the data error bars, computed from the diagonal of the jackknife covariance matrix. We achieve a good fit in most bins, with the exception of a few bins at a few megaparsecs in the transverse direction and large $\pi$.} }
    \label{fig:xifit}
\end{figure*}

Next, we test the baseline + velocity bias ($\alpha_c, \alpha_s$) + assembly bias ($A_\mathrm{cent}, A_\mathrm{sat}$) model. Again, we use \textsc{dynesty} to compute the model evidence and sample the posterior space.
The sixth column of Table~\ref{tab:abacushod_bestfits_xi} summarizes the best-fit parameter values when we include the parameters $A_\mathrm{cent}$ and $A_\mathrm{sat}$. The inclusion of the assembly bias parameters moderately reduces the $\chi^2$ per degree of freedom, and improves the marginalized Bayesian evidence. The blue contours of Figure~\ref{fig:corner_A} show the posterior constraints on the assembly bias parameters. The 2D constraints show that the detection of central assembly bias is at approximately $2\sigma$ confidence, whereas that of satellite assembly bias is below $1\sigma$. The magenta contours show the constraints when we also add the satellite profile parameter $s$. We recover the same constraints for the central assembly bias but no evidence for the satellite assembly bias. Overall, the evidence for assembly bias is less significant than that for environment-based secondary bias. The central and satellite assembly biases also seem to exhibit opposite behaviors. The best-fit values suggest that the centrals tend to live in more concentrated halos while the satellites prefer to live in less concentrated halos. This is consistent with the fact that more concentrated halos tend to be older, where more satellites have already merged with the central. The discrepant assembly bias signature between centrals and satellites has also recently been found in the BOSS LOWZ sample \citep{2021Lange}, though they found the assembly bias signature to be dependent on redshift. Based on the 2D constraints, the central and satellite assembly biases also appear to be uncorrelated. \subsection{LOWZ fits} We repeat our analysis for the BOSS LOWZ sample in the redshift range $0.15 < z < 0.40$. We continue to find that the baseline 5-parameter HOD model provides a good fit to the projected galaxy 2-point correlation function $w_p$, yielding a best-fit $\chi^2 = 10$ with DoF = 10. We also find consistent results in the full-shape $\xi(r_p, \pi)$ fit. Most notably, the environment-based secondary bias continues to enable a significantly better fit to the observed redshift-space clustering than the concentration-based assembly bias. Specifically, without any secondary biases, the baseline HOD plus velocity bias model achieves a best-fit $\chi^2/\mathrm{DoF} = 1.42$. With assembly bias parameters $A_\mathrm{cent}$ and $A_\mathrm{sat}$, we get $\chi^2/\mathrm{DoF} = 1.36$, but with environment-based bias parameters $B_\mathrm{cent}$ and $B_\mathrm{sat}$, we get a much improved $\chi^2/\mathrm{DoF} = 0.97$. This is similar to the behavior we see for the CMASS fits in Table~\ref{tab:abacushod_bestfits_xi}. We omit the detailed figures and tables for brevity. \section{Application to \lowercase{e}BOSS multi-tracer clustering} \label{sec:eboss} \red{ In this section, we apply the \textsc{AbacusHOD}\ framework to fitting the multi-tracer cross-correlation measurements of the eBOSS galaxy samples. This serves as a scientifically interesting application that showcases the multi-tracer capabilities of the \textsc{AbacusHOD}\ framework. } \subsection{The eBOSS sample} \red{ The dataset comes from the extended Baryon Oscillation Spectroscopic Survey (eBOSS) \citep{2016Dawson}. The eBOSS project is one of the programmes within the wider 5-year Sloan Digital Sky Survey-IV \citep[SDSS-IV;][]{2017Blanton}. The eBOSS sample consists of four different types of tracers, namely Luminous Red Galaxies (LRG), Emission Line Galaxies (ELG), Quasi-Stellar Objects (QSO), and the Lyman-$\alpha$ forest.
For this analysis, we use a subset of the eBOSS samples covering the redshift range from 0.7 to 1.1, where all three tracers of interest, namely LRGs, ELGs, and QSOs, overlap. The overlap region allows these tracers to be studied through their cross-correlations, and the combined samples are dense enough to probe the underlying dark matter distribution. We use intermediate versions of the data release 16 (DR16) catalogues produced by the eBOSS collaboration \citep[][]{2021Raichoor}. Any changes between the version we have used and the final versions are expected to be minor and to mainly affect the results at large scales. The details of the target selection and the cross-correlation measurements can be found in \citet{2020Alam}. The mean number density per tracer is $n_\mathrm{LRG} = 1\times 10^{-4}h^3$Mpc$^{-3}$, $n_\mathrm{ELG} = 4\times 10^{-4}h^3$Mpc$^{-3}$, and $n_\mathrm{QSO} = 2\times 10^{-5}h^3$Mpc$^{-3}$. The exact $n(z)$ distribution can be found in figure~1 of \citet{2020Alam}. } \subsection{Fitting eBOSS auto/cross-correlations} \red{ The eBOSS cross-correlations are measured within the overlapping footprint of the three tracers, resulting in a set of 6 projected auto/cross-correlation measurements, shown as the orange data points in Figure~\ref{fig:ebossfit}. The errorbars are estimated from the jackknife covariance of each measurement. We refer the reader to Section~4 of \citet{2020Alam} for a detailed description of the data vector measurements and the associated systematics.} \red{ For this analysis, we only invoke the baseline HOD for each of the three tracers in \textsc{AbacusHOD} (Equations~\ref{equ:zheng_hod_cent}--\ref{eq:alam_hod_elg}), for a total of 20 HOD parameters. Additionally, we account for incompleteness in each tracer. We define the $\chi^2$ similarly to Equation~\ref{equ:logL_wp}, except that the $\chi^2_{w_p}$ term is now the summation of 6 individual $\chi^2$ terms, one for each auto/cross-correlation measurement, and the $\chi^2_{n_g}$ term is the summation of 3 terms, one for each tracer. We calculate a jackknife covariance for each of the auto/cross-correlation measurements, but we do not account for the covariance between the 6 measurements in this analysis. To derive the best fit, we follow the methodology of \citet{2021Yuan} in using a global optimization technique known as the covariance matrix adaptation evolution strategy \citep[CMAES;][]{2001Hansen}. We use an implementation that is part of the publicly available StochOPy (STOCHastic OPtimization for PYthon) package.\footnote{\url{https://github.com/keurfonluu/StochOPy}} A schematic version of this optimization step is sketched below.}
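\red{ To make the optimization step concrete, the following is a minimal, self-contained sketch of such a composite $\chi^2$ minimized with CMAES. All data vectors and the prediction function are toy placeholders (the real objective evaluates \textsc{AbacusHOD} on the \textsc{AbacusSummit} halo catalogs with the 20 HOD parameters plus incompleteness factors), and the \texttt{cma} package's ask/tell interface is used here as a stand-in for the StochOPy implementation quoted above. }
\begin{verbatim}
import numpy as np
import cma  # pycma; a stand-in for StochOPy's CMA-ES solver

rng = np.random.default_rng(1)

# Toy stand-ins for the 6 auto/cross w_p measurements, their
# (diagonal) jackknife errors, and the 3 tracer number densities.
wp_obs = [rng.uniform(1.0, 10.0, 8) for _ in range(6)]
wp_err = [0.1 * w for w in wp_obs]
ng_obs = np.array([1e-4, 4e-4, 2e-5])   # LRG, ELG, QSO
ng_err = 0.1 * ng_obs

def predict(theta):
    """Placeholder for the AbacusHOD multi-tracer prediction."""
    wp_pred = [theta[i] * w for i, w in enumerate(wp_obs)]
    ng_pred = theta[6:9] * ng_obs
    return wp_pred, ng_pred

def chi2(theta):
    wp_pred, ng_pred = predict(theta)
    c = sum(np.sum(((p - o) / e)**2)                # 6 w_p terms
            for p, o, e in zip(wp_pred, wp_obs, wp_err))
    c += np.sum(((ng_pred - ng_obs) / ng_err)**2)   # 3 n_g terms
    return c

# CMA-ES global minimization via the standard ask/tell loop.
es = cma.CMAEvolutionStrategy(np.ones(9), 0.3)
while not es.stop():
    solutions = es.ask()
    es.tell(solutions, [chi2(x) for x in solutions])
print("best-fit chi2:", es.result.fbest)
\end{verbatim}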
\begin{figure*} \centering \hspace*{-0.6cm} \includegraphics[width = 7in]{./plot_bestfit_wp_eboss_optimize.pdf} \vspace{-0.3cm} \caption{\red{The eBOSS auto/cross-correlation measurements in orange, and the \textsc{AbacusHOD}\ best fit in blue. We achieve a good fit, with best-fit $\chi^2 = 89$ and DoF = 82. Visually, the ELG auto-correlation function appears to show large discrepancies between the data and the prediction in the largest $r$ bins. However, those bins are highly covariant, and the errorbars shown significantly underestimate the true level of uncertainty.} } \label{fig:ebossfit} \end{figure*} \red{ We achieve a good fit to the data, with best-fit $\chi^2 = 89$ and DoF = 82. We showcase the best fit with the blue curves in Figure~\ref{fig:ebossfit}. Visually, the largest deviation comes from the large-scale bins of the ELG auto-correlation function. However, those large-scale bins are highly covariant, and the errorbars shown significantly underestimate the true level of uncertainty. These measurements will be dramatically improved with DESI. Compared to the best fit shown in Figure~5 of \citet{2020Alam}, we see broad consistency between the two fits. This is to be expected, as the two fits use equivalent HOD models, though implemented on different simulations. Another difference is that the \citet{2020Alam} analysis only fits the 3 auto-correlation functions, whereas we fit all 6 measurements simultaneously. } \red{ The best-fit HOD parameters are summarized in Table~\ref{tab:ebosshod} and visualized in Figure~\ref{fig:mthod}. Compared to Table~1 of \citet{2020Alam}, there are some inconsistencies. However, these differences may be due to differences in the HOD implementation and in the specifications of the simulations and halo finders. The more important point is that both models describe the auto/cross-correlation functions sufficiently well, and the best-fit predictions are consistent between the two models. We can compute the typical halo mass from the best-fit HOD parameters, yielding $M_h^{\mathrm{LRG}} = 1.9\times 10^{13}h^{-1}M_\odot$, $M_h^{\mathrm{ELG}} = 3.0\times 10^{12}h^{-1}M_\odot$, and $M_h^{\mathrm{QSO}} = 6.8\times 10^{12}h^{-1}M_\odot$. This is consistent with the findings of \citet{2020Alam}, where the authors found a mean mass per tracer of $M_h^{\mathrm{LRG}} = 1.9\times 10^{13}h^{-1}M_\odot$, $M_h^{\mathrm{ELG}} = 2.9\times 10^{12}h^{-1}M_\odot$, and $M_h^{\mathrm{QSO}} = 5\times 10^{12}h^{-1}M_\odot$. While the mean halo masses of the LRGs and ELGs match closely, our inferred QSO halo mass is somewhat larger than in previous studies, though consistent within the statistical uncertainty \citep[also refer to][]{2017Laurent, 2017Rodriguez}. } \begin{figure} \centering \hspace*{-0.6cm} \includegraphics[width = 3.5in]{./plot_bestfit_wp_eboss_new_hodstacked.pdf} \vspace{-0.6cm} \caption{\red{The best-fit HOD model as a function of halo mass for all three eBOSS tracers. The corresponding parameter values are listed in Table~\ref{tab:ebosshod}. The shaded areas show the central occupation, stacked to show the total galaxy occupation as a function of halo mass. The dashed curves show the satellite distribution. This plot shows the different mass regimes of the three tracers, with LRGs occupying the most massive halos, whereas the ELG occupation extends down to much lower masses.} } \label{fig:mthod} \end{figure} \red{ In terms of satellite fraction, we find that the LRGs have a satellite fraction of $13\%$, consistent with \citet{2020Alam} but slightly lower than \citet{2017Zhai}. For QSOs, we find a satellite fraction of approximately $5\%$, consistent with previous QSO studies \citep[e.g.][]{2017Rodriguez}, but much lower than the outlier $30\%$ reported in \citet{2020Alam}. However, we do find a different mode in the likelihood surface that offers a comparable $\chi^2$ and a much higher QSO satellite fraction ($\sim 34\%$). We reject that mode because it requires rather extreme values for the other HOD parameters. Nevertheless, this highlights the limitations of an optimization analysis, and calls for a comprehensive posterior analysis. For ELGs, we find a satellite fraction of $7\%$, which is lower than previously reported values in the range of $12\%$ to $17\%$. Compared to \citet{2020Alam}, this is due to our finding both a higher $M_1$ and a lower $\alpha$. We suspect this is partially due to differences in the ELG HOD implementation and in the halo finder.
The \textsc{CompaSO} halo finder used in this analysis is known to deblend halos more aggressively than Friends-of-Friends finders and \textsc{Rockstar} \citep[][]{2013Behroozi}, resulting in more ELGs being identified as centrals. } \begin{table} \centering \begin{tabular}{lccc} \hline Parameters & LRG & QSO & ELG\\ \hline $\log_{10}M_\mathrm{cut}$ & 12.8 & 12.2 & 11.83\\ $\log_{10}\sigma$ & $-1.0$ & $-1.63$ & $-0.24$ \\ $\gamma$ & - & - & 5.8 \\ $Q$ & - & - & 19.0 \\ $\log_{10}(M_1)$ & 14.0 & 14.0 & 14.0 \\ $\kappa$ & 0.63 & 0.63 & 0.82 \\ $\alpha$ & 0.78 & 1.04 & 0.70 \\ $p_{\rm max}$ & 1.0 (fixed) & 0.85 & 0.68 \\ \hline \end{tabular} \caption{The best-fit \textsc{AbacusHOD}\ parameters for all three eBOSS tracers. This should be compared to Table~1 of \citet{2020Alam}. The ELG column here should be compared to the ELG (HMQ) column in \citet{2020Alam}. We do not implement $p_{\rm max}$ for LRGs, as it is redundant with the incompleteness factor in our implementation.} \label{tab:ebosshod} \end{table} \section{Discussion} \label{sec:discuss} \subsection{Sensitivity to environment definition} The choice of $r_\mathrm{max} = 5h^{-1}$Mpc in our environment definition is somewhat arbitrary. It is the recommended value from \citet{2020Hadzhiyska}, in which the authors found $r_\mathrm{max} = 5h^{-1}$Mpc to be best at capturing the secondary bias signature in hydrodynamical simulations. In \citet{2021Yuan}, we also found $r_\mathrm{max} = 4-5h^{-1}$Mpc to yield the best fit. However, now that we are using a different simulation set and a different halo finder, we again test different values of $r_\mathrm{max}$. We find no significant improvement to the fit as we vary $r_\mathrm{max}$. The lensing prediction of the best-fit HOD is also largely insensitive to $r_\mathrm{max}$. We also test the definition $r_\mathrm{max} = 5\times r_{98}$, where we remove any fixed scale from the definition. Again, we find no improvement to the fit compared to the $r_\mathrm{max} = 5h^{-1}$Mpc case. An alternative, computationally cheaper, definition of the local environment is to use a density grid. Specifically, one calculates the local density smoothed over a Gaussian kernel at fixed grid points spanning the entire simulation box. The local density of each halo can then be approximated from the overdensity at its nearest grid points, avoiding explicit pair counts. We test this environment definition with a grid size of $N = 1024^3$ and a Gaussian smoothing scale of 3$h^{-1}$Mpc (a minimal sketch of this procedure is given below). Surprisingly, we find that this grid-based environmental secondary bias yields no significant improvement over the no-environment HOD in either the best-fit $\chi^2$ or the model evidence. While we still need to explore more grid-based variations before declaring a clear preference, we highlight the high sensitivity of the fit to the details of the HOD model. For now, we continue to recommend the use of the enclosed neighbor mass as the local environment definition.
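For concreteness, the sketch below illustrates the grid-based estimate, assuming nearest-grid-point assignment in a periodic box; the box size, tracer positions, and the reduced grid resolution are illustrative placeholders (the test quoted above uses a $1024^3$ grid).
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

# Placeholder periodic box and tracer positions (units of Mpc/h).
Lbox, rsmooth = 2000.0, 3.0    # Gaussian smoothing scale of 3 Mpc/h
ngrid = 128                    # reduced from the 1024^3 quoted above
pos = np.random.default_rng(0).uniform(0.0, Lbox, size=(100000, 3))

# Nearest-grid-point (NGP) overdensity field.
idx = np.floor(pos / Lbox * ngrid).astype(int) % ngrid
rho = np.zeros((ngrid, ngrid, ngrid))
np.add.at(rho, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
delta = rho / rho.mean() - 1.0

# Smooth with a Gaussian kernel (sigma given in grid cells), using
# wrap-around boundaries to respect the box periodicity.
delta_s = gaussian_filter(delta, sigma=rsmooth / (Lbox / ngrid),
                          mode="wrap")

# Environment of each halo, read off at its nearest grid point.
env = delta_s[idx[:, 0], idx[:, 1], idx[:, 2]]
\end{verbatim}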
\subsection{Galaxy-galaxy lensing comparison} \label{subsec:lensing} \begin{figure} \centering \hspace*{-0.6cm} \includegraphics[width = 3.5in]{./plot_lensing_compare_Bold.pdf} \vspace{-0.6cm} \caption{The lensing prediction of the best-fit HOD models when including the environment-based secondary bias parameters. The top panel shows the $r$-weighted surface overdensity profile, whereas the bottom panel shows the relative deviation of the predictions from the data. The red curve corresponds to the baseline HOD fit on $w_p$ presented in Section~\ref{subsec:wpfit}. The orange curve corresponds to the baseline HOD + velocity bias fit presented in Section~\ref{subsec:xifit}. The solid and dashed purple curves correspond to the environment-based secondary bias fits presented in Section~\ref{subsec:abfit}, where the dashed line also includes the satellite profile parameter $s$. The shaded purple region shows the corresponding $1\sigma$ errorbars. These fits are summarized in Table~\ref{tab:abacushod_bestfits} and Table~\ref{tab:abacushod_bestfits_xi}. } \label{fig:lensing_B} \end{figure} \begin{figure} \centering \hspace*{-0.6cm} \includegraphics[width = 3.5in]{./plot_lensing_compare_A.pdf} \vspace{-0.6cm} \caption{The lensing prediction of the best-fit HOD models when including the assembly bias parameters. The top panel shows the $r$-weighted surface overdensity profile, whereas the bottom panel shows the relative deviation of the predictions from the data. The red curve corresponds to the baseline HOD fit on $w_p$ presented in Section~\ref{subsec:wpfit}. The orange curve corresponds to the baseline HOD + velocity bias fit presented in Section~\ref{subsec:xifit}. The solid and dashed purple curves correspond to the assembly bias fits presented in Section~\ref{subsec:abfit}, where the dashed line also includes the satellite profile parameter $s$. The purple shaded region denotes the corresponding $1\sigma$ errorbars.} \label{fig:lensing_A} \end{figure} A well-known observational tension exists between galaxy clustering and galaxy-galaxy lensing (g-g lensing). \citet{2017Leauthaud} found discrepancies of 20-40$\%$ between their measurements of g-g lensing for CMASS galaxies and a model predicted from mock galaxy catalogs generated at Planck cosmology that match the CMASS projected correlation function \citep[][see Figure~7 of \citealt{2017Leauthaud}]{2014Reid, 2016Saito}. \citet{2019Lange, 2020Lange} extended this result by finding a similar ${\sim} 25\%$ discrepancy between the projected clustering measurement and the g-g lensing measurement in the BOSS LOWZ sample. In \citet{2019Yuan}, we reaffirmed this tension by simultaneously fitting the projected galaxy clustering and g-g lensing with an extended HOD incorporating a concentration-based assembly bias prescription. However, in \citet{2021Yuan}, we found that the inclusion of both the assembly bias and an environment-based secondary bias can significantly reduce ($\sim 10-20\%$) the predicted lensing signal when constrained on the redshift-space 2PCF. We concluded that assembly bias and secondary biases in the HOD could be part of the explanation for the lensing tension. In this section, we revisit this issue with the new \textsc{AbacusHOD} fits. Before proceeding, we reiterate the key differences compared to the \citet{2021Yuan} analysis. First of all, we are now using a different set of simulations, with higher spatial and force resolution, a fundamentally different halo finder, and a slight difference in cosmology. Second, we have updated the HOD model, with a different model for velocity bias and a different model for secondary biases. Finally, while \citet{2021Yuan} used optimization routines to identify best fits, this analysis enables full posterior sampling, achieving more robust results. Figure~\ref{fig:lensing_B} showcases the g-g lensing predictions of the best-fit HODs in this analysis, specifically comparing the HOD models with and without environment-based secondary biases.
Again, we find that relative to the baseline HOD constrained on $w_p$, the inclusion of the environment-based secondary bias reduces the lensing prediction by $10-15\%$, towards better agreement with the observations. This shows that the correct secondary bias models not only significantly improve the fit to the redshift-space 2PCF, but also play an important role in resolving the g-g lensing tension. These two key results, in addition to the findings of \citet{2021Yuan}, combine to provide strong evidence for the existence of environment-based secondary bias in the CMASS galaxy population. This detection is only present when fitting the redshift-space 2PCF, demonstrating the extra information contained in the velocity-space structure of the full-shape 2PCF. The blue ``observed'' lensing data come from the Canada France Hawaii Telescope Lensing Survey \citep[CFHTLenS,][]{2012Heymans, 2013Miller} and the Canada France Hawaii Telescope Stripe 82 Survey \citep[CS82,][]{2013Erben}. We have also internally compared these measurements to updated measurements from the Hyper Suprime-Cam (HSC) survey \citep[][]{2018aMandelbaum, 2018bMandelbaum}, the Dark Energy Survey \citep[DES,][]{2016DES, 2018Drlica} Y1, and the Kilo-Degree Survey \citep[KiDS-1000,][]{2019Kuijken, 2020Wright, 2021Hildebrandt, 2021Giblin}. We find that the updated measurements are largely consistent with the older CFHTLenS data. The detailed comparison will be presented in upcoming papers (Amon et al. in prep, Lange et al. in prep). Figure~\ref{fig:lensing_A} showcases the g-g lensing predictions when including the concentration-based assembly bias parameters instead of the environment-based parameters. It is clear that the inclusion of the assembly bias parameters does not help resolve the lensing tension. This is contrary to the results of \citet{2021Yuan}, where we found that both assembly bias and environment-based secondary bias reduce the lensing tension. We attribute this to differences in the halo finders and in the assembly bias models. We discuss these differences further in Appendix~\ref{sec:cosmos}. \begin{figure} \centering \hspace*{-0.6cm} \includegraphics[width = 3.45in]{./plot_lowz_delta_wpbase.pdf} \vspace{-0.6cm} \caption{The correction to the baseline lensing prediction due to the inclusion of assembly bias (dashed lines) and environment-based bias (solid lines). CMASS fits are shown in orange whereas LOWZ fits are in purple. $\delta\Delta\Sigma = (\Delta\Sigma_{\mathrm{with\ bias}}/\Delta\Sigma_\mathrm{base})-1$, where the baseline prediction comes from the naive 5-parameter HOD fit on $w_p$. We see that the environment-based bias consistently lowers the lensing prediction for both samples by 10$\%$ on small scales, whereas assembly bias has less impact. } \label{fig:lensing_lowz} \end{figure} In LOWZ, we continue to find that the inclusion of the environment-based bias in the HOD model results in a more realistic lensing prediction, as we show in Figure~\ref{fig:lensing_lowz}. The figure shows the relative change to the predicted lensing signal due to the inclusion of assembly bias parameters (dashed lines) and environment-based secondary bias parameters (solid lines). Specifically, $\delta\Delta\Sigma = (\Delta\Sigma_{\mathrm{with\ bias}}/\Delta\Sigma_\mathrm{base})-1$, where the baseline prediction comes from the $w_p$ fit with the vanilla 5-parameter HOD model. The CMASS prediction is plotted in orange whereas the LOWZ prediction is plotted in purple.
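As a concrete reading of this correction, the short sketch below evaluates $\delta\Delta\Sigma$ for illustrative profiles; the two arrays are placeholders standing in for the tabulated $\Delta\Sigma$ predictions of the baseline and extended fits.
\begin{verbatim}
import numpy as np

# Placeholder DeltaSigma profiles on a log-spaced r_p grid (Mpc/h).
r_p = np.logspace(-1.0, 1.0, 15)
ds_base = 20.0 * r_p**-0.8                       # baseline w_p-only fit
ds_bias = ds_base * (1.0 - 0.10 * np.exp(-r_p))  # with secondary bias

# Fractional correction plotted in the figure.
delta_ds = ds_bias / ds_base - 1.0               # ~ -10% on small scales
\end{verbatim}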
Clearly, on small scales, the $10\%$ reduction in the lensing prediction due to the environment-based bias persists in both samples. The effect of the assembly bias is also consistent across both samples, but remains small in amplitude. This lends further weight to the conclusion from the previous sections that the inclusion of environment-based biases in the HOD model not only achieves a good fit to the small-scale full-shape clustering, but also reduces the lensing tension on small scales. We do caution that the lensing tension remains despite accounting for the environment-based secondary bias. A full resolution of the lensing tension likely requires a combination of secondary bias effects, as demonstrated here, improvements in the modeling of baryonic effects \citep{2020Amodeo}, and potentially a better accounting of observational systematics (Amon et al. in prep.). \red{A statistically rigorous joint analysis of the clustering and lensing measurements is required to determine whether the combination of these effects can indeed resolve the lensing tension. We reserve such an analysis for future papers. } \subsection{Synergies with other works} This work provides growing evidence that the local environment is an important tracer of secondary galaxy bias. \citet{2020Hadzhiyska} and \citet{2020Xu} both systematically tested the effectiveness of various secondary HOD dependencies in capturing the secondary bias signature, the former through hydrodynamical simulations and the latter through semi-analytical models. Both studies found the halo environment to be the best indicator of secondary bias. \citet{2021Yuan} and this work complement these previous works by providing observational support for including environment-based secondary bias in HOD models. This work also adds another piece to the ``lensing is low'' puzzle, by showing that the environment-based secondary bias can account for $30\%$ of the lensing discrepancy. Another recent development in resolving this discrepancy comes from the kinetic Sunyaev-Zeldovich (kSZ) effect measurements of the Atacama Cosmology Telescope (ACT) collaboration \citep{2020Amodeo, 2021Schaan}. These studies found that, due to baryonic feedback, the gas profile is significantly more extended in and around the host halos. A first-order estimate by the ACT team shows that this extended gas profile can reduce the predicted lensing signal by approximately $50\%$. This suggests that a combination of baryonic effects and secondary biases, and potentially a more thorough accounting of data systematics, could reconcile the lensing tension, without the need to invoke any change to the underlying cosmology. \section{Conclusions} \label{sec:conclude} In this paper, we present \textsc{AbacusHOD}, a new extended multi-tracer HOD framework built on top of the state-of-the-art \textsc{AbacusSummit} simulation suite. This HOD framework is feature-rich, incorporating flexible models for secondary biases and satellite properties. The code is highly optimized for fast HOD evaluation, enabling robust sampling of extended HOD parameter spaces. In the age of DESI, this code will be an important tool for multi-tracer analyses and cosmology inference on relatively small scales. \red{ We present two examples applying \textsc{AbacusHOD} and \textsc{AbacusSummit} to BOSS and eBOSS data. First, we model the full-shape redshift-space 2PCF on small to intermediate scales.
We find that the redshift-space 2PCF is significantly more constraining on the HOD parameters than the projected 2PCF, while also breaking parameter degeneracies between $M_\mathrm{cut}, M_1, \sigma,$ and $\alpha$. We test various extensions to the baseline + velocity bias model. We find that the observed redshift-space 2PCF strongly prefers the inclusion of environment-based secondary bias, with a greater than $3\sigma$ detection of the environment-based secondary bias parameters. We find weaker evidence for the canonical concentration-based assembly bias, with just over a $2\sigma$ detection. This is consistent with several recent studies that have found the local environment to be a far better indicator of galaxy secondary biases in the HOD. } \red{ In the second example, we showcase the multi-tracer capabilities of the \textsc{AbacusHOD}\ package by analyzing the auto/cross-correlation functions of the eBOSS LRG, ELG, and QSO samples. Our model achieves a good fit to the full data vector, yielding predictions consistent with previous analyses. } In the discussion section, we also highlight the fact that, by including the environment-based secondary bias, the best-fit g-g lensing prediction is decreased by approximately 10$\%$ in magnitude, accounting for about 1/3 of the lensing tension. We also find that assembly bias does not significantly lower the lensing prediction. This result is reproduced in both the CMASS and LOWZ samples. In a general sense, this is consistent with the conclusion of \citet{2021Yuan} that secondary biases can partially resolve the lensing tension. Combined with baryonic effects, which have recently been shown to account for up to $50\%$ of the lensing tension, and a more careful accounting of data systematics, we could potentially explain the full ``lensing is low'' tension. \section*{Acknowledgments} We would like to thank Shadab Alam, Johannes Lange, and Josh Speagle for technical guidance in the analyses. We would like to thank David Weinberg, Charlie Conroy, Lars Hernquist, and Douglas Finkbeiner for their helpful feedback. This work was supported by U.S. Department of Energy grant DE-SC0013718, NASA ROSES grant 12-EUCLID12-0004, NSF PHY-2019786, and the Simons Foundation. SB is supported by Harvard University through the ITC Fellowship. This work used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231. The {\sc AbacusSummit} simulations were conducted at the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725, through support from projects AST135 and AST145, the latter through the Department of Energy ALCC program. \section*{Data Availability} The simulation data are available at \url{https://abacussummit.readthedocs.io/en/latest/}. The \textsc{AbacusHOD}\ code package is publicly available as a part of the \textsc{abacusutils} package at \url{https://github.com/abacusorg/abacusutils}. Example usage can be found at \url{https://abacusutils.readthedocs.io/en/latest/hod.html}. \bibliographystyle{mnras}
\section{Introduction} The scattering of fermions off dilaton black holes has been extensively studied over the years \cite{HAW, HOOFT, GARF, STRO, SUS, GIDD, GIDD1, CALL, MIT, ARINF}, and it has provided much insight into the connection with Hawking radiation. Even after these intensive investigations, it remains a subject of considerable interest because of the subtleties involved in connection with the information loss scenario during the formation and subsequent evaporation of a black hole. It is worth mentioning at this stage that a controversy in this context was generated by Hawking's suggestion \cite{HAW} three decades ago. However, his recent suggestion on this issue \cite{HAW1} has brought back a rather pleasant scenario; it may even be thought that the controversy has come to an end. A general description of such a scattering problem is extremely difficult. Despite that, there have been attempts to study the problem in its full complexity through an s-matrix description of the event \cite{HOOFT}. Comparatively less complicated models therefore entered the picture and played a prominent role in this issue \cite{STRO, SUS, GIDD, GIDD1, CALL}. The toy model due to Alford and Strominger \cite{STRO} provides an interesting description of the s-wave scattering of fermions off a dilaton black hole, with trustworthy results concerning information loss. It helps to avoid some of the technical obstacles posed by quantum gravity in $(3+1)$ dimensions. Even in the presence of a gravitational anomaly, a systematic description of this scattering of fermions off the dilaton black hole has been possible through this model \cite{MIT, ARINF}. The provision for taking the effect of a one-loop correction \cite{JR, RABIN} into consideration is also an exciting aspect of this model, since it shows a way to investigate the effect of the anomaly \cite{MIT, ARINF} on this scattering phenomenon. Notably, this model arose in two-dimensional non-critical string theory, and its black hole solution was discovered in \cite{WAD}. A few years back, Mitra studied this scattering problem replacing the Dirac fermion by a chiral fermion and found an uncomfortable scenario \cite{MIT}: he observed that information failed to be preserved. With the use of the anomaly, we have shown that this disaster can be avoided \cite{ARINF}; the anomaly played a very surprising as well as interesting role there. Given the interesting role of the anomaly in the s-wave scattering of chiral fermions \cite{MIT, ARINF}, we intend here to investigate the role of the one-loop correction in the s-wave scattering of Dirac fermions, using the toy model due to Alford and Strominger \cite{STRO}. Needless to mention, the counterterm appearing here due to the one-loop correction looks similar to the term used in \cite{ARINF}. \section{Two dimensional effective model for studying s-wave scattering} Here we consider only a special case which turns out to be particularly simple: the scattering of s-wave fermions incident on a dilaton black hole. The black hole is an extremum of the following $(3+1)$-dimensional action: \begin{equation} S_{AF} = \int d^4 x\sqrt{-g}[R + 4(\nabla\Phi)^2 - {1\over 2}F^2 + i \bar\psi D\!\!\!/\psi]. \label{EQ1} \end{equation} Here $g$ represents the determinant of the space-time metric. The geometry consists of three regions \cite{STRO}. Far from the black hole there is an asymptotically flat region, and the mouth leads to an infinitely long throat region.
Inside the throat region the metric is approximated by the flat metric on two-dimensional Minkowski space times the round metric on the two-sphere of radius $Q$. The electromagnetic field strength is tangential to the two-sphere and integrates to $4\pi Q$ over it. When the relevant length scales are large compared to $Q$, the radius of the two-sphere, the dynamics within the throat region can be described by the effective action \begin{equation} S_{AF} = \int d^2\sigma\sqrt{-g}[R + 4(\nabla\Phi)^2 + {1\over {Q^2}} - {1\over 2}F^2 + i \bar\psi D\!\!\!/\psi], \label{EQ2} \end{equation} where $D_\mu=\partial_\mu + eA_\mu$. This is a two-dimensional effective field theory of dilaton gravity coupled to a fermion; $\Phi$ represents the scalar dilaton field and $\psi$ is the charged fermion. For a sufficiently low-energy incoming fermion, gravitational effects on the scattering of s-wave fermions incident on a charged dilaton black hole can be neglected, and equation (\ref{EQ2}) can be approximated by \begin{equation} {\cal S}_f = \int d^2x[i\bar\psi\gamma^\mu[\partial_\mu + ieA_\mu]\psi - {1\over 4} e^{-2\Phi(x)}F_{\mu\nu}F^{\mu\nu}]. \label{EQ3} \end{equation} The coupling $e$ has one mass dimension. The indices $\mu$ and $\nu$ take the values $0$ and $1$ in $(1+1)$-dimensional space-time. The dilaton field $\Phi$ stands as a non-dynamical background; its role here is simply to make the coupling constant a position-dependent function. This very toy model of quantum gravity in $(1+1)$ dimensions contains black holes and Hawking radiation in a significant manner. Let us now define $G^2(x)= e^{2\Phi(x)}$. We will choose a particular dilaton background motivated by the linear dilaton vacuum of $(1+1)$-dimensional gravity, as in the other standard cases \cite{GIDD, GIDD1, CALL, STRO, SUS, MIT, ARINF}. Therefore, $\Phi(x) = -x^1$, where $x^1$ is the space-like coordinate. The region $x^1 \to +\infty$ corresponds to the exterior space, where the coupling $G^2(x)$ vanishes and the fermion is able to propagate freely. In the region $x^1 \to -\infty$, however, the coupling constant diverges; this region is analogous to the infinite throat in the interior of certain magnetically charged black holes. Equation (\ref{EQ2}) was derived by viewing the throat region of a four-dimensional dilaton black hole as a compactification from four to two dimensions \cite{GARF, GIDD, STRO}. Note that, in the extremal limit, the geometry is completely non-singular and there is no horizon; however, when a low-energy particle is thrown into the non-singular extremal black hole, it produces a singularity and an event horizon. The geometry of the four-dimensional dilaton black hole consists of three significant regions \cite{GARF, STRO, SUS, GIDD, GIDD1}, as has already been mentioned. As one proceeds nearer to the black hole, the curvature begins to rise, and one finally enters the mouth region (the entry region to the throat). In the deep throat region the physics is governed by equation (\ref{EQ2}), since the metric in that region simplifies to the flat two-dimensional Minkowski metric times the round metric on the two-sphere of radius $Q$. The dilaton field $\Phi$ indeed increases linearly with the proper distance into the throat. We are now in a position to start our analysis, and we proceed with the bosonized version of the theory (\ref{EQ3}).
During the process of bosonization, a one-loop correction automatically enters the action, because bosonization requires integrating out the left-handed and the right-handed parts of the fermion one by one, which leads to a fermionic determinant \cite{JR, AR, AR1}. When this fermionic determinant is expressed in terms of a scalar field, a one-loop counterterm enters the theory in order to remove the divergence of the determinant. Thus the tree-level bosonized theory acquires the effect of the loop correction during the process. Of course, bosonization can also be done keeping the gauge symmetry intact, as was used in \cite{STRO}. Here the mass-like term for the gauge field has been taken into consideration, since we intend to study the effect of this one-loop correction on the s-wave scattering of the Dirac fermion. With the counterterm used in the study of the non-confining Schwinger model \cite{AR, AR1}, the bosonized action reads \begin{equation} {\cal L}_B = {1\over 2}\partial_\mu\phi \partial^\mu\phi - e\tilde\partial_\mu\phi A^\mu + {1\over 2}ae^2A_{\mu}A^{\mu} - {1\over 4}e^{-2\Phi(x)}F_{\mu\nu}F^{\mu\nu}. \label{LBH} \end{equation} Here $\phi$ represents a scalar field, and $\tilde\partial_\mu$ is the dual of $\partial_\mu$, defined by $\tilde\partial_\mu=\epsilon_{\mu\nu}\partial^\nu$. Note that the Lagrangian (\ref{LBH}) maps onto the non-confining Schwinger model \cite{AR, AR1} for $\Phi(x)=0$. The $U(1)$ current in this situation is \begin{equation} J_\mu = -e\epsilon_{\mu\nu}\partial^\nu\phi + ae^2A_\mu, \end{equation} and it is not conserved, since $\partial_\mu J^\mu \neq 0$. This current was conserved in \cite{GIDD, GIDD1, STRO, SUS}, where it had the form $J_\mu= -e\epsilon_{\mu\nu}\partial^\nu\phi$. The new setting considered here is indeed intended to show the role of the one-loop correction in the s-wave scattering of the Dirac fermion. \section{Hamiltonian analysis of the model} It is now necessary to carry out the Hamiltonian analysis of the theory in order to observe the effect of the dilaton field on the equations of motion. From the standard definition, the canonical momenta corresponding to the scalar field $\phi$ and the gauge fields $A_0$ and $A_1$ are found to be \begin{equation} \pi_\phi = \dot\phi - eA_1,\label{MO1} \end{equation} \begin{equation} \pi_0 = 0,\label{MO2} \end{equation} \begin{equation} \pi_1 = e^{-2\Phi(x)}(\dot A_1 - A_0')={1\over {G^2}}(\dot A_1 - A_0').\label{MO3} \end{equation} Here $\pi_\phi$, $\pi_0$ and $\pi_1$ are the momenta corresponding to the fields $\phi$, $A_0$ and $A_1$, respectively. Using the above equations, it is straightforward to obtain the canonical Hamiltonian through a Legendre transformation; it is found to be \begin{eqnarray} {\cal H} &=& {1\over 2}(\pi_\phi +eA_1)^2 + {1\over 2}e^{2\Phi(x)}\pi_1^2 + {1\over 2}\phi'^2 + \pi_1A_0' -eA_0\phi' \nonumber\\ &-& {1\over 2}ae^2(A_0^2 - A_1^2).\label{CHAM}\end{eqnarray} Note that there is an explicit space dependence in the Hamiltonian (\ref{CHAM}) through the dilaton field $\Phi(x)$, but this poses no hindrance to its preservation in time, so the consistency and physical sensibility of the theory are in no way threatened. Equation (\ref{MO2}) is the familiar primary constraint of the theory. Therefore, it is necessary to write down an effective Hamiltonian \begin{equation} {\cal H}_{eff} = {\cal H}_C + u\pi_0, \end{equation} where $u$ is an arbitrary Lagrange multiplier.
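For clarity, the consistency requirement can be written out explicitly. Computing the Poisson bracket of the primary constraint with the effective Hamiltonian, where the functional derivative of (\ref{CHAM}) with respect to $A_0$ picks up a spatial derivative from an integration by parts of the $\pi_1 A_0'$ term, one finds \begin{equation} \dot\pi_0 = [\pi_0, {\cal H}_{eff}] = -\frac{\delta {\cal H}}{\delta A_0} = \pi_1' + e\phi' + ae^2 A_0, \end{equation} which must vanish weakly.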
The primary constraint (\ref{MO2}) thus has to be preserved in time in order to have a consistent theory, and its preservation leads to the Gauss law of the theory as a secondary constraint: \begin{equation} G = \pi_1' + e\phi' + ae^2A_0 \approx 0. \label{GAUS} \end{equation} The preservation of the constraint (\ref{GAUS}), though it does not give rise to any new constraint, fixes the velocity $u$, which comes out to be \begin{equation} u =A'_1. \label{VEL} \end{equation} We therefore find that the phase space of the theory contains the following two second-class constraints: \begin{equation} \omega_1 = \pi_0 \approx 0, \label{CON1} \end{equation} \begin{equation} \omega_2 = \pi_1' + e\phi' + ae^2A_0 \approx 0.\label{CON2} \end{equation} Both constraints (\ref{CON1}) and (\ref{CON2}) are weak conditions up to this stage. When we impose these constraints strongly in the canonical Hamiltonian (\ref{CHAM}), it simplifies to the following form: \begin{eqnarray} {\cal H}_{red} &=& {1\over 2}(\pi_\phi + eA_1)^2 + {1\over {2ae^2}}(\pi'_1 + e\phi')^2 + {1\over 2}e^{2\Phi(x)}\pi_1^2 \nonumber\\ &+&{1\over 2}\phi'^2 + {1\over 2}ae^2A_1^2. \label{RHAM}\end{eqnarray} The ${\cal H}_{red}$ obtained in equation (\ref{RHAM}) is generally known as the reduced Hamiltonian. According to Dirac \cite{DIR}, the Poisson bracket becomes invalid for this reduced Hamiltonian; it remains consistent, however, with the Dirac bracket, which is defined by \begin{eqnarray} & &[A(x), B(y)]^* = [A(x), B(y)] \nonumber \\ &-&\int[A(x), \omega_i(\eta)] C^{-1}_{ij}(\eta, z)[\omega_j(z), B(y)]d\eta dz, \label{DEFD} \end{eqnarray} where $C^{-1}_{ij}(x,y)$ is given by \begin{equation} \int C^{-1}_{ij}(x,z) [\omega_j(z), \omega_k(y)]dz =\delta(x-y) \delta_{ik}. \label{INV} \end{equation} For the theory under consideration, \begin{equation} C_{ij}(x,y) = ae^2 \pmatrix {0 & -\delta(x-y) \cr \delta(x-y) & 0 \cr}. \label{MAT} \end{equation} Here $i$ and $j$ run from $1$ to $2$, and the $\omega$'s represent the constraints of the theory. With the definition (\ref{DEFD}), we can compute the Dirac brackets between the fields entering the reduced Hamiltonian ${\cal H}_{red}$. The Dirac brackets between the fields $A_1$, $\pi_1$, $\phi$ and $\pi_\phi$ are required to obtain the theoretical spectra (equations of motion): \begin{equation} [A_1(x), A_1(y)]^* = 0 = [\pi_1(x), \pi_1(y)]^* ,\label{DR1} \end{equation} \begin{equation} [A_1(x), \pi_1(y)]^* = \delta(x-y),\label{DR2} \end{equation} \begin{equation} [\phi(x), \phi(y)]^* = 0 =[\pi_\phi(x), \pi_\phi(y)]^* ,\label{DR3} \end{equation} \begin{equation} [\phi(x), \pi_\phi(y)]^* = \delta(x-y). \label{DR4} \end{equation} The Dirac brackets (\ref{DR1}), (\ref{DR2}), (\ref{DR3}) and (\ref{DR4}), along with Heisenberg's equation of motion, lead to the following four first-order equations: \begin{equation} \dot A_1= e^{2\Phi}\pi_1 -{1\over {ae^2}}(\pi_1'' + e\phi'') ,\end{equation} \begin{equation} \dot\phi = \pi_\phi + eA_1 , \end{equation} \begin{equation} \dot\pi_\phi = {{a+1}\over a}\phi'' + {1\over {ae}}\pi_1'' ,\end{equation} \begin{equation} \dot\pi_1 = -e\pi_\phi - (a+1)e^2A_1. \end{equation} A little algebra converts these four first-order equations into the following two second-order Klein-Gordon equations: \begin{equation} [\Box + (1+a)e^2e^{2\Phi(x)}]\pi_1 = 0, \label{SP1} \end{equation} \begin{equation} \Box[\pi_1 + e(1+a)\phi] = 0.
\label{SP2} \end{equation} Equation (\ref{SP1}) represents a massive boson with squared mass $m^2 =(1+a)e^2e^{2\Phi(x)}$. Here $a$ must be greater than $-1$ in order for the boson mass to be physical. Equation (\ref{SP2}), however, describes a massless boson. The presence of this massless boson plays a disastrous role here, which we now uncover. Let us concentrate on the theoretical spectra. Equation (\ref{SP1}) shows that the mass of the boson is not constant in this model: it contains the position-dependent factor $G^2=e^{2\Phi(x)}$, where $\Phi = -x^1$ for the background generated by the linear dilaton vacuum of $(1+1)$-dimensional gravity. Therefore, $m^2 \to +\infty$ as $x^1 \to -\infty$, and $m^2 \to 0$ as $x^1 \to +\infty$. Thus the mass of the boson increases indefinitely in the negative $x^1$ direction, which implies that any finite-energy contribution must be totally reflected, and an observer at $x^1 \to \infty$ will get back all the information. To be more precise, the mass vanishes near the mouth (the entry region to the throat) but increases indefinitely as one goes into the throat, because of the variation of the space-dependent factor $G^2$. Since a massless scalar is equivalent to a massless fermion in $(1+1)$ dimensions, we can conclude that a massless fermion proceeding into the black hole will not be able to travel an arbitrarily long distance: it will be reflected back with unit probability, and a unitary s-matrix can be constructed. So there is no threat of information loss from the massive sector of the theory; this is, of course, a pleasant scenario. However, an uncomfortable situation appears when we look carefully at the massless sector of the theory (\ref{SP2}). It remains massless irrespective of its position because, unlike the massive sector, it does not contain any space-dependent factor. So this fermion will be able to travel within the black hole without any hindrance, and an observer at $x^1 \to \infty$ will never see this fermion make the return journey. Thus a real problem of information loss appears in this setting. Note that in similar studies \cite{STRO, SUS, GIDD, GIDD1}, where the setting was such that the mass-like term for the gauge field was absent, this problem did not occur. Though the result of the present work leads to an uncomfortable situation, there is no known way to avoid it. It is true that, after Hawking's recent suggestion \cite{HAW1}, this seems an unwanted and untrustworthy scenario, but one cannot rule it out either if one accepts the model \cite{STRO}. More serious investigation of this issue is indeed needed. It is true that this result indicates a less bright side of the model, but its presence cannot be ignored or suppressed. \section{Conclusion} In this letter, the s-wave scattering of fermions off a dilaton black hole has been investigated in the presence of the one-loop correction due to bosonization. It was found that the presence of this correction term brings a disastrous result: information loss could not be avoided. The result is not in agreement with Hawking's recent suggestion either, but there is no way to rule out this possibility. The role of this type of quantum correction due to bosonization does not come as a great surprise for the first time; its crucial role was noticed earlier in the description of quantum electrodynamics and quantum chiral electrodynamics \cite{AR, AR1, BEL, JR, KH, PM, MG, FLO}.
A famous instance in this context is the rescue of chiral electrodynamics from its long-standing non-unitarity problem \cite{JR}. \noindent{\bf Acknowledgment}: It is a pleasure to thank the Director, Saha Institute of Nuclear Physics, and the Head of the Theory Group of Saha Institute of Nuclear Physics, Kolkata, for providing working facilities. I would like to thank the referee for his suggestions toward the improvement of this manuscript.
\begin{abstract} Theoretical prediction of the surface stellar abundances of light elements -- lithium, beryllium, and boron -- represents one of the most interesting open problems in astrophysics. As is well known, several measurements of $^7$Li abundances in stellar atmospheres point out a disagreement between predictions and observations in different stellar evolutionary phases, raising doubts about the capability of present stellar models to precisely reproduce stellar envelope characteristics. The problem takes different aspects in the various evolutionary phases; the present analysis is restricted to the protostellar and pre-Main Sequence phases. Light elements are burned at relatively low temperatures ($T$ from $\approx 2$ to $\approx 5$~million degrees) and thus, in the early evolutionary stages of a star, they are gradually destroyed at different depths of the stellar interior, mainly by (p,$\alpha$) burning reactions, in dependence on the stellar mass. Their surface abundances are strongly influenced by the nuclear cross sections, as well as by the extension toward the stellar interior of the convective envelope and by the temperature at its bottom, which depend on the characteristics of the star (mass and chemical composition) as well as on the energy transport in the convective stellar envelope. In recent years, a great effort has been made to improve the precision of light element burning cross sections. However, theoretical predictions of surface light element abundances are challenging because they are also influenced by the uncertainties in the input physics adopted in the calculations as well as the efficiency of several standard and non-standard physical processes active in young stars (i.e. diffusion, radiative levitation, magnetic fields, rotation). Moreover, it is still not completely clear how much the previous protostellar evolution affects the pre-Main Sequence characteristics and thus the light element depletion. This paper presents the state-of-the-art of theoretical predictions for protostars and pre-Main Sequence stars and their light element surface abundances, discussing the role of (p,$\alpha$) nuclear reaction rates and other input physics on the stellar evolution and on the temporal evolution of the predicted surface abundances. \tiny \fontsize{8}{11}\helveticabold { \section{Keywords:} Nuclear reactions, nucleosynthesis, abundances, Stars: pre-main sequence, Stars: evolution} \end{abstract} \section{Introduction} \label{sec:intro} Light elements -- lithium, beryllium and boron (hereafter Li, Be and B) -- are burned at relatively low temperatures ($T$ from $\approx 2$ to $\approx 5$~million degrees), easily reached in stellar interiors at the bottom of the convective envelope, even during the early pre-Main Sequence (pre-MS) evolution. Therefore, surface light elements are depleted if the mixing processes become efficient enough to bring them down to the destruction region. This property makes such elements very good tracers of the mixing efficiency in stellar envelopes, whose theoretical treatment is still a difficult task in stellar physics. Due to the different burning temperatures, the comparison between theory and observation for Li, Be and B, if possible, would be very useful to constrain theoretical models and, in particular, the extension of the convective envelope. Most of the observations concern the abundance of $^7$Li because, in most stars, surface $^6$Li is completely destroyed during the pre-MS phase and Be and B isotopic measurements are very problematic (e.g.
\citet{cunha2010}, \citet{kaufer10}, \citet{delgado12}). A huge amount of data for surface $^7$Li abundances is available both for disk, thick disk and halo field stars and for open clusters; however, the well-known discrepancy between predictions and observations of this quantity in clusters or in the Sun (the so-called ``lithium problem'') is still an open question (see e.g. \citet{xiong2002,piau02,sestito2003,deliyannis00,jeffries00,pinsonneault00,jeffries06, talon08a}). The theoretical prediction of surface light element abundances is complex, because they are sensitive both to the input physics (i.e., equation of state, reaction rates, opacity, etc...) and to the chemical element abundances (i.e., initial abundance of deuterium, helium, metals, etc...) adopted in stellar models, together with the assumed efficiency of microscopic diffusion and radiative acceleration (see e.g. \citet{piau02,burke04,richard2002,richard05,tognelli12,tognelli15b}). The situation is even more complicated because surface light element abundances seem to be affected by additional ``non-standard'' physical processes, not routinely included in stellar evolutionary codes, such as the possible presence of relevant magnetic fields and mass accretion processes in some young pre-MS stars (see e.g. \citet{baraffe10,macdonald2012,feiden13,somers14,somers2015}). Moreover, rotation-induced mixing, turbulent mixing, gravity waves and mass loss processes could play a role, though mainly for Main Sequence and more evolved stars (see e.g. \citet{montalban2000,talon10,pace12,charbonnel13} and references therein). The pre-MS is the first stellar phase in which the star evolves as a fully formed object. To reach this evolutionary stage, the future star has to accrete mass, up to its final value, in the previous ``protostellar phase''. The details of this phase, when matter of the protostellar cloud is still falling onto the surface of the protostar, are complex and uncertain. The full understanding of how the protostellar accretion phase affects the predictions for pre-MS characteristics (and thus light element abundances) is still an open problem. The inclusion of the protostellar accretion phase in evolutionary codes produces stars in the early pre-MS phase that differ from what is expected in standard non-accreting models, in which stars essentially contract along the Hayashi track. This eventually results in differences between standard and accreting models that are still visible during the whole pre-MS or the MS phase, with effects on both the structure and the chemical composition of the stellar models (\citealt{baraffe09,baraffe10,tognelli13b,kunitomo18,tognelli20}). Light element burning cross sections are fundamental ingredients in the prediction of the time behaviour of light element stellar surface abundances. In recent years new values for the (p,$\alpha$) reaction rates have been proposed, mainly estimated using the Trojan Horse Method, greatly improving the precision of these quantities. The present review summarises the state-of-the-art of theoretical predictions for protostars and pre-MS stars and their light element surface abundances, in the light of recent improvements in the adopted input physics, updated reaction rates, and the description of the formation and evolution of pre-MS stars. The paper is structured as follows. In Section 2 we qualitatively show the location of observed young pre-MS stars in the HR diagram and compare it to the predictions of standard non-accreting models.
In Section 3 we give a brief overview of the main characteristics and evolutionary stages of a pre-MS model without protostellar accretion. In Section 4 we introduce the protostellar accretion phase, discussing the differences between spherical and disc accretion, along with the main parameters that determine the structure of an accreting protostar. In Section 5 we analyse the burning of light elements (Li, Be and B) in pre-MS stars and the predicted surface abundances during the pre-MS for stellar models of different masses, with or without the protostellar accretion phase. In Section 6 we review the updated cross sections for the burning of light elements and their impact on the predictions of surface abundances in pre-MS stellar models. In Section 7 we summarize the main aspects highlighted in the review. \section{Observational data of pre-Main Sequence stars as a test of theoretical models} \label{PMSobservations} This review is focused on theoretical predictions for protostellar/pre-MS models, which can be validated only through comparison with observational data. Given the difficulty of directly observing stellar formation and the protostellar phase, only an investigation of the characteristics of very young pre-MS stars, close to the end of the protostellar phase, can indirectly give information on the previous accretion period. The availability of observations for very young pre-MS stars is thus fundamental. A great number of data are available for young pre-MS stars (ages $\sim 1$~Myr) with solar or slightly sub-solar metallicity; some of these objects show a still detectable accretion disc, or protoplanetary disc, and very low accretion rates (see e.g. \citet{hartmann96,muzerolle00,calvet05,muzerolle05,muzerolle05b,bae13,ingleby14}). Such residual accretion discs show clear footprints of a previous accretion phase. \begin{figure}[t] \centering \includegraphics[width=12cm]{HR_dati.eps} \caption{HR diagram for young pre-MS stars extracted from the literature, compared with a set of standard evolutionary models and isochrones from the PISA database (\citealt{tognelli11,tognelli18}).} \label{fig:dati} \end{figure} As an example, Fig.~\ref{fig:dati} shows a sample of young pre-MS stars compared to standard isochrones from 0.1 to 10~Myr and evolutionary tracks for masses in the range [0.01, 1.0]~M$_{\sun}${} (\citealt{tognelli11}). Such observed stars are fully formed, in the sense that the measured accretion rates are extremely small; thus, they have already reached their final mass and can be considered stars evolving as constant-mass structures. This figure is intended to qualitatively show the position of observed young stars in an HR diagram; they are located in a region which theoretically corresponds to pre-MS models undergoing gravitational contraction (we will discuss this evolutionary stage in more detail in Section~\ref{PMSevolution}). Standard stellar models generally agree with data for young stars in the colour-magnitude (CM) or in the HR diagram (see e.g. \citet{tognelli13,randich18}), as qualitatively shown in the figure. We remark that stellar models should be able to populate the region of the HR diagram where young stars are observed. Thus, the simple comparison with observations of young associations/clusters (especially in the GAIA era) in the HR/CM diagram can put strong constraints on stellar evolution theoretical predictions (see e.g. \citealt{babusiaux18,randich18,bossini19}).
This is a fundamental point, especially when accretion phases are taken into account (see Section~\ref{protostar}), helping to constrain the free parameters adopted in model computations. Other constraints are provided by pre-MS stars in double-lined eclipsing binary (EB) systems, whose masses and radii can be determined with high precision. In recent years, an increasing number of EB systems have been studied in detail, giving the possibility to check pre-MS model predictions against data (see e.g. \citet{mathieu07,gennaro12}). Further constraints come from the measurements in low-mass stars of the surface abundance of lithium-7, which, being an element whose destruction rate is extremely sensitive to the temperature, can be used to test the temporal evolution of pre-MS stellar structures (see e.g. \citealt{deliyannis00,charbonnel00,pinsonneault00,piau02,randich10,tognelli12}). These issues will be discussed in the present review. \section{General characteristics of standard pre-main sequence evolution} \label{PMSevolution} \begin{figure} \centering \includegraphics[width=12cm]{HR_phases.eps} \caption{HR diagram for low-mass stars, with the main evolutionary stages during the pre-MS evolution indicated: Hayashi track (fully convective star, grey line), partially convective star (blue line), locus of the Zero Age Main Sequence (ZAMS, magenta line). The locus corresponding to deuterium burning is indicated by the red stripe.} \label{fig:std_pms} \end{figure} The pre-MS evolution starts at the end of the accretion phase and ends with the Zero Age Main Sequence, or simply ZAMS, position\footnote{The ZAMS corresponds to the phase when central hydrogen begins to be burned into helium with secondary burning elements at equilibrium and nuclear energy production fully supporting the star.}. The star is totally formed, and the mass can be considered constant at least for the whole pre-MS and Main Sequence (MS) evolution. The first consistent description of the pre-MS evolution was given by \citet{hayashi61} and \citet{hayashi63}; the basic idea is that a pre-MS star starts from a cold, extended and luminous model. Due to the low temperatures, the opacity of the stellar matter is large and thus the radiative temperature gradient in the whole structure is larger than the adiabatic one. This leads to convective motions extending through the entire stellar structure; thus the star is fully mixed and chemically homogeneous. Figure~\ref{fig:std_pms} shows an example of the evolution of pre-MS solar-metallicity low-mass stars in the mass range [0.1, 1.0]~M$_{\sun}${}, computed using the PISA stellar evolutionary code (\citealt{deglinnocenti08,dellomodarme12}), with the adopted input physics/parameters described in \citet{tognelli18,tognelli20}; the same figure also shows a qualitative representation of some of the main evolutionary stages characteristic of such a mass range. Due to their low temperatures, during these first stages of the standard pre-MS evolution, stars cannot produce the nuclear energy required to balance the surface energy losses by radiation, and their evolution essentially consists of a gravitational contraction. The evolution time scale is thus given by the thermal (Kelvin-Helmholtz) time scale, which is the time of energy transport throughout the star. It is common to define the Kelvin-Helmholtz time scale as the ratio between the total gravitational energy of the star and its luminosity $L$: \begin{equation} \tau_{KH} = \frac{\beta}{2}\frac{GM^2}{R L}. \end{equation} The factor $\beta$ takes into account the density profile inside the star; for an (unrealistic) model of a homogeneous spherical star with constant density, $\beta = 3/5$.
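As a quick numerical illustration of this time scale, the short sketch below evaluates $\tau_{KH}$ for solar values with $\beta = 3/5$; the constants are standard solar quantities in cgs units.
\begin{verbatim}
# Kelvin-Helmholtz time scale, tau_KH = (beta/2) G M^2 / (R L),
# evaluated for solar values with beta = 3/5 (constant density).
G = 6.674e-8                                        # cgs
M_sun, R_sun, L_sun = 1.989e33, 6.957e10, 3.828e33
yr = 3.156e7
beta = 3.0 / 5.0
tau = 0.5 * beta * G * M_sun**2 / (R_sun * L_sun)
print(tau / yr / 1e6, "Myr")   # ~9 Myr; GM^2/(RL) alone is ~30 Myr
\end{verbatim}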
\end{equation}
The factor $\beta$ takes into account the density profile inside the star; for an (unrealistic) model of a homogeneous spherical star with constant density, $\beta = 3/5$. The gravitational contraction leads to an internal temperature increase. We recall that for non-degenerate structures\footnote{When talking about \emph{degeneration} we refer to electron quantum degeneracy.} the central temperature, $T_c$, depends on the stellar mass $M$, the radius $R$ and the chemical composition (mean molecular weight $\mu$) in the following way:
\begin{equation}
T_c \propto \frac{\mu M}{R}.
\end{equation}
From the relation above, a contraction naturally leads to a rise in $T_c$. Using this result, the Stefan-Boltzmann law ($L \propto R^2 T_\mathrm{eff}^4$) and the virial theorem, it can be shown that the luminosity of the star decreases following a simple power law, $L\propto t^{-2/3}$. The gravitational contraction is the only energy source until the central temperature reaches about $10^6$~K, when the deuterium burning reaction D(p,$\gamma$)$^3$He (D-burning) becomes efficient. Such a reaction generates the energy required to keep the star stable on nuclear time scales, longer than the thermal one. This is also guaranteed by the steep temperature dependence of the energy generation rate, $\epsilon_{pD}$ ($\epsilon_{pD}\propto T_c^{12}$); such a dependence limits the $T_c$ increase, halting the gravitational contraction (because of the $T_c \propto 1/R$ relation). The ignition of the D-burning, due to the produced energy flux, keeps the star fully convective and deuterium is burnt in the whole star. The D-burning phase is shown in Fig.~\ref{fig:std_pms} as the red stripe, which indicates the part of the Hayashi track where D-burning provides more than 10\% of the total stellar luminosity, for stars with different masses. The nuclear time scale of D-burning depends on the characteristics of the star, mainly on the mass. The luminosity of a star at the beginning of the D-burning phase increases with the stellar mass; thus, as the stellar mass increases, D-burning must become more efficient to balance the higher energy losses at the stellar surface, and the rate of deuterium destruction increases with mass. The typical nuclear D-burning time scale for masses in the range 0.1 - 1~M$_{\sun}${} varies between about $0.1$ and 2~Myr, depending on the mass; as an example, the D-burning phase lasts about 1-2$\times 10^6$~yr for 0.1~M$_{\sun}${} and about $10^5$~yr for 1~M$_{\sun}${} (see e.g. \citet{chabrier97,dantona97,tognelli11}). The duration of the D-burning phase in pre-MS depends not only on the stellar mass but also on the original stellar deuterium mass fraction. Observations suggest that for disc stars a value of $X_D \approx 2\times 10^{-5}$ should be adopted (see e.g. the review by \citet{sembach10}); such a value is smaller than that predicted by the BBN ($X_D \approx 4 \times 10^{-5}$, see e.g. \citet{steigman07,pettini08,pitrou18,mossa2020}), as expected -- e.g. by galactic evolution models -- because deuterium is destroyed in stars. Once deuterium has been completely exhausted in the whole star, a pure gravitational contraction phase starts again. As for the previous evolution, the stellar luminosity is well approximated by the power law $L\propto t^{-2/3}$. This second gravitational contraction increases the temperature and density in the inner region of the star.
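To fix the orders of magnitude involved, the sketch below (a minimal, purely illustrative Python snippet, not taken from the quoted works) evaluates the Kelvin-Helmholtz time scale defined above for present-day solar values, adopting the uniform-density value $\beta = 3/5$:
\begin{verbatim}
# Minimal sketch: Kelvin-Helmholtz (thermal) time scale, cgs units.
# Solar constants; beta = 3/5 is the uniform-density approximation
# discussed in the text.
G = 6.674e-8         # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33     # solar mass [g]
R_SUN = 6.957e10     # solar radius [cm]
L_SUN = 3.828e33     # solar luminosity [erg s^-1]
SEC_PER_YR = 3.156e7

def tau_kh(M, R, L, beta=3.0/5.0):
    """Kelvin-Helmholtz time scale, (beta/2) * G * M^2 / (R * L)."""
    return 0.5 * beta * G * M**2 / (R * L)

# For the present-day Sun this gives about 10^7 yr.
print(f"tau_KH(Sun) ~ {tau_kh(M_SUN, R_SUN, L_SUN)/SEC_PER_YR:.1e} yr")
\end{verbatim}
For a young, expanded pre-MS star (larger $R$ and $L$) the same expression gives much shorter times, consistent with a rapid early contraction during which the internal temperature keeps rising.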
Depending on the total mass, such a temperature increase can lead to a drop in the radiative opacity $\kappa_R$. For stars with $M\gtrsim 0.3$~M$_{\sun}$, the internal opacity drop reduces the radiative gradient, leading to the formation of a central radiatively stable zone. As a consequence, the star leaves the Hayashi track in the HR diagram, shifting towards higher effective temperatures as the radiative core grows in mass, until the star efficiently ignites the central hydrogen burning (reaching the ZAMS). This part of the stellar evolution is traditionally called the \emph{Henyey track} and corresponds to the blue part of the evolutionary track in Fig.~\ref{fig:std_pms}. For $M< 0.3$~M$_{\sun}$, the temperature increase is not enough to produce such an opacity drop and the star continues its contraction along the Hayashi line. In this mass range, if the total mass is larger than approximately $0.08$~M$_{\sun}$, the contraction continues until the central temperature is large enough to ignite central hydrogen burning, which becomes the main energy source of the star (see e.g. \citet{iben13}). On the other hand, if $M < 0.08$~M$_{\sun}$, during the contraction the star becomes so dense that the pressure is dominated by the degenerate electron contribution; in such a configuration the pressure depends only very slightly on the temperature. Then the contraction slows down and the star (called a brown dwarf) evolves along a cooling sequence which, in the HR diagram, follows a precise mass-radius relation. This general picture describes the evolution of a pre-MS star in the standard case; theoretical calculations are started when the star is a fully formed object, chemically homogeneous, at high luminosity (large radius) on the Hayashi line. However, it is well known that stars undergo a formation phase, the \emph{protostellar phase}, during which the mass is accreted from the protostellar cloud and/or from a disk to reach the final stellar mass. The inclusion of such a phase could, at least in principle, modify the standard theoretical picture.
\\
\section{Protostellar accretion phase}
\label{protostar}

The stellar formation process starts with the collapse and fragmentation of a molecular cloud that contracts, forming denser cores which eventually become protostars and then stars. During this process, the protostellar mass progressively increases as the matter in the cloud falls onto the central dense object. The cloud collapse is a complex hydrodynamic problem, in which one has also to take into account cooling processes by molecules and dust. At a given time during the collapse a stable hydrostatic core forms, on which mass continues to fall, so that the treatment of accretion no longer requires hydrodynamical models (e.g. \citet{stahler80,stahler801,stahler81,hartmann97,baraffe12}). Protostellar accretion has been analysed in the literature starting from the pioneering works by \citet{stahler80}, \citet{stahler88}, \citet{palla91}, \citet{palla92}, \citet{palla93}, \citet{hartmann97} and \citet{siess97}, to more recent works by \citet{baraffe09}, \citet{hosokawa09}, \citet{baraffe10}, \citet{tognelli13}, \citet{kunitomo17} and \citet{tognelli20}. Depending on the characteristics of the accretion assumed in the computations (chemical composition, magnetic fields, rotation, geometry...), the collapse of the cloud and the stellar formation can produce different evolutions, whose footprints are still visible in pre-MS stars.
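As an order-of-magnitude reference for the duration of such a collapse, the sketch below evaluates the standard free-fall time $t_\mathrm{ff} = \sqrt{3\pi/(32 G \rho)}$; the adopted mean density is an assumed, typical dense-core value, not one taken from the quoted computations:
\begin{verbatim}
# Sketch: free-fall time of a collapsing cloud core, cgs units.
# t_ff = sqrt(3*pi / (32*G*rho)); rho is an assumed typical value.
import math

G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
SEC_PER_YR = 3.156e7

rho = 1e-19           # assumed mean core density [g cm^-3]
t_ff = math.sqrt(3.0 * math.pi / (32.0 * G * rho))

print(f"t_ff ~ {t_ff/SEC_PER_YR:.1e} yr")   # about 2e5 yr here
\end{verbatim}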
\\
\subsection{Cloud collapse and protostellar accretion}
\label{cloudcollapse}

The main phases of the protostellar evolution are briefly described below (for more details see \citet{larson69,larson72,larson03}).
\\
\begin{itemize}
\item \textit{Isothermal collapse:} during its first collapse (while the central density is lower than about $10^{-13}$~g~cm$^{-3}$), the protostellar cloud does not warm up, because its density is too low to trap the energy produced by the contraction. When the density increases above this limit, the radiation is partially trapped.
\item \textit{Formation of the first Larson core:} the energy trapped inside the denser regions of the cloud prevents a further collapse of these regions. A first, temporary, hydrostatic core forms (with a mass of about 0.01~M$_{\sun}${} and a radius of several AU), onto which the surrounding matter keeps falling. A transition region (shock front) develops close to the core surface, where the matter settles and passes from supersonic to subsonic.
\item \textit{Second collapse:} the hydrostatic core contracts as long as it radiates energy from its surface. So, although its mass is increasing due to mass accretion, its radius shrinks. The contraction of the core leads to a temperature rise, until the temperature of molecular hydrogen dissociation ($T\sim 2000~$K) is reached. Then the contraction energy no longer warms the core but is used to dissociate $H_2$, forcing the core to break the condition of hydrostatic equilibrium. At this stage, the core density and pressure increase.
\item \textit{Formation of the 2nd Larson core:} when $H_2$ is fully dissociated, the further increase of density and pressure due to contraction, while mass is still falling radially onto the core, leads to a second hydrostatic equilibrium for the central core, with a mass of the order of $\sim 0.001$~M$_{\sun}${} ($=1$~M$_\mathrm{J}$, Jupiter mass) and a radius of about $1$~R$_{\sun}$. From this moment on, the central object maintains its hydrostatic configuration while its mass increases.
\end{itemize}

The protostellar evolutionary phases listed above are quite general (for solar metallicity stars) and almost independent of the computation details. \citet{larson69} remarked that at some stage of the cloud evolution a hydrostatic central object (2nd Larson core) forms, which can be considered the first protostellar core. The characteristics of this core (i.e. mass, radius, density and central temperature) appear to be barely sensitive to the adopted cloud initial conditions or to the adopted input physics (see e.g. \citet{masunaga00,machida2008,tomida13,vaytet2013}). Reasonable intervals for the mass, radius and temperature of the stable hydrostatic core are: mass range of 1~-~20~M$_\mathrm{J}$, radius values of 0.5~-~10~R$_{\sun}${} and central temperature of 2-6$\times 10^{4}$~K.

\begin{figure}[t]
\centering
\includegraphics[width=10cm]{larsonbirth.eps}
\caption{Comparison between standard isochrones (thin solid lines) and the loci of the end of the protostellar accretion sequence for two different initial temperatures of the cloud (10~K, thick solid line; 20~K, thick dashed line). Circles mark the position of 0.25, 0.5, 1, 1.5, 2, 3, and 5~M$_{\sun}${} models. Figure adapted from \citet{larson72}.}
\label{fig:larson72}
\end{figure}

In the HR diagram of Fig.~\ref{fig:larson72} the sequence that identifies the end of the protostellar accretion (when the star becomes visible) is compared to standard isochrones.
Interestingly, the end of the protostellar evolution is very close to the position of the 1~Myr standard isochrone. \citet{larson69,larson72}, adopting selected accretion parameters, followed the subsequent evolution until the Hayashi track, finding that low-mass stars ($M<1~$M$_{\sun}$) attain, after the protostellar accretion, characteristics similar to those of standard evolution along the Hayashi track. In contrast, as the mass increases ($M>2~$M$_{\sun}$), models skip the Hayashi line, ending the protostellar phase closer and closer to the MS position, where they join the ``standard'' track. It is worth remarking that theoretical models for the protostellar evolution cannot be easily checked against observations, as these accreting phases occur when the star is still embedded inside the cloud and thus the central core is largely masked by the matter around it.
\\ \\ \\
\subsection{Protostellar accretion in hydrostatic stellar evolution codes}
\label{protostarhydrostatic}

As already discussed, the hydrodynamic evolution of accreting stars is still a challenging task from the computational point of view. However, concerning the central protostar, it is not necessary to employ a hydrodynamic code, as the protostar itself is in hydrostatic equilibrium after the formation of the 2nd Larson core. In this approximation, the central object can be described using a one-dimensional hydrostatic stellar evolutionary code (see e.g. \citet{siess97,stahler80,stahler801}). On the other hand, hydrodynamic models are needed to predict the structure of the envelope surrounding the protostar, which does not satisfy the hydrostatic conditions but is essential to determine the characteristics of the accretion flow. More precisely, the envelope gives information about the accretion rate, the percentage of the energy of the falling matter transferred to the star and the accretion geometry. These quantities are needed as inputs for hydrostatic protostellar models. Due to the remaining uncertainties in hydrodynamic calculations, all these accretion parameters are affected by a non-negligible theoretical indeterminacy, as briefly summarised below.
\begin{itemize}
\item \textit{Accretion rate.} The accretion rate ($\dot{m}$) defines the rate at which the star changes its mass; $\dot{m}${} can vary by orders of magnitude during the accretion phase, passing from $\dot{m}$ $< 10^{-6}$-$10^{-7}~$M$_{\sun}$/yr (quiescent accretion) to rapid and intense episodes of mass accretion (bursts) with $\dot{m}$$\sim 10^{-4}$-$10^{-3}~$M$_{\sun}$/yr, as observed e.g. in FU Ori stars (\citet{hartmann96,hartmann16,audard14}).
\item \textit{Energy transferred by the accreted matter.} The matter falling onto the star has, before reaching the stellar surface, a kinetic energy that can be estimated by approximating the infall velocity with the free-fall one. However, once the matter has settled on the stellar surface, its kinetic energy has dropped to zero, so it must have been converted into another energy form: thermal energy carried inside the star (the accreted matter is hot), or energy partially or totally radiated away (photons) before the matter reaches the stellar surface (at the shock front). The fraction of the kinetic energy transferred to the protostar depends on the characteristics of the accretion flow (i.e. density, accretion geometry, accretion rate, see e.g. \citet{baraffe12}).
\end{itemize}
The difficulty of treating simultaneously the protostar and the envelope evolution requires some simplifications, which mainly concern the geometry of the accreting protostar-envelope system:
\begin{itemize}
\item \textit{Spherical accretion} (see e.g. \citet{stahler80,stahler88,palla91,palla92,palla93,hosokawa09}). The star is supposed to be deeply embedded in the parental cloud and the matter falls onto it almost radially. The whole stellar surface is subject to accretion and the energy radiated by the star can be reabsorbed by the envelope. The whole protostellar accretion occurs as a radial infall, at a fixed value of the accretion rate, from a cloud with enough mass to generate the star.
\item \textit{Disc accretion} (\citet{hartmann97,siess97,baraffe09,baraffe10,kunitomo17,tognelli13}). The matter falls from a boundary layer of a circumstellar disc and reaches the star via accretion streams. Most of the stellar surface is not affected by the accretion and the star is free to radiate its energy, most of which is lost in space. The disc is assumed to be totally decoupled from the central star and is not treated in the stellar evolution codes. The parameters that define the accretion (accretion rate, disk lifetime, accretion energy) are considered as external free parameters, which can be obtained from detailed accretion disc evolution calculations (e.g. \citet{vorobyov10,baraffe12}).
\end{itemize}
The spherical accretion scenario is likely to describe the first stages of the formation of the protostar, when it is still embedded within the cloud that retains an approximately spherical geometry. However, observations suggest that at some stage of protostellar evolution the cloud collapses to a disc -- because of angular momentum conservation -- and that it is during the disc accretion that the star gains most of its final mass (see e.g. \citet{natta00,meyer07,watson07,machida10} and references therein). So, both scenarios are interesting and describe a part of the protostellar accretion. The most important difference between spherical and disc accretion, which deeply affects the protostellar evolution, is the amount of energy retained by the accreted matter. Indeed, while in spherical accretion it is possible to estimate this quantity consistently, in disc accretion it is set by a free parameter ($\alpha_\mathrm{acc}$). The impact of $\alpha_\mathrm{acc}${} on the evolution is discussed in the next sections.
\\ \\ \\
\subsection{Spherical and disc protostellar accretion}
\label{acccretiongeometry}

The spherical accretion scenario applies to a star that is deeply embedded in a gas cloud. In this case, the evolution of the star and of the envelope have to be treated simultaneously. This allows (at least in principle) a consistent evaluation of the accretion rate and of the amount of thermal energy that the accretion flows bring inside the star. Qualitatively, the energy emitted from the stellar surface is not free to escape into space, since it has to interact with the matter around the star. Part of this energy is thus reabsorbed by the matter in the envelope and eventually returned to the star. The effect of this process is that the star has a kind of external energy source that warms up the stellar surface. The injection of thermal energy from the accreted mass forces the star to expand, or at least compensates for the radius decrease caused by the injection of mass.
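To give a feeling for the energy budget involved, the sketch below estimates the free-fall velocity of the infalling matter and the corresponding accretion luminosity, $L_\mathrm{acc} \simeq G M \dot{m}/R$; the protostellar mass, radius and accretion rate are assumed, typical values, not results from the quoted models:
\begin{verbatim}
# Order-of-magnitude sketch: accretion luminosity of a protostar, cgs.
# Protostellar parameters below are assumed typical values.
G = 6.674e-8
M_SUN, R_SUN, L_SUN = 1.989e33, 6.957e10, 3.828e33
SEC_PER_YR = 3.156e7

M = 0.5 * M_SUN                      # assumed protostellar mass
R = 2.0 * R_SUN                      # assumed protostellar radius
mdot = 1e-5 * M_SUN / SEC_PER_YR     # quiescent rate, 1e-5 Msun/yr

v_ff = (2.0 * G * M / R) ** 0.5      # free-fall velocity at the surface
L_acc = G * M * mdot / R             # kinetic energy flux, mdot*v_ff^2/2

print(f"v_ff  ~ {v_ff/1e5:.0f} km/s")      # a few hundred km/s
print(f"L_acc ~ {L_acc/L_SUN:.0f} L_sun")  # tens of solar luminosities
\end{verbatim}
Even at a quiescent accretion rate, the accretion luminosity can exceed the photospheric luminosity of a low-mass protostar by an order of magnitude, which is why the fraction of this energy actually retained by the star is such a critical ingredient of the models.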
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{stahler.eps}
\caption{Comparison between standard tracks (solid lines) and the birthline (dotted line). Dots represent T~Tauri and Herbig Ae/Be stars. Figure adapted from \citet{palla93}.}
\label{fig:stahler}
\end{figure}

The impact of spherical accretion on the formation of pre-MS stars has been largely analysed in the pioneering works by Larson (\citet{larson69,larson72}), Stahler and Palla (\citet{stahler80,stahler88,palla91,palla92,palla93}), and more recently also by Hosokawa and collaborators (\citet{hosokawa09,hosokawa09b,hosokawa10,hosokawa11}). One of the main results of such a spherical accretion scenario is that stars during the accretion phase remain bright and with large radii. Using a mild and constant accretion rate of $10^{-5}~$M$_{\sun}$/yr, it is possible to obtain fully accreted stars in a region of the HR diagram that corresponds to the upper envelope of the locus where young pre-MS stars are observed (see Figure~\ref{fig:stahler}). This sequence was called the ``\emph{birthline}'', that is, the locus of stars with different masses where the accretion ends and the stars become optically visible (\citet{palla93}). More recent sets of birthlines can be found in \citet{hosokawa09}: such accretion models were computed for different values of the accretion rate (from $10^{-6}$ to $10^{-3}~$M$_{\sun}$/yr), adopting a spherical protostellar accretion code (similar to that used by Stahler and Palla). They showed that, increasing $\dot{m}$, the birthline moves towards larger luminosities and radii, thus remaining in full agreement with the observations. Moreover, since spherical accretion models produce low-mass stars (on the birthline) in a region that corresponds to the top of the Hayashi track of standard stellar models (see Figure~\ref{fig:stahler}), the differences between standard and spherical accreting models in pre-MS low-mass stars are negligible. This validates the results of standard evolutionary tracks/isochrones (at least for ages higher than 1 Myr). However, it is commonly accepted that stars do not accrete mass spherically during their entire protostellar phase; on the contrary, they gain most of their mass from an accretion disc. This motivates the detailed study of protostellar accretion in a disc geometry. Differently from the spherical case, in the disc geometry the accretion streams cover only a very limited part of the stellar surface (a few percent, see e.g. \citet{hartmann97}) and almost the whole star is free to radiate its energy into space. Another difference is that all the accretion parameters (i.e. accretion rate, fraction of energy inside the accreted matter, etc.) are treated as external parameters in disc accretion models. In the disc accretion geometry, it is also possible to follow an analytic approach to analyse the main characteristics of the accreting star.
Following the formalism presented in \citet{hartmann97}, it is possible to write a simple equation for the temporal evolution of the radius of an accreting star:
\begin{eqnarray}
\frac{\dot{R}}{R} = \frac{7}{3} \frac{R}{GM^2}\bigg[\beta_D - L_\mathrm{ph} + \bigg(\alpha_\mathrm{acc} - \frac{1}{7}\bigg)\frac{GM^2}{R}\frac{\dot{m}}{M}\bigg]
\label{eq:rdot}
\end{eqnarray}
where $M$ and $R$ are the stellar mass and radius, $\beta_D$ expresses the luminosity due to the deuterium burning (D-burning), $L_\mathrm{ph}$ is the luminosity of the stellar surface, $\dot{m}${} is the mass accretion rate and $\alpha_\mathrm{acc}$ represents the fraction of the accretion energy deposited into the star (thermal energy of the accreted matter). Equation~(\ref{eq:rdot}) contains three terms: the first and second are the usual terms that define the evolution of a star with surface radiative losses ($L_\mathrm{ph}$) and D-burning energy production ($\beta_\mathrm{D}$), while the last term represents the accretion effect, which is proportional to $\dot{m}$. This term accounts for the fraction of the thermal energy of the accreted matter retained by the star, $\alpha_\mathrm{acc}$. Such a parameter has to be specified as an external free parameter, ranging from 0 (no energy acquired by the star) to about 1 (or 1/2 in the case of a thin disc, see e.g. \citet{siess97}). From the same equation, it is also evident that $\alpha_\mathrm{acc} = 1/7 \equiv \alpha_\mathrm{acc,cr}$ defines a critical value; for $\alpha_\mathrm{acc} < \alpha_\mathrm{acc,cr}$ the third term is negative and contributes to the contraction of the star, while for $\alpha_\mathrm{acc} > \alpha_\mathrm{acc,cr}$ the same term produces a radius expansion. It is common to refer to the case $\alpha_\mathrm{acc}\sim 0$ (or $\alpha_\mathrm{acc} \ll \alpha_\mathrm{acc,cr}$) as \emph{cold disc accretion} and to $\alpha_\mathrm{acc} > \alpha_\mathrm{acc,cr}$ as \emph{hot disc accretion}. Looking at eq.~(\ref{eq:rdot}), it is clear that a radius expansion requires a positive value of the right-hand side of the equation, which can be obtained either via an efficient deuterium burning (large $\beta_D$) or via an efficient transport of accretion energy into the protostar ($\alpha_\mathrm{acc} > \alpha_\mathrm{acc,cr}$). These two cases are discussed separately in the next two sections.
\\ \\ \\
\subsection{D-burning during protostellar accretion}
\label{Dburning}

To check whether D-burning alone can produce a protostar with a large radius, in agreement with observations, we assume $\alpha_\mathrm{acc} = 0$. From eq.~(\ref{eq:rdot}), to produce a radius increase, D-burning has to supply the star with enough energy to counterbalance the radiative losses at the stellar surface plus the gravitational energy decrease caused by the mass ingestion. If this condition is not satisfied, the protostar contracts and the resulting model at the end of the protostellar phase has a radius much smaller than that observed in young disk stars and expected in spherical accretion cases.

\begin{figure}[t]
\centering
\includegraphics[width=11cm]{Kunitomo_Deuterium1.eps}
\caption{Effect of different original deuterium abundances (see labels), $X_\mathrm{D}$, on the protostellar evolution of a 1~M$_{\sun}${} model. The protostellar accretion starts with a seed mass of 10~M$_\mathrm{J}$, with a constant accretion rate of $10^{-5}$~M$_{\sun}$/yr. Filled circles indicate the end of the protostellar accretion phase and triangles the beginning of D-burning.
Figure adapted from \citet{kunitomo17}.}
\label{fig:xd1}
\end{figure}

The dependence of the radius on the original deuterium abundance $X_\mathrm{D}$ has been investigated in \citet{tognelli13} and more recently in \citet{kunitomo17}. In \citet{kunitomo17} the authors assumed for the 2nd Larson core mass the value $M_\mathrm{seed}=0.01$~M$_{\sun}${} ($=10$~M$_\mathrm{J}$). Figure~\ref{fig:xd1} shows a comparison between birthlines obtained assuming different values of $X_\mathrm{D}$, for a cold accretion scenario with $M_\mathrm{seed}$=10~M$_\mathrm{J}$. When no deuterium is taken into account in the stellar matter, the star inevitably contracts: in this model, the star ignites hydrogen burning close to the end of the protostellar accretion, thus totally skipping the pre-MS evolution. The situation changes when the deuterium mass fraction in the star is increased. To partially reproduce the standard pre-MS evolution, a deuterium content of $X_D \approx 4\times 10^{-5}$ (i.e. 40 ppm) is required. If a more reliable deuterium content is adopted, $X_D \approx 2\times 10^{-5}$, the models with protostellar accretion converge to standard models only on the Henyey track; in this case, the evolution along the Hayashi track is missed, contrary to what is observed in young clusters. We remark that the uncertainty on the galactic deuterium mass fraction is not larger than 10~ppm (see e.g. Fig.~2 and Table~1 in \citet{sembach10}), thus an initial deuterium content of $X_D \approx 40$~ppm is an overestimate for disk stars. This fact seems to indicate that deuterium alone is not capable of keeping the star bright enough to reconcile protostellar cold accretion models with the results obtained in a standard non-accreting scenario.
\\ \\ \\
\subsection{Accretion energy}
\label{accretionenergy}

There is another natural way to obtain a radius expansion in protostars, which is to assume that the ingested matter retains part of its internal energy; this means assuming a value of $\alpha_\mathrm{acc} > \alpha_\mathrm{acc,cr}$. In \citet{hartmann97} it was shown that non-cold accretion models ($\alpha_\mathrm{acc}$$>\alpha_\mathrm{acc,cr}$) can attain a radius expansion large enough to reproduce observed stars; in this case the disc accretion mimics the spherical-accretion birthline obtained by \citet{stahler88}. More recently, \citet{kunitomo17} analysed in more detail the impact of $\alpha_\mathrm{acc}${} on the formation of a 1~M$_{\sun}${} model, finding that the inclusion of a certain fraction of the total accretion energy (i.e. $\alpha_\mathrm{acc}$$ \in [0,\,1]$) in the star is capable of maintaining the structure at large radii. Figure~\ref{fig:accene} shows the birthline computed in \citet{tognelli13} -- by means of the PISA stellar evolutionary code -- for solar metallicity stars using three values of $\alpha_\mathrm{acc}$=0 (cold case), 0.5 and 1 (hot case), for a seed mass value of 5~M$_\mathrm{J}$.

\begin{figure}[t]
\centering
\includegraphics[width=12cm]{Tognelli_PhD_Hot_figure.eps}
\caption{Effect of $\alpha_\mathrm{acc}${} on the protostellar evolution for three values of $\alpha_\mathrm{acc}$: 0 (cold accretion), 0.5 and 1 (hot accretion). Dotted lines are standard pre-MS tracks.
Figure adapted from \citet{tognelli13}.}
\label{fig:accene}
\end{figure}

From Fig.~\ref{fig:accene} it is evident that, adopting a value of $\alpha_\mathrm{acc}$$\ge 0.5$, models on the birthline are bright and intersect the standard evolutionary tracks along the standard Hayashi track, for the whole selected mass range [0.1, 2.0]~M$_{\sun}$. Recently, \citet{tognelli20} obtained similar results for metal poor models: they showed that even in the low metallicity case the inclusion of accretion energy produces expanded objects that intersect the Hayashi track of standard non-accreting models at the end of the protostellar accretion stage.
\\ \\ \\
\subsection{Connecting the standard pre-MS and the protostellar accretion phase}
\label{protostarandPMS}

From the previous discussion it emerges that, depending on the characteristics of the protostellar accretion, the protostar can end its first evolution with a structure similar to, or in some cases profoundly different from, that obtained in a normal gravitational contraction along the Hayashi track. The largest discrepancy with standard pre-MS evolution occurs in the case of cold accretion starting from a seed of the order of a few Jupiter masses, as in that case the classical Hayashi track is almost completely skipped (see e.g. \citet{baraffe09,tognelli20}). Figure~\ref{fig:bf09_tog20} shows the evolution in the HR diagram of cold accretion models starting from different $M_\mathrm{seed}${} and ending with different final masses, as discussed in detail in \citet{baraffe09}. It is difficult to reproduce the Hayashi track of pre-MS stars starting from $M_\mathrm{seed}${} of a few Jupiter masses (i.e. cases A, B, D). Moreover, the position of the 1~Myr model (filled square) in accretion models is relatively far from the standard 1~Myr isochrone; in most of the cases, the position at 1 Myr of pre-MS models including cold protostellar accretion is very close to the standard non-accreting 10 Myr isochrone, witnessing the strong impact of cold accretion on pre-MS evolution.

\begin{figure}[t]
\centering
\includegraphics[width=9cm]{baraffe09_fig.eps}
\caption{Evolution in the HR diagram of protostellar models with different values of $M_\mathrm{seed}$, compared with the standard 1 Myr and 10 Myr isochrones (black long dashed lines). The different letters indicate models with different seeds and final masses ($M_{fin}$), in particular: (A) $M_\mathrm{seed}$= 1~M$_\mathrm{J}$, $M_{fin}$ = 0.05~M$_{\sun}$, (B) $M_\mathrm{seed}$= 1~M$_\mathrm{J}$, $M_{fin}$=0.1~M$_{\sun}$, (D) $M_\mathrm{seed}$= 1~M$_\mathrm{J}$, $M_{fin}$=0.5~M$_{\sun}$, (H) $M_\mathrm{seed}$ = 10~M$_\mathrm{J}$, $M_{fin}$ = 0.21~M$_{\sun}$, (I) $M_\mathrm{seed}$ = 50~M$_\mathrm{J}$, $M_{fin}$ = 0.55~M$_{\sun}$, (J) $M_{fin}$=1.05~M$_{\sun}$, (K) $M_\mathrm{seed}$ = 100~M$_\mathrm{J}$, $M_{fin}$ = 1.1~M$_{\sun}$, (L) $M_\mathrm{seed}$=100~M$_\mathrm{J}$, $M_{fin}$ =1.85~M$_{\sun}$. Filled squares represent the position of the 1~Myr model. Figure adapted from \citet{baraffe09}.}
\label{fig:bf09_tog20}
\end{figure}

As discussed, it is likely that stars first accrete in a spherical hot scenario and then, at a given stage, switch to a disk-like accretion. In this case the transition from hot to cold accretion occurs at some value of the protostellar mass (possibly dependent on the amount of mass available in the cloud/disk).
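Before examining this mixed history in detail, a minimal numerical sketch of the accretion term of eq.~(\ref{eq:rdot}), $(\alpha_\mathrm{acc} - 1/7)\,(GM^2/R)\,(\dot{m}/M)$, helps to visualise the switch between regimes; the stellar parameters are assumed for illustration only:
\begin{verbatim}
# Sketch: sign/size of the accretion term in eq. (rdot), cgs units.
# (alpha_acc - 1/7) * (G*M^2/R) * (mdot/M); parameters are assumed.
G = 6.674e-8
M_SUN, R_SUN, L_SUN = 1.989e33, 6.957e10, 3.828e33
SEC_PER_YR = 3.156e7

M = 0.5 * M_SUN
R = 2.0 * R_SUN
mdot = 1e-5 * M_SUN / SEC_PER_YR
ALPHA_CR = 1.0 / 7.0    # critical value separating the two regimes

for alpha in (0.0, ALPHA_CR, 0.5, 1.0):
    term = (alpha - ALPHA_CR) * (G * M**2 / R) * (mdot / M)
    regime = ("expansion" if term > 0
              else "neutral" if term == 0 else "contraction")
    print(f"alpha_acc={alpha:.2f}: {term/L_SUN:+6.1f} L_sun ({regime})")
\end{verbatim}
In a hot-then-cold accretion history this term starts positive and turns negative when the accretion switches regime.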
This mixed scenario was first investigated by \citet{hosokawa11}, who showed that the protostar remains bright enough to end the protostellar phase close to the Hayashi track. The top panel of Fig.~\ref{fig:hot_cold} shows the models by \citet{hosokawa11}. The purely hot accretion scenario (purple solid line), which corresponds to a hot birthline obtained assuming a spherical accretion, attains large luminosities and radii, well above the standard 1~Myr isochrone. The figure also shows the results of models where the accretion switches from hot to cold at a given value of the stellar mass, namely 0.03~M$_{\sun}${} (magenta dashed line), 0.1~M$_{\sun}${} (magenta solid line) and 0.3~M$_{\sun}${} (magenta dotted line). It is interesting to notice that in all cases the birthline is still quite luminous, being very close to the 1~Myr isochrone. Similar results have been obtained for metal poor models by \citet{tognelli20} (bottom panel of Fig.~\ref{fig:hot_cold}).

\begin{figure}[t]
\centering
\includegraphics[width=9cm]{Hosokawa2011_img.eps}
\includegraphics[width=11cm]{Tognelli2020_img_hot_cold.eps}
\caption{Evolution in the HR diagram of purely and partially hot models computed with an accretion rate $\dot{m}=10^{-5}$~M$_{\sun}${}/yr. Top panel: comparison between purely hot (solid purple line), purely cold (green and red solid lines) and hot+cold birthlines (magenta lines). The two purely cold cases differ in the seed radius, 3.7~R$_{\sun}${} (green line) and 1.5~R$_{\sun}${} (red line, mC5-C). The accretion switches from hot to cold at a given value of the mass, which is 0.03~M$_{\sun}${} (magenta dashed line), 0.1~M$_{\sun}${} (magenta solid line), and 0.3~M$_{\sun}${} (magenta dotted line). Black lines are isochrones of 1 and 10 Myr (dashed lines) and the ZAMS (dot-dashed line) for standard non-accreting models. Squares, triangles and circles represent observations of some young pre-MS stars. Figure adapted from \citet{hosokawa11}. Bottom panel: models at low metallicity (Z=0.0001) for a total final mass of 0.7 and 0.8~M$_{\sun}${}. The accretion switches from hot to cold at a mass value of 0.1, 0.2, 0.3, 0.4, 0.5, and 0.6~M$_{\sun}${} as indicated in the labels. The thick grey line represents the hot birthline ($\alpha_\mathrm{acc} = 1$). Figure adapted from \citet{tognelli20}.}
\label{fig:hot_cold}
\end{figure}

\citet{baraffe09} and \citet{baraffe12} also investigated the possibility of producing bright objects using episodic accretion. The basic idea behind these models is that during intense mass accretion bursts the protostar can accrete matter in the hot-accretion configuration ($\alpha_\mathrm{acc} > \alpha_\mathrm{acc,cr}$, see the Appendix in \citet{baraffe12}), switching back to cold accretion at the end of each burst. The authors showed that in this case it is still possible to produce models that end their protostellar accretion close to the standard position of the Hayashi track, thus reproducing the data (see also \citet{tognelli20} for metal poor protostars). What emerges from the previous analysis is that, if one assumes masses and radii typical of the 2nd Larson core, cold models cannot produce the observed bright stars in young clusters: the presence of hot accretion phases is required. Thus, the results are quite reassuring: in most hot disk or spherical geometries, the protostellar accretion leads to pre-MS models with characteristics similar to those predicted in standard pre-MS evolution.
More importantly, the position in the HR diagram of such models is in agreement with observational data. On the contrary, for accretion parameters leading to a final model different from the standard one, as in the cold accretion scenario, the position in the HR diagram is in disagreement with disk star observations, raising doubts about the validity of such models.
\\ \\ \\
\section{Light elements surface abundances and nuclear burning during the pre-MS phase}
\label{elementabundances}

Lithium, together with beryllium and boron, belongs to the class of light elements burnt in pre-MS because of their relatively low nuclear destruction temperatures (between 2 and 5 million degrees). The threshold values for the burning temperature depend mainly on the considered element, on the stellar mass (density and evolutionary stage) and slightly on the chemical composition of the star (in particular on the helium and metal abundances). For pre-MS solar metallicity stars in the mass interval [0.08, 1.0]~M$_{\sun}$, the ranges of burning temperatures for the different elements are approximately: 2.4-3.5$\times 10^6$~K ($T(^{6,7}$Li)), 3.5-4.0$\times 10^6$~K ($T(^9$Be)) and 4.2-5.0$\times 10^6$~K ($T(^{10,11}$B)). In the literature the temperatures given for the burning are sometimes slightly different from the values reported here; the reason is that authors usually do not take into account that stars with different masses ignite these elements at slightly different temperatures, because the nuclear burning rates also depend (even if to a lesser extent) on the density in the region where the burning occurs. Moreover, such temperatures can be different for MS or pre-MS stars with the same mass because of the different time scales on which light elements are destroyed. In MS stars the evolutionary time scale is much longer than in pre-MS (for the same mass and chemical composition); consequently, a smaller burning rate, due to a smaller threshold temperature at the bottom of the convective envelope, is compensated by the longer time during which the element is destroyed. As a result, the burning of light elements in MS can efficiently occur even at threshold temperatures smaller than those required in pre-MS.

\begin{figure}[t]
\centering
\includegraphics[width=12cm]{trk_LiBeB.eps}
\caption{Evolutionary tracks for solar metallicity in the HR diagram, with the regions indicated where surface light element abundances decrease due to burning (i.e. where the temperature at the bottom of the convective envelope is higher than the burning temperature of Li, Be and B). The stellar models have been computed using the PISA evolutionary code with the same input parameters described in \citet{lamia15}.}
\label{fig:ele_burning}
\end{figure}

Due to the differences in their burning temperatures in pre-MS, Li, Be and B are gradually destroyed at different depths inside the stellar interior and at different ages, depending on the stellar mass. As an example, Fig.~\ref{fig:ele_burning} shows the portion along the evolutionary track where surface Li, Be and B are burnt at the bottom of the convective envelope in a set of solar chemical composition stars in the mass range [0.08, 1.0]~M$_{\sun}$. It is interesting to notice that, while Li is burnt (at the bottom of the convective envelope) in the whole selected mass range, surface $^9$Be burning occurs only for masses between about 0.08 and 0.5~M$_{\sun}$, while B is burnt in an even smaller mass range (about 0.1 - 0.3~M$_{\sun}$).
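As a compact summary of these thresholds, the sketch below collects the indicative lower edges of the burning temperature ranges quoted above and reports which surface light elements can be depleted for a given temperature at the bottom of the convective envelope (a deliberately simplified illustration: real models follow the full temperature profile and its time evolution, as discussed below):
\begin{verbatim}
# Sketch: which light elements can burn at the base of the convective
# envelope for a given T_bce. Thresholds are the indicative lower
# edges of the pre-MS burning ranges quoted in the text [K].
T_BURN = {"7Li": 2.4e6, "9Be": 3.5e6, "10,11B": 4.2e6}

def burning_elements(T_bce):
    """Return the elements whose burning threshold T_bce exceeds."""
    return [el for el, T in T_BURN.items() if T_bce >= T]

for T_bce in (2.0e6, 3.0e6, 3.8e6, 4.5e6):
    print(f"T_bce = {T_bce:.1e} K -> burns: {burning_elements(T_bce)}")
\end{verbatim}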
The abundances of light elements at the stellar surface are strongly influenced by the nuclear burning as well as by the inward extension of the convective envelope and by the temperature at its bottom. Consequently, the comparison between theory and observations for Li, Be and B surface abundances is useful to constrain theoretical models and in particular the convective envelope depth. From the observational point of view, most of the data for light elements concern the abundance of $^7$Li, whose line (at 670.779~nm) can be safely resolved even in cool stars, as witnessed by the huge amount of data for stars in clusters or isolated stars at different metallicities (see e.g. \citet{sestito05,delgadomena14,delgadomena15,gomez18} and references therein). $^6$Li burns at a lower temperature than $^7$Li, consequently it is almost completely destroyed when $^7$Li burning becomes efficient. Thus a potential detection of observable amounts of $^6$Li in stellar atmospheres would constrain the destruction of the less fragile $^7$Li (\citet{copi1997}). Since the depth of the convective zone increases with metallicity, $^6$Li is almost completely depleted in high metallicity disk stars, as in the Sun (see e.g. \citet{asplund09}), and it is below the detection level also for most thick disk and halo stars (see e.g. \citet{Spite10}). The possible abundance of $^6$Li below the limit of detection also for halo stars could be explained by the fact that the amount of $^6$Li formed by the standard Big Bang and by cosmic rays is supposed to be very low. Moreover, a very small $^6$Li abundance in these stars would be very difficult to detect, in particular because the lines (doublets) of $^6$Li and $^7$Li overlap (see also the discussion in Sec.~\ref{Liatmouncertainties}). Beryllium and boron measurements are more problematic than $^7$Li observations. The $^9$Be abundance is measured using near-UV lines, only in stars with $T_\mathrm{eff}$ $\gtrsim 5000$~K, which corresponds in pre-MS to a mass range where Be is expected to be preserved and not destroyed (see e.g. \citet{garcia95,santos04,randich07,smiljanic10,delgado12,lamia15}). The abundance of the boron isotopes is even more difficult to measure than that of Be, because the boron lines fall mainly in the UV part of the spectrum, where the Earth's atmosphere is not transparent. In addition, for disk metallicity stars, B lines suffer strong blending problems (see, e.g., \citet{cunha2010}). Similarly to Be, B abundances are available in a mass range where B is expected not to be burnt in standard models. Despite the observational difficulties, Be and B surface abundance data are available for some stars even at low metallicities (see e.g. \citet{boesgaard05}, \citet{lodders09}, \citet{primas09}, \citet{tan09}, \citet{boesgaard11}). In the observed stars, the ratio $^{11}$B/$^{10}$B seems to be of the order of four, in agreement with solar values and meteorite results, even if it is very difficult to spectroscopically discriminate among the boron isotopes (see, e.g. \citet{proffitt99}, \citet{prantzos12}). Be and B surface abundances have also been measured in the Sun where, as expected, they are not burned (see e.g. \citet{asplund05}, \citet{asplund09}, \citet{lodders09}, \citet{lodders10}). The temperatures for light element burning can be reached in stellar interiors during the pre-MS evolution of stars with masses larger than about 0.05-0.1~M$_{\sun}${} (depending on the requested burning threshold temperature).
We recall that at the beginning of the pre-MS evolution stars are, independently of their mass, fully convective. Thus, if nuclear burning occurs at this evolutionary stage, it affects the chemical abundances in the whole star, from the centre to the surface. However, as the star contracts and warms up, the opacity decreases at the stellar centre and stars with $M\gtrsim 0.3$~M$_{\sun}${} develop a radiative core. From this moment on, the chemical evolution of the surface is (during the pre-MS) decoupled from that of the centre if the bottom of the convective envelope does not reach a region deep and hot enough to process --via nuclear burning-- the surface matter. Thus, for partially convective pre-MS stars, a condition for a partial surface depletion of a specific element is that the bottom of the convective envelope reaches a temperature high enough to make nuclear burning efficient and then recedes toward the external, cooler parts of the star.

\begin{figure}[t]
\centering
\includegraphics[width=12cm]{Temp_LiBeB.eps}
\caption{Temporal evolution of the temperature at the bottom of the convective envelope for masses in the range [0.08,1.0]~M$_{\sun}$. The threshold temperatures required to ignite Li, Be and B burning are indicated as coloured horizontal lines. The ZAMS position is marked by a diamond. The irregular behaviour of the 0.3~M$_{\sun}${} model at $\log t(yr) \sim 7.5-8.3$ is caused by the formation of a transient convective core before the ZAMS (figure adapted from \citet{lamia15}).}
\label{fig:Tce}
\end{figure}

Figure~\ref{fig:Tce} shows the temporal evolution of the temperature at the bottom of the convective envelope, $T_\mathrm{bce}${}, for stars with different masses between 0.08 and 1.0~M$_{\sun}${} at solar metallicity, with the approximate values of the Li, Be and B burning temperatures indicated. In fully convective stars $T_\mathrm{bce}${} coincides with the central temperature, T$_c$. When stars are fully convective $T_\mathrm{bce}${} progressively increases until the star reaches the ZAMS, while in stars that develop a radiative core $T_\mathrm{bce}${} stops increasing when the radiative core forms and then slowly decreases as the radiative core grows in mass. This has a direct impact on the interval of time during which the surface light element depletion occurs. Considering e.g. the $^7$Li abundance in fully convective stars (i.e. M$\le 0.3$~M$_{\sun}$), $T_\mathrm{bce}${} overcomes $T(^7$Li$)${} at young ages (i.e. about 50~Myr for 0.1~M$_{\sun}${} and 5~Myr for 0.3~M$_{\sun}$); then surface Li burning continues during the whole pre-MS and MS phase. Since $T_\mathrm{bce}${} continuously increases, the burning efficiency increases too. On the other hand, in partially convective stars $T_\mathrm{bce}${} reaches a maximum and then decreases as the star evolves towards the ZAMS. For $M\gtrsim 0.5$~M$_{\sun}$, $T_\mathrm{bce}${} decreases below $T(^7$Li$)${} at some point during the pre-MS, thus halting the lithium burning at the bottom of the convective envelope. From this moment on, the surface lithium abundance remains constant during the pre-MS phase. Figure~\ref{fig:Tce} also shows that, as the stellar mass increases, the time interval during which surface lithium is destroyed becomes shorter and the maximum value of $T_\mathrm{bce}${} decreases too; this is due to the fact that the convective envelope becomes thinner as the mass increases.
This indicates that, as the mass increases, surface lithium is destroyed progressively less efficiently. The situation is similar for the other light elements; clearly one has to take into account the different burning temperatures, so that the mass range in which Be and B are destroyed at the base of the convective envelope is different from that in which lithium is burned. As an example, for solar composition models, Be can be burnt at the bottom of the convective envelope in the mass interval $0.08\lesssim M$/M$_{\sun}$$\lesssim 0.5$. On the other hand, B is destroyed only in the mass range $0.1\lesssim M$/M$_{\sun}$$\lesssim 0.3$. Figure~\ref{fig:li_age} gives an example of the predicted time behaviour of the $^7$Li surface abundance for stars in the mass range [0.08, 1.0]~M$_{\sun}$; it is important to notice that light element surface abundances depend not only on the capability of $T_\mathrm{bce}${} to overcome the threshold temperature for the considered element, but also on the duration of the burning and on the difference between the threshold temperature and $T_\mathrm{bce}$. In particular, this last quantity is very important because the burning rate of light elements is proportional to $T^a$ with $a\approx 20$ for lithium, $a\approx 23$ for beryllium and $a\approx 25$ for boron.

\begin{figure}[t]
\centering
\includegraphics[width=12cm]{Li_age.eps}
\caption{Temporal evolution of the surface lithium abundance (normalised to the initial one) during the pre-MS for stars in the mass range [0.08, 1.0]~M$_{\sun}$ and solar chemical composition. Stellar models have been computed using the same input parameters described in \citet{lamia15}.}
\label{fig:li_age}
\end{figure}

Referring to Fig.~\ref{fig:li_age}, in fully convective stars, i.e. $M\lesssim 0.3$~M$_{\sun}$, at a fixed age, the surface lithium depletion progressively increases with the stellar mass. For partially convective models this behaviour breaks down and, at a fixed age, the lithium depletion decreases as the mass increases. This is clearly visible in the figure comparing e.g. the predicted lithium abundance at about $\log t(yr)= 7.5$ for the 0.7, 0.8, 0.9, and 1.0~M$_{\sun}${} models. The amount of residual surface lithium increases with the mass, as a consequence of the decrease of $T_\mathrm{bce}${} (in time) when the radiative core forms and grows. Figures~\ref{fig:Tce} and \ref{fig:li_age} refer to a standard evolution, where the star is fully formed at the top of the Hayashi track. The situation can be different if protostellar accretion is taken into account, in particular in those cases where the star at the end of the protostellar phase is compact and faint, which corresponds essentially to the case of cold accretion models. This could affect light element burning in two different ways: 1) in principle, for some values of the accretion parameters, light elements (most likely lithium) could be burnt already during the protostellar phase; 2) accretion can change the pre-MS stellar characteristics with respect to those predicted by the standard scenario, so that the light element burning efficiency is changed too. We will discuss the effect of the protostellar accretion on the surface chemical composition in Section~\ref{Liprotostar}.
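To appreciate how steep these dependences are, the short sketch below (a simple numerical illustration of the $T^a$ scaling quoted above) compares the burning-rate increase produced by a modest 10\% rise in temperature for the three elements:
\begin{verbatim}
# Sketch: temperature sensitivity of the light element burning rates,
# rate ~ T^a, with the exponents quoted in the text.
EXPONENT = {"Li": 20, "Be": 23, "B": 25}
T_RATIO = 1.10   # assume a 10% temperature increase

for el, a in EXPONENT.items():
    print(f"{el}: rate grows by a factor ~{T_RATIO**a:.0f}")
# Li: ~7, Be: ~9, B: ~11
\end{verbatim}
A 10\% difference in $T_\mathrm{bce}${} thus translates into roughly an order of magnitude in the burning rate, which is why small changes in the depth of the convective envelope produce large differences in the predicted surface abundances.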
\\ \\ \\
\subsection{Surface lithium abundance in open clusters}
\label{Liinopen}

Many questions are still open about the large discrepancies between the predicted and observed surface lithium abundances in young clusters, where standard models tend to underestimate the surface abundance at a given age (see e.g. \citet{dantona00,dantona03,jeffries06,tognelli12} and references therein). Moreover, the presence of a large scatter in the observed Li abundance among stars with similar $T_\mathrm{eff}${} in young clusters poses questions about the possible mechanisms producing different amounts of lithium depletion in stars with the same mass, age and chemical composition (\citet{king00,jeffries00,clarke04,xiong06,king10}).

\begin{figure}[t]
\centering
\includegraphics[width=10cm]{fig_li_pattern.eps}
\caption{Theoretical surface lithium abundance predicted for solar metallicity stars at 10~Myr, as obtained in standard evolutionary models using the PISA evolutionary code.}
\label{fig:ex_li_pattern}
\end{figure}

It is worth noticing that, qualitatively, standard models (without accretion) are capable of producing a pattern of lithium vs mass (or $T_\mathrm{eff}$) similar to that observed in young clusters. This pattern can be divided into three regions, and, referring to Fig.~\ref{fig:ex_li_pattern}, it can be summarised as follows:
\begin{itemize}
\item Starting from a certain value of the effective temperature (which depends on the cluster age), the surface lithium content, at a given cluster age, increases with $T_\mathrm{eff}${} (or the stellar mass), until it reaches a plateau corresponding to stars that do not deplete Li (hot stars). Regions (3)-(4) in Fig.~\ref{fig:ex_li_pattern} correspond to partially convective models of increasing mass. As previously discussed, the more massive the star, the thinner the convective envelope and, in turn, the less efficient the surface Li depletion. The plateau corresponds to stars with a convective envelope so thin that $T_\mathrm{bce}$$<$ $T(^7$Li$)$.
\item For lower masses (and thus lower $T_\mathrm{eff}${}), e.g. in regions (1)-(2), stars are fully convective and lithium is burned in the whole star. At a fixed age the lithium burning efficiency increases with the stellar mass and the surface lithium abundance changes rapidly with the stellar mass.
\item In region (1), reducing the mass (or $T_\mathrm{eff}$), one approaches the minimum mass that reaches the Li burning temperature in fully convective stars. Below this minimum mass the surface lithium abundance is constant and equal to the original value.
\end{itemize}
\quad\\
\subsection{Lithium abundance evolution during protostellar accretion}
\label{Liprotostar}

As discussed in the previous section, the inclusion of the protostellar accretion phase could (depending on the adopted accretion parameters) drastically alter the evolution of a pre-MS star. In this section we briefly review the main effects of the protostellar accretion phase on the surface lithium abundance during the protostellar and pre-MS evolution as a function of the different possible accretion parameters. This problem was first analysed by \citet{baraffe10}, who showed that the inclusion of protostellar accretion in solar metallicity stars with different input parameters can lead to a variety of cases in which the resulting lithium abundance (in pre-MS or in MS) is different from what is expected in standard pre-MS evolution (see also \citet{baraffe12,tognelli13,kunitomo18}).
We recall that accretion models depend on many parameters, but the main quantities that strongly affect the pre-MS evolution are the seed mass and the accretion energy deposited into the star. The general picture that emerges is that in cold accretion models lithium is efficiently destroyed during the protostellar accretion or at the very beginning of the pre-MS phase; thus these stars should show a very low surface lithium content. A detailed analysis of the effect of the protostellar accretion on the surface lithium abundance for different sub-solar metallicities ($Z=0.005$, $Z=0.001$, and $Z=0.0001$) was discussed in \citet{tognelli20}. We also performed some tests to verify that what is obtained for sub-solar metallicity is still valid at solar metallicity. The results by \citet{tognelli20}, similarly to what was already obtained by \citet{baraffe10}, show that two scenarios can be found:
\begin{itemize}
\item \emph{Pure cold accretion case}. The accretion leads, at the end of the protostellar phase, to stellar structures different from those of standard non-accreting models, also affecting the lithium burning efficiency. If the seed mass is of the order of a few Jupiter masses, the models turn out to be so compact and hot that they start to efficiently deplete lithium before the end of the accretion phase. The level of depletion is mainly determined by the seed mass and is only slightly affected by the other accretion parameters (accretion rates, initial radius). After the protostellar phase, for masses larger than about 0.1-0.2~M$_{\sun}$, lithium is completely destroyed in an extremely hot and fully convective pre-MS structure. This prediction is in complete disagreement with the observations available for young clusters, where $M\approx 0.8$-1~M$_{\sun}${} stars show no relevant lithium depletion (see Section~\ref{Liuncertainties} for more details). Moreover, such accreting models are in disagreement with the observed position of very young disk pre-MS stars in the HR diagram. The disagreement between data and accretion models is partially mitigated if a larger seed mass is adopted (of the order of 10~M$_\mathrm{J}$). In this case it is possible to reduce the level of lithium depletion in very low mass stars (i.e. $M\lesssim 0.3$~M$_{\sun}$), but not for stars close to 1~M$_{\sun}$, where lithium is totally depleted in pre-MS.
\item \emph{Hot accretion case}. In Section~\ref{protostarandPMS} we showed that, if stars accrete part of their mass during a hot accretion phase (during which the protostar is maintained at a large radius by the accretion energy), the star at the end of the accretion phase is more similar to a standard evolutionary model. In this case, protostars are relatively cold and do not deplete an appreciable amount of Li. Then, when the star enters the pre-MS, the residual lithium is essentially equal to the original one, as predicted by models without accretion, and from this moment on the lithium evolution proceeds as in standard stellar evolutionary models.
\end{itemize}
These two scenarios embrace many other possible solutions, obtained by modifying/tuning the accretion parameters and the accretion history to produce, at least in principle, intermediate scenarios. However, a fine tuning of the accretion parameters that depends also on the stellar mass is unlikely and could produce artificial results (\citet{tognelli20}). The two extreme scenarios highlight an important point: the expected Li abundance is strictly connected to the protostellar evolution.
Stars that, due to the inclusion of protostellar accretion, skip the Hayashi track (i.e. pure cold accretion) undergo an efficient lithium burning during the protostellar phase, in disagreement with standard predictions. This kind of model is excluded, at least for disk metallicities, by observational data. The possible effects of accretion on the stellar characteristics and on the Li temporal evolution could also be linked to the question of the luminosity spread observed in star forming regions. The problem consists in the fact that stars with the same $T_\mathrm{eff}${} and the same chemical composition show different luminosities (see e.g. \citet{hillenbrand09,jeffries09b,dario10,dario10a}). A possible dependence of such a luminosity spread on the protostellar accretion was analysed by \citet{baraffe09}; the adoption of a different accretion history during the protostellar phase can strongly affect the luminosity and $T_\mathrm{eff}${} of a star at the end of the protostellar phase, as already discussed in previous sections. If this is the case, faint stars, which experienced cold accretion, should show a clearly lower lithium content than bright ones. In other words, such a luminosity spread should be directly reflected in a spread of the surface lithium content. This point deserves further investigation, to clearly confirm or exclude the presence of a correlation between lithium content and luminosity in star forming regions.
\\ \\ \\
\subsection{Lithium in old metal poor stars}
\label{LilowZstars}

An interesting aspect to be discussed concerning lithium evolution is the cosmological lithium problem. Halo stars show a lithium plateau for $T_\mathrm{eff}$$ > 5900$ K and [Fe/H]~$<-1.5$, the so-called Spite plateau (\citet{spite82a,spite82}), with a constant logarithmic lithium abundance\footnote{$A($Li$)$ indicates the observational notation for abundances, where $A($Li$) = \log (N_\mathrm{Li}/N_\mathrm{H}) +12$, with $N$ being the number of particles of a given species.} $A($Li$) = 2.1$-$2.4$ (\citet{cp05,asplund06,melendez10,sbordone10} and references therein). From the theoretical point of view, stars with such temperatures and metallicities are expected to preserve their initial lithium content; moreover, galactic enrichment due to cosmic rays and spallation processes should be totally negligible at such low metallicities. Thus the Spite plateau is expected to represent the primordial lithium content produced during the big bang nucleosynthesis (BBN). However, BBN predicts a primordial lithium content of $A($Li$) = 2.67$-$2.75$ (see e.g. \citet{cyburt16,pitrou18}). This estimate depends on the density of baryons, which is related to the fluctuations of the cosmic microwave background measured with the WMAP and Planck satellites. The BBN predictions for the primordial lithium abundance are thus 0.3-0.6~dex larger than the Spite plateau value. This discrepancy constitutes the so-called ``cosmological lithium problem''. Several attempts to introduce new physics (exotic particles) or to revise the reaction rates during the BBN have been made, but without any firm conclusion (see e.g. \citet{fields11,pizzone14,gpp16,coc17,damone18,lamia19}). Similarly, on the stellar evolution side, the problem has been analysed in search of a possible mechanism able to deplete the same amount of lithium in the stars with different masses and metallicities which populate the Spite plateau. Diffusion has been investigated as a possible solution, as it slowly brings surface lithium below the convective region (\citet{richard05}).
This process acts on timescales typical of the MS evolution, but its efficiency depends on the stellar mass, so that in the mass range corresponding to the Spite plateau the effect of diffusion increases with $T_\mathrm{eff}$: no plateau would be possible without tuning the diffusion efficiency. Turbulent mixing could also produce an effect similar to that of pure diffusion, on similar time scales, but in this case too an ad hoc tuning is required (see e.g. \citet{richard05,spite12} and references therein). Mass loss coupled to diffusion and turbulent mixing can also be tuned to produce a constant lithium depletion along the Spite plateau (\citet{swenson95,vauclair95}), but, again, a fine tuning of the parameters is needed. Another possibility is that lithium depletion occurs during the pre-MS. In this case, \citet{fu15} suggested that a certain level of undershooting\footnote{The term ``undershooting'' indicates a penetration of the external convection toward the stellar interior deeper than the one predicted in classical models, i.e. by the Schwarzschild criterion.} at the bottom of the convective envelope of pre-MS stars could increase the depletion of surface lithium. In addition, a residual matter accretion, regulated by the stellar luminosity, could provide gas with pristine chemical composition (and thus lithium) to the star, yielding in pre-MS the depletion level required to produce the Spite plateau. However, in such models, MS diffusion must be inhibited to avoid a $T_\mathrm{eff}${} (or mass) dependent depletion on MS time scales. Recently \citet{tognelli20} analysed the possibility of producing a constant lithium depletion in pre-MS by taking into account in the models the protostellar evolution with different accretion parameters. As discussed in Sections~\ref{protostarandPMS} and \ref{Liprotostar}, depending on the scenario adopted during the protostellar evolution, stars at the beginning of the pre-MS can be profoundly different from the ones evolved starting from the Hayashi track. The reason is that the protostellar phase can deeply affect the thermal structure of a star. As a result, it is possible to induce a lithium depletion in pre-MS or even during the protostellar phase, but this requires a fine tuning of the parameters that govern the stellar mass accretion (see e.g. \citet{fu15,tognelli20}). Moreover, as already discussed, the models that show a significant Li depletion follow a pre-MS evolution in the HR diagram that is different from that observed for high metallicity pre-MS stars. The lack of Galactic very young and metal poor stellar systems, in which one could observe pre-MS stars with Spite plateau metallicities, prevents one from restricting the range of valid accretion parameters and reaching firm conclusions. To conclude, the proposed mechanisms could in principle alleviate the cosmological lithium problem, but the weakness of all these suggested solutions is that a fine tuning of the parameters is still required to produce a constant lithium depletion reproducing the Spite plateau. \\ \\ \\ \subsection{Uncertainties on predicted surface lithium abundance} \label{Liuncertainties} The predicted depletion of the surface lithium abundance (and, in general, of the light element surface abundances) is affected by the uncertainties on the input physics adopted in stellar models and on the assumed chemical composition, which influence the extension of the convective envelope and the temperature structure of the star.
The uncertainty on the nuclear burning cross sections also plays a role in the light element abundance predictions. In the literature there are several attempts to estimate the impact of the uncertainties on the parameters/input physics adopted in stellar models on the predictions of lithium abundance in low and very-low mass stars (see e.g. \citet{piau02,burke04,tognelli12,tognelli15b}). The quantities that mainly affect lithium, as analysed in the quoted papers, are the following (see e.g. Chapter~3 in \citet{tognelli13}): \begin{itemize} \item \emph{Radiative opacity and equation of state}. The extension of the convective envelope is determined by the Schwarzschild criterion, which simply states that a region is convective if the radiative temperature gradient ($\nabla_{rad} \equiv (d\log T/ d\log P)_{rad}$) is larger than the adiabatic one. The radiative gradient is proportional to the Rosseland mean radiative opacity $\kappa_R$, thus a change in $\kappa_R$ directly affects the position of the convective unstable boundary. Generally an uncertainty on $\kappa_R$ of about $\pm 5\%$ is assumed in the computations (\citet{badnell05,blancard2012,mondet2015}). Similarly, an uncertainty on the adiabatic gradient, $\nabla_{ad}$, and thus in the equation of state, modifies the position of the bottom of the convective envelope. An increase in $\kappa_R$ or a decrease of $\nabla_{ad}$ leads to an extension of the convective envelope that can reach deeper and hotter layers, increasing the efficiency of surface lithium burning (see e.g. \citet{tognelli13}). The variation of the surface lithium abundance, due to the change in the equation of state or radiative opacity, strongly depends on the selected mass range and age. However, in those models that efficiently deplete lithium (e.g. for 0.7 and 0.8~M$_{\sun}$) a variation in lithium abundance of approximately 0.1-0.2~dex (due to the equation of state uncertainty) and 0.4-0.5~dex (due to the opacity error) can be obtained. In the worst cases (i.e. $M\sim $0.6~M$_{\sun}$) a variation of $5\%$ of the radiative opacity can lead to a difference of $\sim 0.8$~dex in the predicted lithium content. The effect on surface lithium of the equation of state or radiative opacity reduces as the mass increases. \item \emph{Outer boundary conditions}. The outer boundary conditions are the pressure $P_{atm}$ and temperature $T_{atm}$ at the bottom of the atmosphere. These quantities deeply affect the temperature profile in the convective envelope, thus modifying also its depth. The uncertainty on $(P_{atm},\,T_{atm})$ is not provided by stellar atmospheric calculations, but one can test the effect on the stellar characteristics of the adoption of different atmospheric models available in the literature (see e.g. \citet{tognelli11}). As said above, the effect on lithium depends on the mass/age of the models; a typical variation of 0.3-0.5~dex is expected, which reduces as the mass increases. \item \emph{Mixing length parameter}. The convection efficiency in super-adiabatic regimes in 1D stellar evolution codes commonly relies on the mixing-length theory (MLT, \citet{bohm58}). In this formalism, the scale on which the heat is efficiently transported by convection is defined as $\ell = \alpha_\mathrm{ML} \times H_p$, where $H_p$ is the local pressure scale height and $\alpha_\mathrm{ML}$ is the mixing length parameter, a free parameter to be calibrated.
The extension of the convective envelope, and thus the temperature at its bottom and the surface lithium abundance, are strongly affected by the adopted $\alpha_\mathrm{ML}$. The adoption of different values of this quantity within plausible ranges can produce a variation of the surface lithium abundance as large as one order of magnitude in those stars where the external envelope is largely super-adiabatic and where lithium is efficiently destroyed at the bottom of the convective envelope (i.e. for masses in the range [0.5, 1.0]~M$_{\sun}$; see e.g. \citet{piau02} and \citet{tognelli12}). \item \emph{Nuclear cross section}. The error on the cross section for the $^7$Li(p, $\alpha)^4$He reaction directly affects the rate at which lithium is destroyed and thus its temporal evolution. Since the energy released by such reactions is inconsequential for the stellar structure, the only effect is on the surface lithium content at a fixed age. In Section~\ref{reactionandelements} we will discuss this point in more detail. \item \emph{Chemical composition}. The initial abundance of helium ($Y$) and metals ($Z$) in the star is not known, but it can be estimated from the observed [Fe/H], assuming for metal rich stars the same relative abundance of metals as in the Sun, while for metal poor galactic stars a suitable alpha-enhancement must be introduced\footnote{With \emph{alpha enhancement} one indicates an enhancement of the relative abundance of $\alpha$ elements (C, O, Ne, Mg, Si, S, Ar and Ca) with respect to the solar composition. It is generally expressed as: [$\alpha$/Fe]=$\log ($N$_{\alpha}$/N$_{Fe})_\mathrm{star}$ - $\log ($N$_{\alpha}$/N$_{Fe})_{\odot}$.}. The conversion of [Fe/H] into $Y$ and $Z$ depends on the assumed values of: 1) the primordial helium mass fraction produced in the BBN ($Y_p$), 2) the metal-to-helium enrichment ratio ($\Delta Y / \Delta Z$), 3) the metal-to-hydrogen ratio in the Sun ($(Z/X)_{\odot}$), and 4) the [$\alpha$/Fe] (alpha-enhancement) for metal poor stars (\citet{gennaro12,tognelli12,tognelli15b}). The observational error on [Fe/H] has thus to be combined with the uncertainties on such quantities to estimate the final global uncertainty on the initial helium and metal mass fraction to be used in the computation of stellar models (see e.g. \citet{gennaro12,tognelli12,tognelli15b}); for solar chemical composition, the uncertainties on $Y$ and $Z$ are estimated to be of the order of 4-5\% and about 20\%, respectively (\citet{tognelli15b}). The variations of $Y$ and $Z$ have a strong impact on the lithium burning because they change both the extension of the convective envelope and the temperature inside a star (see e.g. \citet{piau02}, \citet{tognelli13} and \citet{tognelli15b}); the uncertainty on the chemical composition can produce a variation of the surface lithium abundance up to one order of magnitude, especially in stars with $M\lesssim 0.7$~M$_{\sun}$. The effect reduces at larger masses. \end{itemize} \citet{tognelli13} quantitatively evaluated the impact on the predicted surface lithium abundance of the uncertainties in the input physics and in the initial chemical composition discussed above, calculating upper and lower limits of surface $^7$Li in stellar models. Figure~\ref{fig:unc_li} shows the estimated upper/lower limits (plotted as error bars) of surface lithium abundance and effective temperature, due to the contribution of the input physics uncertainties (top panel) and of the chemical composition indeterminacy (bottom panel).
Stars with different masses at different ages typical of young clusters are shown (for more details on the procedure adopted to obtain these limits see \citet{tognelli12} and \citet{tognelli13}). The errors on the present input physics and the typical uncertainties on the adopted chemical composition have a drastic impact on the predicted surface lithium abundance, which can vary by more than one order of magnitude, especially for stars with $T_\mathrm{eff}$$\lesssim 4700$~K. \begin{figure}[t] \centering \includegraphics[width=8.5cm]{Tognelli_PhD_Tesi_2013_img1.eps} \includegraphics[width=8.5cm]{Tognelli_PhD_Tesi_2013_img2.eps} \caption{Uncertainties on surface lithium abundance and effective temperature due to the errors on the adopted input physics (top panel) and on the chemical composition (bottom panel). Figure adapted from \citet{tognelli13}.} \label{fig:unc_li} \end{figure} In standard models, the only possibility to deplete surface lithium in pre-MS is via convective mixing. If the bottom of the convective envelope reaches a region hot enough to burn lithium, then the surface lithium decreases in time. The level of depletion depends on a key parameter in convective stars, which is the efficiency of the convective energy transport. A more efficient convective transport produces hotter stars that consequently experience a more efficient lithium burning. The opposite occurs if the convection efficiency reduces. A precise physical treatment of external convection would require three-dimensional hydrodynamic models, which have been improved in recent years, but only for limited regions of the star, corresponding mainly to the atmospheric layers (see e.g. \citet{nordlund09}, \citet{collet11}, \citet{freytag12}, \citet{magic13}, \citet{trampedach13}, \citet{trampedach14}, \citet{trampedach15}, \citet{pratt16}, and references therein). These codes are state-of-the-art (magneto) hydrodynamic codes that solve the time-dependent hydrodynamic equations for mass, momentum, and energy conservation, coupled with the 3D radiative transfer equation, in order to correctly account for the interaction between the radiation field and the plasma. However, hydrodynamic calculations still cannot cover the wide range of physical quantities needed to model the Galactic stellar populations. Moreover, their results cannot be easily adopted in stellar evolutionary codes, although attempts to implement approximations directly based on 3D simulations in 1D stellar models exist in the literature (e.g. \citet{lydon92}, \citet{ludwig99}, \citet{arnett15,arnett18}). The commonly adopted procedure to treat the convection efficiency in super-adiabatic regimes in 1D stellar evolution codes relies on the mixing-length theory (MLT, \citet{bohm58}), where the convection efficiency depends on the free parameter $\alpha_\mathrm{ML}$. A variation of such a parameter can produce a large effect on the surface lithium abundance at a given age in stars with a super-adiabatic envelope. This effect is particularly important in stars with masses larger than about 0.5-0.6~M$_{\sun}$. It has been shown in the literature that models with a reduced convection efficiency ($\alpha_\mathrm{ML}$ $< $ $\alpha_\mathrm{ML}$$_{\odot}$, the solar calibrated value) attain a better agreement with data for both young clusters and binary stars (\citet{piau02,dantona03,tognelli12}).
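The interplay between the Schwarzschild criterion and the MLT free parameter can be made concrete with a minimal sketch. The snippet below is our own illustrative example (not code from any of the quoted works); all numerical values are placeholders in cgs units:

\begin{verbatim}
import math

# Minimal sketch (illustrative placeholder values, cgs units) of the
# quantities discussed above: the Schwarzschild criterion and the MLT
# mixing length.

A_RAD = 7.566e-15    # radiation constant [erg cm^-3 K^-4]
C_LIGHT = 2.998e10   # speed of light [cm s^-1]
G_NEWTON = 6.674e-8  # gravitational constant [cgs]

def nabla_rad(kappa_r, lum, pressure, temp, mass):
    """Radiative gradient: 3 kappa L P / (16 pi a c G m T^4)."""
    return (3.0 * kappa_r * lum * pressure /
            (16.0 * math.pi * A_RAD * C_LIGHT * G_NEWTON
             * mass * temp**4))

def is_convective(kappa_r, lum, pressure, temp, mass, nabla_ad=0.4):
    """Schwarzschild criterion: unstable where nabla_rad > nabla_ad
    (nabla_ad = 0.4 for a fully ionised ideal monatomic gas)."""
    return nabla_rad(kappa_r, lum, pressure, temp, mass) > nabla_ad

def mixing_length(alpha_ml, pressure, rho, gravity):
    """MLT scale l = alpha_ML * H_p, with H_p = P / (rho g)."""
    return alpha_ml * pressure / (rho * gravity)

# Example with rough values near the base of the solar convective
# envelope (placeholders): H_p = P/(rho g) ~ 5e9 cm, so l ~ 1e10 cm
# for alpha_ML ~ 2.
print(f"{mixing_length(2.0, 5e13, 0.2, 5e4):.2e} cm")
\end{verbatim}

Since $\nabla_{rad}$ is linear in $\kappa_R$, the quoted $\pm 5\%$ opacity uncertainty translates directly into a $\pm 5\%$ shift of the radiative gradient, and thus into a displacement of the unstable boundary; likewise, reducing $\alpha_\mathrm{ML}$ shortens $\ell$ and lowers the convective efficiency, which is the effect exploited in the pre-MS calibrations discussed below.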
Figure~\ref{fig:tog12_li} shows the results obtained by \citet{tognelli12}, where the observed surface lithium abundance in five young open clusters (IC2602, $\alpha$~Per, Blanco1, Pleiades and NGC2516) has been compared to theoretical predictions obtained adopting two different values for the mixing length parameter during the pre-MS phase: one calibrated on MS stars to reproduce their colours and the other corresponding to a much less efficient convective energy transport. The best value to be used in pre-MS has been estimated in order to reproduce the $A($Li$)$ vs $T_\mathrm{eff}${} pattern. The use of young clusters (ages below 100-150~Myr) is mandatory for such an analysis, to avoid possible effects due to MS non-standard mixing processes, which act on timescales of the order of $\sim$~Gyr. \begin{figure} \centering \includegraphics[width=0.92\linewidth]{Tognelli12_img.eps} \caption{Comparison between data and theoretical model predictions for surface lithium in young clusters. Models with the same mixing length parameter in pre-MS and MS phases, calibrated with MS stars, are shown as dashed lines, while models with calibrated convection efficiency in MS and artificially reduced mixing length parameter ($\alpha_\mathrm{ML}$=1) in pre-MS are shown as continuous lines. In the case of NGC2516, the models were computed using two different values of [Fe/H], [Fe/H]~$=-0.10$ (bottom left panel) and [Fe/H]~$=+0.07$ (bottom right panel), as reported in the literature. Figure adapted from \citet{tognelli12}.} \label{fig:tog12_li} \end{figure} Referring to Fig.~\ref{fig:tog12_li}, it is evident that in all cases the adoption of a constant value of $\alpha_\mathrm{ML}${} (calibrated on MS stars) produces a lithium depletion much larger than observed. On the other hand, it is possible to tune $\alpha_\mathrm{ML}${} during the pre-MS to reproduce the observed lithium pattern, and the most important result is that the derived pre-MS $\alpha_\mathrm{ML}${} is independent of the cluster age and of the stellar mass. The authors derived a value of $\alpha_\mathrm{ML}$$_\mathrm{,PMS} = 1$. We mention that the reduction of the efficiency of superadiabatic convection in the stellar envelope has also been put forward as a plausible mechanism to explain the discrepancies between the observed and predicted radii in pre-MS binary systems. A value of $\alpha_\mathrm{ML}$$\sim 1$ has been suggested to explain the radii of young binary systems (see e.g. \citet{gennaro12}). As said, $\alpha_\mathrm{ML}${} is a free parameter which reflects our presently imprecise knowledge of the external convection efficiency, thus one should find a physical reason for its variation during the evolution of a given star. A reduced convection efficiency could be motivated by the attempt in 1D evolutionary codes to mimic the main effects of some non-standard mechanisms active in young stars, such as the presence of a non-negligible magnetic field (especially in the convective region, see e.g. \citet{ventura98}). In this regard, \citet{feiden13} found that the inclusion of a magnetic field in partially convective stars produces a radius expansion caused by the inhibition of the convection efficiency in the convective envelope (see also \citet{feiden2012a}). Figure~\ref{fig:feid13} shows the results of their work on evolutionary models computed with and without the inclusion of a magnetic field. The models are compared to the characteristics of the YY~Gem binary system (both stars have masses of about 0.6~M$_{\sun}$, \citet{torres2002}).
Such a system exhibits evidence of a relatively strong magnetic field (surface spots, X-ray and gyrosynchrotron radio emission, flaring events). Standard models underestimate the radius of the components by about 8\%, a difference that can be erased if a magnetic field of 4-5~kG is included in the computations. The stronger the magnetic field, the larger the radius of the star at the same age (left panel). In the radius vs $T_\mathrm{eff}${} plane, the inclusion of a magnetic field produces a cooler and brighter star (see right panel of Fig.~\ref{fig:feid13}). \citet{feiden13} also showed that it is possible to reproduce the main effects of a magnetic field in 1D non-magnetic stellar models by using a properly tuned value of the mixing length parameter. To do this, an $\alpha_{ML}$ value lower than the solar calibrated one (i.e. close to unity) should be adopted. The presence of a magnetic field makes the star cooler and modifies the temperature stratification inside the star: this has a direct impact on the surface lithium burning efficiency (\citet{feiden16}). Figure~\ref{fig:feid16} shows the comparison between the expected surface lithium abundance as a function of $T_\mathrm{eff}${} in standard non-magnetic models of 5 and 10~Myr and in a model of 10~Myr in which a magnetic field is included (the magnetic field strength B$_\mathrm{eq}$ shown in the figure varies with the mass, but in the mass range 0.1-1~M$_{\sun}${} it is of the order of 2-2.6~kG). The inclusion of the magnetic field has a strong impact on the resulting lithium abundance, drastically reducing the level of depletion and thus improving the agreement between data and model predictions for pre-MS stars. \begin{figure}[t] \centering \includegraphics[width=0.96\linewidth]{Feiden2013_img.eps} \caption{Effect of magnetic fields (for the labelled surface magnetic field strength $<$B$f>$) on the radius evolution of a partially convective pre-MS star ($M=0.599$~M$_{\sun}${}). The radius is compared to that of the observed YY Gem binary system (horizontal and vertical stripes). Left panel: radius vs age. Right panel: stellar radius vs $T_\mathrm{eff}${} for magnetic and non-magnetic models. Figure adapted from \citet{feiden13}.} \label{fig:feid13} \end{figure} \begin{figure} \centering \includegraphics[width=10cm]{Feiden2016_img.eps} \caption{Effect on the surface lithium abundance of the presence of a magnetic field (B$_\mathrm{eq}$, see text). The surface lithium abundance behaviour as a function of the effective temperature is shown for standard (non-magnetic) models of 5 and 10 Myr compared with a model of 10 Myr in which a magnetic field is included. Figure adapted from \citet{feiden16}.} \label{fig:feid16} \end{figure} Another aspect related to the presence of magnetic fields in the star is the possibility of including in stellar models a surface spot coverage fraction (see e.g. \citet{jackson14} and \citet{somers2015}). The effect of the spots is to reduce the outgoing flux at the stellar surface, producing a radius inflation and a decrease of the stellar effective temperature. Such an effect goes in the same direction as an artificially decreased convection efficiency and, as expected, leads to a cooler envelope and to a less efficient lithium burning. Figure~\ref{fig:som15} shows an application of stellar models with surface spots to the surface lithium abundance observed in the Pleiades cluster (see \citet{somers2015}).
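The sense of the spot effect can be captured by a toy calculation, sketched below under the crude assumptions of a fixed radius and a fraction $f_\mathrm{spot}$ of the surface radiating at a reduced temperature (a zeroth-order illustration of our own, not the actual formalism of the quoted works, where the whole structure readjusts):

\begin{verbatim}
def spotted_teff(t_phot, f_spot, x_spot=0.8):
    """Toy estimate of the effective temperature of a spotted star
    at fixed radius:
        sigma Teff^4 = (1 - f) sigma Tphot^4 + f sigma Tspot^4,
    with Tspot = x_spot * Tphot (x_spot is an assumed spot
    contrast, not a fitted value)."""
    return t_phot * (1.0 - f_spot * (1.0 - x_spot**4)) ** 0.25

# Example: 30% coverage with spots at 80% of the photospheric
# temperature lowers the Teff of a 4000 K star to ~3810 K.
print(f"{spotted_teff(4000.0, 0.3):.0f} K")
\end{verbatim}

Even such a crude estimate shows that a modest coverage fraction lowers $T_\mathrm{eff}$ by $\sim$200~K; in full models this cools the envelope and reduces the pre-MS lithium depletion, which is the trend visible in Fig.~\ref{fig:som15}.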
Standard models predict a level of pre-MS lithium depletion larger than that observed. The agreement can be restored assuming that a certain fraction of the stellar surface is covered by spots; increasing the coverage fraction, the models are cooler and thus the surface lithium depletion is reduced. \begin{figure}[t] \centering \includegraphics[width=10cm]{Somers2015_img.eps} \caption{Comparison between Pleiades data and the predicted surface lithium abundance obtained in models that account for surface spots, for several values of the spot coverage fraction $f_\mathrm{spot}$. Dotted lines connect models with the same mass and different values of $f_\mathrm{spot}$. The size of the circles is proportional to the rotation velocity of the star. Figure adapted from \citet{somers2015}.} \label{fig:som15} \end{figure} It is important to notice that the presence of magnetic fields of different strength, or of a different spot coverage fraction, in stars of similar mass can introduce a star-to-star scatter in the surface lithium abundance. This could partially answer another important open question about young clusters, i.e. what causes the spread in lithium abundance measured in stars with similar effective temperature. The extent of such a spread is much larger than the quoted uncertainties, so it represents a real spread (see e.g. \citet{xiong06}). The inclusion of a different surface spot coverage or of a different magnetic field strength could produce stars with similar effective temperatures (but different total masses), thus leading to an apparent dispersion in the lithium abundance. Additional mechanisms that can alter the level of lithium burning in stars have been analysed in the literature. An extra mixing induced by rotation, gravity waves, diffusion, or mass loss has been put forward to reproduce the surface lithium abundance pattern typical of older stars (ages $\gtrsim 500$~Myr). However, such mechanisms are not relevant for the evolution of young pre-MS stars and thus we will not discuss them in this context (see e.g. \citet{charbonnel13}, \citet{eggenberger12}, \citet{landin06}). \\ \subsection{Uncertainties on atmospheric models for surface lithium abundance determination} \label{Liatmouncertainties} The determination of surface element abundances involves the interpretation of the observed absorption lines through atmospheric models as accurate as possible. However, modelling stellar atmospheres is a difficult task, and the uncertainties on the measurements of surface element abundances clearly affect the comparison between theory and observations. Here we limit ourselves to briefly discussing the main difficulties in modelling realistic stellar atmospheres; the interested reader can find more details in other reviews (see e.g. \citet{paula2019}). The photosphere of low mass stars is covered with a complex and stochastic pattern -- associated with convective heat transport -- of downflowing cooler plasma and bright areas where hot plasma rises, the so called granulation (\citet{nordlund09}). As already discussed, convection is a difficult process to understand, because it is non-local and three-dimensional, and it involves non-linear interactions over many disparate length scales. In recent years it has become possible to use numerical three-dimensional (3D) radiative hydrodynamical (RHD) codes to study convection in stellar atmospheres, such as the {\sc Stagger Code} (\citet{collet11,nordlund09}) and CO5BOLD (\citet{freytag12}).
Nowadays, the use of large grids of simulations covering a substantial range of effective temperatures and surface gravities for stars in different regions of the HR diagram (\citet{ludwig09,magic13,trampedach13}) has proven that the convection-related surface structures have different sizes, depths, and temporal variations, depending on the stellar type (\citet{tremblay13,beeck13,magic14}). Moreover, the related activity (in addition to other phenomena such as magnetic spots, rotation, dust, etc.) has an impact on stellar parameter determination (\citet{bigot11,creevey12,chiavassa12}), radial velocity (\citet{allende13,chiavassa18}), chemical abundance estimates (\citet{asplund09,caffau11}), and photometric colours (\citet{chiavassa18,bonifacio18}). The determination of chemical abundance ratios from spectra of cool stars relies on the understanding of the limitations and uncertainties of spectroscopic analyses. In this context, radiation transfer in the atmospheres of late-type stars generally takes place under non-local thermodynamic equilibrium (NLTE) conditions, rather than the idealised LTE (\citet{asplund05b}). A full 3D NLTE treatment would require computing the NLTE radiative transfer inside radiative hydrodynamical simulations, coupled to the equations of gas motion. In these simulations the computational cost is dominated by the radiative transfer calculations, which can be greatly reduced by adopting an approximate solution based on the opacity binning or multi-group method (\citet{nordlund82}). However, introducing NLTE calculations at this stage would largely increase the computation time, making it very complicated to obtain a relaxed simulation. This is why 3D NLTE radiative transfer calculations are only affordable in a post-processing manner, i.e., each 3D RHD simulation is computed in LTE and then the so called $\langle$3D$\rangle$ models are computed by averaging multiple snapshots of the 3D RHD simulations over regions of equal optical depth and over the time series (see e.g. \citet{asplund04,caffau09,lind17,nordlander17,magic13,amarsi18}). This approach offers a middle ground between full 3D NLTE and 1D NLTE, by accounting for NLTE in model atoms of arbitrary size, and through the use of time-independent 1D structures derived from the full 3D hydrodynamic simulations (\citet{bergemann17}). Using this method, \citet{wang2021} derived a new 3D NLTE solar abundance of A(Li) = 0.96 $\pm$ 0.05, which is 0.09 dex lower than the commonly used value, and provided grids of synthetic spectra and abundance corrections that are publicly available. It has also become possible to analyse large samples of disk and halo stars with this 3D NLTE approach (\citet{amarsi19,bergemann19}). Unfortunately, 3D atmospheric calculations are not yet available for pre-Main Sequence atmospheres. The measurement of the surface lithium abundance constitutes an important example of the efforts undertaken in this field. In Sec.~\ref{elementabundances} we mentioned that the stellar surface abundance of $^6$Li is expected to be negligible; moreover, its identification is very difficult. The presence of $^6$Li in metal-poor halo stars can only be derived from the asymmetry in the red wing of the $^7$Li doublet line at 670.8~nm. Several authors attempted to detect $^6$Li using 1D hydrostatic models and assuming LTE for a number of metal-poor stars with [Fe/H] lower than $-2$ dex (\citet{cayrel99,asplund06}).
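Why the measurement is so delicate can be visualised with a toy line-blend model: the $^6$Li components are shifted by only $\sim$0.16~\AA{} redward of the corresponding $^7$Li ones, so a few percent of $^6$Li merely adds a subtle extra depression in the red wing. The sketch below is our own illustration, with purely Gaussian components; the line depth, width and isotopic fraction are assumed values, not fitted ones:

\begin{verbatim}
import numpy as np

def gaussian_depth(wl, centre, depth, fwhm=0.25):
    """Depth profile of a single Gaussian absorption component."""
    sigma = fwhm / 2.3548
    return depth * np.exp(-0.5 * ((wl - centre) / sigma) ** 2)

def li_blend(wl, li6_fraction, centre7=6707.8, iso_shift=0.16,
             depth=0.3):
    """Normalised flux of a 7Li line plus a weak 6Li blend shifted
    redward by ~0.16 A (toy model, one Gaussian per isotope)."""
    d7 = gaussian_depth(wl, centre7, depth * (1.0 - li6_fraction))
    d6 = gaussian_depth(wl, centre7 + iso_shift,
                        depth * li6_fraction)
    return 1.0 - d7 - d6

wl = np.linspace(6707.0, 6709.0, 400)
diff = li_blend(wl, 0.05) - li_blend(wl, 0.0)
print(f"max change for 6Li/7Li = 5%: "
      f"{np.abs(diff).max():.3f} of the continuum")
\end{verbatim}

A 5\% isotopic fraction thus modifies the profile at only the $\sim$1\% level of the continuum, comparable to the asymmetries that surface convection itself imprints on the line -- precisely the degeneracy discussed below.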
\citet{cayrel07} pointed out that the intrinsic line asymmetry -- due to the stellar surface convection -- in the $^7$Li doublet would be almost indistinguishable from the asymmetry produced by a weak $^6$Li blend on a (presumed) symmetric $^7$Li profile. The total line strength of the Li resonance line determines the $^7$Li abundance, while the shape of the line profile determines the isotopic ratio, through the shift between the $^6$Li and $^7$Li isotopic components. Thus it is critical to resolve the strongly differential NLTE effects on the granules and inter-granular regions (Fig.~\ref{fig:atmospheres}), because they have a preferential influence over the blue- and red-shifted part of the line profile, respectively (\citet{lind13}). To investigate this aspect, \citet{steffen12} and \citet{lind13} used a 3D NLTE treatment with 3D RHD simulation snapshots carried out with CO5BOLD and the {\sc Stagger Code}, respectively. They re-analysed the Li feature in some metal-poor stars and were not able to confirm the previously claimed detections of $^6$Li. However, they pointed out that a full understanding of 3D NLTE line formation is necessary to make correct measurements of $^6$Li, even though from their studies they could give only upper limits for the isotopic ratio $^6$Li/$^7$Li. In particular, the 3D NLTE approach is important to characterise the calibration lines and to decrease the observational error. Finally, a very recent publication by \citet{gonzalez19} confirms the non-detection of $^6$Li for a very metal poor binary star ([Fe/H]$\sim-3.7$ dex), finding an upper limit for the isotopic ratio of $^6$Li/$^7$Li$\,< 10\%$. \begin{figure} \centering \includegraphics[width=0.48\linewidth]{fig_li_atmospheres.eps} \caption{Example of synthetic stellar surface profiles of the $^7$Li doublet line at 6708.0 \AA, computed from a 3D RHD simulation of metal-poor stars with an LTE (dashed) and NLTE (solid) approach. The $^7$Li abundance is the same. The LTE line profile interpolated to meet the same equivalent width as the NLTE line is also displayed (dotted). An additional absorption appears in the red wing relative to the blue for NLTE models, due to strongly differential effects in granules and inter-granular regions (\citet{lind13}).} \label{fig:atmospheres} \end{figure} \\ \section{Effects of light element burning cross sections on pre-Main Sequence characteristics and on light element surface abundances} \label{reactionandelements} The predicted temporal evolution of light elements is affected by the stellar evolutionary stage and by the model structure, which depends on the input physics adopted in the computations. One of the key ingredients is the adopted light element burning cross section, as derived from measurements of indirect/direct processes. Thus, it is worth discussing how the recent determination of such cross sections at energies relevant for stellar evolution has changed the prediction of surface light element abundances in low mass stars. For stellar calculations the reaction rate of a two body process can be written in the following way (see e.g. \citet{rolfsrodney}), \begin{equation} N_A <\sigma v>_b = \sqrt{\frac{8}{\pi \mu}}\frac{N_A}{(K_B T)^{\frac{3}{2}}} \int_0^{+\infty} \sigma(E)_b \, E \, e^{-\frac{E}{K_BT}}\, dE \,\, (\mathrm{cm}^3\,\,\mathrm{mol}^{-1}\,\mathrm{s}^{-1}) \label{eq:rate} \end{equation} where $\sigma(E)$ is the cross section of the process and the subscript $b$ means that the reaction rate is for two bare nuclei (i.e.
without any electron screening effect), while $T$ is the temperature in kelvin (K). In stellar evolution calculations, the energies at which the processes occur are generally so small that it is convenient to write the cross section in terms of another quantity, called the astrophysical factor $S(E)$, defined as follows, \begin{equation} S(E)_b = E\, \sigma(E)_b\, e^{2\pi\eta(E)} \end{equation} with $\eta(E)$ the Sommerfeld parameter, related to the tunnel effect between two interacting charged particles, which can be written as: \begin{equation} 2\pi \eta(E) = \frac{Z_1\,Z_2\,e^2}{2\varepsilon_0 \hbar} \sqrt{\frac{\mu}{2E_\mathrm{cm}}} = 31.3998\,Z_1\, Z_2\,\sqrt{\frac{A_\mu}{E_\mathrm{cm}(\mathrm{keV})}} \end{equation} where $\mu = m_1 m_2 /(m_1+m_2)$ is the reduced mass, $A_\mu$ is the same quantity expressed in atomic mass units (amu), and $E_\mathrm{cm}(\mathrm{keV})$ is the centre-of-mass energy expressed in keV. Using this quantity, eq.~(\ref{eq:rate}) assumes the following form, \begin{equation} N_A <\sigma v>_b = \sqrt{\frac{8}{\pi \mu}}\frac{N_A}{(K_B T)^{\frac{3}{2}}} \int_0^{+\infty} S(E)_b\, e^{-2\pi\eta(E) - \frac{E}{K_BT}}\, dE \,\, (\mathrm{cm}^3\,\,\mathrm{mol}^{-1}\,\mathrm{s}^{-1}) \label{eq:rate1} \end{equation} For many applications in stellar astrophysics, it is possible to expand the astrophysical factor around a specific value of the energy, thus obtaining, \begin{eqnarray} S(E) \approx S(E_0) \bigg(1 + \frac{S'(E_0)}{S(E_0)} (E-E_0) + \frac{1}{2}\frac{S''(E_0)}{S(E_0)} (E-E_0)^2 + \dots\bigg ) \end{eqnarray} The quantity $E_0$ is also known as the Gamow peak energy, and it corresponds to the energy where the exponential factor inside the integral in eq.~(\ref{eq:rate1}) reaches its maximum value. $E_0$ is defined in the following way (\citet{rolfsrodney}), \begin{equation} E_0 = 1.22 (A_\mu Z_1^2 Z_2^2 T_6^2)^{1/3}\,\, (\mathrm{keV}) \end{equation} with $T_6$ the temperature expressed in millions of kelvin. The expansion of $S(E)$ given above depends on the temperature at which the considered reaction occurs (through $E_0$), which in turn depends on the stellar mass. However, at the low energies typical of reactions in stars, $S(E)$ varies slowly with the energy, thus it is convenient to expand $S(E)$ around $E\approx 0$: in this case the reaction rate can be evaluated knowing $S(0)$ and its derivatives (usually it is enough to have $S'(0)$ and $S''(0)$). Light element (p,$\alpha$) reaction rates have been recently revised through the indirect Trojan Horse Method (THM, see e.g. \citet{baur86,spitaleri03,spitaleri16,spitaleri19} and references therein), which is particularly useful to measure cross sections at astrophysical energies by-passing extrapolation procedures, often affected by systematic uncertainties due, for instance, to electron screening effects. THM allows one to measure the astrophysically relevant cross sections at, or very close to, the Gamow peak without experiencing the lowering of the signal-to-noise ratio due to the presence of the Coulomb barrier between the interacting charged particles. Moreover, in recent years THM was successfully applied to reactions induced by unstable beams (\citet{pizzone16,lamia19}) as well as to neutron induced reactions, which may also play a role in the context of light element nucleosynthesis and BBN. In particular, the $^3$He(n,p)$^3$H reaction was studied at the astrophysically relevant energies (see \citet{Pizzone20} and references therein).
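To give a concrete sense of why the Coulomb barrier makes direct measurements at stellar energies so difficult, the short sketch below evaluates the formulae above for the $^7$Li(p,$\alpha$)$^4$He reaction (our own illustrative example; the choice $T_6 = 3$, indicative of pre-MS lithium burning temperatures, is an assumption made only for the numerical estimate):

\begin{verbatim}
import math

def gamow_peak_keV(z1, z2, a_mu, t6):
    """Gamow peak E0 = 1.22 (A_mu Z1^2 Z2^2 T6^2)^(1/3) keV,
    with T6 the temperature in millions of kelvin."""
    return 1.22 * (a_mu * z1**2 * z2**2 * t6**2) ** (1.0 / 3.0)

def sommerfeld_exponent(z1, z2, a_mu, e_keV):
    """Coulomb exponent 2 pi eta = 31.3998 Z1 Z2 sqrt(A_mu/E[keV])."""
    return 31.3998 * z1 * z2 * math.sqrt(a_mu / e_keV)

# 7Li + p: Z1 = 3, Z2 = 1, reduced mass A_mu = 7*1/(7+1) = 0.875 amu.
a_mu = 7.0 / 8.0
e0 = gamow_peak_keV(3, 1, a_mu, t6=3.0)
penetration = math.exp(-sommerfeld_exponent(3, 1, a_mu, e0))
print(f"E0 = {e0:.1f} keV, exp(-2 pi eta) = {penetration:.1e}")
\end{verbatim}

At $T_6 = 3$ this gives $E_0 \approx 5$~keV, with a barrier penetration factor of order $10^{-17}$ at the Gamow peak, which illustrates why direct measurements at stellar energies are so challenging and why indirect techniques such as the THM are valuable.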
THM was also applied to reactions between heavier systems, which are relevant in the late stellar evolutionary stages (\citet{tumino2018}). In the following we discuss the effects of these improvements, and of the present errors on nuclear cross sections, on the calculation of light element surface abundances in pre-Main Sequence stars. \\ \subsection{Effects of deuterium burning cross sections on pre-MS evolution} As discussed in Sections~\ref{PMSevolution}, \ref{acccretiongeometry} and \ref{Dburning}, deuterium burning plays a crucial role in the first stages of pre-MS or protostellar evolution. The value of the cross section of the p(D,$\gamma$)$^3$He process in stellar conditions has been reported by several authors (see \citet{adelberger11} for a review), both from measurements and from theoretical calculations, along with its uncertainty. \citet{adelberger11} redetermined the best value for the astrophysical factor $S(E)$ of such a reaction at zero energy ($S(0)$) and the uncertainty on it, concluding that the current uncertainty on $S(0)$ for this burning reaction is $\approx 7\%$. Recently, \citet{mossa2020} redetermined the D+p cross sections at energies typical of the BBN (between 32 and 263 keV) -- thus larger than those relevant for stellar calculations -- estimating an uncertainty of about $3\%$. We tested the impact on the pre-MS evolution of a variation of the D+p cross section by the uncertainty given by \citet{adelberger11} at stellar energies, i.e. $\pm7\%$. Such a variation of the deuterium burning reaction rate produces a negligible effect on the evolution of the stellar structure. This is related to the strong temperature dependence of the p(D,$\gamma$)$^3$He burning channel (about $T^{12}$): if $S(0)$ is artificially varied (e.g. reduced) by 7\% (independently of the temperature), an increase of the burning temperature is required to obtain the same energy production rate, which sustains the star; however, given the strong temperature dependence of the rate, a very small temperature variation (roughly $1.07^{1/12}-1 \approx 0.6\%$) is enough to restore the energy production. Thus the models are essentially unaffected by the current uncertainty on the p(D,$\gamma$)$^3$He reaction rate. From this analysis we can conclude that the main uncertainty source on the D-burning phase in stellar models is the error on the initial deuterium abundance, which can be as large as 50\%, as discussed in Section~\ref{Dburning}. Recently \citet{tumino14} (see also \citet{tumino2011}) measured with the THM the reaction rate for two additional channels involving D-burning, namely the D(D,p)$^3$H and the D(D,n)$^3$He processes; such burning channels could potentially contribute to the D-burning in stars. \begin{figure}[t] \centering \includegraphics[width=0.485\linewidth]{Tumino1.eps} \includegraphics[width=0.48\linewidth]{Tumino2.eps} \caption{Comparison of the THM rates for the D(D,p)$^3$H (left panels) and D(D,n)$^3$He (right panels) reactions with the NACRE (\citet{nacre}), JINA REACLIB (\citet{cyburt04}) and \citet{descouvemont04} rates. The stripe corresponds to the THM estimated uncertainty. Figure adapted from \citet{tumino14}.} \label{fig:tumino} \end{figure} Figure~\ref{fig:tumino} shows, for the quoted reactions, the THM rates compared to those of the still widely used NACRE compilation (\citet{nacre}), of the JINA REACLIB (\citet{cyburt04}) and of \citet{descouvemont04}. The estimated uncertainties on the analysed burning channels (of about 5\%) are also shown.
At temperatures typical of stellar deuterium burning ($\sim 10^6$~K) the THM D(D,p)$^3$H rate is about 5\% larger than the NACRE one, while it is much larger (about 15\%) than the value reported in \citet{cyburt04}. The differences appreciably reduce at larger temperatures, which are more relevant for cosmological calculations. If the \citet{descouvemont04} rate is considered, the difference with THM is very small at stellar temperatures (below 1\%), while it increases at larger temperatures, reaching about 5\% at $T\sim 10^9$~K. The new THM rate for the D(D,n)$^3$He reaction is $\sim$10\% larger than the others (NACRE, JINA and \citet{descouvemont04}) for temperatures smaller than about $5\times 10^7$~K. At larger temperatures the differences reduce, and for $T\gtrsim 2\times 10^8$~K the THM rate is smaller than the others by about 5-10\%. \citet{tumino14} evaluated the effect of the new rates in stellar models, showing that the change in the cross sections does not produce any effect on the stellar structure. The result was expected, because such burning channels are quite negligible in stellar models, where D is mainly destroyed via the p(D,$\gamma$)$^3$He channel. On the contrary, these reactions could be more important for the BBN (\citet{coc10,cyburt04}). \citet{tumino14} estimated that the new reaction rates could result in a variation of the primordial deuterium abundance inferred from the BBN by about $2\%$, while an impact on the $^7$Li abundance up to about 10\% is expected. \\ \\ \\ \subsection{Stellar surface abundance of light elements and updated (p,$\alpha$) reaction rates} The energy produced by the Li, Be and B nuclear reactions is negligible, and such reactions do not affect the evolution of the stellar structure. However, the surface abundances of light elements strongly depend on the nuclear burning. The different fragility of Li, Be and B against (p,$\alpha$) destruction reactions potentially allows one to investigate the characteristics of different depths of the stellar interior. In Figs.~\ref{fig:rate6Li7Li} and \ref{fig:rate9Be10B} the reaction rates for the most relevant light element burning (p,$\alpha$) reactions calculated with the THM are shown and compared with the JINA REACLIB and the less recent, but still widely used, NACRE rates. The results are discussed below. \\ \\ \subsubsection{$^6$Li and $^7$Li surface abundance and (p,$\alpha$) reaction rates efficiency} \begin{figure}[t] \centering \includegraphics[width=0.48\linewidth]{rate6Li.eps} \includegraphics[width=0.49\linewidth]{rate7Li.eps} \caption{Left panel: ratio between the THM $^6$Li(p,$\alpha$)$^3$He reaction rate and the \citet{pizzone05} (pt05) rate reported in the JINA REACLIB (solid blue line), together with the THM upper and lower limits (red dashed lines). The black dashed area represents the upper and lower limits of the JINA REACLIB rate assuming the same uncertainties given in the NACRE compilation. $T9$ indicates the temperature in units of 10$^9$ K. Figure adapted from \citet{lamia13}. Right panel: ratio of the adopted THM $^7$Li(p, $\alpha$)$^4$He reaction rate to that evaluated by the still widely used NACRE compilation (solid blue line). The THM rate upper and lower limits (red dashed lines) are compared with the upper and lower values recommended by NACRE (black dashed lines).
Figure adapted from \citet{lamia12}.} \label{fig:rate6Li7Li} \end{figure} The first attempts to apply THM (p,$\alpha$) reaction rates to pre-MS lithium surface abundance calculations were performed by \citet{pizzone03} and \citet{pizzone05} (hereafter pt05), and subsequently updated, after re-normalisation to recently released direct data, in \citet{lamia12,lamia13}. The left panel of Fig.~\ref{fig:rate6Li7Li} shows the $^6$Li(p,$\alpha$)$^3$He reaction rate obtained using the THM compared to the pt05 rate available in the JINA REACLIB. The THM estimated rate deviates from pt05 by about $15$\% at a temperature of T$\approx~10^6$~K, typical of $^6$Li burning in the pre-MS phase, a value that is larger than the current estimated uncertainty (about 10\%) on the rate itself. \citet{lamia13} evaluated the effect on the surface $^6$Li abundance of the update of the $^6$Li+p reaction rate for a range of stellar masses at three different metallicities ([Fe/H]$=-0.5, -1.0$, and $-2.0$). Figure~\ref{fig:6Lievolution} shows the time evolution of the surface $^6$Li abundance -- normalised to the original value -- obtained adopting the three different $^6$Li(p,$\alpha$)$^3$He reaction rates: THM, JINA (pt05), and NACRE. From Fig.~\ref{fig:6Lievolution} it is evident that $^6$Li depletion, at fixed burning reaction rate, varies significantly for different masses and metallicities. This can be understood recalling that the higher the metallicity (or the lower the stellar mass), the deeper and hotter the base of the convective envelope. Note that among the most massive models (i.e., $M=1.2$~M$_{\sun}$), which have the thinnest external convective envelopes, only that with the highest metallicity (i.e., [Fe/H]$=-0.5$) efficiently depletes surface $^6$Li. In the selected [Fe/H] range, the difference in the $^6$Li depletion between the THM and NACRE models ranges from about 13\% (for $M=1.2$~M$_{\sun}$) to about 60\% (for $M=1.0$~M$_{\sun}$). The difference reduces if the JINA rate is adopted, as expected due to the smaller differences between the two rates. \begin{figure} \centering \includegraphics[width=0.48\linewidth]{Fig_Li6_1_Lamia13.eps} \includegraphics[width=0.48\linewidth]{Fig_Li6_2_Lamia13.eps} \includegraphics[width=0.48\linewidth]{Fig_Li6_3_Lamia13.eps} \caption{Time evolution of the surface $^6$Li abundance, when the three different labelled $^6$Li(p,$\alpha$)$^3$He reaction rates are adopted in the models. Each panel corresponds, as indicated, to a different chemical composition, for which four different masses are evolved. See text for details. Figure adapted from \citet{lamia13}.} \label{fig:6Lievolution} \end{figure} The right panel of Fig.~\ref{fig:rate6Li7Li} shows the comparison between the THM and NACRE $^7$Li$+$p reaction rates; the difference is about 13\% at a temperature of T$\approx~10^6$~K, not much larger than the current uncertainty on the rate (about 10\%). Figure~\ref{fig:7Lievolution} shows the time evolution of the surface $^7$Li abundance for different masses when the THM and NACRE $^7$Li$+$p reaction rates are adopted. The differences between the two calculations range from about 7\% (for $M=1.0~$M$_{\sun}$) to about 25\% (for $M=0.6$~M$_{\sun}$).
\begin{figure} \centering \includegraphics[width=0.48\linewidth]{7Lievolution.eps} \caption{Time evolution of the surface $^7$Li abundance for PISA models with solar chemical composition and the labelled masses, computed with the THM and NACRE reaction rates (figure adapted from \citet{nuclei15}).} \label{fig:7Lievolution} \end{figure} In general, the effect of adopting different $^6$Li and $^7$Li burning reaction rates, although not negligible, is less important than the effects due to errors in other quantities used in the computation of a stellar model, such as the original chemical composition, the external convection efficiency, or the uncertainties in some other physical inputs adopted in the calculations (e.g., opacity and equation of state; see Sec.~\ref{Liuncertainties} and the discussions in \citet{pizzone10} and \citet{tognelli12}). Thus, at the moment an uncertainty on the burning reaction rate of the order of $10$\% is not the main error source in the determination of the surface lithium abundance in stellar models (\citet{lamia12,lamia13}). \\ \\ \subsubsection{Lithium Depletion Boundary} \label{LDB} In the mass range $M\approx$0.06-0.4~M$_{\sun}${} (the exact values depending on the chemical composition), $^7$Li is completely destroyed in pre-MS in fully convective structures. The larger the mass, the higher the temperature inside the star, and consequently the earlier the onset of lithium burning. The larger temperature in more massive stars also produces a more efficient lithium burning, and consequently the age at which lithium is fully depleted in such convective stars strongly depends on the stellar mass. In a coeval population of stars with ages between about 15 and 350~Myr, one would expect to observe, in the very low-mass regime, a sharp transition between stars with and without surface lithium at a given mass (corresponding to the highest stellar mass that, at the cluster age, has fully destroyed its lithium). Such a transition, usually called the Lithium Depletion Boundary (LDB), is a powerful age indicator (see e.g. \citet{dantona94}), thanks to the connection between luminosity, mass and age. Figure~\ref{fig:ldb} shows an example of the age vs luminosity relation for stars located at the LDB in the mass range [0.06, 0.4]~M$_{\sun}$. \begin{figure*} \centering \includegraphics[width=0.60\columnwidth]{LDB_ref.eps} \caption{Age vs luminosity relation for stars at the Lithium Depletion Boundary for solar chemical composition and $\alpha_\mathrm{ML}$=1. The labels along the sequence indicate the stellar mass in M$_{\sun}$. Figure adapted from \citet{tognelli15b}.} \label{fig:ldb} \end{figure*} The LDB method has been successfully adopted to assign ages to young clusters, as a competitive alternative to isochrone fitting (e.g. \citet{barrado99,oliveira03,jeffries05,manzi08,dobbie10,jeffries13,binks14,juarez14,dahm15,dupuy16,martin18,martin20} and references therein). The uncertainties on the age determination through the LDB technique have been analysed in \citet{burke04} and, more recently, in \citet{tognelli15b}; the main uncertainties that potentially affect the LDB age determination are those already discussed in Section~\ref{Liuncertainties}. Figure~\ref{fig:ldb_err} shows the relative uncertainty on the LDB age determination obtained by \citet{tognelli15b}, taking into account the errors on the adopted input physics and chemical composition.
The shaded area has been obtained by a cumulative variation of all the considered input physics and chemical abundances within their uncertainties (going into more detail would require a discussion too long for the purposes of this review; see the quoted paper for additional information). The uncertainty of the method depends on the stellar luminosity (or mass) at the LDB, which, in turn, translates into an age, but it is in any case lower than about 10\%. As a general comment, faint stars that correspond to LDB ages of the order of 50-60 Myr ($0.06\lesssim M$/M$_{\sun}$$\lesssim 0.1$) have errors of the order of about 5\%, an uncertainty that increases with the stellar luminosity at the LDB and thus with the derived age. \citet{tognelli15b} showed that a large part of the uncertainty on the LDB ($\sim40\%$) comes from the chemical composition indeterminacy, while the lithium burning rate causes a variation of the LDB age of about 1\%. \begin{figure*} \centering \includegraphics[width=0.60\columnwidth]{LDB_tot.eps} \caption{Relative uncertainty on the LDB age as a function of the luminosity of the star at the LDB. The shaded area represents the cumulative error stripe (see text). The lines are estimates of the uncertainty obtained by a linear or quadratic combination of the individual effects of the variation of the input physics and chemical composition. Figure adapted from \citet{tognelli15b}.} \label{fig:ldb_err} \end{figure*} \\ \\ \subsubsection{$^9$Be and $^{10}$B surface abundance and (p,$\alpha$) reaction rates efficiency} \begin{figure}[t] \centering \includegraphics[width=0.48\linewidth]{rate9Be.eps} \includegraphics[width=0.48\linewidth]{rate10B.eps} \caption{Left panel: ratio between the $^9$Be(p,$\alpha$)$^6$Li THM reaction rate and the one reported in the NACRE compilation. The red filled area marks the THM reaction rate uncertainty, compared to the NACRE one (grey region). Right panel: ratio between the $^{10}$B(p,$\alpha$)$^7$Be THM reaction rate and that obtained by the NACRE compilation (blue line). The red filled area marks the THM reaction rate uncertainty, compared to the NACRE one (grey region). Figure adapted from \citet{lamia15}.} \label{fig:rate9Be10B} \end{figure} Figure~\ref{fig:rate9Be10B} shows the comparison between the recent THM reaction rates and other reaction rates used in the literature. The THM $^9$Be(p,$\alpha$)$^6$Li rate (at temperatures of a few million degrees) is about 25\% larger than the NACRE one, and the uncertainty on the THM $^9$Be burning rate is about 25\% (see Fig.~\ref{fig:rate9Be10B}). The THM reaction rate is quite similar to that of the recent NACRE II compilation (\citet{xu13}), even if the THM uncertainty region is appreciably smaller than that of the NACRE II rate (see \citet{lamia15}). The left panel of Fig.~\ref{fig:9Be10Bevolution} shows the comparison between the predicted surface Be abundances computed using the THM reaction rate and the NACRE reaction rate, for solar metallicity stars. The higher THM rate leads to a faster $^9$Be destruction; consequently, at the same age, models with the THM rate show a lower $^9$Be surface abundance with respect to models with the NACRE rate. The differences in the predicted surface abundances are significant in those models where $^9$Be is efficiently destroyed (i.e. for M~$\lesssim 0.5$~M$_{\sun}$). We recall that in stars $^9$Be is destroyed through two channels: $^9$Be(p,$\alpha$)$^6$Li (R$_1$, the rate analysed by \citet{lamia15}) and $^9$Be(p, 2$\alpha$)$^2$H (R$_2$).
The ratio R$_1$/R$_2\approx 1.2$ for stellar conditions at the temperatures of interest; thus the R$_2$ contribution to beryllium destruction is not negligible. Changing only R$_1$, as done in \citet{lamia15}, affects the final beryllium abundance by a factor that is given by the R$_1$ reaction rate change (about 25\%) multiplied by the probability that the $^9$Be burning occurs in that channel, i.e., 25\%~$\times$~R$_1$/(R$_1$ + R$_2$)~$\approx$~14\%, thus leading to a change in the predicted beryllium abundance lower than what would be expected if only the R$_1$ channel were active in stars. The effect of the THM uncertainty on the $^9$Be(p,$\alpha$)$^6$Li rate -- approximately 25\%, which is equal to the difference with respect to the NACRE rate -- is expected to produce a variation of the predicted $^9$Be surface abundance of the same order of magnitude as discussed above. \begin{figure} \centering \includegraphics[width=0.48\linewidth]{lamia15_fig_supBe.eps} \includegraphics[width=0.48\linewidth]{lamia15_fig_supB.eps} \caption{Temporal evolution of the surface $^9$Be (left panel) and $^{10}$B (right panel) abundances (normalised to one) for solar chemical composition models with the labelled stellar mass. Models computed using the THM (thick solid line) and the NACRE (dashed line) reaction rates are shown. Figure adapted from \citet{lamia15}.} \label{fig:9Be10Bevolution} \end{figure} \begin{figure} \centering \includegraphics[width=0.96\linewidth]{Lamia2015_BeB_img.eps} \caption{Surface logarithmic abundances of $^9$Be (upper panel) and $^{10}$B (lower panel) as a function of the stellar effective temperature, for different stellar masses in the range [0.06, 0.8]~M$_{\sun}${} and for three ages typical of young/intermediate open clusters. Models are calculated for [Fe/H]$ = +0.0$, adopting both the THM (solid blue line) and the NACRE (dashed red line) reaction rates. Figure adapted from \citet{lamia15}.} \label{fig:ATeff} \end{figure} The right panel of Fig.~\ref{fig:rate9Be10B} shows the comparison between the THM and the NACRE rates for the $^{10}$B(p,$\alpha$)$^7$Be reaction: at temperatures of a few million kelvin, the THM rate is $\sim$30\% lower than the NACRE one. The error on the THM rate at the temperatures of interest is about 20\%. However, if the THM rate is compared with that of the updated NACRE II compilation (\citet{xu13}), the differences are significantly reduced (see \citet{lamia15}). The effect of the different $^{10}$B(p,$\alpha$)$^7$Be reaction rates on the surface B abundance in low mass stars (at solar metallicity) is shown in the right panel of Fig.~\ref{fig:9Be10Bevolution}. The lower THM $^{10}$B(p,$\alpha$)$^7$Be cross section leads to a smaller $^{10}$B destruction and thus to a larger surface $^{10}$B abundance at a fixed age. Due to the higher $^{10}$B burning temperature with respect to $^9$Be, the effect of the reaction rate change is significant only for masses $M~\lesssim 0.4$~M$_{\sun}$. Also notice that the typical timescale over which $^{10}$B is destroyed, at a fixed mass, is longer than that of $^9$Be. For completeness, Fig.~\ref{fig:9Be10Bevolution} also shows, for ages typical of the MS evolution (e.g. log~t$\gtrsim 8.5$ for $M=0.5$~M$_{\sun}$ ~for the $^9$Be abundance behaviour and for $M=0.4$~M$_{\sun}$ ~for $^{10}$B), the effect of microscopic diffusion, which leads to the settling of light elements toward the stellar interior. Figure~\ref{fig:ATeff} shows the surface logarithmic abundances of $^9$Be and $^{10}$B as a function of the effective temperature, $T_\mathrm{eff}$.
Although observational $^9$Be and $^{10}$B abundances are still not available for the low temperature/mass regimes typical of efficient $^9$Be and/or $^{10}$B burning, it is worth estimating the role of the improvements in nuclear physics in the surface abundance predictions. The models are computed using the THM and NACRE reaction rates discussed above; we recall that $T_\mathrm{eff}$~ is not affected by the change of the (p,$\alpha$) rates. Each curve represents the abundance isochrone, i.e., the locus of models with the same age but different masses. For those models where $^9$Be ($^{10}$B) burns, the differences between the adoption of the NACRE and THM reaction rates can be as large as about 0.2-0.3~dex for $^9$Be and almost 1 dex for $^{10}$B. To our knowledge, an analysis of the dependence of the surface $^9$Be or $^{10}$B abundances on the errors in the input physics and chemical composition adopted in the calculations is not available in the literature. However, it is a good approximation to assume a Be and B burning sensitivity to the input physics similar to that obtained for $^7$Li, as the burning temperatures are not much different. Under this assumption, the effects of the uncertainties on the (p,$\alpha$) Be and B burning reaction rates are not the dominant error source for the surface abundance predictions of such elements. \\ \\ \section{Summary and conclusions} The prediction of surface light element abundances in stellar models is a difficult task, because it is affected by several errors in the adopted input physics and by uncertainties in the efficiency of some physical mechanisms, such as convection in the stellar envelope. Moreover, pre-MS characteristics and surface light element abundances depend on the previous protostellar phase, which is the phase when the star forms. Analyses of the effects of different choices of the accretion parameters (accretion rate, radius and mass of the first stable hydrostatic protostar, accretion history, accretion geometry, the amount of energy transferred from the accreted matter to the accreting star, etc.) on the subsequent pre-MS evolution have been performed in the literature. The results show that, if the accretion process leads to bright and extended structures at the end of the protostellar phase, the stellar characteristics (including the surface light element abundances) are very similar to those of standard (non-accreting) pre-MS models with the same final mass. The structure of a pre-MS star at the end of the accretion phase is affected by the inclusion of the protostellar accretion only for a restricted range of accretion parameters, mainly in the so called ``cold accretion scenario''. In these cases a significant reduction of the surface light element abundances during the protostellar phase (in contrast to standard models) has been obtained; however, the position of the stars in the HR diagram is in disagreement with the observations of disk stars, raising doubts about the validity of the adopted accretion parameters. Protostellar accretion in low mass halo stars was suggested in the literature as one of the possible solutions for the cosmological lithium problem. However, theoretical calculations show that the reproduction of the Spite plateau would require a fine tuning of the parameters that govern the protostellar phase and, more importantly, the models with the required Li depletion follow a pre-MS evolution in the HR diagram which is quite different from the one observed for high metallicity pre-MS stars.
Comparisons between theoretical predictions and observations of the surface lithium abundance in young open clusters still show discrepancies. During the pre-MS phase the surface Li abundance is strongly influenced by nuclear burning as well as by the inward extension of the convective envelope and by the temperature at its bottom. These last two quantities depend on the input physics adopted in the calculations (radiative opacity, atmospheric models, etc.), on the assumed stellar chemical composition and on the convection efficiency in superadiabatic regions, for which a precise physical treatment is still not available. Comparisons between predictions and observations for pre-MS stars in open clusters suggest a less efficient convection during the pre-MS phase than on the Main Sequence. This is true even if one takes into account the uncertainties of the results due to the errors in the adopted input physics and assumed chemical composition. A possible explanation of this result could be that, in 1D evolutionary codes, a reduced convection efficiency mimics the main effects of some non-standard mechanisms active in young stars, such as the presence of a non-negligible magnetic field and/or surface spot coverage. The energy produced by the Li, Be and B burning reactions is negligible, thus their effects on stellar structures are irrelevant. However, the surface abundances of the light elements strongly depend on the nuclear burning and thus on the adopted reaction rates. The only nuclear burning that affects stellar evolution during the pre-MS or protostellar accretion phase is deuterium burning. The impact on pre-MS evolution of a variation of the p(D,$\gamma$)$^3$He reaction rate by its present uncertainty ($\pm$ 7\%) has been analysed in the literature, finding a negligible effect on stellar models. Two other D-burning channels have been considered, namely D(D,p)$^3$H and D(D,n)$^3$He. However, as expected, the inclusion of these channels does not produce relevant effects on pre-MS evolution, since the largest part of the deuterium is destroyed via p(D,$\gamma$)$^3$He. The effects of the remaining (even if greatly reduced) uncertainty on the (p,$\alpha$) cross sections on the surface abundance predictions for the other light elements have been evaluated in detail and compared to the influence of the errors in the other physics ingredients and in the stellar chemical composition. Light element (p,$\alpha$) reaction rates have recently been revised through the indirect Trojan Horse method (THM), substantially reducing their estimated uncertainty and revealing differences with respect to previous estimates at the energies of astrophysical interest. In general, the differences in the predicted surface Li, Be and B abundances between the adoption of the THM rates and of the less recent, but still widely used, NACRE rates are significant for stars in which the light elements are efficiently burned. The current uncertainty on the $^6$Li and $^7$Li proton capture reaction rates is of the order of 10\%. Numerical calculations show that the effects on the $^6$Li and $^7$Li surface abundances due to this uncertainty, although not negligible, are less important than the influence of errors in other quantities used in the computation of a stellar model. The present errors on the $^9$Be(p,$\alpha$)$^6$Li and $^{10}$B(p,$\alpha$)$^7$Be rates, at the temperatures of interest, are, respectively, of about 25\% and 20\%.
Due to the higher $^9$Be and $^{10}$B burning temperatures with respect to Li burning, the effects of the reaction rate change/uncertainty are significant only for masses M~$\lesssim 0.5$~M$_{\sun}$~and M~$\lesssim 0.4$~M$_{\sun}$, respectively. In conclusion, recent cross section measurements for the light element (p,$\alpha$) burning reactions have substantially reduced their uncertainty, even if it is still not negligible. Pre-Main Sequence theoretical calculations, and consequently the predictions for light element surface abundances, are affected by several uncertainty sources: the imprecise knowledge of the protostellar evolution and of the efficiency of superadiabatic convection, and the remaining errors in the input physics adopted in the calculations and in the assumed stellar chemical composition. On the other hand, the errors on the light element nuclear cross sections do not constitute the main uncertainty source for the prediction of light element surface abundances.
\\
\\
\section*{Author Contributions}
All the authors contributed to the writing of the paper.
\\
\\
\section*{Funding}
This work has been partially supported by INFN (Iniziativa specifica TAsP) and the Pisa University project PRA 2018/2019 ``Le stelle come laboratori cosmici di fisica fondamentale''.
\section*{Acknowledgements}
A.C. acknowledges the support of the ``visiting fellow'' program (BVF 2019) of the Pisa University. E.T. acknowledges INAF-OAAb for the fellowship ``Analisi dell'influenza dell'evoluzione protostellare sull'abbondanza superficiale di elementi leggeri in stelle di piccola massa in fase di pre-sequenza principale''. L.L. acknowledges the program ``Starting Grant 2020'' by University of Catania.
\\
\\
\bibliographystyle{frontiersinSCNS_ENG_HUMS}
{ "redpajama_set_name": "RedPajamaArXiv" }
\subsection*{Results}
{\bf Magic wavelength determination.} An important step towards high-resolution spectroscopy of the clock transition is the determination of a magic wavelength of the optical lattice, at which the dynamic polarizabilities of the two clock levels become equal~\cite{Takamoto2003}. We numerically calculated the dynamic polarizabilities of the clock levels using time-dependent second-order perturbation theory and transition data obtained with the {\small COWAN} package \cite{cowan1981} (Model~1, described in Methods and in ref.\,\cite{Sukachev2016}), and predicted the existence of a magic wavelength at 811.2\,nm (for collinear magnetic field $\vec{B}$ and lattice field polarization $\vec{\epsilon}$) near a narrow transition from the upper $J=5/2$ clock level at $809.5$\,nm. The calculated differential polarizability is shown in Fig.\,\ref{img:mwl} as the red solid line. This wavelength region is readily accessible by Ti:sapphire or powerful semiconductor lasers.
\begin{figure}[t!]
\includegraphics[width=.9\linewidth]{fig2}
\caption{Magic wavelength determination. Calculated (Model\,1, red solid curve), measured (blue dots), and fitted (Model\,2, green dashed line) differential dynamic polarizability $\Delta\alpha$ (in atomic units, a.u.) between the upper ($J=5/2$) and the lower ($J=7/2$) clock levels; the theoretical models are described further in the text and in Methods. Inset: zoom of the spectral region around the magic wavelength $\lambda_m=813.320(6)$\,nm. }
\label{img:mwl}
\end{figure}
Using an approach described in~ref.\,\cite{Barber2008}, we experimentally searched for the magic wavelength for the clock transition in the spectral region of 810--815\,nm. The transition frequency shift $\Delta\nu$ as a function of optical lattice power $P$ was measured at different lattice wavelengths. The differential dynamic polarizability $\Delta\alpha$ between the clock levels was calculated using the expression $h\Delta\nu=-16 \,a_0^3 \Delta\alpha P /c \,w^2$. Here $h$ is the Planck constant, $c$ is the speed of light, $a_0$ is the Bohr radius, and $w=126.0(2.5)$\,$\mu$m is the lattice beam radius, which was calculated from the enhancement cavity geometry. The intra-cavity power $P$ was determined by calibrated photodiodes placed after the cavity outcouplers M2 and M2$^\prime$. The details of the beam waist determination and power measurements are given in Methods. Figure\,\ref{img:mwl} shows the spectral dependence of $\Delta\alpha$ in atomic units~(a.u.). The magic wavelength of $\lambda_m=813.320(6)$\,nm was determined from the zero crossing of the linear fit in the inset of Fig.\,\ref{img:mwl}. Trapping Tm atoms in the optical lattice at $\lambda_m$ drastically reduces the inhomogeneous ac Stark broadening of the clock transition. Exciting with 80\,ms-long Rabi $\pi$-pulses of the clock laser, we recorded a spectrum with 10\,Hz full width at half maximum, shown in Fig.\,\ref{img:zeeman}(a). The non-unity excitation at the line center comes from an imperfect initial polarization of the atoms and the finite lifetime of the upper clock level ($\tau=112$\,ms).
\begin{figure}
\includegraphics[width=1\linewidth]{fig3}
\caption{Spectroscopy of the clock transition. a)~Spectral line shape of the clock transition in Tm. Every point is an average of 6 measurements. The solid curve shows the fit calculated for a Fourier-limited $80$\,ms rectangular $\pi$-pulse. b)~Clock transition frequency shift $\Delta\nu$ depending on $B_y$ (dots) at $B_z=0$ and constant $B_x=225\,$mG; the solid line is a parabolic fit.
The dependence on $B_z$ is similar.}
\label{img:zeeman}
\end{figure}
Using a Ti:sapphire frequency comb, we determined the absolute frequency of the $\ket{J=7/2,F=4}\rightarrow\ket{J=5/2,F=3}$ clock transition in Tm to be $262\,954\,938\,269\,213(30)$\,Hz. The relative frequency uncertainty of $1.1\times10^{-13}$ mainly comes from the instability and calibration accuracy of a GLONASS-calibrated passive hydrogen maser used as a frequency reference for the comb.

{\bf Differential polarizability analysis.} In the second-order approximation, the energy shift of an atomic level $\ket{J,F,m_F}$ in an external oscillating electromagnetic field with wavelength $\lambda$ equals $-\alpha_{J,F,m_F}(\lambda)E^2/2$, where $E$ is the amplitude of the electric field. For linear field polarization, the dynamic polarizability $\alpha_{J,F,m_F}$ can be split into the scalar $\alpha_{J}^s$ and tensor $\alpha_{J,F}^t$ parts as:
\begin{equation}
\label{eq:pol}
\alpha_{J,F,m_F} = \alpha_{J}^s +\frac{3 \cos^2\Theta - 1}{2}\times\frac{3 m_F^2 - F (F+1)}{ F (2 F -1)} \alpha_{J,F}^t\,,
\end{equation}
where $\Theta$ is the angle between the quantization axis (here the direction of the external magnetic field $\vec B$) and the electric field polarization $\vec{\epsilon}$ of the optical lattice. In our case, the differential polarizability of the two clock levels equals
\begin{equation}
\label{eq:pol3}
\Delta\alpha \equiv \alpha_{5/2,3,0}-\alpha_{7/2,4,0}= \Delta\alpha^s + \frac{3 \cos^2\Theta - 1}{2}\Delta\alpha^t\,,
\end{equation}
where $\Delta\alpha^s=\alpha_{5/2}^s-\alpha_{7/2}^s$ and $\Delta\alpha^t=\frac{5}{7}\alpha_{7/2,4}^t-\frac{4}{5}\alpha_{5/2,3}^t$. By definition, at the magic wavelength $\lambda_m$ the differential polarizability vanishes: $\Delta\alpha(\lambda_m)=0$. The frequency shift of the clock transition due to the optical lattice can be caused by (i) the finite accuracy of the magic wavelength determination and (ii) the angular dependence of the tensor part of the differential polarizability. The accuracy of the magic wavelength determination is related to the slope of $\Delta\alpha(\lambda)$ in the vicinity of $\lambda_m$, which is $-0.055(7)$\,a.u./nm at $\lambda_m\approx813$\,nm in Tm, as shown in the inset of Fig.\,\ref{img:mwl}. In more practical units, this corresponds to a clock transition frequency shift of $U\times\Delta f\times 0.30(4)$\,mHz for a lattice frequency detuning $\Delta f$\,[GHz] from $\lambda_m$ and a lattice depth $U$ in units of the recoil energy. This value is more than one order of magnitude smaller than the corresponding sensitivity of Sr and Yb lattice clocks~\cite{Barber2008,brown2017hyperpolarizability}. For $\Theta \ll 1$, the differential tensor polarizability $\Delta\alpha^t$ influences the clock transition frequency as:
\begin{equation}
\label{eq:tensor_shift}
h\Delta\nu \approx -3/2\Delta\alpha^t \frac{E^2}{2}\Theta^2\,.
\end{equation}
To find $\Delta\alpha^t$, we measured the dependence of the clock transition frequency shift $\Delta\nu$ on a small magnetic field $B_y$ at constant bias fields $B_x=225\,$mG and $B_z=0$ ($\Theta\approx B_y/B_x$), as shown in Fig.\,\ref{img:zeeman}(b). From the corresponding parabolic coefficient of 56(11)\,mHz/mG${}^2$, we obtain a differential tensor polarizability of $\Delta\alpha^t = 0.9(2)$\,a.u. at $\lambda_m$. The uncertainty comes from the absolute calibration of the magnetic field and from the power calibration~(see~Methods).
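The zero crossing that defines $\lambda_m$ amounts to a simple linear fit of the measured $\Delta\alpha(\lambda)$ points; a minimal Python sketch of this step (the data points are illustrative placeholders, only the slope of $-0.055$\,a.u./nm and the crossing near 813.32\,nm are taken from the text):
\begin{verbatim}
import numpy as np

# hypothetical (wavelength, delta alpha) pairs near the zero crossing;
# the real input would be the measured points from the inset of Fig. 2
lam = np.array([812.9, 813.1, 813.3, 813.5, 813.7])   # nm
dalpha = -0.055 * (lam - 813.32)                      # a.u.

slope, offset = np.polyfit(lam, dalpha, 1)
lambda_m = -offset / slope        # zero crossing of the linear fit
print(lambda_m)                   # ~813.32 nm
\end{verbatim}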
Both lattice frequency shifts mentioned in the previous paragraph can be readily reduced to the mHz level by stabilizing the lattice wavelength with 0.1\,GHz accuracy and by maintaining $|\Theta| < 10^{-3}$. The frequency of the $\ket{F=4,m_F=0}\rightarrow\ket{F=3,m_F=0}$ clock transition possesses a quadratic sensitivity to a dc magnetic field $B$ with a coefficient $\beta=-257.2$\,Hz/G${}^2$~\cite{Sukachev2016}. To keep the uncertainty of the transition frequency below 1\,mHz, it would be necessary to stabilize the magnetic field at the level of $20\,\mu$G at a bias field of $B=100$\,mG (see the short numerical check below). Note that the quadratic Zeeman shift in Tm can be fully canceled by measuring the averaged frequency of the two clock transitions $\ket{F=4,m_F=0}\rightarrow\ket{F=3,m_F=0}$ and $\ket{F=3,m_F=0}\rightarrow\ket{F=2,m_F=0}$ [Fig.\,\ref{img:setup}(b)], which possess quadratic Zeeman coefficients of opposite signs. To implement this approach, one should provide the magic-wavelength condition for both transitions. This can be done by choosing $\Theta=\arccos{(1/\sqrt{3})}$ to cancel the tensor part in~Eq.\,(\ref{eq:pol3}) and tuning the lattice wavelength approximately to 850\,nm [Fig.\,\ref{img:pol_all}(b)]. At this wavelength the differential scalar polarizability vanishes for both transitions ($\Delta\alpha^s=0$ since $\alpha^s_{J,F}=\alpha^s_{J}$).

{\bf Static differential polarizability and the BBR shift.} The BBR shift of the clock transition frequency can be accurately calculated from the static differential scalar polarizability $\Delta\alpha^s_\textup{DC}=\Delta\alpha^s(\lambda\rightarrow\infty)$, obtained from a theoretical model based on the polarizability spectrum measured at wavelengths of 810--860\,nm and at 1064\,nm. The measurements in the spectral region of 810--860\,nm were done by scanning the wavelength of the Ti:sapphire laser at two polarizations corresponding to $\Theta=0$ ($\vec\epsilon\parallel\vec x $) and $\Theta=\pi/2$ ($\vec\epsilon\parallel\vec y $), as shown in~Fig.\,\ref{img:pol_all}(a). The corresponding scalar $\Delta\alpha^s(\lambda)$ and tensor $\Delta\alpha^t(\lambda)$ differential polarizabilities calculated from Eq.\,(\ref{eq:pol3}) are shown in Fig.\,\ref{img:pol_all}(b).
\begin{figure} [t!]
\includegraphics[width=.8\linewidth]{fig4}
\caption{Differential polarizability spectra. a) The differential dynamic polarizability $\Delta\alpha(\lambda)$ for $\Theta=\pi/2$ (red dots) and $\Theta=0$ (blue dots). b)~Corresponding scalar $\Delta\alpha^s(\lambda)$ (green dots) and tensor $\Delta\alpha^t(\lambda)$ (magenta dots) parts. The magenta cross is $\Delta\alpha^t(\lambda_m)$ determined from the measurements in Fig.\,\ref{img:zeeman}(b). Solid and dotted curves are calculations based on Model 1 and Model 2, respectively~(see~Methods). }
\label{img:pol_all}
\end{figure}
To measure the dynamic polarizability at 1064\,nm, we used a slightly different procedure: Tm atoms were trapped in the optical lattice at $\lambda_m$, for which the differential polarizability vanishes, and the atomic cloud was illuminated along the $y$-axis by a focused beam of a linearly-polarized single-frequency 1064\,nm fiber laser with an optical power of up to 10\,W. The corresponding results for $\Theta=0$ ($\vec\epsilon\parallel\vec x $) and $\Theta=\pi/2$ ($\vec\epsilon\parallel\vec z $) are also shown in Fig.\,\ref{img:pol_all}.
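As a quick consistency check of the field-stability requirement quoted above, error propagation of the quadratic Zeeman shift $\Delta\nu=\beta B^2$ gives $\delta\nu = 2|\beta| B\,\delta B$; a minimal sketch:
\begin{verbatim}
beta = -257.2      # quadratic Zeeman coefficient, Hz/G^2
B = 0.100          # bias field, G (100 mG)
target = 1e-3      # desired frequency uncertainty, Hz

dB = target / (2 * abs(beta) * B)
print(dB * 1e6)    # ~19 microgauss, i.e. the ~20 uG quoted above
\end{verbatim}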
To compare with the experimental data and to deduce $\Delta\alpha^s_\textup{DC}$, we use Model\,2 (see~Methods), which differs from Model\,1 by introducing four adjustable parameters: the probabilities of the 806.7\,nm and 809.5\,nm transitions and two offsets for the differential scalar and tensor polarizabilities. The corresponding fits based on Model\,2 are shown as dashed lines in Fig.\,\ref{img:pol_all}, while calculations with no free parameters (Model\,1) are shown as solid lines. From Model~2 we obtain
\begin{equation}\label{eq:polresult}
\Delta\alpha^s_\textup{DC}=-0.047(18)\,\textrm{a.u.}
\end{equation}
The differential static scalar polarizability from Model\,1 is $-0.062$\,a.u., which differs from~(\ref{eq:polresult}) by only one standard deviation. Calculations of the BBR frequency shift can be readily done using the value of $\Delta\alpha^s_\textup{DC}$~\cite{Sukachev2016}. The differential scalar polarizability in the spectral region around $10\,\mu$m (the maximum of the BBR spectrum at room temperature) differs by less than $10^{-3}$\,a.u. from $\Delta\alpha_\textrm{DC}^s$. Note that there are no resonance transitions from the clock levels for $\lambda>1.5$\,$\mu$m. For our clock transition at $1.14\,\mu$m the room-temperature BBR shift is $-0.45(18)$\,mHz. It is a few orders of magnitude smaller than for other neutral atoms and is comparable to the best ion species, as shown in Table\,\ref{bbr_table}. This result quantitatively confirms the idea of strong shielding of inner-shell transitions in lanthanides from external electric fields. The clock transition frequency shifts due to the magnetic component of the BBR and the electric field gradient of the optical lattice are less than $10^{-4}$\,Hz and can be neglected (see Methods).
\begin{table}[t!]
\caption{The fractional BBR shift at 300\,K for the clock transition frequencies in thulium and some other neutral atoms and ions.}
\begin{ruledtabular}
\begin{tabular}{cccc}
&Element & $\Delta\nu^{BBR}/\nu, 10^{-17}$ \\
\hline
&Tm (this work) & $-0.2$\\
&Sr \footnote[1]{ref.\,\cite{Ludlow2015}} & $-550$\\
&Yb \footnotemark[1] & $-270$ \\
&Hg \footnote[2]{ref.\,\cite{Bilicki2016}} & $-16$ \\
&Yb$^+$ (E3)\footnotemark[1] & $-11$\\
&Al$^+$ \footnotemark[1] & $-0.4$ \\
&Lu$^+$ \footnote[3]{transition ${}^1S_0 - {}^3D_1$, ref.\,\cite{Arnold2018}} & $-0.14$ \\
\end{tabular}
\end{ruledtabular}\label{bbr_table}
\end{table}

\subsection*{Discussion}
The specific shielding of the inner-shell magnetic-dipole clock transition in atomic thulium at 1.14\,$\mu$m by the outer 5$s^2$ and 6$s^2$ electronic shells results in a very low sensitivity of its frequency to external electric fields. The differential static scalar polarizability of the two $J=7/2$ and $J=5/2$ clock levels is only $-0.047(18)$\,atomic units, which corresponds to a fractional BBR frequency shift of the transition of $2\times 10^{-18}$ at room temperature. This is three orders of magnitude less than for the prominent clock transitions in Sr and Yb (see Table\,\ref{bbr_table}). Taking into account that all major frequency shifts (the Zeeman shift, lattice shifts, collisional shifts) can be controlled at the low $10^{-17}$ level, these features make thulium a promising candidate for a transportable room-temperature optical atomic clock due to relaxed constraints on the ambient temperature stability. It combines the advantages of the unprecedented frequency stability of optical lattice clocks based on neutral atoms with the low BBR sensitivity of ion optical clocks.
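The size of the quoted BBR shift follows from $\Delta\alpha^s_\textup{DC}$ alone, since the static approximation holds to better than $10^{-3}$\,a.u.; a minimal Python estimate (magnitude only, neglecting the small dynamic correction):
\begin{verbatim}
import scipy.constants as const

au = 1.648777e-41       # atomic unit of polarizability, C m^2/V
d_alpha = -0.047 * au   # static differential polarizability
E2 = 831.9**2           # mean-square BBR electric field at 300 K, (V/m)^2

shift = 0.5 * d_alpha * E2 / const.h     # Hz
print(abs(shift) * 1e3)  # ~0.4 mHz, cf. the quoted -0.45(18) mHz
\end{verbatim}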
Moreover, precision spectroscopy in Tm opens possibilities for sensitive tests of Lorentz invariance \cite{Shaniv2018} and for searches for a variation of the fine-structure constant \cite{kolachevsky2008high}. Optical clocks based on an $f$-$f$ transition in some other lanthanides with spinless nuclei could be even more attractive, featuring a low sensitivity to magnetic fields due to the absence of hyperfine structure, together with a small BBR shift. For example, the fine-structure clock transition at the telecom wavelength of 1.44\,$\mu$m in laser-cooled erbium atoms (e.g. $^{166}$Er)~\cite{McClelland2006,Kozlov2013} can be particularly interesting for optical frequency dissemination over fiber networks~\cite{Riehle2017}.

\subsection*{Methods}
{\bf Enhancement cavity}. The optical lattice is formed inside a $\Gamma$-shaped enhancement cavity, as shown in Fig.\,\ref{img:setup}(a). The reflectivity of the curved incoupler mirror M1 (radius of curvature $r=-250$\,mm) equals 87\% and matches the losses introduced by the vacuum chamber viewports. The outcouplers M2 and M2$^\prime$ are identical flat mirrors with reflectivity $R>99\%$. For locking to the laser frequency, the cavity mirror reflecting the beam at $45^\circ$ is mounted on a piezo actuator. The intra-cavity polarization is defined by a broadband polarization beam splitter; depending on the polarization, either the M2 or the M2$^\prime$ outcoupler mirror is used. The intra-cavity lens has a focal length of $f=400$\,mm. Depending on the experimental geometry ($\Theta=0$ or $\Theta=\pi/2$), we couple the correspondingly linearly polarized radiation from the Ti:sapphire laser through the incoupler mirror~M1. Intra-cavity polarization filtering by the PBS defines the polarization angle and significantly improves the polarization purity of the optical lattice. The angle between the laser field polarization and the bias magnetic field is adjusted with an accuracy of better than $1^\circ$.

{\bf Measurement of differential dynamic polarizabilities.} The differential polarizability $\Delta\alpha$ of the clock levels is determined from the frequency shift of the corresponding transition $\Delta\nu$, the circulating power $P$, and the TEM$_{00}$ cavity mode radius $w$ at the atomic cloud position as
\begin{equation}\label{eq4}
\Delta\alpha =-\frac{hc w^2}{16 a_0^3}\frac{\Delta\nu}{P}\,.
\end{equation}
The dependence $\Delta\nu(P)$ was obtained for different wavelengths in the spectral range 810--860\,nm for intra-cavity circulating powers varying from 1\,W to 4\,W, as shown in Fig.\,\ref{img:s2}. The frequency shifts $\Delta\nu$ were measured relative to the laser frequency, which is stabilized to an ultra-stable ULE cavity with linear drift compensation. The slope coefficients of the corresponding linear fits were substituted into Eq.\,(\ref{eq4}) to deduce the values of $\Delta\alpha$ presented in Figs.\,\ref{img:mwl},\,\ref{img:pol_all}.
\begin{figure}[h!]
\includegraphics[width=.8\linewidth]{fig5}
\caption{ Clock transition frequency shift $\Delta\nu$ as a function of optical lattice power $P$ in the vicinity of the magic wavelength $\lambda_m=813.32$\,nm. Experimental data are fitted by linear functions.}
\label{img:s2}
\end{figure}
The uncertainty of the frequency shift $\Delta\nu$ comes from the residual instability of the reference cavity over time intervals of 1000\,s. To estimate it, we measured the clock transition frequency relative to the clock laser frequency at the magic wavelength, where the perturbations from the lattice are minimal. The results are shown in Fig.\,\ref{img:s3}.
The standard deviation equals 2.6\,Hz, contributing $0.003$\,a.u. to the error budget of $\Delta\alpha$. For lattice wavelengths detuned from $\lambda_m$, the contribution of the laser frequency instability is negligible.
\begin{figure}[h!]
\includegraphics[width=.8\linewidth]{fig6}
\caption{Relative frequency of the Tm clock transition and the ULE cavity mode with linear drift compensation. Each data point and the corresponding uncertainty come from the fit of the clock transition spectrum. The shaded region corresponds to 1~standard deviation of the data set.}
\label{img:s3}
\end{figure}
The intra-cavity power $P$ was determined by measuring the power leaking through the cavity outcoupler M2 (or M2$^\prime$) using calibrated photodiodes. For each photodiode we measured the power-to-voltage transfer function $P(U) = \kappa U$, where $U$ is the voltage reading of the photodiode and $\kappa$ is a coefficient measured using an absolutely calibrated Thorlabs S121C power meter. To determine $\kappa$, we unlocked the cavity, slightly tilted the outcoupler, blocked the reflected beam to prevent possible reflections from the incoupler, and measured the power before the outcoupler and the corresponding voltage reading of the photodiode. The linearity of the photodiode response was checked separately and turned out to be better than $3\%$ in the working region. The photodiode calibration was done over the whole spectral range of 810--860\,nm to take into account the spectral response of the outcoupler and the photodiode. Although the specified uncertainty of the Thorlabs S121C power sensor equals 3\%, we assign a net power-measurement uncertainty of 10\%, based on a comparison of the readings of three different absolutely calibrated sensors. The beam radius $w$ at the atomic cloud position is deduced from the cavity geometry: the distances from the vacuum chamber center (and target atomic cloud position) to M1, L and M2 (or M2$^\prime$) are 244\,mm, 384\,mm and 500\,mm, respectively, giving a beam radius of $w=126\,\mu$m at the position of the atomic cloud. The uncertainty of $w$ comes from the position uncertainties of the cavity elements and of the atomic cloud with respect to the chamber center, as well as from the 1\,mm uncertainty of the lens focal length. We conservatively evaluate the position uncertainties of M1, L, and M2 (or PBS and M2$^\prime$) as 1\,mm, and the possible axial displacement of the atomic cloud as 2\,mm. The partial contributions to the beam radius uncertainty are $1.6\,\mu$m (the incoupler), $1.2\,\mu$m (the lens), $0.03\,\mu$m (the outcoupler), $0.3\,\mu$m (the cloud) and $1.5\,\mu$m (focal length). Adding these in quadrature, the total uncertainty of the beam radius $w$ equals $2.5\,\mu$m. The result is independently confirmed by measuring the frequency intervals between cavity transverse modes. The finite temperature of the atomic cloud reduces the average light intensity $I_\textrm{av}$ experienced by the atoms with respect to the on-axis antinode lattice intensity $I_0=8P/\pi w^2$. Assuming a Boltzmann distribution of the atoms with temperature $T$ in a trap of depth $U_0$, one can calculate the parameter $\eta$ connecting the average and maximum intensities, $I_\textrm{av} = \eta I_0$. The parameter $\eta$ is calculated from
\begin{equation}
\eta = \frac{\int\limits_{0}^{U_0} e^{-E/kT} \left(\frac{1}{2 r_0} \int\limits_{-r_0}^{r_0} e^{-2r^2}dr\right)\,dE}{\int\limits_{0}^{U_0} e^{-E/kT}dE}\,,
\end{equation}
where $r_0 = ({-\ln(1-E/U_0)/2})^{1/2}$ is the classical turning point.
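This double integral is straightforward to evaluate numerically; a minimal Python sketch (function and variable names are ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def eta(kT, U0=1.0):
    def inner(E):
        r0 = np.sqrt(-np.log(1.0 - E / U0) / 2.0)  # turning point
        avg, _ = quad(lambda r: np.exp(-2.0 * r**2), -r0, r0)
        return avg / (2.0 * r0)
    num, _ = quad(lambda E: np.exp(-E / kT) * inner(E), 0.0, U0)
    den, _ = quad(lambda E: np.exp(-E / kT), 0.0, U0)
    return num / den

print(eta(0.3))   # ~0.90, the value quoted below for kT = 0.3 U0
\end{verbatim}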
The parameter $\eta$ equals $0.90(5)$ for $kT=0.3\,U_0$ (which corresponds to average experimental conditions). One standard deviation $\sigma=0.05$ of $\eta=0.9$ corresponds to a $kT$ range from $0.14\,U_0$ to $0.6\,U_0$, while $2\,\sigma$ already covers the ($0$--$3)\,U_0$ range. Summarizing, we evaluate the uncertainty of the measured differential polarizability $\Delta\alpha$ to be 13\%, with the dominant contribution coming from the power calibration. In the vicinity of $\lambda_m$, the uncertainty is slightly higher, by 0.003\,a.u., because of the reference clock laser frequency instability. To measure the differential polarizability at 1064\,nm, the atoms were trapped in the optical lattice at the magic wavelength and irradiated by a focused, slightly elliptical 1064\,nm laser beam with waists of $w_x=320(20)\,\mu$m and $w_y=280(20)\,\mu$m in the $x$ and $y$ directions, respectively. To center the 1064\,nm laser beam on the atomic cloud, we maximized the intensity of the beam at the atomic cloud by monitoring the frequency shift of the clock transition. To increase the sensitivity, the corresponding measurements were done by tuning the clock laser to the slope of the 1.14\,$\mu$m transition. For the measurement session of $\Delta\alpha$ at $\Theta=0$, we performed three adjustments of the 1064\,nm laser beam. The reproducibility corresponds to a frequency shift of 5\,Hz at a maximal frequency shift of 25\,Hz. The resulting linear coefficient is evaluated as $\Delta\nu/P=-3.1(6)$\,Hz/W, including this uncertainty. In the $\Theta=\pi/2$ configuration, we did not observe any significant effect from the adjustment, and the coefficient equals $0.04(24)$\,Hz/W.

{\bf Theoretical analysis.} The theoretical approach used to calculate the polarizabilities of Tm atomic levels is described in our previous works~\cite{Sukachev2016,golovizin2017methods}. The calculations are based on time-dependent second-order perturbation theory with a summation over known discrete transitions from the levels of interest. For the calculations, we used transition wavelengths and probabilities obtained with the {\small COWAN} package \cite{cowan1981}, with some exceptions: for transitions with $\lambda>800$\,nm, experimental wavelengths from ref.\,\cite{NIST_ASD} are used. This approach increases the accuracy of the magic wavelength prediction in the corresponding spectral region $\lambda>800$\,nm. According to the calculations, the magic wavelength was expected at 811.2\,nm, which motivated our experimental studies in the spectral region 810--815\,nm (see Fig.\,\ref{img:mwl}). We refer to this model as ``Model\,1'' and use it for comparison with the experimental results of this work, as shown in Figs.\,\ref{img:mwl},\ref{img:pol_all}. The deviation of the experimental data from Model\,1 can be explained by two main factors. First, this model does not take into account transitions to the continuum. Together with the uncertainties of the {\small COWAN} calculations of transition amplitudes, this can result in a small offset of the infrared differential polarizability spectrum. Note that although transitions to the continuum may contribute significantly to the polarizabilities of the individual levels (up to 10\%), their contribution to the differential polarizability of the $f$-$f$ transition at 1.14\,$\mu$m in Tm mostly cancels \cite{Sukachev2016}.
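The sum-over-states evaluation itself is compact; a minimal Python sketch for the scalar part (the conversion from the Einstein coefficient $A_k$ to the line strength is the standard one; the single transition listed, and the angular momenta used, are placeholders rather than the actual {\small COWAN} input):
\begin{verbatim}
import numpy as np
import scipy.constants as const

def alpha_scalar(omega, J_i, transitions):
    """Scalar dynamic polarizability (SI) of a level with angular
    momentum J_i from (omega_k, A_k, J_k) transition data."""
    alpha = 0.0
    for omega_k, A_k, J_k in transitions:
        # |<k||d||i>|^2 = 3 pi eps0 hbar c^3 (2J_k+1) A_k / omega_k^3
        d2 = (3 * np.pi * const.epsilon_0 * const.hbar * const.c**3
              * (2 * J_k + 1) * A_k / omega_k**3)
        alpha += (2.0 / (3 * const.hbar * (2 * J_i + 1))
                  * d2 * omega_k / (omega_k**2 - omega**2))
    return alpha

au = 1.648777e-41                     # a.u. of polarizability, C m^2/V
w_k = 2 * np.pi * const.c / 806.7e-9  # 806.7 nm line, A = 3473 1/s
print(alpha_scalar(2 * np.pi * const.c / 850e-9, 2.5,
                   [(w_k, 3473.0, 3.5)]) / au)   # contribution in a.u.
\end{verbatim}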
\begin{table}[h]
\caption{Uncertainty budget for the differential scalar polarizability $\Delta\alpha^s_\textup{DC}$.}
\begin{ruledtabular}
\begin{tabular}{l|c}
Source & Uncertainty, a.u.\\
\hline
Experimental results for 810--850\,nm & $0.013$ \\
Experimental result for $1064$\,nm & $0.006$ \\
Angle $\Theta$ & $0.002$\\
Transition probabilities for $\lambda>900$\,nm & $0.01$ \\
\hline
\bf{Total} & $\bf{0.018}$ \\
\hline
For the reference: & \\
Difference of Models 1 and 2& $0.015$\\
\end{tabular}
\end{ruledtabular}\label{table2}
\end{table}
To fit the experimental data we use Model\,2, which differs from Model\,1 by introducing four fit parameters: the probabilities of the 806.7\,nm and 809.5\,nm transitions, which mostly affect the polarizability spectrum in the 810--860\,nm region, and two offsets for the scalar and tensor polarizabilities. After fitting the experimental data (see Fig.\,\ref{img:pol_all}) with Model\,2, the probability of the 806.7\,nm transition changes from $3473\,\text{s}^{-1}$ to $4208(298)\,\text{s}^{-1}$, the probability of the 809.5\,nm transition changes from 149\,s$^{-1}$ to 357(109)\,s$^{-1}$, and the fitted offsets for the differential scalar and tensor polarizabilities equal $0.012(6)$\,a.u. and $-0.028(12)$\,a.u., with reduced $\chi^2$ values for the fits of 1.35 and 2.9, respectively. Transitions from the upper Tm clock level $J=5/2$ in the spectral range $\lambda>900$\,nm are weak, and their probabilities have not been measured experimentally. To calculate these probabilities we used the {\small COWAN} package. To estimate the impact of the insufficient knowledge of the transition probabilities on the differential scalar polarizability $\Delta\alpha^s(\lambda)$, we allow for a possible variation of each transition probability by a factor of 2. After extrapolation of the fitted Model\,2 to $\lambda\rightarrow\infty$ we get the static differential polarizability $\Delta\alpha^s_\textup{DC}=-0.047^{+0.01}_{-0.005}$\,a.u., where the uncertainty comes from the variation of the $\lambda>900$\,nm transition probabilities. We summarize all sources of uncertainty which contribute to the error of the static differential polarizability $\Delta\alpha^s_\textup{DC}$ in Table\,\ref{table2}. As discussed above, the experimental uncertainty for the 810--860\,nm range is 13\%, contributing $0.013$\,a.u. to $\Delta\alpha^s_\textup{DC}$, while the measurement at 1064\,nm is less accurate (20\%) due to the laser beam adjustment and results in a $0.006$\,a.u. variation of the Model\,2 extrapolation. The uncertainty of the angle $\Theta$ adjustment contributes $0.002$\,a.u. The uncertainty coming from the poorly known transition probabilities from the $J=5/2$ clock level in the $\lambda>900$\,nm range contributes $0.01$\,a.u. Using the extrapolation of Model\,2 and adding all uncertainties, we arrive at the final result of $\Delta\alpha^s_\textup{DC}=-0.047(18)$\,a.u. Note that this is fully consistent with the extrapolated value of $-0.062$\,a.u. obtained from Model\,1 and given for reference in Table\,\ref{table2}.

{\bf BBR magnetic field.} To estimate the clock transition frequency shift due to the magnetic component of the BBR, we follow the analysis given in ref.~\cite{gan2018oscillating}.
The corresponding frequency shift of one of the clock levels coupled to another atomic level with a magnetic-dipole transition at frequency $\omega_0$ can be found by integrating over the full BBR spectrum as
\begin{equation}
\begin{split}
\Delta\nu^B_{bbr}(T) &= -\frac{\omega_0 }{2\pi}\frac{\mu_B^2}{2 \hbar \pi^2 c^5 \epsilon_0}\int_0^\infty \frac{1}{\omega_0^2-\omega^2}\frac{\omega^3}{e^{\hbar \omega/k_B T} -1 }d\omega \\
& = -\frac{\omega_0 }{2\pi}\frac{\gamma}{2} \left(\frac{T}{T_0}\right)^2 f(y),
\end{split}
\end{equation}
where $\epsilon_0$ is the vacuum permittivity, $T_0=300$\,K, and $y=\hbar\omega_0/k_B T$. Here
\begin{gather}
\gamma = \frac{\mu_B^2}{\hbar^2}\frac{\hbar}{6 c^5 \epsilon_0} \left(\frac{k_B T_0}{\hbar}\right)^2 \approx 9.78\times10^{-18}, \\
f(y) = \frac{6}{\pi^2}\int_0^\infty\frac{1}{y^2-x^2}\frac{x^3 dx}{e^x-1}.
\end{gather}
The hyperfine transition frequency $\omega_0$ of the $J=7/2$ ground level in Tm equals $2\pi\times1496$\,MHz, while for the $J=5/2$ clock level it equals $2\pi\times2115$\,MHz. For these values of $\omega_0$ one has $y \ll 1$, $f(y)\approx -1$, and the shift is of the order of $10^{-8}$\,Hz. To estimate the contribution from the optical transitions, we evaluate the shift from the lowest-frequency magnetic-dipole transition at $\omega_0 \approx 2\pi\times263$\,THz, which is the clock transition itself: $y=42$, $f(y)=2.3\times10^{-3}$ and $\Delta\nu^B_{bbr}(T_0)=-3\times10^{-6}$\,Hz for the ground level (for the upper clock level the corresponding shift is $+3\times10^{-6}$\,Hz). Hence, we estimate the total shift of the clock transition due to the magnetic component of the BBR to be less than $10^{-4}$\,Hz.

{\bf Electric quadrupole shift.} In contrast to neutral Sr, Yb, and Hg atoms, the Tm clock levels possess a non-zero electric quadrupole moment of the order of 1\,a.u.~\cite{Sukachev2016} and couple to an electric field gradient. Since the electric field gradient in an optical lattice oscillates at the optical frequency, the corresponding time-averaged frequency shift of the clock transition is zero.
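The dimensionless integral $f(y)$ above is a principal-value integral with a pole at $x=y$; the two limits quoted in the text can be reproduced with a few lines of Python (the cutoff and variable names are ours):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def f(y, cutoff=60.0):
    # write 1/(y^2-x^2) = -[1/(x+y)]/(x-y) and treat the pole at
    # x = y as a Cauchy principal value on [0, cutoff]
    g = lambda x: x**3 / np.expm1(x) / (x + y)
    pv, _ = quad(g, 0.0, cutoff, weight='cauchy', wvar=y)
    tail, _ = quad(lambda x: x**3 / np.expm1(x) / (y**2 - x**2),
                   cutoff, np.inf)
    return 6.0 / np.pi**2 * (-pv + tail)

print(f(0.01))   # ~ -1      (hyperfine transitions, y << 1)
print(f(42.0))   # ~ 2.3e-3  (the optical clock transition)
\end{verbatim}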
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
Confinement and chiral symmetry breaking are prominent features of strongly coupled gauge theories. If the gauge group contains a non-trivial center ${\mathcal Z}$, then the traced Polyakov loop \cite{Polyakov:1978vu,Susskind:1979up}
\eqnl{
L_\mbf{x}={\rm tr}_c\, {\mathcal P}_\mbf{x},\quad {\mathcal P}_\mbf{x}= \prod_{\tau=1}^{N_\mathrm{t}} U_0(\tau,\mbf{x})}{polya1}
serves as an order parameter for confinement in pure gauge theories or (supersymmetric) gauge theories with matter in the adjoint representation. The dynamics of $L_\mbf{x}$ near the phase transition point is effectively described by generalized Potts models \cite{Yaffe:1982qf,Wozar:2006fi}. Here we consider the space-independent expectation values $\langle L_\mbf{x}\rangle$ only and thus may replace $L_\mbf{x}$ by its spatial average
\eqnl{
L=\frac{1}{V_s}\sum_\mbf{x} L_\mbf{x},\quad V_\mathrm{s}=N_\mathrm{s}^{d-1}.}{polya2}
The expectation value $\langle L\rangle$ is zero in the center-symmetric confining phase and non-zero in the center-asymmetric deconfining phase. Chiral symmetry breaking, on the other hand, is related to an unusual distribution of the low-lying eigenvalues of the Euclidean Dirac operator ${\mathcal D}$ \cite{Leutwyler:1992yt}. In the chirally broken low-temperature phase the typical distribution is dramatically different from that of the free Dirac operator, since a typical level density $\rho(\lambda)$ for the eigenvalues per volume does not vanish for $\lambda\to 0$. Indeed, according to the celebrated Banks-Casher relation \cite{Banks:1979yr}, the mean density in the infrared is proportional to the quark condensate,
\eqnl{
\langle \rho(0)\rangle=-\frac{1}{\pi} \langle 0\vert \bar\psi\psi\vert 0\rangle.}{banks1}
Which class of gauge field configurations gives rise to this unusual spectral behavior has not been fully clarified. It may be a liquid of instanton-type configurations \cite{Schafer:1996wv}. Simulations of finite-temperature $SU(3)$ gauge theory without dynamical quarks reveal a first-order confinement-deconfinement phase transition at 260 MeV. At the same temperature the chiral condensate vanishes. This indicates that chiral symmetry breaking and confinement are most likely two sides of the same coin (\cite{Kogut:1982rt}, for a review see e.g. \cite{Karsch:2001cy}). Although it is commonly believed that confinement and chiral symmetry breaking are deeply related, no analytical evidence of such a link existed up to a recent observation by Christof Gattringer \cite{Gattringer:2006ci}. His formula holds for lattice-regulated gauge theories and is most simply stated for Dirac operators with nearest-neighbor interactions. Here we consider fermions with the ultra-local and $\gamma_5$-hermitian Wilson-Dirac operator
\eqnl{
\langle y\vert {\mathcal D}\vert x\rangle=(m+d)\delta_{xy} -\frac{1}{2}\sum_{\mu=0}^{d-1} \left((1+\gamma^\mu)U_{-\mu}(x)\delta_{y,x-e_\mu} +(1-\gamma^\mu)U_\mu(x)\delta_{y,x+e_\mu}\right),}{DWOperator}
where $U_{\pm\mu}(x)$ denotes the parallel transporter from site $x$ to its neighboring site $x\pm e_\mu$, such that $U_{-\mu}(x+e_\mu)U_\mu(x)=\mathbbm{1}$ holds true. Since we are interested in the finite-temperature behavior, we choose an asymmetric lattice with $N_\mathrm{t}$ sites in the temporal direction and $N_\mathrm{s}\gg N_\mathrm{t}$ sites in each of the $d-1$ spatial directions. We impose periodic boundary conditions in all directions.
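To make the conventions explicit, here is a minimal Python sketch of this operator in $d=2$ (a toy implementation of ours, with $\gamma^0=\sigma_1$, $\gamma^1=\sigma_2$ and $\gamma_5=\sigma_3$; the final line checks $\gamma_5$-hermiticity on random $U(1)$ links):
\begin{verbatim}
import numpy as np

def wilson_dirac(U, m):
    """Dense Wilson-Dirac operator on a periodic Nt x Ns lattice, d=2.
    U[mu][t, x] is the Nc x Nc link from site (t, x) in direction mu."""
    g = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex)]
    Nt, Ns, Nc, _ = U[0].shape
    d, ns = 2, 2
    blk = ns * Nc
    site = lambda t, x: (t % Nt) * Ns + (x % Ns)
    D = np.zeros((Nt * Ns * blk,) * 2, dtype=complex)
    I = np.eye(ns)
    for t in range(Nt):
        for x in range(Ns):
            i = site(t, x)
            D[i*blk:(i+1)*blk, i*blk:(i+1)*blk] += (m + d) * np.eye(blk)
            for mu, (dt, dx) in enumerate([(1, 0), (0, 1)]):
                j = site(t + dt, x + dx)
                # <x+e_mu|D|x> = -1/2 (1 - gamma^mu) U_mu(x)
                D[j*blk:(j+1)*blk, i*blk:(i+1)*blk] += \
                    -0.5 * np.kron(I - g[mu], U[mu][t, x])
                # <x|D|x+e_mu> = -1/2 (1 + gamma^mu) U_mu(x)^dagger
                D[i*blk:(i+1)*blk, j*blk:(j+1)*blk] += \
                    -0.5 * np.kron(I + g[mu], U[mu][t, x].conj().T)
    return D

rng = np.random.default_rng(1)
U = [np.exp(2j*np.pi*rng.random((4, 8, 1, 1))) for _ in range(2)]
D = wilson_dirac(U, m=0.0)
G5 = np.kron(np.eye(4*8), np.diag([1.0, -1.0]))
print(np.allclose(G5 @ D.conj().T @ G5, D))   # gamma_5-hermiticity: True
\end{verbatim}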
The
\eqnl{
{\rm dim}({\mathcal D})=V\times 2^{[d/2]}\times N_c,\quad V=N_\mathrm{t}\times V_\mathrm{s},}{hilfs1}
eigenvalues of the Dirac operator in a background field $\{U_\mu(x)\}$ are denoted by $\lambda_p$. The non-real ones occur in complex-conjugate pairs since ${\mathcal D}$ is $\gamma_5$-hermitian. If $N_\mathrm{t}$ and $N_\mathrm{s}$ are both even, then $\lambda_p\to 2(d+m)-\lambda_p$ is a further symmetry of the spectrum.\\
\begin{minipage}[t]{6.5cm}
\psset{unit=1.1cm}
\begin{pspicture}(-0.3,0.5)(6.7,6.5)
\multirput(0.5,1)(1,0){6}{\psline(0,0)(0,5)}
\multirput(0,1.5)(0,1){5}{\psline(0,0)(6,0)}
\multirput(0.5,2.5)(1,0){6}{\psline[linewidth=.8mm,linecolor=red](0,0)(0,1)}
\rput(5.3,2.3){$x$}
\rput(6.2,2.5){$\tau$}
\rput(6.25,3){${\red z\,U_0(x)}$}
\end{pspicture}
\end{minipage}
\hfill
\begin{minipage}[b]{7.8cm}
Following \cite{Gattringer:2006ci,Bruckmann:2006kx} we \emph{twist} the gauge field configuration with a center element as follows: all temporal link variables $U_0(\tau,\mbf{x})$ at a \emph{fixed time} $\tau$ are multiplied by an element $z$ of the center ${\mathcal Z}$ of the gauge group. The twisted configuration is denoted by $\{^zU\}$. The Wilson loops ${\mathcal W}_{{\mathcal C}}$ for all \emph{contractable} loops ${\mathcal C}$ are invariant under this twisting, whereas the Polyakov loops ${\mathcal P}_\mbf{x}$ pick up the center element,
\end{minipage}
\eqnl{
{\mathcal W}_{{\mathcal C}}(^zU)={\mathcal W}_{{\mathcal C}}(U)\mtxt{and} {\mathcal P}_\mbf{x}(^zU)=z{\mathcal P}_\mbf{x} (U).}{change}
The Dirac eigenvalues for the twisted configuration are denoted by ${^{z\!}\lambda}_p$. The remarkable and simple identity in \cite{Gattringer:2006ci,Bruckmann:2006kx} relates the traced Polyakov loop $L$ to a particular spectral sum,
\eqnl{L= \frac{1}{\kappa}\sum_{k=1}^{\vert{\mathcal Z}\vert} \bar z_k\sum_{p=1}^{{\rm dim}({\mathcal D})} \left({^{z_k\!}\lambda}_p\right)^{N_t},\qquad \kappa=(-1)^{N_\mathrm{t}}2^{[d/2]-1} V\vert{\mathcal Z}\vert.}{gattr1}
The first sum extends over the elements $z_1,z_2,\dots$ of the center ${\mathcal Z}$, which contains the group identity $e$, for which $^{e}\lambda_p=\lambda_p$. The second sum contains the $N_\mathrm{t}$'th power of all eigenvalues of the Dirac operator ${^{z_k}\mathcal D}$ with twisted gauge fields $\{^{z_k}U\}$. It is just the trace of $({^{z_k}\mathcal D})^{N_\mathrm{t}}$, such that
\eqnl{
L= \frac{1}{\kappa}\sum_k \bar z_k \,{\rm tr}\left({^{z_k}\mathcal D}\right)^{N_\mathrm{t}} \equiv \Sigma.}{gattr3}
We stress that the formula \refs{gattr3} holds whenever the gauge group admits a non-trivial center. In \cite{Gattringer:2006ci} it was proved for $SU(N_\mathrm{c})$ with center $\mbox{\af Z}(N_\mathrm{c})$ and $\kappa =\frac{1}{2}(-)^{N_\mathrm{t}}\hbox{dim}{\mathcal D}$. In \cite{Bruckmann:2006kx} the Dirac operator for staggered fermions and gauge group $SU(3)$ was investigated and a formula similar to \refs{gattr3} was derived. Note that \refs{gattr3} is not applicable to the gauge groups $G_2,F_4$ and $E_8$ with trivial centers. For completeness we sketch the proof given in \cite{Gattringer:2006ci}, slightly generalized to incorporate all gauge groups with non-trivial centers. The Wilson-Dirac operator contains hopping terms between nearest neighbors on the lattice. A hop from site $x$ to its neighboring site $x\pm e_\mu$ is accompanied by the factor $-\frac{1}{2}(1\mp\gamma^\mu)U_{\pm\mu}(x)$ and staying at $x$ is accompanied by the factor $m+d$.
Taking the $\ell$-th power of ${\mathcal D}$, the single hops combine into chains of $\ell$ or fewer hops on the lattice. In particular, the trace ${\rm tr}\, {\mathcal D}^\ell$ is described by loops with \emph{at most} $\ell$ hops. Each loop ${\mathcal C}$ contributes a term proportional to the Wilson loop ${\mathcal W}_{\mathcal C}$. On an asymmetric lattice with $N_\mathrm{t}<N_\mathrm{s}$ all loops with length $<N_\mathrm{t}$ are \emph{contractable}, and since the corresponding Wilson loops ${\mathcal W}_{\mathcal C}$ do not change under twisting, one concludes
\eqnl{
{\rm tr}\, {^{z}\mathcal D}^\ell={\rm tr}\, {\mathcal D}^\ell\mtxt{for}\ell<N_\mathrm{t}.}{id1}
For any matrix group with non-trivial ${\mathcal Z}$ the center elements sum to zero, $\sum z_k=0$, such that
\eqnl{
\sum_k \bar z_k {\rm tr}\big({^{z_k}\mathcal D}^\ell\big)={\rm tr}\big({\mathcal D}^\ell\big)\sum_k \bar z_k =0\mtxt{for}\ell<N_\mathrm{t}.}{id3}
For $\ell=N_\mathrm{t}$ only the Polyakov loops winding once around the periodic time direction are not contractable. Under a twist $\{U\}\to \{^zU\}$ they are multiplied by $z$, see \refs{change}. With $\sum_k \bar z_k z_k=\vert{\mathcal Z}\vert$ we end up with the result \refs{gattr3}, which generalizes Gattringer's formula to arbitrary gauge groups with non-trivial center. What happens for $\ell>N_\mathrm{t}$ in \refs{id3} will be discussed below. In \cite{Bruckmann:2006kx} the average shift of the eigenvalues under a twist of the configurations has been calculated. It was observed that above $T_c$ the shift is greater than below $T_c$, and that the eigenvalues in the infrared are shifted more than those in the ultraviolet. But the low-lying eigenvalues are relatively suppressed in the sum \refs{gattr1}, such that the main contribution comes from large eigenvalues. Indeed, if one considers the \emph{partial sums}
\eqnl{
\Sigma_n= \frac{1}{\kappa}\sum_k \bar z_k\sum_{p=1}^{n}{^{z_k\!}\lambda}^{N_t}_p, \quad n\leq \hbox{dim}({\mathcal D}),}{gattr5}
where the eigenvalues are ordered according to their absolute values, then on a $4^3\times 3$-lattice $70\%$ of all eigenvalues must be included in \refs{gattr5} to obtain a reasonable approximation to the traced Polyakov loop \cite{Bruckmann:2006kx}. Actually, if one includes fewer eigenvalues, then the partial sums have a phase shift of $\pi$ relative to the traced Polyakov loop. For large $N_\mathrm{t}$ the contribution from the ultraviolet part of the spectrum dominates the sum \refs{gattr1}. Thus it is difficult to see how the nice lattice result \refs{gattr3} could be of any relevance for continuum physics. The paper is organized as follows: In the next section we introduce flat connections with zero curvature but non-trivial Polyakov loops. The corresponding eigenvalues of the Wilson-Dirac operator are determined, and spectral sums with support in the infrared of the spectrum are defined and computed. The results are useful since they are in qualitative agreement with the corresponding results of Monte-Carlo simulations. In section 3 we recall the construction of the real order parameter $L^{\rm rot}$ related to the Polyakov loop \cite{Wozar:2006fi}. Its Monte-Carlo averages are compared with the averages of the partial sums \refs{gattr5}. Our results for Wilson-Dirac fermions are in qualitative agreement with the corresponding results for staggered fermions in \cite{Bruckmann:2006kx}. In section 4 we discuss spectral sums for inverse powers of the eigenvalues.
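The bookkeeping behind the partial sums \refs{gattr5} is easily sketched in Python (the spectra below are random placeholders standing in for the eigenvalues of the twisted operators ${^{z_k}\mathcal D}$, which in practice come from an exact diagonalization):
\begin{verbatim}
import numpy as np

def partial_sums(eigs, z, Nt, kappa):
    """Cumulative sums Sigma_n, eigenvalues ordered by modulus.
    eigs[k]: spectrum of the Dirac operator twisted by z[k]."""
    total = 0
    for lam, z_k in zip(eigs, z):
        lam = lam[np.argsort(np.abs(lam))]   # infrared first
        total = total + np.conj(z_k) * np.cumsum(lam**Nt)
    return total / kappa

rng = np.random.default_rng(0)
dim, Nt, V = 2304, 3, 192                  # numbers of the 4^3 x 3 lattice
z = np.exp(2j * np.pi * np.arange(3) / 3)  # center of SU(3)
eigs = [rng.normal(4, 1, dim) + 1j*rng.normal(0, 1, dim) for _ in z]
kappa = (-1)**Nt * 2 * V * 3               # (-1)^Nt 2^{[d/2]-1} V |Z|
# for genuine spectra the last entry Sigma_{dim D} equals L
print(partial_sums(eigs, z, Nt, kappa)[-1])
\end{verbatim}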
Their Monte-Carlo averages are proportional to $\langle L^{\rm rot}\rangle$, such that they are useful order parameters for the center symmetry. We show that these order parameters are supported by the eigenvalues from the infrared end of the spectrum. Section 5 contains similar results for exponential spectral sums. Again we find a linear or quadratic relation between their Monte-Carlo averages and $\langle L^{\rm rot}\rangle$. It suffices to include only a small number of infrared eigenvalues in these sums to obtain efficient order parameters. We hope that the simple relations between the infrared-supported spectral sums considered here and the expectation value $\langle L^{\rm rot}\rangle$ are of use in the continuum limit.
\section{Flat connections}\label{flat}
We checked our numerical algorithms against the analytical results for curvature-free gauge field configurations with non-trivial Polyakov loop. For these simple configurations the spatial link variables are trivial and the temporal link variables are space-independent,
\eqnl{
U_i(x)=\mathbbm{1}\mtxt{and} U_0(x)=U_0(\tau),\quad x=(\tau,\mbf{x}).}{fc1}
The Wilson loops ${\mathcal W}_{\mathcal C}$ of all contractable ${\mathcal C}$ are trivial, which shows that these configurations are curvature-free. We call them \emph{flat connections}. With the gauge transformation
\eqnl{
\Omega(\tau)={\mathcal P}^{-1}_\tau,\quad {\mathcal P}_\tau=U_0(\tau-1)U_0(\tau-2)\cdots U_0(2)U_0(1) }{fc3}
all link variables of a flat connection are transformed into the group identity. But the transformed fermion fields are no longer periodic in time,
\eqnl{
\psi(\tau+N_\mathrm{t},\mbf{x})={\mathcal P}^{-1}\psi(\tau,\mbf{x}),\mtxt{where} {\mathcal P}={\mathcal P}_{N_\mathrm{t}+1}}{fc5}
is just the constant Polyakov loop. Since the transformed Dirac operator is the free operator, its eigenfunctions are plane waves,
\eqnl{
\psi(x)=e^{ipx}\psi_0.}{fc7}
These are eigenmodes of the free Wilson-Dirac operator with eigenvalues $\{\lambda_p\}=\{\lambda_p^\pm\}$, where
\eqnl{
\lambda^\pm_p=m\pm i\vert \mathaccent"7017{p}\vert+\frac{r \hat{p}^2}{2},\mtxt{with} \hat p_\mu=2\sin \frac{p_\mu}{2},\quad \mathaccent"7017{p}_\mu=\sin p_\mu.}{fc9}
Here $r$ denotes the Wilson parameter, equal to one in \refs{DWOperator}. The eigenmodes are periodic in the space directions provided the spatial momenta are from
\eqnl{
p_i\in \frac{2\pi}{N_s}n_i\mtxt{with} n_i\in \mbox{\af Z}_{N_\mathrm{s}}.}{fc11}
Denoting the eigenvalues of the Polyakov loop by $e^{2\pi i\varphi_1},\dots ,e^{2\pi i\varphi_{N_c}}$, the periodicity conditions \refs{fc5} imply
\eqnl{
p_0=\frac{2\pi}{N_t}(n_0-\varphi_j),\quad n_0\in \mbox{\af Z}_{N_\mathrm{t}},\quad j=1,\dots,N_\mathrm{c}.}{fc13}
Thus the eigenvalues of the Wilson-Dirac operator with a flat connection are given by \refs{fc9}, with quantized momenta \refs{fc11} and \refs{fc13}. For each momentum $p_\mu$ there exist $2^{[d/2]-1}$ eigenvalues $\lambda_p^+$ and $2^{[d/2]-1}$ complex-conjugate eigenvalues $\lambda_p^-$.
Next we twist the flat connections with a center element; for $SU(N_c)$ these are
\eqnl{
z_k=e^{2\pi i k/N_c}\mathbbm{1},\quad 1\leq k\leq N_c.}{fc15}
The spatial components of the momenta are still given by \refs{fc11}, but their temporal component is shifted by an amount proportional to $k$,
\eqnl{
p_0(z_k)\in\left\{\frac{2\pi}{N_\mathrm{t}}\left(n_0-\varphi_j -k/N_\mathrm{c}\right)\right\},\quad 1\leq j,k\leq N_\mathrm{c}.}{fc17}
In the following we consider flat $SU(3)$-connections with Polyakov loops
\eqnl{
{\mathcal P}(\theta)=\pmatrix{e^{2\pi i\theta}&0&0\cr 0&1&0\cr 0&0&e^{-2\pi i\theta}}\Longrightarrow L=1+2\cos(2\pi\theta).}{fc19}
For these fields the temporal component of the momentum takes values from
\eqnl{
p_0(z_k)\in\left\{\frac{2\pi}{N_\mathrm{t}}\left(n_0-j \theta -k/3\right)\right\},\quad j\in\{-1,0,1\},\quad k\in\{0,1,2\}.}{fc21}
We have calculated the spectral sums
\eqnl{
\Sigma^{(\ell)}= \frac{1}{\kappa}\sum_k \bar z_k\sum_{p=1}^{{\rm dim} {\mathcal D}} \left({^{z_k\!}\lambda}_p\right)^{\ell} =\frac{1}{\kappa}\sum_k \bar z_k \,{\rm tr}\left( {^{z_k}\mathcal D}\right)^\ell}{fc23}
for vanishing mass. For flat connections the sums with powers $\ell$ between $N_\mathrm{t}$ and $2N_\mathrm{t}$ are strictly proportional to the traced Polyakov loop, $\Sigma^{(\ell)}=C_\ell L(\theta)$. Gattringer's result implies $C_{N_\mathrm{t}}=1$. The next two coefficients are related to the number of loops of length $N_\mathrm{t}+1$ and $N_\mathrm{t}+2$ winding once around the periodic time direction. One finds
\eqnl{
C_{N_\mathrm{t}+1}=d(N_\mathrm{t}+1)\mtxt{and} C_{N_\mathrm{t}+2}=\frac{d^2}{2}(N_\mathrm{t}+2)(N_\mathrm{t}+1)+ \frac{d-1}{4}\big(N_\mathrm{t}(N_\mathrm{t}+1)-2\big).}{fc23a}
More generally, the relation $\sum \bar z_k z_k^\ell=0$ for $\ell \notin 3\mbox{\af Z}+1$ implies that the spectral sums \refs{fc23} are linear combinations of the traces ${\rm tr} \,{\mathcal P}^{3n+1}(\theta)$ for sufficiently small values of $\vert 3n+1\vert $,
\eqnl{
\Sigma^{(\ell)}=\sum_{n:\,\vert 3n+1\vert N_\mathrm{t}\leq \ell} C^{(n)}_\ell\, {\rm tr}\, {\mathcal P}^{3n+1}(\theta).}{fc23b}
In Fig.~\ref{fig:flatconnection1} we depict the sums $\Sigma^{(\ell)}$ for the flat connections on a $4\times 12^3$ lattice, divided by the traced Polyakov loop and normalized to one at $\theta=0$, for the powers $\ell=N_\mathrm{t},\, 3N_\mathrm{t}$ and $3.6N_\mathrm{t}$. Note that the power $\ell$ in \refs{fc23} need not be an integer.
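The identity $\Sigma^{(N_\mathrm{t})}=L$ can be verified directly from the explicit spectrum \refs{fc9}, \refs{fc21}; a minimal Python sketch for the flat $SU(3)$ connections \refs{fc19} with $m=0$ and $r=1$ (the lattice is chosen small for speed):
\begin{verbatim}
import numpy as np
from itertools import product

Nt, Ns, theta = 4, 6, 0.3
kappa = (-1)**Nt * 2 * (Nt * Ns**3) * 3   # (-1)^Nt 2^{[d/2]-1} V |Z|

def sigma(ell):
    total = 0.0j
    for k in range(3):                    # twists z_k = exp(2 pi i k/3)
        s = 0.0j
        for j in (-1, 0, 1):              # Polyakov loop phases
            for n0 in range(Nt):
                p0 = 2*np.pi/Nt * (n0 - j*theta - k/3.0)
                for n in product(range(Ns), repeat=3):
                    p = np.array([p0, *(2*np.pi*ni/Ns for ni in n)])
                    re = 2*np.sum(np.sin(p/2)**2)   # p_hat^2 / 2
                    im = np.sqrt(np.sum(np.sin(p)**2))
                    # lambda^+ and lambda^-, each with multiplicity 2
                    s += 2*((re + 1j*im)**ell + (re - 1j*im)**ell)
        total += np.exp(-2j*np.pi*k/3) * s
    return total / kappa

print(sigma(Nt).real, 1 + 2*np.cos(2*np.pi*theta))   # both ~0.382
\end{verbatim}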
\begin{figure}[ht]
\psset{xunit=1mm,yunit=8mm,arrowsize=1.4mm}
\begin{pspicture}(-5,94.5)(105,102.5)
\psline(0,98)(0,102)(100,102)(100,98)
\psline[linestyle=dotted](0,98)(0,96)
\psline[linestyle=dotted](100,98)(100,96)
\psline(0,95)(100,95)
\small
\rput(95,95.5){$\theta$}
\psline(100,95)(100,96)\rput(100,94.5){$1$}
\psline(0,95)(0,96)\rput(0,94.5){$0$}
\psline[linecolor=wpurple](-1,100)(100,100)
\rput(-5.5,100){$1.00$}
\psline(-1,95)(0,95)\rput(-5.5,95){$0.95$}
\rput[l](2.5,101){$\propto \Sigma^{(\ell)}/L$}
\rput(50,100.4){\color{wpurple} $\ell=N_\mathrm{t}$}
\rput(50,98.6){\red $\ell=3\cdot N_\mathrm{t}$}
\rput(50,96.7){\blue $\ell=3.6\cdot N_\mathrm{t}$}
\pscurve[linecolor=blue]
(0,100)(5,99.937)(10,99.755)(15,99.472)(20,99.115)(25,98.720)(30,98.324)(35,97.967)
(40,97.684)(45,97.502)(50,97.439)(55,97.502)(60,97.684)(65,97.967)(70,98.324)
(75,98.720)(80,99.115)(85,99.472)(90,99.755)(95,99.937)(100,100)
\pscurve[linecolor=red]
(0,100)(5,99.982)(10,99.929)(15,99.847)(20,99.744)(25,99.630)(30,99.516)(35,99.413)
(40,99.331)(45,99.278)(50,99.260)(55,99.278)(60,99.331)(65,99.413)(70,99.516)
(75,99.630)(80,99.744)(85,99.847)(90,99.929)(95,99.982)(100,100)
\end{pspicture}
\caption{\label{fig:flatconnection1} Spectral sums $\Sigma^{(\ell)}$ divided by the traced Polyakov loop as functions of $\theta$ for different values of $\ell$.}
\end{figure}
We have argued that the sum $\Sigma^{(\ell)}$ must be a linear combination of ${\rm tr}\,{\mathcal P}$ and ${\rm tr}\,{\mathcal P}^{-2}$ for $\ell$ between $2N_\mathrm{t}$ and $4N_\mathrm{t}$. Actually, up to $\ell\approx 3N_\mathrm{t}$ the sum is well approximated by a multiple of ${\rm tr}\, {\mathcal P}$. This is explained by the fact that for a given $\ell$ there are many more fat loops winding once around the periodic time direction, contributing with ${\rm tr}\,{\mathcal P}$, than there are thin long loops winding several times around, contributing with ${\rm tr}\, {\mathcal P}^{-2},{\rm tr}\,{\mathcal P}^4,\,{\rm tr}\,{\mathcal P}^{-5},\dots$. We shall see that similar results apply to the expectation values of $\Sigma^{(\ell)}$ in Monte-Carlo generated ensembles of gauge fields. Since the eigenvalues in the infrared are most affected by the twisting \cite{Bruckmann:2006kx}, we could as well choose a spectral sum for which the ultraviolet end of the spectrum is suppressed. Since $\Sigma^{(\ell)}$ with $\ell\leq 3N_\mathrm{t}$ is almost proportional to the traced Polyakov loop, there exist many such spectral sums. They define order parameters for the center symmetry and may possess a well-defined continuum limit. For example, the exponential sums
\eqnl{
{\mathcal E}^{(\ell)}= \frac{1}{\kappa}\sum_k \bar z_k\sum_{p=1}^{{\rm dim} {\mathcal D}} e^{-\ell\cdot \lambda_p\left(^{z_k} U\right)},}{fc25}
are all proportional to the traced Polyakov loop for a factor $\ell$ in the exponent between $0.1$ and $2$. In Fig.~\ref{fig:flatconnection3} we display exponential sums for the flat connections on a $4\times 12^3$ lattice and various $\ell$ between $0.1$ and $2$. Again we divide by the traced Polyakov loop $L(\theta)$ and normalize the result to unity at $\theta=0$.
\begin{figure}[ht]
\psset{xunit=1mm,yunit=4.5mm}
\begin{pspicture}(-5,88)(105,104)
\psframe(0,90)(100,104)
\small
\rput(100,89){$1$}
\rput(0,89){$0$}
\rput(96,90.8){$\theta$}
\rput(-5,100){$1.0$}
\psline(-1,90)(0,90)\rput(-5,90){$0.9$}
\psline[linecolor=wpurple](-1,100)(100,100)
\rput[l](2.2,102.8){$\propto {\mathcal E}^{(\ell)}/L$}
\rput(50,99){\color{wpurple} $\ell=1$}
\rput(50,100.8){\blue $\ell=0.1$}
\rput(50,92){\red $\ell=2$}
\pscurve[linecolor=wpurple]
(0,100)(5,99.993)(10,99.974)(15,99.944)(20,99.906)(25,99.864)(30,99.822)(35,99.784)
(40,99.754)(45,99.734)(50,99.728)(55,99.734)(60,99.754)(65,99.784)(70,99.822)
(75,99.864)(80,99.906)(85,99.944)(90,99.974)(95,99.993)(100,100)
\psline[linecolor=blue](0,100)(100,100)
\pscurve[linecolor=red]
(0,100)(5,99.833)(10,99.349)(15,98.595)(20,97.645)(25,96.591)(30,95.538)(35,94.588)
(40,93.834)(45,93.350)(50,93.183)(55,93.350)(60,93.834)(65,94.588)(70,95.538)
(75,96.591)(80,97.645)(85,98.595)(90,99.349)(95,99.833)(100,100)
\end{pspicture}
\caption{\label{fig:flatconnection3} Spectral sums ${\mathcal E}^{(\ell)}$ divided by the traced Polyakov loop as functions of $\theta$ for different values of $\ell$.}
\end{figure}
Later, when we use Monte-Carlo generated configurations to calculate the expectation values of $L$ and ${\mathcal E}^{(\ell)}$, we shall choose $\ell=1$. For this choice the mean exponential sum is proportional to the mean $L$; we shall argue below why this is the case.
\section{Distribution of Dirac eigenvalues for SU(3)}\label{distribution}
We have undertaken extended numerical studies of the eigenvalue distributions and various spectral sums for the Wilson-Dirac operator in $SU(3)$ lattice gauge theory. First we summarize our results on the partial traces
\eqnl{
\Sigma_n^{(\ell)}= \frac{1}{\kappa}\sum_k \bar z_k\sum_{p=1}^{n}{^{z_k\!}\lambda}^{\ell}_p\,, \qquad n\leq \hbox{dim}({\mathcal D}),\quad \vert\lambda_p\vert\leq \vert\lambda_{p+1}\vert.}{distr1}
For $n=\hbox{dim}({\mathcal D})$ one sums over all eigenvalues of the Dirac operator and obtains the traces $\Sigma^{(\ell)}$ considered in \refs{fc23}. For $\ell=N_\mathrm{t}$ one finds the partial sums $\Sigma_n$ in \refs{gattr5}. These have been extensively studied for staggered fermions in \cite{Bruckmann:2006kx}. According to the result \refs{gattr1}, the object $\Sigma_{{\rm dim}{\mathcal D}}$ is just the traced Polyakov loop. We did simulations on lattices with sizes up to $8^3\times 4$. Here we report on the results obtained on a $4^3\times 3$ lattice with critical coupling $\beta_{\rm crit}\approx 5.49$, determined with the histogram method based on $40\,000$ configurations. The dependence of the two order parameters $\vert L\vert$ and $L^{\rm rot}$ (see below) on the Wilson coupling $\beta$ has been calculated for $35$ different $\beta$ and is depicted in Fig.~\ref{fig:loopsOverBeta}.
\begin{figure}[ht]
\includegraphics{loopsOverBeta}
\caption{\label{fig:loopsOverBeta} Dependence of the mean modulus of $L$ and of the center-transformed and rotated $L$ (see text) on the Wilson coupling $\beta$ on a $4^3\times 3$ lattice. The critical coupling is $\beta_{\rm crit}=5.49$.}
\end{figure}
For each $\beta$, between $4\,000$ and $20\,000$ independent configurations have been generated and analyzed. For our relatively small lattices the two order parameters change gradually from the symmetric confined to the broken deconfined phase. Table \ref{table:loopsOverBeta} contains the order parameters for $11$ Wilson couplings.
\begin{table}[ht] \begin{tabular}{|c|ccccc|cccccccc|}\hline $\beta$&5.200& 5.330 & 5.440& 5.475&&& 5.505& 5.530& 5.560& 5.615 & 5.725& 5.885&6.000\\ \hline $\langle\vert L\vert\rangle$&0.1654& 0.1975 & 0.3050&0.3980&&& 0.5049 & 0.5939 &0.6865 &0.7832 & 0.9007 &1.0013&1.0631 \\ $\langle L^{\rm rot}\rangle$ &0.0318& 0.0615 & 0.1859 &0.3013&&& 0.4296 &0.5363 &0.6452 & 0.7513 &0.8770 &0.9797& 1.0444 \\ \hline \end{tabular} \caption{\label{table:loopsOverBeta} Dependence of the order parameters $\vert L\vert $ and $L^{\rm rot}$ on the Wilson coupling $\beta$. The empty column marks the position of the critical coupling $\beta_{\rm crit}\approx 5.49$.} \end{table} For every independent configuration we calculated the dim$({\mathcal D})=2304$ eigenvalues of the Wilson-Dirac operator. Then we averaged the absolute values of the partial traces $\Sigma_n$ for every $\beta$ in Table \ref{table:loopsOverBeta}. In Fig.~\ref{fig:curvesAbs43} the ratios \eqnl{ R_n= \frac{\langle\vert \Sigma_n\vert\rangle}{\langle \vert L\vert\rangle}}{mn1} are plotted for these $\beta$ as functions of the percentage of eigenvalues considered in the partial traces. \begin{figure}[ht] \includegraphics{curvesAbs4_3} \caption{\label{fig:curvesAbs43} Modulus of the eigenvalue sums starting from the lowest eigenmodes on a $4^3\!\times\!3$-lattice near the phase transition. The distinct graphs are labelled with the Wilson coupling $\beta$.} \end{figure}\\ To retain information on the phase of the partial traces and of the Polyakov loop we used the invariant order parameter constructed in \cite{Wozar:2006fi}. Recall that the domain for the traced Polyakov loop is the triangle shown in Fig.~\ref{fig:domainMapping}. The three elements in the center of $SU(3)$ correspond to the corners of the triangle. \begin{figure}[ht] \includegraphics{domainMapping} \caption{\label{fig:domainMapping} Fundamental domain ${\mathcal F}$ of $L$ obtained by identifying $\mbox{\af Z}(3)$ copies according to the depicted arrows.} \end{figure} We divide the domain into the six distinct parts in Fig.~\ref{fig:domainMapping}. The light-shaded region represents the preferred location of the traced Polyakov loop $L$ in the deconfined (ferromagnetic) phase, whereas the dark-shaded region corresponds to the hypothetical anti-center ferromagnetic phase \cite{Wipf:2006wj}. In the deconfined phase $L$ points in the direction of a center element whereas it points in the opposite direction in the anti-center phase. To factor out the center symmetry we identify the regions as indicated by the arrows in Fig.~\ref{fig:domainMapping}. This way we end up with a \emph{fundamental domain} ${\mathcal F}$ for the center symmetry along the real axis. Every $L$ is mapped into ${\mathcal F}$ by a center transformation. To finally obtain a real observable we rotate the transformed $L$ inside ${\mathcal F}$ onto the real axis. The result is the variable $L^{\rm rot}$ whose sign clearly distinguishes between the different phases: $L^{\rm rot}$ is negative in the anti-center phase, positive in the deconfined phase and zero in the confined symmetric phase. The object $L^{\rm rot}$ is a useful order parameter for the confinement-deconfinement phase transition in gluodynamics \cite{Wozar:2006fi}. We performed the same construction with the partial sums $\Sigma_{n}$ and calculated the ratios for the corresponding Monte-Carlo averages \eqnl{ R^{\rm rot}_n=\frac{\langle \Sigma^{\rm rot}_n\rangle}{\langle L^{\rm rot}\rangle} }{mn3} for every $\beta$ in Table \ref{table:loopsOverBeta} as functions of the percentage of eigenvalues considered in $\Sigma_n$.
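For concreteness, the two-step construction of $L^{\rm rot}$ can be summarized in a few lines of code. The following Python sketch is only meant to illustrate the procedure (center transformation into ${\mathcal F}$, then rotation onto the real axis); the precise prescription, in particular the location of the sector boundaries, is the one of \cite{Wozar:2006fi}, and the boundary at $\vert\arg L\vert=\pi/6$ used below is our simplifying assumption.
\begin{verbatim}
import numpy as np

def L_rot(L):
    """Rotated Polyakov loop (illustrative sketch).

    Step 1: a center transformation multiplies L by the Z(3) phase
    that brings arg(L) into the wedge |arg| <= pi/3 around the real
    axis (the fundamental domain F). Step 2: the result is rotated
    onto the real axis, the sign recording whether L lay in the
    light-shaded (center) or dark-shaded (anti-center) sub-sector.
    """
    phases = np.exp(2j * np.pi * np.arange(3) / 3)
    Lc = min((L * z for z in phases), key=lambda w: abs(np.angle(w)))
    # assumed sub-sector boundary at |arg| = pi/6 inside the wedge
    return np.abs(L) if abs(np.angle(Lc)) < np.pi / 6 else -np.abs(L)
\end{verbatim}
Averaged over Monte-Carlo configurations, this observable is positive in the deconfined phase, negative in a hypothetical anti-center phase and zero in the confined phase, as stated above.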
\begin{figure}[ht] \includegraphics{curvesRot4_3} \caption{\label{fig:curvesRot43} Eigenvalue sums rotated to the fundamental domain starting from the lowest eigenmodes on a $4^3\!\times\!3$-lattice near the phase transition. The distinct graphs are labelled with the Wilson coupling.} \end{figure} In Figs.~\ref{fig:curvesAbs43} and \ref{fig:curvesRot43} we observe a universal behavior in the deconfined phase for moduli of the traced Polyakov loop larger than approximately $0.4$. If we include less than $90\% $ of the eigenvalues, then the partial sums $\Sigma_n$ have a phase shift of $\pi$ in comparison with $\Sigma=\Sigma_{\rm dim {\mathcal D}}$. The last dip in Fig.~\ref{fig:curvesAbs43} is due to this phase shift and indicates the transition through zero that occurs when $\Sigma_n$ changes sign. The same shift and dip have been reported for staggered fermions on a $6^3\times 4$ lattice in \cite{Bruckmann:2006kx}. For staggered fermions $\Sigma_n$ and $\Sigma$ are in phase for $n\geq 0.65\cdot \hbox{dim}({\mathcal D})$. For Wilson-Dirac fermions this happens only for $n\geq 0.9\cdot\hbox{dim}({\mathcal D})$. \paragraph{Finite spatial size scaling of partial sums:}\label{Finite} We fixed the coupling at $\beta=5.5$ and simulated in the deconfined phase on $N_\mathrm{s}^3\times 2$-lattices with varying spatial sizes $N_\mathrm{s}\in\{3,4,5\}$. For this coupling the systems are deep in the broken phase and we can study finite size effects on the spectral sums. \begin{figure}[ht] \includegraphics[scale=0.95]{scalingCurves} \caption{\label{fig:scalingCurves} Rotated eigenvalue sums starting from the lowest $\lambda_p$ on a $N_\mathrm{s}^3\!\times\!2$-lattice in the broken phase.} \end{figure} The results for $\Sigma_n^{\rm rot}$ are depicted in Fig.~\ref{fig:scalingCurves}. We observe that $\Sigma_n^{\rm rot}$ is, to high precision, independent of the spatial volume. The curves for $N_\mathrm{s}=4$ and $5$ are not distinguishable in the plot, and, as expected, $\sum_k \bar z_k {\rm tr}( {^{z_k}\mathcal D}^{N_\mathrm{t}})$ scales with the spatial volume of the system. An increase of $N_\mathrm{s}$ affects the spectra for the untwisted and twisted configurations alike -- they only become denser with increasing spatial volume. On the other hand, comparing Fig.~\ref{fig:curvesRot43} and Fig.~\ref{fig:scalingCurves}, it is evident that the graph of $\Sigma_n^{\rm rot}$ depends very much on the temporal extent of the lattice. \paragraph{Partial traces $\Sigma_n^{(\ell)}$:} The truncated eigenvalue sums \refs{distr1} with different powers $\ell$ of the eigenvalues show a universal behavior that is nearly independent of the lattice size. The main reason for this universality, and in particular for the sign of $\Sigma_n^{(\ell)}$, is found in the response of the low-lying eigenvalues to twisting the gauge field. It has been observed that for non-periodic boundary conditions (which are gauge-equivalent to twisting the gauge field) the low-lying eigenvalues are on average further away from the origin than for periodic boundary conditions (or untwisted gauge fields) \cite{Chandrasekharan:1995gt,Chandrasekharan:1995nf,Stephanov:1996he,Gattringer:2002dv}. This statement is particularly transparent for massless staggered fermions, whose eigenvalues lie on the imaginary axis.
For example, the partial traces \eqnl{ \Sigma_n^{(1)} \propto \sum_{p=1}^n\lambda_p+\bar z \sum_{p=1}^n {^{z\!}\lambda}_p+z \sum_{p=1}^n {^{\bar z}\lambda}_p,\qquad \vert \lambda_p\vert\leq \vert \lambda_{p+1}\vert,}{mn5} with $n\ll \hbox{dim}\,{\mathcal D}$ and the traced Polyakov loop have opposite phases. This is explained as follows: all sums in \refs{mn5} are positive and on average the last two sums are equal. With $z+\bar z=-1$ the last two terms add up to $-\sum {^{z\!}\lambda}_p$. Since the low-lying eigenvalues for the twisted field are further away from the origin than for the untwisted field, the spectral sums \refs{mn5} are negative for small $n$. \section{Traces of propagators}\label{traces} To suppress the contributions of large eigenvalues we introduce spectral sums $\Sigma^{(\ell)}$ with \emph{negative exponents} $\ell$. Similar to the Polyakov loop these sums serve as order parameters for the center symmetry. In particular the spectral sums \eqnl{ \Sigma^{(-1)}=\frac{1}{\kappa}\sum_k {\rm tr} \left(\frac{\bar z_k}{{^{z_k}\mathcal D}}\right)\mtxt{and} \Sigma^{(-2)}=\frac{1}{\kappa}\sum_k {\rm tr} \left(\frac{\bar z_k}{{^{z_k}\mathcal D}^2}\right)\ }{mn7} are of interest, since they relate to the Green functions of ${\mathcal D}$ and ${\mathcal D}^2$, objects which enter the discussion of the celebrated Banks-Casher relation. In contrast to the ultraviolet-dominated sums with positive $\ell$, the sums with negative $\ell$ are dominated by the eigenvalues in the infrared. The operators $\{{^{z}\mathcal D},\,z\!\in\!{\mathcal Z}\}$ have similar spectra and we may expect that $\kappa\Sigma^{(-1)}$ has a well-behaved continuum limit. Here we consider the partial traces \eqnl{ \Sigma^{(-1)}_n= \frac{1}{\kappa}\sum_k \bar z_k\sum_{p=1}^{n}\frac{1}{{^{z_k\!}\lambda}_p}, \mtxt{and} \Sigma^{(-2)}_n= \frac{1}{\kappa}\sum_k \bar z_k\sum_{p=1}^{n}\frac{1}{({^{z_k\!}\lambda}_p)^2},\quad \vert\lambda_p\vert\leq \vert\lambda_{p+1}\vert.}{mn8} Since the Wilson-Dirac operator with flat connection possesses zero-modes we added a small mass $m=0.1$ to the denominators in \refs{mn8}.
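In practice the partial traces \refs{mn8} are straightforward to accumulate once the twisted spectra are known. The following Python sketch shows one possible implementation (the function name, the array layout and the normalisation constant \texttt{kappa} are ours and merely illustrative):
\begin{verbatim}
import numpy as np

Z3 = np.exp(2j * np.pi * np.arange(3) / 3)   # center elements z_k

def partial_sums(eigs, kappa, ell=-1, m=0.1):
    """Partial spectral sums Sigma_n^{(ell)} -- a sketch.

    eigs : three complex arrays holding the eigenvalues of the
           Wilson-Dirac operator for the z_k-twisted gauge fields.
    The small mass m regulates the zero-modes for negative ell.
    Returns the array (Sigma_1^{(ell)}, ..., Sigma_dimD^{(ell)}).
    """
    total = 0.0
    for z_k, lam in zip(Z3, eigs):
        lam = lam[np.argsort(np.abs(lam))]   # |lambda_p| <= |lambda_{p+1}|
        total = total + np.conj(z_k) * np.cumsum((lam + m) ** ell)
    return total / kappa
\end{verbatim}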
\begin{figure}[ht] \psset{xunit=4mm,yunit=2mm} \begin{pspicture}(-2,2)(26,-38) \rput(4,-2){$1000\cdot \Sigma^{(-1)}_n$} \rput(12,-39.5){$\%$ of lowest eigenvalues} \small \rput(1,-36.5){0}\rput(23,-36.5){100} \psline(12,-35)(12,-34)\rput(12,-36.5){50} \psline(6.5,-35)(6.5,-34)\rput(6.5,-36.5){25} \psline(17.5,-35)(17.5,-34)\rput(17.5,-36.5){75} \rput[r](.5,0){$0$} \psline(1,-10)(1.5,-10)\rput[r](.5,-10){$-10$} \psline(1,-20)(1.5,-20)\rput[r](.5,-20){$-20$} \psline(1,-30)(1.5,-30)\rput[r](.5,-30){$-30$} \rput(20,-6.3){\color{wbl} $L=.25$} \rput(20,-16.5){\red $L=.50$} \rput(20,-23.7){\color{wpurple} $L=.75$} \rput(20,-28.7){$L=1.0$} \psframe(1,0)(23,-35) \pscurve[linecolor=wbl](1,-6.346)(2,-7.016)(3,-7.047)(4,-7.273)(5,-7.329)(6,-7.343)(7,-7.070) (8,-7.777)(9,-7.525)(10,-7.553)(11,-7.375)(12,-7.319)(13,-7.670)(14,-7.312) (15,-7.433)(16,-7.396)(17,-7.420)(18,-7.493)(19,-7.280)(20,-7.364)(21,-7.365) (22,-7.307)(23,-7.330) \pscurve[linecolor=red](1,-15.667)(2,-17.020)(3,-17.094)(4,-17.297)(5,-17.646)(6,-17.686)(7,-18.916) (8,-18.135)(9,-18.041)(10,-18.091)(11,-17.748)(12,-17.643)(13,-18.272)(14,-17.633) (15,-17.866)(16,-17.824)(17,-18.104)(18,-17.849)(19,-17.571)(20,-17.731)(21,-17.731) (22,-17.620)(23,-17.665) \pscurve[linecolor=wpurple] (1,-21.855)(2,-23.919)(3,-24.046)(4,-24.408)(5,-24.850)(6,-24.939)(7,-26.269) (8,-25.295)(9,-25.314)(10,-25.308)(11,-25.045)(12,-25.454)(13,-25.525)(14,-25.444) (15,-25.144)(16,-25.062)(17,-25.388)(18,-25.125)(19,-24.790)(20,-25.041)(21,-25.001) (22,-24.874)(23,-24.906) \pscurve (1,-25.511)(2,-28.338)(3,-29.003)(4,-28.949)(5,-29.654)(6,-30.860)(7,-31.143) (8,-30.048)(9,-30.206)(10,-30.157)(11,-29.954)(12,-30.313)(13,-30.308)(14,-30.349) (15,-29.974)(16,-29.971)(17,-30.218)(18,-29.950)(19,-29.854)(20,-29.834)(21,-29.736) (22,-29.677)(23,-29.676) \end{pspicture} \caption{\label{fig:inverseDirac}The partial spectral sums $\Sigma^{(-1)}_n$ for the inverse power and flat connections.} \end{figure} In Fig. \ref{fig:inverseDirac} the partial sums $\Sigma^{(-1)}_n$ on a $4^3\times 3$ lattice are plotted. It is seen that for flat connections the $\Sigma^{(-1)}_n$ for small $n$ are excellent indicators for the traced Polyakov loop. Thus it is tempting to propose $\Sigma^{(-1)}_n,\,\Sigma^{(-2)}_n$ with $n\ll {\rm dim}{\mathcal D}$ as order parameters for the center symmetry. To test this proposal we calculated the partial sums \refs{mn8}, transformed to the fundamental domain and rotated to the real axis, for Monte-Carlo generated configurations on a $4^3\times 3$ lattice for various values of $\beta$. The results in Fig. \ref{fig:lambdaMinusEinsZwei43} are qualitatively similar to those for the flat connections. Taking into account $10\%$ of the eigenvalues in the IR already yields the asymptotic values $\Sigma^{(-1),{\rm rot}}$ and $\Sigma^{(-2),{\rm rot}}$. \begin{figure}[ht] \includegraphics{lambdaMinusEins4_3} \includegraphics{lambdaMinusZwei4_3} \caption{\label{fig:lambdaMinusEinsZwei43} The expectation values of the partial spectral sums $\Sigma^{(-1)}_n$ and $\Sigma^{(-2)}_n$ rotated to the fundamental domain starting from the lowest eigenvalue on a $4^3\times 3$ lattice. The graphs are labelled with $\beta$.} \end{figure} To find an approximate relation between $\Sigma^{(-1)}$ and the traced Polyakov loop we applied the hopping-parameter expansion. 
To that end one expands the inverse of the Wilson-Dirac operator ${\mathcal D}=(m+d)\mathbbm{1} -V$ in powers of $V$, \eqnl{ {\mathcal D}^{-1}=\frac{1}{m+d}\sum_{j\geq 0} \frac{1}{(m+d)^j}\big[(m+d)\mathbbm{1} -{\mathcal D}\big]^j. }{mn9} Inserting this Neumann series into $\Sigma^{(-1)}$ in \refs{mn7} and keeping the leading term only yields \eqnl{ \Sigma^{(-1)}= \frac{(-1)^{N_\mathrm{t}}}{(m+d)^{N_\mathrm{t}+1}}\,\Sigma^{(1)}+\dots \stackrel{\refs{gattr3}}{\approx}\frac{(-1)^{N_\mathrm{t}}}{(m+d)^{N_\mathrm{t}+1}}L.} {mn11} To check whether the expectation values of $\Sigma^{(-1),\rm rot}$ and $L^{\rm rot}$ are indeed proportional to each other we have calculated these values for Monte-Carlo ensembles corresponding to the $11$ Wilson couplings in Table \ref{table:loopsOverBeta}. The results in Fig.~\ref{fig:propLambdaMinusEins43.eps} clearly demonstrate that there is such a linear relation. \begin{figure}[ht] \includegraphics{propLambdaMinusEins4_3.eps} \caption{\label{fig:propLambdaMinusEins43.eps} The expectation values of $\Sigma^{(-1),\rm rot}$ as functions of $\langle L^{\rm rot}\rangle$ on a $4^3\times 3$ lattice.} \end{figure} \\ A linear fit yields \eqnl{ \langle\Sigma^{(-1),\rm rot}\rangle=-0.00545\cdot \langle L^\mathrm{rot}\rangle-4.379\cdot 10^{-6} \quad ({\rm rmse} = 2.978\cdot 10^{-5}).}{mn13} For massless fermions on a $4^3\times 3$ lattice the crude approximation \refs{mn11} leads to a slope $-1/256=-0.003906$ (inserting $m=0$, $d=4$ and $N_\mathrm{t}=3$). This is not far off the slope $-0.00545$ extracted from the Monte-Carlo data. We have repeated our calculations for the spectral sum $\Sigma^{(-2),\rm rot}$. The corresponding results for the expectation values in Fig.~\ref{fig:propLambdaMinusZwei43} show again a linear relation between the expectation values of this spectral sum and the traced Polyakov loop. \begin{figure}[ht] \includegraphics{propLambdaMinusZwei4_3.eps} \caption{\label{fig:propLambdaMinusZwei43} The expectation values of $\Sigma^{(-2),\rm rot}$ as functions of $\langle L^{\rm rot}\rangle$ on a $4^3\times 3$ lattice.} \end{figure}\\ This time a linear fit yields $\langle\Sigma^{(-2),\rm rot}\rangle=-0.00582\cdot\langle L^\mathrm{rot}\rangle-8.035\cdot 10^{-5}.$ \section{Exponential spectral sums}\label{exponential} After the convincing results for sums of inverse powers of the eigenvalues we analyze the partial exponential spectral sums \begin{eqnarray} {\mathcal E}_n=\frac{1}{\kappa}\sum_k \bar z_k\sum_{p=1}^{n} e^{-{^{z_k\!}\lambda}_p}&\Longrightarrow& {\mathcal E}\equiv {\mathcal E}_{\rm dim {\mathcal D}}=\frac{1}{\kappa}\sum_k \bar z_k \,{\rm tr}\exp\left(-{^{z_k}\mathcal D} \right)\label{en1a}\\ {\mathcal G}_n=\frac{1}{\kappa}\sum_k \bar z_k\sum_{p=1}^{n} e^{-\vert{^{z_k\!}\lambda}_p\vert^2}&\Longrightarrow& {\mathcal G}={\mathcal G}_{\rm dim {\mathcal D}}=\frac{1}{\kappa}\sum_k \bar z_k \,{\rm tr} \exp\left(-{^{z_k}\mathcal D}^\dagger\, {^{z_k}\mathcal D} \right)~.\label{en1b} \end{eqnarray} In particular the last expression is used in a heat kernel regularization of the fermionic determinant. $\kappa{\mathcal G}$ has a well-behaved continuum limit if we enclose the system in a box with finite volume.
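Numerically, the exponential and Gaussian sums are accumulated in exactly the same way as the power sums sketched earlier; only the summand changes. A minimal sketch for ${\mathcal G}_n$, under the same assumed conventions as in the previous listing:
\begin{verbatim}
import numpy as np

Z3 = np.exp(2j * np.pi * np.arange(3) / 3)   # center elements z_k

def gaussian_sums(eigs, kappa):
    """Partial Gaussian sums G_n of Eq. (en1b) -- a sketch.
    For the exponential sums E_n of Eq. (en1a) one would replace
    the summand by exp(-lam)."""
    total = 0.0
    for z_k, lam in zip(Z3, eigs):
        lam = lam[np.argsort(np.abs(lam))]
        total = total + np.conj(z_k) * np.cumsum(np.exp(-np.abs(lam)**2))
    return total / kappa
\end{verbatim}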
We computed the partial sums ${\mathcal G}_n$ for the flat connections and various values \begin{figure}[ht] \psset{xunit=1.7mm,yunit=1.9cm} \begin{pspicture}(-10,.1)(60,-3.2) \psframe(1,0)(57,-3) \small \rput(6,-1.8){$1000\cdot {\mathcal G}_n$} \rput(29,-3.3){$\%$ of lowest eigenvalues} \rput(1,-3.12){$0$} \rput(57,-3.12){$10$} \psline(29,-3)(29,-2.94)\rput(29,-3.12){$5$} \psline(15,-3)(15,-2.94)\rput(15,-3.12){$2.5$} \psline(43,-3)(43,-2.94)\rput(43,-3.12){$7.5$} \psline(1,-1)(1.5,-1)\rput[r](.4,-1){$-1$} \psline(1,-2)(1.5,-2)\rput[r](.4,-2){$-2$} \rput[r](.4,0){$0$}\rput[r](.4,-3){$-3$} \rput(52,-.48){\blue $L=0.25$} \psline[linecolor=blue] (1,-0.007)(2,-0.137)(3,-0.396)(4,-0.275)(5,-0.216)(6,-0.218)(7,-0.220) (8,-0.222)(9,-0.225)(10,-0.227)(11,-0.229)(12,-0.248)(13,-0.271)(14,-0.293) (15,-0.316)(16,-0.339)(17,-0.361)(18,-0.384)(19,-0.428)(20,-0.472)(21,-0.516) (22,-0.560)(23,-0.604)(24,-0.613)(25,-0.627)(27,-0.634)(29,-0.634) (31,-0.633)(33,-0.629)(35,-0.624)(37,-0.619)(39,-0.614)(41,-0.608) (43,-0.604)(45,-0.601)(47,-0.604)(49,-0.605)(51,-0.605)(53,-0.605)(55,-0.606) (57,-0.606) \rput(52,-1.1){\green $L=0.50$} \psline[linecolor=green] (1,-0.028)(2,-0.280)(3,-0.783)(4,-0.545)(5,-0.436)(6,-0.444)(7,-0.452) (8,-0.459)(9,-0.467)(10,-0.474)(11,-0.482)(12,-0.509)(13,-0.554)(14,-0.599) (15,-0.644)(16,-0.689)(17,-0.735)(18,-0.780)(19,-0.861)(20,-0.946)(21,-1.031) (22,-1.117)(23,-1.202)(24,-1.230)(25,-1.259)(27,-1.273) (29,-1.276)(31,-1.275)(33,-1.269)(35,-1.263)(37,-1.255)(39,-1.245)(41,-1.235) (43,-1.226)(45,-1.219)(47,-1.221)(49,-1.223)(51,-1.223)(53,-1.223)(55,-1.224) (57,-1.224) \rput(52,-1.7){\red $L=0.75$} \psline[linecolor=red] (1,-0.061)(2,-0.429)(3,-1.163)(4,-0.814)(5,-0.659)(6,-0.676)(7,-0.692) (8,-0.708)(9,-0.724)(10,-0.741)(11,-0.757)(12,-0.790)(13,-0.858)(14,-0.925) (15,-0.993)(16,-1.061)(17,-1.129)(18,-1.197)(19,-1.310)(20,-1.435)(21,-1.559) (22,-1.684)(23,-1.808)(24,-1.853)(25,-1.899)(27,-1.920) (29,-1.927)(31,-1.926)(33,-1.918)(35,-1.911)(37,-1.900)(39,-1.885)(41,-1.871) (43,-1.859)(45,-1.848)(47,-1.850)(49,-1.852)(51,-1.853)(53,-1.854)(55,-1.855) (57,-1.855) \rput(52,-2.35){\cyan $L=1.00$} \psline[linecolor=cyan] (1,-0.104)(2,-0.585)(3,-1.542)(4,-1.080)(5,-0.885)(6,-0.913)(7,-0.941) (8,-0.968)(9,-0.996)(10,-1.024)(11,-1.052)(12,-1.089)(13,-1.180)(14,-1.271) (15,-1.361)(16,-1.452)(17,-1.543)(18,-1.633)(19,-1.778)(20,-1.941)(21,-2.103) (22,-2.265)(23,-2.428)(24,-2.488)(25,-2.548)(26,-2.560)(27,-2.577) (29,-2.588)(31,-2.589)(33,-2.578)(35,-2.567)(37,-2.553)(39,-2.536)(41,-2.519) (43,-2.503)(45,-2.489)(47,-2.491)(49,-2.492)(51,-2.494)(53,-2.495)(55,-2.497) (57,-2.498) \end{pspicture} \caption{\label{fig:Gaussflat} The partial Gaussian sums ${\mathcal G}_n$ for flat connections with different $L$.} \end{figure} of the traced Polyakov loop. In Fig. \ref{fig:Gaussflat} we plotted those sums for which $10\%$ or less of the low lying eigenvalues have been included. Similarly as for the sums of negative powers of the eigenvalues we conjecture that the Gaussian sums ${\mathcal G}_n$ are good candidates for an order parameter in the infrared. The expectation values of the partial sums ${\mathcal E}_n^{\rm rot}$ and ${\mathcal G}_n^{\rm rot}$ for Monte-Carlo generated configurations at four Wilson couplings are plotted in Fig. \ref{fig:expoMinusLambda43} and Fig. \ref{fig:expoMinusLaLaQuer43}. \begin{figure}[ht] \includegraphics{expoMinusLambda4_3} \caption{\label{fig:expoMinusLambda43} Mean exponential sums ${\mathcal E}_n^{\rm rot}$ on a $4^3\!\times\!3$-lattice near $\beta_{\rm crit}$. 
The graphs are labelled with $\beta$.} \end{figure} As expected from our studies of flat connections, the Gaussian sums are perfect order parameters for the center symmetry. They are superior to the other spectral sums considered in this paper, since their support lies even deeper in the infrared end of the spectrum. Fig.~\ref{fig:expoMinusLaLaQuerAusschnitt43} shows the expectation values $\langle{\mathcal G}^{\rm rot}_n\rangle$ with only $4.5\%$ or less of the infrared modes included. \begin{figure}[ht] \includegraphics[scale=.95]{expoMinusLaLaQuer4_3.eps} \caption{\label{fig:expoMinusLaLaQuer43} Mean Gaussian sums ${\mathcal G}^{\rm rot}_n$ on a $4^3\!\times\!3$-lattice near $\beta_{\rm crit}$. The graphs are labelled with $\beta$.} \end{figure} \begin{figure}[h!] \includegraphics[scale=.95]{expoMinusLaLaQuerAusschnitt4_3} \caption{\label{fig:expoMinusLaLaQuerAusschnitt43} Zooming into the Gaussian sums ${\mathcal G}_n^{\rm rot}$ on a $4^3\!\times\!3$-lattice near the phase transition.} \end{figure} The result is again in qualitative agreement with that for flat connections in Fig.~\ref{fig:Gaussflat}, although in the Monte-Carlo data the dips are washed out. The Monte-Carlo results for the expectation values $\langle {\mathcal E}^{\rm rot}\rangle$ and $\langle L^{\rm rot}\rangle$ with Wilson couplings in Table \ref{table:loopsOverBeta} are depicted in Fig.~\ref{fig:propExpoMinusLambda43}. The quality of the linear fit \eqnl{ \langle{\mathcal E}^{\rm rot}\rangle=-0.00408\cdot \langle L^\mathrm{rot}\rangle +2.346\cdot 10^{-5} \quad ({\rm rmse} = 1.82\cdot 10^{-5}),}{en11} is as good as for the spectral sum $\Sigma^{(-1)}$. \begin{figure}[ht] \includegraphics{propExpoMinusLambda4_3} \caption{\label{fig:propExpoMinusLambda43} The expectation value of ${\mathcal E}^{\rm rot}$ as a function of $\langle L^{\rm rot}\rangle$ on a $4^3\times 3$ lattice.} \end{figure}\\ To estimate the slope and in particular its dependence on the lattice size we expand the exponentials in ${\mathcal E}_n$, which results in \eqnl{ {\mathcal E}_n=(-1)^{N_\mathrm{t}}\sum_{p=0}^\infty\frac{(-1)^p}{(N_\mathrm{t}+p)!}\,\Sigma^{(N_\mathrm{t}+p)}_n.} {en3} Since $\Sigma^{(\ell)}$ is proportional to the traced Polyakov loop for $\ell \leq 3N_\mathrm{t}$ we conclude that ${\mathcal E}={\mathcal E}_{{\rm dim}{\mathcal D}}$ should be proportional to $L$. We can estimate the proportionality factor as follows: in the Wilson loop expansion of ${\rm tr}\,{\mathcal D}^{N_\mathrm{t}+p}$ only loops winding around the periodic time direction contribute. If we neglect fat loops and only count the straight loops winding once around the periodic time direction, then there are \eqnl{ (m+d)^p\cdot {N_\mathrm{t}+p \choose p}}{en5} such loops contributing. Actually, for $p\geq N_\mathrm{t}$ there are loops winding several times around the time direction. But these have relatively small entropy and do not contribute much. Hence, with \refs{en3} we arrive at the estimate \eqnl{ {\mathcal E}\approx (-1)^{N_\mathrm{t}}\sum_{p=0}^\infty\frac{(-1)^p}{(N_\mathrm{t}+p)!}(m+d)^p\cdot {N_\mathrm{t}+p \choose p}\cdot L =\frac{(-1)^{N_\mathrm{t}}}{N_\mathrm{t}!} e^{-(m+d)}L. }{en7} In $4$ dimensions and for $m=0$ we obtain the approximate linear relation \eqnl{ N_\mathrm{t}!\,{\mathcal E} \approx (-1)^{N_\mathrm{t}}\cdot 0.0183\cdot L.}{en9} For the linear fit \refs{en11} to the MC data the slope is $3!\cdot 0.00408=0.0245$ instead of $0.0183$.
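The resummation leading to \refs{en7} is easily checked numerically. The following Python lines verify that the series indeed reproduces $e^{-(m+d)}/N_\mathrm{t}!$ for the values $N_\mathrm{t}=3$, $m=0$ and $d=4$ used above:
\begin{verbatim}
from math import comb, exp, factorial

Nt, m, d = 3, 0.0, 4.0
series = sum((-1)**p * (m + d)**p * comb(Nt + p, p) / factorial(Nt + p)
             for p in range(60))
# both numbers agree: exp(-4)/3! = 0.0183/6 ~ 0.00305
print(series, exp(-(m + d)) / factorial(Nt))
\end{verbatim}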
The Monte-Carlo results for the order parameters $\langle{\mathcal G}^{\rm rot}\rangle$ and $\langle L^{\rm rot}\rangle$ with Wilson couplings from Table \ref{table:loopsOverBeta} are shown in Fig.~\ref{fig:quadratExpoMinusLaLaQuer43.eps}. \begin{figure}[ht] \includegraphics{quadratExpoMinusLaLaQuer4_3.eps} \caption{\label{fig:quadratExpoMinusLaLaQuer43.eps} The expectation value of ${\mathcal G}^{\rm rot}$ as a function of $\langle L^{\rm rot}\rangle$ on a $4^3\times 3$ lattice.} \end{figure} In this case the functional dependence is more accurately described by a quadratic function, \eqnl{ \langle{\mathcal G}^{\rm rot}\rangle=-0.000571 \cdot\langle L^\mathrm{rot}\rangle^2 -0.00156\cdot\langle L^\mathrm{rot}\rangle + 1.061\cdot 10^{-5} \quad ({\rm rmse} =1.453\cdot 10^{-5}),}{en13} and this relation is very precise. Since in addition $\langle {\mathcal G}_n^{\rm rot}\rangle \approx \langle{\mathcal G}^{\rm rot}\rangle$ already for small $n$ we can reconstruct the order parameter $\langle L^{\rm rot}\rangle$ from the low-lying eigenvalues of the Wilson-Dirac operator. \paragraph{Scaling with $N_\mathrm{t}$:} On page \pageref{Finite} we discussed the finite (spatial) size scaling of the MC expectation values $\langle\Sigma_n^{\rm rot}\rangle$. We showed that they converge rapidly to their infinite-$N_\mathrm{s}$ limit, see Fig.~\ref{fig:scalingCurves}. Here we study how the Gaussian sums ${\mathcal G}_n$ depend on the temporal extent of the lattice. To that end we performed simulations on larger lattices with fixed $N_\mathrm{s}=6$, variable $N_\mathrm{t}=2,3,4,5$ and Wilson coupling $\beta=6.5$. We calculated the ratios \eqnl{ \tilde R^{\rm rot}_n=\frac{\kappa}{\langle L^{\rm rot}\rangle}\,\langle {\mathcal G}_n^{\rm rot}\rangle,}{en15} where we have multiplied by the extensive factor $\kappa$ from \refs{gattr1}, since in the partial sums \eqnl{ \tilde{\mathcal{G}}_n=\kappa {\mathcal G}_n=\sum_k \bar z_k\sum_{p=1}^{n} e^{-\vert{^{z_k\!}\lambda}_p\vert^2}\,,\qquad \vert\lambda_p\vert\leq \vert \lambda_{p+1}\vert, }{en17} only a tiny fraction of the $5184$ to $12\,960$ eigenvalues is included. The order parameter $\langle L^{\rm rot}\rangle$ for the lattices with $N_\mathrm{t}=2,\,3,\,4,\,5$ takes the values $1.9474$, $1.40194$, $0.932245$ and $0.523142$, respectively. In Fig.~\ref{fig:expoMinusLaLaQuer6N} we plotted the ratios $\tilde R^{\rm rot}_n$ for $n$ from $1$ up to $100$. \begin{figure}[ht] \includegraphics{expoMinusLaLaQuer6_Nt.eps} \caption{\label{fig:expoMinusLaLaQuer6N} The ratios $\tilde R^{\rm rot}_n$ as functions of the number $n$ of IR eigenvalues included; $100$ eigenvalues correspond to approximately $1\%$ of all eigenvalues.} \end{figure} Note that on the $6^3\times 5$-lattice $n=100$ means less than $0.8\%$ of all $12\,960$ eigenvalues. This figure very much supports our earlier statements about the quality of the order parameters $\langle {\mathcal G}_n^{\rm rot}\rangle$ or $\langle\tilde{\mathcal G}_n^{\rm rot}\rangle$. \section{Conclusions}\label{conclusion} In this paper we studied spectral sums of the type \eqnl{ {\mathcal S}_n(f)=\frac{1}{\kappa}\sum_k \bar z_k \sum_{p=1}^n f\left({^{z_k\!}\lambda}_p\right) \mtxt{and} \hat{\mathcal S}_n(f)=\frac{1}{\kappa}\sum_k \bar z_k \sum_{p=1}^n f\left(\vert{^{z_k\!}\lambda}_p\vert^2\right), }{con1} where $\{{^{z_k\!}\lambda}_p\}$ is the set of eigenvalues of the Wilson-Dirac operator with twisted gauge field.
Summing over all dim$({\mathcal D})$ eigenvalues, the sums over $p$ become traces, such that \eqnl{ {\mathcal S}(f)= \frac{1}{\kappa}\sum_k \bar z_k \,{\rm tr} f\left({^{z_k}\mathcal D}\right)\mtxt{and} \hat{\mathcal S}(f) =\frac{1}{\kappa}\sum_k \bar z_k \,{\rm tr} f\left({^{z_k}\mathcal D}^\dagger\,{^{z_k}\mathcal D}\right). }{con3} For $f(\lambda)=\lambda^{N_\mathrm{t}}$ one finds the spectral sum $\Sigma$ which reproduces the traced Polyakov loop \cite{Gattringer:2006ci}. Unfortunately this lattice result is probably of no use in the continuum limit. Thus we have used functions $f(\lambda)$ which vanish for large (absolute) values of $\lambda$. The corresponding sums are order parameters which get their main contribution from the infrared end of the spectrum. Of all spectral sums considered here, the Gaussian sums ${\mathcal G}_n$ in \refs{en1b} define the most efficient order parameters. Besides the ${\mathcal G}_n$, the spectral sums of inverse powers of eigenvalues are quite useful as well. This observation may be of interest since these sums relate to the Banks-Casher relation. It remains to investigate the continuum limits of the spectral sums considered in this paper. The properly normalized ${\mathcal G}_n$ should have a well-behaved continuum limit. With regard to the conjectured relation between confinement and chiral symmetry breaking it would be more interesting to see whether the suitably normalized sums $\Sigma^{(-1)}$ and/or $\Sigma^{(-2)}$ can be defined in the continuum theory. Clearly, the answer to this interesting question depends on the dimension of spacetime. \textbf{Acknowledgments:} We thank Georg Bergner, Falk Bruckmann, Christof Gattringer, Tobias K{\"a}stner and Sebastian Uhlmann for interesting discussions. This project has been supported by the DFG, grant Wi 777/8-2.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Inflation \citep{Guth:1980zm,Linde:1981mu,Albrecht:1982wi,Starobinsky:1982ee} is a hypothetical high-energy process in the very early universe. It is believed that quantum fluctuations of various fields during inflation exit the horizon, becoming the seeds of the complicated structures of galaxies observed today. The fluctuations involve the interactions between inflatons, the hypothetical particles produced during inflation that are responsible for the \textcolor{black}{primordial} density perturbations. There are also interactions between inflatons and gravitons, and purely between gravitons, which produced primordial gravitational waves. Interactions between scalars, especially in the soft momentum limit, are well studied. Many interesting properties have been discovered and compared to those in flat spacetime \citep{Assassi:2012zq,Chu:2018ovy}. With the recent detection of the gravitational waves produced by a \textcolor{black}{black hole} merger \citep{Abbott:2016blz} and a neutron star merger \citep{TheLIGOScientific:2017qsa}, as well as the forthcoming LISA experiments on primordial gravitational waves, gravitational wave astronomy is attracting more and more attention. It is hoped that in the future we will also have results on primordial gravitational waves \cite{Abazajian:2016yjj,Li:2017drr} in terms of correlation functions with different types of polarizations; thus it is desirable to study the properties of such correlation functions during inflation. In this paper we focus on the interactions between gravitons only. The fluctuations in inflation are characterised by correlation functions. The two-point functions, or power spectrum, represent the Gaussian perturbations without interactions. Higher-point functions represent more special features of the perturbations, known as non-Gaussianities, when interactions are present \citep{Chen:2010xka}. The three-point functions in the slow-roll model of a single minimally coupled scalar field were first calculated in \citep{Maldacena:2002vr}. The correlation functions are similar to the scattering amplitudes considered in particle physics. \textcolor{black}{However, particle physicists usually consider amplitudes in flat spacetime, while we are in spacetime during inflation, or a nearly de Sitter spacetime, and for simplicity we only consider a purely de Sitter spacetime. The symmetries present in these two spacetimes are different, and it is interesting to study how the mathematical structures of the correlation functions are affected when the symmetry group changes.} It would be nice if there existed some non-trivial relationships between inflation and flat spacetime, so that the well-studied properties of amplitudes could be generalized to our context. One of the wonderful properties of graviton scattering amplitudes in flat spacetime can be schematically written as ``$\mathrm{Gravity}=\mathrm{Gauge}^{2}$'' \citep{Kawai:1985xq}. Here ``gauge'' means Yang-Mills theory, the gauge theory of gluons or strong interactions. It means that the amplitudes of graviton scattering are roughly the square of those of gluon scattering, due to the simple idea that a massless spin 2 graviton is a double copy of massless spin 1 gluons. There is the BCJ conjecture of a duality between colour factors and kinematic factors in gluon amplitudes, so that we can just replace the colour factors in gluon amplitudes by copies of kinematic factors to obtain graviton amplitudes \citep{Bern:2008qj,Tye:2010dd}.
In string theory this fact is precisely described by the KLT relations \citep{Kawai:1985xq}, which relate closed strings and open strings as ``$\mathrm{Closed}=\mathrm{Open}^{2}$'', and we expect they have projections onto flat spacetime quantum field theory. For example, the KLT relation for four-point amplitudes is \begin{equation} M\left(1234\right)=nA\left(1234\right)A\left(1243\right)~,\label{eq:1} \end{equation} \noindent where $M$ and $A$ denote graviton and \textcolor{black}{colour-stripped} gluon amplitudes respectively, and $n$ is some kinematic factor which is not important in this paper. The numbers in the brackets label the external particles, and the order of the numbers represents the arrangement of the particles in the clockwise direction. One would expect that similar relations can also be found in the inflationary context. \textcolor{black}{While we indeed find a relation similar to the KLT relations in our context, there are some extra terms in the relation, which may correspond to features appearing in de Sitter spacetime only. They make the KLT structure very opaque. In particular, there are terms which are clearly not of the form $\mathrm{Gauge}^{2}$. Therefore, we would like to emphasize that the relations in this paper are only preliminary steps towards a complete KLT-like relation in inflation, and further studies are needed to solidify them.} If we use Feynman diagrams to calculate gluon and graviton amplitudes, extremely complicated expressions with thousands of terms are obtained, but they can be grouped into a single term when the amplitude is maximally helicity violating. The result is known as the Parke-Taylor formula \citep{Parke:1986gb}. This means there are symmetries hidden in the ordinary expressions. In the literature on scattering amplitudes, this feature is revealed by extra tools such as the spinor helicity formalism \citep{Berends:1981rb,DeCausmaecker:1981jtq,Kleiss:1985yh,Gastmans:1990xh,Xu:1986xb,Gunion:1985vca}, BCFW recursion relations \citep{Britto:2005fq} and so on (see the nice reviews \citep{Dixon:1996wi,Elvang:2013cua}). Here we will also study the spinor helicity formalism in detail to see its generalization to inflation. \textcolor{black}{In addition, the main result of this paper, the KLT-like behaviour, is derived with this formalism, which greatly simplifies the required calculations.} This paper is organised as follows. In Section \ref{sec:2} we review general properties of gravitons, and the computation of correlation functions of graviton interactions. In Section \ref{sec:3} we discuss the generalization of the spinor helicity formalism to inflationary spacetime, and use it to compute a four-point function. In this procedure we derive a KLT-like relation. In Section \ref{sec:4} we describe attempts to interpret such behaviour and point out its significance by considering various limits of momentum configurations. We conclude and discuss possible extensions of this work in Section \ref{sec:5}. \section{Graviton Correlators in Inflation\label{sec:2}} To begin with, we review the calculation of the three-point function of three gravitons in inflation. This calculation has been done in \citep{Maldacena:2002vr} and we just recall some key points which are useful for the analysis below. For simplicity, we consider the model of a single minimally coupled scalar field. We use the $\left(-,+,+,+\right)$ metric convention.
Then the action is given by \begin{equation} S=\int d^{4}x\,\sqrt{-g}\left(\frac{R}{2}-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V\left(\phi\right)\right)~, \end{equation} \noindent where $\phi$ is the inflaton field, which can be decomposed into background and perturbations as $\phi=\bar{\phi}+\delta\phi$. One can expand the action to arbitrary order using the ADM formalism \citep{Baumann:2009ds} \begin{equation} g_{00}=-N^{2}+g_{ij}N^{i}N^{j},\:g_{0i}=g_{ij}N^{j},\:g_{ij}=a^{2}\exp\left(h\right)_{ij}=e^{2Ht}\exp\left(h\right)_{ij}~, \end{equation} \noindent where $H$ is the Hubble parameter, and $N$ and $N^{i}$ are the lapse and shift functions. To study graviton fluctuations only, in the traceless and transverse gauge of the gravitons, we can set $N=1$ and $N^{i}=0$ in the third order action. We quantize the graviton field using the second order action \begin{equation} S_{2}=\frac{1}{8}\int d\tau d^{3}x\,a^{2}\left(h_{ij}'h_{ij}'-\partial_{l}h_{ij}\partial_{l}h_{ij}\right)~, \end{equation} \noindent where $h_{ij}$ is the graviton field and we set $M_{Pl}=\left(8\pi G\right)^{-1/2}=1$. From now on we only use conformal time. Now we decompose the field into scalars by polarization tensors and quantize the scalars \[ h_{ij}\left(\mathbf{k}\right)=\underset{s=\pm}{\sum}\epsilon_{ij}^{s}\left(\mathbf{k}\right)h_{\mathbf{k}}^{s}~, \] \begin{equation} h_{\mathbf{k}}^{s}=\frac{H}{\sqrt{2k^{3}}}\left(1+ik\tau\right)e^{-ik\tau}a_{\mathbf{k}}^{s}+\frac{H}{\sqrt{2k^{3}}}\left(1-ik\tau\right)e^{ik\tau}a_{\mathbf{-k}}^{s\dagger}~.\label{eq5} \end{equation} Here $a^{\dagger}$ and $a$ are the creation and annihilation operators. We use circular polarization and choose the traceless and transverse gauge. Thus, the polarization tensors satisfy \begin{equation} \epsilon_{ij}^{s}=\epsilon_{ji}^{s},\:k_{i}\epsilon_{ij}^{s}=0,\:\epsilon_{ii}^{s}=0~. \end{equation} We also normalize the polarization tensors by $\epsilon_{ij}^{s}\epsilon_{ij}^{*s'}=4\delta_{ss'}$. Now the three-point interaction is determined by the third order action, which is given by \citep{Maldacena:2002vr,Maldacena:2011nz,Fu:2015vja} \begin{equation} S_{3}=\frac{1}{8}\int d\tau d^{3}x\,a^{2}\left(h_{kl}\partial_{k}h_{ij}-2h_{ik}\partial_{k}h_{jl}\right)\partial_{l}h_{ij}~. \end{equation} We can already expect some relations between this action and that of flat spacetime, since the integrand is just the flat-spacetime one multiplied by $a^{2}$ \citep{Bern:1999ji}. Let us also discuss some general features of the correlators first. Due to momentum conservation, the correlator must take the form \begin{equation} \left\langle h_{\mathbf{k}_{1}}^{s_{1}}h_{\mathbf{k}_{2}}^{s_{2}}h_{\mathbf{k}_{3}}^{s_{3}}\right\rangle =\left(2\pi\right)^{3}\delta^{3}\left(\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3}\right)\left\langle h_{\mathbf{k}_{1}}^{s_{1}}h_{\mathbf{k}_{2}}^{s_{2}}h_{\mathbf{k}_{3}}^{s_{3}}\right\rangle '~. \end{equation} Here we define the symbol ``prime'' to denote the correlation function with the momentum conservation delta function removed. Since the energy of the particles is no longer conserved, we only have a 3-dimensional delta function, in contrast to the 4-dimensional one in flat spacetime. Next, we note that both terms in the interaction have the form $h\partial h\partial h$, so we can factorize the scalar fields out, leaving time-independent products of tensors.
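As a quick consistency check of the mode functions in Equation (\ref{eq5}), note that varying $S_{2}$ gives the mode equation $h_{k}''-(2/\tau)h_{k}'+k^{2}h_{k}=0$ (using $a=-1/(H\tau)$), which the positive-frequency solution satisfies, and whose late-time limit reproduces the power spectrum normalisation $H^{2}/(2k^{3})$. A small symbolic sketch of this check (we declare the symbols positive purely to simplify complex conjugation):
\begin{verbatim}
import sympy as sp

tau, k, H = sp.symbols('tau k H', positive=True)
# positive-frequency mode function of Equation (eq5)
h = H / sp.sqrt(2 * k**3) * (1 + sp.I * k * tau) * sp.exp(-sp.I * k * tau)

# mode equation following from S_2 with a(tau) = -1/(H tau)
eom = sp.diff(h, tau, 2) - (2 / tau) * sp.diff(h, tau) + k**2 * h
print(sp.simplify(eom))                                 # -> 0

# late-time power spectrum normalisation
print(sp.simplify((h * sp.conjugate(h)).subs(tau, 0)))  # -> H**2/(2*k**3)
\end{verbatim}
With this check done, we return to the factorization of the scalar fields noted above.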
Therefore, if we only consider three-point interactions, the contribution to the correlation function from each diagram can be written schematically as \begin{equation} \left\langle h_{\mathbf{k}_{1}}^{s_{1}}...h_{\mathbf{k}_{n}}^{s_{n}}\right\rangle _{i}'=\left(\mathrm{scalar}\:\mathrm{part}\right)\left(\mathrm{tensor}\:\mathrm{part}\right)~,\label{eq:10} \end{equation} \noindent where the scalar part is unchanged when we replace the graviton fields by inflaton-like fields $\delta\phi$ and consider a hypothetical model with interaction $H_{I}'=-\frac{1}{8}\int d^{3}x\,a^{2}\delta\phi^{3}$. This part can also be studied with scattering amplitude techniques, using the method developed in \citep{Arkani-Hamed:2017fdk}. Here $i$ labels the diagram we are calculating. \textcolor{black}{Now we demonstrate this fact by explicit calculation. The inflationary correlation functions are calculated with the in-in formalism \citep{Weinberg:2005vy} (see also \citep{Chen:2010xka,Wang:2013eqj,Chen:2017ryl}) \begin{equation} \left\langle Q\right\rangle =\left\langle \left[\bar{T}e^{i\int_{-\infty}^{0}d\tau\,H_{I}\left(\tau\right)}\right]Q\left[Te^{-i\int_{-\infty}^{0}d\tau\,H_{I}\left(\tau\right)}\right]\right\rangle ~,\label{eq11} \end{equation} } \noindent \textcolor{black}{where $H_{I}$ is the interaction Hamiltonian, and $\bar{T}$ and $T$ are the anti-time-ordering and time-ordering operators respectively. We then write down \begin{equation} H_{I}=-\frac{1}{8}\int d^{3}x\,a^{2}\left(h_{kl}\partial_{k}h_{ij}-2h_{ik}\partial_{k}h_{jl}\right)\partial_{l}h_{ij}~. \end{equation} } \textcolor{black}{Now substitute $H_{I}$ and Equation (\ref{eq5}) into Equation (\ref{eq11}). We consider one term $X$ in the expansion of the exponentials in one diagram. Since we only have interactions of the form $h\partial h\partial h$, in momentum space $X$ can be decomposed schematically into \begin{equation} X=\sum\left(\left(\prod\int d\tau\prod-\frac{1}{8}\int d^{3}x\,a^{2}\prod k^{i}\right)\left\langle h_{\mathbf{k}_{1}}^{s_{1}}...h_{\mathbf{k}_{n}}^{s_{n}}h_{ij}...\right\rangle _{c}\right). \end{equation} } \textcolor{black}{The expectation values $\left\langle h_{\mathbf{k}_{1}}^{s_{1}}...h_{\mathbf{k}_{n}}^{s_{n}}h_{ij}...\right\rangle _{c}$ consist of $h_{\mathbf{k}}^{s}$ and $h_{ij}$ only, and their numbers are the same in each term of the sum. The subscript $c$ means that we only consider the contribution from one particular way of contracting, depending on the diagram we calculate. Note that only operators with the same polarization have non-zero contractions, and the contraction $\widehat{h_{\mathbf{k}}^{s}h_{\mathbf{k'}}^{s}}$ is independent of $s$. Therefore it can be replaced by the contraction between two scalar fields, $\widehat{\delta\phi_{\mathbf{k}}\delta\phi_{\mathbf{k'}}}$. We also have \[ \widehat{h_{\mathbf{k}}^{s}h_{ij}\left(\mathbf{k'}\right)}=\epsilon_{ij}^{s}\left(\mathbf{k'}\right)\widehat{h_{\mathbf{k}}^{s}h_{\mathbf{k'}}^{s}}=\epsilon_{ij}^{s}\left(\mathbf{k'}\right)\widehat{\delta\phi_{\mathbf{k}}\delta\phi_{\mathbf{k'}}}\;, \] } \textcolor{black}{ \begin{equation} \widehat{h_{ij}\left(\mathbf{k}\right)h_{lm}\left(\mathbf{k'}\right)}=\underset{s=\pm}{\sum}\epsilon_{ij}^{s}\left(\mathbf{k}\right)\epsilon_{lm}^{s}\left(\mathbf{k}\right)\widehat{h_{\mathbf{k}}^{s}h_{\mathbf{k'}}^{s}}=\widehat{\delta\phi_{\mathbf{k}}\delta\phi_{\mathbf{k'}}}\underset{s=\pm}{\sum}\epsilon_{ij}^{s}\left(\mathbf{k}\right)\epsilon_{lm}^{s}\left(\mathbf{k}\right)\;. \end{equation} } \textcolor{black}{Therefore all contractions, and thus $\left\langle h_{\mathbf{k}_{1}}^{s_{1}}...h_{\mathbf{k}_{n}}^{s_{n}}h_{ij}...\right\rangle _{c}$, can be factorized into scalar parts and tensor parts. Moreover, all such expectation values have the same scalar part $\prod\widehat{\delta\phi\delta\phi}$, since we have specified the diagram we calculate. Now consider the whole sum again. In each term in the summation, we identify $\prod\int d\tau\prod-\frac{1}{8}\int d^{3}x\,a^{2}\prod\widehat{\delta\phi\delta\phi}$ as the scalar part, and $\prod k^{i}\sum\left(\prod\epsilon_{ij}^{s}\right)$ as the tensor part. The tensor parts can be factorised out from the integrations. On the other hand, it is clear that the scalar parts are the same for all terms. We then reach \begin{equation} X=\left(\prod\int d\tau\prod-\frac{1}{8}\int d^{3}x\,a^{2}\prod\widehat{\delta\phi\delta\phi}\right)\sum\left(\prod k^{i}\prod\epsilon_{ij}^{s}\right). \end{equation} } \textcolor{black}{Now consider the contribution from a diagram $\left\langle h_{\mathbf{k}_{1}}^{s_{1}}...h_{\mathbf{k}_{n}}^{s_{n}}\right\rangle _{i}'$. It is the sum of such $X$ in which only the integral operators $\prod\int d\tau$ vary. Therefore in this sum the tensor part stays the same and we conclude} \textcolor{black}{ \begin{equation} \left\langle h_{\mathbf{k}_{1}}^{s_{1}}...h_{\mathbf{k}_{n}}^{s_{n}}\right\rangle _{i}'=\sum\left(\prod\int d\tau\prod-\frac{1}{8}\int d^{3}x\,a^{2}\prod\widehat{\delta\phi\delta\phi}\right)\sum\left(\prod k^{i}\prod\epsilon_{ij}^{s}\right). \end{equation} } \textcolor{black}{This justifies our claim. On the other hand, the scalar part is equivalent to the expectation value $\left\langle \delta\phi\delta\phi...\delta\phi\right\rangle _{i}'$ with an effective Hamiltonian $H_{I}'$. From the above, $H_{I}'$ should consist of one integral operator $-\frac{1}{8}\int d^{3}x\,a^{2}$. To get the correct number of $\widehat{\delta\phi\delta\phi}$, it should also be a three-point interaction. Moreover, there are no derivatives of the fields. Therefore $H_{I}'=-\frac{1}{8}\int d^{3}x\,a^{2}\delta\phi^{3}$.} \textcolor{black}{Note that the tensor part still transforms as a scalar, even though it involves contracted products of tensors. We observe that in this model this factorization only works for three-point interactions. The tensor part is purely the kinematics of momenta and polarizations, and is present in both de Sitter correlators and flat spacetime amplitudes. On the other hand, the scalar part is really the dynamics in de Sitter spacetime, and is not related to properties of tensors. Note that one can make the same factorization for flat spacetime amplitudes with three-point interactions. Therefore, to study the mathematical structures caused by tensor properties, for simplicity we focus on comparing the tensor parts. Note that the pole structures of correlation functions are always in the scalar part.} Let us give an example of the above factorization.
Applying the in-in formalism to compute the three-point function, we get \begin{align} \left\langle h_{\mathbf{k}_{1}}^{s_{1}}h_{\mathbf{k}_{2}}^{s_{2}}h_{\mathbf{k}_{3}}^{s_{3}}\right\rangle '= & -\frac{H^{4}}{16\left(k_{1}k_{2}k_{3}\right)^{3}}I\nonumber \\ & \left(\left(\epsilon_{ii'}^{1}\epsilon_{jj'}^{2}\epsilon_{jj'}^{3}k_{i}^{2}k_{i'}^{2}-2k_{i}^{2}k_{j'}^{1}\epsilon_{ii'}^{1}\epsilon_{jj'}^{2}\epsilon_{ji'}^{3}\right)+2\:\mathrm{cyclic}\right)~, \end{align} \begin{equation} I=-\left(k_{1}+k_{2}+k_{3}\right)+\frac{k_{1}k_{2}+k_{2}k_{3}+k_{3}k_{1}}{k_{1}+k_{2}+k_{3}}+\frac{k_{1}k_{2}k_{3}}{\left(k_{1}+k_{2}+k_{3}\right)^{2}}~. \end{equation} The second line is the tensor part. Below we will see that these six terms can be grouped into a simple expression when the helicities of the three gravitons are known. One can calculate the four-point functions from graviton exchange similarly, which has been done in \cite{Fu:2015vja}. One sees that the scalar part remains the same in different scenarios \citep{Fu:2015vja,Seery:2008ax}. An interesting point is that when we consider contributions from different in-in contours and permutations separately, there are IR divergences. However, they all cancel when we sum up the contributions. For the tensor part, we just multiply the tensor parts of the two three-point vertices together, and sum over the helicities of the internal graviton propagator. \textcolor{black}{The result is the same as that obtained by redoing the full in-in computation.} We will treat this after introducing the spinor helicity formalism. Once we have simplified the three-point vertex, we can apply the results to calculate higher-point functions. \textcolor{black}{Before starting the analysis, let us make a remark. Below we only consider diagrams formed by three-point vertices. We do not discuss higher-point vertices because they cannot be easily transformed between 4 dimensions and 3 dimensions, that is, time derivatives are involved in the Feynman rules \citep{Fu:2015vja,Bern:1999ji}. Moreover, we cannot apply the factorization between scalar parts and tensor parts to those vertices. On the other hand, contributions from different channels contain different scalar parts, which are not our main concern in this paper but make adding up different channels difficult. Therefore, as a preliminary step, we demonstrate the behaviour of the four-point functions by considering only one channel built from three-point vertices. From the results below, whether adding up different channels can cause further simplification is quite non-trivial, and we would like to leave this to future studies.} \section{Spinor Helicity Formalism for Inflation\label{sec:3}} Here we discuss the spinor helicity formalism for inflation. This formalism was introduced in \citep{Maldacena:2011nz} to simplify the above three-point functions. A review of the flat-spacetime formalism and the details of the notation used here can be found in Appendix \ref{sec:A}. During inflation, we have a nearly de Sitter background. For simplicity, below we consider a pure de Sitter background \begin{equation} ds^{2}=\frac{1}{H^{2}\tau^{2}}\left(-d\tau^{2}+dx^{2}+dy^{2}+dz^{2}\right)~. \end{equation} Due to the expansion of the universe, the time translational symmetry, which is present in flat spacetime, is broken and energy is not conserved in general. However, we still have 3-momentum conservation. Therefore in inflation we usually work in a 3-dimensional formalism, i.e.\ considering only the spatial components of vectors and tensors.
In this way we lose the information about energy. In contrast, in flat spacetime, especially in the spinor helicity formalism, we work in a 4-dimensional formalism. This means that some changes are needed to generalize the formalism to 3 dimensions. Some formulas are also modified due to energy non-conservation. As a result, some nice features of the spinor helicity formalism can no longer be used. \subsection{Modifications of the Formalism} Although most results in this subsection were already obtained in \citep{Maldacena:2011nz}, we work out more details of the formalism and emphasize some points that were not mentioned there. To begin with, the simplest generalization is done by promoting 3-dimensional indices to 4-dimensional ones. For example \begin{equation} \epsilon_{ij}^{s}k_{i}k_{j}\rightarrow\epsilon_{\mu\nu}^{s}k^{\mu}k^{\nu}~, \end{equation} \noindent and the momentum vectors should be lightlike. We then define \begin{equation} k^{\mu}=\left(k,\boldsymbol{k}\right)~. \end{equation} Next, to force the 4-dimensional results to be the same as the 3-dimensional results, we make sure that all terms vanish whenever one of the indices is zero. Since we are considering products purely between polarization tensors, and among polarization tensors and momenta, we require \begin{equation} \epsilon_{0\nu}=\epsilon_{\mu0}=0~. \end{equation} To implement this, first we notice that under the gauge in Section \ref{sec:2}, a polarization tensor can be written as a direct product of two vectors. We can set \begin{equation} \epsilon_{+}^{\mu\nu}\left(p\right)=\frac{\left\langle p\right|\gamma^{\mu}\left|q\right]\left\langle p\right|\gamma^{\nu}\left|q\right]}{\left[qp\right]^{2}},\:\epsilon_{-}^{\mu\nu}\left(p\right)=\frac{\left\langle q\right|\gamma^{\mu}\left|p\right]\left\langle q\right|\gamma^{\nu}\left|p\right]}{\left\langle qp\right\rangle ^{2}}~,\label{eq16} \end{equation} \noindent where $q\neq p$ is the reference spinor. One can check that Equation (\ref{eq16}) satisfies the remaining gauge and normalization conditions. Here we can already see that gravitons are a double copy of gluons. The zeroth components of the tensors are not zero in general. To ensure that they always vanish, we must choose $q$ to be \begin{equation} \left|q\right]=\left|p\right\rangle ,\left|q\right\rangle =\left|p\right]~. \end{equation} This is not a convenient gauge to choose in flat spacetime, as it breaks the Lorentz symmetry. Nevertheless, this choice allows us to rewrite the graviton correlators in the spinor helicity formalism. Now we cannot choose $q$ freely to simplify our calculations. Therefore, many simplifications familiar from flat spacetime no longer work. Below we will also see that we reach very different conclusions on correlators from those in flat spacetime. We also introduce crossing between angle brackets and square brackets, which makes our calculations even more complicated. To be precise, we formally define the crossing as\footnote{Note that the crossing products defined here are different from those in \citep{Elvang:2013cua}, which vanish by definition.} \begin{equation} \left\langle pq\right]=\left\langle p\right|\gamma^{0}\left|q\right],\:\left[pq\right\rangle =\left[p\right|\gamma^{0}\left|q\right\rangle ~. \end{equation} One can then derive the following formulas: \begin{equation} \left[pp\right\rangle =-\left\langle pp\right]=2p~, \end{equation} \begin{equation} \left[pq\right\rangle \left[qp\right\rangle =2\left(pq+\mathbf{p}\cdot\mathbf{q}\right)~.\label{eq:20} \end{equation} Since we only have 3-momentum conservation now, the momentum conservation trick (see Appendix \ref{sec:A}) must be modified, and there are additional variants of the trick involving the crossing products. For example, we consider 3 momenta $\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3}$ with $\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3}=0$. We have \begin{equation} \left\langle 12\right\rangle \left[23\right]=\left(k_{1}+k_{2}+k_{3}\right)\left\langle 13\right]~,\label{eq:21} \end{equation} \begin{equation} \left\langle 12\right\rangle \left[23\right\rangle =\left(k_{1}+k_{2}-k_{3}\right)\left\langle 13\right\rangle ~, \end{equation} \begin{equation} \left[12\right\rangle \left[23\right\rangle =\left(-k_{1}+k_{2}-k_{3}\right)\left[13\right\rangle ~. \end{equation} However, in flat spacetime, for instance, $\left\langle 12\right\rangle \left[23\right]=0$. Therefore energy non-conservation makes our results more complicated. Note that Equation (\ref{eq:21}) vanishes if the 3 momenta are on-shell and energy is conserved, i.e.\ $k_{1}+k_{2}+k_{3}=0$. Of course the energy of a particle cannot be negative; however, since we set all external particles to be incoming, we can analytically continue $k$ to $-k$ for outgoing particles. The situation becomes even worse when there are more than 3 momenta. For example, for the case of 4 momenta we have \begin{equation} \left\langle 12\right\rangle \left[23\right\rangle =\left(k_{1}+k_{2}-k_{3}+k_{4}\right)\left\langle 13\right\rangle -\left\langle 14\right\rangle \left[43\right\rangle ~. \end{equation} As a result, when we consider correlation functions higher than three-point, it is hard to eliminate the crossing products. This leads to new terms in the correlation functions which only appear in de Sitter spacetime. \subsection{Computation of Correlation Functions Using Spinor Helicity Formalism} Here we compute the four-point functions using the formalism described above. For the computation of the three-point functions, which was briefly done in \citep{Maldacena:2011nz}, see Appendix \ref{sec:B}. One key message is that when we flip one $+$ helicity to $-$, we just transform the original result by $\left.\right\rangle \rightarrow\left.\right]$ and $k\rightarrow-k$ for the corresponding graviton, and vice versa. Since we only consider three-point vertices, this is true in general. From now on, when mentioning a correlation function, we refer to its tensor part (see Equation (\ref{eq:10})), labeled as $\left\langle \mathrm{helicity}\right\rangle _{\mathrm{channel}}$, unless otherwise specified. From the in-in formalism, the tensor part of a higher-point function is just the product of the tensor parts of three-point functions. Since the momentum 4-vectors are defined artificially, they can always be taken to be lightlike. Therefore, the three-point functions in the product are just associated with the momenta of the particles. For simplicity, here we only consider four-point functions. Consider the correlator $\left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle $, where the external legs of gravitons $1$ to $4$ are arranged clockwise in the diagram. Let the internal graviton have momentum $k_{I}^{i}$. Note that the internal graviton can have $+$ or $-$ polarization.
Here we only calculate the contribution from the $s$ channel, see Figure \ref{fig:1}. The contribution from other channels can be obtained by simply permuting the external gravitons. In addition, the results for other combinations of helicities can be obtained by the transformations for flipping helicities. \begin{figure} \begin{centering} \includegraphics[width=0.3\paperwidth]{4pt4graviton} \par\end{centering} \caption{$\left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}$} \label{fig:1} \end{figure} The helicity index $s$ can be $+$ or $-$. We choose the internal momentum incoming to the $12$ vertex and outgoing from the $34$ vertex. We call this choosing the direction of $k_{I}$. One can choose the other direction, and the result should be independent of the direction we choose. That means the result should be even in $k_{I}$. To decompose the diagram into two three-point functions, we analytically continue the outgoing graviton in the $34$ vertex to be incoming with the opposite momentum 4-vector and helicity \citep{Elvang:2013cua,Schwartz:2013pla}. Therefore \begin{align} \left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s} & =\frac{1}{256\left(k_{1}k_{2}k_{3}k_{4}k_{I}^{2}\right)^{2}}[\left(\left\langle 12\right\rangle \left\langle 2I\right\rangle \left\langle I1\right\rangle \left\langle 34\right\rangle \left\langle 4I\right]\left[I3\right\rangle \left(k_{1}+k_{2}+k_{I}\right)\left(k_{3}+k_{4}+k_{I}\right)\right)^{2}\nonumber \\ & +\left(\left\langle 34\right\rangle \left\langle 4I\right\rangle \left\langle I3\right\rangle \left\langle 12\right\rangle \left\langle 2I\right]\left[I1\right\rangle \left(k_{1}+k_{2}-k_{I}\right)\left(k_{3}+k_{4}-k_{I}\right)\right)^{2}]~.\label{eq:24} \end{align} Here we have summed over the helicity $s$. Using $\mathbf{k}_{I}=-\mathbf{k}_{1}-\mathbf{k}_{2}=\mathbf{k}_{3}+\mathbf{k}_{4}$ and applying Schouten's identity and momentum conservation repeatedly, we get \begin{align} \left\langle 2I\right\rangle \left\langle I1\right\rangle \left\langle 4I\right]\left[I3\right\rangle & =\left(k_{3}-k_{4}+k_{I}\right)\left(k_{1}-k_{2}+k_{I}\right)\left\langle 23\right\rangle \left\langle 14\right\rangle \nonumber \\ & +\left(-k_{3}+k_{4}+k_{I}\right)\left(k_{1}-k_{2}+k_{I}\right)\left\langle 24\right\rangle \left\langle 13\right\rangle +2k_{I}\left\langle 23\right\rangle \left\langle 24\right\rangle \left[21\right\rangle ~, \end{align} \begin{align} \left\langle 4I\right\rangle \left\langle I3\right\rangle \left\langle 2I\right]\left[I1\right\rangle & =\left(k_{3}-k_{4}-k_{I}\right)\left(k_{1}-k_{2}-k_{I}\right)\left\langle 23\right\rangle \left\langle 14\right\rangle \nonumber \\ & +\left(-k_{3}+k_{4}-k_{I}\right)\left(k_{1}-k_{2}-k_{I}\right)\left\langle 24\right\rangle \left\langle 13\right\rangle -2k_{I}\left\langle 23\right\rangle \left\langle 24\right\rangle \left[21\right\rangle ~. \end{align} We can see that the transformation for flipping helicities also works well for internal gravitons. The expressions now look much more complicated. The purpose of this step is to express the four-point functions in terms of the external momenta and $k_{I}$ only. Nevertheless, some angle-square bracket products remain, whereas we would expect only angle bracket products when all 4 external gravitons have $+$ helicity. In general, we would expect gravitons with $+$ helicity to be associated with angle brackets, and gravitons with $-$ helicity to be associated with square brackets. If there are terms which do not follow this pattern, we call such terms anomalous terms.
Note that we have simply chosen the anomalous term $\left[21\right\rangle $ to appear; one can derive similar expressions for all other possibilities. Now we expand the square and use \begin{equation} \left\langle 23\right\rangle \left\langle 24\right\rangle \left[21\right\rangle =\left(-k_{1}+k_{2}-k_{3}+k_{4}\right)\left\langle 23\right\rangle \left\langle 14\right\rangle +\left(-k_{1}+k_{2}+k_{3}-k_{4}\right)\left\langle 24\right\rangle \left\langle 13\right\rangle +\left\langle 13\right\rangle \left\langle 14\right\rangle \left[12\right\rangle ~,\label{eq:27} \end{equation} \noindent together with Equation (\ref{eq:20}) to eliminate squares of angle-square bracket products, finally arriving at \begin{align} \left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s} & =\frac{1}{256\left(k_{1}k_{2}k_{3}k_{4}k_{I}^{2}\right)^{2}}[2K_{c}\left(\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 34\right\rangle \left\langle 41\right\rangle \right)\left(\left\langle 12\right\rangle \left\langle 24\right\rangle \left\langle 43\right\rangle \left\langle 31\right\rangle \right)\nonumber \\ & +K_{s_{+}}\left(\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 34\right\rangle \left\langle 41\right\rangle \right)^{2}+K_{s_{-}}\left(\left\langle 12\right\rangle \left\langle 24\right\rangle \left\langle 43\right\rangle \left\langle 31\right\rangle \right)^{2}\nonumber \\ & -16\left(K_{a_{+}}\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 34\right\rangle \left\langle 41\right\rangle +K_{a_{-}}\left\langle 12\right\rangle \left\langle 24\right\rangle \left\langle 43\right\rangle \left\langle 31\right\rangle \right)\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 24\right\rangle \left\langle 34\right\rangle \left[21\right\rangle ]~,\label{eq:28} \end{align} \noindent where \noindent \begin{align} K_{c}= & \left(\left(\left(k_{1}+k_{2}+k_{I}\right)\left(k_{3}+k_{4}+k_{I}\right)\left(k_{1}-k_{2}+k_{I}\right)\right)^{2}+\left(\left(k_{1}+k_{2}-k_{I}\right)\left(k_{3}+k_{4}-k_{I}\right)\left(k_{1}-k_{2}-k_{I}\right)\right)^{2}\right)\nonumber \\ & \left(k_{3}-k_{4}+k_{I}\right)\left(k_{3}-k_{4}-k_{I}\right)\nonumber \\ + & 2k_{I}^{2}\left(\left(\left(k_{1}+k_{2}+k_{I}\right)\left(k_{3}+k_{4}+k_{I}\right)\right)^{2}+\left(\left(k_{1}+k_{2}-k_{I}\right)\left(k_{3}+k_{4}-k_{I}\right)\right)^{2}\right)\left(k_{1}-k_{2}+k_{I}\right)\left(k_{1}-k_{2}-k_{I}\right)~, \end{align} \begin{align} K_{s_{\pm}} & =\left(\left(k_{1}+k_{2}+k_{I}\right)\left(k_{3}+k_{4}+k_{I}\right)\left(k_{1}-k_{2}+k_{I}\right)\left(\pm k_{3}\mp k_{4}+k_{I}\right)\right)^{2}\nonumber \\ & +\left(\left(k_{1}+k_{2}-k_{I}\right)\left(k_{3}+k_{4}-k_{I}\right)\left(k_{1}-k_{2}-k_{I}\right)\left(\pm k_{3}\mp k_{4}-k_{I}\right)\right)^{2}~, \end{align} \begin{equation} K_{a_{\pm}}=\left(k_{1}+k_{2}+k_{3}+k_{4}\right)k_{I}^{2}\left(\left(k_{1}-k_{2}\right)\left(k_{3}-k_{4}\right)\pm k_{I}^{2}\right)\left(\left(k_{1}+k_{2}\right)\left(k_{3}+k_{4}\right)+k_{I}^{2}\right)~. \end{equation} The expression is certainly much longer than before, but the different hidden roles of the products of tensors are now very clear; see the discussion in the next section.
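As a quick consistency check on these coefficients (our own sketch, assuming the expressions above are transcribed correctly), a computer algebra system can verify that all of $K_{c}$, $K_{s_{\pm}}$ and $K_{a_{\pm}}$ vanish in the collinear limit of Section \ref{sec:4}, where $k_{1}+k_{2}+k_{I}\rightarrow0$ and $k_{3}+k_{4}-k_{I}\rightarrow0$ in the sense of analytic continuation:
\begin{verbatim}
import sympy as sp

k1, k2, k3, k4, kI = sp.symbols('k1 k2 k3 k4 k_I', real=True)

Kc = (((k1+k2+kI)*(k3+k4+kI)*(k1-k2+kI))**2
      + ((k1+k2-kI)*(k3+k4-kI)*(k1-k2-kI))**2) * (k3-k4+kI)*(k3-k4-kI) \
   + 2*kI**2*(((k1+k2+kI)*(k3+k4+kI))**2
      + ((k1+k2-kI)*(k3+k4-kI))**2) * (k1-k2+kI)*(k1-k2-kI)

def Ks(s):  # s = +1 gives K_{s+}, s = -1 gives K_{s-}
    return ((k1+k2+kI)*(k3+k4+kI)*(k1-k2+kI)*(s*(k3-k4)+kI))**2 \
         + ((k1+k2-kI)*(k3+k4-kI)*(k1-k2-kI)*(s*(k3-k4)-kI))**2

def Ka(s):
    return (k1+k2+k3+k4)*kI**2*((k1-k2)*(k3-k4)+s*kI**2) \
         * ((k1+k2)*(k3+k4)+kI**2)

# Collinear limit: k1+k2+kI -> 0 and k3+k4-kI -> 0 (analytic continuation),
# i.e. kI -> -(k1+k2) and k4 -> kI - k3 = -(k1+k2+k3).
collinear = {kI: -(k1+k2), k4: -(k1+k2+k3)}
for K in (Kc, Ks(+1), Ks(-1), Ka(+1), Ka(-1)):
    assert sp.simplify(K.subs(collinear, simultaneous=True)) == 0
\end{verbatim}
This foreshadows the vanishing of the all-$+$ correlator in the collinear limit discussed below.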
Note that although $A\left(++++\right)=0$ for gluon scattering, if we insert a higher-dimensional interaction, such as $\mathcal{L}_{I}\sim\mathrm{tr}\,F_{\mu}^{\nu}F_{\nu}^{\lambda}F_{\lambda}^{\mu}$, into the Yang-Mills theory, we would have $A\left(1^{+}2^{+}3^{+}4^{+}\right)\sim\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 34\right\rangle \left\langle 41\right\rangle $ and $A\left(1^{+}2^{+}4^{+}3^{+}\right)\sim\left\langle 12\right\rangle \left\langle 24\right\rangle \left\langle 43\right\rangle \left\langle 31\right\rangle $. In addition, the KLT relation for flat spacetime (see Equation (\ref{eq:1})) still holds if we also insert the corresponding interaction $\mathcal{L}_{I}\sim R^{3}$ into Einstein gravity \citep{Cohen:2010mi}. Therefore the first line in Equation (\ref{eq:28}) is analogous to the KLT relation, while the second and third lines are extra terms representing new features of the amplitudes that are not present in flat spacetime. For convenience, we refer to the first, second and third lines as cross terms, square terms and anomalous terms, respectively. \textcolor{black}{As a remark, the existence of ``$\mathrm{Gravity}=\mathrm{Gauge}^{2}$'' is trivial in our case since the polarization tensors are already squares of gauge-theory polarization vectors from the beginning of the computation. However, our main concern is what the ``squares'' look like in the KLT-like relation, which remains non-trivial and interesting.} Equation (\ref{eq:28}) also does not look symmetric. This is because we have chosen $\left[21\right\rangle $ to be the anomalous term. In principle, we can have a more symmetric form, but it turns out that the biased form is more convenient for further calculations. We have the freedom to choose the anomalous term so as to facilitate the calculations. Before analysing the results, let us make a remark. The angle-square bracket products can, in fact, be expressed in terms of ordinary spinor products. Using Equation (\ref{eq:27}) again, multiplying it by $\left\langle 13\right\rangle \left\langle 14\right\rangle \left[12\right\rangle $ and noting that $\left[21\right\rangle \left[12\right\rangle =k_{I}^{2}-\left(k_{1}-k_{2}\right)^{2}$ (which follows from Equation (\ref{eq:20}) with $\mathbf{k}_{I}=-\mathbf{k}_{1}-\mathbf{k}_{2}$), we obtain a quadratic equation whose solution is, for example, \[ \left\langle 23\right\rangle \left\langle 24\right\rangle \left[21\right\rangle =\frac{1}{2}\left(C\pm\sqrt{C^{2}-4\left(k_{1}-k_{2}+k_{I}\right)\left(k_{1}-k_{2}-k_{I}\right)\left\langle 23\right\rangle \left\langle 14\right\rangle \left\langle 24\right\rangle \left\langle 13\right\rangle }\right)~, \] \begin{equation} C=\left(-k_{1}+k_{2}-k_{3}+k_{4}\right)\left\langle 23\right\rangle \left\langle 14\right\rangle +\left(-k_{1}+k_{2}+k_{3}-k_{4}\right)\left\langle 24\right\rangle \left\langle 13\right\rangle ~,\label{eq:35} \end{equation} \noindent and one can use other conditions to determine the sign above. However, it remains hard to interpret the square root, and the expression becomes even longer. Therefore we prefer to keep the angle-square bracket products. \subsection{\textcolor{black}{Comparing with Yang-Mills Theory in Inflation}} \textcolor{black}{So far we have compared the graviton correlation functions in inflation with the colour-stripped Yang-Mills scattering amplitudes in flat spacetime. Although this suffices to show the new features of the KLT-like relation, to be precise we should also compare with the colour-stripped Yang-Mills correlation functions in inflation. Note that in our simplest inflation model, we do not have Yang-Mills interactions.
However, since the behaviour of ``$\mathrm{Gravity}=\mathrm{Gauge}^{2}$'' is still clear for three-point functions (see Appendix \ref{sec:B}), it is natural to write down \begin{equation} \left\langle 1^{+}2^{+}3^{+}\right\rangle ^{YM}=\frac{\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 31\right\rangle }{4k_{1}k_{2}k_{3}}\left(k_{1}+k_{2}+k_{3}\right)\;, \end{equation} } \noindent \textcolor{black}{and similar expressions for other combinations of helicities. We then just repeat the calculations in the previous section to obtain the corresponding four-point functions. To see whether the analog of the KLT relations is non-trivial when both sides are in inflation, we simply compare $\left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}$ with $\left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}^{YM}\left\langle 1^{+}2^{+}4^{+}3^{+}\right\rangle _{s}^{YM}$. We then have \begin{alignat}{1} \left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}^{YM}\left\langle 1^{+}2^{+}4^{+}3^{+}\right\rangle _{s}^{YM} & =\frac{1}{256\left(k_{1}k_{2}k_{3}k_{4}k_{I}^{2}\right)^{2}}[2k_{c}\left(\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 34\right\rangle \left\langle 41\right\rangle \right)\left(\left\langle 12\right\rangle \left\langle 24\right\rangle \left\langle 43\right\rangle \left\langle 31\right\rangle \right)\nonumber \\ & -k_{s_{+}}\left(\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 34\right\rangle \left\langle 41\right\rangle \right)^{2}-k_{s_{-}}\left(\left\langle 12\right\rangle \left\langle 24\right\rangle \left\langle 43\right\rangle \left\langle 31\right\rangle \right)^{2}\nonumber \\ & -16\left(k_{a_{+}}\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 34\right\rangle \left\langle 41\right\rangle +k_{a_{-}}\left\langle 12\right\rangle \left\langle 24\right\rangle \left\langle 43\right\rangle \left\langle 31\right\rangle \right)\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 24\right\rangle \left\langle 34\right\rangle \left[21\right\rangle ]~, \end{alignat} } \noindent \textcolor{black}{where \begin{alignat}{1} k_{c}= & (\left(k_{1}+k_{2}+k_{I}\right)\left(k_{3}+k_{4}+k_{I}\right)\left(k_{1}-k_{2}+k_{I}\right)\left(k_{3}-k_{4}+k_{I}\right)\nonumber \\ & +\left(k_{1}+k_{2}-k_{I}\right)\left(k_{3}+k_{4}-k_{I}\right)\left(k_{1}-k_{2}-k_{I}\right)\left(k_{3}-k_{4}-k_{I}\right))\nonumber \\ & (\left(k_{1}+k_{2}+k_{I}\right)\left(k_{3}+k_{4}+k_{I}\right)\left(k_{1}-k_{2}+k_{I}\right)\left(-k_{3}+k_{4}+k_{I}\right)\nonumber \\ & +\left(k_{1}+k_{2}-k_{I}\right)\left(k_{3}+k_{4}-k_{I}\right)\left(k_{1}-k_{2}-k_{I}\right)\left(-k_{3}+k_{4}-k_{I}\right))\nonumber \\ + & 8k_{I}^{4}\left(k_{1}+k_{2}+k_{3}+k_{4}\right)^{2}\left(k_{1}-k_{2}+k_{I}\right)\left(k_{1}-k_{2}-k_{I}\right)\;, \end{alignat} \begin{align} k_{s_{\pm}}= & (\left(k_{1}+k_{2}+k_{I}\right)\left(k_{3}+k_{4}+k_{I}\right)\left(k_{1}-k_{2}+k_{I}\right)\left(\pm k_{3}\mp k_{4}+k_{I}\right)\nonumber \\ & +\left(k_{1}+k_{2}-k_{I}\right)\left(k_{3}+k_{4}-k_{I}\right)\left(k_{1}-k_{2}-k_{I}\right)\left(\pm k_{3}\mp k_{4}-k_{I}\right))^{2}\;, \end{align} } \textcolor{black}{ \begin{equation} k_{a_{\pm}}=k_{I}^{4}\left(k_{1}+k_{2}+k_{3}+k_{4}\right)^{2}\left(\mp k_{1}\pm k_{2}-k_{3}+k_{4}\right)\;.
\end{equation} } \textcolor{black}{It turns out that all three types of terms are still present, but it is clear that we cannot simply relate $\left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}$ to $\left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}^{YM}\left\langle 1^{+}2^{+}4^{+}3^{+}\right\rangle _{s}^{YM}$ by adding a kinematic factor. Moreover, we cannot write $\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 34\right\rangle \left\langle 41\right\rangle $ and $\left\langle 12\right\rangle \left\langle 24\right\rangle \left\langle 43\right\rangle \left\langle 31\right\rangle $ in terms of $\left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}^{YM}$ and $\left\langle 1^{+}2^{+}4^{+}3^{+}\right\rangle _{s}^{YM}$. Therefore we have demonstrated that the KLT-like relations are even more non-trivial than those in the previous section when both sides are in inflation. This proves the existence of the new features mentioned above. As a remark, here we do not even have a clear picture of the four-point interactions, and whether adding up channels can cause extra simplifications is even less clear than in the previous sections. Since the relation becomes much less clear when both sides are in inflation, in the following analysis we stick with the interpretation in terms of flat spacetime amplitudes.} \section{Behaviour of the Four-Point Functions\label{sec:4}} We still need interpretations for the extra terms. To make better sense of them, we study their behaviour in several limits, namely the collinear limit, the squeezed limit and the collapsed limit, see Figure \ref{fig:2}. In the previous literature, many interesting properties of (mostly scalar) correlation functions have been found in such limits \citep{Maldacena:2002vr,Creminelli:2004yq,Seery:2008ax,Li:2008gg,Assassi:2012zq,Senatore:2012wy,Berezhiani:2013ewa,Hinterbichler:2013dpa,Fu:2015vja,Chu:2018ovy}. We expect that similar properties can be found here, and from them we can identify the correspondence between such properties and the types of terms. As mentioned above, we choose the anomalous term which most facilitates our calculations. Here we fix the choice $\left[21\right\rangle $ as far as possible and only work out some independent combinations of helicities. The results can be easily generalized to all combinations with other choices of anomalous terms. \begin{figure} \begin{centering} \includegraphics[width=0.8\paperwidth]{limits} \par\end{centering} \caption{Momentum configurations in different limits} \label{fig:2} \end{figure} \subsection{Collinear Limit} Here we let the external momentum vectors \textcolor{black}{align} along the same straight line. Then the 3-momentum conservation $\mathbf{k}_{I}=-\mathbf{k}_{1}-\mathbf{k}_{2}=\mathbf{k}_{3}+\mathbf{k}_{4}$ implies $k_{1}+k_{2}+k_{I}\rightarrow0$ and $k_{3}+k_{4}-k_{I}\rightarrow0$ in the sense of analytic continuation. Here we maintain the choice of direction of $k_{I}$ as above, and one can replace $k_{I}$ with $-k_{I}$. These limits look like energy conservation, but keep in mind that $k$ is just the magnitude of the 3-momentum and is not related to energy in general. Therefore, we are not recovering the full flat spacetime result. \textcolor{black}{As a remark, we cannot really construct the flat spacetime limit like the one in \cite{Raju:2012zs,Raju:2012zr}. This is the price we pay for greatly simplifying the computation in the previous section by discarding the information about the energy of the internal graviton.
Even if we could construct the flat spacetime limit, we could not demonstrate the well-known behaviour of the full flat spacetime amplitudes here, since we are not adding up channels in our previous results.} \begin{itemize} \item All $+$ \end{itemize} From Equation (\ref{eq:24}), we can already see that the correlation function vanishes. One interesting point is that if we only take the limit $k_{1}+k_{2}+k_{3}+k_{4}\rightarrow0$, the correlation function is still not zero, but the anomalous term disappears. \begin{itemize} \item Three $+$ and One $-$ \end{itemize} In this limit we have $\left\langle 12\right\rangle \left[12\right]=\left(k_{1}+k_{2}+k_{I}\right)\left(k_{1}+k_{2}-k_{I}\right)=0$, so $\left\langle 12\right\rangle =0$ or $\left[12\right]=0$. If $\left\langle 12\right\rangle =0$, when we consider $\left\langle 1^{+}2^{-}3^{+}4^{+}\right\rangle _{s}$, the anomalous term vanishes. It is also clear that $K_{c}$ and $K_{s_{\pm}}$ vanish after flipping the helicity. Therefore $\left\langle 1^{+}2^{-}3^{+}4^{+}\right\rangle _{s}=0$. Changing the choice of anomalous term from $\left[21\right\rangle $ to $\left[12\right\rangle $ in Equation (\ref{eq:28}), we also have $\left\langle 1^{-}2^{+}3^{+}4^{+}\right\rangle _{s}=0$. Similarly, if $\left[12\right]=0$, we have $\left\langle 1^{+}2^{-}3^{-}4^{-}\right\rangle _{s}=\left\langle 1^{-}2^{+}3^{-}4^{-}\right\rangle _{s}=0$. Since such correlation functions become their complex conjugates when all external helicities are flipped \citep{Fu:2015vja}, the conclusions hold for all cases. Here we find behaviour similar to that in flat spacetime. This is because, for amplitudes, if the internal graviton also goes on-shell, the conditions of energy conservation at each vertex and the limits here become equivalent. We then expect to recover some special cases of flat spacetime, namely that vanishing amplitudes remain vanishing. \begin{itemize} \item Two $+$ and Two $-$ \end{itemize} If we flip one external graviton from $+$ to $-$ at each vertex, it is easy to see that only the square terms dominate, for the same reason as above. For example, \begin{equation} \left\langle 1^{+}2^{-}3^{-}4^{+}\right\rangle _{s}=\frac{\left(\left\langle 12\right]\left[23\right]\left[34\right\rangle \left\langle 41\right\rangle \right)^{2}}{k_{2}^{2}k_{3}^{2}}~. \end{equation} The result here seems to contradict what we have in flat spacetime, but this is just because we have not considered the contributions from other channels. For example, if we flip the helicities of two external gravitons at one vertex, i.e.\ for $\left\langle 1^{-}2^{-}3^{+}4^{+}\right\rangle _{s}$ and $\left\langle 1^{+}2^{+}3^{-}4^{-}\right\rangle _{s}$, all three types of terms are important. However, to consider the full contribution we also need to consider the scalar parts. We will leave this for future work. From the above, we can see how the signs in the prefactors of spinor products control the helicity structures. Such behaviour is not fully clear in flat spacetime. \subsection{Squeezed Limit} Here we let one external graviton become soft. For example, we consider the limit $k_{1}\rightarrow0$. The condition of momentum conservation becomes $\mathbf{k}_{2}+\mathbf{k}_{I}=0$ and thus $k_{2}=-k_{I}$. The sign here depends on how we define the direction of $k_{I}$, but the results are the same for all cases. It is clear that for all combinations of helicities, the only dominant term is the anomalous term if we keep the choice $\left[21\right\rangle $.
For instance, \begin{align} \left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}= & \frac{\left(k_{2}+k_{3}+k_{4}\right)^{2}}{16\left(k_{1}k_{2}k_{3}k_{4}\right)^{2}}\nonumber \\ & \left[\left(-k_{2}+k_{3}-k_{4}\right)\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 34\right\rangle \left\langle 41\right\rangle +\left(k_{2}+k_{3}-k_{4}\right)\left\langle 12\right\rangle \left\langle 24\right\rangle \left\langle 43\right\rangle \left\langle 31\right\rangle \right]\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 24\right\rangle \left\langle 34\right\rangle \left[21\right\rangle ~. \end{align} Note that $\left[21\right\rangle \left[12\right\rangle =\left(k_{I}+k_{1}-k_{2}\right)\left(k_{I}-k_{1}+k_{2}\right)=0$, so either $\left[21\right\rangle $ or $\left[12\right\rangle $ vanishes. We look at the non-vanishing case of $\left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}$, i.e.\ $\left[21\right\rangle \neq0$. Then by Equation (\ref{eq:35}) \begin{align} \left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s} & =\frac{\left(k_{2}+k_{3}+k_{4}\right)^{2}}{16\left(k_{1}k_{2}k_{3}k_{4}\right)^{2}}\left[\left(-k_{2}+k_{3}-k_{4}\right)\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 34\right\rangle \left\langle 41\right\rangle +\left(k_{2}+k_{3}-k_{4}\right)\left\langle 12\right\rangle \left\langle 24\right\rangle \left\langle 43\right\rangle \left\langle 31\right\rangle \right]^{2}\nonumber \\ & =\frac{\left(k_{2}+k_{3}+k_{4}\right)^{2}}{16\left(k_{1}k_{2}k_{3}k_{4}\right)^{2}}\left(\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 24\right\rangle \left\langle 34\right\rangle \left[21\right\rangle \right)^{2}~. \end{align} Here the anomalous term and the other types of terms can be converted into each other. Therefore the non-trivial relation between anomalous terms and other terms can be recovered in this limit. As a check, when we also take $k_{2}\rightarrow0$, which implies $k_{I}\rightarrow0$, we indeed recover the grouping of terms found in the collapsed limit. Since $\left|1\right\rangle $ has order $\sqrt{k_{1}}$, both the numerator and the denominator have order $k_{1}^{2}$, and the expression is indeed finite. Note that in this limit the relation $k_{1}+k_{2}+k_{I}\rightarrow0$ still holds, so either $\left\langle 12\right\rangle $ or $\left[12\right]$ vanishes. Therefore, for the $s$ channel, at most two of the configurations $1^{+}2^{+},\:1^{-}2^{+},\:1^{+}2^{-},\:1^{-}2^{-}$ can lead to non-zero contributions. Interestingly, in this limit we can find a relation between the four-point functions and the three-point functions, including the scalar parts. In this limit, there are simple consistency relations for scalar correlators \citep{Assassi:2012zq,Berezhiani:2013ewa,Creminelli:2004yq,Hinterbichler:2013dpa}, but these relations do not apply here since we are considering one specific channel, instead of the full contributions. Nevertheless, using the expression in \citep{Seery:2008ax} we can still construct \begin{equation} \frac{\left\langle \phi_{\mathbf{k}_{1}}\phi_{\mathbf{k}_{2}}\phi_{\mathbf{k}_{3}}\phi_{\mathbf{k}_{4}}\right\rangle _{s}'}{\left\langle \phi_{\mathbf{k}_{1}}\phi_{-\mathbf{k}_{1}}\right\rangle '}=-\frac{\partial}{\partial\left(k_{2}^{2}\right)}\left\langle \phi_{\mathbf{k}_{2}}\phi_{\mathbf{k}_{3}}\phi_{\mathbf{k}_{4}}\right\rangle '~, \end{equation} \noindent up to an unimportant numerical factor. Here $\left\langle \phi...\phi\right\rangle $ denotes the scalar parts of the corresponding correlation functions.
Note that here it is natural to choose the direction of $k_{I}$ to be incoming to the $34I$ vertex, i.e.\ $k_{2}=k_{I}$, in order to construct the three-point function. Now we add back the tensor parts and get \begin{align} \frac{\left\langle h_{\mathbf{k}_{1}}^{+}h_{\mathbf{k}_{2}}^{+}h_{\mathbf{k}_{3}}^{+}h_{\mathbf{k}_{4}}^{+}\right\rangle _{s}'}{\left\langle h_{\mathbf{k}_{1}}^{+}h_{\mathbf{-k}_{1}}^{+}\right\rangle '} & =-\frac{\left(k_{2}+k_{3}+k_{4}\right)^{2}}{16\left(k_{1}k_{2}k_{3}k_{4}\right)^{2}}\left(\left\langle 12\right\rangle \left\langle 23\right\rangle \left\langle 24\right\rangle \left\langle 34\right\rangle \left[21\right\rangle \right)^{2}\frac{\partial}{\partial\left(k_{2}^{2}\right)}\frac{16k_{2}^{2}k_{3}^{2}k_{4}^{2}\left\langle h_{\mathbf{k}_{2}}^{+}h_{\mathbf{k}_{3}}^{+}h_{\mathbf{k}_{4}}^{+}\right\rangle '}{\left\langle 23\right\rangle ^{2}\left\langle 34\right\rangle ^{2}\left\langle 42\right\rangle ^{2}\left(k_{2}+k_{3}+k_{4}\right)^{2}}\nonumber \\ & =-\frac{\left[\left\langle 12\right\rangle \left[21\right\rangle \right]^{2}}{k_{1}^{2}}\frac{\partial}{\partial\left(k_{2}^{2}\right)}\left\langle h_{\mathbf{k}_{2}}^{+}h_{\mathbf{k}_{3}}^{+}h_{\mathbf{k}_{4}}^{+}\right\rangle '+K\left\langle h_{\mathbf{k}_{2}}^{+}h_{\mathbf{k}_{3}}^{+}h_{\mathbf{k}_{4}}^{+}\right\rangle '~, \end{align} \noindent where $K$ is a kinematic factor. Here it is interesting to observe the cancellation of kinematic factors in the derivative term. Note that $\frac{\left[\left\langle 12\right\rangle \left[21\right\rangle \right]^{2}}{k_{1}^{2}}\sim\epsilon_{ij}^{1}k_{i}^{2}k_{j}^{2}$, so this relation has the same form as the consistency relation between three-point and two-point functions \citep{Creminelli:2004yq,Maldacena:2002vr,Hinterbichler:2013dpa}, but with an extra term proportional to the three-point function due to the presence of tensor parts in the three-point function. \subsection{Collapsed Limit} We take the collapsed limit $k_{I}\rightarrow0$ and consider the leading-order terms in Equation (\ref{eq:28}). This also implies that $\left|k_{1}\right|\approx\left|k_{2}\right|$ and $\left|k_{3}\right|\approx\left|k_{4}\right|$, since the limit forces $\mathbf{k}_{1}+\mathbf{k}_{2}=\mathbf{k}_{3}+\mathbf{k}_{4}\approx0$. However, since the magnitudes are only approximately equal, we keep them distinct; in this way we implicitly retain some higher-order terms. It is clear that for all combinations of helicities the anomalous terms become negligible while the other terms remain important. However, the remaining terms can be grouped into a simple expression by Schouten's identity. For example, \begin{equation} \left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}=\frac{\left(\left(k_{1}^{2}-k_{2}^{2}\right)\left(k_{3}^{2}-k_{4}^{2}\right)\right)^{2}}{128\left(k_{1}k_{2}k_{3}k_{4}k_{I}^{2}\right)^{2}}\left\langle 12\right\rangle ^{4}\left\langle 34\right\rangle ^{4}~.\label{eq:33} \end{equation} We then have a nice factorization. This can be understood as follows: when the helicity of the internal graviton changes, we have $k_{I}\rightarrow-k_{I}$, but the expression is insensitive to this in the collapsed limit. Therefore we can recover the multiplication of two three-point vertices. It also shows that there exist relations between the different types of terms, while such relations are non-trivial in flat spacetime.
It is worth mentioning that although we do not know the explicit forms of higher-point functions, by the same logic we can factorize a correlation function into a product of two lower-point correlation functions when one of the internal gravitons becomes soft \citep{Assassi:2012zq,Senatore:2012wy}, see Figure \ref{fig:3}. The soft internal gravitons in the two lower-point correlation functions have opposite 3-momenta and the same helicity, which is equivalent to the analytic continuation we perform for four-point functions when we only consider one internal graviton. This is because in this case both are equivalent to transforming $\left|k\right]\rightarrow\left|k\right\rangle $ and vice versa \citep{Maldacena:2011nz}. However, the result is the same no matter what helicity and direction of momentum of the soft internal graviton we choose in one of the correlation functions. Conventionally, after the factorization, we can apply the consistency relations in the squeezed limit mentioned in the last section to further simplify the correlation functions. An example is given in Appendix \ref{sec:C}. \begin{figure} \begin{centering} \includegraphics[width=0.8\paperwidth]{collapsedlimit} \par\end{centering} \caption{Factorization of correlation function in collapsed limit into two correlation functions in squeezed limit} \label{fig:3} \end{figure} As a check, we let all momenta lie on the same plane and take $\left|k_{1}\right|=\left|k_{2}\right|$ and $\left|k_{3}\right|=\left|k_{4}\right|$. Then both the numerator and the denominator seem to vanish. However, before doing so, we can first approximate $k_{I}\approx\pm\left(\left|k_{1}\right|-\left|k_{2}\right|\right)\approx\pm\left(\left|k_{3}\right|-\left|k_{4}\right|\right)$ and $\left|2\right\rangle \rightarrow-\left|1\right],\left|4\right\rangle \rightarrow-\left|3\right]$ \citep{Maldacena:2011nz}. The signs depend on how we define the direction of $k_{I}$, but the result is independent of the signs. Finally, Equation (\ref{eq:33}) becomes simply \begin{equation} \left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s}=32k_{1}^{2}k_{3}^{2}~. \end{equation} Note that it is non-zero only when the external helicities are the same at each vertex. In other words, only $\left\langle 1^{+}2^{+}3^{+}4^{+}\right\rangle _{s},\left\langle 1^{-}2^{-}3^{+}4^{+}\right\rangle _{s},\left\langle 1^{+}2^{+}3^{-}4^{-}\right\rangle _{s}$ and $\left\langle 1^{-}2^{-}3^{-}4^{-}\right\rangle _{s}$ do not vanish. Therefore our calculation is consistent with the cases with external scalars \citep{Seery:2008ax} and linear polarization \citep{Fu:2015vja}. In general, if not all momenta lie on the same plane, there is also an angular dependence from the configurations of momenta, see Appendix \ref{sec:C}, but here it is conveniently encoded in the spinor products. The first-order contribution in $k_{I}$ vanishes, as mentioned before. At second order, the expression becomes complicated and contains anomalous terms. To summarize, the different types of terms have their own peculiar roles and properties in the whole correlation functions, while there exist non-trivial relations between the different types of terms which can be recovered in certain limits. Such a division of roles interestingly controls the helicity structure and represents the distinction between de Sitter spacetime and flat spacetime. Various properties of the correlation functions also become transparent in our formalism.
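As a closing sanity check on the coplanar collapsed-limit reduction above (our own sketch, not part of the original text), one can verify symbolically that the prefactor of Equation (\ref{eq:33}), combined with $\left\langle 12\right\rangle ^{4}\left\langle 34\right\rangle ^{4}\rightarrow\left(2k_{1}\right)^{4}\left(2k_{3}\right)^{4}$ following from the substitutions $\left|2\right\rangle \rightarrow-\left|1\right],\left|4\right\rangle \rightarrow-\left|3\right]$, indeed tends to $32k_{1}^{2}k_{3}^{2}$:
\begin{verbatim}
import sympy as sp

k1, k3, kI = sp.symbols('k1 k3 k_I', positive=True)
# In the coplanar configuration, k2 ~ k1 - k_I and k4 ~ k3 - k_I (up to signs
# fixed by the chosen direction of k_I); the final answer is sign-independent.
k2, k4 = k1 - kI, k3 - kI

prefactor = ((k1**2 - k2**2) * (k3**2 - k4**2))**2 \
          / (128 * (k1 * k2 * k3 * k4 * kI**2)**2)
# <12>^4 <34>^4 -> (<11])^4 (<33])^4 = (2 k1)^4 (2 k3)^4
spinor_part = (2 * k1)**4 * (2 * k3)**4

limit_value = sp.limit(prefactor * spinor_part, kI, 0)
assert sp.simplify(limit_value - 32 * k1**2 * k3**2) == 0
\end{verbatim}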
\section{Conclusion\label{sec:5}} In this paper, we generalize the inflationary spinor helicity formalism of \citep{Maldacena:2011nz} to four-point functions. Through this we derive a KLT-like relation, which contains extra terms, including terms that do not look like squares of amplitudes, when compared to what the KLT relations in flat spacetime predict. \textcolor{black}{These terms may be new features of de Sitter spacetime or inflation, which make the KLT structure very unclear. Therefore, what we present here is only preliminary work towards constructing a complete KLT-like relation, and much further research in this direction can be pursued.} Interesting topics along this direction include \begin{itemize} \item It remains interesting to seek more interpretations and physical meaning for those extra terms. There may be non-trivial relations between the different types of terms. In addition, it is important to find further ways to simplify the anomalous terms, since so far we do not have a rigorous proof of whether the anomalous terms are really ``anomalous'' or just a non-trivial form of squares of amplitudes. This possibility remains open since our calculations can be facilitated by certain choices of anomalous terms, showing that there are still other relations hidden between the anomalous terms and the other terms. \item It may be useful to consider the diagrams of other channels or permutations of external gravitons. As in the flat spacetime case, the correlators may be further simplified when we add up different contributions. However, to do so we also need the information of the scalar parts, since they are in general different for different channels \citep{Seery:2008ax}. Here we only focus on the tensor parts, but it would be nice to see whether such simplifications happen and whether new properties may be discovered in this way. \item It is natural to investigate higher-than-four-point correlations. We expect the KLT relations for higher-point amplitudes to appear as parts of these relations. However, the interpretations of the extra terms may change. In addition, it would be interesting if new types of extra terms, especially anomalous terms, appear in higher-point functions. \item Here we only work with the simplest minimally coupled inflation model with Einstein gravity. There is a large variety of inflation models and modified gravity theories. One may investigate the applications of the formalism here to other models. This generalization is highly non-trivial since the factorization into scalar parts and tensor parts can only be applied to some specific models. \item So far we have focused only on the spinor helicity formalism, which may not be the best method because the symmetries required by the formalism are not all present in inflation. However, there are other symmetries in de Sitter spacetime that are not present in flat spacetime, such as the conformal symmetries. One may use the conformal invariance of correlators to constrain their forms, bypassing direct computation. For example, this can help us tackle the tedious algebra of special functions when we consider massive fields \citep{Arkani-Hamed:2015bza}. This kind of work has been done for three-point functions, both for interactions purely among gravitons and for scalar--graviton interactions \citep{Maldacena:2011nz,Mata:2012bx}. It would be interesting to know whether it also works for higher-point functions.
In fact, similar work has been done in, for example, \citep{Raju:2012zs} in the context of the AdS/CFT correspondence \citep{Maldacena:1997re,Maldacena:2002vr}. \end{itemize} We hope to address some of the above issues in future studies. \begin{acknowledgments} We thank Henry Tye for many helpful discussions. This work is supported in part by ECS Grant 26300316 and GRF Grant 16301917 from the Research Grants Council of Hong Kong. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Executive Summary} \label{sec:summary} \subsection{Objectives} \label{subsec:obj} The primary purpose of the workshop was to bring together a wide variety of participants in the exoplanet atmospheres community and beyond (Solar System and Earth Sciences) to discuss 3D general circulation models (sometimes also known as Global Climate Models, GCMs) in the context of exoplanet climates and atmospheric characterization. Specifically, the THAI project and workshop focused on the modeling of TRAPPIST-1e, as it represents perhaps the best candidate for observation and atmospheric characterization of a terrestrial-sized exoplanet in the habitable zone. The THAI project was used as a vector for comparisons and discussions between the various GCMs that are currently commonly used for modeling terrestrial extrasolar planets. Particular attention was given to key parameterizations such as surface properties, moist convection, water clouds, radiative transfer, and non-LTE processes. Finally, we also discussed how 1D models, such as energy balance models (EBMs) or single-column radiative-convective models, complement 3D models for exoplanet studies. \subsection{Organization} \label{subsec:org} Due to the COVID-19 pandemic, the THAI workshop was held virtually instead of in person as originally planned. The Scientific Organizing Committee (SOC) consisted of Thomas J. Fauchez, Shawn D. Domagal-Goldman, Ravi Kumar Kopparapu, Linda Sohl, Martin Turbet, Michael J. Way and Eric T. Wolf. The THAI SOC worked with Knowinnovation (\url{https://knowinnovation.com/}), a company led by Andy Burnett and assisted by Najja Bouldin and John Cabra, to build the conference website and to organize the live discussions. Each of the twenty-six talks (about 12 to 15~min in length) was pre-recorded by its speaker and uploaded to the conference website at least a week before the live part of the workshop (September 14\textsuperscript{th} to 16\textsuperscript{th}). The talks are also permanently available on the \href{https://www.youtube.com/channel/UCb0gqdGHntaPKxEuvc88Irg}{\textit{NExSS Youtube channel}}. The workshop attendees were therefore able to watch the presentations in advance and write questions to the speakers. Live sessions were limited to three hours per day (9~am to 12~pm EDT), divided into three parts: 1) a question and answer (Q$\&$A) session about the pre-recorded talks, 2) a coffee break in a 2D virtual reality space, and 3) breakout discussions. Having an important part of the workshop offline helped to mitigate the impact of time zone differences and travel issues, allowing more people, especially from underrepresented groups, to attend. \subsection{Main Outcomes} \label{subsec:out} This workshop's main scientific result is the intercomparison of four mature 3D GCMs used for simulating terrestrial climates; this will be presented in three separate papers as a part of a special issue of the Planetary Science Journal. During the workshop, the inter-model differences in the convection and cloud parameterizations were highlighted as key culprits for disagreements between simulated climates. This necessitates future model development in this area of climate modeling, particularly given the importance of clouds and hazes in the observation and characterization of exoplanetary atmospheres. Furthermore, the dominance of one surface type or another --- e.g., ice, land, or ocean --- alters the planetary albedo, which can significantly influence climate and habitability (Sec.~\ref{subsec:surf}).
Latitudinal EBM simulations either underestimate or overestimate the strong day-night side contrast for synchronously rotating planets, although a longitudinal EBM can provide a better representation of the temperature contrast between hemispheres (Sec.~\ref{subsec:syn}). A two-column approach (day and night sides) shows promising results in capturing the globally averaged surface temperature and some degree of the hemispheric asymmetries. In the various discussions during the workshop, certain aspects of model intercomparison and potential areas of improvement were found to be shared with similar questions of modeling atmospheres of solar system planets, including Earth. Concerns were raised over the carbon footprint of GCM simulations, mostly due to the electricity demand of supercomputers. Performing runs responsibly and optimizing the GCM to reduce the computational time have been identified as the best mitigation strategies, and this report also recommends that future studies using GCMs evaluate the amount of CO$_2$ emissions related to the modeling activities and disclose it in the paper. However, those considerations should not prevent researchers from performing the numerical experiments required by their science investigations. Finally, we need to advance aspects of diversity, inclusivity, belonging, and justice in the field. This will require multiple efforts at both the individual level, and at the group and community levels. The long-term positives from such efforts will improve both the community we do our research within and the products from that community. This workshop report is structured as follows. In section \ref{sec:overview} we introduce the THAI project and how Earth and Solar System intercomparisons can help us to build meaningful ones for the exoplanet community. In section \ref{sec:predict_obs} we discuss how GCMs are crucial to predict and interpret exoplanet observations. We then review in section \ref{sec:GCMparam} GCM parameterizations for exoplanets, their limits and the developments needed. In section \ref{sec:survey} we show the results of a survey filled out by the workshop participants concerning the future of exoplanet GCMs. We follow in section \ref{sec:div} with a discussion of diversity, equity and inclusion in the community. Finally, conclusions and perspectives are given in section \ref{sec:end}. \section{General Discussions about Intercomparisons} \label{sec:overview} In this section we present the THAI project and we discuss how current Earth and Solar System intercomparisons can help us to build successful ones for exoplanets. \subsection{GCM intercomparisons for Earth and beyond} \label{subsec:inter} No object in space is better studied, or has as significant a dataset regarding its extant and past states, as the Earth. Indeed, much of the understanding of planets in our solar system and exoplanets has been informed by principles gleaned from the study of different processes on the Earth. This is especially true when considering the impact of geophysics on the modeling of exoplanets using GCMs. The development of GCMs for Earth Science studies has enabled their use for other planets. But there are also key ways in which exoplanet GCM simulations can in turn improve our understanding of the Earth system and its evolution. At a fundamental level, exoplanet GCM simulations inherently act as a means of stress testing and phase space exploration that can then be applied to Earth-tuned models.
Due to the wide range of conditions that may exist on exoplanets, GCM simulations are typically run with lower complexity than leading Earth-tuned models, but explore conditions over a wider phase space that may lie at the limits of the model's physical validity. The results from these models can expose minor bugs or clarify the effects of varying specific parameters that can potentially feed back to Earth-tuned models and the geophysics assumptions that underpin them. The lesser expense of these simpler exoplanet GCM simulations and their tendency to explore this wider phase space also have a more direct influence on the understanding of the effect of certain features of geophysics on Earth. Simulations that explore the general effects of variations in bulk geophysical parameters or of external parameters are now often run for a range of exoplanet parameters or for a broad suite of terrestrial planets \citep{Wolf2017, Way2018}. These simulations and their findings can then provide a library of outcomes and lessons that can be applied to specific geophysical conditions in simulations of the extant Earth or in paleo-Earth climate simulations. In this subsection, we review briefly some of the highlights of previous Earth, Solar System and exoplanet intercomparisons. \subsubsection{The TRAPPIST Habitable Atmosphere Intercomparison (THAI)} \label{sec:thai} Preliminary results of the THAI intercomparison \citep{Fauchez2020THAI} have been published. Four atmospheric compositions have been simulated by the four GCMs (see Sec.~\ref{subsec:GCMs}). The simulated atmospheres include two dry (no surface liquid water) benchmark cases ``Ben1" and ``Ben2", presented by Martin Turbet (\url{https://www.youtube.com/watch?v=B8a2-G8NmmA}), with atmospheric compositions of 1 bar of N$_2$ with 400 ppm of CO$_2$, and of pure CO$_2$ at 1 bar, respectively, and two moist habitable cases ``Hab1" and ``Hab2", with a global ocean and the same respective atmospheric compositions, presented by Thomas J. Fauchez (\url{https://www.youtube.com/watch?v=kYLbp_BrJFs}). The overall outcome of the Ben1 and Ben2 cases is a good agreement between the four GCMs in terms of surface and atmospheric fields. However, we note some differences in the circulation regime, which manifest a sensitivity of TRAPPIST-1e simulations to GCM setup due to a combination of planetary parameters, noted e.g., in \citet{Sergeev2020}. Synthetic transmission spectra have been produced by the Planetary Spectrum Generator \citep[PSG,][]{Villanueva2018}, and they are in good agreement between the models as long as the top of the atmosphere is extended (assuming an isothermal atmosphere and fixed gas mixing ratios) up to about 100 km (10$^{-7}$ and 10$^{-10}$ bar for Ben1 and Ben2, respectively). Without this extension, the strongest absorption features of CO$_2$ are truncated. Concerning the Hab1 and Hab2 cases, clouds are the largest source of discrepancies between the models, as expected, due to differences in convection (Sec.~\ref{subsec:conv}), bulk condensation, cloud microphysics, boundary layer, and other parameterizations, and their coupling with atmospheric dynamics and radiation. The altitude and thickness of the cloud deck at the terminators impact the simulated transmission spectra, leading to about 20\% differences between the models in the number of transits required to detect those atmospheres with the James Webb Space Telescope (JWST) at a 5$\sigma$ confidence level.
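To put such inter-model differences in perspective, the number of transits needed for a detection scales with the inverse square of the single-transit signal-to-noise ratio, assuming uncorrelated noise that co-adds as $\sqrt{N}$. The short sketch below is our own illustration with hypothetical numbers, not THAI results; it shows how a modest cloud-driven change in the single-transit S/N propagates into the required number of transits:
\begin{verbatim}
import math

def transits_needed(snr_single_transit, target_sigma=5.0):
    """Number of transits for a target detection significance,
    assuming S/N grows as sqrt(number of co-added transits)."""
    return math.ceil((target_sigma / snr_single_transit) ** 2)

# Hypothetical example: a higher cloud deck mutes the CO2 features,
# lowering the single-transit S/N from 1.0 to 0.9 (about a 10% change).
print(transits_needed(1.0))  # 25 transits
print(transits_needed(0.9))  # 31 transits, i.e. ~25% more observing time
\end{verbatim}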
More details on the Ben1 \& Ben2 simulations, Hab1 \& Hab2 simulations and on the impact on observable transmission spectra and thermal phase curves will be presented in three follow-up papers. We also welcome other modelling groups to join THAI at any time. Interest has been shown by the THOR \citep{Deitrick2020} and Isca \citep{Vallis2018} GCM groups, whose results we hope to host and compare soon, while the Exeter Exoplanet Theory Group (EETG) will contribute results from the UM's replacement, LFRic \citep{Adams2019}, in the future. Once JWST data for TRAPPIST-1e are available, GCM output will be compared to observational data. We expect such a comparison will lead to a new set of simulations and to further validation of model performance against terrestrial exoplanet data. It is important to maintain and improve the level of collaboration between the exoplanet GCM community (including THAI) and the observational community. The ``TRAPPIST-1 JWST Community Initiative'' \citep{gillon2020trappist1} is particularly relevant, as it aims to develop a coordinated framework to study the TRAPPIST-1 planets with JWST at both the observational and theoretical/modeling levels. \subsubsection{Model Intercomparisons Across Stellar Spectral Types \citep{yang_differences_2016,Yang2019}} \cite{yang_differences_2016} compared the differences in 1D radiative transfer calculations among two line-by-line codes (SMART and LBLRTM), a moderate-resolution code (SBART) and four low-resolution codes that are used in GCMs (CAM3, CAM4\_Wolf, LMD-G, and AM2). Note CAM4\_Wolf would eventually become ExoCAM (see Appendix \ref{subsubsec:ExoCAM}). The atmospheric composition was set to 1 bar of N$_2$, 376 ppmv CO$_2$, and variable H$_2$O. They showed that there are small differences between the models when the surface temperature is lower than about 300 K. At higher temperatures, such as 320--360 K, the differences between the models could be tens of watts per square meter. The differences arise mainly from water vapor radiative transfer calculations in both the shortwave and the longwave. The differences are larger for the shortwave than for the longwave, and larger for an M-dwarf spectrum than for a solar spectrum. These results suggest that radiative transfer codes should first be verified (e.g., the absorption and continuum behavior of water vapor) before being used in an exoplanet GCM, especially when targeting planets with hot climates or when estimating the inner edge of the habitable zone. Notably, an important lesson learned from this study is that the adequate performance of shortwave radiative transfer for warm moist atmospheres is contingent upon sufficiently resolving the near-IR H$_2$O spectral absorption bands and windows, particularly when considering irradiation from M-dwarf stars. \cite{Yang2019} compared 3D GCM simulation results for a rapidly rotating aqua-planet receiving a G-star spectral energy distribution and for a tidally locked aqua-planet receiving an M-star spectral energy distribution. Several GCMs were considered: various versions of CAM, LMD-G, and AM2 \citep{GFDL2004}. They found relatively small differences ($<$8~K) in the global-mean surface temperature predicted by the various GCMs for cloudy planets orbiting a G star, but rather large differences (20--30~K) for cloudy planets orbiting M stars. These discrepancies are due to differences in the atmospheric dynamics, clouds and radiative transfer, with clouds being the largest source of difference between the models.
The interactions between radiative transfer (such as shortwave absorption by water vapor) and atmospheric circulation can influence the atmospheric relative humidity and therefore affect the surface temperature. \subsubsection{GCM intercomparisons for Solar System planets} \paragraph{Mars.} Several GCM intercomparison efforts were arranged in the Mars atmosphere modelling community as soon as enough teams could contribute to such a project. They were organized in advance of two workshops that helped structure the community just like the 2020 THAI workshop did, although no official reports were published in the literature. The first meeting was the ``Mars GCM intercomparison workshop'' organized at Oxford University, United Kingdom, July 22-24, 1996. It was later followed by the first ``Mars atmosphere modelling and observations'' workshop, which took place in Granada (Spain) on January 13-15, 2003. In both cases instructions were sent to the different teams to prepare comparable simulations in advance, and volunteers analyzed the simulations by comparing the zonal mean fields and the predicted planetary waves. Radiative transfer models were also compared for specific test cases \citep[as announced by][]{Harri2003}. These efforts showed that, most of the time, significant differences could be attributed to different settings in the models, and that it was difficult to reach profound scientific conclusions from such organized intercomparisons. Mars GCM teams later focused on comparing their models with the numerous observational datasets that became available in the 2000s. Nevertheless, specific intercomparison studies have continued to be conducted to study phenomena or altitude ranges for which not much observational data are available \cite[e.g.,][]{Gonzalez2010,Wilson2014} or to compare model predictions at a mission landing site \cite[e.g.,][]{Kass:03,Newman2020}. \paragraph{Venus and Titan.} Similarly, intercomparison campaigns have been organized in the Titan and Venus GCM communities, which reached a sufficient stage of development a few years after the Martian case. In practice, because on Venus and Titan the problem of superrotation is so striking and challenging for the GCMs, the most interesting comparisons actually focused on the behaviour of the various dynamical cores and their ability to simulate superrotation and conserve angular momentum \citep[][see also section~\ref{sec:gcm-known-limi}]{Lebonnois2012,Lebonnois2013}. These studies revealed that dynamical cores which would give very similar results in Earth or Mars conditions can predict very different circulation patterns in Venus-like conditions. Recently, a detailed Titan climate model intercomparison has been organized, with the motivation of preparing the planned Dragonfly Titan lander mission \citep{Lora2019}, similar to the intercomparison studies done for Mars to prepare for mission landings. On the basis of the acceptable agreement between the different models, the authors conclude that the ``low-latitude environment on Titan at this season is now fairly well constrained'', which is reassuring when preparing an ambitious mission like Dragonfly. \paragraph{Snowball Earth.} A notable GCM intercomparison was presented in \cite{Abbot:2012}, concerning the impact of clouds on snowball Earth deglaciation. Six different GCMs took part in the intercomparison. They found that clouds could warm a snowball Earth enough to reduce the amount of CO$_2$ required for deglaciation.
But because the amount of clouds varies from one model to another, the amount of CO$_2$ required differs by one order of magnitude depending on the model. This intercomparison highlights clouds as an important source of discrepancies between the GCMs. \subsection{Ideas for Advancing Exoplanet Model Intercomparisons} \subsubsection{The Coupled Model Intercomparison Project (CMIP) as a Guide for ``ExoMIPs"}\label{subsubsec:CMIP} In planning future community-based exoplanet model intercomparisons (``ExoMIPs") like the THAI project, it is useful to consider the 25-year history of the future-Earth-focused Coupled Model Intercomparison Project (CMIP). This discussion was presented at the THAI workshop by Linda Sohl. CMIP is perhaps best known now for its contributions to the periodic assessments issued by the Intergovernmental Panel on Climate Change (see, e.g., IPCC report 2013, \url{https://www.ipcc.ch/report/ar5/wg1/}), but it began in 1995 as an independent project of the World Climate Research Program (WCRP). Over the years, CMIP has grown from an effort to use simple global coupled ocean-atmosphere GCM experiments (with interactive sea ice and land models) to separate natural climate variability from human-induced climate change \citep{Covey2003}, to the exploration of a variety of sophisticated climate change scenarios using GCMs with ever-more complex capabilities that include chemistry and higher-resolution dynamical interactions \citep{Eyring2016}, as well as specialized ancillary investigations (e.g., the various projects under the \href{https://pmip.lsce.ipsl.fr/}{\textit {Paleoclimate Modelling Intercomparison Project, or PMIP}}). The history of the CMIP experience highlights four key considerations that would benefit any future ExoMIP effort: \textit{Context/Rationale: Establish how the MIP will advance the state of knowledge and/or the state of the art.} CMIP's overall experiment designs are developed as an outgrowth of the \href{https://www.wcrp-climate.org/grand-challenges/grand-challenges-overview}{\textit {WCRP Grand Challenges}}, which are updated periodically via community input. These Grand Challenges, which encompass observational, theoretical, and modeling-based research, are meant to: \begin{itemize} \item Identify the key research questions needing to be addressed in order to move the field forward in a substantive way, as well as the barriers to progress (what do we need to learn next, and what stands in the way?); \item Define effective and measurable performance metrics (how will we know we have been successful in achieving our goals?); \item Provide storylines that engage a broad interested audience, from the media and general public to scientists from other disciplines (how can we attract future talent and improve interdisciplinary connections?). \end{itemize} Planetary science and astrobiology do not have an internationally defined set of grand challenges as such. However, documents such as the \href{https://astrobiology.nasa.gov/research/astrobiology-at-nasa/astrobiology-strategy/}{\textit {NASA Astrobiology Strategy}} and \href{https://www.liebertpub.com/doi/full/10.1089/ast.2015.1441}{\textit {AstRoMap European Astrobiology Roadmap}} outline research topics of interest for advancing the field, and not surprisingly there is overlap that can help narrow the context and rationale of an exoplanet MIP.
An ExoMIP should strive to make connections with as many of these topics as is plausible: linking the MIP rationale to broad themes of community-wide interest is to our advantage in connecting to our fellow researchers whose focus is on theory, field work or observations. \textit{Experiment Design: Encourage broad participation by planning core experiments with a low entry barrier for most groups, with specialized subprojects as needed.} One of the benefits of MIPs, from a model development standpoint, is that comparisons across multiple models can illustrate which model design/parameterization approaches provide the most robust results. Thus the more models we can encourage to participate in a given MIP, the better for the community. The participation of any one modeling group in a MIP is going to be limited by three factors: the technical requirements of the MIP (how intensive is the set-up process for the experiments?); the available resources (how much computing time is available?); and expertise on hand (are there enough people with the necessary model knowledge and time to run all the experiments?). CMIP has taken the approach of defining a limited set of required core experiments that are easy to implement \citep[e.g.,][]{Eyring2016}. This lowers the entry barrier for groups interested in joining, and improves the chances of useful outcomes beneficial to the community. More complex scenarios, specialized topics, and extended parameter space are addressed through \href{https://www.wcrp-climate.org/modelling-wgcm-mip-catalogue/modelling-wgcm-cmip6-endorsed-mips}{\textit{related MIPs}}, which attract groups that have the additional resources, interest and expertise available. For an ExoMIP, it is conceivable that a similar design could focus on global-scale core experiments with simple forcing changes (stellar insolation, relatively thin atmospheres/oceans), and some of the related MIPs could engage with 1D/EBM models on details of atmospheric composition/radiative forcing. \textit{MIP Logistics: Plan realistic schedules for experiment completion and group analyses/manuscript preparation.} Keeping model groups focused and reaching project milestones in a timely fashion are important for the overall success of a MIP. CMIP and the related MIPs typically establish these schedules via community planning workshops, where ideas for experiments and additional ``rules of engagement" regarding MIP participation are also defined in advance. These MIP protocols should then be published as close to the official start of a MIP project as possible, so that additional groups not involved in the planning workshops can also make a timely decision to participate. \textit{Data Sharing: When/how to release experiment results for broadest impact?} On this topic, CMIP and some of the related MIPs - mainly the PMIP projects - handle data sharing differently. Because the groups contributing to CMIP are frequently not entirely the same as the researchers conducting multi-model evaluations of the experiment results, data are released immediately to the community. In contrast, the specialized PMIP project experiments are often run and evaluated by the MIP participants themselves. In the latter case, a data embargo is often declared until the first group papers are published, as part of the agreed-upon project schedule. An ExoMIP might want to consider a similar data embargo, as part of a community agreement not to publish each other's work prematurely. Data sharing logistics are a more complicated issue.
No ExoMIP will have the vast resources currently available to CMIP for data sharing, so at present any data sharing is likely to happen on an ad hoc basis. However, it is possible to develop community standards for what and how data should be shared. While raw model output and some accompanying post-processing scripts might provide maximum flexibility to fellow modelers, model results that have already been post-processed into file formats such as netCDF for map views and plain text for line plots allow the broadest possible audience - from fellow scientists who are not modelers, to educators and students - to work with and learn from the output with the help of free apps. \subsubsection{Planning Workshop Themes for Rocky Exoplanet Model Intercomparisons} \label{subsec:challenges} The beginning of the first network of exoplanet model intercomparisons is a great time to address organizational challenges, and doing so should be a top priority for the community. In order to advance inter-model comparisons for exoplanet studies, a first-order requirement is a collaboration workshop for the community to discuss key issues, especially regarding data to be shared (and how to share it), which would be very important for establishing standards and best practices for intercomparisons going forward. \textit{A formal intercomparison workshop could be organized for roughly one year after the THAI workshop (fall 2021), which would allow time for planning the workshop and securing funding support to encourage participation. } We can expect that an intercomparison workshop would produce documentation about best practices for model intercomparisons and, most importantly, produce a community consensus on how to share data, so that a common repository (to be identified) would host a comprehensive set of common diagnostic outputs that are most important for addressing the science questions asked (e.g., which diagnostic attributes contribute most to the synthetic spectral signatures of interest for observational programmes?), and yet do not overwhelm potential users or the repository itself with ancillary diagnostics/large data files. To compare GCMs, even to one another, we must necessarily address a range of cases (by bracketing single-point cases with increasingly complex physics included sequentially, etc.). This requires a clearly defined question (e.g., ``is this climate sensitive predominantly to clouds or surface water reservoir effects") with the goal of producing concrete test cases (that is, with an exhaustive list of clearly stated parameters) and testable predictions, so that it is possible to differentiate possible states. Having a clear, well-defined question, in conjunction with a concise list of simulations and deliverables, is necessary to ensure that requirements for participation can be met within realistically allocatable work-efforts for such projects. Transparency with respect to set parameters also ensures reproducibility, and provides diagnostic access for future tests and observations. Future observational modes further constrain important test deliverables (wavelength coverage and spectral resolution if spectroscopic, or dynamic/geologic/phenomenological diagnostics; spatial scales; arrival times; duration of observation or mission lifetime, etc.). If the models are too far away from one another, then how does each of the GCMs motivate those disparate results, and does this lead to additional testable predictions?
Given the abundance of ``hidden'' parameterizations in models, it can be difficult to assess whether a given model in a given part of parameter space matches observations for the ``right'' reason (e.g., the same physical driver acting as the primary control in both the model and the planetary environment), versus a confluence of other effects. Further intercomparison work can elucidate some of these factors, but it is unlikely that they will all become explicit dependencies, even with additional documentation. Note that identifying potential new observables outside current capabilities, as well as observables where additional precision would refine parameter ranges, also helps to steer future mission development, which then feeds back into how close the models are to ground truth. Continued intercomparisons require continued funding, and given the dependence on adequate documentation, this suggests the need for clear funding lines for the development of testing frameworks (either for a single model or as part of a new or ongoing collaboration), validation, documentation, etc. GCM development work is costly, and national laboratories like NCAR or GISS typically hire software engineers to support the scientists. The exoclimate community is young and its scientists generally lack such support, hampering progress. Lastly, reducing the model output to a single metric (e.g., the generic climate state or spectral signature produced) is an additional constraint (a meta-sensitivity, perhaps), suggesting that disagreeing models may ``agree'' in some diagnostic sense. This would help to identify additional directions to explore, such as broadening the parameter space identified initially, or through secondary observables (diurnal/seasonal variability, etc.). Finally, we would also recommend that potential intercomparison contributors think beyond the goal of ``what does this planet simulation look like from the observation perspective?'' 3D GCMs in particular produce a wide variety of diagnostic outputs that are interesting and relevant to understanding the potential habitability of a particular world configuration, as well as model performance, but these do not necessarily produce directly observable results. This is especially true given that some observables we have currently identified may in fact prove unobservable, while new ones will eventually be found. Current modeling should therefore not be constrained only by the current set of possible observables. \subsubsection{Discussing and building the Climates Using Interactive Suites of Intercomparisons Nested for Exoplanet Studies (CUISINES)} \label{subsec:CUISINES} In the upcoming era of JWST, it becomes essential to focus community effort on benchmarking, comparing and validating the performance of exoplanet climate models, both with respect to other models and to observations (when available). As noted in Section \ref{subsubsec:CMIP}, model intercomparisons have been widely used for decades by the Earth science community in this way, as a very valuable means to improve model reliability, mitigate model dependencies, track down bugs, and provide benchmarks for new models. While individual intercomparison projects should have their own clearly defined protocols, the exoplanet community would also benefit from a metaframework -- essentially, a framework for designing model intercomparison projects. This metaframework is what we propose with CUISINES. This framework would be open to models from 0D to 3D, as well as radiative transfer models, and would not be limited to rocky exoplanets.
One of the first steps in establishing this metaframework will be to create a CUISINES committee and then to prepare a workshop on best practices for intercomparisons. At the end of the THAI workshop, two future intercomparisons had already been discussed: one between EBMs for ice belts, and one between GCMs for cloud-free mini-Neptunes (see Section \ref{subsubsec:beyond} below). In the era of JWST, mini-Neptunes and hot Jupiters in particular will require focused modeling efforts from the community. Other ideas for intercomparisons under the CUISINES metaframework are welcome. \subsubsection{Moving Beyond Rocky Planets: Envisioning a Mini-Neptune Model Intercomparison}\label{subsubsec:beyond} A future THAI-equivalent GCM intercomparison project for mini-Neptunes was proposed during a breakout discussion. Mini-Neptunes are the most abundant category of exoplanets discovered so far and, thanks to their larger size, they will be more easily characterized through transmission spectroscopy with JWST. For now, there is a wide range of approaches to mini-Neptune modelling, so there is a significant need for an intercomparison to see what the differences are when everyone makes the same assumptions (more important for now than thinking about the more complicated physics we need to implement). For instance, aerosols are very challenging to include in mini-Neptune models \citep{Charnay:2015a,Charnay:2015b,Charnay:2020}; it has therefore been suggested to consider clear-sky simulations as a start. Cloudiness has been shown to decrease with decreasing equilibrium temperature \citep{Crossfield2017}, which motivates the case for relatively cold planets \citep[K2-18b and colder;][]{Benneke2019,Tsiaras2019} where less photochemical haze is expected. Gliese-436b could be a very good candidate with respect to the amount of data potentially available and the quality of constraints on the planetary parameters \citep[][and references therein]{Gillon2007,Demory2007,Lanotte2014,Ehrenreich2015,Bourrier2016,dosSantos2019}. Atmospheric compositions should be limited to common gases expected for these planets, such as hydrogen, helium, water, methane and carbon dioxide, and surface pressures could range from a few millibars to tens of bars. However, taking into account deep-atmosphere effects on the upper atmosphere, which have been shown to be important for Titan \citep{Lebonnois2012}, may require including pressures up to 10--100 bar, which can subsequently increase the computational time \citep{Wang:2020}. In a next step, the atmospheric compositions can be refined to include more processes, to match the observations that JWST will have provided. Other models, such as EBMs and 1D radiative-convective models, could be engaged as well. \section{GCM Simulations to Predict and Interpret Exoplanet Atmospheric Characterization} \label{sec:predict_obs} The strength of the exoplanet modeling community is in its close connection between scientists from numerous disparate fields, including astronomers, climate scientists, planetary scientists, and geophysicists. Knowledge from all of these fields should be leveraged when considering exoplanetary atmospheres, and inputs from each field can be incorporated into GCM simulations of exoplanetary climates. Due to the computational expense of GCMs, before conducting large sets of simulations we must tightly constrain the goals of our simulation sets.
Given the observational limitations that will persist through the coming decade, from an astronomical perspective it would be most beneficial to prioritise categories of planets that are going to be definitively observable in the near future. We should compile a list of potential targets in order of importance --- this would help to distribute our computing resources more efficiently. Model development effort should then mainly be directed towards the most observable types of planet. For example, can we observe a Mars-like planet tidally locked to an M-dwarf star? Or are exo-Venuses our most realistic target? Astrobiological implications are an obvious motivation for simulating exoplanet atmospheres. However, the characterization of Earth-sized habitable worlds is far more challenging (and perhaps untenable) compared with the characterization of sub-Neptunes and larger worlds. One strategy for constraining habitable zones and habitability may lie first, paradoxically, in constraining uninhabitable regions of planetary parameter space, thus allowing us to eliminate planets from the list of potentially habitable worlds. In other words, we have to be able to readily distinguish extreme atmospheric conditions from Earth-like atmospheres. The simplest answer to the question of which planets and which regions of parameter space will be targets of near-term observations is: the planets we can actually observe. Characterization is still challenging for most planets smaller than large Jupiter-sized planets, and the ability to characterize smaller planets is one of the highly anticipated open areas that JWST will explore. While the highest quality data from JWST with respect to characterization will be for hot Jupiters, there is still significant interest in characterizing rocky planets, if feasible, with JWST and the ELTs. The recognition that the characterization of potentially rocky planets may be the next frontier explored in exoplanet science is what has motivated much of the recent study of theoretical models of rocky planets. These theoretical studies of the different potential environments of rocky exoplanets and their potential observational signatures are key to ensuring that sufficient understanding exists for the interpretation of observations of rocky exoplanets when they become obtainable. In the intermediate range between Jupiter-sized planets and potentially rocky Earth-sized planets lies the class of planets that may be the next truly characterizable planets in the near term, the so-called ``mini-Neptunes'' and ``super-Earths''. These terms are meant to describe the mass and radius regime of these worlds, as there is considerable uncertainty regarding their interior, surface and atmospheric properties. However, it is precisely this uncertainty, and the potential to extricate key parameters from observations of these worlds, that makes them potentially valuable probes of planetary formation, evolution and habitability. Aside from the abundance of potential targets, super-Earths and small Neptunes also inhabit a potentially critical region of planetary parameter space. These planets likely bracket the point at which runaway accretion of a primary gaseous atmosphere occurs in the core accretion model \citep{Pollack1996}. Therefore, they bridge the gap between giant planets with thick hydrogen/helium-dominated atmospheres and terrestrial planets with much thinner ``secondary'' atmospheres \citep{Lopez2014}, as well as being in the size range where irradiative evaporation becomes significant \citep{Owen2012}.
The array of observational techniques that will be used to extricate the properties of these worlds includes both techniques that have been heavily used in the past and others that are in the nascent stages of being leveraged. Among the latter, emission spectroscopy may be a critical means by which to probe the deeper portions of the atmosphere, despite the presence of expected clouds and hazes. Ground-based observations will also be key, particularly high-resolution spectroscopy -- possibly coupled with direct imaging -- which may be diagnostic of composition and other atmospheric features using some of the large planned near-term ground-based telescopes \citep{Snellen:2015,Lovis:2017}. More familiar data products such as light curves will continue to be critical, as their multi-wavelength morphology will be key to informing and validating climate models. This connection and feedback between observations and theoretical work will be key to the near-term interpretation of exoplanet observations in the context of atmospheric and surface characterization. Synergies between retrievals and GCMs will enable the connection of existing physical and chemical models to observations in a way that may elucidate parameters informative of planetary formation and evolution. To facilitate this, there is a need for a closer connection between chemistry models and GCMs \citep[e.g.,][]{Chen2019,Drummond_2020}. In addition, there is a need for generalized condensation schemes (for a broad range of planets from hot rocky planets to cold gaseous planets, and for a variety of atmospheric compositions). Connections to and integration of other key modeling, such as modeling of atmospheric escape and of the evolution of planets along different formation pathways, will also need to be supported. Observations will drive much of this theoretical work, and both cooler ``sub-Neptunes''/``super-Earths'' and hot rocky planets that may be characterizable by JWST are key. While connecting simulations to observations is key, another essential component of exoplanet characterization is the need to be confident in our models through validation practices, such as intercomparisons like THAI, before applying them to understand terrestrial exoplanet observations. These validation practices will be especially important because modeling some of the most favorable near-term observational targets will require the addition of novel functionality in a number of areas. The following are just some of the areas that will likely require model development in order to appropriately model exoplanets that are likely to be near-term observational targets: \begin{itemize} \item Modelling ``sub-Neptunes''/``super-Earths'' with extended atmospheres will require deep-atmosphere equations, as the primitive equations break down due to the thickness of the atmosphere relative to the planetary radius \citep{Mayne_2019}. \item The range of atmospheric compositions that will have to be considered will also expand, for planets that are not H$_2$-rich or Earth-like or that do not have any representation in our Solar System \citep{Woitke2021}. \item Updated chemistry schemes will be required to capture the non-equilibrium or photochemistry effects that are likely relevant for these classes of exoplanet. \item There is a need for an improved understanding of interior mixing to get proper boundary conditions for GCMs.
\item There will be a need to run simulations at lower pressures in order to properly treat photochemical hazes and other upper-atmosphere processes that may affect observables. \item Robust parameterizations for convection that can deal with non-dilute condensibles will be required. \end{itemize} Deserving of their own section (section \ref{subsec:aerosol}), clouds and hazes are the elephant in the room for an improved understanding of characterizable exoplanets. Observations of planets in and outside our solar system indicate that an understanding of clouds and aerosols is required to have even a first-order understanding of a planet's extant state and consequently its evolution. While the modeling of clouds and hazes is an inherently complexity-rich endeavor that requires trade-offs due to limits of understanding or computational complexity, there are some key needs that must be met in order to accurately characterize exoplanets in the near term. Understanding the inherent variability of a planet, and determining the level at which weather and cloud variability changes the continuum level of observations, is important for both the interpretation and the planning of observations. The coupling of clouds and photochemical hazes to dynamics is also important, to determine their impact on transmission, emission and reflection spectra and on phase curves (relevant for effectively all classes of planets, as demonstrated by solar system objects). An additional near-term focus will be the coupling of detailed cloud microphysics codes (e.g., the Helling code, CARMA, EddySed) to GCMs, as has been done for hot Jupiters \citep{2018A&A...615A..97L}. The addition of models of increasing complexity, such as these, will likely require a model hierarchy spanning from simple models to the coupled cloud microphysical models. For planets in more extreme temperature regimes, such as hot rocky planets amenable to JWST characterization, there will be a need for significant model updates, including updated radiative transfer (RT) schemes and the development of non-dilute condensable schemes. These worlds will also require 1D models to take into account surface chemistry due to potential magma oceans. Finally, additional factors such as gravity waves will require appropriate parameterizations, since their impact in the upper atmosphere is significant for planets of the solar system \citep[e.g.,][]{Lott2013, Hoshino2013, Gilli2020} and potentially for hot Jupiters (for example, see \citealt{watkins_2010}). To address these questions, we must design our numerical experiments efficiently. For large sets of simulations, we can adopt statistical approaches to cover as much of the parameter space as possible with as few simulations as possible \citep[e.g., Latin Hypercube, see][]{Sexton2019}; a minimal sampling sketch is given below. For example, we can use a decision tree of specific well-known biosignature parameter sweeps for each stage of Earth's history. Parameter space can also be covered efficiently by relying on the synergy between EBMs and 3D GCMs through asynchronous coupling. Namely, a ``rough'' climate state can be spun up by a resource-cheap EBM and then explored in more detail with a resource-expensive GCM, followed by another spin-up period done with the EBM, and so on. We believe the same methodology should be considered for future coupled atmosphere-ocean simulations of exoplanets, in which the ocean component requires longer timescales than the atmosphere.
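As a minimal sketch of the Latin Hypercube approach mentioned above (the three parameters and their ranges are invented for illustration), SciPy's quasi-Monte Carlo module can generate a space-filling experiment design in a few lines:
\begin{verbatim}
# Latin Hypercube design for a hypothetical GCM parameter sweep.
from scipy.stats import qmc

# Columns: instellation (W m-2), surface pressure (bar), and
# log10 of the CO2 molar fraction (sampled in log space because
# it spans several orders of magnitude).
l_bounds = [800.0, 0.1, -6.0]
u_bounds = [1600.0, 10.0, -2.0]

sampler = qmc.LatinHypercube(d=3, seed=42)
sample = sampler.random(n=20)                 # 20 runs fill the unit cube
runs = qmc.scale(sample, l_bounds, u_bounds)  # rescale to physical ranges

for i, (s0, ps, logx) in enumerate(runs):
    # In practice each row would be written into the model's
    # configuration files; here we simply print the design.
    print(f"run {i:02d}: S={s0:7.1f} W/m2  "
          f"ps={ps:5.2f} bar  xCO2={10**logx:.1e}")
\end{verbatim}
The advantage over a full factorial grid is that each one-dimensional projection of the parameter space is stratified with only 20 simulations, rather than requiring $20^3$ runs.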
We also discussed the untapped computational potential of graphics processing units (GPUs), which are currently underused by the GCM community \citep[e.g., THOR, see][]{Mendonca2016b,Deitrick2020}. Future GCMs should ideally be developed agnostic of the machine architecture \citep[see][for example]{Adams2019}. Using GPUs and similar hardware optimized for heavy computations would help to run large parameter sweeps in less time. To summarize, there are many necessary planet types to simulate and many new couplings between the atmosphere and other processes (e.g., atmosphere-ocean) to explore. Future model intercomparisons should focus on the relatively more observable atmospheres, keeping a close connection with observational data. We have to be careful in selecting modeling targets: on the one hand, it is more interesting to run simulations of exotic (relative to Earth) atmospheres; on the other hand, these atmospheres are notoriously difficult to simulate with Earth-tuned codes, leaving very few GCMs able to join the intercomparison. Looking to a more distant future (the following decade, perhaps), a new generation of GCMs should be developed that is able to simulate such extreme cases as non-dilute, fully collapsible, or non-ideal-gas atmospheres. \section{GCM Parameterizations, Limits and Development Needed}\label{sec:GCMparam} \subsection{Sensitivity to numerical settings and initial conditions} \subsubsection{Horizontal numerical diffusion} Most GCMs require a numerical diffusion or filter which is applied in addition to the existing terms of the Euler or primitive equations \citep[][Chapter 13]{Lauritzen2011}. This mechanism typically serves two practical purposes that are intimately related: providing numerical stability, and achieving a kinetic energy spectrum that is consistent with our understanding of turbulent cascades. In the case of numerical diffusion, the diffusivity is generally a tunable parameter. For Earth GCMs, the diffusivity can be tuned to achieve a spectral slope that matches empirically measured values \citep{Nastrom1985,Lauritzen2011}. For exoplanets, there is little hope of measuring the kinetic energy spectrum, but we can use the expectation of a $-3$ power law \citep{Charney1971} or $-5/3$ power law \citep{Pope2000,Caba:20} as guidance. More specifically, the turbulent cascade causes energy to build up at the grid scale, well above the scale at which molecular viscosity would act to convert this energy to heat \citep[see][Figure 13.7]{Lauritzen2011}. A numerical diffusion term is thus usually included, and selected to have a form that preferentially diffuses the fields at the smallest scales in the model by using, for example, iterated Laplacian operators \citep[see e.g.,][appendix A.2]{Spiga2020}. However, selecting the strength and form of this term is somewhat of an art, as it will depend on the resolution, time step size, solver, grid, and numerous other factors \citep{Lauritzen2011,Thrastarson2011}. Fortunately, most exoplanet observables are relatively insensitive to the strength of numerical diffusion \citep{Heng2011,Deitrick2020}. Nonetheless, the exoplanet GCM community should bear in mind that other properties may be sensitive to ad hoc numerical settings. For instance, differences have been noted between GCM predictions of Titan superrotation, possibly due to numerical diffusion \citep{Newman2011}.
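As a concrete sketch of the iterated-Laplacian (hyperdiffusion) term described above, written in generic notation since the order and coefficient vary between models, the tendency added to a prognostic field $q$ can be expressed as
\begin{equation}
\left(\frac{\partial q}{\partial t}\right)_{\rm diff} = (-1)^{n+1}\,K_{2n}\,\nabla^{2n} q,
\end{equation}
where $n=1$ recovers ordinary Laplacian diffusion, $n=2$ (a $\nabla^{4}$ hyperdiffusion) is a common choice, and higher $n$ confines the damping ever more sharply to the smallest resolved scales. The tunable diffusivity $K_{2n}$ is often quoted through the e-folding damping time it implies for the shortest resolvable wavelength.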
\subsubsection{Sponge layers} \label{subsubsec:sponge} One numerical issue deserves further attention: the need for so-called ``sponge layers'', that is, enhanced diffusion or drag near the model top and/or bottom. This need arises because GCMs typically use reflecting boundary conditions, allowing waves (usually gravity waves) to be reflected back into the model domain. These reflected waves are unphysical and can amplify and trigger numerical instabilities \citep{Lauritzen2011}. Thus an additional drag mechanism is often used to eliminate these reflections. Various types of sponge layer exist; one example is Rayleigh friction, which directly damps wind speeds toward zero or toward another value such as the zonal mean \citep[see, for example,][]{Mayne_2014,Mendonca2018}. This type of sponge is easy to implement but is non-conservative (for instance, terrestrial studies by \citealt{Shaw:07} show that such sponge layers adversely impact the angular momentum balance, and thus the simulated circulations). It is instructive to note that some GCM simulations of solar-system gas giants do not employ a sponge layer, so as to avoid altering the angular momentum balance of the atmosphere \citep{Schn:09,Liu:10jets,Spiga2020}. Another commonly used sponge layer, which is conservative in finite-volume formulations, reduces the order of the numerical diffusion, from (for example) fourth order to second order \citep{Lauritzen2011}. In either case, it is not always clear how to tune the strength and size of the sponge layer, which needs to damp waves without strongly affecting the general circulation or inducing additional reflections. While sponge layers have been carefully calibrated for Earth simulations, the effects of sponge layers and reflected waves arguably deserve more attention in exoplanet atmospheres. Indeed, the exact settings required for these various damping mechanisms are currently unknown, as there is little constraint from observations; therefore, although they are in some cases physically motivated (e.g., capturing dissipation from sub-grid eddies, or emulating the propagation of waves into space), their main use is for numerical stability \citep[see][for an example of the dissipation and maximum wind speed]{heng_2011}. \subsubsection{Initial conditions} There has been some debate in the literature on the sensitivity of hot Jupiter simulations to initial conditions \citep{Thrastarson2010,Liu2013}. Other recent work has hinted at the possibility that zonal wind speeds on these planets may be sensitive to the initial temperature-pressure profile used in the deep atmosphere \citep{SainsburyMartinez2019}. More investigation should be done on initial conditions in exoplanet GCMs, although it is a challenge to explore many possibilities with such computationally expensive models. \subsubsection{Conservation properties} The atmospheres of exoplanets present new territories in which physical processes may be unfamiliar and poorly constrained, compared to Earth and other solar system bodies. Unlike bodies in our solar system, for exoplanets we have little, if any, spatial information on the atmospheric structure. Thus in modeling these atmospheres we must utilize any and all available criteria to ensure physical realism. The dynamical cores of GCMs are formulated using conservation laws (e.g., the Euler equations). As such, the global conservation of properties such as mass, energy, and angular momentum provides a diagnostic of the model's performance and accuracy.
\cite{Thuburn2008} provides some guidance on which properties may be conserved and on the desirable degree of conservation. We reiterate a few of those concepts here, but more details on dynamical cores will be given in section \ref{subsec:limdyn}. Firstly, as pointed out in \cite{Thuburn2008}, while the continuous forms of the equations of fluid dynamics can be formulated to conserve all physical properties, the discrete forms of the equations do not. Choices must be made regarding which properties to conserve (to numerical precision). One example of this is the thermodynamic equation (i.e., the first law of thermodynamics), which can be written in terms of different variables, such as potential temperature, pressure, internal or total energy, etc. Many GCMs use potential temperature, which is convenient for modeling convective processes. This leads to a conservation law for entropy, whereas sometimes a conservation law for energy (total or internal) may be preferred \citep{Satoh2002}. A similar choice must be made in the momentum equations, as these may be written in terms of linear momenta, angular momenta, vorticity, etc. Conservation of mass is particularly important, as noted by \cite{Thuburn2008}, since it affects all other conservation laws, and should be robust in the absence of significant sources and sinks (e.g., escape to space or volatile freeze-out). In other words, errors in mass conservation will lead to a cascade of errors elsewhere. As we develop GCMs further and explore novel territory, conservation also provides a critical way to identify coding errors. In his talk, Russell Deitrick briefly discussed using mass and angular momentum conservation to identify bugs in the THOR GCM. Finite-volume models (such as THOR) should conserve properties naturally to roughly machine precision, because the equations are discretized in flux form---fluxes flow across boundaries such that the control volumes on either side experience the exact same flux. Models discretized in other ways (e.g., spectral models) may ensure conservation by the use of ``fixers'' \citep[see][Chapter 13]{Lauritzen2011}. \subsubsection{Grid choice} Traditionally, GCMs have used a latitude-longitude spherical grid, which is easy to construct and has operators that are well-known and intuitive. It does, however, suffer from singularities and resolution clustering due to the convergence of meridional lines at the poles \citep{Staniforth2012}. Many models have solved this issue using a combination of semi-implicit time integration and numerical filters, for example the UM. Another solution is to use an alternative horizontal grid structure. A comprehensive review of grid types, and their advantages and disadvantages, is provided in \cite{Staniforth2012}. The most commonly used of these seem to be the cubed-sphere grid, used in versions of the FMS, for example \cite{Lin2004}, and the icosahedral grid, used in NICAM \citep{Tomita2004}, THOR \citep{Mendonca2016b}, and DYNAMICO \citep{Dubos2014}. These quasi-uniform grids succeed in making the resolution more uniform across the sphere, avoiding the numerical complications at the polar regions of the lat-lon grid. These grids also scale very well to a high number of processors, which is usually not the case for the lat-lon grid \citep{Staniforth2012}. However, these grids are a true challenge to work with---the divergence, gradient, and curl operators must be written using Gaussian integrals on the icosahedral grid, for example \citep{Tomita2004}.
Further, they are not \emph{completely} uniform and thus still admit the possibility of grid imprinting, wherein errors build up to a level that makes the underlying grid visible to the eye \citep{Staniforth2012}. Also, the per-core utilization achieved with an icosahedral grid is typically far below that of a lat-lon grid. Unfortunately, there is no known ``perfect'' grid, so an understanding of the shortcomings of a particular grid is essential, as is comparison between models utilizing different grids. A further word of caution is warranted here: some models, as in \cite{DobbsDixon2008}, have avoided numerical issues with the polar regions by omitting them entirely. While some features of the resulting simulations may be qualitatively reasonable, it is likely that the polar regions are particularly crucial for the circulation of tidally-locked exoplanets. \subsection{The Limits of Dynamical Cores for Exoplanets} \label{subsec:limdyn} One of the goals of atmospheric modeling for exoplanets is, obviously, to unveil and to disentangle the physical processes underlying the observed properties. Another goal relates directly to the science of modeling itself: using hydrodynamical solvers (dynamical cores) and subgrid-scale parameterizations in the extreme conditions of exoplanetary atmospheres is interesting because it could illustrate the limitations of dynamical cores from a two-fold perspective: \begin{enumerate} \item Exoplanets allow us to explore from a fresh perspective known limitations of atmospheric modeling encountered in Earth and solar system planet applications (section~\ref{sec:gcm-known-limi}). \item Exoplanets can be very exotic (when considered from a solar-system-centered point of view), therefore new limitations arise when applying atmospheric numerical models to these environments (section~\ref{sec:gcm-new-limit}). \end{enumerate} Before delving into a description of those challenges and limitations, it is important to note that, despite these challenges, studies using GCMs have provided excellent insight; for hot Jupiters, for example, this includes an almost complete picture of the acceleration of the zonal flow \citep[e.g.,][]{showman_2011,hammond_2020,Debras_2019,Debras_2020}, departures from chemical equilibrium caused by 3D dynamical mixing \citep[e.g.,][]{Drummond_2020}, and potential trends and characteristics of clouds \citep[e.g.,][]{lee_2016,Lines_2018,parmentier_2020}. Hot Jupiters have been targeted as the main exoplanets to which GCMs have been applied mainly because they are the most observationally constrained cases as far as exoplanets are concerned. \subsubsection{Known limitations considered with a fresh perspective \label{sec:gcm-known-limi}} \paragraph{Dissipation and accuracy} When GCMs are used to model more ``extreme'' planets, such as hot Jupiters, changes in the physical conditions can lead to reductions in the stability and potential accuracy of the simulation results. As discussed above, GCMs rely on several forms of dissipation \citep{Jabl:11} to control sub-grid ``noise'' which can lead to model instability. These can take several forms, from a diffusion of the winds themselves, to ``filtering'' over the polar regions in latitude-longitude grid GCMs, and so-called ``sponge'' layers (see section \ref{subsubsec:sponge}).
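For reference, a Rayleigh-friction sponge of the kind discussed in section \ref{subsubsec:sponge} takes the schematic form (a generic textbook expression, not the formulation of any specific GCM)
\begin{equation}
\left(\frac{\partial \mathbf{u}}{\partial t}\right)_{\rm sponge} = -\,k(z)\,\left(\mathbf{u} - \mathbf{u}_{\rm ref}\right),
\end{equation}
where $\mathbf{u}_{\rm ref}$ is zero or the zonal-mean wind, and the damping rate $k(z)$ is nonzero only near the model top (or bottom), typically increasing smoothly from zero at the base of the sponge; the sponge depth and the peak value of $k$ are precisely the kind of ad hoc settings whose appropriate exoplanetary values remain unconstrained.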
\paragraph{Rayleigh drag and closing the angular momentum budget} The question of the conservation of axial angular momentum is paramount in atmospheric modeling, as is illustrated for instance in studies of the slow-rotating bodies and gas giants in the solar system \citep[their appendices]{Lebo:12super,Spiga2020}. Dynamical cores are not explicitly formulated to conserve axial angular momentum, and this can cause spurious variations of the modeled angular momentum that range from negligible to major, as was evidenced in the case of exoplanet modeling by \cite{Poli:14}. This question of angular-momentum conservation is all the more critical for planets without a solid surface: a simple Rayleigh drag on horizontal winds is used as a bottom boundary condition in GCM studies of Jupiter and Saturn to emulate the closing of the angular momentum budget in the simulated jets by putative magnetic drag at depth \citep{Liu:10jets,Youn:19partone,Spiga2020}. In simulations of hot Jupiters too, Rayleigh drag on horizontal winds has been used as a simple parameterisation to capture the impact of magnetic drag in the deep atmosphere \citep{perna_2010}. However, in a similar fashion to the above-mentioned dissipation, its main use is for stability: by dragging the deep atmosphere to immobility where the radiative timescale is long, or indeed infinite \citep{iro_2005}, one can remove the dependency of the simulated results on the initial conditions \citep[see][for various discussions on this issue]{Mayne_2014,Amundsen_2016,Tremblin_2017,Sainsbury_2019}. Early simulations with the {\sc MITGCM} demonstrated a loss of global axial angular momentum without this inner boundary drag \citep{polichtchouk_2015}, and similar effects have been found for both the {\sc UM} and {\sc THOR}. However, the degree of angular momentum conservation often varies with the spatial and temporal resolution adopted for the model simulations; this opens the possibility of obtaining negligible changes in axial angular momentum by adjusting the spatial and temporal resolution of the particular setup. The cause of the angular momentum conservation issue in dynamical cores is still not clearly understood. \paragraph{Thermodynamics} Most of the existing GCMs for planetary atmospheres do not take into account variations in molecular weight, heat capacity, gravity, etc.: those quantities are assumed constant throughout the atmosphere and through time. This assumption can have several effects. When the thickness of the atmosphere becomes large with respect to the radius (such as for Titan or mini-Neptunes), the buoyancy of a given atmospheric cell should change because gravity varies with altitude. Likewise, whether a constant or a variable value of the heat capacity at constant pressure $C_p$ is used directly impacts the stability profile of the atmosphere. Such effects act on the atmospheric dynamics, and therefore on the equilibrium between the different terms of the atmospheric equations, but they are generally assumed to be second-order effects (except when the variations are really strong). Taking this variability into account in GCMs would require substantial development and time, and this endeavour would benefit both solar system planets and exoplanets. Preliminary work on how to take into account variations of $C_p$ with temperature has already been presented in \cite{Lebonnois2010} and \cite{Mendonca2016}.
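To make the $C_p$ issue concrete (a standard ideal-gas result rather than a prescription from the works cited above): with a constant $C_p$, a model can transport the familiar potential temperature $\theta = T\,(p_0/p)^{R/C_p}$, but when $C_p$ depends on temperature the adiabatically conserved quantity is instead the specific entropy
\begin{equation}
s(T,p) = \int_{T_0}^{T} \frac{C_p(T')}{T'}\,{\rm d}T' \; - \; R\,\ln\frac{p}{p_0},
\end{equation}
from which a generalized potential temperature must be defined implicitly; this is essentially the starting point of the approaches presented in \cite{Lebonnois2010} and \cite{Mendonca2016}.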
\subsubsection{New limitations, specific to exoplanets \label{sec:gcm-new-limit}} \paragraph{Shocks} For some simulations the flow speed can approach, or exceed, a Mach number of one, leading some authors to question whether shock-capturing solutions to the continuity equation are required \citep{Li2010,fromang_2016}. However, as shown by \citet{fromang_2016}, shocks play a minimal role in atmospheres dominated by a large-scale superrotating jet. As is understood for the solar system's gas giants, flows of conductive material in the presence of a background magnetic field can lead to drag (as mentioned earlier) and to heating through ``ohmic dissipation'' \citep[see for example][]{ginzburg_2016}. In hot Jupiters, the outer layers can become ionised, leading to magnetic drag, and the deeper layers can potentially experience ohmic heating significant enough to alter the planetary radius. To date, the only simulations consistently capturing these impacts have been those of \citet{rogers_2014,rogers_2014b}, which revealed that ohmic heating is unlikely to be significant enough for reasonable magnetic field strengths. Magnetic drag, aside from the work of \citet{rogers_2014,rogers_2014b}, has otherwise been captured through parameterised drag schemes \citep{perna_2010}. \paragraph{Sub-stellar objects} As more and more planets have been detected and other observable populations identified, either through discovery or through improvements in instrumentation, the use of GCMs for gas giant extra-solar objects has widened. Brown dwarfs, sub-stellar objects which form similarly to stars, share parameter space with gas giant planets in terms of bulk compositions (to some extent) and radii, leading to similarities in the atmospheric circulations of the two kinds of objects \citep{Show:19}. Additionally, due to high levels (relative to Jovian planets) of interior convection these objects are self-luminous, and high signal-to-noise data are available \citep{buenzli_2015}. GCMs have been applied to these objects to explore their flows and, additionally, the occurrence of clouds \citep[e.g.,][]{zhang_2014,tan_2020}. The same studies can be applied to young gas giant planets with high interior convection (e.g., directly imaged planets). The main challenges are in handling the strong interior fluxes from the convection, and the extremely short rotation periods. Work is beginning on coupling models of the convective interior of gas giant planets to atmospheric models, to better capture the interaction between these two regimes. Irradiated brown dwarfs, with a partner star that is either on the main sequence or a white dwarf \citep{casewell_2018}, have also recently been studied using an adapted GCM \citep{lee_2020}. Through studying this collection of gas giant objects, from young self-luminous Jovian exoplanets, to older short- and long-period Jovian planets, and isolated or heavily irradiated brown dwarfs, a complete continuum of atmospheric regimes could be unraveled. \paragraph{Adaptations related to exotic thermodynamics and chemistry} Observations of hot Jupiters have also begun to demarcate this sub-class itself into further categories. In particular, ultra-hot Jupiters (with temperatures in excess of $\sim$2,500\,K) provide some real advantages, whilst presenting new challenges.
Although the relatively high temperatures mean that the assumption of chemical equilibrium holds over most of the observable portion of the atmosphere, effects of the high temperatures and photon fluxes, such as thermal and photo-dissociation and the resulting $H^{-}$ opacity, must be included \citep{baxter_2020}. Most significantly, these objects span a temperature range in which hydrogen is present in both molecular and atomic forms \citep{Bell2018,Tan2019}. The variations this causes in the specific heat capacity are large enough that the standard assumption, made within GCMs, of a single value throughout the atmosphere may become problematic. Resolution of this issue requires a significant reworking of dynamical cores developed under the assumption of a constant heat capacity. \paragraph{Specific challenges for mini-Neptunes} The drive in instrumentation is pushing towards the detection and characterisation of planets of smaller radii and longer orbital periods, ultimately in the search for potentially habitable planets (the focus of this workshop). However, the next set of observational facilities and instruments will likely provide access to the sub-class of planets discussed previously, termed mini-Neptunes or super-Earths. The existing challenges listed in section~\ref{sec:gcm-known-limi} clearly apply to those objects. Nevertheless, for these planets, initial work has shown that the standard primitive equations of motion often employed within a GCM may not be valid \citep{Mayne_2019}, and/or that elapsed simulation times must be significantly extended \citep{huize_2020}. Additionally, as temperatures cool, the roles of photochemistry and condensation may become even more important, and for a wider range of species. As highlighted by \cite{leconte_condensation_2017}, a background gas lighter than the condensible gas (for example, H$_2$O condensing in an H$_2$-dominated atmosphere) can induce a mean molecular weight gradient which inhibits convection in the atmosphere. This process is stronger on giant planets such as mini-Neptunes, but could also be observed on smaller rocky planets. Therefore, it may be interesting to explore this phenomenon with 3D simulations. \subsection{Parameterization of Convection in Exoplanet GCMs, differences and limitations} \label{subsec:conv} \subsubsection{General considerations} Convection, and moist convection especially, is an important driver of heat redistribution in planetary atmospheres. By forming clouds, and through its dependence on boundary-layer processes, moist convection is also a key part of complex feedback mechanisms in the climate system \citep[e.g.,][]{Arakawa2004}. To accurately resolve convective plumes, a numerical climate model has to have a sufficiently high spatial resolution, making it extremely computationally expensive and thus unfeasible for long simulations or multi-planet studies. Thus, all modern exoplanet GCMs rely on parameterizations to emulate the overall effect of subgrid-scale convective processes on the large-scale atmospheric fields. These parameterizations always include a number of quasi-empirical parameters, usually inherited from Earth climate models or validated against observations and convection-resolving simulations on Earth, raising the question of their applicability to extraterrestrial atmospheres. For the Earth's atmosphere, the assumption of a dilute condensible is usually a good approximation \citep{Pierrehumbert2010}.
This is not the case for other planetary atmospheres in the Solar System and beyond: the main condensible species can comprise a substantial portion of the atmosphere, or have thermodynamic properties such that the convective mass flux \citep{Ooyama1971} is sufficient to affect the large-scale dynamics, as on Pluto \citep{Bertrand:2018}. In such a mass-flux scheme, the depth of the convection is constrained by the distance that the air rising in convective updrafts penetrates above its level of neutral buoyancy before it ceases to rise. To simulate these effects correctly, the LMD-G is equipped with a convection parameterization accounting for non-trace condensible species \citep{Pierrehumbert_Feng2016}. It is applicable to a wide range of atmospheric conditions and is described in \cite{Leconte:2013b}. Whatever the convection parameterization is based on --- adjustment to reference profiles, subgrid-scale mass flux or other principles --- it has to be validated against observations and convection-resolving simulations. In the absence of in-situ observations for exoplanets, the next best option is to use high-resolution convection-resolving and cloud-resolving models (CRMs), which simulate convective processes explicitly. Targeted limited-area CRM simulations can be used to benchmark and improve convection parameterizations. For instance, \cite{Abbot2014} compared CRM simulations to GCM simulations in the case of a snowball Earth and found that they provide consistent results. This helped to confirm the hypothesis that clouds could provide a large warming on a snowball Earth and potentially lead to the deglaciation of the planet. Two talks at the THAI workshop presented work paving the way in this direction. Denis Sergeev demonstrated substantial differences between cloud profiles in a global coarse-resolution GCM experiment and a limited-area CRM experiment, conducted using the UK Met Office Unified Model (section \ref{subsubsec:UM}) for the THAI Hab1 \& Hab2 setups. Differences in convective cloud cover appear to be model- and planet-dependent, because the opposite picture was found by Maxence Lef\`evre, who compared the LMD-G GCM (section \ref{subsubsec:LMD-G}) to the Weather Research \& Forecasting (WRF) model \citep{Skam:08} in a CRM mode for a case of convection on Proxima Centauri b. Further work will hopefully build on Sergeev's and Lef\`evre's CRM simulations to explore convective processes in other atmospheric and planetary regimes, informing atmospheric modellers of parameterization biases and caveats. \subsubsection{Transitioning from mass-flux schemes and convective adjustment toward fully resolving the convection: challenges and potential science returns} \label{subsec:mf2ad} There was a general consensus among the participants that a flexible convection parameterization based on the mass-flux approach is the best option for coarse-resolution 3D GCMs, while the adjustment scheme is usually too crude to represent convection. A shift towards fully-resolved global convection simulations is not going to happen quickly, but limited-area CRMs should be used more actively in exoplanet atmospheric modelling. As outlined in Sec.~\ref{subsec:conv}, one of the main applications of CRMs is to re-tune existing convection parameterizations for different extraterrestrial atmospheres.
Such experiments are routinely performed for Earth weather and climate prediction models \citep{Rio2010}, and for Mars as well \citep{Col2013}, so the exoplanet GCM community should work closely with meteorologists and Earth model developers to benefit from their invaluable expertise. In practical terms, an important and already feasible project is building an archive of convection-resolving simulations of H\textsubscript{2}-, N\textsubscript{2}- and CO\textsubscript{2}-dominated atmospheres, which can then serve as a standard benchmark suite for coarse-grid global models. This project can then evolve into an exoplanet CRM model intercomparison project and find its rightful niche under the CUISINES umbrella (Section \ref{subsec:CUISINES}). With a wide grid of CRM simulations at hand, certain aspects of convective processes would become more tractable: whether and how the structure and dynamics of convective plumes and precipitation anvils change in non-Earth planetary atmospheres; how they are influenced by plume size, convective overshoot beyond the level of neutral buoyancy, and entrainment and detrainment; and how the timescales of convection change when the composition of the atmosphere or the stellar/planetary parameters (such as the stellar spectrum and planetary gravity) are changed. In addition, Earth science expertise is valuable for the development of generalized convection schemes from scratch. In this case, code developers should strive to make them flexible, modular and portable, so that it is relatively easy to swap one convection parameterization in a GCM for another. The fact that most convection schemes operate column-wise, without communicating with neighbouring columns in a 3D GCM, makes the issue of portability easier to tackle. \subsection{Planetary surface parameterizations for exoplanets and their impact on the climate}\label{subsec:surf} \subsubsection{Land} \label{subsubsec:land} There are many ways in which a continental surface can affect a planet's climate, including (but not limited to) its surface albedo, topography, thermal inertia, and surface roughness; a schematic picture of how these parameters enter the surface energy budget is sketched below. Depending on the nature of the land surface, these parameters can change significantly and thus alter the planet's climate \citep{Madden:2020}. The most extreme configuration in which continental surfaces play a major role is that of ``land planets''. Land planets, sometimes also known as dune planets, are desert rocky planets with limited surface water \citep{Abe:2011}. It is thought that this type of planet is one of the most probable (along with water-rich, ocean planets) around M stars, as a result of formation and escape processes \citep{Tian:2015}. These types of planets have already been studied with 3D GCM simulations \citep{Abe:2011,Leconte:2013,Menou:2013,Yang:2014,Kodama:2019,Way2020}, specifically applied to the TRAPPIST-1 planets \citep{Wolf2017,Turbet:2018aa,Rushby:2020}. \cite{Rushby:2020} recently provided a detailed analysis of how the surface type/composition may affect the climates of the TRAPPIST-1 planets, assuming they are land planets. This work was also described in the presentation given by Aomawa Shields.
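To connect the parameters listed above to the climate, consider a generic (textbook, not model-specific) surface energy budget for the surface temperature $T_s$ of a land column,
\begin{equation}
C\,\frac{\partial T_s}{\partial t} = (1-\alpha)\,F_{\rm sw}^{\downarrow} + F_{\rm lw}^{\downarrow} - \epsilon\,\sigma T_s^{4} - F_{\rm SH} - F_{\rm LH},
\end{equation}
where the albedo $\alpha$ controls the absorbed stellar flux, the thermal inertia is encapsulated in the effective surface heat capacity $C$, and the surface roughness enters through the turbulent sensible and latent heat fluxes $F_{\rm SH}$ and $F_{\rm LH}$. On a dry land planet the latent heat flux is strongly suppressed, which is one reason land planets can respond so differently from aquaplanets.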
While the study of the continental surfaces of the Earth and of the other planets of the solar system (in particular Mars, characterized by its hyper-continental climate) is today our primary source of information on this matter, characterizing the climate of the planets of the TRAPPIST-1 system and of other nearby planets (e.g., Proxima~b) may provide us with crucial data on how continental surfaces operate on alien worlds. \subsubsection{Ocean} \label{subsubsec:ocean} Ocean modeling is often overlooked by the exoplanet community, largely due to the large computational expense associated with spinning up dynamic ocean models, coupled with the challenges of observing an ocean on another planet \citep[e.g.,][]{Robinson2010}. Future exoclimate simulations of terrestrial worlds would benefit from the use of a dynamic ocean component \citep{Way2018,Yang2019a} coupled with a dynamic sea ice model. It has been shown that sea ice drift can alter the habitable zone limits in various cases \citep{HuYang2014,Way2017,Way2018,DelGenio2019}. Near the inner edge of the HZ, \cite{Leconte2018,Yang2019a,Salazar_2020} have shown that ocean heat transport is not always critical, especially when continents are present. Warmer planets (T$_S$ $>$ 300 K) tend to have more homogeneous surface temperatures, and thus ocean currents may not cause a meaningful net change in ocean-atmosphere heat exchanges. \cite{Yang2019a} show this to be the case, and further state that ``...ocean dynamics have almost no effect on the observational thermal phase curves of planets near the inner edge of the habitable zone. These results suggest that future studies of the inner edge may devote computational resources to atmosphere-only processes such as clouds and radiation.'' However, \citet{Yang2019a} also argue that ocean heat transport is critical for the climate and observables of ``middle HZ'' planets in M-dwarf systems. Still further, \cite{Way2018} demonstrated in their inner edge of the habitable zone studies that planets with modern Earth-like land-sea masks show marked differences in mean surface temperature between fast rotators (spin less than 8 Earth sidereal days) and slow rotators (see \citealt{Way2018}, Figure 2). In addition, ocean composition---specifically, ocean salinity---may affect the fraction of habitable surface area and the phase curves of middle and outer HZ worlds \citep{Cullum2016,DelGenio2018,Olson2020}. Salinity (dissolved salt content) is a first-order control on the density of seawater, and it further modulates the relationship between temperature and density. Salinity thus influences ocean stratification, circulation, and heat transport. At the same time, salinity depresses the freezing point of seawater, potentially limiting the formation of sea ice in salty oceans, with consequences for surface albedo. In sum, saltier oceans tend to result in warmer climates. Models that neglect ocean dynamics cannot currently simulate salinity impacts on ocean heat transport, but they may include freezing point depression. However, the relative contribution of heat transport vs. freezing point depression to climate warming, and how the balance of these effects may differ under different climate states, is not well understood.
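For reference (a standard oceanographic approximation, not a result from the studies cited above), the freezing point of seawater at the surface decreases nearly linearly with salinity $S$ (in g~kg$^{-1}$),
\begin{equation}
T_f \approx -0.054\,S \;\;^{\circ}{\rm C},
\end{equation}
so Earth's mean ocean salinity of $\sim$35 g~kg$^{-1}$ depresses the freezing point by only $\sim$1.9$^{\circ}$C, whereas the much saltier oceans considered in some of the studies above can suppress sea ice formation far more strongly.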
It is thus unknown when/if simple hacks, such as adjusting the freezing point of seawater in a model without a dynamic ocean, are a reasonable strategy for simulating the climates of exoplanets with unknown ocean salinity, or whether the likelihood that exo-oceans differ from Earth's ocean with respect to salinity requires the inclusion of a dynamic ocean. To facilitate future studies, the exoplanet modeling community should consider working more closely with oceanographers. For example, there is a large parameter space of ocean tidal dissipation to be explored, which affects planetary rotation rates over time \citep[e.g.,][]{Green2019}; as mentioned above, the rotation rate has been demonstrated to affect climate. It must also be stressed that current GCMs used for exoplanetary studies have serious shortcomings in some cases. First, the putative thermodynamic oceans used in exoplanet GCMs (also called q-flux oceans \citep{Miller1983,Russell1985}, since a heat source ``q'', whose values are generally specified by a control run, is prescribed to represent seasonal deep water exchange and horizontal ocean heat transport) have generally used zero horizontal heat transport, or highly simplified parameterizations \citep[e.g.,][]{Edson2012,Godolt2015,Kilic2017}. As well, the current GCM dynamic ocean models are presently highly parameterized for modern-day Earth, since they are the children of Earth parent GCMs. For this reason it is important to engage more closely with the oceanography community to better parameterize the current suite of dynamic oceans used in exoplanetary GCMs. For ocean planets that have a low density, the ocean depth can reach tens or even hundreds of kilometers, and ice may form under high pressure at the bottom of the ocean. For such deep oceans, the equation of state for the seawater needs to be modified. Moreover, the ocean-bottom ice can influence the friction and the exchange of heat and materials between the ocean and the solid planet. Key questions may be answered using ocean GCMs: how deep is the ocean circulation (including both its wind-driven and thermally driven components), and how does the ocean circulation influence the concentrations of CO$_2$ and other greenhouse gases in the atmosphere \citep{Checlair:2019}? Besides seawater oceans, another type that needs to be investigated is magma oceans. The lowest temperature of a magma ocean is about 1600 K, and the density, viscosity, and diffusivity of a magma ocean are quite different from those of Earth's ocean; however, they may still be of the same order. In this respect, an Earth ocean GCM may be easily modified to simulate the circulation of a magma ocean. But a key process for the magma ocean is silicate (or other material) precipitation in the ocean, which acts like vertical convection and can significantly influence the heat and mass transports in the ocean. \subsection{Middle and Upper Atmosphere Processes} \label{subsec:midup} Humans spend nearly their entire lives in Earth's troposphere, handling day-to-day local weather and coping with climate change. Yet the analogous stratospheric regions, and those above, constitute our best opportunities for characterizing an exoplanet's atmosphere, and they will likely be, and should be, a primary focus in GCM development. The multitude of planetary GCMs that have been adapted for exoplanets have seen limited efforts at incorporating middle atmosphere effects, especially the coupling of transport processes from other regions.
Convection on Earth can lead to the production of high-altitude clouds (and hazes) and is a source of atmospheric gravity waves; this represents a prime example of tropospheric-stratospheric coupling, where momentum is transferred from the lower to the upper atmosphere. While there is work needed on the features and processes that more directly influence the surfaces of exoplanets, further understanding and development of middle (and upper) atmospheric modeling in GCMs offers the greatest opportunity for scientific advancement in our interpretation of exoplanet observations from the next generation of telescopes. \subsubsection{Non-equilibrium or non-conservative radiative and dynamical effects} \label{subsubsec:NLTE} \paragraph{Gravity waves} The challenge for current exoplanet GCMs is to overcome the expense of running simulations with the necessary horizontal and vertical resolution, over a range of pressures, that adequately resolves processes operating over a variety of spatial and temporal scales. But if our focus is data-driven, we ought to look at the processes important in Earth's (and other worlds') middle atmospheres. Earth, Venus, and Titan are all terrestrial worlds with significant super-rotation in their middle atmospheres. Venus' slow (and retrograde) rotation makes it a hallmark case study for tidally locked exoplanets with substantial atmospheres. It has super-rotation at the equator that is likely driven by the Gierasch-Rossow-Williams (GRW) mechanism \citep[][]{Gierasch1975, RossowWilliams1979}, whereby planetary waves from high latitudes can transport angular momentum towards the equator and spin up super-rotating jets. Titan, too, has been observed to have significant variability in winds at different altitudes, spanning from the stratosphere to the lower thermosphere, and it too is a slow rotator, with a period of roughly 16 days. Earth has an alternating stratospheric jet oscillation, known as the Quasi-Biennial Oscillation (QBO), which is the product of a complex interaction of a broad spectrum of waves. Stratospheric oscillations are also found to occur in the giant planets of the solar system \citep{Fouc:08} and are a recent focus for climate modeling \citep{Cose:17,Bard:21}. The point is that the upper atmospheric dynamics of these worlds are driven by waves that are not easily resolved, and that the effects of these waves are mostly missing from current exoplanet models while potentially having strong observational consequences. Small-scale or high-frequency gravity waves play a large role in the upper atmospheric dynamics of terrestrial-size planets. Not only are they an important source of momentum to drive the QBO, but they also affect the mid-latitude jet streams, the semi-annual oscillation, and even the Brewer-Dobson circulation \citep{Butchart2014}. They are an efficient means of transporting energy across latitudes and altitudes, and they effectively redistribute energy, either mechanical or thermal, away from its source to other areas until equilibrium or relaxed/balanced states are reached. Gravity waves act to keep the upper atmosphere of Earth in dynamical and radiative equilibrium by dispersing energy spatially over time. Gravity wave activity has also been confirmed in the Venusian and Martian middle/upper atmospheres by several measurements, and is claimed to produce the observed variability in density, temperature and cloud structure \citep[e.g.,][]{Creasey2006, Garcia2009, Altieri2012}. Linear wave theory has allowed parameterizations of the effects of gravity waves and their breaking action in Earth-based GCMs for some time.
The typical approach is to assume some characteristics of the waves (amplitude or momentum flux, wavelength, phase speed, etc.) as inputs to the parameterization, which is then applied depending on the modeled local atmospheric environment, with a focus on horizontal and vertical wind shear and temperature gradients. Nevertheless, given the lack of systematic observations of gravity waves, and the uncertainty in their sources (for instance on Mars and Venus), both of which are necessary to constrain model parameters, our experience with different GCM configurations leads us to conclude that the total zonal wind (i.e., averaged over all local times) is very sensitive to many GCM quirks. Zonal wind in the middle/upper atmosphere can be either positive or negative, producing different circulation regimes. \paragraph{Non-LTE effects in the upper atmosphere} The upper atmospheres of the terrestrial-like planets in our Solar System are similar in terms of physical processes, in spite of important differences in temperature, density, and composition \citep{Gladstone2002}. A basic property of these upper layers is the low gas density, which is also responsible for the breakdown of Local Thermodynamic Equilibrium (LTE), specific to each molecular species and each vibrational transition. These non-LTE effects result in populations of the molecular energy states that are not dictated by Boltzmann statistics at the local kinetic temperature, and occur when molecular collisions are so infrequent that other processes (e.g., radiative transfer) become important in determining those states' populations \citep{LopezPuertasTaylor2001}. In terrestrial planets, and for the main molecules and infrared emissions, those layers usually correspond to the mesosphere and thermosphere. At pressures below about 10$^{-5}$ mbar, solar EUV heating and thermal conduction are the main processes controlling the energy balance, while in the mesosphere (on terrestrial planets between 1 and 10$^{-5}$ mbar, approximately), absorption and emission by atmospheric molecules with active ro-vibrational bands in the IR usually play a crucial role in the thermal structure \citep{Gladstone2002}. These non-LTE processes have to be considered when interpreting strongly irradiated exoplanets. CO$_2$ and CO non-LTE fluorescence is common in telluric atmospheres, but CO emission has also been detected on Neptune \citep{Fletcher2010}. Furthermore, non-LTE radiative transfer modelling helped to explain unexpected features observed around 3.3 $\mu$m on the hot Jupiter HD 189733b from ground-based measurements, reported to be CH$_4$ non-LTE emission \citep{Swain2010,Waldmann2012}. Future detections by JWST and LUVOIR, together with ground-based measurements of the upper atmospheres of hot Jupiters by IR spectrographs (CRIRES/VLT, METIS/E-ELT), will make it possible to test composition and temperature models of warm and hot Jupiters; however, this remains challenging for terrestrial exoplanets. Due to their expected significant effect on exoplanet atmospheric characterization, the improvement of middle and upper atmosphere processes in exoplanet atmospheric models should therefore be a priority in the era of JWST and the ELTs. \subsubsection{Photochemistry in 3D atmospheres} \label{subsec:photo} The transport of photochemically produced gaseous species and hazes could have a significant impact on the characterization of exoplanet atmospheres \citep{Carone2018}.
Different circulation regimes can result in different global distributions of important atmospheric species such as ozone \citep{Yates_2020,Chen2019}. The community should first think about what kind of composition is the most interesting and imperative for near-term observations (e.g., exo-Venuses). Very large uncertainties remain for many deposition fluxes. For instance, CO and relatively minor tracers like Cl have an important catalytic activity for both Venus and Mars, but are not accurately constrained. Along a similar vein, the surface emission fluxes of various biogenic compounds such as DMS are unconstrained, and different assumptions can dramatically affect their resultant global distributions \citep{Chen2018}. Exoplanet observations across many regimes show that clouds and hazes significantly affect planetary spectra. Current GCMs provide the spatial mapping of water clouds, which can have strong effects on transmission spectra and thermal phase curves \citep{Wolf2019}. However, the majority of GCMs do not include a self-consistent treatment of photochemical haze. As an integral part of climate modeling, aerosols need to be included either from the ground up (with production rates from chemical networks) or in a decoupled fashion (with aerosols and photochemical hazes treated separately). For instance, in modeling Titan (and Titan-like exoplanets) \citep{Lora2018}, the production rate is fixed to reproduce observations, and the photochemistry is treated separately. For exoplanets, this would require the use of very detailed models to capture monomer formation self-consistently, which would then be used in a simpler photochemical model, favoring the approach of a lower-resolution model with a higher-complexity component. As a follow-up to \citet{Chen2019}, one possibility is to adapt CARMA \citep{Larson2015}, a state-of-the-art microphysical model that can be used to simulate the evolution of hazes. Note that CARMA is already coupled to ExoCAM with a nominal fractal-aggregate haze model \citep{Wolf+Toon2010}, with funded plans to use it for studying hazy iterations of the habitable zone planet TOI-700 d. Haze production rates will be sourced from off-line 1D Atmos calculations \citep[e.g.,][]{Arney2017}. Lastly, the 3D temperature structure of a planet could also be important for the chemistry itself (exothermic/endothermic reactions), even in the absence of photochemistry. It has to be noted that for exoplanet photochemistry, the use of pre-calculated photolysis rate tables is much too specific: a rate table of reasonable size is unlikely to be generic, and for an atmosphere in which we do not know the profiles of the species, we cannot build a coherent table. It is necessary to compute the photolysis rates online, which leads to a reconsideration of the radiative transfer within the GCM, since the correlated-k method used by most GCMs cannot be used for photolysis calculations. In short, it is necessary to calculate the photolysis online. However, simulating the photochemistry and chemistry with 3D models is computationally very expensive. It may require prior work to reduce the number of gaseous families, which would depend on bulk composition, temperature, and instellation. Such an approach has been started by Olivia Venot's group, but it is uncertain how 3D transport will affect this optimization, which is performed in 1D. Another caveat is that the majority of terrestrial 3D chemical models are restricted to present-day Earth compositions.
This is due to the fact that these models largely inherited components from their model supersets originally developed for Earth-based research. For instance, \citet{Chen2019} adapted the National Center for Atmospheric Research Whole Atmosphere Community Climate Model (WACCM) \citep{Marsh2013} by deploying subroutines from ExoCAM. Thus another goal is to extend 3D photochemistry models to reducing and weakly oxidized (H$_2$-, N$_2$- and CO$_2$-rich) atmospheres. Such anoxic conditions dominate the atmospheric evolution histories of Earth, Mars, Venus, and Titan. This suggests that anoxic atmospheres are the ``default'' state of a planet's eventual fate in the absence of biological activity. As shown by previous 1D work \citep{Lincowski2018}, assessing non-Earth-similar atmospheres is important to gauge the detectability of photochemical byproducts on the myriad of potential rocky exoplanet compositions. While out of the scope of this report, it is important to mention that star-planet interaction is fundamental to understanding a range of key processes (stellar-sourced charged particles, EUV heating, photodissociation, photoionization, etc.) that could shape the upper layers of rocky exoplanet atmospheres. Collaboration with stellar physicists and observers will be crucial. \subsection{Aerosols in exoplanet GCMs}\label{subsec:aerosol} \subsubsection{Condensible gases in GCM simulations for exoplanets} Characterization of non-water condensables is extremely important for understanding both the atmosphere and surface processes of other worlds. In the low-temperature range, gases such as CO$_2$, H$_2$S, and SO$_2$ can condense \citep{Fray:2009} for planets with hydrogen-dominated atmospheres and volcanic activity, while CH$_4$, NH$_3$, and N$_2$ are expected or found for worlds like Venus, Titan, Pluto, and other Solar System moons. This condensation can have large consequences for the planet's atmosphere by removing greenhouse gases, forming clouds, and modifying surface albedo. For instance, \cite{Turbet2017} showed that CO$_2$ condensation can strongly reduce the deglaciation of terrestrial planets, as it leads to the accumulation of surface CO$_2$ ice that can get permanently trapped under water ice. In the high-temperature range, a variety of condensables would potentially be observable by JWST or future extremely large telescopes (ELTs) for highly irradiated exoplanets, but data on the properties (e.g., microphysical properties) of these condensable species are sorely lacking. Spectral information from laboratory experiments and missions would be needed regarding the optical properties of exotic clouds and the albedo properties of non-water condensables, to improve model response to a broad variety of scenarios. In addition to acquiring new data, model developments are needed within GCMs in order to include condensable species other than water, and to precipitate condensables with appropriate albedo and grain-size properties. In this area, the LMD-G GCM already has extensive capabilities, for CO$_2$ condensation on Mars \citep[e.g.,][]{Forget98}, early Mars \citep[e.g.,][]{Forget:2013} and exoplanets \citep[e.g.,][]{Wordsworth:2011,Turbet2017}, for N$_2$ condensation on early Titan \citep[e.g.,][]{Charnay:2014}, and for N$_2$, CH$_4$, and CO condensation on Pluto \citep[e.g.,][]{Forget:2017}.
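As a minimal illustration of how a GCM might flag such condensation, the sketch below inverts a Clausius-Clapeyron-type fit, $\ln p_{\rm sat} = A - B/T$, to obtain the CO$_2$ frost point at a given pressure. The two constants are an illustrative fit broadly consistent with published CO$_2$ saturation data \citep[e.g.,][]{Fray:2009}; an actual GCM would use the tabulated curves themselves.
\begin{verbatim}
import numpy as np

# Illustrative Clausius-Clapeyron fit for CO2: ln(psat) = A - B/T,
# with psat in Pa and T in K. Constants are an assumed fit, chosen to
# roughly match published saturation curves, not a GCM's actual table.
A, B = 27.835, 3167.8

def t_condense_co2(p):
    """Temperature [K] below which CO2 condenses at pressure p [Pa]."""
    return B / (A - np.log(p))

def co2_condenses(T, p):
    """True wherever local temperature is below the CO2 frost point."""
    return T < t_condense_co2(p)

# Example: at a Mars-like 600 Pa surface pressure, CO2 frost requires
# temperatures below ~148 K.
print(t_condense_co2(600.0))
\end{verbatim}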
Overall, much more data are needed to make significant progress on the question of condensible species in exoplanet atmospheres and surfaces (in particular for high-temperature condensables), along with potentially complicated and time-consuming model development. Initial work coupling various schemes with multiple condensate species has been done for hot Jupiters \citep[e.g.,][]{lee_2016,Lines_2018,Lines_2018b,Lines_2019}. \subsubsection{Impacts of aerosol microphysics in GCM simulations and simulated spectra} \label{subsec:micro} Aerosols are present in every atmosphere of our Solar System's planets and moons. Clouds have also been observed in exoplanet atmospheres, such as those of the super-Earth GJ 1214b \citep{Kreidberg2014}, the gaseous giant WASP-12b \citep{Wakeford2017}, and WASP-31b \citep{Sing2016}. Hazes have been observed on WASP-6b \citep{Nikolov2015} and HAT-P-12b \citep{Sing2016}. They have not been observed for terrestrial-size exoplanets yet, but simulations using GCMs and PSG have shown that they dramatically flatten the transmission spectrum, preventing an exhaustive atmospheric characterization from space observatories \citep{Fauchez:2019,Komacek2020,Suissa2020}. However, the detailed aerosol microphysics is uncertain for exoplanet atmospheres. Yet changes in the microphysical and optical properties can have a very large impact on the climate simulation and simulated spectra. To improve our understanding of aerosol properties in exoplanet atmospheres we need data, from the lab but also from JWST and ARIEL. If a statistical study is performed on such data, it may allow us to discriminate between cloud particles and hazes, which differ in terms of size and microphysical/optical properties. Also, linear polarization is a powerful tool with which to retrieve cloud microphysical/optical properties, as is done for Earth using, for instance, POLDER/PARASOL data \citep{Goloub2000}. Finally, we have to improve our connections with other communities, such as Earth scientists, paleoclimatologists, and Solar System planetary scientists, to better share data (remote sensing and in situ) and methods of applying data to exoplanet atmospheres. In addition to data, modeling studies can also be very helpful. For instance, a sensitivity study on cloud microphysics can be performed by varying the cloud particle size and the amount of cloud condensation nuclei (CCN) to simulate their impact on simulated spectra. Modeling work could also help us better understand how spatial heterogeneity (horizontal and vertical resolution, cloud gap fraction, overlap) affects both transmission and reflection spectra. These improvements may necessitate the inclusion of biological coupling for both haze (microbial, plant) and fires, and therefore the development of computational physics packages, chemistry packages, and increased capabilities for capturing spatial and temporal heterogeneity. More direct collaboration between climate modelers and observation-simulator developers is needed to bring consistency to assumptions about radiative transfer. More observations of Earth transmission spectra are therefore needed; for example, the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) onboard the SCISAT-1 satellite provides observations in the 2.2--13.3 $\mu$m window. This emphasizes the benefit of Earth and Planetary Science synergies.
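The particle-size sensitivity study suggested above can be framed with the standard geometric-optics relation between condensate path, particle size, and cloud optical depth, $\tau \approx 3W/(2\rho_c r_{\rm eff})$. The sketch below assumes spherical particles and an extinction efficiency of roughly 2; the numbers are purely illustrative.
\begin{verbatim}
def cloud_optical_depth(w, r_eff, rho_c=1000.0):
    """Visible optical depth of a cloud layer (geometric-optics limit).

    tau ~ 3*W / (2 * rho_c * r_eff): for a fixed condensate path,
    halving the particle size doubles the optical depth.
    w     : condensate path [kg/m^2]
    r_eff : effective particle radius [m]
    rho_c : condensate density [kg/m^3] (1000 for liquid water)
    """
    return 3.0 * w / (2.0 * rho_c * r_eff)

# Sensitivity sweep of the kind suggested in the text: same condensate
# path, different assumed droplet sizes.
for r in (5e-6, 10e-6, 20e-6):
    print(r, cloud_optical_depth(0.1, r))   # tau = 30, 15, 7.5
\end{verbatim}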
\subsection{Synergy between EBMs, 1D radiative-convective and photochemical models, and GCMs} \label{subsec:syn} \subsubsection{EBMs in the THAI workshop} The HEXTOR energy balance model (EBM) was used to conduct the THAI scenarios, in both a latitudinal and a longitudinal mode \citep{haqqmisra2021}. HEXTOR is a one-dimensional EBM based on the model by \citet{Williams1997}. The model is typically run in a latitudinal mode, which reproduces Earth's mean annual climate. The model can also be used to explore changes in Earth's climate due to past and future orbital variations, and possible feedbacks from anthropogenic forcing \citep{haqq2014damping}. Prior versions of this model have represented radiative transfer with a basic linear relationship \citep{haqq2014damping} or with a polynomial fit of 1D radiative-convective climate calculations \citep{Williams1997,HaqqMisra2016,batalha2016climate,hayworth2020warming}. The current version of HEXTOR attempts to improve the accuracy of the radiative transfer in the model by using a lookup table, which conducts a nearest-neighbor interpolation for the outgoing longwave radiation (OLR) and albedo using a database containing thousands of 1D radiative-convective climate calculations. This provides an advantage in accuracy at the cost of added computational expense. During the THAI workshop, Dr. Haqq-Misra showed that HEXTOR in a latitudinal configuration either underestimates or overestimates the global average temperature in the THAI simulations, because the hemispheric differences between the day and night sides cannot be represented with a single dimension in latitude. However, he also showed that HEXTOR can be configured as a longitudinal EBM through a coordinate transformation, which places the substellar point at the north pole and allows the day-to-night side contrast to be represented more accurately \citep{fortney2010transmission,koll2015deciphering,Checlair:2017,haqqmisra2021}. Longitudinal EBMs, either along the equator like HEXTOR or with full latitude-longitude resolution, can provide constraints on climate across broad parameter spaces or for long time integrations, which can be useful in identifying specific problems to study further with GCMs. VPLanet \citep{Barnes2020} includes an EBM called POISE (Planetary Orbit-Influenced Simple EBM), a one-dimensional seasonal EBM that reproduces Earth's annual climate as well as its Milankovitch cycles \citep[see][]{North1979, HuybersTziperman08,Deitrick2018b}. Though the model lacks a true longitudinal dimension, each latitude is divided into a land portion and a water portion, with distinct heat capacities and albedos, and heat is allowed to flow between them. Ice can accumulate on land at a constant rate when temperatures are below 0$^{\circ}$ C, while melting/ablation occurs when ice is present and temperatures are above 0$^{\circ}$ C. Sea ice forms when a latitude's temperature drops below $-2^\circ$ C (accounting for salinity), and melts when the temperature rises back above this value. To account for ice sheet flow, bedrock depression, lithospheric rebound, and ice sheet height, POISE employs the formulations from \cite{HuybersTziperman08}. The bedrock depresses and rebounds locally in response to the changing weight of ice above, always seeking isostatic equilibrium. POISE is thus a self-consistent model for ice sheet growth and retreat due to instantaneous stellar radiative forcing, orbital elements, and rotational angular momentum.
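For reference, the class of EBMs discussed above can be written down in a few lines. A model of the type described by \citet{North1979} solves
\begin{equation*}
C\frac{\partial T}{\partial t} = S(x)\left[1-\alpha(T)\right] - \left(A + BT\right) + D\frac{\partial}{\partial x}\left[(1-x^2)\frac{\partial T}{\partial x}\right],
\end{equation*}
with $x=\sin(\mathrm{latitude})$. The sketch below integrates this equation with illustrative Earth-like constants; these are assumptions for demonstration, not the HEXTOR or POISE settings.
\begin{verbatim}
import numpy as np

nlat = 45
x = np.linspace(-0.98, 0.98, nlat)               # x = sin(latitude)
S = 340.0 * (1.0 - 0.477 * 0.5*(3*x**2 - 1))     # annual-mean insolation [W m-2]
A, B = 203.3, 2.09     # OLR = A + B*T [W m-2], [W m-2 K-1]
D = 0.555              # meridional diffusion coefficient [W m-2 K-1]
C = 40.0               # column heat capacity [W yr m-2 K-1]
dt, dx = 0.05, x[1] - x[0]                       # timestep [yr]
T = np.full(nlat, 10.0)                          # temperature [deg C]

for _ in range(20000):
    alpha = np.where(T < -10.0, 0.62, 0.30)      # crude ice-albedo switch
    flux = (1.0 - (x[:-1] + dx/2.0)**2) * np.diff(T) / dx
    div = np.zeros(nlat)
    div[1:-1] = np.diff(flux) / dx               # crude zero-flux poles
    T += dt * (S*(1.0 - alpha) - (A + B*T) + D*div) / C

print(T.mean(), T.min(), T.max())                # rough Earth-like climate
\end{verbatim}
Such a model converges in seconds on a laptop, which is precisely why EBMs suit the broad parameter sweeps and long time integrations described in this subsection.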
\subsubsection{1D radiative-convective and photochemical models in the THAI workshop} 1D radiative-convective climate and photochemical models are one-dimensional models representing a vertical atmospheric column, assuming a plane-parallel atmosphere in hydrostatic equilibrium. In the photochemical models, the vertical transport takes into account molecular and eddy diffusion, and such models are able to represent complex photochemistry. 1D models have been widely used by the community to determine the edges of the habitable zone \citep{Kopparapu2013}, to study the ancient Earth \citep{Arney2016,Arney2017}, and to study various exoplanets \citep{Lincowski2018,Meadows2018}. In this workshop, THAI simulations with the Atmos 1D model \citep{Wunderlich2020} were presented by Andrew Lincowski, following a two-column approach. Dr. Lincowski showed that two 1D radiative-convective atmospheric columns are able to reproduce the day-night temperature contrast simulated by GCMs, while keeping an advantage in terms of computational time (Lincowski et al., in preparation in this focus issue). \subsubsection{Synergy between GCMs, EBMs and 1D models} GCMs are very complex models that require significant time to converge. Lower-dimensional models such as EBMs or 1D radiative-convective climate and photochemical models, while ideal for exploring large parameter sweeps, lack a representation of atmospheric dynamics, surface heterogeneity, and clouds. The computational efficiency of EBMs enables them to simulate climates on much longer timescales of thousands or millions of years, to explore orbital and rotational effects on climate \citep[e.g.,][]{Spiegel2009,Deitrick2018b}. EBMs are typically 1D in latitude and solve a single partial differential equation for surface temperature. Temperature then depends on the incoming stellar flux (instellation), heat diffusion, albedo, and the OLR. The OLR and albedo are parameterized with simple formulations \citep{North1979,Spiegel2009,Rose2017,Palubski2020}, though several studies have made advancements by fitting polynomials to radiative-convective models \citep{Williams1997,HaqqMisra2016}. The chief challenge for these models comes from parameterizing atmospheric dynamics in terms of a heat diffusion term, and from the accuracy of the radiative transfer parameterization. For this reason, synergy with GCMs is necessary to ensure some measure of accuracy and predictive power from EBMs. 1D EBMs and radiative-convective climate models coupled to photochemical models can explore a very large parameter space (i.e., star and planet properties, instellation, rotation and orbital periods, eccentricity, atmospheric properties, etc.) and can identify key points of interest in the parameter space that 3D models can then investigate. For instance, 1D models can be used to determine the likely chemical state as input to a GCM. GCMs can in turn be used to determine cloud coverage percentages and dynamical properties as input to 1D and 1.5D models. GCMs can be run with simple tracer chemistry including haze precursors, and 1D photochemical models can then be used to figure out what happens next with haze formation, chemistry, etc. Using both 1D and 3D models simultaneously would allow one to get a more complete picture of chemistry, clouds, and observables. It was also highlighted during the THAI workshop that interactions between the atmosphere and interior of terrestrial planets require more attention than currently given. This would require improving collaboration with geologists/geophysicists.
Such coupling should probably first be developed in 1D, following a ``planet evolution model'' approach based on an asynchronous coupling employing a mixture of (short-term) climate calculations and long-term simulations (for glaciers, but also for even slower processes). Finally, it is important to predict in advance, with a hierarchy of models, what we might see, and to have the models ready to interpret the data. Ideally, upstream modeling work should not be constrained by anticipated observational sensitivity. \section{The Future of Exoplanet GCMs, Results of the Pre-Workshop Survey} \label{sec:survey} Several weeks prior to the workshop, an online survey was sent to all THAI participants to poll their opinions on what the field of exoplanet GCMs might look like in the coming decades. The aim of this exercise was mainly to highlight key modeling developments that need to be pushed by the community to move the field forward in the best possible directions. A total of 35 participants completed the online survey. Participants had different levels of career advancement (3 undergraduate students, 4 graduate students, 13 early-career scientists, 12 mid-career scientists, and 3 senior scientists) and work on several continents (17 in Europe, 14 in North America, 5 in Asia, and 1 in Oceania). The survey consisted of a dozen questions, the main results of which are summarized below.\\ \textbf{(1) High resolution simulations: global or local?} With the increase in available computing resources, and the need to simulate atmospheric processes such as convection and clouds without using empirical parameterizations, high spatial resolution seems to be an attractive development pathway for the exoplanet GCM field in the coming decades. It is in fact one of the main directions of development in the modeling of the future of Earth's climate \citep{Stevens2019}. We asked the survey participants whether they thought that the future of very high spatial resolution simulations for exoplanets was on the side of global or local simulations (i.e., simulations performed on a local grid and then used to derive parameterizations of subgrid processes to be used in low spatial resolution GCM simulations). The results, which are presented in Fig.~\ref{Q1_survey}, show that most respondents believe that the hierarchical approach (local high-resolution simulations to derive subgrid parameterizations for low-resolution GCM simulations) is the most promising for the field. It should be noted that several recent works on exoplanet atmospheric modeling go in this direction \citep{Zhang:2017,Koll:2017,Lefevre:2019,Sergeev2020}. \\ \begin{figure} \includegraphics[width=9cm]{Q1_resolution_model.png} \caption{Results of the first item of the survey: ``Do you think that the future of global climate modeling is?" (1) First possibility (in blue): High (spatial) resolution thanks to increased computing resources? (e.g., to explicitly simulate convection processes directly in GCMs) and (2) Second possibility (in black): Using a hierarchy of models ranging from very fine resolutions to global scale? (e.g., to simulate explicitly convection in an idealized box to derive subgrid scale parameterizations for GCMs). } \label{Q1_survey} \end{figure} \textbf{(2) Most important processes to be modeled in fully coupled 3D GCMs} As more computing resources become available, it is becoming increasingly possible to build fully coupled 3D GCMs, i.e., GCMs that include all processes at play (chemistry, aerosols, oceans, glaciers, etc.)
in/on a planetary atmosphere. It is by combining all these processes at the same time -- in the same way as is done for fully coupled Earth GCMs \citep{Sellar:2019} -- that it will be possible to build virtual planetary atmospheres that are more and more realistic, and therefore able to help interpret the observations. It is in this context that we asked the survey participants to prioritize the processes on which it is most important to focus our efforts today. The results, which are presented in Fig.~\ref{Q2_survey}, show that most respondents ranked clouds/hazes and convection as the first and second most important processes for the field to focus on. This is most likely because clouds/hazes (and moist convection, which leads to cloud formation) have been identified as the most serious limitation for probing the composition of exoplanetary atmospheres, in particular using the transit spectroscopy technique \citep{Fauchez:2019,Komacek2020}. This interpretation is also reflected in the results of the open-ended question ``According to you, which developments should be prioritized to connect GCM models to ongoing and future observations of exoplanets?". The vast majority of those who answered this question did indeed mention clouds as the top priority for modeling efforts.\\ \begin{figure} \includegraphics[width=9cm]{Q2_developments_GCM.png} \caption{Results of the second item of the survey: ``Many projects are currently underway to improve the representation of the different processes involved in exoplanet climate modelling. Can you prioritize the development of these processes in the order you consider appropriate below? (1 is highest priority; 10 is lowest priority.) For example, choose 1 for convection (1st priority); 2 for continental hydrology (2nd priority); 3 for cloud/hazes microphysics (3rd priority); ... until dynamical core (10th and last priority)." } \label{Q2_survey} \end{figure} \textbf{(3) Best strategies to limit the computing time needed to perform fully coupled 3D GCM simulations} Despite the increase in available computing resources, some atmospheric and/or surface processes can be extremely costly in computing time. We thus asked the survey participants what they felt were the best strategies to address this issue. The results are presented in Fig.~\ref{Q3_survey}. Most respondents believe that the increase in computing resources will not be sufficient to address the issue, and that efforts should instead be put into improving the efficiency of numerical codes, as well as into developing new strategies to accelerate the convergence of models.\\ \begin{figure} \includegraphics[width=9cm]{Q3_computing_efficiency.png} \caption{Results of the third item of the survey: ``Coupled GCMs (with chemistry, clouds, oceans, etc.) can be computationally very expensive. What strategies should we prioritize to overcome this issue? (1) First possibility (in blue): Improving GCM codes to make them less resource-intensive. (2) Second possibility (in black): Exploring new strategies (e.g., asynchronous coupling, convergent suite of simulations, etc.) to accelerate the convergence of GCM simulations. (3) Third possibility (in red): Thanks to Moore's law, this will not be an issue anymore in the future." } \label{Q3_survey} \end{figure} \textbf{(4) Computing language} \begin{figure} \includegraphics[width=9cm]{Q4_language.png} \caption{Results of the fourth item of the survey: ``Most of the GCMs are written in Fortran (or C) language.
Do you think GCMs should be converted to other computing languages?" } \label{Q4_survey} \end{figure} The effectiveness of GCMs, as well as their usability and their ability to evolve over time, depends on the programming language in which they are written. Most GCM codes are mainly written in Fortran -- the oldest high-level programming language -- and one can thus wonder whether these codes should be converted to a more modern language, such as Python. However, the time required to convert these codes would be tremendous, and Fortran remains a very fast language for performing GCM calculations. We therefore keep using them as legacy codes. We asked the survey participants whether they believe that GCM codes should be converted from Fortran to modern languages (Fig.~\ref{Q4_survey}). Opinions were very divided, with a slight prevalence of negative answers. This issue was also the subject of intense debate during the third day of the workshop. Among the disadvantages of Fortran that were put forward: \begin{itemize} \item Fortran is difficult to handle for new generations of students (accustomed to modern object-oriented programming languages, e.g., Python). This may strongly impact the attractiveness of the field, with a risk that these students and, more generally, scientific and engineering developers, may turn away and/or lose the skills for sophisticated code development in Fortran. \item The community of developers of modern languages (e.g., Python) is now much wider, and therefore there are many more libraries and resources that GCM codes could make use of. \end{itemize} And the responses to these criticisms: \begin{itemize} \item Once students know one programming language, they can in principle easily adapt to other languages. \item Most GCM codes are several hundred thousand lines long, so in practice it is an excessive amount of work to convert a GCM code into another computing language. \item Which language should be chosen for converting GCM codes? Python? C? How do we know whether these languages will still be widely used 5, 10, or 30 years from now? \item Fortran is a very efficient (and evolving) programming language; e.g., the latest version is Fortran 2018. A first (reasonable) alternative is therefore to modernize GCM codes to the most recent versions of Fortran. Fortran compilers are also highly optimized and fast. \item Finally, a compromise could be found in using Python (or a graphical user interface, GUI) as a wrapper to run a GCM whose core code would be in Fortran. Note that the UM GCM already uses such a GUI, but it requires additional resources and funding to maintain and update it. \end{itemize} To summarize, the fact that Fortran is used for GCMs is historical, but continues to be justified because it is a compiled language that has evolved to offer high performance, in particular for parallel operations in multicore or massively parallel environments. Alternative compiled languages are C or C++. Nevertheless, Python is currently the language growing most in popularity for writing scientific code, in spite of the fact that it is not a compiled language and is thus much slower than Fortran, for instance. The runtime performance of Python can be improved by using pre-compiled libraries (e.g., numba or NumPy), but it has not yet been used to develop a GCM.
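As a minimal illustration of the pre-compiled-Python route mentioned above, the sketch below JIT-compiles a toy diffusion kernel of the kind found in dynamical cores using numba. The kernel and the numbers are purely illustrative and make no claim about full-GCM performance.
\begin{verbatim}
import numpy as np
from numba import njit

@njit  # compiles the kernel to machine code on first call
def diffuse(t, dt, nsteps):
    """Toy explicit 1D diffusion kernel (illustrative only)."""
    out = t.copy()
    for _ in range(nsteps):
        prev = out.copy()
        for i in range(1, prev.size - 1):
            out[i] = prev[i] + dt * (prev[i-1] - 2.0*prev[i] + prev[i+1])
    return out

t = np.zeros(1024)
t[512] = 1.0
result = diffuse(t, 0.25, 10000)  # runs at compiled speed after the JIT pass
\end{verbatim}
Without the \texttt{@njit} decorator the doubly nested loop runs orders of magnitude slower in pure Python, which is the crux of the trade-off debated above.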
\\ \textbf{(5) Machine learning} \begin{figure} \includegraphics[width=9cm]{Q5_AI_technique.png} \caption{Results of the fifth item of the survey: ``Do you think that artificial intelligence (AI) techniques could significantly help atmospheric modeling?" } \label{Q5_survey} \end{figure} Machine learning techniques are on the verge of revolutionizing many fields of science, including astrophysics \citep[e.g.,][]{Way2012,Ivezic2019} and exoplanets \citep[e.g.,][]{Shallue2018,Armstrong2020}. We thus asked the survey participants whether they believe that Machine Learning (ML)/Artificial Intelligence (AI) techniques could also significantly help atmospheric modeling, and if so, how. The results are presented in Fig.~\ref{Q5_survey}. Opinions are again very divided, but with a significant peak for people with no opinion. This is most likely symptomatic of the fact that the use of ML techniques is a topic that has been very little discussed in the (exoplanet) atmospheric modeling community to date. Some survey participants mentioned that ML techniques can be used to derive better sub-grid scale parameterizations, e.g., of convection. This is an avenue currently being explored for the modeling of Earth's climate (see e.g., \citealt{Rasp:2018}). These ML techniques could also prove to be a promising way to connect local 3D high-resolution cloud-resolving models with 3D low-resolution GCMs, in line with the first point of the survey.\\ \textbf{(6) Environmental impact of numerical simulations} \begin{figure} \includegraphics[width=9cm]{Q6_environmental_impact.png} \caption{Results of the sixth item of the survey: ``Are you concerned about the increasing energy cost and thus the environmental impact of GCM simulations?" } \label{Q6_survey} \end{figure} Today's, and especially tomorrow's, GCM simulations (with the increase in both the resolution and the number of physical and chemical processes taken into account) are, and are likely to remain, very energy-consuming, with a potentially high environmental footprint (greenhouse gas emissions, rare-earth metal mining, etc.). We thus asked the survey participants if they were concerned about the increasing energy cost, and thus the environmental impact, of GCM simulations. The results are presented in Fig.~\ref{Q6_survey}. Opinions, again quite divided, were the subject of debate on the last day of the workshop. One of the preliminary proposals that emerged from this discussion is to make the environmental impact of our work more transparent, for example by stating in our publications the amount of greenhouse gases (e.g., in tons of CO$_2$ equivalent) that were emitted for the study. As this carbon footprint can vary by several orders of magnitude from one country to another (depending on the carbon intensity of the electricity network), from one GCM to another, from one parameterization to another, from low to high resolutions, or depending on the number of simulations performed, it is very difficult to know the emissions associated with each study. More transparency on this subject would raise the community's awareness and could ultimately contribute to environmental policy decisions (e.g., at the level of researchers, so that they make the most intelligent use of available resources to avoid waste; at the level of universities, in the choice of computing equipment and the energy source of the cluster; and at the national/international level, to encourage the decarbonization of electricity networks).
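A back-of-the-envelope estimate of the kind that could be disclosed in publications is straightforward: multiply core-hours by an assumed per-core power draw, a data-centre efficiency factor, and the grid carbon intensity. All four inputs in the sketch below are assumptions that vary widely between facilities and countries, as noted above.
\begin{verbatim}
def simulation_co2_kg(core_hours, watts_per_core=10.0, pue=1.5,
                      grid_kgco2_per_kwh=0.4):
    """Rough CO2 estimate for a GCM campaign (all inputs are assumptions).

    core_hours         : total CPU core-hours consumed
    watts_per_core     : average draw per core, incl. memory share [W]
    pue                : data-centre power usage effectiveness
    grid_kgco2_per_kwh : carbon intensity of the local grid
    """
    kwh = core_hours * watts_per_core / 1000.0 * pue
    return kwh * grid_kgco2_per_kwh

# e.g., a hypothetical 500,000 core-hour campaign on a 0.4 kgCO2/kWh grid:
print(simulation_co2_kg(5e5))   # -> 3000 kg, i.e., ~3 tCO2-equivalent
\end{verbatim}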
It was also mentioned that carbon offset strategies could be budgeted during proposal submission. However, the efficiency of carbon offset projects (including tree planting) is highly debated today \citep[e.g.,][]{Gates2021}. Finally, it has to be noted that short and small workshops such as the THAI workshop are very well suited to remote formats, which would help mitigate the carbon footprint that research laboratories generate by flying to meetings. This workshop report therefore recommends that GCM users systematically disclose the amount of CO$_2$ released by running their computer simulations, and possibly consider a carbon mitigation plan. While it was not actively discussed during the THAI workshop, it is important to mention here the question of access to GCM data post-publication. Discussions among co-authors generally agree that GCM data should be made available post-publication, when possible. However, the amount of GCM data can be very large, which may lead to additional fees to store it on disks and/or clouds beyond the limit that is usually allowed for free. It was also noted that it is actually quite rare for data from a published study to be downloaded and used. Therefore, the benefit-to-cost ratio of systematically making GCM data available may not always be favorable. Also, some models are inherently proprietary and serve the community better that way than if they were open source. Indeed, it requires a lot of resources and personnel to keep a large and complex code at the forefront of its field. This is the case, for instance, for the UM, owned by the UK Met Office. The proprietary license, however, does not prevent sharing output data and configuration files, which is the case for the UM's contribution to THAI. \section{Creating a Diverse and Inclusive International Community in the Exoplanet GCM field} \label{sec:div} The workshop also included discussions about taking concrete action to improve aspects of diversity, inclusion, equity, belonging, and justice that will have long-term implications. The workshop organizers decided to include such discussions because of the potential benefits of having a field that is representative of, and open to, the diversity of our society. These issues are inherently cultural in nature; as such, how they are viewed will be a function of the different disciplinary and national cultures engaged in an interdisciplinary and international endeavor such as this workshop. That said, the effects of discrimination are severe and well-documented. A report recently outlined the barriers to access for women to permanent astronomy positions in France \citep{berne2020inequalities}. The American Astronomical Society Task Force on Diversity and Inclusion in Astronomy Graduate Education has published a report discussing strategies to improve diversity and fairness in graduate school education \citep{AASDEI2018}. The US National Academies of Sciences, Engineering, and Medicine published a workshop report on the impacts of racism on Black people in science and engineering \citep{national2020impacts}, a report on the impacts of race and ethnicity on health care \citep{nelson2002unequal}, and one on the prevalence and impacts of sexual harassment across academia \citep{national2018sexual}. They also provided a top-level strategy for ``reducing barriers to scientific excellence" in their Exoplanet Science Strategy.
That report included the finding that ``Development and dissemination of concrete recommendations to improve equity and inclusion and combat discrimination and harassment would be valuable for building the creative, interdisciplinary teams needed to maximize progress in exoplanet science over the coming decades" \citep{WangDEI:2019}. If our field can make and follow such recommendations, it would likely generate improvements to our work, as suggested by other research; for example, increased diversity has been shown to lead to an improvement in the productivity and outputs of groups and organizations \citep{page2008difference}, and cultures of inclusivity bring about an improvement in morale and a decrease in conflict \citep{nishii2013benefits}. The need for inclusivity also extends to academic and disciplinary considerations. This research exists at the overlap between Earth sciences, astronomy, planetary sciences, and heliophysics. Incorporating the perspectives of these different disciplines is critical to success. Similarly, this research community is global in extent, with teams conducting GCM simulations in many countries across multiple continents. Finally, this research faces the same workforce challenges as other work in academia. Based on this research, the workshop organizers believed that growing the field in these ways will increase the variety of perspectives in our work, which will ultimately serve to improve the outputs from the community. They considered ways to ensure the workshop would do this along multiple axes of diversity, including but not limited to disciplines, institutions, genders, races, ethnicities, sexual orientations, disability statuses, cognitive diversity, nationalities, political affiliations, career stages, generations, job ranks, and levels of professional stability. Each of these aspects of diversity will require consideration on its own; in turn, action on any of them will also serve to lessen the negative impacts of discrimination in other areas. The workshop organizers built such considerations into the very structure of the meeting. To ensure accessibility across a global community working in the context of a pandemic, they recorded lectures and made them available for later viewing, and created a Slack space for asynchronous communication. To account for the interdisciplinary scope of the meeting, the organizers posted introductory talks before the start of the meeting, to familiarize everyone with terminology and tools. To prevent harassment, participants agreed to adhere to a code of conduct that was shared on the meeting's home page. And the workshop included discussions of diversity in the field. One difficult issue was to determine how to structure these conversations. Originally, this discussion was scheduled as a ``breakout discussion" run in parallel with scientific/technical breakout discussions. However, some participants suggested we instead hold this as a ``whole group" discussion, so that everyone would be engaged in the conversation, and so that those wanting to work on these efforts did not have to ``trade" discussions of these issues against technical/research discussions they also wanted to engage in. In response to this feedback, we dedicated time for the entire workshop community to discuss these issues, even though that came at the cost of a disruption to the planned schedule and less time for the breakout sessions.
Our discussions on diversity were organized around the idea of appreciative inquiry, where individuals share stories of past successes. In this case, we discussed ``a time when you were part of a diverse team in early career, which really benefited from its diversity." We asked about the environments in which that success was found, to highlight those instances of success. This discussion highlighted a number of areas of past success; we relate some of those examples here. In situations where there was good diversity along one or more axes, it helped participants value other areas of diversity, as it also nurtured a sense of inclusivity in the organization. One participant noted that leadership played a positive role in groups they had previously been a part of, and that good leadership helped the group advance its degree of inclusivity in that community. Other participants discussed the relevance of current bridge initiatives, already in place, that have proven successful in helping bring underrepresented minorities and low-income students into the STEM workforce \citep{Crouse2020}. This further developed conversations about future bridge programs in the making, which we expect to positively impact our community. The highly interdisciplinary nature of our THAI community could directly benefit from adopting similar pipelines and mentoring strategies from bridge programs, which would help ensure a more inclusive and representative community in the following years. Smaller group discussions were noted as helpful, as they gave voice to the perspectives of different backgrounds. In some cases, the diversity in a group provided extrinsic value, such as when a TA (teaching assistant) spoke the same languages as students and helped them learn class material, or when differing perspectives produced better results. There was also discussion of open recruitment for positions, and of selection criteria centered on underlying skills, rather than statistics such as GPA or citations. General strategies that were discussed included training the next generation of role models, being an ally to people from underrepresented groups, and acknowledging both the real progress we have made and the challenges we have yet to address. There is a related issue, raised at our meeting, that our community must also grapple with: that of equity for individuals without tenured positions. This group includes people in non-tenure-track faculty or research roles, as well as tenure-track faculty who have not yet received tenure. The community of non-tenured researchers is growing, both in real terms and as a percentage of our fields. As a result, the discrepancies in salary, financial security, and privileges in the workplace are increasing in their impacts on our ability to do this work \citep{Bourne8647,SSFNRIG2017}. The related stresses impact morale in our field, and reduce the flexibility people have to spend time on these endeavors, which may not be the ones that lead to promotion. Additionally, at some institutions these discrepancies can block access to resources, such as funding for community service work. In that context, a paradoxical situation can arise where the people who have the time to conduct intermodel comparison simulations are the ones who do not have the funding or the professional stability for that activity. Specifically, workshop discussions highlighted that GCMs are very complex tools with generally steep learning curves for building, running, and modifying the codes.
They also converge only after days, weeks, or sometimes months of computation, and can produce gigabytes to terabytes of output to sort through. While Earth climate science departments are familiar with these timescales and expectations, the intersection of climate modeling with the fast-paced and hypercompetitive environment of exoplanet science and astronomy can prove challenging in terms of career advancement metrics. In a very competitive field where scientific productivity as an early-career scientist is crucial, being a GCM modeler may be inhibiting, due to the long timescales needed to produce good and original science. This is especially true if GCM modelers are compared with and evaluated against, for instance, observers, who have a higher rate of publications and discoveries. Like other aspects of diversity and inclusion, this issue's impact can compound with other axes of power and privilege, and can also leave individuals without the energy and career stability needed to address other aspects of diversity. These discussions were short, so the above approaches are a small subset of what is needed to improve the field. However, they provide a starting point for the necessary, sustained discussion on this topic. This will ultimately require thinking vertically across career stages, to develop a pipeline that allows people from any background the opportunity to join and meaningfully contribute to our field. We must then ensure those various backgrounds are included in our intellectual discussions and work, with intentional organization of open and inclusive conversations. We must work to ensure both formal and informal policies in the field are anti-discriminatory in nature. And our institutions need to do better to ensure equity and opportunity for people from all these backgrounds, and for people at different career stages and levels of job security. \section{Conclusions of the Workshop and Perspectives}\label{sec:end} The THAI workshop has allowed the exoplanet GCM community, focused on terrestrial planets, to discuss the role of GCMs in exoplanet characterization. THAI has been used as a vehicle for discussions between the various GCM groups (ExoCAM, LMD-G, ROCKE-3D, UM, THOR, Isca, etc.). From the THAI experiment, it is clear that clouds are the largest source of differences between the models. The average altitude of clouds and their optical thickness at the terminator affect the continuum level of the simulated transmission spectra. Different continuum levels therefore imply different detectability of molecular absorption lines, thereby impacting predictions of the detectability of an atmosphere with future space observatories such as JWST. Three papers are currently in preparation to present the THAI results, and will be included within a focus issue, ``Collection of model papers for GCM, EBM and 1D models applied to THAI", in the Planetary Science Journal (PSJ), alongside this workshop report. The future of exoplanet GCMs will likely require the use of a hierarchical approach (i.e., simulations performed on a local grid in order to derive parameterizations of sub-grid processes to be used in low spatial resolution GCM simulations) and will not necessarily lean toward higher spatial resolutions. In addition, the workshop participants identified clouds/hazes and convection as the first and second most important processes for the field to focus on in the upcoming years.
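The effect of the cloud continuum noted above follows from simple transit geometry: an opaque deck at altitude $z_c$ sets the floor of the spectrum, and molecular features can only protrude a few scale heights above it. The sketch below uses illustrative TRAPPIST-1e-like numbers; they are assumptions for demonstration, not THAI outputs.
\begin{verbatim}
kB = 1.380649e-23   # Boltzmann constant [J/K]

def transit_depth_ppm(Rp, Rs, z):
    """Transit depth for an effective atmospheric altitude z [m]."""
    return 1e6 * ((Rp + z) / Rs)**2

# Illustrative TRAPPIST-1e-like values (assumed):
Rp, Rs = 5.9e6, 8.4e7                 # planet and star radii [m]
T, mu, g = 250.0, 28*1.66e-27, 9.1    # temperature, mean mol. mass, gravity
H = kB * T / (mu * g)                 # scale height, ~8 km here

# A cloud deck at 10 km hides everything forming below it; a feature
# probing 3 scale heights above the deck only adds ~40 ppm here:
clear = transit_depth_ppm(Rp, Rs, 0.0)
deck = transit_depth_ppm(Rp, Rs, 10e3)
feature = transit_depth_ppm(Rp, Rs, 10e3 + 3*H) - deck
print(clear, deck, feature)
\end{verbatim}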
GCMs do not have to be used alone: a scientific approach using a hierarchy of models, such as EBMs, 1D radiative-convective models, and GCMs, is the key to progressing efficiently on predicting observations and interpreting data. However, GCM simulations are computationally expensive and -- in a world where the climate is globally changing -- the CO$_2$ emissions released by heavy computing should be controlled, with strategies to reduce these emissions at a community level. THAI has also demonstrated the utility of intermodel comparison for exoplanet science. To continue this initiative, we have proposed Climates Using Interactive Suites of Intercomparisons Nested for Exoplanet Studies (CUISINES), which will host additional intercomparisons among exoplanet characterization studies in the future. A formal workshop on best practices for such intercomparisons will be organized in Fall 2021 to optimize the collaboration and science returns of CUISINES. If we wish to successfully grow our understanding of the Earth and the worlds beyond our own atmosphere, we need to ensure the GCM community reaches more diverse audiences. We hope that implementing Diversity \& Inclusion initiatives -- such as bridge programs -- will help move the scientific community forward in a way that brings equitable collaborations in the coming years. \acknowledgments The workshop SOC acknowledges funding support from the Nexus for Exoplanet System Science (NExSS). T. Fauchez, R. Kopparapu, S. Domagal-Goldman and M.J. Way acknowledge support from the GSFC Sellers Exoplanet Environments Collaboration (SEEC), which is funded in part by the NASA Planetary Science Division's Internal Scientist Funding Model. M. Turbet received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 724427/FOUR ACES) and from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 832738/ESCAPE. M.T. thanks the Gruber Foundation for its generous support. This work has been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation. M.T. acknowledges the financial support of the SNSF. M.J. Way and L. Sohl acknowledge funding support through NASA's Nexus for Exoplanet System Science (NExSS) via the ROCKE-3D working group. E.T. Wolf acknowledges funding support through NASA's Nexus for Exoplanet System Science (NExSS) via the Virtual Planetary Laboratory and the ROCKE-3D working groups. G. Gilli acknowledges funding by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 796923/Hot-TEA. M. Lef\`evre acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 740963/EXOCONDENSE). French co-authors were granted access to the High-Performance Computing (HPC) resources of Centre Informatique National de l'Enseignement Supérieur (CINES) under the allocations No. A0060110391 and A0080110391 made by Grand Équipement National de Calcul Intensif (GENCI). D.E.S., N.J.M. and I.A.B acknowledge use of the Monsoon system, a collaborative facility supplied under the Joint Weather and Climate Research Programme, a strategic partnership between the Met Office and the Natural Environment Research Council.
Some of this work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. This research also made use of the ISCA High Performance Computing Service at the University of Exeter. We acknowledge support of the Met Office Academic Partnership secondment programme. This work was partly supported by a Science and Technology Facilities Council Consolidated Grant (ST/R000395/1). We would like to thank Dorian S. Abbot and the anonymous reviewer for comments that greatly improved our manuscript. \software{{\sc matplotlib} \citep{Hunter2007}} \clearpage
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:1} The study of the rich correlation structures of high-dimensional random objects is often invoked when learning the unknown functional relationship between an observed random variable and some other parameters that might inform on the properties of a system. A problem in which a vector of system parameters (say $\boldsymbol{\rho}\in{\cal R}\subseteq{\mathbb R}^p$) is related to an observed response variable (say $\boldsymbol{Y}\in{\cal Y}\subseteq{\mathbb R}^d$) is easily visualised by the equation: $\boldsymbol{Y}=\boldsymbol{\xi}(\boldsymbol{\rho})$, where $\boldsymbol{\xi}:{\cal R}\longrightarrow{\cal Y}$. Given training data $\mathbf{D} = \{(\boldsymbol{\rho}_i,\boldsymbol{y}_i)\}_{i=1}^{N_{data}}$, we aim to learn this unknown mapping $\boldsymbol{\xi}(\cdot)$ within the paradigm of \textit{supervised learning}. By ``training data'' we mean here the pairs composed of chosen design points $\boldsymbol{\rho}_i$, and the output $\boldsymbol{y}_i$ that is generated at $\boldsymbol{\rho}_i$; $i=1,\ldots,N_{data}$. Methods to perform supervised learning are extensively covered in the literature \cite{elementML,neal,rasmussen,russelnorvig}. Having learnt $\boldsymbol{\xi}(\cdot)$, one could use this model to predict the value of $\boldsymbol{\rho}$ \cite{ejs} at which the test datum $\boldsymbol{y}_{test}$ on $\boldsymbol{Y}$ is realised -- either in the conventional framework as $\boldsymbol{\rho}=\boldsymbol{\xi}^{-1}(\boldsymbol{Y})\vert_{\boldsymbol{Y}=\boldsymbol{y}_{test}}$, or as the Bayesian equivalent. Such prediction is possible only subsequent to the learning of the functional relation between $\boldsymbol{\rho}$ and $\boldsymbol{Y}$ using training data $\mathbf{D}$. However, there exist physical systems for which only measurements on the observable $\boldsymbol{Y}$ are known, i.e., training data is not available. The disciplines affected by the absence of training data are diverse. In engineering \cite{Sun2011}, anomaly detection is entirely sample-specific. There is no training data that allows for the learning of a functional relationship between anomaly occurrence (parametrised by type and severity of anomaly) and the conditions that the sample is subjected to. Yet we need to predict those anomalies. In finance, such anomalies in stock price trends are again outside the domain of supervised learning, given that the relationship between market conditions and prices has not yet been reliably captured by any ``models''. In neuroscience \cite{ahmad2017}, a series of neurons spike at different amplitudes, and for different time widths, to cause a response (to a stimulus). We can measure the response's strength and the parameters of the firing neurons, but do not know the relation between these variables. Again, in petrophysics, the system property that is the proportion of the different components of a rock (e.g., water, hydrocarbons) affects Nuclear Magnetic Resonance (NMR) measurements from the rock \cite{6,7}. However, this compositional signature cannot be reliably estimated given such data, using available estimation techniques. Quantification of petrological composition using the destructive testing of a rock is too exclusive, and too expensive, to allow for a sample that is large enough to form a meaningful training data set.
Also, the resulting training data will in general be unrepresentative of any new rock, since the relationship between the (compositional) system property and the (NMR) observable is highly rock-specific, being driven by geological influences on the well that the given rock is obtained from. Therefore any training data would need to be substantially diverse, and as stated before, this is unachievable in general. Equally, this dependence on the latent geological influence annuls the possibility of using numerical simulations to generate NMR data, given design compositional information. Thus, the generation of training data is disallowed in general. In this work, we advance the learning of the sought functional relation between an observable and a system parameter vector in such a challenging (training-data-absent) situation; this could, in principle, be undertaken as an exercise in {\it unsupervised learning}, though opting for the more robust supervised learning route is still possible, as long as the missing training data is generated, i.e., we are able to generate the $\boldsymbol{\rho}_i$ at which the measured (test) datum $\boldsymbol{y}_i$ on $\boldsymbol{Y}$ is available, $\forall i\in\{1,\ldots,N_{data}\}$. Our new method for accomplishing this is to invoke a system property that helps link $\boldsymbol{\rho}$ with $\boldsymbol{Y}$, and this is possible in physical systems for which we have -- at least partial -- observed information. To clarify, what we advance in the face of the absent training data is the pursuit of the probability density function of the observable $\boldsymbol{Y}$, on which data is available, and we employ this to learn the system parameter vector $\boldsymbol{\rho}$. We undertake such an exercise in a Bayesian framework, in which we seek the posterior of the \textit{pdf} of the observables, and of the system parameters, given the available data. The sought parameter vector could inform on the behaviour, or structure, of the system (e.g., it could be the vectorised version of the density function of all gravitating matter in a distant galaxy). The state space \textit{pdf} establishes the link between this unknown vector and the measurements available on the observable (which may comprise complete or incomplete information on the state space variable). We consider dynamical systems, such that the system at hand is governed by a kinetic equation \cite{kinetic}; we treat the unknown system parameter vector as the stationary parameter in the model of this dynamical system. In the novel Bayesian learning method that we introduce, this parameter is embedded within the support of the state space \textit{pdf}. We describe the general model in Section~\ref{sec:2}, which is subsequently applied to an astronomical application discussed in Section~\ref{sec:3}. Inference is discussed in Section~\ref{sec:4}, where inference is made on the state space {\it pdf} and the sought system parameters, given data that comprise measurements of the observable, using Metropolis-within-Gibbs. Results are presented in Section~\ref{sec:5}, and the paper is rounded up with a concluding section (Section~\ref{sec:6}). \section{General Methodology} \label{sec:2} \noindent We model the system as a dynamical one, and define the state space variable as a $p$-dimensional vector $\boldsymbol{S}\in{\cal S}\subseteq{\mathbb R}^p$.
Let the observable be $\boldsymbol{Y}\in{\cal Y}\subseteq{\mathbb R}^d;\:d<p$, such that only some ($d$) of the $p$ different components of the state space vector $\boldsymbol{S}$ can be observed. In light of this situation, marked by incomplete information, we need to review our earlier declaration of interest in the probability density function of the full state space vector. Indeed, we aim to learn the \textit{pdf} of the state space variable $\boldsymbol{S}$, and yet have measured information on only $\boldsymbol{Y}$, i.e. on only $d$ of the $p$ components of $\boldsymbol{S}$. Our data is then one set of measurements of the observable $\boldsymbol{Y}$, expressed as $\boldsymbol{D}=\{\boldsymbol{y}^{(k)}\}_{k=1}^{N_{data}}$. If the density of ${\cal S}$ is to be learnt given data on $\boldsymbol{Y}$, such incompleteness in the measured information will have to be compensated for by invoking some independent information; here, that independent information is on the symmetry of ${\cal S}$. It follows that the unobserved components of $\boldsymbol{S}$ will have to be integrated out of the state space \textit{pdf}, in order to compare against data that comprise measurements of the observables. This state space \textit{pdf}, with the unobserved variables integrated out, is equivalently projected onto the space ${\cal Y}$ of observables, and we therefore refer to it as the \textit{projected state space pdf}. The likelihood of the model parameters, given the data, is simply the product of the projected state space \textit{pdf} over all the data points. But the unknown model parameters have not yet appeared in our expression of the likelihood. The next step is then to find a way of embedding the sought system parameters in the support of the projected state space \textit{pdf}. This can be achieved by assuming that our dynamical system is stationary, so that its state space \textit{pdf} does not depend on time-dependent variables; in other words, the rate of change of the state space \textit{pdf} is $0$. This allows us to express the \textit{pdf} as dependent on the state space vector $\boldsymbol{S}$ only via such functions of (some or all amongst) $S_1,\ldots, S_p$ as do not change with time; in fact, the converse of this statement is also true. This is a standard result, often referred to as Jeans Theorem \cite{BT, Jeans}. The model parameters that we seek can be recast as related to such identified time-independent functions of all/some state space coordinates of motion. Thus, by expressing the state space \textit{pdf} as a function of appropriate constants of motion, we can embed the system parameters into the support of the sought \textit{pdf}. As stated above, this \textit{pdf} will then need to be projected onto the space of observables ${\cal Y}$, and we will convolve such a projected \textit{pdf} with the error density, at every choice of the model parameters. Then, assuming the data to be $iid$, the product of such a convolution over the whole data set finally defines our likelihood. Using this likelihood, along with appropriate priors, we define the posterior probability density of the model parameters and the state space \textit{pdf}, given the data $\boldsymbol{D}$. Subsequently, we generate posterior samples using a Metropolis-within-Gibbs scheme. We recall that in the absence of training data on a pair of r.v.s, we cannot learn the correlation structure of the functional relationship between these variables.
In such situations, instead of the full function, we can only learn the vectorised version of the sought function. In other words, the relevant interval of the domain of the function is discretised into bins, the value of the function is held constant over any such bin, and we learn the functional value over each bin. \section{Astrophysics Application} \label{sec:3} \noindent Our astrophysics application is motivated by the wish to learn the contribution of dark matter to the density function of all gravitating mass in a distant galaxy. While information on light-emitting matter is available, it is more challenging to model the effects of dark matter since, by definition, one cannot observe such matter (as it does not emit/reflect light of any colour). However, physical phenomena such as the distortion of the path of light by gravitational matter acting as gravitational lenses; the temperature distribution of hot gas emanating from a galaxy; and the motions of stars or other galactic particles that are permitted in spite of the attractive gravitational pull of the surrounding galactic matter, allow us to confirm that non-observable, dark matter contributes to the overall gravitational mass density of the galaxy. In fact, astrophysical theories suggest that, in the older galaxies that are of interest to us here, dark matter is the major contributor to the galactic mass, compared to the minor fraction contributed by luminous galactic matter \cite{veselina_kalinova}. We can compute the proportion of this contribution by subtracting the density of the luminous matter from the overall density. It is then necessary to learn the gravitational mass density of the whole system in order to learn the density of dark matter. We begin by considering the galaxy at hand to be a stationary dynamical system, i.e. the distribution of the state space variable does not depend on time. Let $\boldsymbol{S} = (X_1,X_2,X_3,V_1,V_2,V_3)^T\in{\cal S}\subseteq{\mathbb{R}^6}$ define the state space variable of a galactic particle, where $\boldsymbol{X}=(X_1,X_2,X_3)^T$ is its 3-dimensional location vector and $\boldsymbol{V}=(V_1,V_2,V_3)^T$ is the 3-dimensional velocity vector of the galactic particle. Our data consist of measurements of the one observable velocity coordinate $V_3$, and the two observable spatial coordinates $X_1, X_2$, of $N_{data}$ galactic particles (e.g. stars). That is, for each galactic particle, we have measurements of $\boldsymbol{Y} = (X_1,X_2,V_3)^T \in{\cal Y}\subseteq{\mathbb{R}^3}$. For $N_{data}$ observations, our data is thus $\boldsymbol{D}=\{\boldsymbol{y}^{(k)}\}_{k=1}^{N_{data}}$. The system function that we are interested in learning here is the density function $\rho(X_1, X_2, X_3)$ of the gravitational mass of all matter in the considered galaxy, where we assume that this gravitational mass density $\rho(\cdot)$ is a function of the spatial coordinates $\boldsymbol{X}$ only. This system function does indeed inform on the structure of the galactic system -- for it tells us about the distribution of matter in the galaxy; it also dictates the behaviour of particles inside the galaxy, since the gravitational mass density is deterministically related to the gravitational potential $\Phi(X_1, X_2, X_3)$ via the Poisson equation ($\nabla^2 \Phi(X_1, X_2, X_3) = -4\pi G \rho(X_1, X_2, X_3)$, where $G$ is the known Universal Gravitational constant and $\nabla^2$ is the Laplacian operator), which is one of the fundamental equations of Physics \cite{goldstein}.
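For later reference, note that under the spherical symmetry invoked in the next paragraph (and keeping the sign convention of the Poisson equation as written above), the Poisson equation reduces to an ordinary differential equation in $r := \parallel\boldsymbol{X}\parallel$:
$$\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{d\Phi}{dr}\right) = -4\pi G\,\rho(r),$$
so that a piecewise-constant (vectorised) $\rho(\cdot)$ yields the corresponding potential $\Phi(\cdot)$ in closed form, bin by bin.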
The potential of a system dictates the system dynamics, along with the state space distribution. Here, we assume that the state space density of this dynamical system does not vary with time, i.e. $\displaystyle{\frac{df\left[X_1(t),X_2(t),X_3(t),V_1(t),V_2(t),V_3(t)\right]}{dt}} = 0$. This follows from the consideration that within a typical galaxy, collisions between galactic particles are extremely rare \cite{BT}. We thus make the assumption of a collisionless system evolving in time according to the \textit{Collisionless Boltzmann Equation} (CBE) \cite{BT,CBE}. As motivated above, this allows us to express the state space \textit{pdf} as dependent on those functions of $X_1,X_2,X_3, V_1,V_2,V_3$ that remain invariant with time along any trajectory in the state space $\mathcal{S}$; such time-invariant constants of motion include energy, momentum, etc. It is a standard result that the constant of motion that the state space \textit{pdf} has to depend on is the energy $E(X_1,X_2,X_3,\parallel\boldsymbol{V}\parallel)$ of a galactic particle \cite{binney82,contop63}, where $\parallel\cdot\parallel$ represents the Euclidean norm of a vector. Here, the energy is given partly by the kinetic energy, which is proportional to $\parallel\boldsymbol{V}\parallel^2$, and partly by the potential energy, which, by our assumption, is independent of velocities. Secondly, given that the state space is $6$-dimensional, the number of constants of motion is $\leq 5$, in order to let the galactic particle enjoy at least 1 degree of freedom, i.e. not be fixed in state space \cite{contop63}. We ease our analysis by assuming that the state space \textit{pdf} is a function of energy only. This can be rendered equivalent to designating the symmetry of isotropy to the state space ${\cal S}$, where isotropy implies invariance to rotations in this space, i.e. the state space \textit{pdf} is assumed to be such a function of $\boldsymbol{X}$ and $\boldsymbol{V}$ that all orthogonal transformations of $\boldsymbol{X}$ and $\boldsymbol{V}$ preserve the state space \textit{pdf}. The simple way to achieve the equivalence between an isotropic state space \textit{pdf} and the lone dependence of the \textit{pdf} on the energy $E$ is to ensure that the gravitational mass density (and therefore the gravitational potential) is the same at all points at a given Euclidean distance from the galactic centre, i.e. the distribution of gravitational mass abides by spherical symmetry, s.t. $\rho(\cdot)$ (and therefore $\Phi(\cdot)$) depends on $X_1,X_2,X_3$ via the Euclidean norm $\parallel\boldsymbol{X}\parallel$ of the location vector $\boldsymbol{X}$ of a particle. Then the energy $E$ is given as the sum of the $\parallel\boldsymbol{V}\parallel^2$-dependent kinetic energy and the $\parallel\boldsymbol{X}\parallel$-dependent potential energy. A spherical mass distribution is not a bad assumption in the central parts of the ``elliptical'' galaxies that are of interest to us, even though these have a globally triaxial geometry. To summarise, the state space \textit{pdf} is written as $f(E)$, and we embed $\rho(\cdot)$ into the support of this state space \textit{pdf} $f(E)$ by recalling that the energy $E$ is partly the gravitational potential energy $\Phi(\cdot)$, which is deterministically related to the gravitational mass density $\rho(\cdot)$ through the Poisson equation. As there is no training data available to learn the correlation structure of the sought functions $\rho(\boldsymbol{X})$ and $f(E)$, we can only learn values of these functions at specified points in their domains, i.e.
learn their vectorised forms $\boldsymbol{\rho}$ and $\boldsymbol{f}$ respectively, where $\boldsymbol{\rho}:=(\rho_1,...,\rho_{N_X})^T$, with $\rho_i=\rho(\boldsymbol{x})$ for $\boldsymbol{x}\in[\boldsymbol{x}_{i-1}, \boldsymbol{x}_i]; i=1,\ldots, N_X$. The discretised form of $f(E)$ is similarly defined, after discretising the relevant $E$-values (non-positive, to indicate that the considered galactic particles are gravitationally bound to the galaxy) into $N_E$ $E$-bins. Then, in terms of these vectorised versions of the state space \textit{pdf} and the gravitational mass density function, the likelihood of the unknown parameters $\rho_1,\ldots,\rho_{N_X},f_1,\ldots,f_{N_E}$, given data on the observable $\boldsymbol{Y}$, is: \begin{equation} \ell\left(\boldsymbol{\rho},\boldsymbol{f}|\{\boldsymbol{y}^{(k)}\}_{k=1}^{N_{data}} \right) = \prod_{k=1}^{N_{data}} \nu(\boldsymbol{y}^{(k)},\boldsymbol{\rho},\boldsymbol{f}), \label{eq:finalL} \end{equation} where $\nu(\cdot)$ is the projected state space \textit{pdf}. We also require that $\rho_1\geq 0,\ldots,\rho_{N_X}\geq 0$, $f_1\geq 0,\ldots,f_{N_E} \geq 0$, and that $\rho_i\geq \rho_{i+1},\:i=1,\ldots,N_X-1$. The latter constraint is motivated by how the mass in a gravitating system (such as a galaxy) is distributed: given that gravity is an attractive force, the stronger pull on matter closer to the centre of the galaxy implies that the gravitational mass density should not increase as we move away from the centre of the system. These constraints are imposed via the inference that we employ. \section{Inference} \label{sec:4} \noindent Inference on the unknown parameters -- the components of $\boldsymbol{\rho}$ and $\boldsymbol{f}$ -- is undertaken using Metropolis-within-Gibbs. In the first block update during any iteration, the $\rho_1,\ldots,\rho_{N_X}$ parameters are updated; subsequently, the $f_1,\ldots, f_{N_E}$ parameters are updated in the second block, at the updated $\rho$-parameters, given the data $\boldsymbol{D}$ that comprises $N_{data}$ measurements of the observed state space variables $X_1,X_2,V_3$, i.e. the components of the observable vector $\boldsymbol{Y}$. The imposition of the monotonicity constraint on the $\rho$-parameters, s.t. $\rho_i \geq \rho_{i+1}$, $i=1,\ldots, N_{X}-1$, renders the inference interesting. We propose $\rho_i$ from a Truncated Normal proposal density that is left-truncated at $\rho_{i+1}$, $\forall i=1,\ldots,N_X-1$, and propose $\rho_{N_X}$ from a Truncated Normal that is left-truncated at 0. The mean of the proposal density is the current value of the parameter, and the variance is experimentally chosen, as distinct for each $i\in\{1,\ldots,N_X\}$. Such a proposal density helps maintain the non-increasing nature of the $\rho_i$-parameters with increasing $i$; at the same time, the non-negativity of these parameters is also maintained. We choose arbitrary seeds for $\rho_1,\ldots,\rho_{N_X}$, and, using these as the means, a Gaussian prior is imposed on each parameter. The variance of the prior densities is kept quite large, and a demonstration of the lack of sensitivity to the prior choices, as well as to the seeds, is undertaken. \vspace{-.4cm} \begin{figure}[!h] \centering \includegraphics[height=6.5cm]{HPDs} \caption{Results from the MCMC scheme, showing the $95\%$ HPDs for all the parameters to be learnt, for both the PNe (top row) and GC (bottom row) data. Modes are shown as red dots. \textbf{Top Row:} HPDs on the $\boldsymbol{\rho}$ (left) and the $\boldsymbol{f}$ (right) parameters for the PNe data.
\textbf{Bottom Row:} HPDs on the $\boldsymbol{\rho}$ (left) and the $\boldsymbol{f}$ (right) parameters for the GC data. } \label{fig:HPDs} \end{figure} \vspace{-.4cm} As for the components of the vectorised state space \textit{pdf}, there is no correlation information to be exploited in this case, unlike in the case of the components of the vectorised gravitational mass density function. We propose $f_j$ from a Truncated Normal (to maintain non-negativity), where the mean of this proposal density is the current value of the parameter and the variance is experimentally chosen. Loose Gaussian priors are imposed, while the same seed value is used $\forall j\in\{1,\ldots,N_{E}\}$. An important consideration in our work is the choice of $N_X$ and $N_{E}$. We could have treated these as unknowns and attempted to learn them from the data; however, that would imply that the number of unknowns varies from one iteration to another, and we desired to avoid such a complication, especially since the data strongly suggest values of $N_X$ and $N_E$. We choose $N_X$ by binning the range of the $R_p:=\sqrt{X_1^2+X_2^2}$ values in the data $\boldsymbol{D}$, s.t. each resulting $R_p$-bin includes at least one observed value of $V_3$, while, at the same time, the number of $R_p$-bins is maximised. Again, we use the available data $\boldsymbol{D}$ to compute empirical values of the energy $E$, where an arbitrarily scaled histogram of the observed $R_p$ is used to mimic the vectorised gravitational mass density function, which is then employed to compute the empirical estimate of the vectorised gravitational potential function that contributes to the $E$ values. We admit the maximal number of $E$-bins over the range of the empirically computed values of $E$, s.t. each such $E$-bin contains at least one datum in $\boldsymbol{D}$. \vspace{-.5cm} \section{Results} \label{sec:5} \noindent We have input data on the locations and velocities of 2 kinds of galactic particles (called ``Globular Clusters'' and ``Planetary Nebulae'' -- respectively abbreviated as GC and PNe), available for the real galaxy NGC4494. The GC data comprise $114$ measurements of $\boldsymbol{Y} = (X_1,X_2,V_3)^T$ for the GCs in NGC4494 \cite{GC}. Our second data set (the PNe data) comprises $255$ measurements of the PNe \cite{pne}. The learnt $95\%$ HPDs for all parameters, given both the PNe (top row) and GC (bottom row) data, are shown in Figure \ref{fig:HPDs}. Significant inconsistencies between the learnt gravitational mass density parameters can suggest interesting dynamics, such as a splitting of the galactic state space into multiple, non-communicating sub-spaces \cite{jasa}; for this galaxy, however, it is noted that such parameters learnt from the 2 datasets concur within the learnt HPDs. \section{Conclusions} \label{sec:6} \noindent An astronomical implication of our work is that $\rho_1$, learnt from either dataset, suggests a very high gravitational mass density in the innermost $R_p$-bin ($\approx 1.6$~kpc), implying a gravitational mass $\gtrsim 10^9$ times the mass of the Sun enclosed within this innermost radial bin. This result alone does not contradict the suggestion that NGC4494 harbours a central supermassive black hole (SMBH) of mass $\sim (2.69\pm 2.04)\times 10^7$ solar masses \cite{blackhole}.
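As a rough arithmetic check, assuming (purely for illustration) a uniform density across the innermost bin, the enclosed mass is $M_1 \approx \frac{4}{3}\pi r_1^3\,\rho_1$; with $r_1 \approx 1.6$~kpc, an enclosed mass $M_1 \gtrsim 10^9$ solar masses then corresponds to $\rho_1 \gtrsim 6\times 10^{7}$ solar masses~kpc$^{-3}$.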
Very interestingly, our results indicate that, for both the GCs and the PNe, most particles lie in the intermediate range of energy values; this is also borne out by the shape of the histogram of the empirically computed energy using either dataset, where this empirical computation of the $E$ values is discussed in the last paragraph of Section~\ref{sec:4}. However, owing to its intense, radially inward gravitational attraction, a central SMBH is expected to render the potential energy (and therefore the total energy $E$) of the particles closer to the galactic centre much more negative than that of the particles further away, while also rendering the number (density) of particles sharply monotonically decreasing with radius away from the centre. This is expected to render the energy distribution monotonically decreasing as we move towards more positive $E$ values -- in contradiction to the non-monotonic trend that we note. So while our results do not contradict the report of a very large value of the mass enclosed within the inner parts of NGC4494, the interpretation of that mass as an SMBH does not follow from our learning of the state space \textit{pdf}. The learning of the gravitational mass density function and the state space \textit{pdf} -- as well as that of the relation $\boldsymbol{\xi}(\cdot)$ between the observable state space coordinates and the system function/vector -- can be undertaken after generating the training dataset relevant to the functional learning problem at hand. Applications in petrophysics and finance are also planned. \renewcommand\baselinestretch{1.} \small \bibliographystyle{apalike}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}\label{sec:1} Having accurate and detailed information on social and economic conditions, summarised by appropriate indicators, is imperative for the efficient implementation of policies. The term \textit{detailed} is used to signify information that extends beyond aggregate levels into highly disaggregated geographical and other domains (e.g. demographic groups). The term \textit{accurate} refers to information that is estimated with an appropriate level of precision and is comparable over space and time. Simply analysing data from national sample surveys is not enough to achieve the dual target of accurate and detailed information, mainly because sample sizes shrink as the level of required detail increases. The achievement of this dual target demands appropriate model-based methodology, collectively referred to as Small Area Estimation (SAE). SAE-methods can be broadly divided into two classes: first, area-level models \citep{Fay_Heriot1979} assume that only aggregated data for the survey and for the auxiliary information are available. Second, unit-level models \citep{Battese_etal1988} - further labelled as BHF - require access to the survey and to the auxiliary information on the micro-level. A versatile extension of the BHF model is the EBP-approach by \citet{Molina_rao2010}. The EBP is capable of estimating area-level means as well as other linear and non-linear indicators. Both classes (area-level and unit-level models) are predominantly regression-based models, where the hierarchical structure of observations is modelled by random effects. These linear mixed models (LMM) assume normality of the random effects and error terms. Focusing on social and economic inequality data, the required assumptions for LMMs hardly meet empirical evidence. \citet{JiangRao2020} remind us that optimality results and predictive performance in model-based SAE are inevitably connected to the validity of the model assumptions. Without theoretical and practical considerations regarding improperly met assumptions, estimates are potentially biased and mean squared error (MSE) estimates unreliable. One strategy to prevent model-failure is to assure normality by transforming the dependent variable \citep{sugasawa2017transforming,Rojas_etal2019}. For instance, \cite{Rojas_etal2019} generalize the EBP with a data-driven transformation of the dependent variable, such that the normality assumptions can be met in the transformed setting. Further details on how to obtain the most likely transformation parameter that improves the performance of unit-level models are available in \cite{Rojas_etal2019} and \cite{sugasawa2019adaptively}, or from a more applied perspective in \cite{Tzavidis_etal2018}. Apart from transformation strategies, another alternative is the use of models with less restrictive (parametric) assumptions to avoid model-failure. For instance, \cite{DialloRao2018} and \cite{Graf_etal2019} formulate the EBP under more flexible distributional assumptions. \textcolor{black}{Alternatively, \citet{Cha06} propose an approach for estimating area-level means based on M-quantile models, which are a robust method avoiding the distributional assumptions of LMMs, including the formal specification of area-level random effects.
\citet{Tzavidis_etal2010} and \citet{Marchetti_Tzavidis2021} extended this approach to allow for the estimation of more complex statistics, like quantiles of area-specific distribution functions and non-linear indicators.} Semi- or non-parametric approaches for the estimation of area-level means were investigated, among others, by \citet{Opsomeretal2008}, who use penalized splines regression, treating the coefficients of the spline components as additional random effects within the LMM setting. A distinct methodological option for avoiding the parametric assumptions of LMMs is the class of machine learning methods. These methods are not limited to parametric models and `learn' predictive relations from data, including higher-order interactions between covariates, without explicit model assumptions \citep{Hastie_etal2009, Varian2014}. Among the broad class of machine learning methods, we focus on tree-based models, and particularly on random forests \citep{Breiman2001}, because they exhibit excellent predictive performance in the presence of outliers and implicitly solve problems of model-selection \citep{Biau_Scornet2016}. In general, the predictive perspective of (tree-based) machine learning methods transfers straightforwardly to the methodology of unit-level SAE-models: survey data is used to construct a model with predictive covariates. Subsequently, auxiliary information from a supplementary data source (usually census, register or administrative data) is utilized to obtain predictions over sampled and non-sampled areas. From a machine learning perspective, the survey data serves as an implicit training set to construct a proper model, while the supplementary data is used to predict the final indicators. Nevertheless, \citet{JiangRao2020} claim that results from machine learning methods in SAE are harder to interpret and justify for SAE-practitioners, compared to LMM-alternatives. We aim to fill this gap by providing a consistent framework enabling a coherent use of tree-based machine learning methods in SAE. In particular, we \textcolor{black}{incorporate random forests within the methodological tradition of SAE by proposing a non-linear, data-driven, and semi-parametric alternative for the estimation of area-level means, using Mixed Effects Random Forests (MERF) \citep{Hajjem2014}. We focus on the construction of area-level mean-estimates using MERFs for sampled and out-of-sample domains. Our proposed model-based estimator consists of a composite model with a structural component, accounting for the hierarchical dependencies of survey data with random effects, and a non-parametric random forest, which models the fixed effects. In contrast to existing SAE-methods, our proposed approach assists SAE-practitioners through automated model-selection. We highlight the strengths and weaknesses of random forests in the context of SAE, in comparison to existing (or `traditional') SAE-methods, using design- and model-based simulations. A distinct merit of this paper is the provision of a reliable bootstrap-scheme for determining the uncertainty of area-level mean-estimates. Thus, this paper aims to contribute to the trend of diversifying the model-toolbox for SAE-practitioners and researchers, while simultaneously respecting the methodological and structural nature of SAE.} The general idea of tree-based methods in SAE is not entirely new.
\citet{Anderson_etal2014} use district-level data from Peru to juxtapose the performance of LMM-based and tree-based methods for estimating population densities. \citet{Bilton_etal2017} use classification trees for categorical variables to incorporate auxiliary information from administrative data into survey data on household-poverty in Nepal. For a continuous variable, \citet{DeMolinerGoga2018} estimate mean electricity consumption curves for sub-populations of households in France by using methods of LMMs and regression-based trees. \citet{MoConvilleetal2019} propose a regression-tree estimator for finite-population totals, which can be viewed as an automatically-selected post-stratification estimator. \citet{Dagdougetal2020} analyse theoretical properties of random forests in the context of complex survey data. \citet{Mendez2008} provides theoretical and empirical considerations for using random forests in the context of SAE and compares their performance with `traditional' unit-level LMMs for the estimation of area-level means. Although we share the general idea of \citet{Mendez2008}, the approach of this paper differs in several ways: first of all, we leave the random forest algorithm \citep{Breiman2001} unchanged and explicitly estimate random effects to account for the hierarchical structure of the data. Secondly, the proposed framework of this paper is more flexible and potentially extendable to model more complex hierarchical dependency structures, as well as spatial and temporal correlations. Additionally, the extension to other machine learning methods is possible, such as support vector machines or gradient-boosted trees. The paper is organized as follows: Section \ref{sec:2} provides a methodological introduction to random forests and introduces MERFs, based on \citet{Hajjem2014}, as a method that effectively amalgamates random forests with the possibility to account for hierarchical dependencies of unit-level observations. Additionally, we motivate a general unit-level mixed model, treating LMMs in SAE as special cases. In Section \ref{sec:2.3}, we discuss the construction of area-level mean-estimates. Random forests promote the flexibility of predictive models, but their lack of distributional assumptions complicates inference. As a result, Section \ref{sec:3} proposes a non-parametric bootstrap-scheme for the estimation of the area-level MSE. In Section \ref{sec:4}, we use model-based simulations under complex settings to extensively discuss and compare the performance of the proposed method for point- and MSE-estimates. We claim MERFs to be a valid alternative to existing methods for the estimation of SAE-means. In Section \ref{sec:5}, we use household income data of the Mexican state Nuevo León to estimate area-level averages and corresponding uncertainty estimates. We highlight the modelling and robustness properties of our proposed methods. Section \ref{sec:5.3} proceeds with a design-based simulation, which assesses the quality of the results in the application of Section \ref{sec:5.2}. Furthermore, the design-based simulation contributes to a genuine demonstration of the properties and advantages of MERFs in the context of SAE. Section \ref{sec:6} concludes and motivates further research. \section{Theory and method}\label{sec:2} In this section, we propose a flexible, data-driven approach using random forests for the estimation of area-level means in the presence of unit-level survey data. The method requires a joint understanding of tree-based modelling techniques and concepts of SAE.
We review the basic theory of random forests and discuss modifications to ensure their applicability to hierarchical data, and subsequently to applications of SAE. \subsection{Review of random forests}\label{sec:2.1} Random forests combine individual decision trees \citep{Breiman_etal1984} to improve their joint predictive power, while simultaneously reducing their prediction variance \citep{Biau_Scornet2016, Breiman2001}. \citet{Breiman2001} extends his idea of bagging \citep{Breiman1996bagging} - which combines predictions from single trees through a bootstrap and aggregation procedure - to random forests, which apply bootstrap aggregation to decorrelated trees. Note that the two tuning parameters of random forests are the number of trees (controlling the number of bootstrap replications) and the number of variables to be selected as candidates for each split (controlling the degree of decorrelation). Because the forest is a combination of decorrelated trees, each aiming to minimize the prediction MSE, the optimal estimator for the random forest regression function $f()$ also minimizes the point-wise MSE. The minimizer under squared error loss in the regression context is given by the conditional mean of the target variable $y$ given the data \citep{Efron_Hastie2016, Wager_Athey2018}. Random forests are appealing for their lack of assumptions such as linearity or a distributional specification of the model errors; however, observations are assumed to be independent. Applications of SAE are characterized by the use of hierarchical data, and ignoring the correlation between observations generally results in inferior point-predictions and inferences. LMMs capture the dependencies between observations by random effects, while effects between covariates are modelled by linear fixed effects, resulting in an additive model of both terms. In the context of tree-based methods, \citet{Sela_Simonoff2012} propose a semi-parametric mixed model consisting of a random effects part and a fixed effects non-parametric tree-model. \citet{Hajjem_etal2011} propose a similar approach under the label of mixed effects regression trees (MERT). As the superior performance of random forests over regression trees transfers to dependent data, \citet{Hajjem2014} replace the fixed effects part in MERTs by a random forest, leading to mixed effects random forests (MERF). We scrutinize this approach and, in the next subsection, propose a general semi-parametric unit-level mixed model combining the flexibility of tree-based models with the structural advantages of linear mixed models. \subsection{Mixed effects random forests}\label{sec:2.2} We assume a finite population which is divided into $D$ disjunct areas $U_i$, with population sizes $N_i$, where $i = 1,...,D$ specifies the areas and $N = \sum_{i=1}^{D} N_i$ defines the population size. We assume a sample from this population, where the number of sampled observations in area $i$ is given by $n_i$ and the total sample size is denoted by $n = \sum_{i=1}^{D} n_i$. In each sampled area we obtain individual observations indexed by $j = 1,...,n_i$. We define the metric target variable for area $i$ as the $n_i \times 1$ vector of individual observations $y_i = [y_{i1},..., y_{in_i}]'$. Covariates are captured in the $n_i \times p$ matrix $X_i = [x_{i1},..., x_{in_i}]'$, where $p$ defines the number of covariates. $Z_i = [z_{i1},...,z_{in_i}]'$ defines the $n_i \times q$ matrix of area-specific random effect specifiers, where $q$ describes the dimension of the random effects.
$v_i = [v_{i1},...,v_{iq}]'$ is the $q \times 1$ vector of random effects for area $i$, and $\epsilon_i = [\epsilon_{i1},...,\epsilon_{in_i}]'$ is the $n_i \times 1$ vector of individual error terms. Observations between areas are assumed to be independent, and $v_i$ and $\epsilon_i$ are mutually independently normally distributed, with the same variance-covariance matrix $H_i$ for the random effects of each area $i$, and $R_i$ for the individual errors. A joint notation for all $D$ areas is as follows: \begin{align*} y = col_{1\leq i\leq D}(y_i) = (y_1',...,y_D')', \quad X = col_{1\leq i\leq D}(X_i), \\ Z = diag_{1\leq i\leq D}(Z_i), \quad v = col_{1\leq i\leq D}(v_i), \quad \epsilon = col_{1\leq i\leq D}(\epsilon_i), \\ R = diag_{1\leq i\leq D}(R_i), \quad H = diag_{1\leq i\leq D}(H_i). \end{align*} The goal is to identify a relation $f()$ between the covariates $X$ and the target variable $y$, in order to predict values for non-sampled observations, utilizing available supplementary covariates from census or register information across areas. We state a model consisting of two major parts: a fixed part $f(X)$ and a linear part $Zv$ capturing dependencies by random effects. In the following, $f()$ can be any parametric or non-parametric function that expresses the conditional mean of the target variable $y$ given the covariates $X$: \begin{equation}\label{mod1} y = f(X)+Z v + \epsilon, \end{equation} where $$ \epsilon \sim N(0,R) \quad \text{and} \quad v \sim N(0,H). $$ Note that for each area $i$ the following model holds: \begin{equation} y_i = f(X_i)+Z_i v_i + \epsilon_i. \end{equation} The covariance matrix of the observations $y$ is given by the block-diagonal matrix $Cov(y) = V = diag_{1\leq i\leq D}(V_i)$, where $V_i = Z_i H_i Z_i' +R_i$. We introduce model (\ref{mod1}) in general terms to potentially allow for the modelling of complex covariance and dependency structures. However, for the rest of the paper we assume that correlation arises only due to between-area variation, i.e. $R_i = \sigma_{\epsilon}^2 I_{n_i}$ for all areas $i$. Note that the already mentioned LMM proposed by \cite{Battese_etal1988} for estimating area-level means results as a special case of (\ref{mod1}) by setting $f()$ to be the linear model $f(X) = X\beta$, with regression parameters $\beta = [\beta_1,...,\beta_p]'$. Defining $f()$ as a random forest results in the MERF-approach proposed by \citet{Hajjem2014}, which is the preferred specification throughout the rest of the paper. Before we continue, we want to clarify the consequences of the distributional assumptions in (\ref{mod1}), which mainly address the linear part of the model. The unit-level errors are assumed to follow $\epsilon \sim N(0,R)$. However, the assumption of normality does not affect the properties of the fixed part $f(X)$, and we do not require residuals to be normally distributed for the application of our proposed method. Nevertheless, for the random components part, we require a proper likelihood function to ensure that the adapted expectation-maximization (EM) algorithm (see below for further details) for the parameter estimates converges towards a local maximum within the parameter space. A normality-based likelihood function is exploited, as it has two important properties: firstly, it facilitates the estimation of the random effects due to the existence of a closed-form solution of the integral of the Gaussian likelihood function. Secondly, the maximum likelihood estimate for the variance of the unit-level errors is given by the mean of the unit-level residual sum of squares.
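To make the notation concrete, the following minimal R sketch simulates data from model (\ref{mod1}) with area-level random intercepts ($q=1$, $Z_i = 1_{n_i}$); the chosen values loosely follow the \textit{Normal} scenario of the simulation study in Section \ref{sec:4}, and all object names are purely illustrative.
\begin{verbatim}
set.seed(1)
D    <- 50; n_i <- rep(25, D)                 # D areas, n_i units each
area <- factor(rep(seq_len(D), times = n_i))  # area indicator (the Z-part)
n    <- length(area)
x1   <- rnorm(n, 0, 3); x2 <- rnorm(n, 0, 3)  # unit-level covariates
v    <- rnorm(D, 0, 500)                      # v_i ~ N(0, sigma_v^2)
eps  <- rnorm(n, 0, 1000)                     # eps ~ N(0, sigma_eps^2)
f    <- 5000 - 500 * x1 - 500 * x2            # a fixed part f(X), here linear
y    <- f + v[area] + eps                     # y = f(X) + Zv + eps
\end{verbatim}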
The estimation of the random effects could also be done in a non-parametric way, by using discrete mixtures \citep{Marino2016, Marino2019}; however, the modification towards a fully non-parametric formulation of model (\ref{mod1}) is subject to further research. For fitting model (\ref{mod1}) we use an approach reminiscent of the EM-algorithm, similar to \cite{Hajjem2014}. In short, the MERF-algorithm alternately estimates a) the forest function, assuming the random effects term to be correct, and b) the random effects part, assuming the Out-of-Bag-predictions (OOB-predictions) from the forest to be correct. OOB-predictions utilize the observations left unused in the construction of each of the forest's sub-trees \citep{Breiman2001, Biau_Scornet2016}. The proposed algorithm is as follows: \begin{enumerate} \item Initialize $b = 0$ and set the random components $\hat{v}_{(0)}$ to zero. \item Set $b = b+1$. Update $\hat{f}(X)_{(b)}$ and $\hat{v}_{(b)}$: \begin{enumerate} \item $y^*_{(b)} = y -Z \hat{v}_{(b-1)}$ \item Estimate $\hat{f}()_{(b)}$ using a random forest with dependent variable $y^*_{(b)}$ and covariates $X$. Note that $\hat{f}()_{(b)}$ is the same function for all areas $i$. \item Get the OOB-predictions $\hat{f}(X)^{OOB}_{(b)}$. \item Fit a linear mixed model without intercept and with the regression coefficient of $\hat{f}(X)^{OOB}_{(b)}$ restricted to 1: $$y = \hat{f}(X)^{OOB}_{(b)} +Z \hat{v}_{(b)} + \epsilon.$$ \item Extract the variance components $\hat{\sigma}^2_{\epsilon,(b)}$ and $\hat{H}_{(b)}$ and estimate the random effects by: $$\hat{v}_{(b)} = \hat{H}_{(b)}Z ' \hat{V}_{(b)}^{-1} (y - \hat{f}(X)^{OOB}_{(b)}).$$ \end{enumerate} \item Repeat Step (2) until convergence is reached. \end{enumerate} The convergence of the algorithm is assessed by the marginal change of the modified generalized log-likelihood (GLL) criterion: $$GLL(f,v_i \mid y) = \sum_{i=1}^{D}\left([y_i - f(X_i) - Z_i v_i]' R_i^{-1} [y_i - f(X_i) - Z_i v_i] + v_i' H_i^{-1} v_i +\log |H_i|+ \log|R_i|\right).$$ In the linear case with $f = X \beta$, and for given variance components $H$ and $R$, the maximization of the GLL-criterion is equivalent to the solution of the so-called mixed model equations \citep{Wu_Zhang2006} - leading to the best linear unbiased predictor (BLUP): $ \hat{v} = HZ' V ^{-1} (y - X\hat{\beta})$. For random forests, the corresponding estimator $\hat{v}$ for known parameters $H$ and $R$ is given by: \begin{equation}\label{v_opt} \hat{v} = HZ ' V ^{-1} (y - \hat{f}(X)^{OOB}). \end{equation} Mathematical details of the derivations are provided in Appendix A. This result is in line with \cite{capitaine_etal2021}, who note that $\hat{v}$ is obtained by taking the conditional expectation given the data $y$; $\hat{v}$ can thus be considered as the BLUP for the linear part of model (\ref{mod1}). The estimation of the variance components in Step 2 (d) for $\hat{\sigma}^2_{\epsilon}$ and $\hat{H}$ is obtained by taking the expectation of the maximum likelihood estimators given the data. While $\hat{\sigma}^2_{\epsilon}$ is the naive estimator within the discussed framework, it cannot be considered a valid estimator for the variance $\sigma_{\epsilon}^2$ of the unit-level errors $\epsilon$: \citet{Breiman2001} maintains that the sum of squared residuals from OOB-predictions is a valid estimator for the squared prediction error of new individual observations.
However, as an estimator of the residual variance under the model, $\hat{\sigma}^2_{\epsilon}$ is positively biased, as it includes the uncertainty regarding the estimation of the function $\hat{f}()$. Following \citet{Mendez_Lohr2011}, we use a bias-adjusted estimator for the residual variance $\sigma^2_{\epsilon}$ from the random forest model (\ref{mod1}), using a bootstrap bias-correction. The essential steps to obtain the corrected residual variance are summarized as follows: \begin{enumerate} \item Use the OOB-predictions $\hat{f}(X)^{\text{OOB}}$ from the final model $\hat{f}()$ after convergence of the algorithm. \item Generate $B$ bootstrap samples $y^{\star}_{(b)} = \hat{f}(X)^{\text{OOB}} + \epsilon^{\star}_{(b)}$, where the values $\epsilon^{\star}_{(b)}$ are sampled with replacement from the centred marginal residuals $\hat{e} = y -\hat{f}(X)^{\text{OOB}}$. \item Recompute $\hat{f}(X)^{\text{OOB}}_{(b)}$ using a random forest with $y^{\star}_{(b)}$ as the dependent variable. \item Estimate the correction-term $K(\hat{f})$ by: \begin{align*} \hat{K}(\hat{f}) = B^{-1} \sum_{b=1}^{B} \left[\hat{f}(X)^{\text{OOB}} - \hat{f}(X)^{\text{OOB}}_{(b)}\right]^2. \end{align*} \end{enumerate} The bias-corrected estimator for the residual variance is then given by: \begin{equation}\label{biasadj} \hat{\sigma}_{bc,\epsilon}^2 = \hat{\sigma}_{\epsilon}^2 - \hat{K}(\hat{f}). \end{equation} \subsection{Predicting small-area averages}\label{sec:2.3} The MERF-model (\ref{mod1}) predicts the conditional mean of a metric dependent variable on the individual level, given unit-level auxiliary information. In the context of SAE, however, we are not interested in predictions on the individual level, but in estimating indicators such as area-level means or area-level totals \citep{Rao_Molina2015}. Thus, we assume the same structural simplifications as the LMM proposed by \cite{Battese_etal1988} for estimating area-level means throughout the paper, i.e. $q=1$, $Z$ is the $n \times D$ design-matrix of area-intercept indicators, $v = [v_{1},...,v_{D}]'$ is the $D \times 1$ vector of random effects, and the variance-covariance matrix of the random effects simplifies to $H_i = \sigma_{v}^2$. Firstly, we use the fact that the random forest estimate of the fixed part $\hat{f}()$ expresses the conditional mean on the unit-level. We calculate the mean-estimator for each area $i$, based on available supplementary data sources (usually census or administrative data), by: $$\bar{\hat{f}}(X_{i}) = \frac{1}{N_i} \sum_{j=1}^{N_i} \hat{f}(x_{ij}).$$ Secondly, we exploit the result (\ref{v_opt}) that $\hat{v}_i$ is the BLUP for the linear part of model (\ref{mod1}). Therefore, the proposed estimator for the area-level means $\mu = [\mu_1,..., \mu_D]'$ is given by: \begin{equation}\label{mu1} \hat{\mu}_i = \bar{\hat{f}}(X_i) + Z_i\hat{v}_i\enspace\enspace\text{for}\enspace\enspace i=1,\ldots,D. \end{equation} In the case of non-sampled areas, the proposed estimator for the area-level mean reduces to the fixed part from the random forest: $$\hat{\mu}_i = \bar{\hat{f}}(X_i).$$ \textcolor{black}{We shortly discuss the properties of our proposed estimator from Equation (\ref{mu1}). The structural component $Z_i\hat{v}_i$ captures dependency and correlation structures by random effects, and the expression $\bar{\hat{f}}(X_i)$ is the fixed effects predictor of the mean.
For the special case where $\hat{f}()$ is assumed to be the linear model $\hat{f}(X) = X\hat{\beta}$, with regression parameters $\hat{\beta} = [\hat{\beta}_1,...,\hat{\beta}_p]'$, the estimator for the area-level means resembles the EBLUP \citep{Battese_etal1988}. If $\hat{f}()$ is a random forest, we obtain area-specific mean-estimates for the fixed effects from a highly flexible, data-driven and non-differentiable function. Two major tuning parameters affect the predictive performance of the random forest $\hat{f}()$: the number of trees, and the number of split-candidates at each node, which controls the degree of decorrelation. In contrast to existing parametric and non-parametric methods in SAE, our proposed estimator from Equation (\ref{mu1}) abstains from problems due to model-selection, since random forests implicitly perform optimized model-selection, including higher-order effects and non-linear interactions. Although flexible approaches such as P-splines \citep{Opsomeretal2008} can potentially determine non-linear relations in covariates, users have to explicitly specify a-priori the model-variables and interactions to be interpolated, resulting in a model-selection paradigm comparable to that of standard LMMs. An additional property of $\hat{f}()$ is the capability to deal with high-dimensional covariate data, i.e. cases where the number of covariates $p$ is larger than the sample size $n$. This property might be exploited in the context of applications to alternative Big Data sources \citep{Marchetti_etal2015,Schmid2017}.} \section{Estimation of uncertainty}\label{sec:3} The assessment of the uncertainty of area-level indicators in SAE is crucial for analysing the quality of estimates. The area-level MSE is a conventional measure fulfilling this goal, but its calculation is a challenging task. For instance, for the unit-level LMM with block-diagonal covariance matrices \citep{Battese_etal1988}, the exact MSE cannot be analytically derived with estimated variance components \citep{Gonzalez_etal2008, Rao_Molina2015}, and only partly-analytical approximations are available \citep{Prasad_Rao1990,Datta_Lahiri2000}. An alternative for estimating the uncertainty of area-level indicators are bootstrap-schemes \citep{Hall_Maiti2006, Gonzalez_etal2008, Chambers_Chandra2013}. In contrast, general statistical results for inference with random forests are rare, especially in comparison to the existing theory of inference using LMMs. \textcolor{black}{Nevertheless, we provide a theoretical discussion of the estimation of MSEs for in-sample area-level means in the spirit of \citet{Prasad_Rao1990}, based on \citet{Mendez2008}; derivations can be found in the online supplementary materials. The resulting analytical approximation is considered to be a complement that contextualizes the quality of our proposed bootstrap MSE-estimator for in-sample areas. We discuss performance details in the model-based simulation in Section \ref{sec:4}. An exact theoretical determination and a discussion of asymptotic properties are left to further research.} The theoretical background of random forests is growing, but mainly aims to quantify the uncertainty of individual predictions \citep{Sexton_Laake2009, wager_etal2014, Wager_Athey2018, Athey_etal2019, Zhang2019}. The extension of recent theoretical results, such as conditions for the consistency of unit-level predictions \citep{Scornet_etal2015} or their asymptotic normality \citep{Wager_Athey2018}, towards area-level indicators is a conducive strategy.
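Before detailing the proposed bootstrap, the estimation steps of Section \ref{sec:2.2} and the prediction step of Section \ref{sec:2.3} can be condensed into a short, illustrative R sketch. It relies on the \emph{randomForest} and \emph{lme4} packages that are also used in Section \ref{sec:4}; the function names are hypothetical, the convergence check is a simplified version of the GLL-criterion, and the bias-correction of Equation (\ref{biasadj}) is omitted, so the sketch should not be read as our exact implementation.
\begin{verbatim}
library(randomForest)
library(lme4)

# Illustrative MERF fit (Section 2.2): X: n x p covariates, y: response,
# area: factor of area labels. Returns the forest, the predicted random
# effects and the estimated variance components.
merf_fit <- function(X, y, area, max_iter = 100, tol = 1e-5) {
  v_hat <- rep(0, nlevels(area)); gll_old <- NA
  for (b in seq_len(max_iter)) {
    y_star <- y - v_hat[area]                            # step (a)
    rf     <- randomForest(x = X, y = y_star,
                           ntree = 500, mtry = 1)        # step (b)
    f_oob  <- rf$predicted                               # step (c): OOB
    lmm    <- lmer(y ~ 0 + (1 | area), offset = f_oob)  # step (d)
    s2e    <- sigma(lmm)^2                               # variance components
    s2v    <- as.numeric(VarCorr(lmm)$area)
    v_hat  <- ranef(lmm)$area[, 1]                       # step (e)
    gll <- sum((y - f_oob - v_hat[area])^2) / s2e +      # GLL with R_i, H_i
           sum(v_hat^2) / s2v +                          # as in the text
           length(y) * log(s2e) + nlevels(area) * log(s2v)
    if (!is.na(gll_old) && abs((gll - gll_old) / gll_old) < tol) break
    gll_old <- gll
  }
  list(forest = rf, v = v_hat, areas = levels(area),
       sigma2_e = s2e, sigma2_v = s2v)
}

# Area-level means (Section 2.3) from census covariates X_cens with area
# labels area_cens; non-sampled areas receive the fixed part only.
merf_means <- function(fit, X_cens, area_cens) {
  f_bar <- tapply(predict(fit$forest, X_cens), area_cens, mean)
  v     <- fit$v[match(names(f_bar), fit$areas)]
  f_bar + ifelse(is.na(v), 0, v)
}
\end{verbatim}
The intercept-free \texttt{lmer}-call with the OOB-predictions as an offset mirrors Step 2 (d), where the regression coefficient of $\hat{f}(X)^{OOB}_{(b)}$ is restricted to 1.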
In this paper, we propose a non-parametric random effect block (REB) bootstrap for estimating the MSE of the introduced area-level estimator given by Equation (\ref{mu1}). We aim to capture the dependence-structure of the data, as well as the uncertainty introduced by the estimation of model (\ref{mod1}). Our bootstrap-scheme builds on the non-parametric bootstrap introduced by \cite{Chambers_Chandra2013}. The proposed REB bootstrap has two major advantages: firstly, the empirical residuals depend only on the correct specification of the mean behaviour function $f()$ of the model; thus the REB setting is lenient towards specification errors regarding the covariance structure of the model. Secondly, the bootstrap within blocks ensures that the variability of the residuals within each area is captured. We scale and centre the empirical residuals by the bias-corrected residual variance (\ref{biasadj}), in order to eliminate from the naive residuals the uncertainty due to the estimation of $\hat{f}()$. The steps of the proposed bootstrap are as follows: \begin{enumerate} \item For the given $\hat{f}()$, calculate the $n_i\times 1$ vector of marginal residuals $\hat{e}_i = y_i -\hat{f}(X_i)$ and define $\hat{e} = [\hat{e}_1',...,\hat{e}_D']'$. \item Using the marginal residuals $\hat{e}$, compute level-2 residuals for each area by $$\bar{r}_{i} = \frac{1}{n_i} \sum_{j=1}^{n_i} {\hat{e}_{ij}}\enspace\enspace\text{for}\enspace\enspace i=1,\ldots,D,$$ where $\bar{r} = [\bar{r}_1,...,\bar{r}_D]'$ indicates the $D\times 1$ vector of level-2 residuals. \item To replicate the hierarchical structure, we use the marginal residuals and obtain the $n_i\times 1$ vector of level-1 residuals by $\hat{r}_{i} = \hat{e}_{i} - 1_{n_i}\bar{r}_i$. The residuals $\hat{r} = [\hat{r}_1',...,\hat{r}_D']'$ are scaled to the bias-corrected variance $\hat{\sigma}_{bc,\epsilon}^2$ (\ref{biasadj}) and centred, denoted by $\hat{r}^{c} = [\hat{r}^{c'}_{1},...,\hat{r}^{c'}_{D}]'$. The level-2 residuals $\bar{r}_i$ are likewise scaled to the estimated variance $\hat{H}_i=\hat{\sigma}_{v}^2$ and centred, denoted by $\bar{r}^{c} = [\bar{r}^{c}_1,...,\bar{r}^{c}_D]'$.
In particular, we study the performance of MERFs compared to the BHF-model \citep{Battese_etal1988}, the EBP \citep{Molina_rao2010}\textcolor{black}{, the EBP under data-driven Box-Cox transformation (EBP-BC) by \citet{Rojas_etal2019} as well as the non-parametric EBLUP with P-Splines (P-SPLINES) by \citet{Opsomeretal2008}}. The BHF-model serves as an established baseline for the estimation of area-level means and the EBP and the EBP-BC conceptually build on the BHF-model. \textcolor{black}{The non-parametric EBLUP by \citet{Opsomeretal2008} incorporates advantages of flexible, non-linear smoothing methods into existing theory for SAE based on LMMs.} Differences in the performance of the EBP and the EBP-BC highlight advantages of data-driven transformations, while differences in the performance of the linear competitors and \textcolor{black}{more flexible alternatives (MERF, P-SPLINES) indicate advantages of semi-parametric and non-linear modelling}. Overall, we aim to show, that our proposed methodology for point and uncertainty estimates performs comparably well to `traditional' SAE-methods and has comparative advantages in terms of robustness against model-failure. The simulation-setting is characterized by a finite population $U$ of size $N=50000$ with $D=50$ disjunct areas $U_1,...,U_D$ of equal size $N_i = 1000$. We generate samples under stratified random sampling, utilizing the $50$ small areas as stratas, resulting in a sample size of $n = \sum_{i=1}^{D} n_i = 1229$. The area-specific sample sizes range from $6$ to $49$ sampled units with a median of $21$ and a mean of $25$. The sample sizes are comparable to area-level sample sizes in the application in Section \ref{sec:5} and can thus be considered to be realistic. We consider four scenarios denoted as \textit{Normal}, \textit{Interaction}, \textit{Normal-Par}, \textit{Interaction-Par} and repeat each scenario independently $M=500$ times. The comparison of competing model-estimates under these four scenarios allows us to examine the performance under two major dimensions of model-misspecification: Firstly, the presence of skewed data delineated by non-normal error-terms and secondly, the presence of unknown non-linear interactions between covariates. Scenario \textit{Normal} provides a baseline under LMMs with normally distributed random effects and unit-level errors. As model-assumptions for LMMs are fully met, we aim to show, that MERFs perform comparably well to linear competitors in the reference scenario. Scenario \textit{Interaction} shares its error-structure with \textit{Normal}, but involves a complex model including quadratic terms and interactions. This scenario portrays advantages of semi-parametric and non-linear modelling methods protecting against model-failure. Working with inequality or income data, we often deal with skewed target variables. Thus, we use the Pareto distribution to mimic realistic income scenarios. Scenario \textit{Normal-Par} uses the linear additive structure of LMMs and adds Pareto distributed unit-level errors. The resulting scenario, including a skewed target variable, is a classical example promoting the use of transformations assuring that assumptions of LMMs to be met. Finally, scenario \textit{Interaction-Par} combines the two discussed dimensions of model misspecification, i.e. a non-Gaussian error-structure with complex interactions between covariates. We chose this scenario to emphasize the ability of MERFs to handle both complications simultaneously. 
Further details on the data-generating process for each scenario are provided in Table \ref{tab:MB1}. \begin{table}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Model-based simulation scenarios} \resizebox{\textwidth}{!}{\begin{tabular}{rlccccccc} \toprule {Scenario} & {Model} & {$x1$} & {$x2$} & {$\mu$} & {$v$} & {$\epsilon$} \\ \midrule Normal & $ y = 5000-500x_1-500x_2+v+\epsilon$ & $N(\mu,3^2)$ & $N(\mu,3^2)$ & $unif(-1,1)$ & $N(0,500^2)$ & $N(0,1000^2)$ \\ Interaction & $ y = 15000-500x_1x_2-250x_2^2+v+\epsilon $ & $N(\mu,4^2)$ & $N(\mu,2^2)$ & $unif(-1,1)$ & $N(0,500^2)$ &$N(0,1000^2)$ \\ Normal-Par & $ y = 5000-500x_1-500x_2+v+\epsilon $ & $N(\mu,3^2)$ & $N(\mu,3^2)$ & $unif(-1,1)$ & $N(0,500^2)$ & $Par(3,800)$ \\ Interaction-Par & $ y = 20000 - 500x_1x_2 - 250x_2^2+ v + \epsilon $ & $N(\mu,2^2)$ & $N(\mu,2^2)$ & $unif(-1,1)$ & $N(0,1000^2)$ & $Par(3,800)$ \\ \bottomrule \end{tabular}} \label{tab:MB1} \end{table} We evaluate point estimates for the area-level mean by the relative bias (RB) and the relative root mean squared error (RRMSE). As quality-criteria for the evaluation of the MSE-estimates, we choose the relative bias of RMSE (RB-RMSE) and the relative root mean squared error of the RMSE: \begin{align} \nonumber RB_i &= \frac{1}{M} \sum_{m=1}^{M} \left(\frac{\hat{\mu}^{(m)}_i - \mu^{(m)}_i}{\mu^{(m)}_i}\right)\\\nonumber \textcolor{black}{RRMSE_i} &= \textcolor{black}{\frac{\sqrt{\frac{1}{M} \sum_{m=1}^{M} \left(\hat{\mu}^{(m)}_i - \mu^{(m)}_i\right)^2}}{\frac{1}{M}\sum_{m=1}^{M}\mu^{(m)}_i}}\\\nonumber RB\text{-}RMSE_i &=\frac{\sqrt{\frac{1}{M} \sum_{m=1}^{M} MSE^{(m)}_{est_i}} - RMSE_{emp_i}}{RMSE_{emp_i}}\\\nonumber RRMSE\text{-}RMSE_i &= \frac{\sqrt{\frac{1}{M} \sum_{m=1}^{M} \left(\sqrt{MSE^{(m)}_{est_i}} - RMSE_{emp_i}\right)^2}}{RMSE_{emp_i}}, \end{align} where $\hat{\mu}^{(m)}_i$ is the estimated mean in area $i$ based on any of the methods mentioned above and $\mu^{(m)}_i$ defines the true mean for area $i$ in simulation round $m$. $MSE_{est_i}^{(m)}$ is estimated by the proposed bootstrap in Section \ref{sec:3} and $RMSE_{emp_i} = \sqrt{\frac{1}{M} \sum_{m=1}^{M}(\hat{\mu}^{(m)}_i -\mu^{(m)}_i)^2}$ is the empirical root MSE over $M$ replications. For the computational realization of the model-based simulation, we use R \citep{R_language}. The BHF estimates are realized from the \emph{sae}-package \citep{Molina_Marhuenda:2015} and the \emph{emdi}-package \citep{Kreutzmann_etal2019} is used for the EBP as well as the EBP under the data-driven Box-Cox transformation. \textcolor{black}{We implement the P-SPLINE method with the package \emph{mgcv} \citep{Wood_2017}.} For estimating the proposed MERF-approach, we use the packages \emph{randomForest} \citep{Liaw_Wiener2002} and \emph{lme4} \citep{Bates_etal2015}. We monitor the convergence of algorithm introduced in Section \ref{sec:2.2} with a precision of $1e^{-5}$ in relative difference of the GLL-criterion and set the number of split-candidates to $1$, keeping the default of $500$ trees for each forest. We start with a focus on the performance of point estimates. Figure \ref{fig:MBpoint} reports the empirical RMSE of each method under the four scenarios. As expected, in the \textit{Normal} scenario, the BHF and the EBP perform on the same level and outperform the MERF estimator. The EBP with a data-driven transformation (EBP-BC) \textcolor{black}{and the non-parametric EBLUP (P-SPLINES)} lead to similar results compared to the BHF and EBP. 
This shows that the data-driven transformation \textcolor{black}{and the penalized smoothing approach} work as expected. A similar pattern appears in the results of the \textit{Normal-Par} scenario, except that the EBP-BC reaches a lower overall RMSE due to its data-driven transformation and the resulting improved estimation under skewed data. As anticipated, a comparison of the performance of the MERF between the \textit{Normal} and the \textit{Normal-Par} scenario indicates that the MERF exhibits robust behaviour under skewed data and thus against violations of the normality of the errors. \textcolor{black}{LMM-based competitors match the data-generating process of the fixed effects and perform accordingly, as already observed under the \textit{Normal} scenario.} For the complex scenarios, i.e. \textit{Interaction} and \textit{Interaction-Par}, the point estimates of the proposed MERF outperform the SAE-methods based on LMMs. The EBP-BC performs better in terms of lower RMSE values compared to the BHF and the EBP in both interaction scenarios. \textcolor{black}{The flexible approach of P-SPLINES outperforms the BHF, the EBP and the data-driven EBP-BC. However, MERFs automatically identify interactions and non-linear relations, such as the quadratic term in scenario \textit{Interaction-Par}, which leads to a clear comparative advantage in terms of RMSE.} Overall, the results from Figure \ref{fig:MBpoint} indicate that the MERF performs comparably well to LMMs in simple scenarios and outperforms `traditional' SAE-models in the presence of unknown non-linear relations between covariates. Additionally, the robustness of MERFs against model-misspecification holds if the distributional assumptions of LMMs are not met, i.e. in the presence of non-normally distributed errors and skewed data. Table \ref{tab:MBpoint} reports the corresponding values of RB and RRMSE for the discussed point estimates. The RB and the RRMSE of the MERF-method remain at a competitively low level in all scenarios. Most interestingly, in the complex scenarios (\textit{Interaction} and \textit{Interaction-Par}), a familiar result regarding the statistical properties of random forests appears: the RB is higher compared to the LMM-based models, but the enlarged RB is rewarded by a lower RRMSE for the point estimates.
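For transparency, the quality criteria defined above translate directly into code. The following R sketch restates the formulas, assuming that \texttt{est}, \texttt{truth} and \texttt{mse\_est} are $D \times M$ matrices of point estimates, true area-level means and bootstrap MSE-estimates (the names are placeholders):
\begin{verbatim}
## Quality criteria per area over M simulation rounds;
## multiply by 100 to obtain the percentages reported in the tables
sim_metrics <- function(est, truth, mse_est) {
  rmse_emp <- sqrt(rowMeans((est - truth)^2))  # empirical RMSE
  list(RB         = rowMeans((est - truth) / truth),
       RRMSE      = rmse_emp / rowMeans(truth),
       RB_RMSE    = (sqrt(rowMeans(mse_est)) - rmse_emp) / rmse_emp,
       RRMSE_RMSE = sqrt(rowMeans((sqrt(mse_est) - rmse_emp)^2)) /
                    rmse_emp)
}
\end{verbatim}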
\begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=1\linewidth]{Figures/Results_MB_point} \caption{Empirical RMSE comparison of point estimates for area-level averages under four scenarios} \label{fig:MBpoint} \end{figure} \begin{table}[!h] \footnotesize \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Mean and Median of RB and RRMSE over areas for point estimates in four scenarios} \begin{tabular}{@{\extracolsep{5pt}} lrcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & &\multicolumn{2}{c}{\textit{Normal}} &\multicolumn{2}{c}{\textit{Interaction}}&\multicolumn{2}{c}{\textit{Normal-Par}}&\multicolumn{2}{c}{\textit{Interaction-Par}} \\ \hline \\[-1.8ex] & & Median & Mean & Median & Mean & Median & Mean & Median & Mean \\ \hline \\[-1.8ex] \multicolumn{9}{l}{RB[\%]}\\ \hline \\[-1.8ex]
&BHF & $0.087$ & $0.131$ & $-0.202$ & $0.106$ & $0.193$ & $0.220$ & $0.043$ & $0.142$ \\
&EBP & $0.069$ & $0.128$ & $-0.060$ & $0.108$ & $0.216$ & $0.217$ & $0.105$ & $0.142$ \\
&EBP-BC & $0.152$ & $0.184$ & $0.156$ & $0.381$ & $0.174$ & $0.129$ & $0.139$ & $0.262$ \\
&P-SPLINES & $0.096$ & $0.137$ & $-0.064$ & $0.123$ & $0.199$ & $0.227$ & $0.051$ & $0.090$ \\
&MERF & $0.137$ & $0.191$ & $0.279$ & $0.312$ & $0.409$ & $0.460$ & $0.151$ & $0.188$ \\ \hline \\[-1.8ex] \multicolumn{9}{l}{RRMSE[\%]}\\ \hline \\[-1.8ex]
&BHF & $3.830$ & $4.090$ & $3.770$ & $3.870$ & $3.600$ & $4.100$ & $2.800$ & $2.950$ \\
&EBP & $3.850$ & $4.100$ & $3.750$ & $3.870$ & $3.600$ & $4.120$ & $2.830$ & $2.950$ \\
&EBP-BC & $3.850$ & $4.100$ & $3.680$ & $3.800$ & $3.430$ & $3.710$ & $2.650$ & $2.770$ \\
&P-SPLINES & $3.840$ & $4.090$ & $3.580$ & $3.620$ & $3.590$ & $4.100$ & $2.380$ & $2.490$ \\
&MERF & $4.070$ & $4.380$ & $2.270$ & $2.330$ & $3.890$ & $4.380$ & $1.420$ & $1.530$ \\ \hline \\[-1.8ex] \end{tabular} \label{tab:MBpoint} \end{table} We scrutinize the performance of our proposed MSE-estimator in the four scenarios, examining whether the robustness against model-misspecification observed for the point estimates, due to unknown complex interactions between covariates or skewed data, also holds for our non-parametric bootstrap-scheme. For each scenario and each simulation round, we set the number of bootstrap replications to $B = 200$. From the comparison of RB-RMSE among the four scenarios provided in Table \ref{tab:MBmse}, we infer that the proposed non-parametric bootstrap procedure effectively handles scenarios that lead to model-misspecification in the case of (untransformed) LMMs. This is demonstrated by the essentially unbiased mean and median values of RB-RMSE over areas under all four scenarios, independently of whether the data-generating process is characterized by complex interactions (\textit{Interaction}), non-normal error terms (\textit{Normal-Par}) or a combination of both problems (\textit{Interaction-Par}). \textcolor{black}{We compare the performance of our bootstrap estimator to an estimator resulting from an analytical discussion of uncertainty in the spirit of \citet{Prasad_Rao1990}, which can be found in the online supplementary materials. The analytical approximation generally underestimates the MSE, except for the \textit{Interaction} scenario, which substantiates the quality of the proposed bootstrap estimator.
A detailed graphical comparison of the RB-RMSE between the non-parametric bootstrap and the analytical MSE estimator is provided by Figure \ref{fig:MBappendix} in Appendix B.} \begin{table}[!h] \footnotesize \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Performance of bootstrap and analytical MSE estimators in model-based simulation: mean and median of RB-RMSE and RRMSE-RMSE over areas} \begin{tabular}{@{\extracolsep{5pt}} lrcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & &\multicolumn{2}{c}{\textit{Normal}} &\multicolumn{2}{c}{\textit{Interaction}}&\multicolumn{2}{c}{\textit{Normal-Par}}&\multicolumn{2}{c}{\textit{Interaction-Par}} \\ \hline \\[-1.8ex] & & Median & Mean & Median & Mean & Median & Mean & Median & Mean \\ \hline \\[-1.8ex] \multicolumn{9}{l}{RB-RMSE[\%]}\\ \hline \\[-1.8ex]
&Bootstrap & $0.319$ & $-0.084$ & $0.127$ & $0.548$ & $0.340$ & $0.724$ & $-0.802$ & $0.123$ \\
&Analytic & $-5.700$ & $-5.010$ & $0.707$ & $0.261$ & $-4.020$ & $-4.480$ & $-7.500$ & $-7.000$ \\ \hline \\[-1.8ex] \multicolumn{9}{l}{RRMSE-RMSE[\%]}\\ \hline \\[-1.8ex]
&Bootstrap & $12.500$ & $12.500$ & $22.200$ & $22.800$ & $43.100$ & $48.200$ & $41.000$ & $44.700$ \\
&Analytic & $6.130$ & $5.930$ & $10.400$ & $12.200$ & $21.300$ & $21.400$ & $33.600$ & $33.500$ \\ \hline \\[-1.8ex] \end{tabular} \label{tab:MBmse} \end{table} \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=1\linewidth]{Figures/tracking_fig} \caption{Empirical, bootstrapped and analytical area-level RMSEs for four scenarios} \label{fig:trackMSE} \end{figure} From the results of Table \ref{tab:MBmse} and the discussion above, we cannot directly infer the area-wise tracking properties of the estimated RMSE against the empirical RMSE over the $500$ simulation rounds. Thus, Figure \ref{fig:trackMSE} provides additional intuition on the quality of our proposed non-parametric MSE-bootstrap estimator. Given the tracking properties in all four scenarios, we conclude that our bootstrap MSE-estimates correspond closely to the empirical RMSE \textcolor{black}{and appear to track the domain-specific empirical RMSE more precisely than the estimates of our analytical MSE estimator from the theoretical discussion in the online supplementary materials.} Furthermore, we do not observe systematic differences between the bootstrapped and empirical MSE-estimates with respect to different survey-sample sizes. \section{Application: Estimating household income for Mexican municipalities}\label{sec:5} In this section, we discuss the performance of our proposed method in the context of a genuine SAE example. Concretely, we apply the MERF-method proposed in Section \ref{sec:2.2} to estimate domain-level average household income for the Mexican state of Nuevo León. Section \ref{sec:5.1} describes the data and Section \ref{sec:5.2} reports the results. We end our empirical analysis with an additional design-based simulation in Section \ref{sec:5.3}, which enables a thorough discussion of the quality of our point and MSE-estimates. \subsection{Data description}\label{sec:5.1} Income inequality in Mexico is a research topic of lasting importance, particularly regarding the effectiveness of developmental and social policies \citep{lambert_hyunmin2019}. Although the level of income inequality in Mexico is comparable to that of other Latin American countries, it is one of the highest among OECD countries \citep{Oecd_21}.
Analysing average national income values is common practice, but such aggregates are an inappropriate measure for monitoring the efficacy of regional policy measures. Besides detailed disaggregated information, suitable statistical methods are also needed to quantify local developments. For the following application, we break down regional differences in average household per capita income within one of the 32 Mexican states. Nuevo León is located in the North-East of Mexico and, according to the (sub-national) Human Development Index (HDI), it is one of the most developed states of Mexico \citep{smits_Permanyer2019}. Nevertheless, the distribution of individual household income in Nuevo León is unequal and thus highly skewed. For instance, the Gini-coefficient of household income is comparable to the total Gini of Mexican household disposable income from 2012, which was $0.46$ \citep{Oecd_21}. We use data from 2010 provided by CONEVAL (Consejo Nacional de Evaluación de la Política de Desarrollo Social), combining the Mexican household income and expenditure survey (Encuesta Nacional de Ingreso y Gastos de los Hogares, ENIGH) with a sample of census microdata by the National Institute of Statistics and Geography (Instituto Nacional de Estadística y Geografía). The dataset comprises income and socio-demographic information, measured by variables that are equally part of the survey and the census data. The target variable for the estimation of domain-level average household income in Section \ref{sec:5.2} is the total household per capita income (\textit{ictpc}, measured in pesos), which is available in the survey but not in the census. Nuevo León is divided into $51$ municipalities. While the census dataset in our example comprises information on $54848$ households from all $51$ municipalities, the survey data includes information on $1435$ households from $21$ municipalities, ranging from a minimum of $5$ to a maximum of $342$ households, with a median of $27$ households. This leaves $30$ municipalities out-of-sample. Table \ref{tab:Apdetails} provides details on sample and census data properties. \begin{table}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Summary statistics on in- and out-of-sample areas: area-specific sample size of census and survey data} \begin{tabular}{@{\extracolsep{5pt}} lcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] &\multicolumn{2}{c}{Total}&\multicolumn{2}{c}{In-sample}&\multicolumn{2}{c}{Out-of-sample}\\ &\multicolumn{2}{c}{51} & \multicolumn{2}{c}{21} & \multicolumn{2}{c}{30} \\ \hline \hline \\[-1.8ex] & Min. & 1st Qu. & Median & Mean & 3rd Qu. & Max. \\ \hline Survey area sizes & 5.00 & 14.00 & 27.00 & 68.33 & 79.00 & 342.00 \\ Census area sizes & 76.00 & 454.50 & 642.00 & 1075.45 & 872.50 & 5904.00 \\ \hline \end{tabular} \label{tab:Apdetails} \end{table} With respect to the design-based simulation in Section \ref{sec:5.3}, we emphasize that we are in the fortunate position of having a variable that is highly correlated with the target variable \textit{ictpc} and that is available in both the survey and the census dataset: the variable \textit{inglabpc} measures earned per capita income from work. Although \textit{inglabpc} deviates from the desired income definition for our approach - as it covers only one aspect of total household income - it is suitable for evaluating our method in the design-based simulation in Section \ref{sec:5.3}.
Furthermore, the design-based simulation implicitly assesses the quality of our empirical results from Section \ref{sec:5.2}. Using data from Nuevo León for the estimation of domain-level income averages is an illustrative and realistic example that imposes several challenges on the proposed method of MERFs: first of all, about $24$ percent of the households in the survey-sample are located in the capital Monterrey. Secondly, there exist more out-of-sample domains than in-sample domains. Moreover, we are confronted with households reporting zero incomes. Our intention in choosing this challenging example for the application in Section \ref{sec:5.2} and the subsequent design-based simulation in Section \ref{sec:5.3} is simple: we aim to show that our proposed approaches for point and uncertainty estimation constitute a valid alternative to existing SAE-methods and are applicable especially in cases where `traditional' methods perform poorly or even fail. Additionally, we aim to provide a clear-cut presentation and empirical assessment of MERFs for SAE, which requires a transparent discussion of advantages and potential weaknesses in demanding real-world examples. \\ \subsection{Results and discussion} \label{sec:5.2} Direct estimates of the average total household per capita income for Nuevo León are possible for 21 out of 51 domains. The use of model-based SAE-methods incorporating covariate census data not only leads to estimates for the remaining out-of-sample areas, but also improves the overall quality of the estimates \citep{Tzavidis_etal2018}. As the variable \textit{ictpc} is highly skewed, we anticipate potential issues of model-misspecification and suggest the use of the EBP-BC and the MERF. Given the theoretical discussion and the results of the model-based simulation in Section \ref{sec:4}, we infer that the EBP-BC and the proposed method of MERFs for SAE effectively handle non-normally distributed data. Moreover, we are particularly interested in differences between these two diverse SAE-models in the context of real-world applications. \textcolor{black}{We use the design-based simulation in Section \ref{sec:5.3} to extend our methodological discussion towards all methods discussed in the model-based simulation in Section \ref{sec:4}.} Figure \ref{fig:mapincome} maps the results of the direct estimates, the MERF and the EBP-BC. Evidently, the model-based estimates of the MERF and the EBP-BC expand the perspective on regional disparities in average total household income per capita towards non-sampled regions. Furthermore, we identify three distinct clusters of income levels from our results in Figure \ref{fig:mapincome} that are not observable from the map of direct estimates: a low-income cluster in the South of Nuevo León, a very high-income cluster in the metropolitan area of the capital Monterrey and a group of middle-income areas between the North and the South of the state. This finding illustrates the potential of model-based techniques to highlight patterns of regional income disparities and to enable the mapping of reliable empirical evidence. Given the information provided by the three maps, we do not report major differences between the point estimates of the MERF and the EBP-BC.
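For readers wishing to reproduce the LMM-based side of this comparison, direct and EBP-BC estimates of this kind can be obtained with the \emph{emdi}-package. The following sketch is purely illustrative and not our exact code: the data frames \texttt{survey\_data} and \texttt{census\_data}, the bootstrap size and the shown covariate subset are hypothetical placeholders (the calibrated bootstrap used for the direct variances additionally requires calibration information).
\begin{verbatim}
library(emdi)
## Direct estimates with bootstrapped variances (illustrative)
dir_est <- direct(y = "ictpc", smp_data = survey_data,
                  smp_domains = "municipality", var = TRUE, B = 200)
## EBP under data-driven Box-Cox transformation (EBP-BC)
ebp_bc <- ebp(fixed = ictpc ~ escol_rel_hog + bienes + jaesc,
              pop_data = census_data, pop_domains = "municipality",
              smp_data = survey_data, smp_domains = "municipality",
              transformation = "box.cox", MSE = TRUE, B = 200)
estimators(ebp_bc, indicator = "Mean", MSE = TRUE)
\end{verbatim}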
\begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=1\linewidth]{Figures/map_estim} \caption{Estimated average total household per capita income \textit{ictpc} for the state Nuevo León based on three different estimation methods} \label{fig:mapincome} \end{figure} Apart from mapping empirical results of domain averages, we are mainly interested in quality criteria, such as coefficients of variation (CV), and in the details of the model-specification for the EBP-BC and the MERF. To obtain variance estimates for the direct estimates, we use the calibrated bootstrap method \citep{Alfons_Templ2013} as provided in the R package \emph{emdi} \citep{Kreutzmann_etal2019}. \textcolor{black}{For the MSE-estimates of the MERF, we rely on the non-parametric bootstrap from Section \ref{sec:3}}. For the model of the data-driven EBP-BC, we follow the approach of \citet{Rojas_etal2019} and use the Bayesian Information Criterion (BIC) to identify valid predictors for the target variable \textit{ictpc}. The resulting working-model includes variables determining occupation, sources of income, the socio-economic level and educational aspects of individual households. The identification of predictive covariates for MERFs highlights a conceptual difference to LMM-based methods. Due to the properties of the random forest algorithm \citep{Breiman2001}, random forests perform an implicit variable selection. The selected model for the fixed effects in our case is characterized by an R-squared of about $0.47$. The dilemma between predictive precision and the interpretability of random forest models can be mitigated by concepts such as variable importance plots \citep{Greenwell_etal2020} (Figure \ref{fig:Vipappendix}) or an analysis of partial dependence for influential covariates \citep{Greenwell_2017} (Figure \ref{fig:Pdpappendix}). \textcolor{black}{Variable importance is reported as the mean increase in the individual mean squared prediction error (\%IncMSE) that results from randomly permuting the values of the corresponding variable. Partial dependence plots depict the estimated marginal effect of a particular variable on the predicted target variable. From the inspection of Figure \ref{fig:Pdpappendix}, we can infer whether relationships between \textit{ictpc} and predictive variables are monotonic or more complex. Figure \ref{fig:Vipappendix} reveals that the most important variable for the random forest model is the average relative amount of schooling (escol\_rel\_hog), followed by the availability of goods in the household (bienes) as well as the average years of schooling of persons (jaesc). Table \ref{tab:appExplain} in Appendix B provides explanations of further variables. The most influential variables are related to education, work experience and employment, and household assets. Figure \ref{fig:Pdpappendix} indicates rather complex and non-linear relationships between \textit{ictpc} and its predictive covariates, except for two variables related to the number of income earners in the household (pcpering, pcpering\_2).} We monitor the convergence of the proposed MERF algorithm with a precision of $10^{-5}$ in the relative difference of the GLL-criterion and keep the default of $500$ trees. A parameter optimization based on 5-fold cross-validation on the original survey-sample suggests the use of $3$ variables at each split for the forest. For the MSE-bootstrap procedure, we use $B=200$. Figure \ref{fig:detailCV} reports the corresponding CVs for in- and out-of-sample domains.
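The interpretation tools mentioned above are available in standard R packages. The following sketch illustrates how such diagnostics can be produced for a random forest fitted to the survey data; \texttt{survey\_data} is a hypothetical placeholder for a data frame holding \textit{ictpc} and the covariates, and this is not our exact code:
\begin{verbatim}
library(randomForest)
library(pdp)  # Greenwell (2017)
rf <- randomForest(ictpc ~ ., data = survey_data,
                   ntree = 500, mtry = 3, importance = TRUE)
importance(rf, type = 1)  # permutation importance (%IncMSE)
varImpPlot(rf, type = 1)  # variable importance plot
## partial dependence of ictpc on the most important variable
partial(rf, pred.var = "escol_rel_hog", plot = TRUE)
\end{verbatim}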
We observe a significant improvement in the in-sample CVs of the EBP-BC and the MERF compared to the CVs of the direct estimates. The CVs of the MERF are slightly lower in median terms than the results of the EBP-BC. However, there exists one outlying area for the MERF. Going into detail, the corresponding area of General Zaragoza features no obvious data-specific irregularities, such as an extremely low sample size. Nevertheless, General Zaragoza is one of the poorest regions according to our analysis. In the design-based simulation in Section \ref{sec:5.3}, we will pay special attention to differences between the MERF and the EBP-BC regarding their ability to handle comparably extreme estimates given a broad range of relatively high- and relatively low-income areas. Regarding the CVs for out-of-sample areas, we discover an evident advantage for the CVs of our proposed MERF approach. From the scrutiny of individual CV values, it remains unclear whether the improved results of MERFs are rooted in superior point estimates for the domain-level averages or in relatively lower MSE-estimates. Figure \ref{fig:pointEstim} compares the direct estimates to the model-based estimates for in- and out-of-sample domains. We find no systematic differences between the estimates of the EBP-BC and the MERF, although the variance of the MERF predictions appears to be generally lower. This conjecture is in line with the theoretical properties of random forests \citep{Breiman2001, Biau_Scornet2016}. In-sample areas in Figure \ref{fig:pointEstim} are sorted by survey-sample sizes. In comparison to the direct estimates, the predicted averages of the EBP-BC as well as of the MERF appear less extreme. The obvious irregularity in terms of high income is a distinct part of the Monterrey metropolitan area: San Pedro Garza García registers several headquarters of national and international corporations. This economic peculiarity apparently transfers to the income of its residents. Figure \ref{fig:mapincome} underlines the existence of an apparent high-income cluster in this region. Overall, it is interesting to observe how reliable estimates at a fine spatial resolution unveil patterns of regional income segregation. Our proposed method of MERFs provides useful results with remarkably higher accuracy than the direct estimates and the EBP-BC for most out-of-sample domains. The following design-based simulation will strengthen the reliability of these results and enable an in-depth discussion of our methods for point and MSE-estimation.
\begin{figure}[!htb] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=0.93\linewidth]{Figures/vip_plot} \caption{Variable importance in mean decrease accuracy (\%IncMSE) for the ten most influential variables} \label{fig:Vipappendix} \end{figure} \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=0.95\linewidth]{Figures/pdp_plot} \caption{Partial dependence plots for variables ranked by \%IncMSE} \label{fig:Pdpappendix} \end{figure} \begin{figure}[!htb] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=0.95\linewidth]{Figures/CV_details} \caption{Domain-specific CVs for target variable \textit{ictpc} for in- and out-of-sample domains} \label{fig:detailCV} \end{figure} \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=0.95\linewidth]{Figures/point_estim} \caption{Detailed comparison of point estimates for the domain-level average total household income. The dotted line separates sampled from non-sampled areas. In-sample areas are sorted by decreasing survey-sample size} \label{fig:pointEstim} \end{figure} \FloatBarrier \subsection{Evaluation using design-based simulation}\label{sec:5.3} The design-based simulation allows us to directly juxtapose the performance of the proposed MERF-approach with existing SAE-methods for the estimation of area-level means based on empirical data. In this sense, the design-based simulation not only adds insights to the results of the model-based simulation in Section \ref{sec:4}, but also evaluates the results of the example in the previous Section \ref{sec:5.2}. We focus on area-level mean-estimates of household income from work (\textit{inglabpc}) in the Mexican state of Nuevo León. As we use the same data with a different target variable, the sample and census data properties are similar to the previous example, with details provided in Table \ref{tab:Apdetails}. For the implementation of the design-based simulation, we draw $T=500$ independent samples from the fixed population of our census dataset. Each pseudo-survey-sample mirrors the characteristics of the original survey, as we keep the number of in-sample households similar to the original sample sizes and abstain from sampling out-of-sample municipalities. As a result, we use $500$ equally structured pseudo-survey-samples with equal overall sample size. True values are defined as the domain-level averages of household income from work in the original census. We consider the same methods as in the model-based simulation in Section \ref{sec:4}. As in Section \ref{sec:5.2}, we use the same working-model for the BHF, the EBP, \textcolor{black}{the EBP-BC and P-SPLINES and} assume it to be fixed throughout the design-based simulation. For the EBP-BC and the MERF, we keep the parameters as discussed in Section \ref{sec:5.2}. \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=1\linewidth]{Figures/RMSE_DB} \caption{Performance of area-specific point estimates including details on in- and out-of-sample areas.
Comparison of empirical RMSEs from the design-based simulation for target variable \textit{inglabpc}} \label{fig:DBpoint} \end{figure} \begin{table}[!h] \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Mean and Median of RB and RRMSE over in- and out-of-sample areas for point estimates} \begin{tabular}{@{\extracolsep{5pt}} lrcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & &\multicolumn{2}{c}{Total} &\multicolumn{2}{c}{In-sample}&\multicolumn{2}{c}{Out-of-sample} \\ \hline \\[-1.8ex] & & Median & Mean & Median & Mean & Median & Mean \\ \hline \\[-1.8ex] \multicolumn{7}{l}{RB[\%]}\\ \hline \\[-1.8ex]
&BHF & $14.200$ & $17.800$ & $5.070$ & $11.200$ & $21.800$ & $22.400$ \\
&EBP & $14.700$ & $17.900$ & $5.310$ & $11.700$ & $22.600$ & $22.300$ \\
&EBP-BC & $9.650$ & $18.400$ & $6.140$ & $7.150$ & $15.800$ & $26.300$ \\
&P-SPLINES & $8.120$ & $18.200$ & $0.014$ & $14.200$ & $13.500$ & $21.000$ \\
&MERF & $7.720$ & $18.600$ & $3.490$ & $17.200$ & $10.600$ & $19.600$ \\ \hline \\[-1.8ex] \multicolumn{7}{l}{RRMSE[\%]}\\ \hline \\[-1.8ex]
&BHF & $14.900$ & $21.600$ & $9.520$ & $17.000$ & $23.000$ & $24.800$ \\
&EBP & $15.900$ & $21.700$ & $9.480$ & $17.400$ & $22.900$ & $24.700$ \\
&EBP-BC & $14.000$ & $23.900$ & $12.900$ & $15.200$ & $16.100$ & $29.900$ \\
&P-SPLINES & $11.600$ & $22.800$ & $7.360$ & $20.000$ & $15.000$ & $24.700$ \\
&MERF & $9.430$ & $21.200$ & $6.130$ & $19.900$ & $12.500$ & $22.100$ \\ \hline \\[-1.8ex] \end{tabular} \label{tab:DBpoint} \end{table} The discussion of the results starts with an investigation into the performance of the point estimates. Figure \ref{fig:DBpoint} reports the empirical RMSE of the area-level mean-estimates for Nuevo León in total and with details on the $21$ in-sample and $30$ out-of-sample areas. The corresponding summary statistics for Figure \ref{fig:DBpoint} are given in Table \ref{tab:MBappendix} in Appendix B. Regarding the total of $51$ areas, we observe no remarkable difference in the performance of the BHF and the EBP, whereas the EBP-BC has a lower RMSE on average. \textcolor{black}{P-SPLINES outperform the BHF, EBP and EBP-BC in mean and median terms of RMSE.} The MERF point estimates indicate the lowest RMSEs among all areas, resulting in a more than $22$ percent improvement compared to the BHF. Referring to the RMSE for in-sample areas, we see two different ways in which the EBP-BC and \textcolor{black}{the adaptive methods of P-SPLINES and MERF} deal with the high and unbalanced variation in our true values for certain areas, ranging from $475$ to about $4004$ pesos: overall, the MERF copes best with modelling the complex survey data and produces highly accurate estimates for the majority of in-sample areas. A closer look at the results reveals, however, that higher RMSE values due to overestimation mainly occur in two areas, both characterised by a very low level of income ($622$ and $544$ pesos, respectively). \textcolor{black}{A similar observation can be made for P-SPLINES, although the MERF appears to reproduce the predictive relations more efficiently.} In contrast, we observe the in-sample behaviour of the EBP-BC, which clearly contrasts with its superior overall performance. The EBP-BC appears to balance extreme estimates by producing slightly worse estimates for each individual in-sample area rather than allowing individually inferior estimates for specific `outlier' areas. This behaviour is conceptually rooted in its data-driven transformation approach.
Nevertheless, this property enables the EBP-BC to identify a model providing stable and precise estimates for the majority of areas, especially the 30 non-sampled areas. Given the data-scenario of Nuevo León, the performance on the out-of-sample areas reveals each method's quality and stability. In this case, the \textcolor{black}{EBP-BC and the non-parametric approaches of P-SPLINES and MERF} outperform the `traditional' methods (BHF and EBP) in terms of lower RMSE. \textcolor{black}{The median RMSE of P-SPLINES aligns with the values of the EBP-BC, although the RMSEs of P-SPLINES are lower in mean terms.} One distinct advantage of the MERF is its adaptability and implicit model-selection, which is rewarded in the presence of complex data-scenarios. The findings from Figure \ref{fig:DBpoint} are strengthened by a discussion of the mean and median values of RB and RRMSE in Table \ref{tab:DBpoint}. Referring to all $51$ areas, the RB of the data-driven EBP-BC\textcolor{black}{, P-SPLINES} and the MERF is smaller in median terms than the RB of the BHF and the EBP. In particular, the MERF shows comparatively low levels of median RB, while its mean values lie in the same range as those of competing methods. The obvious difference between mean and median values indicates the previously discussed existence of inferior estimates for specific regions due to the empirical properties of our underlying data. For the $21$ in-sample areas, \textcolor{black}{P-SPLINES} perform best regarding the median values of RB. The close relation between the mean and median values of RB for the EBP-BC highlights the mentioned balancing property of the EBP-BC. For the majority of areas in the design-based simulation, i.e. the $30$ non-sampled areas, the EBP-BC\textcolor{black}{, P-SPLINES} as well as the MERF exhibit a comparatively low level of median RB. Especially the MERF stands out with the lowest values of median RRMSE compared to its competitors\textcolor{black}{, while the mean values of all methods are within a comparable range}. Finally, we focus on the performance of the proposed non-parametric MSE-bootstrap procedure. While the model-based simulation in Section \ref{sec:4} indicates unbiasedness of the proposed bootstrap-scheme under all four scenarios, our results from the design-based simulation require a deeper discussion. \textcolor{black}{We abstain from a discussion of the results of our analytical approximation to the area-level MSE, because it is limited to in-sample areas and was solely used to contextualize the quality of our proposed bootstrap-scheme from Section \ref{sec:3}.} Table \ref{tab:DBmse} reports the results of RB-RMSE and RRMSE-RMSE for the corresponding estimates. Figure \ref{fig:DBappendix} in Appendix B visualizes details from Table \ref{tab:DBmse}. First of all, the values of RRMSE-RMSE are comparable to those of the most complex scenario in the model-based simulation in Section \ref{sec:4}. The RB-RMSE for the in-sample areas indicates unbiasedness in median terms and an acceptable overestimation regarding the mean RB-RMSE. For the out-of-sample areas, we face a moderate underestimation regarding the median value and an overestimation according to the mean values. Nevertheless, Figure \ref{fig:DBappendix} in Appendix B reveals that the mixed signal between mean and median in Table \ref{tab:DBmse} is explained by a balanced mix of under- and overestimation.
Overall, given the challenging conditions of this design-based simulation, the expectations towards the MSE-bootstrap procedure are met. Especially the results for the in-sample areas, combined with the insights from the model-based simulation, indicate a solid and reliable performance of the proposed non-parametric bootstrap procedure. Although the RB-RMSE over all $51$ areas is driven by the results of the out-of-sample areas, the median RB-RMSE is acceptable. Apparently, the MSE-estimates mirror the high variation in sample sizes, paired with the high and disproportionate variation between high-income and low-income regions among the $21$ in-sample and $30$ out-of-sample areas. From an applied perspective, the MSE-estimates for out-of-sample areas are nevertheless practicable for the construction of confidence intervals, with a median coverage of $0.97$. \begin{table}[!h] \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Performance of MSE-estimator in design-based simulation: mean and median of RB and RRMSE over in- and out-of-sample areas} \begin{tabular}{@{\extracolsep{5pt}} lcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] &\multicolumn{2}{c}{Total} &\multicolumn{2}{c}{In-sample}&\multicolumn{2}{c}{Out-of-sample} \\ \hline \\[-1.8ex] & Median & Mean & Median & Mean & Median & Mean \\ \hline \\[-1.8ex]
RB-RMSE[\%] & $-1.260$ & $14.300$ & $0.719$ & $7.820$ & $-9.460$ & $18.800$ \\
RRMSE-RMSE[\%] & $48.100$ & $55.900$ & $41.400$ & $47.700$ & $50.900$ & $61.700$ \\ \hline \\[-1.8ex] \end{tabular} \label{tab:DBmse} \end{table} \FloatBarrier \section{Concluding remarks}\label{sec:6} In this paper, we explore the potential of tree-based machine learning methods for the estimation of area-level means in SAE. In particular, we provide a solid framework easing the use of random forests for regression within the existing methodological framework of SAE. We highlight the potential of our approach to meet modern requirements of SAE, including the robustness of random forests against model-failure and their applicability to high-dimensional problems processing Big Data sources. The methodological part focuses on the MERF-procedure \citep{Hajjem2014} and implicitly discusses a semi-parametric unit-level mixed model, treating LMM-based SAE-methods, such as the BHF and the EBP, as special cases. The model is fit by an algorithm resembling the EM-algorithm, allowing for flexibility in the specification of the fixed effects as well as the random effects. The proposed point estimator for area-level means is complemented by a non-parametric MSE-bootstrap scheme, building on the REB-bootstrap by \citet{Chambers_Chandra2013} and the bias-corrected estimate of the residual variance by \citet{Mendez_Lohr2011}. We evaluate the performance of the point and MSE-estimates in comparison to `traditional' SAE-methods by model- and design-based simulations and provide a distinctive SAE example using income data from the Mexican state of Nuevo León in Section \ref{sec:5.2}. The model-based simulation in Section \ref{sec:4} demonstrates the ability of the point estimates to perform comparably well in classical scenarios and to outperform `traditional' methods in the presence of unknown non-linear interactions between covariates. The design-based simulation in Section \ref{sec:5.3} confirms the adequacy of MERFs for point estimation under realistic conditions. The model- and design-based simulations indicate that the proposed approach is robust against distributional violations of normality for the random effects and for the unit-level error terms.
Concerning our proposed MSE-bootstrap scheme, we conclude that it is reliable, based on its performance in the model-based simulation in Section \ref{sec:4}. Furthermore, we obtain reasonable support for its performance from the application in Section \ref{sec:5.2} and the subsequent design-based simulation in Section \ref{sec:5.3}. We motivate three major dimensions for further research, including theoretical work, aspects of generalization and advanced applications using Big Data covariates: from a theoretical perspective, further research is needed to investigate the construction \textcolor{black}{and theoretical discussion of a partial-analytical MSE for area-level means. A conducive strategy is an extension based on our theoretical discussion in the online supplementary materials. Additionally, the transfer of recent theoretical results, such as conditions for the consistency of unit-level predictions \citep{Scornet_etal2015} or considerations of individual prediction intervals \citep{wager_etal2014, Zhang2019}, towards area-level indicators bears potential.} Alternatively, considerations concerning a fully non-parametric formulation of model (\ref{mod1}) constitute an interesting research direction. From a survey-statistical perspective, our proposed method currently abstains from the use of survey weights, which bears a risk if the assumption of non-informative sampling is violated. Nevertheless, there exist approaches incorporating weights into random forests \citep{Winham_etal2013}. The transfer of such ideas to the proposed method of MERFs is subject to ongoing research. Regarding additional generalizations of the proposed method, we aim to extend the use of MERFs towards the estimation of small area quantiles and other non-linear indicators, such as Gini-coefficients or Head Count Ratios. Furthermore, a generalization towards binary or count data is possible and left to further research. The semi-parametric composite formulation of model (\ref{mod1}) allows $f()$ to take any functional form for the estimation of the conditional mean of $y_i$ given $X_i$ and technically transfers to other machine learning methods, such as gradient-boosted trees or support vector machines. In terms of advanced applications, we recommend the use of MERFs to the SAE-research community for empirical problems with complex random-effect and covariance structures. Equally interesting is the use of high-dimensional supplementary data, i.e. Big Data covariates, for the estimation of area-level means, which can be directly handled by the proposed MERF-framework. \section*{Acknowledgements} The authors are grateful to CONEVAL for providing the data used in empirical work. The views set out in this paper are those of the authors and do not reflect the official opinion of CONEVAL. The numerical results are not official estimates and are only produced for illustrating the methods. Additionally, the authors would like to thank the HPC Service of ZEDAT, Freie Universität Berlin, for computing time. \section*{Appendix A: derivation of the random effects predictor} After convergence of the algorithm introduced in Section \ref{sec:2.2}, we obtain an optimal non-parametric estimator $\hat{f}()$ of $f()$. In the following, we simplify notation and refer to $\hat{f}^{OOB}()$ simply as $\hat{f}()$.
The best predictor of the random effects for known covariance parameters $H_i$ and $R_i$ minimizes the generalized log-likelihood criterion: $$GLL (f,v_i \mid y) = \sum_{i=1}^{D}\left\{ [y_i - f(X_i) - Z_i v_i ]' R_i^{-1} [ y_i - f(X_i) - Z_i v_i] + v_i ' H_i^{-1} v_i +\log |H_i|+ \log|R_i|\right\}.$$ Since the log-determinants do not depend on $v_i$, minimizing the GLL with respect to the random effects is equivalent to minimizing, for each area $i$, the first two terms of the summation: $$[y_i - \hat{f}(X_i) - Z_i v_i ]' R_i^{-1} [ y_i - \hat{f}(X_i) - Z_i v_i] + v_i ' H_i^{-1} v_i.$$ Expanding the quadratic form leads to: \begin{flalign*} [y_i - \hat{f}(X_i) - Z_i v_i ]' R_i^{-1}[ y_i - \hat{f}(X_i) - Z_i v_i] + v_i ' H_i^{-1} v_i &= \\ y_i' R_i^{-1}y_i - y_i' R_i^{-1}\hat{f}(X_i) -y_i' R_i^{-1} Z_i v_i -\hat{f}(X_i)'R_i^{-1}y_i + \hat{f}(X_i)'R_i^{-1}\hat{f}(X_i) &+ \\ \hat{f}(X_i)'R_i^{-1}Z_i v_i - (Z_i v_i)'R_i^{-1}y_i+(Z_i v_i)'R_i^{-1}\hat{f}(X_i)+(Z_i v_i)'R_i^{-1}(Z_i v_i)+v_i ' H_i^{-1} v_i. \end{flalign*} Differentiating this expression with respect to $v_i$ and setting the derivative to zero yields the minimizer: \begin{flalign*} -2 Z_i'R_i^{-1}y_i + 2 Z_i'R_i^{-1}\hat{f}(X_i) + 2 Z_i'R_i^{-1}Z_i v_i + 2 H_i^{-1} v_i &= 0 &\Longleftrightarrow\\ - Z_i'R_i^{-1}y_i + Z_i'R_i^{-1}\hat{f}(X_i) + Z_i'R_i^{-1}Z_i v_i + H_i^{-1} v_i &= 0 &\Longleftrightarrow\\ Z_i'R_i^{-1}y_i - Z_i'R_i^{-1}\hat{f}(X_i) &= (Z_i'R_i^{-1}Z_i + H_i^{-1}) v_i &\Longleftrightarrow\\ v_i &= (Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1} Z_i'R_i^{-1} (y_i - \hat{f}(X_i)). \end{flalign*} Rewriting this solution in terms of $V_i = Z_i H_i Z_i' + R_i$ gives: \begin{flalign*} v_i &= (Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1} Z_i'R_i^{-1} (y_i - \hat{f}(X_i))\\ &= (Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1} Z_i'R_i^{-1} V_i V_i^{-1} (y_i - \hat{f}(X_i))\\ &= (Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1} Z_i'R_i^{-1} (R_i+Z_i H_i Z_i')V_i^{-1}(y_i - \hat{f}(X_i))\\ &= (Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1} (Z_i' +Z_i'R_i^{-1}Z_i H_i Z_i')V_i^{-1}(y_i - \hat{f}(X_i))\\ &= (Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1}(Z_i'R_i^{-1}Z_i + H_i^{-1}) H_i Z_i'V_i^{-1}(y_i - \hat{f}(X_i))\\ &= H_i Z_i'V_i^{-1}(y_i - \hat{f}(X_i)). \end{flalign*} The solution of the minimization problem is therefore given by $\hat{v}_i^{*} = H_i Z_i'V_i^{-1}(y_i - \hat{f}(X_i))$. Note that for $\hat{f}(X_i) = X_i\hat{\beta}$, this solution coincides with the BLUP under the LMM.
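As a quick numerical sanity check of the closed form above, the equality between $\hat{v}_i^{*} = H_i Z_i'V_i^{-1}(y_i - \hat{f}(X_i))$ and the solution of the normal equations can be verified in R for arbitrary (here assumed) variance components; the sketch below considers a single area with a random intercept:
\begin{verbatim}
set.seed(1)
n <- 5
Z <- matrix(1, n, 1)                # random-intercept design (q = 1)
H <- diag(0.5, 1); R <- diag(2, n)  # assumed variance components
V <- Z %*% H %*% t(Z) + R
r <- rnorm(n)                       # stands in for y_i - f(X_i)
v1 <- H %*% t(Z) %*% solve(V) %*% r
v2 <- solve(t(Z) %*% solve(R) %*% Z + solve(H)) %*%
      t(Z) %*% solve(R) %*% r
all.equal(v1, v2)                   # TRUE up to numerical error
\end{verbatim}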
\section*{Appendix B: additional simulation results and model-diagnostics} \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=1\linewidth]{Figures/Details_RB_MSE_MB} \caption{Details on the performance of the proposed bootstrap MSE-estimator and the analytic approximation in the model-based simulation: boxplots of the area-specific RB-RMSEs averaged over simulation runs} \label{fig:MBappendix} \end{figure} \begin{table}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Explanation of most influential variables according to the random forest model in the application of Section \ref{sec:5.2}} \begin{tabular}{@{\extracolsep{1pt}} lr} \\[-1.8ex]\hline \hline \\[-1.8ex] Variable name & Explanation \\ \hline ictpc & Total household income per capita \\ escol\_rel\_hog & Average relative amount of schooling standardized \\ & by age and sex of household members \\ bienes & Availability of goods in the household \\ jaesc & Average years of schooling of persons in the household \\ jnived & Formal education of the head of the household \\ actcom & Assets in the household \\ pcpering & Percentage of income earners in the household \\ jexp & Years of working experience of the head of the household \\ pcpering\_2 & Number of income earners in the household by household size \\ pcocup & Percentage of people employed in the household \\ jtocup & Occupation type \\ \hline \end{tabular} \label{tab:appExplain} \end{table} \begin{table}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Performance of point estimates in design-based simulation: summary statistics of empirical RMSE for area-level mean-estimates} \begin{tabular}{@{\extracolsep{5pt}} lrcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] Areas & Method & Min & 1st Qu. & Median & Mean & 3rd Qu. 
& Max \\ \hline \multirow{5}{*}{Total} & BHF & 72.58 & 170.63 & 336.06 & 386.40 & 498.13 & 1351.82 \\ & EBP & 68.83 & 168.13 & 341.10 & 387.91 & 490.14 & 1342.92 \\ & EBP-BC & 65.22 & 225.84 & 331.64 & 376.49 & 460.86 & 1094.08 \\ & P-SPLINES & 72.70 & 142.91 & 290.84 & 337.11 & 462.94 & 969.15 \\ & MERF & 82.58 & 136.01 & 236.86 & 298.20 & 447.21 & 716.72 \\ \hline \multirow{5}{*}{In-sample} & BHF & 111.93 & 139.13 & 246.62 & 301.90 & 349.26 & 978.49 \\ & EBP & 107.41 & 143.07 & 251.95 & 308.47 & 348.00 & 994.45 \\ & EBP-BC & 145.48 & 212.64 & 308.43 & 314.08 & 353.02 & 705.81 \\ & P-SPLINES & 86.69 & 142.70 & 224.82 & 285.37 & 421.29 & 707.71 \\ & MERF & 94.56 & 123.72 & 141.16 & 264.27 & 392.73 & 688.74 \\ \hline \multirow{5}{*}{Out-of-sample} & BHF & 72.58 & 224.35 & 422.90 & 445.55 & 541.31 & 1351.82 \\ & EBP & 68.83 & 248.89 & 412.91 & 443.52 & 547.12 & 1342.92 \\ & EBP-BC & 65.22 & 234.98 & 348.83 & 420.17 & 494.83 & 1094.08 \\ & P-SPLINES & 72.70 & 172.29 & 351.17 & 373.34 & 465.36 & 969.15 \\ & MERF & 82.58 & 151.24 & 272.97 & 321.95 & 493.85 & 716.72 \\ \hline \end{tabular} \label{tab:MBappendix} \end{table} \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=1\linewidth]{Figures/DB_RB_MSE} \caption{Details on the performance of the proposed MSE-estimator in the design-based simulation: boxplots of the area-specific RB-RMSEs averaged over simulation runs including details on in- and out-of-sample areas} \label{fig:DBappendix} \end{figure} \clearpage \bibliographystyle{apacite} \section{Introduction}\label{sec:1} Having accurate and detailed information on social and economic conditions, summarised by appropriate indicators, is imperative for the efficient implementation of policies. The term \textit{detailed} is used to signify information that extends beyond aggregate levels into highly disaggregated geographical and other domains (e.g. demographic groups). The term \textit{accurate} refers to information that is estimated with an appropriate level of precision and is comparable over space and time. Simply analysing data from national sample surveys is not enough to achieve the dual target of accurate and detailed information. This is mainly due to the reduction of sample sizes as the level of required detail increases. The achievement of this dual target demands appropriate model-based methodology, collectively referred to as Small Area Estimation (SAE). SAE-methods can be broadly divided into two classes: first, area-level models \citep{Fay_Heriot1979} assume that only aggregated data for the survey and for the auxiliary information are available. Second, unit-level models \citep{Battese_etal1988} - further labelled as BHF - require access to the survey and to the auxiliary information at the micro-level. A versatile extension of the BHF model is the EBP-approach by \citet{Molina_rao2010}. The EBP is capable of estimating area-level means as well as other linear and non-linear indicators. Both classes (area-level and unit-level models) are predominantly regression-based models, where the hierarchical structure of the observations is modelled by random effects. These linear mixed models (LMM) assume normality of the random effects and of the error terms. Focusing on social and economic inequality data, the required assumptions for LMMs are rarely supported by empirical evidence. \citet{JiangRao2020} remind us that optimality results and predictive performance in model-based SAE are inevitably connected to the validity of model assumptions.
Without theoretical and practical considerations regarding improperly met assumptions, estimates are potentially biased and mean squared error (MSE) estimates unreliable. One strategy to prevent model-failure is to assure normality by transforming the dependent variable \citep{sugasawa2017transforming,Rojas_etal2019}. For instance, \cite{Rojas_etal2019} generalize the EBP with a data-driven transformation of the dependent variable, such that normality assumptions can be met in the transformed setting. Further details on how to obtain the most-likely transformation parameter that improves the performance of unit-level models are available in \cite{Rojas_etal2019} and \cite{sugasawa2019adaptively}, or from a more applied perspective in \cite{Tzavidis_etal2018}. Apart from transformation strategies, another alternative is the use of models with less restrictive (parametric) assumptions to avoid model-failure. For instance, \cite{DialloRao2018} and \cite{Graf_etal2019} formulate the EBP under more flexible distributional assumptions. \textcolor{black}{Alternatively, \citet{Cha06} propose an approach for estimating area-level means based on M-quantile models, which are a robust method avoiding the distributional assumptions of LMMs, including the formal specification of area-level random effects. \citet{Tzavidis_etal2010} and \citet{Marchetti_Tzavidis2021} extended this approach to allow for estimating more complex statistics, like quantiles of area-specific distribution functions and non-linear indicators.} Semi- or non-parametric approaches for the estimation of area-level means were investigated, among others, by \citet{Opsomeretal2008}, who use penalized spline regression, treating the coefficients of the spline components as additional random effects within the LMM setting. A distinct methodological option to avoid the parametric assumptions of LMMs is the class of machine learning methods. These methods are not limited to parametric models and `learn' predictive relations from data, including higher-order interactions between covariates, without explicit model assumptions \citep{Hastie_etal2009, Varian2014}. Among the broad class of machine learning methods, we focus on tree-based models and particularly on random forests \citep{Breiman2001}, because they exhibit excellent predictive performance in the presence of outliers and implicitly solve problems of model-selection \citep{Biau_Scornet2016}. In general, the predictive perspective of (tree-based) machine learning methods transfers straightforwardly to the methodology of unit-level SAE-models: survey data is used to construct a model with predictive covariates. Subsequently, auxiliary information from a supplementary data source (usually census, register or administrative data) is utilized to obtain predictions over sampled and non-sampled areas. From a machine learning perspective, the survey data serves as an implicit training set to construct a proper model, while the supplementary data is used to predict the final indicators. Nevertheless, \citet{JiangRao2020} claim that results from machine learning methods in SAE are harder to interpret and justify for SAE-practitioners compared to LMM-alternatives. We aim to fill this gap by providing a consistent framework enabling a coherent use of tree-based machine learning methods in SAE.
In particular, we \textcolor{black}{incorporate random forests within the methodological tradition of SAE by proposing a non-linear, data-driven and semi-parametric alternative for the estimation of area-level means using Mixed Effects Random Forests (MERF) \citep{Hajjem2014}. We focus on the construction of area-level mean-estimates using MERFs for sampled and out-of-sample domains. Our proposed model-based estimator combines a structural component, which accounts for the hierarchical dependencies of survey data through random effects, with a non-parametric random forest modelling the fixed effects. In contrast to existing SAE-methods, our proposed approach assists SAE-practitioners through automated model-selection. We highlight the strengths and weaknesses of random forests in the context of SAE, in comparison to existing (or `traditional') SAE-methods, using design- and model-based simulations. A distinct merit of this paper is the provision of a reliable bootstrap-scheme determining the uncertainty of area-level mean-estimates. Thus, this paper aims to contribute towards the trend of diversifying the model-toolbox for SAE-practitioners and researchers, while simultaneously respecting the methodological and structural nature of SAE.} The general idea of tree-based methods in SAE is not entirely new. \citet{Anderson_etal2014} use district-level data from Peru to juxtapose the performance of LMM-based and tree-based methods for estimating population densities. \citet{Bilton_etal2017} use classification trees for categorical variables to incorporate auxiliary information from administrative data into survey data on household-poverty in Nepal. For a continuous variable, \citet{DeMolinerGoga2018} estimate mean electricity consumption curves for sub-populations of households in France by using methods of LMMs and regression-based trees. \citet{MoConvilleetal2019} propose a regression-tree estimator for finite-population totals, which can be viewed as an automatically-selected post-stratification estimator. \citet{Dagdougetal2020} analyse theoretical properties of random forests in the context of complex survey data. \citet{Mendez2008} provides theoretical and empirical considerations for using random forests in the context of SAE and compares their performance with `traditional' unit-level LMMs for the estimation of area-level means. Although we share the general idea of \citet{Mendez2008}, the approach of this paper differs in several ways: first of all, we leave the random forest algorithm \citep{Breiman2001} unchanged and explicitly estimate random effects to account for the hierarchical structure of the data. Secondly, the proposed framework of this paper is more flexible and potentially extendable to model more complex hierarchical dependency structures as well as spatial and temporal correlations. Additionally, the extension to other methods of machine learning, such as support vector machines or gradient-boosted trees, is possible. The paper is organized as follows: Section \ref{sec:2} provides a methodological introduction to random forests and introduces MERFs based on \citet{Hajjem2014} as a method that effectively combines random forests with the ability to account for hierarchical dependencies of unit-level observations. Additionally, we motivate a general unit-level mixed model, treating LMMs in SAE as special cases. In Section \ref{sec:2.3}, we discuss the construction of area-level mean-estimates.
Random forests promote the flexibility of predictive models, but their lack of distributional assumptions complicates inferences. As a result, Section \ref{sec:3} proposes a non-parametric bootstrap-scheme for the estimation of the area-level MSE. In Section \ref{sec:4}, we use model-based simulations under complex settings to extensively discuss and compare the performance of the proposed method for point and MSE-estimates. We claim MERFs to be a valid alternative to existing methods for the estimation of small area means. In Section \ref{sec:5}, we use household income data of the Mexican state Nuevo León to estimate area-level averages and corresponding uncertainty estimates. We highlight the modelling and robustness properties of our proposed methods. Section \ref{sec:5.3} proceeds with a design-based simulation, which assesses the quality of the results of the application in Section \ref{sec:5.2}. Furthermore, the design-based simulation contributes to a genuine demonstration of the properties and advantages of MERFs in the context of SAE. Section \ref{sec:6} concludes and motivates further research. \section{Theory and method}\label{sec:2} In this section, we propose a flexible, data-driven approach using random forests for the estimation of area-level means in the presence of unit-level survey data. The method requires a joint understanding of tree-based modelling techniques and concepts of SAE. We review the basic theory of random forests and discuss modifications to ensure their applicability to hierarchical data and subsequently to applications of SAE. \subsection{Review of random forests}\label{sec:2.1} Random forests combine individual decision trees \citep{Breiman_etal1984} to improve their joint predictive power, while simultaneously reducing the prediction variance \citep{Biau_Scornet2016, Breiman2001}. \citet{Breiman2001} extends his idea of bagging \citep{Breiman1996bagging} - which combines the predictions of single trees through a bootstrap and aggregation procedure - to random forests, which apply bootstrap aggregation to decorrelated trees. Note that the two tuning parameters of random forests are the number of trees (controlling the number of bootstrap replications) and the number of variables selected as candidates for each split (controlling the degree of decorrelation). Because the forest is a combination of decorrelated trees, each aiming to minimize the prediction MSE, the optimal estimator for the random forest regression function $f()$ also minimizes the point-wise MSE. The minimizer under squared error loss in the regression context is given by the conditional mean of the target variable $y$ given the data \citep{Efron_Hastie2016, Wager_Athey2018}. Random forests are attractive because they avoid assumptions such as linearity or the distributional specification of model errors; however, observations are assumed to be independent. Applications of SAE are characterized by the use of hierarchical data. Ignoring the correlation between observations generally results in inferior point-predictions and inferences. LMMs capture the dependencies between observations by random effects, while the effects of covariates are modelled by linear fixed effects, resulting in an additive model of both terms. In the context of tree-based methods, \citet{Sela_Simonoff2012} propose a semi-parametric mixed model consisting of a random effects part and a fixed effects non-parametric tree-model. \citet{Hajjem_etal2011} propose a similar approach under the label of mixed effects regression trees (MERT).
Applications of SAE are characterized by the use of hierarchical data. Ignoring the correlation between observations generally results in inferior point-predictions and inferences. LMMs capture the dependencies between observations by random effects, while the effects of covariates are modelled by linear fixed effects, resulting in an additive model of both terms. In the context of tree-based methods, \citet{Sela_Simonoff2012} propose a semi-parametric mixed model consisting of a random effects part and a fixed effects non-parametric tree-model. \citet{Hajjem_etal2011} propose a similar approach under the label of mixed effects regression trees (MERT). As the superior performance of random forests over regression trees transfers to dependent data, \citet{Hajjem2014} replace the fixed effects part in MERTs by a random forest, leading to mixed effects random forests (MERF). In the next subsection, we scrutinize this approach and propose a general semi-parametric unit-level mixed model combining the flexibility of tree-based models with the structural advantages of linear mixed models.

\subsection{Mixed effects random forests}\label{sec:2.2} We assume a finite population which is divided into $D$ disjoint areas $U_i$, with population sizes $N_i$, where $i = 1,...,D$ specifies the areas and $N = \sum_{i=1}^{D} N_i$ defines the population size. We assume that a sample from this population is available. The number of sampled observations in area $i$ is given by $n_i$, and the total sample size is denoted by $n = \sum_{i=1}^{D} n_i$. In each sampled area we observe individual units indexed by $j = 1,...,n_i$. We define the metric target variable for area $i$ as the $n_i \times 1$ vector of individual observations $y_i = [y_{i1},..., y_{in_i}]'$. Covariates are captured in the $n_i \times p$ matrix $X_i = [x_{i1},..., x_{in_i}]'$, where $p$ defines the number of covariates. $Z_i = [z_{i1},...,z_{in_i}]'$ defines the $n_i \times q$ matrix of area-specific random effect specifiers, where $q$ describes the dimension of the random effects. $v_i = [v_{i1},...,v_{iq}]'$ is the $q \times 1$ vector of random effects for area $i$, and $\epsilon_i = [\epsilon_{i1},...,\epsilon_{in_i}]'$ is the $n_i \times 1$ vector of individual error terms. Observations between areas are assumed to be independent, and $v_i$ and $\epsilon_i$ are mutually independent and normally distributed, with variance-covariance matrix $H_i$ for the random effects of each area $i$ and $R_i$ for the individual errors. A joint notation for all $D$ areas is as follows: \begin{align*} y = col_{1\leq i\leq D}(y_i) = (y_1',...,y_D')', \quad X = col_{1\leq i\leq D}(X_i), \\ Z = diag_{1\leq i\leq D}(Z_i), \quad v = col_{1\leq i\leq D}(v_i), \quad \epsilon = col_{1\leq i\leq D}(\epsilon_i), \\ R = diag_{1\leq i\leq D}(R_i), \quad H = diag_{1\leq i\leq D}(H_i). \end{align*} The goal is to identify a relation $f()$ between the covariates $X$ and the target variable $y$, in order to predict values for non-sampled observations utilizing available supplementary covariates from census or register information across areas. We state a model consisting of two major parts: a fixed part $f(X)$ and a linear part $Zv$ capturing dependencies by random effects. In the following, $f()$ can be any parametric or non-parametric function that expresses the conditional mean of the target variable $y$ given the covariates $X$: \begin{equation}\label{mod1} y = f(X)+Z v + \epsilon, \end{equation} where $$ \epsilon \sim N(0,R) \quad \text{and} \quad v \sim N(0,H). $$ Note that for each area $i$ the following model holds: \begin{equation} y_i = f(X_i)+Z_i v_i + \epsilon_i. \end{equation} The covariance matrix of the observations $y$ is given by the block-diagonal matrix $Cov(y) = V = diag_{1\leq i\leq D}(V_i)$, where $V_i = Z_i H_i Z_i' +R_i$. We introduce model (\ref{mod1}) in general terms to potentially allow for the modelling of complex covariance and dependency structures. However, for the rest of the paper we assume that correlation arises only due to between-area variation, i.e. $R_i = \sigma_{\epsilon}^2 I_{n_i}$ for all areas $i$.
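To illustrate this structure, the following R sketch simulates data from model (\ref{mod1}) with a single random intercept per area; the fixed part and all parameter values are hypothetical and loosely anticipate the \textit{Interaction} scenario of Section \ref{sec:4}.

\begin{verbatim}
set.seed(456)
D    <- 50                       # number of areas
n_i  <- rep(20, D)               # area-specific sample sizes
area <- factor(rep(1:D, times = n_i))

## fixed part f(X): a non-linear function of two covariates
x1  <- rnorm(sum(n_i), sd = 4)
x2  <- rnorm(sum(n_i), sd = 2)
f_X <- 15000 - 500 * x1 * x2 - 250 * x2^2

## random intercepts v_i ~ N(0, 500^2), errors eps ~ N(0, 1000^2)
v   <- rnorm(D, sd = 500)
eps <- rnorm(sum(n_i), sd = 1000)

## unit-level model y = f(X) + Zv + eps, with Z the area-intercept design
y <- f_X + v[area] + eps
\end{verbatim}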
Note that the LMM proposed by \cite{Battese_etal1988} for estimating area-level means arises as a special case of (\ref{mod1}) by setting $f()$ to be the linear model $f(X) = X\beta$, with regression parameters $\beta = [\beta_1,...,\beta_p]'$. Defining $f()$ as a random forest results in the MERF-approach proposed by \citet{Hajjem2014}, which is the preferred specification throughout the rest of the paper. Before we continue, we want to clarify the consequences of the distributional assumptions in (\ref{mod1}), which mainly concern the linear part of the model. The unit-level errors are assumed to follow $\epsilon \sim N(0,R)$. However, the assumption of normality does not affect the properties of the fixed part $f(X)$, and we do not require residuals to be normally distributed for the application of our proposed method. Nevertheless, for the random components part, we require a proper likelihood function to ensure that the adapted expectation-maximization (EM) algorithm (see below for further details) for the parameter estimates converges towards a local maximum within the parameter space. A normality-based likelihood function is exploited, as it has two important properties: firstly, it facilitates the estimation of the random effects due to the existence of a closed-form solution of the integral of the Gaussian likelihood function. Secondly, the maximum likelihood estimate for the variance of the unit-level errors is given by the mean of the unit-level residual sum of squares. The estimation of the random effects could also be done in a non-parametric way by using discrete mixtures \citep{Marino2016, Marino2019}. However, the modification towards a fully non-parametric formulation of model (\ref{mod1}) is left for further research. For fitting model (\ref{mod1}), we use an approach reminiscent of the EM-algorithm, similar to \cite{Hajjem2014}. In short, the MERF-algorithm alternately estimates a) the forest function, assuming the random effects term to be correct, and b) the random effects part, assuming the Out-of-Bag-predictions (OOB-predictions) from the forest to be correct. OOB-predictions utilize the unused observations from the construction of each forest's sub-tree \citep{Breiman2001, Biau_Scornet2016}. The proposed algorithm is as follows (see also the sketch after the bias-correction below): \begin{enumerate} \item Initialize $b = 0$ and set the random components $\hat{v}_{(0)}$ to zero. \item Set $b = b+1$. Update $\hat{f}(X)_{(b)}$ and $\hat{v}_{(b)}$: \begin{enumerate} \item $y^*_{(b)} = y -Z \hat{v}_{(b-1)}$ \item Estimate $\hat{f}()_{(b)}$ using a random forest with dependent variable $y^*_{(b)}$ and covariates $X$. Note that $\hat{f}()_{(b)}$ is the same function for all areas $i$. \item Get the OOB-predictions $\hat{f}(X)^{OOB}_{(b)}$. \item Fit a linear mixed model without intercept and with the regression coefficient of $\hat{f}(X)^{OOB}_{(b)}$ restricted to $1$: $$y = \hat{f}(X)^{OOB}_{(b)} +Z \hat{v}_{(b)} + \epsilon.$$ \item Extract the variance components $\hat{\sigma}^2_{\epsilon,(b)}$ and $\hat{H}_{(b)}$ and estimate the random effects by: $$\hat{v}_{(b)} = \hat{H}_{(b)}Z ' \hat{V}_{(b)}^{-1} (y - \hat{f}(X)^{OOB}_{(b)}).$$ \end{enumerate} \item Repeat Step (2) until convergence is reached.
\end{enumerate} The convergence of the algorithm is assessed by the marginal change of the modified generalized log-likelihood (GLL) criterion: $$GLL(f,v \mid y) = \sum_{i=1}^{D}\left([y_i - f(X_i) - Z_i v_i ]' R_i^{-1} [ y_i - f(X_i) - Z_i v_i] + v_i ' H_i ^{-1} v_i +\log |H_i|+ \log|R_i|\right).$$ In the linear case with $f = X \beta$, and for given variance components $H$ and $R$, the maximization of the GLL-criterion is equivalent to the solution of the so-called mixed model equations \citep{Wu_Zhang2006}, leading to the best linear unbiased predictor (BLUP): $ \hat{v} = HZ' V ^{-1} (y - X\hat{\beta})$. For random forests, the corresponding estimator $\hat{v}$ for known parameters $H$ and $R$ is given by: \begin{equation}\label{v_opt} \hat{v} = HZ ' V ^{-1} (y - \hat{f}(X)^{OOB}). \end{equation} Mathematical details of the derivations are provided in Appendix A. This result is in line with \cite{capitaine_etal2021}, who show that $\hat{v}$ is obtained by taking the conditional expectation given the data $y$; $\hat{v}$ can thus be considered as the BLUP for the linear part of model (\ref{mod1}). The estimation of the variance components $\hat{\sigma}^2_{\epsilon}$ and $\hat{H}$ in Step 2 (d) is obtained by taking the expectation of the maximum likelihood estimators given the data. However, $\hat{\sigma}^2_{\epsilon}$ is a naive estimator within the discussed framework and cannot be considered a valid estimator for the variance $\sigma_{\epsilon}^2$ of the unit-level errors $\epsilon$. \citet{Breiman2001} maintains that the sum of squared residuals from OOB-predictions is a valid estimator for the squared prediction error of new individual observations. However, as an estimator of the residual variance under the model, $\hat{\sigma}^2_{\epsilon}$ is positively biased, as it includes the uncertainty regarding the estimation of the function $\hat{f}()$. Following \citet{Mendez_Lohr2011}, we use a bias-adjusted estimator for the residual variance $\sigma^2_{\epsilon}$ under model (\ref{mod1}) based on a bootstrap bias-correction. The essential steps to obtain the corrected residual variance are summarized as follows: \begin{enumerate} \item Use the OOB-predictions $\hat{f}(X)^{\text{OOB}}$ from the final model $\hat{f}()$ after convergence of the algorithm. \item Generate $B$ bootstrap samples $y^{\star}_{(b)} = \hat{f}(X)^{\text{OOB}} + \epsilon^{\star}_{(b)}$, where the values $\epsilon^{\star}_{(b)}$ are sampled with replacement from the centred marginal residuals $\hat{e} = y -\hat{f}(X)^{\text{OOB}}$. \item Recompute $\hat{f}(X)^{\text{OOB}}_{(b)}$ using a random forest with $y^{\star}_{(b)}$ as the dependent variable. \item Estimate the correction-term $K(\hat{f})$ by: \begin{align*} \hat{K}(\hat{f}) = B^{-1} \sum_{b=1}^{B} \left[\hat{f}(X)^{\text{OOB}} - \hat{f}(X)^{\text{OOB}}_{(b)}\right]^2. \end{align*} \end{enumerate} The bias-corrected estimator for the residual variance is then given by: \begin{equation}\label{biasadj} \hat{\sigma}_{bc,\epsilon}^2 = \hat{\sigma}_{\epsilon}^2 - \hat{K}(\hat{f}). \end{equation}
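To make the procedure concrete, the following R sketch implements the core iteration using the packages \emph{randomForest} \citep{Liaw_Wiener2002} and \emph{lme4} \citep{Bates_etal2015}, which are also used for the simulations below; it continues the simulated example from Section \ref{sec:2.2}, omits the bias-correction, and replaces the GLL-based stopping rule by a simplified one.

\begin{verbatim}
library(randomForest)
library(lme4)

dat   <- data.frame(y = y, x1 = x1, x2 = x2, area = area)
v_hat <- rep(0, D)              # Step 1: initialize random effects at zero

for (b in 1:25) {               # Step 2, iterated until convergence
  ## (a) subtract the current random-effect estimates from y
  dat$y_star <- dat$y - v_hat[dat$area]
  ## (b) fit the forest on the adjusted responses
  rf_b <- randomForest(y_star ~ x1 + x2, data = dat, ntree = 500)
  ## (c) OOB-predictions of the fixed part f()
  dat$f_oob <- predict(rf_b)
  ## (d) LMM without intercept; f_oob enters as an offset, which
  ##     restricts its regression coefficient to 1 as in Step 2(d)
  lmm_b <- lmer(y ~ 0 + (1 | area), offset = f_oob, data = dat)
  ## (e) extract variance components and random effects
  sigma_e2 <- sigma(lmm_b)^2
  sigma_v2 <- as.numeric(VarCorr(lmm_b)$area)
  v_new    <- ranef(lmm_b)$area[, 1]
  ## simplified stopping rule (the paper tracks the GLL-criterion)
  if (max(abs(v_new - v_hat)) < 1e-5) { v_hat <- v_new; break }
  v_hat <- v_new
}
\end{verbatim}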
\subsection{Predicting small-area averages}\label{sec:2.3} The MERF-model (\ref{mod1}) predicts the conditional mean of a metric dependent variable at the individual level, given unit-level auxiliary information. In the context of SAE, we are not interested in predictions at the individual level, but in estimating indicators such as area-level means or area-level totals \citep{Rao_Molina2015}. Thus, we assume the same structural simplifications as the LMM proposed by \cite{Battese_etal1988} for estimating area-level means throughout the paper, i.e. $q=1$, $Z$ is the $n \times D$ design-matrix of area-intercept indicators, $v = [v_{1},...,v_{D}]'$ is the $D \times 1$ vector of random effects, and the variance-covariance matrix for the random effects simplifies to $H_i = \sigma_{v}^2$. Firstly, we use the fact that the random forest estimate of the fixed part $\hat{f}()$ expresses the conditional mean at unit-level. We calculate the mean-estimator for each area $i$ based on available supplementary data sources (usually census or administrative data) by: $$\bar{\hat{f}}(X_{i}) = \frac{1}{N_i} \sum_{j=1}^{N_i} \hat{f}(x_{ij}).$$ Secondly, we exploit the result (\ref{v_opt}) that $\hat{v}_i$ is the BLUP for the linear part of model (\ref{mod1}). Therefore, the proposed estimator for the area-level means $\mu = [\mu_1,..., \mu_D]'$ is given by: \begin{equation}\label{mu1} \hat{\mu}_i = \bar{\hat{f}}(X_i) + Z_i\hat{v}_i\enspace\enspace\text{for}\enspace\enspace i=1,...,D. \end{equation} For non-sampled areas, the proposed estimator for the area-level mean reduces to the fixed part from the random forest: $$\hat{\mu}_i = \bar{\hat{f}}(X_i).$$ \textcolor{black}{We briefly discuss the properties of our proposed estimator from Equation (\ref{mu1}). The structural component $Z_i\hat{v}_i$ captures dependency and correlation structures by random effects, and the expression $\bar{\hat{f}}(X_i)$ is the fixed effects predictor of the mean. For the special case where $\hat{f}()$ is assumed to be the linear model $\hat{f}(X) = X\hat{\beta}$, with regression parameters $\hat{\beta} = [\hat{\beta}_1,...,\hat{\beta}_p]'$, the estimator for area-level means resembles the EBLUP \citep{Battese_etal1988}. If $\hat{f}()$ is a random forest, we obtain area-specific mean-estimates of the fixed part from a highly flexible, data-driven and non-differentiable function. Two major tuning parameters affect the predictive performance of the random forest $\hat{f}()$, i.e. the number of trees and the number of split-candidates at each node controlling the degree of decorrelation. In contrast to existing parametric and non-parametric methods in SAE, our proposed estimator from Equation (\ref{mu1}) avoids problems of explicit model-selection. Random forests implicitly perform an optimized model-selection, including higher-order effects or non-linear interactions. Although flexible approaches such as P-Splines \citep{Opsomeretal2008} potentially determine non-linear relations in covariates, users have to specify model-variables and interactions a priori, resulting in a model-selection paradigm comparable to that of standard LMMs. An additional property of $\hat{f}()$ is its capability to deal with high-dimensional covariate data, i.e. cases where the number of covariates $p$ is larger than the sample size $n$. This property might be exploited in the context of applications to alternative Big Data sources \citep{Marchetti_etal2015,Schmid2017}.}
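In code, the estimator (\ref{mu1}) amounts to averaging the unit-level forest predictions over the census records of each area and adding the estimated random intercepts where available; the sketch below assumes a (hypothetical) census data frame \texttt{census} containing the model covariates and an area identifier, and continues the objects from the previous sketch.

\begin{verbatim}
## unit-level forest predictions for all census records
census$f_hat <- predict(rf_b, newdata = census)

## fixed part: area-wise means of the unit-level predictions
mu_hat <- tapply(census$f_hat, census$area, mean)

## add the estimated random intercepts for the sampled areas; for
## out-of-sample areas the fixed part alone is used (census$area is
## assumed to share the levels of dat$area)
in_sample <- levels(dat$area)
mu_hat[in_sample] <- mu_hat[in_sample] + v_hat
\end{verbatim}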
\section{Estimation of uncertainty}\label{sec:3} The assessment of the uncertainty of area-level indicators in SAE is crucial to analyse the quality of estimates. The area-level MSE is a conventional measure fulfilling this goal, but its calculation is a challenging task. For instance, for the unit-level LMM with block-diagonal covariance matrices \citep{Battese_etal1988}, the exact MSE cannot be analytically derived with estimated variance components \citep{Gonzalez_etal2008, Rao_Molina2015}, and only partly-analytical approximations are available \citep{Prasad_Rao1990,Datta_Lahiri2000}. An alternative for estimating the uncertainty of area-level indicators are bootstrap-schemes \citep{Hall_Maiti2006, Gonzalez_etal2008, Chambers_Chandra2013}. In contrast, general statistical results for inference with random forests are rare, especially in comparison to the existing theory of inference using LMMs. \textcolor{black}{Nevertheless, we provide a theoretical discussion on the estimation of MSEs for in-sample area-level means in the spirit of \citet{Prasad_Rao1990} based on \citet{Mendez2008}. Derivations can be found in the online supplementary materials. The resulting analytical approximation is considered to be a complement to contextualize the quality of our proposed bootstrap MSE-estimator for in-sample areas. We discuss performance details in the model-based simulation in Section \ref{sec:4}. An exact theoretical determination and a discussion of asymptotic properties are left to further research.} The theoretical background of random forests is growing, but mainly aims at quantifying the uncertainty of individual predictions \citep{Sexton_Laake2009, wager_etal2014, Wager_Athey2018, Athey_etal2019, Zhang2019}. The extension of recent theoretical results, such as conditions for the consistency of unit-level predictions \citep{Scornet_etal2015} or their asymptotic normality \citep{Wager_Athey2018}, towards area-level indicators is a promising strategy. In this paper, we propose a non-parametric random effect block (REB) bootstrap for estimating the MSE of the introduced area-level estimator given by Equation (\ref{mu1}). We aim to capture the dependence-structure of the data as well as the uncertainty introduced by the estimation of model (\ref{mod1}). Our bootstrap-scheme builds on the non-parametric bootstrap introduced by \cite{Chambers_Chandra2013}. The proposed REB bootstrap has two major advantages: firstly, the empirical residuals depend only on the correct specification of the mean behaviour function $f()$ of the model; thus the REB setting is lenient towards specification errors regarding the covariance structure of the model. Secondly, the bootstrap within blocks ensures that the variability of the residuals within each area is captured. We scale and centre the empirical residuals by the bias-corrected residual variance (\ref{biasadj}) in order to eliminate the uncertainty due to the estimation of $\hat{f}()$ from the naive residuals. The steps of the proposed bootstrap are as follows: \begin{enumerate} \item For given $\hat{f}()$ calculate the $n_i\times 1$ vector of marginal residuals $\hat{e}_i = y_i -\hat{f}(X_i)$ and define $\hat{e} = [\hat{e}_1',...,\hat{e}_D']'$. \item Using the marginal residuals $\hat{e}$, compute level-2 residuals for each area by $$\bar{r}_{i} = \frac{1}{n_i} \sum_{j=1}^{n_i} \hat{e}_{ij}\enspace\enspace\text{for}\enspace\enspace i=1,...,D,$$ where $\bar{r} = [\bar{r}_1,...,\bar{r}_D]'$ indicates the $D\times 1$ vector of level-2 residuals. \item To replicate the hierarchical structure, we use the marginal residuals and obtain the $n_i\times 1$ vector of level-1 residuals by $\hat{r}_{i} = \hat{e}_{i} - 1_{n_i}\bar{r}_i$.
The residuals $\hat{r} = [\hat{r}_1',...,\hat{r}_D']'$ are scaled to the bias-corrected variance $\hat{\sigma}_{bc,\epsilon}^2$ (\ref{biasadj}) and centred, denoted by $\hat{r}^{c} = [\hat{r}^{c'}_{1},...,\hat{r}^{c'}_{D}]'$. The level-2 residuals $\bar{r}_i$ are also scaled to the estimated variance $\hat{H}_i=\hat{\sigma}_{v}^2$ and centred, denoted by $\bar{r}^{c} = [\bar{r}^{c}_1,...,\bar{r}^{c}_D]'$. \item For $b=1,...,B$: \begin{enumerate} \item Sample independently with replacement from the scaled and centred level-1 and level-2 residuals: \begin{eqnarray} \nonumber r_{i}^{(b)}=\text{srswr}(\hat{r}^c_{i},n_i)\enspace\enspace \text{and}\enspace\enspace \bar{r}^{(b)}=\text{srswr}(\bar{r}^c,D). \end{eqnarray} \item Calculate the bootstrap population as $y^{(b)} = \hat{f}(X) +Z \bar{r}^{(b)}+ r^{(b)}$ and calculate the true bootstrap population area-level means $\mu_i^{(b)}$ as $\frac{1}{N_i} \sum_{j=1}^{N_i} y_{ij}^{(b)}$ for all $i = 1,...,D$. \item For each bootstrap population draw a bootstrap sample with the same $n_i$ as in the original sample. Use the bootstrap sample to obtain the estimates $\hat{f}^{(b)}()$ and $\hat{v}^{(b)}$ as discussed in Section \ref{sec:2.2}. \item Calculate area-level means following Section \ref{sec:2.3} by $$\hat{\mu}^{(b)}_{i} = \bar{\hat{f}}^{(b)}(X_i) + Z_i\hat{v}_i^{(b)}.$$ \end{enumerate} \item Using the $B$ bootstrap samples, the MSE-estimator is obtained as follows: $$\widehat{MSE}_i = B^{-1} \sum_{b=1}^{B} \left(\mu_i^{(b)}-\hat{\mu}^{(b)}_{i}\right)^2.$$ \end{enumerate}
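A condensed R sketch of the scheme follows; the helper functions \texttt{draw\_sample()} and \texttt{fit\_merf()} are hypothetical placeholders for the sampling step and the fitting algorithm of Section \ref{sec:2.2}, \texttt{r1c} and \texttt{r2c} denote the scaled and centred level-1 and level-2 residuals, and the census data frame is assumed to be sorted by area with all areas sampled, as in the simulated example.

\begin{verbatim}
B      <- 200
sq_err <- matrix(NA_real_, nrow = B, ncol = D)

for (b in 1:B) {
  ## 4(a) resample level-2 and level-1 residuals with replacement
  v_b <- sample(r2c, D, replace = TRUE)
  e_b <- unlist(lapply(levels(census$area), function(i)
    sample(r1c[dat$area == i], sum(census$area == i), replace = TRUE)))
  ## 4(b) bootstrap population and its true area-level means
  y_b  <- census$f_hat + v_b[census$area] + e_b
  mu_b <- tapply(y_b, census$area, mean)
  ## 4(c)-(d) draw a sample with the original n_i and refit the model
  samp_b   <- draw_sample(census, y_b)   # hypothetical helper
  mu_hat_b <- fit_merf(samp_b)           # hypothetical wrapper
  sq_err[b, ] <- (mu_b - mu_hat_b)^2
}

## Step 5: bootstrap MSE-estimate per area
mse_hat <- colMeans(sq_err)
\end{verbatim}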
\section{Model-based simulation}\label{sec:4} This section marks the first step in the empirical assessment of the proposed method. The model-based simulation juxtaposes point estimates for the area-level mean from the mixed effects random forest model (\ref{mod1}) with several competitors. In particular, we study the performance of MERFs compared to the BHF-model \citep{Battese_etal1988}, the EBP \citep{Molina_rao2010}\textcolor{black}{, the EBP under the data-driven Box-Cox transformation (EBP-BC) by \citet{Rojas_etal2019}, as well as the non-parametric EBLUP with P-Splines (P-SPLINES) by \citet{Opsomeretal2008}}. The BHF-model serves as an established baseline for the estimation of area-level means, and the EBP and the EBP-BC conceptually build on the BHF-model. \textcolor{black}{The non-parametric EBLUP by \citet{Opsomeretal2008} incorporates the advantages of flexible, non-linear smoothing methods into the existing theory for SAE based on LMMs.} Differences in the performance of the EBP and the EBP-BC highlight the advantages of data-driven transformations, while differences in the performance of the linear competitors and the \textcolor{black}{more flexible alternatives (MERF, P-SPLINES) indicate advantages of semi-parametric and non-linear modelling}. Overall, we aim to show that our proposed methodology for point and uncertainty estimates performs comparably well to `traditional' SAE-methods and has comparative advantages in terms of robustness against model-failure. The simulation-setting is characterized by a finite population $U$ of size $N=50000$ with $D=50$ disjoint areas $U_1,...,U_D$ of equal size $N_i = 1000$. We generate samples under stratified random sampling, utilizing the $50$ small areas as strata, resulting in a sample size of $n = \sum_{i=1}^{D} n_i = 1229$. The area-specific sample sizes range from $6$ to $49$ sampled units, with a median of $21$ and a mean of $25$. The sample sizes are comparable to the area-level sample sizes in the application in Section \ref{sec:5} and can thus be considered realistic. We consider four scenarios, denoted \textit{Normal}, \textit{Interaction}, \textit{Normal-Par} and \textit{Interaction-Par}, and repeat each scenario independently $M=500$ times. The comparison of the competing model-estimates under these four scenarios allows us to examine the performance under two major dimensions of model-misspecification: firstly, the presence of skewed data induced by non-normal error-terms, and secondly, the presence of unknown non-linear interactions between covariates. Scenario \textit{Normal} provides a baseline under LMMs with normally distributed random effects and unit-level errors. As the model-assumptions for LMMs are fully met, we aim to show that MERFs perform comparably well to the linear competitors in this reference scenario. Scenario \textit{Interaction} shares its error-structure with \textit{Normal}, but involves a complex model including quadratic terms and interactions. This scenario portrays the advantages of semi-parametric and non-linear modelling methods in protecting against model-failure. Working with inequality or income data, we often deal with skewed target variables. Thus, we use the Pareto distribution to mimic realistic income scenarios. Scenario \textit{Normal-Par} uses the linear additive structure of LMMs and adds Pareto-distributed unit-level errors. The resulting scenario, including a skewed target variable, is a classical example promoting the use of transformations to assure that the assumptions of LMMs are met. Finally, scenario \textit{Interaction-Par} combines the two discussed dimensions of model-misspecification, i.e. a non-Gaussian error-structure with complex interactions between covariates. We choose this scenario to emphasize the ability of MERFs to handle both complications simultaneously. Further details on the data-generating process for each scenario are provided in Table \ref{tab:MB1}. \begin{table}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Model-based simulation scenarios} \resizebox{\textwidth}{!}{\begin{tabular}{llccccc} \toprule {Scenario} & {Model} & {$x_1$} & {$x_2$} & {$\mu$} & {$v$} & {$\epsilon$} \\ \midrule Normal & $ y = 5000-500x_1-500x_2+v+\epsilon$ & $N(\mu,3^2)$ & $N(\mu,3^2)$ & $unif(-1,1)$ & $N(0,500^2)$ & $N(0,1000^2)$ \\ Interaction & $ y = 15000-500x_1x_2-250x_2^2+v+\epsilon $ & $N(\mu,4^2)$ & $N(\mu,2^2)$ & $unif(-1,1)$ & $N(0,500^2)$ &$N(0,1000^2)$ \\ Normal-Par & $ y = 5000-500x_1-500x_2+v+\epsilon $ & $N(\mu,3^2)$ & $N(\mu,3^2)$ & $unif(-1,1)$ & $N(0,500^2)$ & $Par(3,800)$ \\ Interaction-Par & $ y = 20000 - 500x_1x_2 - 250x_2^2+ v + \epsilon $ & $N(\mu,2^2)$ & $N(\mu,2^2)$ & $unif(-1,1)$ & $N(0,1000^2)$ & $Par(3,800)$ \\ \bottomrule \end{tabular}} \label{tab:MB1} \end{table} We evaluate the point estimates for the area-level means by the relative bias (RB) and the relative root mean squared error (RRMSE).
As quality-criteria for the evaluation of the MSE-estimates, we choose the relative bias of the RMSE (RB-RMSE) and the relative root mean squared error of the RMSE (RRMSE-RMSE): \begin{align} \nonumber RB_i &= \frac{1}{M} \sum_{m=1}^{M} \left(\frac{\hat{\mu}^{(m)}_i - \mu^{(m)}_i}{\mu^{(m)}_i}\right)\\\nonumber \textcolor{black}{RRMSE_i} &= \textcolor{black}{\frac{\sqrt{\frac{1}{M} \sum_{m=1}^{M} \left(\hat{\mu}^{(m)}_i - \mu^{(m)}_i\right)^2}}{\frac{1}{M}\sum_{m=1}^{M}\mu^{(m)}_i}}\\\nonumber RB\text{-}RMSE_i &=\frac{\sqrt{\frac{1}{M} \sum_{m=1}^{M} MSE^{(m)}_{est_i}} - RMSE_{emp_i}}{RMSE_{emp_i}}\\\nonumber RRMSE\text{-}RMSE_i &= \frac{\sqrt{\frac{1}{M} \sum_{m=1}^{M} \left(\sqrt{MSE^{(m)}_{est_i}} - RMSE_{emp_i}\right)^2}}{RMSE_{emp_i}}, \end{align} where $\hat{\mu}^{(m)}_i$ is the estimated mean in area $i$ based on any of the methods mentioned above and $\mu^{(m)}_i$ denotes the true mean for area $i$ in simulation round $m$. $MSE_{est_i}^{(m)}$ is estimated by the proposed bootstrap in Section \ref{sec:3}, and $RMSE_{emp_i} = \sqrt{\frac{1}{M} \sum_{m=1}^{M}(\hat{\mu}^{(m)}_i -\mu^{(m)}_i)^2}$ is the empirical root MSE over the $M$ replications.
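For reference, these criteria are straightforward to compute from the simulation output; the sketch below computes the point-estimate criteria from (hypothetical) $M \times D$ matrices \texttt{mu\_hat} and \texttt{mu\_true} of estimated and true area-level means.

\begin{verbatim}
## mu_hat, mu_true: M x D matrices of estimated and true area means
M <- nrow(mu_hat)

## relative bias per area, in percent
RB <- 100 * colMeans((mu_hat - mu_true) / mu_true)

## relative root mean squared error per area, in percent
RRMSE <- 100 * sqrt(colMeans((mu_hat - mu_true)^2)) / colMeans(mu_true)

summary(RB)
summary(RRMSE)
\end{verbatim}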
For the computational realization of the model-based simulation, we use R \citep{R_language}. The BHF estimates are obtained from the \emph{sae}-package \citep{Molina_Marhuenda:2015}, and the \emph{emdi}-package \citep{Kreutzmann_etal2019} is used for the EBP as well as for the EBP under the data-driven Box-Cox transformation. \textcolor{black}{We implement the P-SPLINE method with the package \emph{mgcv} \citep{Wood_2017}.} For estimating the proposed MERF-approach, we use the packages \emph{randomForest} \citep{Liaw_Wiener2002} and \emph{lme4} \citep{Bates_etal2015}. We monitor the convergence of the algorithm introduced in Section \ref{sec:2.2} with a precision of $10^{-5}$ in the relative difference of the GLL-criterion and set the number of split-candidates to $1$, keeping the default of $500$ trees for each forest. We start with a focus on the performance of the point estimates. Figure \ref{fig:MBpoint} reports the empirical RMSE of each method under the four scenarios. As expected, in the \textit{Normal} scenario, the BHF and the EBP perform on the same level and outperform the MERF estimator. The EBP with a data-driven transformation (EBP-BC) \textcolor{black}{and the non-parametric EBLUP (P-SPLINES)} lead to similar results compared to the BHF and the EBP. This shows that the data-driven transformation \textcolor{black}{and the penalized smoothing approach} work as expected. A similar pattern appears in the results from the \textit{Normal-Par} scenario, except that the EBP-BC reaches a lower overall RMSE due to its data-driven transformation and the resulting improved estimation under skewed data. As anticipated, a comparison of the performance of the MERF between the \textit{Normal} and the \textit{Normal-Par} scenario indicates that the MERF exhibits robust behaviour under skewed data and thus under violations of the normal distribution of errors. \textcolor{black}{The LMM-based competitors match the data-generating process of the fixed effects and perform accordingly, as already observed under the \textit{Normal} scenario.} For the complex scenarios, i.e. \textit{Interaction} and \textit{Interaction-Par}, the point estimates of the proposed MERF outperform the SAE-methods based on LMMs. The EBP-BC performs better in terms of lower RMSE values compared to the BHF and the EBP in both interaction scenarios. \textcolor{black}{The flexible approach of P-SPLINES outperforms the BHF, the EBP and the data-driven EBP-BC. However, MERFs automatically identify interactions and non-linear relations, such as the quadratic term in scenario \textit{Interaction-Par}, which leads to a clear comparative advantage in terms of RMSE.} Overall, the results from Figure \ref{fig:MBpoint} indicate that the MERF performs comparably well to LMMs in simple scenarios and outperforms `traditional' SAE-models in the presence of unknown non-linear relations between covariates. Additionally, the robustness of MERFs against model-misspecification holds if the distributional assumptions for LMMs are not met, i.e. in the presence of non-normally distributed errors and skewed data. Table \ref{tab:MBpoint} reports the corresponding values of RB and RRMSE for the discussed point estimates. The RB and the RRMSE of the MERF-method remain at a competitively low level for all scenarios. Most interestingly, in the complex scenarios (\textit{Interaction} and \textit{Interaction-Par}), a familiar result regarding the statistical properties of random forests appears: the RB is higher compared to the LMM-based models, but the increased RB is rewarded by a lower RRMSE of the point estimates. \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=1\linewidth]{Figures/Results_MB_point} \caption{Empirical RMSE comparison of point estimates for area-level averages under four scenarios} \label{fig:MBpoint} \end{figure} \begin{table}[!h] \footnotesize \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Mean and median of RB and RRMSE over areas for point estimates in four scenarios} \begin{tabular}{@{\extracolsep{5pt}} lrcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & &\multicolumn{2}{c}{\textit{Normal}} &\multicolumn{2}{c}{\textit{Interaction}}&\multicolumn{2}{c}{\textit{Normal-Par}}&\multicolumn{2}{c}{\textit{Interaction-Par}} \\ \hline \\[-1.8ex] & & Median & Mean & Median & Mean & Median & Mean & Median & Mean \\ \hline \\[-1.8ex] \multicolumn{9}{l}{RB[\%]}\\ \hline \\[-1.8ex] &BHF & $0.087$ & $0.131$ & $-0.202$ & $0.106$ & $0.193$ & $0.220$ & $0.043$ & $0.142$ \\ &EBP & $0.069$ & $0.128$ & $-0.060$ & $0.108$ & $0.216$ & $0.217$ & $0.105$ & $0.142$ \\ &EBP-BC & $0.152$ & $0.184$ & $0.156$ & $0.381$ & $0.174$ & $0.129$ & $0.139$ & $0.262$ \\ &P-SPLINES & $0.096$ & $0.137$ & $-0.064$ & $0.123$ & $0.199$ & $0.227$ & $0.051$ & $0.090$ \\ &MERF & $0.137$ & $0.191$ & $0.279$ & $0.312$ & $0.409$ & $0.460$ & $0.151$ & $0.188$ \\ \hline \\[-1.8ex] \multicolumn{9}{l}{RRMSE[\%]}\\ \hline \\[-1.8ex] &BHF & $3.830$ & $4.090$ & $3.770$ & $3.870$ & $3.600$ & $4.100$ & $2.800$ & $2.950$ \\ &EBP & $3.850$ & $4.100$ & $3.750$ & $3.870$ & $3.600$ & $4.120$ & $2.830$ & $2.950$ \\ &EBP-BC & $3.850$ & $4.100$ & $3.680$ & $3.800$ & $3.430$ & $3.710$ & $2.650$ & $2.770$ \\ &P-SPLINES & $3.840$ & $4.090$ & $3.580$ & $3.620$ & $3.590$ & $4.100$ & $2.380$ & $2.490$ \\ &MERF & $4.070$ & $4.380$ & $2.270$ & $2.330$ & $3.890$ & $4.380$ & $1.420$ & $1.530$ \\ \hline \\[-1.8ex] \end{tabular} \label{tab:MBpoint} \end{table} We scrutinize the performance of our proposed MSE-estimator in the four scenarios, examining whether the robustness against model-misspecification observed for the point estimates, under unknown complex interactions between covariates or skewed data, also holds for our non-parametric bootstrap-scheme.
For each scenario and each simulation round, we set the number of bootstrap replications to $B = 200$. From the comparison of the RB-RMSE among the four scenarios provided in Table \ref{tab:MBmse}, we infer that the proposed non-parametric bootstrap procedure effectively handles scenarios that lead to model-misspecification in the case of (untransformed) LMMs. This is demonstrated by the essentially unbiased mean and median values of the RB-RMSE over areas under all four scenarios, independently of whether the data-generating process is characterized by complex interactions (\textit{Interaction}), non-normal error terms (\textit{Normal-Par}), or a combination of both problems (\textit{Interaction-Par}). \textcolor{black}{We compare the performance of our bootstrap estimator to an estimator resulting from an analytical discussion of uncertainty in the spirit of \citet{Prasad_Rao1990}, which can be found in the online supplementary materials. The analytical approximation generally underestimates the MSE, except for the \textit{Interaction} scenario, which substantiates the quality of the proposed bootstrap estimator. A detailed graphical comparison of the RB-RMSE between the non-parametric bootstrap and the analytical MSE-estimator is provided by Figure \ref{fig:MBappendix} in Appendix B.} \begin{table}[!h] \footnotesize \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Performance of bootstrap and analytical MSE-estimators in the model-based simulation: mean and median of RB-RMSE and RRMSE-RMSE over areas} \begin{tabular}{@{\extracolsep{5pt}} lrcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & &\multicolumn{2}{c}{\textit{Normal}} &\multicolumn{2}{c}{\textit{Interaction}}&\multicolumn{2}{c}{\textit{Normal-Par}}&\multicolumn{2}{c}{\textit{Interaction-Par}} \\ \hline \\[-1.8ex] & & Median & Mean & Median & Mean & Median & Mean & Median & Mean \\ \hline \\[-1.8ex] \multicolumn{9}{l}{RB-RMSE[\%]}\\ \hline \\[-1.8ex] &Bootstrap & $0.319$ & $-0.084$ & $0.127$ & $0.548$ & $0.340$ & $0.724$ & $-0.802$ & $0.123$ \\ &Analytic & $-5.700$ & $-5.010$ & $0.707$ & $0.261$ & $-4.020$ & $-4.480$ & $-7.500$ & $-7.000$ \\ \hline \\[-1.8ex] \multicolumn{9}{l}{RRMSE-RMSE[\%]}\\ \hline \\[-1.8ex] &Bootstrap & $12.500$ & $12.500$ & $22.200$ & $22.800$ & $43.100$ & $48.200$ & $41.000$ & $44.700$ \\ &Analytic & $6.130$ & $5.930$ & $10.400$ & $12.200$ & $21.300$ & $21.400$ & $33.600$ & $33.500$ \\ \hline \\[-1.8ex] \end{tabular} \label{tab:MBmse} \end{table} \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=1\linewidth]{Figures/tracking_fig} \caption{Empirical, bootstrapped and analytical area-level RMSEs for four scenarios} \label{fig:trackMSE} \end{figure} From the results of Table \ref{tab:MBmse} and the subsequent discussion, we cannot directly infer the area-wise tracking properties of the estimated RMSE against the empirical RMSE over the $500$ simulation rounds. Thus, Figure \ref{fig:trackMSE} provides additional intuition on the quality of our proposed non-parametric bootstrap MSE-estimator.
Given the tracking properties in all four scenarios, we conclude that our bootstrap MSE-estimates correspond closely to the empirical RMSE \textcolor{black}{and appear to track the domain-specific empirical RMSE more precisely than the estimates of our analytical MSE-estimator from the theoretical discussion in the online supplementary materials.} Furthermore, we do not observe systematic differences between the bootstrapped and empirical MSE-estimates with respect to different survey-sample sizes.

\section{Application: Estimating household income for Mexican municipalities}\label{sec:5} In this section, we discuss the performance of our proposed method in the context of a genuine SAE example. Concretely, we apply the MERF-method proposed in Section \ref{sec:2.2} to estimate domain-level average household income for the Mexican state Nuevo León. Section \ref{sec:5.1} describes the data and Section \ref{sec:5.2} reports the results. We end our empirical analysis with an additional design-based simulation, enabling a thorough discussion of the quality of point- and MSE-estimates in Section \ref{sec:5.3}.

\subsection{Data description}\label{sec:5.1} Income inequality in Mexico is a research topic of lasting importance, particularly regarding the effectiveness of developmental and social policies \citep{lambert_hyunmin2019}. Although the level of income inequality in Mexico is comparable to that of other Latin American countries, it is one of the highest among OECD countries \citep{Oecd_21}. Analysing average national income values is common practice, but such values constitute an inappropriate measure for monitoring the efficacy of regional policy measures. Besides detailed disaggregated information, suitable statistical methods are needed to quantify local developments. In the following application, we break down regional differences in average household per capita income for one of the 32 Mexican states. Nuevo León is located in the North-East of Mexico and, according to the (sub-national) Human Development Index (HDI), it is one of the most developed states of Mexico \citep{smits_Permanyer2019}. Nevertheless, the distribution of individual household income in Nuevo León is unequal and thus highly skewed. For instance, the Gini-coefficient of household income is comparable to the total Gini of Mexican household disposable income from 2012, which was $0.46$ \citep{Oecd_21}. We use data from 2010 provided by CONEVAL (Consejo Nacional de Evaluación de la Política de Desarrollo Social), combining the Mexican household income and expenditure survey (Encuesta Nacional de Ingreso y Gastos de los Hogares, ENIGH) with a sample of census microdata by the National Institute of Statistics and Geography (Instituto Nacional de Estadística y Geografía). The dataset comprises income and socio-demographic data, measured by variables that are available in both the survey and the census data. The target variable for the estimation of domain-level average household income in Section \ref{sec:5.2} is the total household per capita income (\textit{ictpc}, measured in pesos), which is available in the survey but not in the census. Nuevo León is divided into $51$ municipalities. While the census dataset in our example comprises information on $54848$ households from all $51$ municipalities, the survey data includes information on $1435$ households from $21$ municipalities, ranging from a minimum of $5$ to a maximum of $342$ households, with a median of $27$ households. This leaves $30$ administrative divisions out-of-sample.
Table \ref{tab:Apdetails} provides details on the sample and census data properties. \begin{table}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Summary statistics on in- and out-of-sample areas: area-specific sample sizes of census and survey data} \begin{tabular}{@{\extracolsep{5pt}} lcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] &\multicolumn{2}{c}{Total}&\multicolumn{2}{c}{In-sample}&\multicolumn{2}{c}{Out-of-sample}\\ &\multicolumn{2}{c}{51} & \multicolumn{2}{c}{21} & \multicolumn{2}{c}{30} \\ \hline \hline \\[-1.8ex] & Min. & 1st Qu. & Median & Mean & 3rd Qu. & Max. \\ \hline Survey area sizes & 5.00 & 14.00 & 27.00 & 68.33 & 79.00 & 342.00 \\ Census area sizes & 76.00 & 454.50 & 642.00 & 1075.45 & 872.50 & 5904.00 \\ \hline \end{tabular} \label{tab:Apdetails} \end{table} With respect to the design-based simulation in Section \ref{sec:5.3}, we emphasize that we are in the fortunate position of having a variable that is highly correlated with the target variable \textit{ictpc} of the application and that is available in both the survey and the census dataset: the variable \textit{inglabpc} measures earned per capita income from work. Although \textit{inglabpc} deviates from the desired income definition, as it covers only one aspect of total household income, it is effective for evaluating our method in the design-based simulation in Section \ref{sec:5.3}. Furthermore, the design-based simulation implicitly assesses the quality of our empirical results from Section \ref{sec:5.2}. Using data from Nuevo León for the estimation of domain-level income averages is an illustrative and realistic example that imposes several challenges on the proposed method of MERFs: first of all, about $24$ percent of the households in the survey-sample are located in the capital Monterrey. Secondly, there exist more out-of-sample domains than in-sample domains. Moreover, we are confronted with households reporting zero incomes. Our intention in choosing this challenging example for the application in Section \ref{sec:5.2} and the following design-based simulation in Section \ref{sec:5.3} is simple: we aim to show that our proposed approaches for point and uncertainty estimates constitute a valid alternative to existing SAE-methods and are applicable predominantly in cases where `traditional' methods perform poorly or even fail. Additionally, we aim to provide a clear-cut presentation and empirical assessment of MERFs for SAE, which requires a transparent discussion of advantages and potential weaknesses in demanding real-world examples.

\subsection{Results and discussion} \label{sec:5.2} Direct estimates of the average total household per capita income for Nuevo León are possible for 21 out of 51 domains. The use of model-based SAE-methods incorporating covariate census data will not only lead to estimates for the remaining out-of-sample areas, but also improve the overall quality of the estimates \citep{Tzavidis_etal2018}. As the variable \textit{ictpc} is highly skewed, we anticipate potential issues of model-misspecification and suggest the use of the EBP-BC and the MERF. Given the theoretical discussion and the results of the model-based simulation in Section \ref{sec:4}, we infer that the EBP-BC and the proposed method of MERFs for SAE effectively handle non-normally distributed data. Moreover, we are particularly interested in differences between these two diverse SAE-models in the context of real-world applications.
\textcolor{black}{We use the design-based simulation in Section \ref{sec:5.3} to extend our methodological discussion towards all methods discussed in the model-based simulation in Section \ref{sec:4}.} Figure \ref{fig:mapincome} maps the results from the direct estimates, the MERF and the EBP-BC. Obviously, the model-based estimates from the MERF and the EBP-BC expand the perspective of regional disparities in average total household income per capita towards non-sampled regions. Furthermore, we identify three distinct clusters of income levels from our results in Figure \ref{fig:mapincome} that had not been observable from the mapping of the direct estimates: a low-income cluster in the South of Nuevo León, a very high-income cluster in the metropolitan area of the capital Monterrey, and a group of middle-income areas between the North and the South of the state. This finding illustrates the potential of model-based techniques to highlight patterns of regional income disparities and to enable the mapping of reliable empirical evidence. Given the information provided by the three maps, we do not report major differences between the point estimates of the MERF and the EBP-BC. \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=1\linewidth]{Figures/map_estim} \caption{Estimated average total household per capita income \textit{ictpc} for the state Nuevo León based on three different estimation methods} \label{fig:mapincome} \end{figure} Apart from mapping the empirical results of domain averages, we are mainly interested in quality criteria, such as the coefficient of variation (CV), and in the details of the model-specification for the EBP-BC and the MERF. To obtain variance estimates for the direct estimates, we use the calibrated bootstrap method \citep{Alfons_Templ2013} as provided in the R package \emph{emdi} \citep{Kreutzmann_etal2019}. \textcolor{black}{For the MSE-estimates of the MERF, we rely on the non-parametric bootstrap from Section \ref{sec:3}}. For the model of the data-driven EBP-BC, we follow the approach of \citet{Rojas_etal2019} and use the Bayesian Information Criterion (BIC) to identify valid predictors for the target variable \textit{ictpc}. The resulting working-model includes variables determining the occupation, the sources of income, the socio-economic level and educational aspects of individual households. The identification of predictive covariates for MERFs highlights a conceptual difference to LMM-based methods. Due to the properties of the random forest algorithm \citep{Breiman2001}, random forests perform an implicit variable selection. The selected model for the fixed effects in our case is characterized by an R-squared of about $0.47$. The trade-off between the predictive precision and the interpretability of random forest models can be mitigated by concepts such as variable importance plots \citep{Greenwell_etal2020} (Figure \ref{fig:Vipappendix}) or an analysis of partial dependence for influential covariates \citep{Greenwell_2017} (Figure \ref{fig:Pdpappendix}). \textcolor{black}{Variable importance is reported as the mean increase in the individual mean squared prediction error (\%IncMSE) resulting from permuting the values of the corresponding variable. Partial dependence plots depict the estimated marginal effect of a particular variable on the predictions of the target variable. From the inspection of Figure \ref{fig:Pdpappendix}, we can infer whether the relationships between \textit{ictpc} and the predictive variables are monotonic or more complex.
Figure \ref{fig:Vipappendix} reveals that the most important variable for the random forest model is the average relative amount of schooling (escol\_rel\_hog), followed by the availability of goods in the household (bienes) as well as the average years of schooling of persons (jaesc). Table \ref{tab:appExplain} in Appendix B provides explanations of further variables. The most influential variables are related to education, work experience, employment, and household assets. Figure \ref{fig:Pdpappendix} indicates rather complex and non-linear relationships between \textit{ictpc} and its predictive covariates, except for two variables related to the number of income earners in the household (pcpering, pcpering\_2).}
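Diagnostics of this kind can be generated along the following lines; the sketch assumes a forest \texttt{rf\_fit} grown on the survey data (here a hypothetical data frame \texttt{survey\_data}) with \texttt{importance = TRUE}, and uses the \emph{pdp} package \citep{Greenwell_2017} for the partial dependence of a single covariate.

\begin{verbatim}
library(randomForest)
library(pdp)

## permutation-based variable importance (%IncMSE); requires that
## the forest was grown with importance = TRUE
varImpPlot(rf_fit, type = 1, n.var = 10)

## partial dependence of the predictions on a single covariate,
## e.g. the schooling variable from the application
pd <- partial(rf_fit, pred.var = "escol_rel_hog", train = survey_data)
plotPartial(pd)
\end{verbatim}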
We monitor the convergence of the proposed MERF algorithm with a precision of $10^{-5}$ in the relative difference of the GLL-criterion and keep the default of $500$ trees. A parameter optimization based on 5-fold cross-validation on the original survey-sample advises the use of $3$ variables at each split of the forest. For the MSE-bootstrap procedure, we use $B=200$. Figure \ref{fig:detailCV} reports the corresponding CVs for in- and out-of-sample domains. We observe a substantial improvement of the in-sample CVs of the EBP-BC and the MERF compared to the CVs of the direct estimates. The CVs of the MERF are slightly lower in median terms than the results for the EBP-BC. However, there exists one outlying area for the MERF. Going into detail, the corresponding area of General Zaragoza features no obvious data-specific irregularities, such as an extremely low sample size. Nevertheless, General Zaragoza is one of the poorest regions according to our analysis. In the design-based simulation in Section \ref{sec:5.3}, we will pay special attention to differences between the MERF and the EBP-BC regarding their ability to handle comparably extreme estimates given a broad range of relatively high- and relatively low-income areas. Regarding the CVs for the out-of-sample areas, we discover an evident advantage for the CVs of our proposed MERF-approach. From the scrutiny of individual CV values it remains unclear whether the improved results of the MERF are rooted in superior point estimates for domain-level averages or in its relatively lower MSE-estimates. Figure \ref{fig:pointEstim} compares the direct estimates to the model-based estimates for in- and out-of-sample domains. Apparently, there exist no systematic differences between the estimates from the EBP-BC and the MERF, although the variance of the MERF predictions appears to be generally lower. This conjecture is in line with the theoretical properties of random forests \citep{Breiman2001, Biau_Scornet2016}. The in-sample areas in Figure \ref{fig:pointEstim} are sorted by survey-sample sizes. In comparison to the direct estimates, the predicted averages of the EBP-BC as well as of the MERF appear less extreme. The obvious irregularity in terms of high income is a distinct part of the Monterrey metropolitan area: San Pedro Garza García registers several headquarters of national and international corporations. This economic peculiarity apparently transfers to the income of its residents. Figure \ref{fig:mapincome} underlines the existence of an apparent high-income cluster in this region. Overall, it is interesting to observe how reliable estimates at a fine spatial resolution unveil patterns of regional income segregation. Our proposed method of MERFs provides useful results with remarkably higher accuracy than the direct estimates and the EBP-BC for most of the out-of-sample domains. The following design-based simulation will strengthen the reliability of these results and enable an in-depth discussion of our methods for point- and MSE-estimates. \begin{figure}[!htb] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=0.93\linewidth]{Figures/vip_plot} \caption{Variable importance as mean decrease in accuracy (\%IncMSE) for the ten most influential variables} \label{fig:Vipappendix} \end{figure} \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=0.95\linewidth]{Figures/pdp_plot} \caption{Partial dependence plots for variables ranked by \%IncMSE} \label{fig:Pdpappendix} \end{figure} \begin{figure}[!htb] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=0.95\linewidth]{Figures/CV_details} \caption{Domain-specific CVs for target variable \textit{ictpc} for in- and out-of-sample domains} \label{fig:detailCV} \end{figure} \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=0.95\linewidth]{Figures/point_estim} \caption{Detailed comparison of point estimates for the domain-level average total household income. The dotted line separates sampled from non-sampled areas. In-sample areas are sorted by decreasing survey-sample size} \label{fig:pointEstim} \end{figure} \FloatBarrier

\subsection{Evaluation using design-based simulation}\label{sec:5.3} The design-based simulation allows us to directly juxtapose the performance of the proposed MERF-approach with existing SAE-methods for the estimation of area-level means based on empirical data. In this sense, the design-based simulation not only adds insights to the results from the model-based simulation in Section \ref{sec:4}, but also evaluates the results from the example in the previous Section \ref{sec:5.2}. We focus on area-level mean-estimates of household income from work (\textit{inglabpc}) in the Mexican state Nuevo León. As we use the same data with a different target variable, the sample and census data properties are similar to the previous example, with details provided in Table \ref{tab:Apdetails}. Implementing the design-based simulation, we draw $T=500$ independent samples from the fixed population of our census dataset. Each pseudo-survey-sample mirrors the characteristics of the original survey, as we keep the number of in-sample households similar to the original sample sizes and abstain from sampling out-of-sample municipalities. As a result, we use $500$ equally structured pseudo-survey-samples with equal overall sample size. True values are defined as the domain-level averages of household income from work in the original census. We consider the same methods as in the model-based simulation in Section \ref{sec:4}. As in Section \ref{sec:5.2}, we use the same working-model for the BHF, the EBP, \textcolor{black}{the EBP-BC and P-SPLINES and} assume it to be fixed throughout the design-based simulation. For the EBP-BC and the MERF, we keep the parameters as discussed in Section \ref{sec:5.2}. \begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm} \includegraphics[width=1\linewidth]{Figures/RMSE_DB} \caption{Performance of area-specific point estimates including details on in- and out-of-sample areas.
Comparison of empirical RMSEs from the design-based simulation for the target variable \textit{inglabpc}} \label{fig:DBpoint} \end{figure} \begin{table}[!h] \centering \captionsetup{justification=centering,margin=1.5cm} \caption{Mean and median of RB and RRMSE over in- and out-of-sample areas for point estimates} \begin{tabular}{@{\extracolsep{5pt}} lrcccccc} \\[-1.8ex]\hline \hline \\[-1.8ex] & &\multicolumn{2}{c}{Total} &\multicolumn{2}{c}{In-sample}&\multicolumn{2}{c}{Out-of-sample} \\ \hline \\[-1.8ex] & & Median & Mean & Median & Mean & Median & Mean \\ \hline \\[-1.8ex] \multicolumn{7}{l}{RB[\%]}\\ \hline \\[-1.8ex] &BHF & $14.200$ & $17.800$ & $5.070$ & $11.200$ & $21.800$ & $22.400$ \\ &EBP & $14.700$ & $17.900$ & $5.310$ & $11.700$ & $22.600$ & $22.300$ \\ &EBP-BC & $9.650$ & $18.400$ & $6.140$ & $7.150$ & $15.800$ & $26.300$ \\ &P-SPLINES & $8.120$ & $18.200$ & $0.014$ & $14.200$ & $13.500$ & $21.000$ \\ &MERF & $7.720$ & $18.600$ & $3.490$ & $17.200$ & $10.600$ & $19.600$ \\ \hline \\[-1.8ex] \multicolumn{7}{l}{RRMSE[\%]}\\ \hline \\[-1.8ex] &BHF & $14.900$ & $21.600$ & $9.520$ & $17.000$ & $23.000$ & $24.800$ \\ &EBP & $15.900$ & $21.700$ & $9.480$ & $17.400$ & $22.900$ & $24.700$ \\ &EBP-BC & $14.000$ & $23.900$ & $12.900$ & $15.200$ & $16.100$ & $29.900$ \\ &P-SPLINES & $11.600$ & $22.800$ & $7.360$ & $20.000$ & $15.000$ & $24.700$ \\ &MERF & $9.430$ & $21.200$ & $6.130$ & $19.900$ & $12.500$ & $22.100$ \\ \hline \\[-1.8ex] \end{tabular} \label{tab:DBpoint} \end{table} The discussion of the results starts with an investigation into the performance of the point estimates. Figure \ref{fig:DBpoint} reports the average RMSE of the area-level mean-estimates for Nuevo León in total and with details on the $21$ in-sample and $30$ out-of-sample areas. The corresponding summary statistics for Figure \ref{fig:DBpoint} are given in Table \ref{tab:MBappendix} in Appendix B. Regarding the total of $51$ areas, we observe no remarkable difference in the performance of the BHF and the EBP, whereas the EBP-BC has a lower RMSE on average. \textcolor{black}{P-SPLINES outperform the BHF, the EBP and the EBP-BC in mean and median terms of RMSE.} The MERF point estimates indicate the lowest RMSEs among all areas, resulting in a more than $22$ percent improvement compared to the BHF. Referring to the RMSE for the in-sample areas, we see two different ways in which the EBP-BC and \textcolor{black}{the adaptive methods of P-SPLINES and MERF} deal with the high and unbalanced variation in the true values for certain areas, which range from $475$ to about $4004$ pesos: overall, the MERF performs best in modelling the complex survey data and produces highly accurate estimates for the majority of the in-sample areas. A closer look at the results reveals, however, that higher RMSE values due to overestimation mainly occur in two areas, both characterised by a very low level of income ($622$ and $544$ pesos, respectively). \textcolor{black}{A similar observation can be made for P-SPLINES, although the MERF appears to reproduce the predictive relations more efficiently.} In contrast, we observe the in-sample behaviour of the EBP-BC, which clearly opposes its superior overall performance. The EBP-BC appears to balance extreme estimates by producing slightly worse estimates for each individual in-sample area rather than allowing individually inferior estimates for specific `outlier'-areas. This behaviour is conceptually rooted in its data-driven transformation-approach.
Nevertheless, this property enables the EBP-BC to identify a model providing stable and precise estimates for the majority of areas, especially the $30$ non-sampled areas. Given the data-scenario of Nuevo León, the performance on the out-of-sample areas delineates each method's quality and stability. In this case, the \textcolor{black}{EBP-BC and the non-parametric approaches of P-SPLINES and MERF} outperform the `traditional' methods (BHF and EBP) in terms of lower RMSE. \textcolor{black}{The median RMSE of P-SPLINES aligns with the values of the EBP-BC, although the RMSEs of P-SPLINES are lower in mean terms.} One distinct advantage of the MERF is its adaptability and implicit model-selection, which is rewarded in the presence of complex data-scenarios. The findings from Figure \ref{fig:DBpoint} are strengthened by a discussion of the mean and median values of RB and RRMSE in Table \ref{tab:DBpoint}. Referring to all $51$ areas, the RB of the data-driven EBP-BC\textcolor{black}{, P-SPLINES} and the MERF is smaller in median terms than the RB of the BHF and the EBP. In particular, the MERF shows comparatively low levels of median RB, while the mean values lie in the same range as those of the competing methods. The obvious difference between mean and median values indicates the previously discussed existence of inferior estimates for specific regions due to the empirical properties of our underlying data. For the $21$ in-sample areas, \textcolor{black}{P-SPLINES} perform superior to the competing methods regarding the median values of RB. The close relation between the mean and median values of RB for the EBP-BC highlights the mentioned balancing-property of the EBP-BC. For the majority of areas in the design-based simulation, i.e. the $30$ non-sampled areas, the EBP-BC\textcolor{black}{, P-SPLINES} as well as the MERF exhibit a comparatively low level of median RB. Especially the MERF stands out with the lowest values of median RRMSE compared to its competitors\textcolor{black}{, while the mean values for all cases are within a comparable range}. Finally, we focus on the performance of the proposed non-parametric MSE-bootstrap procedure. While the model-based simulation in Section \ref{sec:4} indicates unbiasedness of the proposed bootstrap-scheme under all four scenarios, our results from the design-based simulation require a deeper discussion. \textcolor{black}{We abstain from a discussion of the results from our analytical approximation to the area-level MSE because it is limited to in-sample areas and was solely used to contextualize the quality of our proposed bootstrap-scheme from Section \ref{sec:3}.} Table \ref{tab:DBmse} reports the results of the RB-RMSE and the RRMSE-RMSE for the corresponding estimates. Figure \ref{fig:DBappendix} in Appendix B visualizes details from Table \ref{tab:DBmse}. First of all, the values of the RRMSE-RMSE are comparable to those of the most complex scenario in the model-based simulation in Section \ref{sec:4}. The RB-RMSE for the in-sample areas indicates unbiasedness in median terms and an acceptable overestimation regarding the mean RB-RMSE. For the out-of-sample areas, we face a moderate underestimation regarding the median value and an overestimation according to the mean values. Nevertheless, Figure \ref{fig:DBappendix} in Appendix B reveals that the mixed signal between mean and median in Table \ref{tab:DBmse} is explained by a balanced mix of under- and overestimation.
Overall, given the challenging conditions of this design-based simulation, the MSE-bootstrap procedure meets our expectations. In particular, the results for in-sample areas, combined with the insights from the model-based simulation, indicate a solid and reliable performance of the proposed non-parametric bootstrap procedure. Although the RB-RMSE over all $51$ areas is driven by the results from the out-of-sample areas, the median RB-RMSE is acceptable. Apparently, the MSE-estimates mirror the high variation in sample sizes paired with the high and disproportionate variation of high-income and low-income regions between the $21$ in-sample and $30$ out-of-sample areas. From an applied perspective, the MSE-estimates for out-of-sample areas are nevertheless practicable for the construction of confidence intervals, with a median coverage of $0.97$.
\begin{table}[!h] \centering \captionsetup{justification=centering,margin=1.5cm}
\caption{Performance of MSE-estimator in design-based simulation: mean and median of RB and RRMSE over in- and out-of-sample areas}
\begin{tabular}{@{\extracolsep{5pt}} lcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex]
&\multicolumn{2}{c}{Total} &\multicolumn{2}{c}{In-sample}&\multicolumn{2}{c}{Out-of-sample} \\ \hline \\[-1.8ex]
& Median & Mean & Median & Mean & Median & Mean \\ \hline \\[-1.8ex]
RB-RMSE[\%] & $-1.260$ & $14.300$ & $0.719$ & $7.820$ & $-9.460$ & $18.800$ \\
RRMSE-RMSE[\%] & $48.100$ & $55.900$ & $41.400$ & $47.700$ & $50.900$ & $61.700$ \\ \hline \\[-1.8ex]
\end{tabular} \label{tab:DBmse} \end{table} \FloatBarrier
\section{Concluding remarks}\label{sec:6}
In this paper, we explore the potential of tree-based machine learning methods for the estimation of area-level means in small area estimation. In particular, we provide a solid framework easing the use of random forests for regression within the existing methodological framework of SAE. We highlight the potential of our approach to meet modern requirements of SAE, including the robustness of random forests against model failure and their applicability to high-dimensional problems processing Big Data sources. The methodological part focusses on the MERF-procedure \citep{Hajjem2014} and implicitly discusses a semi-parametric unit-level mixed model, treating LMM-based SAE-methods, such as the BHF and the EBP, as special cases. The model is fit by an algorithm resembling the EM-algorithm, allowing for flexibility in the specification of fixed effects as well as random effects. The proposed point estimator for area-level means is complemented by the non-parametric MSE-bootstrap scheme, building on the REB-bootstrap by \citet{Chambers_Chandra2013} and the bias-corrected estimate for the residual variance by \citet{Mendez_Lohr2011}. We evaluate the performance of point and MSE-estimates against `traditional' SAE-methods in model- and design-based simulations and provide a distinctive SAE example using income data from the Mexican state Nuevo León in Section \ref{sec:5.2}. The model-based simulation in Section \ref{sec:4} demonstrates that the point estimates perform comparably in classical scenarios and outperform `traditional' methods in the presence of non-linear interactions between covariates. The design-based simulation in Section \ref{sec:5.3} confirms the adequacy of MERFs for point estimation under demanding, realistic conditions. The model- and design-based simulations indicate that the proposed approach is robust against distributional violations of normality for the random effects and for the unit-level error terms.
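To make the fitting procedure summarized above concrete, the following sketch outlines the EM-style MERF iteration for the special case of a random intercept: a forest is fitted to the fixed part, the random effects are updated with the BLUP-type formula derived in Appendix A, and the variance components are updated in between. The moment-type variance updates shown here are a simplification for illustration and not the exact updates of \citet{Hajjem2014}.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def merf_fit(X, y, area, n_iter=20):
    # area: 1-d array of area labels; random-intercept case only
    areas = np.unique(area)
    v = {d: 0.0 for d in areas}          # random intercepts v_d
    s2_e, s2_v = 1.0, 1.0                # error / random-effect variances
    for _ in range(n_iter):
        y_fixed = y - np.array([v[d] for d in area])
        rf = RandomForestRegressor(n_estimators=500).fit(X, y_fixed)
        resid = y - rf.predict(X)
        for d in areas:
            r_d = resid[area == d]
            n_d = r_d.size
            # BLUP shrinkage: v_d = n_d s2_v / (s2_e + n_d s2_v) * mean resid
            v[d] = n_d * s2_v / (s2_e + n_d * s2_v) * r_d.mean()
        eps = resid - np.array([v[d] for d in area])
        s2_e = np.mean(eps ** 2)                    # simplified updates
        s2_v = np.mean([v[d] ** 2 for d in areas])  # (illustrative only)
    return rf, v
\end{verbatim}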
Concerning our proposed MSE-bootstrap scheme, we conclude that it is reliable based on its performance in the model-based simulation in Section \ref{sec:4}. Furthermore, we obtain reasonable support for its performance from the application in Section \ref{sec:5.2} and the subsequent design-based simulation in Section \ref{sec:5.3}. We motivate three major dimensions for further research, including theoretical work, aspects of generalization and advanced applications using Big Data covariates. From a theoretical perspective, further research is needed to investigate the construction and theoretical discussion of a partial-analytical MSE for area-level means. A conducive strategy is an extension based on our theoretical discussion in the online supplementary materials. Additionally, extending recent theoretical results, such as conditions for the consistency of unit-level predictions \citep{Scornet_etal2015} or individual prediction intervals \citep{wager_etal2014, Zhang2019}, towards area-level indicators bears potential. Alternatively, considerations concerning a fully non-parametric formulation of model (\ref{mod1}) constitute an interesting research direction. From a survey-statistical perspective, our proposed method currently abstains from the use of survey weights, which bears a risk if the assumption of non-informative sampling is violated. Nevertheless, there exist approaches incorporating weights into random forests \citep{Winham_etal2013}. The transfer of such ideas to the proposed method of MERFs is subject to ongoing research. Regarding additional generalizations of the proposed method, we aim to extend the use of MERFs towards the estimation of small area quantiles and other non-linear indicators, such as Gini coefficients or Head Count Ratios. Furthermore, a generalization towards binary or count data is possible and left to further research. The semi-parametric composite formulation of model (\ref{mod1}) allows $f()$ to adopt any functional form for the estimation of the conditional mean of $y_i$ given $X_i$ and technically transfers to other machine learning methods, such as gradient-boosted trees or support vector machines. In terms of advanced applications, we propose the use of MERFs for complex random-effect and covariance structures in empirical problems to the SAE research community. Equally interesting is the use of high-dimensional supplementary data, i.e. Big Data covariates, for the estimation of area-level means, which can be handled directly by the proposed MERF-framework.
\section*{Acknowledgements}
The authors are grateful to CONEVAL for providing the data used in the empirical work. The views set out in this paper are those of the authors and do not reflect the official opinion of CONEVAL. The numerical results are not official estimates and are only produced for illustrating the methods. Additionally, the authors would like to thank the HPC Service of ZEDAT, Freie Universität Berlin, for computing time.
\section*{Appendix A}
After convergence of the algorithm introduced in Section \ref{sec:2.2}, we obtain an optimal non-parametric estimator $\hat{f}()$. In the following, we simplify the notation and refer to $\hat{f}^{OOB}()$ simply as $\hat{f}()$.
The best predictor of the random effects for known parameters $H_i$ and $R_i$ must minimize the generalized log-likelihood criterion:
$$GLL(f, v_i \mid y) = \sum_{i=1}^{D}\left\{ [y_i - f(X_i) - Z_i v_i]' R_i^{-1} [y_i - f(X_i) - Z_i v_i] + v_i' H_i^{-1} v_i + \log|H_i| + \log|R_i|\right\}.$$
Finding the minimizer for $v_i$ in the GLL is equivalent to minimizing, for each area $i$, the $v_i$-dependent part of the summand:
$$[y_i - \hat{f}(X_i) - Z_i v_i]' R_i^{-1} [y_i - \hat{f}(X_i) - Z_i v_i] + v_i' H_i^{-1} v_i.$$
Expanding the quadratic form and using the symmetry of $R_i^{-1}$ to combine transposed cross terms leads to:
\begin{flalign*}
&[y_i - \hat{f}(X_i) - Z_i v_i]' R_i^{-1}[y_i - \hat{f}(X_i) - Z_i v_i] + v_i' H_i^{-1} v_i \\
&\quad = y_i' R_i^{-1} y_i - 2\, y_i' R_i^{-1}\hat{f}(X_i) - 2\, y_i' R_i^{-1} Z_i v_i + \hat{f}(X_i)' R_i^{-1}\hat{f}(X_i) \\
&\qquad + 2\, \hat{f}(X_i)' R_i^{-1} Z_i v_i + (Z_i v_i)' R_i^{-1} (Z_i v_i) + v_i' H_i^{-1} v_i.
\end{flalign*}
Differentiating with respect to $v_i$ and setting the derivative to zero yields the minimizer:
\begin{flalign*}
-2\, Z_i' R_i^{-1}\left(y_i - \hat{f}(X_i)\right) + 2\, Z_i' R_i^{-1} Z_i v_i + 2\, H_i^{-1} v_i &= 0 &\Longleftrightarrow\\
Z_i' R_i^{-1}\left(y_i - \hat{f}(X_i)\right) &= \left(Z_i' R_i^{-1} Z_i + H_i^{-1}\right) v_i &\Longleftrightarrow\\
v_i &= \left(Z_i' R_i^{-1} Z_i + H_i^{-1}\right)^{-1} Z_i' R_i^{-1}\left(y_i - \hat{f}(X_i)\right).
\end{flalign*}
With $V_i = R_i + Z_i H_i Z_i'$, this solution can be rewritten as:
\begin{flalign*}
v_i &= (Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1} Z_i'R_i^{-1} V_i V_i^{-1}\left(y_i - \hat{f}(X_i)\right)\\
&= (Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1} Z_i'R_i^{-1} (R_i + Z_i H_i Z_i') V_i^{-1}\left(y_i - \hat{f}(X_i)\right)\\
&= (Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1} (Z_i' + Z_i'R_i^{-1}Z_i H_i Z_i') V_i^{-1}\left(y_i - \hat{f}(X_i)\right)\\
&= (Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1} (Z_i'R_i^{-1}Z_i + H_i^{-1}) H_i Z_i' V_i^{-1}\left(y_i - \hat{f}(X_i)\right)\\
&= H_i Z_i' V_i^{-1}\left(y_i - \hat{f}(X_i)\right).
\end{flalign*}
The solution of the minimization problem is thus given by $\hat{v}_i^{*} = H_i Z_i' V_i^{-1}(y_i - \hat{f}(X_i))$. Note that for $\hat{f}(X_i) = X_i\hat{\beta}$, this solution coincides with the BLUP.
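As a quick numerical sanity check of the final identity, the snippet below verifies that $(Z_i'R_i^{-1}Z_i + H_i^{-1})^{-1} Z_i'R_i^{-1}(y_i - \hat{f}(X_i))$ and $H_i Z_i'V_i^{-1}(y_i - \hat{f}(X_i))$ coincide for an arbitrary positive-definite choice of $H_i$ and $R_i$; the dimensions are chosen freely.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_d, q = 8, 2                              # units in area i, random effects
Z = rng.normal(size=(n_d, q))
R = np.diag(rng.uniform(0.5, 2.0, n_d))    # unit-level error covariance
H = 0.7 * np.eye(q)                        # random-effect covariance
V = R + Z @ H @ Z.T
r = rng.normal(size=n_d)                   # stands in for y_i - f_hat(X_i)

lhs = np.linalg.solve(Z.T @ np.linalg.solve(R, Z) + np.linalg.inv(H),
                      Z.T @ np.linalg.solve(R, r))
rhs = H @ Z.T @ np.linalg.solve(V, r)
assert np.allclose(lhs, rhs)               # identity from the derivation
\end{verbatim}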
\section*{Appendix B: additional simulation results and model-diagnostics}
\begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm}
\includegraphics[width=1\linewidth]{Figures/Details_RB_MSE_MB}
\caption{Details on the performance of the proposed bootstrap MSE-estimator and the analytic approximation in the model-based simulation: boxplots of the area-specific RB-RMSEs averaged over simulation runs}
\label{fig:MBappendix} \end{figure}
\begin{table}[ht] \centering \captionsetup{justification=centering,margin=1.5cm}
\caption{Explanation of most influential variables according to the random forest model in the application of Section \ref{sec:5.2}}
\begin{tabular}{@{\extracolsep{1pt}} lr} \\[-1.8ex]\hline \hline \\[-1.8ex]
Variable name & Explanation \\ \hline
ictpc & Total household income per capita \\
escol\_rel\_hog & Average relative amount of schooling standardized \\
& by age and sex of household members \\
bienes & Availability of goods in the household \\
jaesc & Average years of schooling of persons in the household \\
jnived & Formal education of the head of the household \\
actcom & Assets in the household \\
pcpering & Percentage of income earners in the household \\
jexp & Years of working experience of the head of the household \\
pcpering\_2 & Number of income earners in the household by household size \\
pcocup & Percentage of people employed in the household \\
jtocup & Occupation type \\ \hline
\end{tabular} \label{tab:appExplain} \end{table}
\begin{table}[ht] \centering \captionsetup{justification=centering,margin=1.5cm}
\caption{Performance of point estimates in design-based simulation: summary statistics of empirical RMSE for area-level mean-estimates}
\begin{tabular}{@{\extracolsep{5pt}} lrcccccccc} \\[-1.8ex]\hline \hline \\[-1.8ex]
Areas & Method & Min & 1st Qu. & Median & Mean & 3rd Qu. & Max \\ \hline
\multirow{5}{*}{Total} & BHF & 72.58 & 170.63 & 336.06 & 386.40 & 498.13 & 1351.82 \\
& EBP & 68.83 & 168.13 & 341.10 & 387.91 & 490.14 & 1342.92 \\
& EBP-BC & 65.22 & 225.84 & 331.64 & 376.49 & 460.86 & 1094.08 \\
& P-SPLINES & 72.70 & 142.91 & 290.84 & 337.11 & 462.94 & 969.15 \\
& MERF & 82.58 & 136.01 & 236.86 & 298.20 & 447.21 & 716.72 \\ \hline
\multirow{5}{*}{In-sample} & BHF & 111.93 & 139.13 & 246.62 & 301.90 & 349.26 & 978.49 \\
& EBP & 107.41 & 143.07 & 251.95 & 308.47 & 348.00 & 994.45 \\
& EBP-BC & 145.48 & 212.64 & 308.43 & 314.08 & 353.02 & 705.81 \\
& P-SPLINES & 86.69 & 142.70 & 224.82 & 285.37 & 421.29 & 707.71 \\
& MERF & 94.56 & 123.72 & 141.16 & 264.27 & 392.73 & 688.74 \\ \hline
\multirow{5}{*}{Out-of-sample} & BHF & 72.58 & 224.35 & 422.90 & 445.55 & 541.31 & 1351.82 \\
& EBP & 68.83 & 248.89 & 412.91 & 443.52 & 547.12 & 1342.92 \\
& EBP-BC & 65.22 & 234.98 & 348.83 & 420.17 & 494.83 & 1094.08 \\
& P-SPLINES & 72.70 & 172.29 & 351.17 & 373.34 & 465.36 & 969.15 \\
& MERF & 82.58 & 151.24 & 272.97 & 321.95 & 493.85 & 716.72 \\ \hline
\end{tabular} \label{tab:MBappendix} \end{table}
\begin{figure}[ht] \centering \captionsetup{justification=centering,margin=1.5cm}
\includegraphics[width=1\linewidth]{Figures/DB_RB_MSE}
\caption{Details on the performance of the proposed MSE-estimator in the design-based simulation: boxplots of the area-specific RB-RMSEs averaged over simulation runs including details on in- and out-of-sample areas}
\label{fig:DBappendix} \end{figure}
\clearpage \bibliographystyle{apacite}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Algorithms} \vspace{-20pt}
\begin{algorithm}[!h] \caption{Stage 1: Global GNN Encoder $\mathcal{N}_G$} \label{alg:gnn}
\begin{algorithmic}
\STATE {\bfseries Input:} obstacles $O$, start $v_s$, goal $v_g$, networks $g_x, g_y$, $K_x^{(i)}, Q_x^{(i)}, V_x^{(i)}, K_y^{(i)}, Q_y^{(i)}, V_y^{(i)}$ for the $i$-th embedding dimension.
\NoThen
\STATE Sample $n$ nodes $v_1, \cdots, v_n$ from the configuration space of the ego-arm robot
\STATE Initialize $G = \{V:\{v_s, v_g, v_1, \cdots, v_n\}, E:\text{k-NN}(V)\}$
\STATE Initialize encoding of vertices and edges \vspace{-5pt}
\begin{align*}
x_i&=g_x(v_i, v_g, v_i-v_g, ||v_i-v_g||^2_2), \forall v_i \in V\\
y_l&=g_y(v_i, v_j, v_j-v_i), \forall e_l:\langle v_i, v_j\rangle \in E
\end{align*} \vspace{-15pt}
\STATE Initialize obstacle encoding $\mathcal{O}_t=g_o(O_t)+TE(t), \forall t \in [0,\cdots, T]$ using Eq. \ref{eq:TE}
\STATE Encode obstacles into vertices and edges using Eq. \ref{eq:attention}
\STATE Message Passing using Eq. \ref{coreupdate}
\RETURN Encoding of edges $\{y_l\}$
\end{algorithmic} \end{algorithm} \vspace{-20pt}
\begin{algorithm}[!h] \caption{Stage 2: Local Planner $\mathcal{N}_P$} \label{alg:planner}
\begin{algorithmic}
\STATE {\bfseries Input:} graph $G=\langle V, E\rangle$, encoding of edges $y_l$, obstacle encoding $\mathcal{O}$, time window $w$, global GNN encoder $\mathcal{N}_G$, local planner $\mathcal{N}_P$, goal-reaching constant $\delta$.
\NoThen
\STATE Initialize $i=0, v_0=v_s, t_0=0, \pi=\{(v_0, t_0)\}, E_0=\{e:\langle v_s, v_k\rangle \in E, \forall v_k\in V\}$
\REPEAT
\STATE $\eta = \mathcal{N}_P(V,E_i,\mathcal{O}, \mathcal{N}_G, t_i)$
\STATE select $e_j=\arg\max_{e_l\in E_i} \eta_l$, where $e_j$ connects $\langle v_i,v_j\rangle$
\IF { $e_j$ is collision-free when starting to move from $t_i$}
\STATE $t_{i+1} = t_{i} + \Delta(v_i, v_j)$ \tcp*{$\Delta(v_i, v_j)$ is the travel time from $v_i$ to $v_j$}
\STATE $\pi_{i+1} \leftarrow \pi_{i} \cup\{(v_j,t_{i+1})\}$
\STATE $v_{i+1} \leftarrow v_{j}$
\STATE $E_{i+1}=\{e: \langle v_{i+1},v_k \rangle \in E, \forall v_k \in V\}$
\IF{$||v_{i+1}-v_g||_2^2 \leq \delta$} \RETURN $\pi$ \ENDIF
\STATE $i\leftarrow i+1$
\ELSE
\STATE $E_{i} = E_{i} \setminus \{e_j\}$
\ENDIF
\UNTIL{$E_i=\emptyset$}
\RETURN $\emptyset$
\end{algorithmic} \end{algorithm} \vspace{-15pt}
\begin{algorithm}[H] \caption{Dijkstra-H} \label{alg:dij-H}
\begin{algorithmic}
\STATE {\bfseries Input:} graph $G=\langle V, E\rangle$, start $v_s$, goal $v_g$, goal-reaching constant $\delta$.
\NoThen
\STATE Sample $n$ nodes $v_1, \cdots, v_n$ from the configuration space of the ego-arm robot.
\STATE Initialize $G = \{V:\{v_s, v_g, v_1, \cdots, v_n\}, E:\text{k-NN}(V)\}$
\STATE Calculate the shortest distance $d_{v_k}$ on the graph from $v_g$ to each node $v_k\in V$ using Dijkstra's algorithm.
\STATE Initialize $i=0, v_0=v_s, t_0=0, \pi=\{(v_0, t_0)\}, E_0=\{e:\langle v_s, v_k\rangle \in E, \forall v_k \in V\}$
\REPEAT
\STATE select $v_j=\arg\min_{\langle v_i, v_j\rangle\in E_i} d_{v_j}$
\IF {$\langle v_i, v_j\rangle$ is collision-free when starting to move from $t_i$}
\STATE $t_{i+1} = t_{i} + \Delta(v_i, v_j)$ \tcp*{$\Delta(v_i, v_j)$ is the travel time from $v_i$ to $v_j$}
\STATE $\pi_{i+1} \leftarrow \pi_{i} \cup\{(v_j,t_{i+1})\}$
\STATE $v_{i+1} \leftarrow v_{j}$
\STATE $E_{i+1}=\{e: \langle v_{i+1},v_k\rangle\in E, \forall v_k \in V\}$
\IF{$||v_{i+1}-v_g||_2^2 \leq \delta$} \RETURN $\pi$ \ENDIF
\STATE $i\leftarrow i+1$
\ELSE
\STATE $E_{i} = E_{i} \setminus \{\langle v_i, v_j\rangle\}$
\ENDIF
\UNTIL{$E_i=\emptyset$}
\RETURN $\emptyset$
\end{algorithmic} \end{algorithm}
\section{Network Architecture Details}
We provide the numbers of network parameters in Table \ref{tab:netdetail}. Please refer to Fig.~\ref{fig:gnn-arch} for the overall two-stage architecture of the proposed \textbf{GNN-TE}.
\begin{table}[H] \centering \caption{Network Architecture Details}
\begin{tabular}{c|c} \toprule[2pt] \textbf{Name} & \textbf{Model} \\ \hline
\multicolumn{2}{c}{\textbf{Stage 1 Global GNN Encoder}} \\ \hline
Node Encoder Net $g_x$ & MLP((config\_size+1)*4,32),MLP(32,32) \\ \hline
Edge Encoder Net $g_y$ & MLP((config\_size+1)*3,32),MLP(32,32) \\ \hline
Obstacle Encoder Net $g_o$ & MLP(obstacle\_size,32), MLP(32,32) \\ \hline
\multirow{3}[2]{*}{Attention Net} & Key Network $f_{K_{(\cdot)}^{(\cdot)}}$: MLP(32,32) \\ \cline{2-2}
& Query Network $f_{Q_{(\cdot)}^{(\cdot)}}$: MLP(32,32) \\ \cline{2-2}
& Value Network $f_{V_{(\cdot)}^{(\cdot)}}$: MLP(32,32) \\ \hline
Feedforward Net & MLP(32,32),MLP(32,32) \\ \hline
Node Message Passing $f_x$ & MLP(32*4,32),MLP(32,32) \\ \hline
Edge Message Passing $f_y$ & MLP(32*3,32),MLP(32,32) \\ \hline
\multicolumn{2}{c}{\textbf{Stage 2 Local Planner}} \\ \hline
\multirow{2}[2]{*}{Planner Net $f_P$} & MLP(32+obstacle\_size*window\_size, 64),MLP(64,32), \\ \cline{2-2}
& MLP(32,32), MLP(32,1) \\ \hline
\end{tabular}% \label{tab:netdetail}% \end{table}%
\section{Experiments}
\subsection{Hyperparameters}
We provide the hyperparameters in Table \ref{tab:hyper}.
\begin{table}[H] \centering \caption{Hyperparameters}
\begin{tabular}{c|c} \toprule[2pt] \textbf{Hyperparameters} & \textbf{Values} \\ \hline
$k$ for k-NN & 50 \\ \hline
Training Epochs before DAgger & 200 \\ \hline
Training Epochs for DAgger & 100 \\ \hline
Learning Rate & 1e-3 \\ \hline
Temporal Encoding Frequency $\omega$ & 10000 \\ \hline
$d_{TE}$ & 32 \\ \hline
Time Window $w$ & 2 \\ \hline
\end{tabular}% \label{tab:hyper}% \end{table}%
\subsection{Overall Performance}
We provide the detailed overall performance in Tables \ref{tab:suc}, \ref{tab:pr} and \ref{tab:cc}.
\begin{table}[H] \centering \caption{Success Rate (\%)} \begin{adjustbox}{width=1.2\columnwidth,center} \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[2pt] & & \textbf{2Arms} & \textbf{Kuka-4DoF} & \textbf{Kuka-5DoF} & \textbf{Kuka-7DoF} & \textbf{3Arms} & \textbf{Kuka3Arms} \\ \hline \multirow{2}[1]{*}{\textbf{SIPP}} & random & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 \\ \cline{2-8} & hard & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 \\ \hline \multirow{2}[1]{*}{\textbf{GNN-TE}} & random & \textbf{94.1±0.02} & \textbf{97.8±0.01} & 97.6±0.00 & \textbf{98.8±0.01} & \textbf{92.1±0.01} & \textbf{97.4±0.01} \\ \cline{2-8} & hard & \textbf{62.5±0.02} & \textbf{34.9±0.00} & \textbf{37.9±0.15} & \textbf{38.1±0.12} & \textbf{52.5±0.03} & 42.1±0.08 \\ \hline \multirow{2}[1]{*}{\textbf{GNN-TE w/o Dagger}} & random & \textbf{94.1±0.01} & 97.5±0.01 & \textbf{97.7±0.00} & \textbf{98.8±0.01} & 91.5±0.01 & 97.3±0.01 \\ \cline{2-8} & hard & 58.1±0.03 & 33.3±0.00 & 36.8±0.14 & 36.5±0.11 & 51.6±0.05 & \textbf{42.6±0.08} \\ \hline \multirow{2}[1]{*}{\textbf{Dijkstra-H}} & random & 89.7±0.03 & 96.3±0.01 & 96.2±0.01 & 97.7±0.01 & 85.9±0.01 & 93.9±0.01 \\ \cline{2-8} & hard & 0.00±0.00 & 0.00±0.00 & 0.00±0.00 & 0.00±0.00 & 0.00±0.00 & 0.00±0.00 \\ \hline \end{tabular}% \label{tab:suc}% \end{adjustbox} \end{table}% \begin{table}[H] \centering \caption{Path Time Ratio} \begin{adjustbox}{width=1.2\columnwidth,center} \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[2pt] & & \textbf{2Arms} & \textbf{Kuka-4DoF} & \textbf{Kuka-5DoF} & \textbf{Kuka-7DoF} & \textbf{3Arms} & \textbf{Kuka3Arms} \\ \hline \multirow{2}[1]{*}{\textbf{SIPP}} & random & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 \\ \cline{2-8} & hard & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 & 100±0.00 \\ \hline \multirow{2}[1]{*}{\textbf{GNN-TE}} & random & \textbf{107.55±2.33} & \textbf{132.71±4.46} & 171.39±12.94 & 172.83±8.77 & \textbf{120.76±7.89} & 186.18±23.96 \\ \cline{2-8} & hard & 123.31±7.71 & \textbf{189.54±5.41} & \textbf{183.33±48.26} & \textbf{167.65±47.7} & \textbf{134.54±9.76} & \textbf{152.52±22.31} \\ \hline \multirow{2}[1]{*}{\textbf{GNN-TE w/o Dagger}} & random & 109.67±5.46 & 148.1±13.59 & \textbf{161.40±16.50} & \textbf{170.05±9.11} & 130.48±11.74 & \textbf{181.74±18.98} \\ \cline{2-8} & hard & \textbf{121.41±8.85} & 250.26±82.5 & 232.67±99.79 & 193.12±42.4 & 154.85±23.81 & 164.67±40.65 \\ \hline \multirow{2}[1]{*}{\textbf{Dijkstra-H}} & random & 123.73±8.29 & 212.09±20.73 & 198.72±14.7 & 199.22±23.22 & 177.88±19.59 & 189.23±5.64\\ \cline{2-8} & hard & / & / & / & / & / & / \\ \hline \end{tabular}% \label{tab:pr}% \end{adjustbox} \end{table}% \begin{table}[H] \centering \caption{Collision Checking} \begin{adjustbox}{width=1.2\columnwidth,center} \begin{tabular}{c|c|c|c|c|c|c|c} \toprule[2pt] & & \textbf{2Arms} & \textbf{Kuka-4DoF} & \textbf{Kuka-5DoF} & \textbf{Kuka-7DoF} & \textbf{3Arms} & \textbf{Kuka3Arms} \\ \hline \multirow{2}[1]{*}{\textbf{SIPP}} & random & 60440.21±1543.21 & 171336.68±2061.60 & 196567.99±1152.81 & 268602.98±780.07 & 96713.81±3945.07 & 269033.61±1159.78 \\ \cline{2-8} & hard & 1080768.34±81176.44 & 145280.09±1448.0 & 182696.61±1271.86 & 257783.45±742.83 & 114337.00±3560.95 & 255173.7±2099.46 \\ \hline \multirow{2}[1]{*}{\textbf{GNN-TE}} & random & \textbf{17.24±8.45} & \textbf{37.36±4.74} & \textbf{56.89±5.41} & \textbf{61.99±8.08} & \textbf{33.08±7.95} & \textbf{72.63±13.13} \\ \cline{2-8} & hard & \textbf{47.31±7.72} & \textbf{155.7±96.49} & 
\textbf{108.25±48.29} & \textbf{110.65±39.63} & \textbf{65.93±14.84} & \textbf{90.42±25.61} \\ \hline
\multirow{2}[1]{*}{\textbf{GNN-TE w/o Dagger}} & random & 21.13±11.71 & 42.00±4.60 & 61.79±10.39 & 69.39±10.67 & 47.48±14.40 & 73.33±11.57 \\ \cline{2-8}
& hard & 47.91±8.85 & 160.43±70.64 & 166.61±86.29 & 164.41±88.98 & 98.67±37.69 & 98.41±32.15 \\ \hline
\multirow{2}[1]{*}{\textbf{Dijkstra-H}} & random & 56.21±23.56 & 236.50±17.60 & 229.35±21.66 & 236.95±34.93 & 103.74±19.57 & 237.93±12.83 \\ \cline{2-8}
& hard & / & / & / & / & / & / \\ \hline
\end{tabular}% \label{tab:cc}% \end{adjustbox} \end{table}%
\subsection{Optional Backtracking Search}\label{AppendixBT}
We provide the results of \textbf{GNN-TE} and \textbf{Dijkstra-H} with backtracking (top-5) in the 2Arms environment in Table \ref{backtracking-with-dij}. Our method outperforms the heuristic method both with and without the backtracking search.
\begin{table}[h!] \centering \caption{The performance of backtracking search in the 2Arms environment}
\begin{adjustbox}{center}
\begin{tabular}{c|c|c||c|c||c|c} \toprule[2pt]
& & \textbf{SIPP} & \textbf{Dijkstra-H} & \textbf{GNN-TE}& \textbf{Dijkstra-H w. BT} & \textbf{GNN-TE w. BT} \\ \hline
\multirow{1}[4]{*}{{\textbf{Success Rate}}} & random & 100\% & 89.70\% & \textbf{94.10}\% & 94.10\% & \textbf{98.00\%}\\ \cline{2-7}
& hard & 100\% & 0\% & \textbf{62.50}\% & 50.70\% & \textbf{89.30\%} \\ \hline
\multirow{1}[4]{*}{\textbf{Path Time Ratio}} & random & 100\% & 123.61\% & \textbf{107.65}\% & 123.61\% & \textbf{107.65\%} \\ \cline{2-7}
& hard & 100\% & / & \textbf{128.22}\% & 276.25\% & \textbf{128.22\%} \\ \hline
\multirow{1}[4]{*}{\textbf{Collision Checks}} & random & 60K & 55.88 & \textbf{17.17} & 55.88 & \textbf{17.17} \\ \cline{2-7}
& hard & 1081K & / & \textbf{52.68} & 1161.29 & \textbf{52.68} \\ \hline
\end{tabular}% \label{backtracking-with-dij}% \end{adjustbox} \end{table}%
We also provide the success rates of \textbf{GNN-TE} with backtracking in all the environments in Table \ref{tab:backtracking-all}. As the DoF and the complexity of the configuration space increase, the search space grows and requires more backtracking steps. Thus the increase in success rate from backtracking may not be as significant as in the simple settings if we keep the number of backtracking steps the same. However, GNN-TE still shows a significant advantage over Dijkstra-H even with backtracking in all the settings.
\begin{table}[H] \centering \caption{Success rates of GNN-TE and Dijkstra-H with backtracking search}
\begin{adjustbox}{center}
\begin{tabular}{c|c|c|c|c|c|c|c} \toprule[2pt]
& & \textbf{2Arms} & \textbf{Kuka-4DoF} & \textbf{Kuka-5DoF} & \textbf{Kuka-7DoF} & \textbf{3Arms} & \textbf{Kuka3Arms} \\ \hline
\multirow{4}[1]{*}{\textbf{random}} & \textbf{Dijkstra-H} & 89.7±0.03 & 96.3±0.01 & 96.2±0.01 & 97.7±0.01 & 85.9±0.01 & 93.9±0.01 \\ \cline{2-8}
& \textbf{GNN-TE} & \textbf{94.1±0.02} & \textbf{97.8±0.01} & \textbf{97.6±0.00} & \textbf{98.8±0.01} & \textbf{92.1±0.01} & \textbf{97.4±0.01} \\ \cline{2-8}
& \textbf{Dijkstra-H w. BT} & 94.1±0.01 & 96.7±0.02 & 96.2±0.01 & 97.8±0.01 & 92.4±0.01 & 94.2±0.01 \\ \cline{2-8}
& \textbf{GNN-TE w. BT} & \textbf{98.0±0.01} & \textbf{97.8±0.01} & \textbf{97.7±0.00} & \textbf{98.9±0.01} & \textbf{97.1±0.01} & \textbf{97.4±0.00} \\ \hline \hline
\multirow{4}[1]{*}{\textbf{hard}} & \textbf{Dijkstra-H} & 0.0±0.0 & 0.0±0.0 & 0.0±0.0 & 0.0±0.0 & 0.0±0.0 & 0.0±0.0 \\ \cline{2-8}
& \textbf{GNN-TE} & \textbf{62.5±0.02} & \textbf{34.9±0.00} & \textbf{37.9±0.15} & \textbf{38.1±0.12} & \textbf{52.5±0.03} & \textbf{42.1±0.08} \\ \cline{2-8}
& \textbf{Dijkstra-H w. BT} & 50.7±0.06 & 10.1±0.01 & 5.8±0.40 & 2.8±0.24 & 45.8±0.05 & 2.7±0.01 \\ \cline{2-8}
& \textbf{GNN-TE w. BT} & \textbf{89.3±0.03} & \textbf{36.4±0.00} & \textbf{40.8±0.15} & \textbf{39.4±0.12} & \textbf{82.6±0.02} & \textbf{44.3±0.01} \\ \hline
\end{tabular}% \label{tab:backtracking-all}% \end{adjustbox} \end{table}%
\subsection{Comparison with End-to-End RL}\label{AppendixRL}
We compare our approach with RL-based approaches, specifically \textbf{DQN-GNN} and \textbf{PPO-GNN}. Both algorithms encode the graph with the same GNN as ours in stage 1. \textbf{DQN-GNN}, similar to our local planner, learns a network that evaluates the Q value of each subsequent edge as a priority value. \textbf{PPO-GNN} learns a policy network that outputs the next configuration, which we project onto the nearest vertex of the graph encoded by the GNN. We define the reward as $-10$ for a collision, $10$ for reaching the goal, and the distance displacement for non-collision configurations. In the general RL setting, we do not expect algorithms to generalize across problems; as a general graph encoder, however, the GNN can generalize between graphs. Based on this, we train \textbf{DQN-GNN} and \textbf{PPO-GNN} across problems and test their performance. On the training set, \textbf{DQN-GNN} achieves a $54.5\%$ success rate while \textbf{PPO-GNN} only achieves $21.1\%$. We also provide results on randomly generated test cases and hard cases in Table \ref{table:RL}. We observe that \textbf{GNN-TE} significantly outperforms all the RL approaches. Moreover, the advantageous inductive bias of the GNN for discrete decision-making problems explains why \textbf{DQN-GNN} performs better than \textbf{PPO-GNN} on both randomly sampled cases and hard cases. Nevertheless, neither \textbf{DQN-GNN} nor \textbf{PPO-GNN} can efficiently find plans, especially in hard cases. This is because RL-based approaches have trouble finding a feasible path without demonstrations from the oracle and must rely on rewards alone in challenging problems. \vspace{-10pt}
\begin{table}[!ht] \centering \caption{Table for RL Approaches in 2Arms Environment}
\begin{tabular}{c|c|c|c|c} \toprule[2pt]
& & \textbf{GNN-TE} & \textbf{DQN-GNN} & \textbf{PPO-GNN} \\ \hline
\multirow{2}[1]{*}{\textbf{Success Rate}} & random & \textbf{94.10\%} & 62.40\% & 9.80\% \\ \cline{2-5}
& hard & \textbf{62.50\%} & 2.00\% & 0.70\% \\ \hline
\multirow{2}[1]{*}{\textbf{Path Time Ratio}} & random & \textbf{103.55\%} & 105.47\% & 119.73\% \\ \cline{2-5}
& hard & \textbf{102.43\%} & 109.76\% & 134.15\% \\ \hline
\multirow{2}[1]{*}{\textbf{Collision Checking}} & random & \textbf{4.98} & \textbf{4.68} & 5.43 \\ \cline{2-5}
& hard & \textbf{6.00} & \textbf{6.00} & 7.00 \\ \hline
\end{tabular}% \label{table:RL}% \end{table}%
\subsection{Comparison with OracleNet-D}\label{AppendixOracle}
We compare \textbf{GNN-TE} with the learning-based approach \textbf{OracleNet-D}, obtained by modifying \textbf{OracleNet} \cite{OracleNet} to the dynamic setting.
Concretely, we concatenate the trajectories of the obstacles to the input in every roll-out of \textbf{OracleNet} to inform the network of the dynamic environment \footnote{We use the original code from the repository https://github.com/mayurj747/oraclenet-analysis}. We provide the results in the 2Arms environment in Table \ref{tab:Oracle}. (For a fair comparison, we present the result of GNN-TE without DAgger; collision checking is not reported because OracleNet-D generates and rolls out the path iteratively without checking collisions.)
\begin{table}[H] \centering \caption{Table for GNN-TE and OracleNet-D in 2Arms Environment}
\begin{tabular}{c|c|c|c|c|c} \toprule[2pt]
& & \textbf{SIPP} & \textbf{Dijkstra-H} & \textbf{GNN-TE} & \textbf{OracleNet-D} \\ \hline
\multirow{2}[1]{*}{\textbf{Success Rate}} & random & 100\% & 89.70\% & \textbf{94.10\%} & 53.90\% \\ \cline{2-6}
& hard & 100\% & 0.00\% & \textbf{58.10\%} & 10.80\% \\ \hline
\multirow{2}[1]{*}{\textbf{Avg Path Time Ratio}} & random & 100\% & 120.61\% & \textbf{113.94\%} & 1130.76\% \\ \cline{2-6}
& hard & 100\% & / & \textbf{118.92\%} & 813.00\% \\ \hline
\end{tabular}% \label{tab:Oracle}% \end{table}%
We observe that the performance of \textbf{OracleNet-D} falls behind \textbf{GNN-TE} on both the success rate and the average path time ratio. This result shows that encoding environmental information is important for a planner in a dynamic environment. As mentioned in \cite{OracleNet}, the configuration of the robot and the environmental information form different distributions, which makes the mapping and motion planning challenging. We believe a GNN with the attention mechanism and temporal encoding provides a good solution to this problem. Also, GNN-TE benefits from the second-stage local planner, which takes local temporal obstacle information into consideration.
\subsection{Ablation Study on Varying Training Set Sizes}
\begin{figure}[H] \centering \begin{adjustbox}{width=0.7\textwidth,center=\textwidth} \includegraphics[]{figs/tradeoff.png} \end{adjustbox}
\caption{Results on varying training set sizes in the 2Arms environment. We observe that \textbf{GNN-TE} benefits from increasing the number of training problems, both in a better success rate and in less collision checking. \textbf{Left:} A scatter plot visualizing the relation between the success rate and the collision checking with respect to the training size. The number on each point indicates the training size. \textbf{Right:} An equivalent plot that clearly shows the performance boost gained from a larger training size. A higher success rate (blue curve) and lower collision checking (orange curve) are favored.}
\label{fig:tradeoff} \end{figure}
We train \textbf{GNN-TE} on varying numbers of training problems (specifically 100, 200, 300, 400, 500, 1000, 1500, 2000, 2500, 3000) and test on the same randomly sampled and hard problems in the 2Arms environment. We observe that \textbf{GNN-TE} benefits from increasing the number of training problems, both in a better success rate and in less collision checking. From the plot in the right column of the figure, we observe that the trends are approximately logarithmic, suggesting that the performance saturates as the training set covers the problem distribution.
\subsection{Ablation Study on Basic GNN}
In Table \ref{tab:basicGNN}, we provide the overall performance gain of all the components of \textbf{GNN-TE} over a basic GNN (\textbf{GNN-basic}) in the 2Arms environment. Specifically, in the first stage, \textbf{GNN-basic} removes the attention mechanism and temporal encoding.
In the second stage, \textbf{GNN-basic} only inputs the obstacle encoding at the current time step.
\begin{table}[H] \centering \caption{Overall performance gain over basic GNN}
\begin{tabular}{c|c|c|c|c|c} \toprule[2pt]
& & \textbf{SIPP} & \textbf{Dijkstra-H} & \textbf{GNN-TE} & \textbf{GNN-basic} \\ \hline
\multirow{2}[1]{*}{\textbf{Success Rate}} & random & 100\% & 89.70\% & \textbf{94.10\%} & 92.70\% \\ \cline{2-6}
& hard & 100\% & 0.00\% & \textbf{62.50\%} & 32.00\% \\ \hline
\multirow{2}[1]{*}{\textbf{Avg Path Time Ratio}} & random & 100\% & 123.73\% & \textbf{107.78\%} & 112.42\% \\ \cline{2-6}
& hard & 100\% & / & \textbf{122.13\%} & 185.92\% \\ \hline
\multirow{2}[1]{*}{\textbf{Avg Collision Checking}} & random & 60K & 56.21 & \textbf{17.44} & 28.80 \\ \cline{2-6}
& hard & 1081K & / & \textbf{45.23} & 109.70 \\ \hline
\end{tabular}% \label{tab:basicGNN}% \end{table}%
\subsection{Failure Modes in 2Arms Environment}\label{failmode}
We provide visualizations of \textbf{GNN-TE} failing to find feasible solutions in the 2Arms environment. We find there are mainly two failure modes: the planner fails to make a detour (Fig. \ref{fig:fm1}) or gets too close to the moving obstacles (Fig. \ref{fig:fm2}). In Fig. \ref{fig:fm1}, we can observe that \textbf{GNN-TE} plans to head directly to the goal while the feasible path requires a detour around the obstacle. In Fig. \ref{fig:fm2}, \textbf{GNN-TE} follows the correct direction but fails by getting too close to the obstacle arm.
\begin{figure}[H] \centering \begin{adjustbox}{width=0.6\textwidth,center=\textwidth} \includegraphics[]{figs/FailureMode1.png} \end{adjustbox}
\caption{Failure mode: the planner fails to make a detour. Our planner controls the arm in black and white.}
\label{fig:fm1} \end{figure}
\begin{figure}[H] \centering \begin{adjustbox}{width=0.6\textwidth,center=\textwidth} \includegraphics[]{figs/FailureMode2.png} \end{adjustbox}
\caption{Failure mode: the planner gets too close to the obstacle. Our planner controls the arm in black and white. Though the planner follows the correct direction, it gets too close to the obstacle arm, which leads to a collision.}
\label{fig:fm2} \end{figure}
\section{Limitations and Future Work}
\subsection{Discussion on Using GNNs and the Attention Mechanism}
Motion planning has been a longstanding challenge in robotics, especially in dynamic environments. We take a learning-based approach leveraging Graph Neural Networks to tackle this problem efficiently. GNNs show great capability in capturing geometric information and are invariant to permutations of the sampled graph. Another challenge in dynamic environments is that the difference between the distributions of the robot configurations and the environmental information makes the mapping and motion planning difficult. Our approach tackles this by introducing the attention mechanism with temporal encoding to learn the correlation between the temporal positions of the obstacles and the ego-arm configurations on the graph. This is effective because, for a given configuration node on the graph, the obstacles' positions at some time steps are more important than at others, since the obstacles are more likely to collide with the ego-arm at those time steps; the obstacle positions at those time steps should therefore be given more weight in the model. Moreover, the attention mechanism can take time sequences of variable length as inputs.
Despite its empirical efficiency, the performance of the GNN-based approach is still bounded by the sampled configurations and can only be improved with a sufficient number of nodes on the graph, especially in complex environments and for robots with many degrees of freedom. It remains an open problem how many samples are sufficient for the GNN to capture the geometric patterns of the configuration space.
\subsection{Limited Performance on Hard Problems}
Although \textbf{GNN-TE} can achieve a better success rate than other learning-based approaches, it is still not complete and has limited performance on hard problems (see Section \ref{failmode} for examples of failure modes). One direction for addressing this is hard-example mining: when we train \textbf{GNN-TE} on extra hard examples, the success rate rises from 62.5\% to 71.3\% in the 2Arms environment. However, in general, we believe the safety and reliability of learning-enabled systems remain a core issue that needs to be addressed once learning-based approaches show clear benefits. For motion planning, a potential future direction is to integrate our learning-based component with monitoring. Such monitoring identifies hard graph structures that are out-of-distribution for the neural network components. It ensures that the learning-based components are only used when the planning can be safely accelerated, in which case they provide great benefits in reducing collision checking and overall computation. When hard or out-of-distribution cases occur, the planner should fall back to more complete algorithms such as \textbf{SIPP}. There has also been much ongoing development in frameworks for ensuring the safe use of learning-based components in planning and control, which we believe is orthogonal to our current work. For example, \cite{brunke2022safe} reviews learning-based control and RL approaches that prioritize the safety of the robot's behavior.
\subsection{Trade-off Between Quality and Efficiency}
Another observation from the results is the trade-off between quality (success rate of finding paths) and efficiency (number of collision checks). In this work, we further add backtracking, where we keep a stack of policy edges with the top-$n$ priority values and allow the algorithm to take sub-optimal choices if it fails. Backtracking therefore increases collision checking in the hope of finding a solution. Although adding this or other systematic search algorithms can improve quality at the cost of efficiency, we think the actual bottleneck might still be the priority values produced by the model as the heuristic. We believe this trade-off is a crucial topic in learning-based dynamic motion planning and needs future investigation.
\subsection{Problem Distribution and Generalization}
As most learning-based approaches would assume, our model needs to be trained on the same actor and obstacle arms as it is tested on. Both the sampled graph and the expert trajectory are implicitly conditioned on the kinematic structure. This assumption aligns with the most immediate use of learning-based components: reducing repeated planning computation in a relatively fixed setting of arm configurations.
We believe that learning planning models which generalize to arbitrary arms and obstacles is still challenging for the community, as it requires an in-depth study of issues that are not yet fully understood, such as the inherent generalization properties of graph neural networks. As shown in \cite{garg2020generalization}, there still exists a trade-off between expressivity and generalization in GNNs. We leave this topic to future work.
\section{More Snapshots in Different Environments}
In this section, we show more snapshots of SIPP, GNN-TE, and Dijkstra-H in different environments. In these environments and cases, \textbf{GNN-TE} succeeds in finding a near-optimal path while \textbf{Dijkstra-H} fails.
\end{appendices}
\begin{figure}[H] \centering \begin{adjustbox}{width=0.9\textwidth,center=\textwidth} \includegraphics[]{figs/ap-fig-simple2.png} \end{adjustbox} \caption{Snapshots: 2Arms} \end{figure}
\begin{figure}[H] \centering \begin{adjustbox}{width=0.9\textwidth,center=\textwidth} \includegraphics[]{figs/ap-fig-simple3.png} \end{adjustbox} \caption{Snapshots: 3Arms} \end{figure}
\begin{figure}[H] \centering \begin{adjustbox}{width=0.9\textwidth,center=\textwidth} \includegraphics[]{figs/ap-fig-kuka4.png} \end{adjustbox} \caption{Snapshots: Kuka-4DoF} \end{figure}
\begin{figure}[H] \centering \begin{adjustbox}{width=0.9\textwidth,center=\textwidth} \includegraphics[]{figs/ap-fig-kuka5.png} \end{adjustbox} \caption{Snapshots: Kuka-5DoF} \end{figure}
\begin{figure}[H] \centering \begin{adjustbox}{width=0.9\textwidth,center=\textwidth} \includegraphics[]{figs/ap-fig-kuka7.png} \end{adjustbox} \caption{Snapshots: Kuka-7DoF} \end{figure}
\begin{figure}[H] \centering \begin{adjustbox}{width=0.9\textwidth,center=\textwidth} \includegraphics[]{figs/ap-fig-3kuka.png} \end{adjustbox} \caption{Snapshots: Kuka3Arms} \end{figure}
\subsection{Overall Performance}
Figure \ref{fig:overall-performance} shows the overall performance of the algorithms in the various environments, including all the baselines and \textbf{GNN-TE} without DAgger, i.e., pure imitation learning from the oracle. In all the environments, \textbf{SIPP} gives the optimal complete collision-free path, but it suffers from an excessive amount of collision checking. \textbf{GNN-TE} significantly reduces collision checking by more than 1000x, which corresponds to reducing online computation time by over 95\%. At the same time, our method achieves high success rates and a better path time ratio than simpler heuristics. Fig. \ref{fig:overall-snapshot} shows performance snapshots of the algorithms on a 2Arms and a Kuka-7DoF test case. In both cases, our method succeeds in planning a near-optimal path compared to the oracle \textbf{SIPP}, whereas \textbf{Dijkstra-H} fails.
\noindent{\bf Performance on randomly generated test problems.} As shown in Fig.~\ref{fig:overall-performance}, our method significantly reduces collision checking by over 1000x compared to \textbf{SIPP}, i.e., from 60k, 97k, 171k, 197k, 269k, 269k down to 17.24, 33.08, 37.36, 56.89, 61.99, 72.63 on 2Arms, 3Arms, Kuka-4DoF, Kuka-5DoF, Kuka-7DoF, and Kuka3Arms, respectively. Because \textbf{SIPP} needs to search in both the space and time dimensions, it suffers from a large number of collision checks.
Our approach benefits from learning and achieves a much smaller amount of collision checking even in the dynamic setting, while not compromising much of the success rate; our method outperforms \textbf{Dijkstra-H} in all the environments with higher success rates. Note that, as a crucial part of the training procedure, DAgger also improves performance in all environments.
\begin{wrapfigure}{r}{0.5\textwidth} \includegraphics[width=0.5\textwidth]{figs/fig-overall2.png}
\caption{Snapshots of the trajectories of the output path on 2 test cases. The environments are 2Arms (first row) and Kuka-7DoF (second row). The ego-arm is black and white with a red end-effector. The obstacle arm is blue and orange with a yellow end-effector. In both environments, \textbf{Dijkstra-H} fails to find a path, while our method can yield a near-optimal path compared to the oracle \textbf{SIPP}.}
\vspace{-0.3cm} \label{fig:overall-snapshot} \end{wrapfigure}
\noindent{\bf Performance on hard test problems.} On the hard test problems, \textbf{Dijkstra-H} fails to find feasible paths in the dynamic environment. Comparatively, our \textbf{GNN-TE} successfully finds solutions to these problems with considerable success rates and acceptable path time ratios. It is also worth noting that DAgger helps improve performance more on these challenging scenarios than on the randomly generated problems.
\noindent{\bf Optional backtracking search.} In Appendix \ref{AppendixBT}, we also report the results of \textbf{GNN-TE} and \textbf{Dijkstra-H}, and of their backtracking variants \textbf{GNN-TE w. BT} and \textbf{Dijkstra-H w. BT}, on 2Arms. For a given algorithm, the optional backtracking search only results in a higher success rate and does not affect the path time ratio or collision checking on the commonly solved cases. The results for \textbf{Dijkstra-H w. BT} show that although backtracking improves the success rate significantly, it does so at the cost of a tremendous number of collision checks. Nevertheless, our method outperforms the heuristic method both with and without the backtracking search.
\noindent{\bf Comparison with end-to-end RL.} We also compare our approach with RL approaches, including DQN~\cite{DQN} and PPO~\cite{PPO}. The neural network architectures for these two baselines are implemented with the same GNN architectures as ours. We observe that even in 2Arms, the average success rate of DQN on the training set is only around 54\%, while PPO only reaches around 20\%. Both fail to find plans on test cases. We provide more details in Appendix \ref{AppendixRL}.
\noindent{\bf Comparison with OracleNet-D.} We compare \textbf{GNN-TE} with the learning-based approach \textbf{OracleNet-D}, obtained by modifying \textbf{OracleNet}~\cite{OracleNet} to the dynamic setting. We observe that the performance of \textbf{OracleNet-D} falls behind \textbf{GNN-TE} by a large margin on all metrics, on both random and hard problems. Details are provided in Appendix \ref{AppendixOracle}.
\subsection{Ablation Studies}\label{ablation}
We perform ablation studies on 2Arms for the different encodings in our model, including the global and local obstacle encoding and the temporal encoding. The results are shown in Fig. \ref{fig:ablation}.
\begin{figure}[t!] \centering \begin{adjustbox}{width=1\textwidth,center=\textwidth} \includegraphics[]{figs/ablation2.png} \end{adjustbox}
\caption{Ablation studies on 2Arms of (1) global obstacle encoding, (2) local obstacle encoding, and (3) temporal encoding.
We have demonstrated that all three components improve the effectiveness of the proposed approach. See Section \ref{ablation} for more details.}
\label{fig:ablation} \end{figure}
\textbf{Global Obstacle Encoding.} In stage 1, we encode both the configuration graph and the global obstacle trajectories with the GNN-based global encoder. To investigate the effectiveness of leveraging global obstacle information, we conduct an experiment in which we input the sampled configurations in stage 1 and only introduce the obstacle information in stage 2. As we observe in the figure, although there are slight degradations in collision checking and success rate on random problems, the path time ratio, success rate, and collision checking all improve greatly on the hard problems. This is because the model receives the overall trajectories of the obstacles, and this encoding helps reduce collision checking and improve the success rate, especially on complicated problems.
\textbf{Local Obstacle Encoding.} We compare our model with one omitting the local obstacle encoding in stage 2, where we only input the temporal encoding corresponding to the arrival time to the local planner as the time indicator for planning. The results show that the local obstacle encoding within a time window directly helps the local planner perform better.
\textbf{Temporal Encoding.} We analyze the importance of temporal encoding by removing it from both stages and only inputting the trajectory of the obstacles to the models. The results show that temporal information helps \textbf{GNN-TE} make use of the order of the trajectory sequence on both random and hard problems.
\section{Introduction}
Motion planning for manipulation has been a longstanding challenge in robotics~\cite{manip1,manip2}. Learning-based approaches can exploit patterns in the configuration space to accelerate planning with promising performance~\cite{GVIN,Fastron,Clearance}. Existing learning-based approaches typically combine reinforcement learning (RL) and imitation learning (IL) to learn policies for sampling or ranking the options at each step of the planning process~\cite{MPNet,Implicit,NEXT}. Graph Neural Networks (GNNs) are a popular choice of representation for motion planning problems because of their capability to capture geometric information and their invariance to permutations of the sampled graph~\cite{GNNMP,decentral,attentionqingbiao,chenning21}.
Motion planning in dynamic environments, such as for multi-arm assembly and human-robot interaction, is significantly more challenging than in static environments. Dynamic obstacles produce trajectories in the temporal-spatial space, so the motion planner needs to consider global geometric constraints in the configuration space at each time step (Figure~\ref{fig:intro}). The dynamic nature of the environment generates a much larger space of graph sequences for sampling and learning, and it is also very sensitive to changes in one single dimension: time. A small change in the timing of the ego-robot or the obstacles in two spatially similar patterns may result in completely different planning problems. For instance, a dynamic obstacle may create a small time window for the ego-robot to pass through, and if that window is missed, the topology of the configuration space can completely change.
Consequently, we need to design special architectures that not only encode the graph structures well, but also infer temporal information robustly. Indeed, complete search algorithms for dynamic motion planning, such as the leading method of Safe Interval Path Planning (SIPP) and its variations~\cite{phillips2011sipp,li2019safe,gonzalez2012using,narayanan2012anytime}, focus on reasoning about the temporal intervals that are safe for the ego-robot. These complete algorithms typically require significantly more computation and collision checking operations compared to the static setting. As proved in \cite{reif1994motion}, planning with moving obstacles is NP-hard even when the ego-robot has only a small and fixed number of degrees of freedom.
\begin{figure}[t!] \centering \includegraphics[width=0.8\textwidth]{figs/intro2.png} \vspace{-5pt}
\caption{Left: A sampled graph from the configuration space. A dynamic obstacle, colored in yellow, moves over time from $t=0$ to $t=T$. The goal of our approach is to search for a path on the graph connecting the start to the goal, without collision with the obstacle at any timestep. Right: A successful plan where the ego-robot (grey arm) avoids collision with the dynamic obstacle (blue arm) and reaches the goal.}
\label{fig:intro} \end{figure}
We propose a novel Graph Neural Network (GNN) architecture and the corresponding training algorithms for motion planning in dynamic environments. We follow the framework of sampling-based motion planning~\cite{DBLP:journals/trob/KavrakiSLO96,RRT*}, where path planning is performed on random graphs sampled from the configuration space. The GNN takes the following inputs: the sampled graph in the configuration space, the obstacle trajectories in the workspace, and the current state of the ego-robot. The output is a vector of priority values on the candidate edges at the current state of the ego-robot. The encoding is performed in two stages. In the first stage, we encode the graph structure using attention mechanisms~\cite{Attention} and design a temporal encoding approach for the obstacle trajectories. The temporal encoding uses the idea of positional encoding from the Transformer and NeRF~\cite{Attention,Nerf}, which encourages the neural network to capture temporal patterns from high-frequency input signals. In the second stage of encoding, we incorporate the ego-robot's current vertex in the configuration space, the local graph structure, and the current time-shifted trajectories of the obstacles. This two-stage structure extends the previous use of GNNs in static environments~\cite{decentral,attentionqingbiao,chenning21} and is important for making high-quality predictions of the priority values. The entire GNN of both stages is trained simultaneously in an end-to-end fashion. Due to the complexity of the GNN architecture, we observe that RL-based approaches can hardly train generalizable models on it, and using imitation learning with data aggregation (DAgger) is the key to good performance. We utilize SIPP as the expert, first perform behavior cloning as warm-up, and then allow the ego-robot to self-explore and learn from the expert following the DAgger approach~\cite{DAgger,decentralarm}. We evaluate the proposed approach in various challenging dynamic motion planning environments ranging from 2-DoF arms to 7-DoF KUKA arms.
Experiments show that our method can significantly reduce collision checking, often by more than 1000x compared to the complete algorithms, which reduces the online computation time by over 95\%. The proposed method also achieves high success rates on hard instances and consistently outperforms other learning and heuristic baselines.
\section{Related Work} \input{related}
\section{Preliminaries}
\noindent{\bf Sampling-based Motion Planning with Dynamic Obstacles.} We focus on sampling-based motion planning, in which a random graph is formed over samples from the {\em configuration space} $C\subseteq \mathbb{R}^n$, where $n$ is the number of degrees of freedom of the ego-robot. The sampled vertex set $V$ always contains the start vertex $v_s$ and goal vertex $v_g$. The edges in the graph $G=\langle V,E\rangle$ are determined by r-disc or k-nearest-neighbor (k-NN) rules~\cite{rdisc,k-NN}. We assume global knowledge of the trajectories of the dynamic obstacles. We represent the trajectories of dynamic obstacles in the {\em workspace} as the vector of all the joint positions in the time window of length $T>0$. The goal of the motion planning problem is to find a path from $v_s$ to $v_g$ in the sampled graph that is free of collision with the dynamic obstacles at all time steps in $[0,T]$.
\noindent{\bf Graph Neural Networks (GNNs).} GNNs learn representations of vertices and edges on graphs by message passing. With MLP networks $f$ and $g$, a GNN computes the representation $h_i^{(k+1)}$ of vertex $v_i$ after $k$ aggregation steps as
\begin{equation}\label{eq2}
h_i^{(k+1)} = g\left(h_i^{(k)}, \oplus\left(\left\{f(h_i^{(k)}, h_j^{(k)})\mid (v_i,v_j)\in E\right\}\right)\right)
\end{equation}
where $h_i^{(1)}=x_i$ can be some arbitrary vector of initial data for the vertex $v_i$, and $\oplus$ is typically some permutation-invariant aggregation function on sets, such as mean, max, or sum. We use the attention mechanism to encode the obstacle features. In the general form of the attention mechanism, there are $n$ keys, each with dimension $d_k$: $K\in \mathbb{R}^{n\times d_k}$; each key has an associated value, collected in $V \in \mathbb{R}^{n\times d_v}$. Given $m$ query vectors $Q \in \mathbb{R}^{m\times d_k}$, we use the typical attention function $\mathbf{Att}(K, Q, V)=\text{softmax}(QK^T/\sqrt{d_k})V$ for each query~\cite{Attention}.
\noindent{\bf Imitation Learning.} Imitation learning aims to provide guidance for training a policy without explicitly designing reward functions. Given a distribution of oracle actions $\pi_{oracle}$, it learns a new policy distribution $\pi$ that minimizes the deviation from the oracle, i.e. $\pi^* = \operatorname*{argmin}_{\pi}{D(\pi,\pi_{oracle})}$, where $D$ is a difference function between the two distributions, such as a $p$-norm or an $f$-divergence. We use imitation learning to train our GNN models from the demonstrations of the oracle planner. Specifically, in sampling-based motion planning, the learner predicts priority values for the subsequent edges and is trained to prioritize the edge chosen by the oracle. However, imitation learning based on behavior cloning often suffers from distribution drift, which can be mitigated by imitation with data aggregation (DAgger)~\cite{DAgger}. With DAgger, the learner can actively query the oracle on states that are not typically provided in the expert trajectories to robustify the learned policy.
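To illustrate the two building blocks above, the following is a minimal NumPy sketch of the scaled dot-product attention and of one aggregation round of Eq.~(\ref{eq2}); here $f$ and $g$ stand in for the MLPs of the actual model, and the edge-list format is hypothetical.
\begin{verbatim}
import numpy as np

def attention(K, Q, V):
    # Att(K, Q, V) = softmax(Q K^T / sqrt(d_k)) V
    logits = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ V

def gnn_step(h, edges, f, g):
    # One round of Eq. (2): h_i <- g(h_i, aggr_j f(h_i, h_j)) over neighbours j,
    # using max as the permutation-invariant aggregation
    out = []
    for i in range(len(h)):
        msgs = [f(h[i], h[j]) for (a, j) in edges if a == i]
        agg = np.max(np.stack(msgs), 0) if msgs else np.zeros_like(h[i])
        out.append(g(h[i], agg))
    return np.stack(out)

# e.g. f = lambda hi, hj: hj - hi and g = lambda hi, m: np.maximum(hi, m)
\end{verbatim}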
\section{GNN-TE: Dynamic Motion Planning with GNNs and Temporal Encoding}
\subsection{Overall Architecture}
We design the dynamic motion planning network \textbf{GNN-TE} to capture the spatial-temporal nature of dynamic motion planning. The forward pass in the network consists of two stages. The first stage is the global GNN encoder $\mathcal{N}_G$, which encodes the global information of the ego-robot and the obstacles in the environment. The second stage is the local planner $\mathcal{N}_P$, which assigns priorities to edges, utilizing the encoding output of the first stage. Fig.~\ref{fig:gnn-arch} shows the overall two-stage architecture.
\noindent{\bf First-stage global GNN encoder $\mathcal{N}_G$}. The GNN encoder $\mathcal{N}_G$ takes in a sampled random geometric graph $G=\langle V,E\rangle$, $V = \{v_s, v_g, v\}$. For an $n$-dimensional configuration space, each vertex $v_i\in \mathbb{R}^{n+1}$ contains an $n$-dimensional configuration component and a $1$-dimensional binary label indicating whether it is the goal vertex.
\begin{figure}[t!] \centering \includegraphics[width=0.9\textwidth]{figs/Architecture3.png}
\caption{The overall two-stage architecture of the proposed \textbf{GNN-TE}. In Stage 1, we encode the global information of the ego-arm and the obstacles, using attention mechanisms, and output the encoding of each edge. In Stage 2, the local planner takes in the output from Stage 1 along with the obstacle encoding within the relevant time window to predict the priority value of each outgoing edge. The planner proposes the edge with the highest priority value as the output policy.}
\label{fig:gnn-arch} \vspace{-5pt} \end{figure}
The vertices and the edges are first encoded into a latent space with $x\in \mathbb{R}^{|V|\times d_h}, y\in \mathbb{R}^{|E|\times d_h}$, where $d_h$ is the size of the encoding. Specifically, to get the feature $x_i$ for the $i$-th node $v_i\in V$, we use $x_i=g_x(v_i, v_g, v_i-v_g, ||v_i-v_g||^2_2)$. To get the feature $y_l$ for the $l$-th edge $e_l:\langle v_i,v_j\rangle \in E$, we use $y_l=g_y(v_i, v_j, v_j-v_i)$. Here, $g_x$ and $g_y$ are two different two-layer MLPs. The L2 distance to the goal $||v-v_g||^2_2$ serves as heuristic information for $\mathcal{N}_G$. The dynamic obstacles $O$ form barriers on top of the graph $G$, and we incorporate their trajectories into $\mathcal{N}_G$ to leverage the global environment and assist the planning of $\mathcal{N}_P$. Additionally, to inform the networks about the relative time over the horizon, we incorporate {\em temporal encoding} with the obstacles. Given the obstacle position $O_t$ at time step $t$ and a two-layer MLP $g_o$, the obstacle is encoded as $\mathcal{O}_t = g_o(O_t)+TE(t)$, which adds $g_o(O_t)$ and $TE(t)$ element-wise. $TE(t)$ is the temporal encoding at time step $t$, which we discuss in Section \ref{TESection}. Given the sequential nature of the trajectory, we use the attention mechanism to model the temporal-spatial interactions between the ego-arm and the obstacles.
Concretely, the obstacles are encoded into the vertex and edge features of $G$ as: \begin{align} x &= x + \mathbf{Att}(f_{K_x^{(i)}}(\mathcal{O}), f_{Q_x^{(i)}}(x), f_{V_x^{(i)}}(\mathcal{O})) \\ y &= y + \mathbf{Att}(f_{K_y^{(i)}}(\mathcal{O}), f_{Q_y^{(i)}}(y), f_{V_y^{(i)}}(\mathcal{O})) \label{eq:attention} \end{align} Taking the vertex and edge encodings $x, y$, the GNN $\mathcal{N}_G$ aggregates local information for each vertex and edge from its neighbors with the following operation, using two two-layer MLPs $f_x$ and $f_y$: \begin{equation} \begin{split} x_i&=\max \left(x_i, \max\{f_x(x_j-x_i,x_j, x_i,y_l)\mid e_l:\langle v_i,v_j\rangle \in E\}\right), \forall v_i \in V\\ y_l&= \max(y_l, f_y(x_j-x_i,x_j, x_i)), \forall e_l:\langle v_i,v_j\rangle \in E \label{coreupdate} \end{split} \end{equation} Note that we use $\max$ as the aggregation operator to gather the local geometric information, due to its permutation invariance and empirical robustness~\cite{PointNet}. The edge information is also incorporated into the vertex update by adding $y_l$ as an input to $f_x$. Moreover, because Equation~\ref{coreupdate} updates $x$ and $y$ in place with the same function at every iteration, we can apply it over multiple loops without introducing additional layers. After several iterations, the first-stage $\mathcal{N}_G$ outputs the encoding of each vertex $x_i$ and edge $y_l$. \noindent{\bf Second-stage local planner $\mathcal{N}_P$}. After $\mathcal{N}_G$ encodes the information of the configuration graph and obstacles, the second-stage local planner $\mathcal{N}_P$ utilizes the encoding and performs motion planning. Specifically, when arriving at a vertex $v_i$ at time $t_i$, $\mathcal{N}_P$ predicts the priority value $\eta_{e_i}$ of all connected edges $e_i\in E_i$ as $\eta_{e_i} = f_p(y_{e_i}, \mathcal{O}_{t_i-w}, \mathcal{O}_{t_i-w+1},\dots,\mathcal{O}_{t_i+w-1}, \mathcal{O}_{t_i+w})$, where $f_p$ is an MLP. Note that in addition to the encoding of the connected edges, we also input the local obstacle encoding within a time window $w$ of the current arrival time $t_i$. This provides local information for $\mathcal{N}_P$ to plan toward the goal vertex while treating the dynamic obstacles as barriers to avoid collisions. At inference time, we use $\mathcal{N}_P$ to choose the edge with the highest priority value while keeping track of the current time. \subsection{Temporal Encoding}\label{TESection} Positional encoding is a crucial component of the Transformer architecture \cite{vaswani2017attention} for making use of the order of a sequence. Dynamic motion planning requires the models to infer the relative position of obstacles and how they interact with the ego-arm at each time step. Thus, along with the positions of the obstacles in the workspace, we add a temporal encoding $TE(t) \in \mathbb{R}^{d_{TE}}$ at each time step $t\in\{0,\dots,T\}$, whose $2k$-th and $(2k{+}1)$-th dimensions are computed as \begin{align} TE(t, 2k) = \sin({\omega^{-2k/{d_{TE}}}}t)\ \mbox{ and }\ TE(t, 2k+1) = \cos({\omega^{-2k/{d_{TE}}}}t) \label{eq:TE} \end{align} where $\omega\in \mathbb{Z}^+$ is a fixed frequency. We choose the temporal encoding to have the same dimension as the obstacle feature $g_o(O_t)$ and add the two element-wise to form the obstacle encoding before it is input to the networks $\mathcal{N}_G$ and $\mathcal{N}_P$.
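As a concrete illustration of Equation~\ref{eq:TE}, the following short NumPy sketch computes $TE(t)$ for a single time step; the default $\omega=10000$ follows the Transformer convention and is an assumption here, as are the variable names.
\begin{verbatim}
import numpy as np

def temporal_encoding(t, d_te, omega=10000):
    # TE(t, 2k)   = sin(omega^(-2k/d_te) * t)
    # TE(t, 2k+1) = cos(omega^(-2k/d_te) * t)
    te = np.zeros(d_te)
    k = np.arange(0, d_te, 2)           # even dimensions 0, 2, 4, ...
    angle = t * omega ** (-k / d_te)
    te[0::2] = np.sin(angle)
    te[1::2] = np.cos(angle[:d_te // 2])
    return te

# The obstacle encoding at step t is then g_o(O_t) + temporal_encoding(t, d_te),
# with d_te chosen to match the dimension of g_o(O_t).
\end{verbatim}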
We illustrate the overall encoding procedure on the left of Figure~\ref{fig:temporal-encoding & training}. \begin{figure}[h] {\includegraphics[width=0.9\textwidth]{figs/training+te3.png}} \caption{Left: Temporal encoding is incorporated when representing the dynamic obstacle sequence. Right: Training procedure with DAgger. The proposed GNN is first trained to imitate an optimal oracle, then improves by exploring on its own with feedback from the oracle.} \label{fig:temporal-encoding & training} \vspace{-10pt} \end{figure} \subsection{Training and Inference Procedures} In each training problem, along with the dynamic obstacles $\mathcal{O}$, start vertex $v_s$ and goal vertex $v_g$, we sample a k-NN graph $G=\langle V,E\rangle, V=\{v_s, v_g, v\}$, where $v$ denotes the vertices sampled from the configuration space of the ego-arm. In the first stage, the global GNN encoder $\mathcal{N}_G$ encodes the graph and dynamic obstacles, and the local planner $\mathcal{N}_P$ in the second stage uses the encoding as input to predict the priority values $\eta$ of the subsequent edges. \noindent{\bf Imitation from SIPP with Data Aggregation.} We train the two-stage networks $\mathcal{N}_G$ and $\mathcal{N}_P$ in an end-to-end manner by imitating an oracle. Specifically, we use Safe Interval Path Planning (SIPP) \cite{phillips2011sipp} to compute the shortest collision-free motion path and use it as the oracle. In the first stage, $\mathcal{N}_G$ processes the graph and the obstacle trajectories, then outputs the encoded features of each vertex and edge. In the second stage, we train the networks to imitate the oracle SIPP along the optimal path. Concretely, starting from $v_s$, SIPP provides the optimal path $\pi^*=\{(v^*_i, t^*_i)\}_{i\in[0,n]}$ with the vertices $v^*_i$ and the corresponding arrival times $t^*_i$. When arriving at the vertex $v^*_i$ at time $t^*_i$, the local planner $\mathcal{N}_P$ takes in the edge features $y_{e_i}$ along with the obstacle encoding in the time window $[t^*_i-w, t^*_i+w]$ to predict the priority values of all subsequent edges $E_i$. It should then prioritize the next edge on the optimal path, $e^*_i:\langle(v^*_i, t^*_i), (v^*_{i+1}, t^*_{i+1})\rangle$, among $E_i$. We maximize the priority value $\eta_{e^*_i}$ of $e^*_i$ over all other $e_i \in E_i\setminus \{e^*_i\}$ with the standard cross-entropy loss $L_{i} = -\log (\exp({\eta_{e^*_i}})/(\sum_{e_i \in E_i}\exp({\eta_{e_i}})))$. Since SIPP only provides guidance along the optimal path, the network receives no supervision once the path planned by $\mathcal{N}_P$ deviates from it. To this end, we use DAgger~\cite{DAgger} to let the network learn from these sub-optimal paths. We first train the network for $k$ iterations with pure demonstrations from SIPP. Then we explore a path $\pi^k$ on the graph using the priority values predicted by the current network, which may be sub-optimal or fail to reach the goal vertex $v_g$. We randomly stop at a vertex $v^k_i$ at time $t^k_i$ and query the oracle: SIPP treats $v^k_i$ and $t^k_i$ as the start vertex and start time, and, given the obstacle trajectories from $t^k_i$ onward, computes the optimal path. The new demonstrations are aggregated into the previous dataset to keep training the network. The training procedure is shown on the right of Figure~\ref{fig:temporal-encoding & training}.
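As a minimal sketch of the edge-prioritization loss $L_i$ above, the snippet below computes the softmax cross-entropy over the priority values of the outgoing edges, with the oracle edge as the target class; the example numbers are purely illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def edge_priority_loss(eta, oracle_idx):
    # eta: (|E_i|,) predicted priorities of the outgoing edges at v*_i
    # oracle_idx: index of the oracle's next edge e*_i within E_i
    # Computes L_i = -log( exp(eta[e*_i]) / sum_e exp(eta[e]) ).
    return F.cross_entropy(eta.unsqueeze(0), torch.tensor([oracle_idx]))

# Example with three outgoing edges, where the oracle chooses edge 2:
eta = torch.tensor([0.1, -0.4, 1.3])
loss = edge_priority_loss(eta, 2)  # small, since edge 2 already ranks first
\end{verbatim}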
\noindent{\bf Inference with Time Tracking.} Given a graph $G=\langle V, E\rangle, V=\{v_s, v_g, v\}$, and the trajectories of obstacles $\mathcal{O}$, $\mathcal{N}_G$ first encodes the graph and obstacles. Next, $\mathcal{N}_P$ performs motion planning by predicting the priority values $\eta=\mathcal{N}_P(V,E_i,\mathcal{O}, \mathcal{N}_G, t_i)$ when arriving at vertex $v_i$ at time $t_i$, and follows the edge with the maximum value, i.e. $e_{\pi_i} = \operatorname*{argmax}_{e_i \in E_i}\eta_{e_i}$. After the edge $e_{\pi_i}$ is proposed by the network, we check $e_{\pi_i}$ for collision, assuming the ego-arm starts moving along it at $t_i$. If there is no collision, we add $(e_{\pi_i}, t_i)$ to the current path $\pi$. Otherwise, we query $\mathcal{N}_P$ for the edge with the next highest priority value. Planning ends when we succeed in finding a path $\pi$ from $v_s$ to $v_g$, or fails when we stop at a vertex where all connected edges are in collision. Optionally, when the latter happens, we can backtrack one step and try the edges with the top-k highest priority values in turn. Further discussion is provided in the experiments section. \section{Experiments} \begin{figure}[!th] \centering \begin{adjustbox}{width=1\textwidth,center=\textwidth} \includegraphics[]{figs/overall-perf-rebuttal.png} \end{adjustbox} \caption{Overall performance of all methods on collision checking, success rate, and path time ratio. Our approach reduces collision checking by more than 1000x compared to the complete algorithm SIPP, improves the overall planning efficiency, and achieves high success rates.} \label{fig:overall-performance} \end{figure} \input{experiments} \section{Discussion and Conclusion} We proposed a GNN-based neural architecture \textbf{GNN-TE} for motion planning in dynamic environments, which captures the spatial-temporal structure of the dynamic planning problem with a GNN and temporal encoding. We also use imitation learning with DAgger to learn both the embeddings and the edge prioritization policy. We evaluate the proposed approach in various environments, ranging from 2-DoF arms to 7-DoF KUKA arms. Experiments show that the proposed approach can reduce costly collision checking operations by more than 1000x and online computation time by over 95\%. Future steps in this direction can involve using ideas from complete planning algorithms, such as incorporating safe intervals, to improve the success rate on hard instances, as well as more compact architectures to further reduce online computation. \section{Acknowledgement} This material is based on work supported by DARPA Contract No. FA8750-18-C-0092, AFOSR YIP FA9550-19-1-0041, NSF Career CCF 2047034, NSF CCF DASS 2217723, and Amazon Research Award. We appreciate the valuable feedback from Ya-Chien Chang, Milan Ganai, Chiaki Hirayama, Zhizhen Qin, Eric Yu, Hongzhan Yu, Yaoguang Zhai, and the anonymous reviewers. \medskip
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Protoplanetary disks link protostars and (extra-)solar planetary systems physically and chemically. Understanding the chemical composition and evolution of disks thus provides constraints on the nature of molecules incorporated into planetesimals and planets. A variety of simple species, including the organic molecules CN, HCN, and H$_2$CO, have been detected towards a handful of disks in unresolved studies \citep{Dutrey97,Thi04,Kastner08}, suggestive of an active chemistry. However, apart from CO and to some extent HCO$^+$, the chemistry is poorly constrained \citep{Pietu07}. Observations of the earlier stages of star formation and of comets, the possible remnants of our own protoplanetary disk, reveal a chemistry rich in simple and complex organics up to HCOC$_2$H$_5$ and (CH$_2$OH)$_2$ in size \citep[e.g.][]{vanDishoeck95, Crovisier04, Belloche09}. Pre-biotic pathways to chemical complexity thus exist. Whether these pathways are active in disks, or the observed cometary complexity is instead a fossil remnant from earlier stages, remains to be shown. Recent experiments suggest that the combination of icy grain mantles and UV irradiation efficiently produces the complex molecules found around protostars \citep{Oberg09d}, and appropriate conditions are surely common in disks as well \citep{vanZadelhoff03,Hersant09}. The chemistry in disks is significantly more difficult to probe than in protostellar cores because of their order-of-magnitude smaller angular size, which necessitates the use of (sub-)millimeter arrays to resolve the chemistry of the bulk of the disk material. The first observations of disk chemistry by \citet{Dutrey97} also revealed lower gas-phase abundances of most molecules compared to protostars. From a slightly larger sample of four sources, \citet{Thi04} showed that protoplanetary disks generally seem to contain orders of magnitude lower gas abundances compared to protostellar cores. This is reproduced by models of disks with a combination of freeze-out onto grains toward the disk midplane and photodissociation in the disk atmosphere \citep[e.g.][]{Aikawa99}. Gas-phase molecules are only expected to be abundant in an intermediate zone that is warmer than common ice sublimation temperatures, but still deep enough into the disk to be partly protected from stellar and interstellar UV irradiation \citep[e.g.][]{Aikawa99, Bergin03}. The low molecular abundances result in weak emission that requires long integration times at all existing (sub-)millimeter facilities. The investment required for interferometers in large part explains the small number of resolved chemistry studies of any species more complicated than HCO$^+$ \citep{Qi03, Dutrey04, Dutrey07, Qi08, Henning10}. Despite these impediments, single-dish studies of the T Tauri stars DM Tau, GG Tau, LkCa 15, TW Hya, V4046 Sgr and the Herbig Ae stars MWC 480 and HD 163296 have provided some constraints on the chemistry of protoplanetary disks \citep{Dutrey97,Thi04,Kastner08}. The species CN, HCN, DCN, HNC, CS, C$_2$H, H$_2$CO, HCO$^+$ and DCO$^+$ have been detected toward at least one of these objects, with CN/HCN ratios that can only be explained by high UV or X-ray fluxes penetrating into the disk -- CN is a photodissociation product of HCN. The variations in molecular abundances among different systems are significant; H$_2$CO is only detected toward DM Tau and LkCa 15, DCO$^+$ toward TW Hya, and HCN toward all T Tauri stars but not toward either of the Herbig Ae stars.
Suggested reasons for these variations include higher photodissociation rates and a lack of cold chemistry products toward the more luminous Herbig Ae stars, different stages of grain growth in different disks, and different disk structures. Differences in the chemistry preceding the disk stage may also play a role. Resolved studies of disk chemistry are few but intriguing. Using the IRAM Plateau de Bure Interferometer, \citet{Dutrey07} reported low signal-to-noise N$_2$H$^+$ detections toward DM Tau and LkCa 15 and an upper limit toward MWC 480. Within the same project, \citet{Schreyer08} tentatively detected HCN toward AB Aur. More recently, \citet{Henning10} resolved C$_2$H emission toward DM Tau and LkCa 15, while no C$_2$H was detected toward the more luminous MWC 480. The difference between the T Tauri stars and the Herbig Ae star was explained by a combination of high UV and low X-ray fluxes toward MWC 480. Using the SMA, \citet{Qi08} spatially resolved the HCO$^+$ and DCO$^+$ emission toward TW Hydrae, revealing different distributions of these species -- DCO$^+$ is relatively more abundant at increasing radii out to 90 AU, consistent with its origins from cold disk chemistry. HCN and DCN were both detected as well, but provide weaker constraints on the chemistry because of the weak DCN signal. Overall, the combination of small samples of diverse sources and even fewer resolved studies has so far prevented any strong constraints on the main source of chemical diversity in protoplanetary disks. With the DISCS (Disk Imaging Survey of Chemistry with SMA) Submillimeter Array legacy program, we aim to produce a resolved, systematic survey of chemistry toward protoplanetary disks spanning a range of spectral types and disk parameters. The targeted molecules are the simple species that previous studies suggested may be detectable (CO, HCO$^+$, DCO$^+$, CN, HCN, DCN, N$_2$H$^+$, C$_3$H$_2$, H$_2$CO and CH$_3$OH). The initial survey contains six well-known disks in the Taurus molecular clouds (DM Tau, AA Tau, LkCa 15, GM Aur, CQ Tau and MWC 480) with central stars that span spectral types M1 to A4. The disk sample is presented in $\S$\ref{sec:sample}, with special attention to the properties that may affect the disk chemistry, such as stellar luminosity, accretion rates, disk size, disk structure and dust settling. The spectral set-ups and observational details are described in $\S$\ref{sec:obs}. The channel maps toward one of the richest sources, moment maps for all disks in the most abundant species, and spectra of all detected lines toward all sources are shown in $\S$\ref{sec:res}. The detection rates as well as source-to-source variations are discussed in $\S$\ref{sec:disc}, followed by a qualitative discussion of the origins of the observed chemical variations. \section{The Disk Sample\label{sec:sample}} \subsection{Selection criteria} The Taurus DISCS sample of protoplanetary disks was chosen to assess the impact of spectral type or stellar irradiation field on the chemistry in the disk. The target systems span the full range of stellar luminosities among the T Tauri and Herbig Ae stars associated with the Taurus molecular cloud. Table~\ref{tbl:star} lists the stellar properties. The stellar masses in the sample range between 0.5 and 1.8--2.3 M$_{\odot}$, corresponding to luminosities that span almost two orders of magnitude. If stellar luminosity is the main driver of the outer disk chemical evolution, then this sample should display a range of chemical behaviors.
As discussed below, there are other sources of radiation that may affect the chemistry as well, especially accretion luminosity and X-rays. The sample is biased toward disks of large angular extent, since disks smaller than a few hundred AU are not spatially resolved by the SMA in the compact configuration. The observations are not sensitive to gas inside of 100~AU, which means that there is a large gap in radii between these millimeter observations and infrared disk chemistry observations, which typically probe the inner disk out to a few AU. The sources were selected from disks previously mapped in CO and thus may be biased toward gas-rich disks. Furthermore, only disks clearly isolated from the parent cloud emission are included, to reduce confusion and ensure that the detected molecules reside in the disks. Known harborers of organic molecules were favored (DM Tau, LkCa 15 and MWC 480), but the sample also contains disks that have only upper limits or that have not been investigated for molecular lines other than CO. The focus on a single star-forming region allows us to probe disks of similar ages and likely similar chemical starting points, reducing the sources of chemical variation. \subsection{Star and Disk Properties affecting Disk Chemistry\label{sec:sample_prop}} Stellar luminosity, accretion luminosity, X-rays, the interstellar irradiation field, disk geometry, disk gaps and holes, dust settling and dust growth may all affect the chemistry in the disk. The sample characteristics in terms of these properties are discussed in this section. Observations of high CN and HCN abundances toward protoplanetary disks reveal that a chemistry controlled by far-ultraviolet (FUV) or X-ray radiation or both must contribute to the observed abundances \citep[e.g.][]{vanZadelhoff01,Thi04}. FUV radiation below 2000~\AA~affects the chemistry by directly heating the gas in the disk surface layers, limiting molecular abundances via photodissociation, increasing the amount of photochemistry products such as CN, and liberating frozen species via photodesorption. The nature of the dominant source of radiation is, however, unknown. The quiescent stellar luminosities in the sample range from 0.25 to 11.5 L$_{\odot}$, and if quiescent FUV radiation controls the photodissociation rate in the disk surface, then there should be a clear trend in the CN/HCN ratio between the low-luminosity T Tauri stars and the order of magnitude more luminous Herbig Ae stars. However, Kurucz (1993) stellar atmosphere models show that stars with a spectral type later than F do not have significant stellar continuum $<$2000~\AA~above that generated by accretion, and that it is only for A stars that stellar UV becomes more significant than accretion luminosity for the FUV field. This is confirmed by FUV observations of the T Tauri star TW Hya, whose spectrum is dominated by line emission generated in accretion shocks \citep{Herczeg02}. The accretion FUV spectrum is dominated by line emission, especially Ly-$\alpha$ emission, which results in preferential HCN dissociation and therefore boosts the CN/HCN ratio \citep{Bergin03} beyond what is expected from a UV chemistry dominated by continuum radiation. Toward T Tauri stars, the FUV flux is expected to scale with the accretion rate \citep[e.g.][]{Calvet04}.
The accretion rates probably vary over time, however, as has been observed toward TW Hya, where \citet{Alencar02} found mass accretion rates of 10$^{-9}$--10$^{-8}$ M$_{\sun}$ yr$^{-1}$ over the course of a year and smaller variations on timescales of days. The measured accretion rate variations among the T Tauri stars in this sample are all within this range, and it is unclear whether the average accretion luminosity varies significantly among the sources. In general, more massive systems have higher accretion rates \citep{Calvet04}, and it can therefore be expected that the FUV flux from accretion will be higher toward the intermediate-mass Herbig Ae stars in the sample. As mentioned above, for the early-type stars (e.g. A) the stellar continuum also adds a significant contribution to the FUV radiation field. Thus, even if accretion rather than quiescent luminosity controls the UV chemistry, one would expect to observe a higher CN/HCN ratio toward the more massive stars. Observations of the FUV field in T Tauri systems find that accretion produces UV fluxes that are a few hundred to a thousand times stronger than the Interstellar Radiation Field (ISRF) at 100 AU \citep{Bergin04}. The ISRF may still be important at large radii, however. The external irradiation field is presumed to be constant toward the Taurus sample of sources, i.e. none of the disks are close to any O or B stars, but its impact may differ between the different classes of sources; the ISRF may play a larger role in driving the chemistry for the low-luminosity objects, thus acting as a chemical equalizer. A fourth possible driver of the disk chemistry is X-rays, which are predicted to be important for the ionization fraction in the disk \citep{Markwick02}. X-rays are mainly attenuated by gas, while continuum UV photons are quickly absorbed by dust \citep{Glassgold97}. X-rays can therefore penetrate deeper into the disk compared to UV rays, and may be a main driver of both ion chemistry and molecular dissociation. This may result in observable differences in the molecular distribution if the chemistry is driven by X-rays instead of UV radiation. For individual objects, X-ray measurements are notoriously variable, and assessing their impact observationally may be possible only through monitoring of the X-ray flux and the chemical variation toward a single system \citep{Glassgold05}. On average, the X-ray fluxes seem higher toward T Tauri stars compared to Herbig Ae stars, and thus ion-driven chemical reactions may be faster in disks around T Tauri stars \citep{Gudel09}. Regardless of whether the quiescent luminosity of the central star is the main source of energetic radiation, it may still control the disk temperature and thus the temperature-sensitive chemistry, e.g. the deuterium fractionation efficiency probed by the DCO$^+$/HCO$^+$ ratio. The stellar continuum photons are expected to be the primary agent for heating grains in the disk, particularly toward the midplane, setting the overall reservoir of warm or cold grains in the outer disk to which the SMA is most sensitive. Both the level of gas-phase depletion and the chemistry dependent on CO depletion may then be regulated by the stellar continuum flux. For example, a tracer of CO freeze-out, such as the N$_2$H$^+$/HCO$^+$ ratio, is expected to be higher toward low-luminosity systems \citep{Bergin02}. The disk structure and dust properties determine how much of the stellar radiation is intercepted by the disk and the penetration depth of that radiation.
These disk properties may be as important as the strength of the radiation field for the chemical evolution in the disk. Table \ref{tbl:disk} lists the sample disk characteristics from previous CO and continuum observations and modeling. Position angles and inclinations do not affect the chemistry intrinsically, but the disk inclination affects which parts of the disk are observed, and thus our view of the chemistry. The sizes of the disks are described in terms of CO gas disk radii and range from 200 to 890~AU. Disk masses are less well constrained, since they are derived from dust emission assuming a dust-to-gas ratio. In most studies the canonical interstellar dust-to-gas ratio of 1:100 is used. The actual dust-to-gas ratio may be different because of dust coagulation and photoevaporation, and also variable among the sources. Still, it seems clear that two of the disks, AA Tau and CQ Tau, are significantly less massive than LkCa 15, GM Aur and MWC 480, while DM Tau falls in between (Table \ref{tbl:disk}). This difference in disk mass may affect their relative abundances of different species, since more massive disks are expected to contain more cold material per unit of incoming irradiation. Three of the sample disks are so-called transition and pre-transition disks (DM Tau, GM Aur and LkCa 15) with large inner holes or gaps \citep{Calvet04, Espaillat07,Dutrey08}. A {\it Spitzer} survey of disk chemistry shows that many transition disks, including GM Aur and LkCa 15, have CO gas in the disk hole \citep{Salyk09}. They do, however, lack emission from HCN and C$_2$H$_2$ transitions in the {\it inner} disk that are strong toward classical T Tauri stars \citep{Pascucci09}. It is unclear whether this is a chemistry or gas-mass effect. It is also unknown whether the chemistry in the outer regions of these transition disks differs significantly from classical T Tauri disks, although it is curious that these systems have the most chemically rich outer disks seen to date. This may be explained by increased radiation fluxes; the larger the hole, the more stellar radiation may reach the outer disk. In addition, large holes may be a tracer of overall grain growth and thus of increased UV penetration depth in both the inner and outer disk. Grain properties can be traced by the millimeter slope of the Spectral Energy Distributions (SEDs), parameterized by the power-law index of the opacity spectrum, $\beta$, and this is the last disk characteristic listed in Table \ref{tbl:disk}. The index $\beta$ is predicted to decrease with grain growth, and the disks in the survey all have $\beta$ below the expected 1.7 for interstellar grains \citep{Andrews07}. The $\beta$ estimates however differ between studies, which makes it difficult to establish an ordering of dust coagulation among the disks, but AA Tau seems to have a significantly lower $\beta$ compared to the other T Tauri disks. Most of the sources have also been observed by Spitzer in studies that constrain the grain properties in the inner disk \citep{Furlan09}. These measurements do not provide any straightforward constraints on the grain properties in the outer disk probed by the SMA, however. Finally, the disk structure, e.g. the amount of flaring, affects the amount of stellar light intercepted by the disk. The disk structure, parameterized by the dust scale height, can be constrained by modeling the SED, though degeneracy is a problem \citep{Chiang01}.
As the dust settles toward the midplane, the dust scale height decreases compared to the gas scale height. Five of the sample disk SEDs were modeled by \citet{Chiang01}, who concluded that MWC 480 and LkCa 15 are more settled than CQ Tau, GM Aur and AA Tau, i.e. in the disks of MWC 480 and LkCa 15 the upper disk layers are significantly depleted in micron-sized dust grains. This has been confirmed in more recent studies that find significantly less settling toward GM Aur compared to LkCa 15 \citep{Espaillat07, Hughes09}. \section{Observations\label{sec:obs}} \subsection{Spectral Setups} Two frequency setups per source were selected to cover 4 to 8 spectral lines in each setup, while simultaneously providing continuum observations. The targeted molecules are DCO$^+$ and DCN (probing deuterium chemistry), CN and HCN (probing photochemistry), HCO$^+$ and N$_2$H$^+$ (ions), H$_2$CO and CH$_3$OH (potential grain chemistry products), $c$-C$_3$H$_2$ (carbon-chain chemistry) and CO (the disk kinematic tracer). Tables~\ref{tbl:setup1} and \ref{tbl:setup2} summarize the two spectral setups, centered at 1.1 mm and 1.4 mm, respectively. The SMA correlator covers $\sim4$~GHz bandwidth in each of the two sidebands using two Intermediate Frequency (IF) bands with widths of 1.968 GHz. The first IF band is centered at 5 GHz, and the second IF band is centered at 7 GHz. Each band is divided into 24 slightly overlapping ``chunks'' of 104 MHz width, each of which can have a different spectral resolution. For each spectral setting, the correlator was configured to increase the spectral resolution on the key species with 128--256 channels per chunk, with the exception of one H$_2$CO line observed with 64 channels. Chunks containing weaker lines were then binned to obtain higher signal-to-noise, while still recovering sufficient kinematic information. The remaining chunks were covered by 64 channels each and used to measure the continuum. The continuum visibilities in each sideband and each IF band were generated by averaging the central 82 MHz in all line-free chunks. \subsection{Data Acquisition and Calibration} The six disks were observed from 2009 November through December with the compact configuration of the Submillimeter Array (SMA) interferometer (Ho et al. 2004) at Mauna Kea, Hawaii. LkCa~15 was also observed on 2010 March 22nd with the subcompact configuration for more short-spacing data to improve the signal-to-noise of the deuterated species detections. For each observation, at least six of the eight 6~m SMA antennas were available, spanning baselines of 16--77~m (Table ~\ref{tbl:obs}). The observing sequence interleaved disk targets and two quasars in an alternating pattern. Depending on their proximity to the disk targets and fluxes at the time of the observations, a group of three quasars was used as gain calibrators: 3C 111, J0510+180 and 3C 120. The observing conditions were generally very good, with $\tau_{\rm 225{\:GHz}}\sim0.04-0.1$ and stable atmospheric phase. The data were edited and calibrated with the IDL-based MIR software package\footnote{http://www.cfa.harvard.edu/$\sim$cqi/mircook.html}. The bandpass response was calibrated with observations of Uranus and the bright quasars available (3C 454.3, 3C 273, 3C 84). Observations of Uranus provided the absolute scale for the calibration of flux densities for all compact tracks, and Vesta for the single LkCa 15 sub-compact track. The typical systematic uncertainty in the absolute flux scale is $\sim$10\%.
Continuum and spectral line images were generated and CLEANed using MIRIAD. MIRIAD was also used to calculate synthesized beam sizes and position angles, which are listed in Table \ref{tbl:beam} for each setting and source. \section{Results and simple analysis}\label{sec:res} The first section presents the extracted spectra for all molecules detected toward at least one disk, channel maps toward DM Tau, which contain some of the strongest detections of the weaker lines in the survey, and finally disk images and moment maps for strong molecular lines toward all sources ($\S$\ref{ssec:overview}). The subsequent sections provide more details on the observed CN/HCN ratios, ions and deuterated molecules and their variation among the surveyed disks. In general, we do not attempt to estimate column densities of molecules, but rather present the data in terms of integrated fluxes and flux ratios. The line fluxes and column densities are of course related, but the fluxes also depend on excitation temperatures and opacities. Determining molecular column densities therefore requires knowledge of the disk structure and the spatial distribution of molecules. Even for optically thin lines, estimated column densities may be off by an order of magnitude or more if the wrong emission regions/temperatures are adopted. For optically thick lines the estimates become even more uncertain, and single-dish studies including $^{13}$CO and H$^{13}$CO$^+$ have shown that the emission from $^{12}$CO and H$^{12}$CO$^+$ is mostly optically thick \citep{Dartois03,Thi04}. A proper derivation of column densities therefore requires detailed chemical modeling of the disks, which we anticipate for a future DISCS publication. An exception to reporting only fluxes is made for the deuterium fractionation, where it is useful to derive abundance ratio limits, assuming the same emission regions for the molecules and their deuterated counterparts, for comparison with previous studies. We also estimate the HCO$^+$/CO ratio toward CQ Tau for a rough consistency check with previous observations. \subsection{Overview of Taurus disk chemistry\label{ssec:overview}} Figure \ref{fig:spec} shows the extracted spectra of the targeted molecules toward DM Tau, AA Tau, LkCa 15, GM Aur, CQ Tau and MWC 480. The spectra are also tabulated in Table \ref{tbl:spec}. The four spectral lines, CO 2--1, HCO$^+$ 3--2, HCN 3--2 and CN 3--2, expected to be strongest from previous studies, were detected in all six disks except for HCN toward CQ Tau, with an order of magnitude variation in integrated fluxes among the different disks. Spectral lines from N$_2$H$^+$ and H$_2$CO were detected toward three disks, DM Tau, LkCa 15 and GM Aur, with tentative detections toward AA Tau. Lines from the two deuterated molecules, DCO$^+$ and DCN, were detected toward LkCa 15, and the DCO$^+$ 3--2 line was also detected toward DM Tau. The spectral lines from CH$_3$OH and c-C$_3$H$_2$ were not detected toward any of the disks. Figure \ref{fig:dmtau} shows the velocity channel maps for all detected spectral lines toward DM Tau. The peak intensities vary between $\sim$9 Jy beam$^{-1}$ for CO and $\sim$0.1 Jy beam$^{-1}$ for the weaker H$_2$CO line. All species, except for H$_2$CO, are apparent in multiple velocity channel maps, and the stronger detections clearly follow the velocity pattern indicative of a disk in Keplerian rotation.
The relative intensity of different species in different velocity channels varies significantly, indicative of differences in emission regions between different lines. Fig. \ref{fig:maps} shows integrated intensity and first moment maps derived from the channel maps for CO, HCO$^+$, HCN and CN for each disk. In addition, the first column in Figure \ref{fig:maps} shows millimeter continuum SEDs compiled from the literature \footnote{\citet{Acke04,Adams90,Andrews05, Beckwith90, Beckwith91,Chapillon08,Dutrey96,Dutrey98,Duvert00,Guilloteau98,Hamidouche06,Hughes09,Isella09,Kitamura02,Koerner93,Looney00,Mannings94,Mannings97,Mannings00,Natta01,Osterloh95,Pietu06,Rodmann06,Testi01,Weintraub89}} together with new measurements at 218 GHz and 267 GHz (Table \ref{tbl:int}), which show good agreement for all of the sources. The dust continuum flux densities at $\sim$267 (and 218) GHz vary by about a factor of four among these sources, with MWC 480 the strongest at 430 mJy and AA Tau the weakest at 110 mJy. Note that there is not a one-to-one correspondence between the strength of the 218 or 267 GHz dust continuum emission and the CO 2-1 line emission. The first-moment maps show the Keplerian rotation pattern of the disks, most clearly in the CO 2-1 and HCO$^+$ 3-2 emission. The rotational velocity pattern is also present to some extent in HCN and CN emission, which suffer from lower signal-to-noise and, for CN, the blending of the spectroscopic triplet. Line spectra are extracted from the channel maps using elliptical masks produced by fitting a Gaussian profile to the CO integrated intensity maps toward each source to obtain major and minor axes and position angles (listed in Fig. \ref{fig:maps}). The size of the mask is scaled for each line, to optimize the signal-to-noise without losing any significant emission, such that the major and minor axes are between 2-$\sigma$ and 1-$\sigma$ of the Gaussian fitted to the CO emission -- a 2-$\sigma$ radius corresponds to $\sim$1.7 $\times$ the radius at full-width-half-maximum (FWHM). All masks are large enough to not cut out any emission, i.e. the chosen masks result in no significant decrease in integrated line intensity compared to integrating over the full CO disk. Applying these masks ensures that only disk emission and minimal noise are included in the spectra. The derived CO full-width-half-maxima agree reasonably well with the disk sizes from the literature compiled in Table \ref{tbl:disk}, i.e. all semi-major axes are within $1''$ of the previously observed or derived CO radii, where resolved data exist. In addition, the position angles agree within 10$\degree$ except for the barely resolved disk of CQ Tau. The integrated spectra from CQ Tau are, however, not significantly affected by the choice of mask shape. The resulting spectra in Fig. \ref{fig:spec} are used to derive the total intensities listed in Table \ref{tbl:int}. The 2-$\sigma$ upper limits are calculated from the rms when the spectra are binned to a spectral resolution of 3.3--4.3 km s$^{-1}$, the minimum resolution for resolving any lines, multiplied by the full width half maximum of the CO 2-1 line toward each source. This approximation is supported by the similarity in the line widths for different transitions toward the same disk in Fig. \ref{fig:spec}. Because of variable observing conditions and integration times, the flux upper limits vary between 0.09 and 0.78 Jy km s$^{-1}$ per beam. Most upper limits are, however, lower than the detected fluxes toward other sources.
Specifically, the non-detections toward MWC 480 of DCO$^+$, N$_2$H$^+$ and one of the H$_2$CO lines are up to a factor of two lower than the detected fluxes toward the T Tauri stars DM Tau, GM Aur and LkCa 15. \subsection{CN/HCN flux ratios} As discussed above, deriving molecular abundances from disk spectra is complicated by the uncertainties in disk structures and the accompanying uncertainties in the emission conditions of the different molecules. Line flux ratios can however be used as a proxy for abundance ratios when the emission is optically thin and the upper energy levels are similar, canceling out the temperature effect, if the same emission region for the two molecules can be assumed. Models suggest that the emission regions are different for CN and HCN, however \citep{Jonkheid07}. It is still informative to compare the flux ratios between different sources if it can be assumed that the relative emission regions of CN and HCN are stable across different disks, i.e. that CN is always present in a warmer layer closer to the surface compared to HCN. Then changes in the CN/HCN flux ratio can still be used to trace changes in abundance ratios if the emission is optically thin. Based on comparisons of HCN and H$^{13}$CN line intensities toward LkCa 15, \citet{Thi04} suggested that HCN and CN line emission from protoplanetary disks may be somewhat optically thick. This survey does not currently include any rare isotopologues, but the CN optical depth can be estimated from the relative integrated intensities of the 2$_3$-1$_2$ triplet transition and the $\sim$10 times weaker 2$_2$-1$_1$ singlet transition using the line strengths from CDMS \citep[http://www.astro.uni-koeln.de/cdms/catalog and][]{Muller01} at any of the reported temperatures between 9 and 300~K (the transitions have the same excitation energy). Toward DM Tau and AA Tau the observed CN line ratios are consistent with optically thin emission. Toward LkCa 15 and MWC 480, the emission from the CN triplet underestimates the CN abundance by factors of 1.3 and 1.6, respectively, indicative of somewhat optically thick CN triplet emission. Despite the complications introduced by modest optical depth, changes in CN 2$_3$-1$_2$ / HCN 3--2 ratios larger than a factor of 2, assuming similar levels of optical depth for HCN emission, are expected to trace real variations in the chemistry as discussed above. The absolute CN and HCN integrated intensities range from 0.2 Jy km s$^{-1}$ toward CQ Tau to 5.5 Jy km s$^{-1}$ toward LkCa 15. Figure \ref{fig:ratios} shows that the variation in CN/HCN line intensities is smaller -- all sources have ratios of 0.8--1.6 except for AA Tau, which has a CN/HCN emission ratio of 2.9. AA Tau is then the only significant outlier in terms of CN/HCN flux ratios. \subsection{Ions: HCO$^+$ (DCO$^+$) and N$_2$H$^+$} HCO$^+$ is often used as a tracer of gas ionization and thus of high-energy radiation in disks. The high optical depth of HCO$^+$ and the lack of observations of rare isotopologues of CO prevent such an analysis at present, except toward CQ Tau, where the CO 2--1 emission has been estimated to be optically thin \citep{Chapillon08}.
The ratio of column densities for species X and Y, for optically thin, resolved emission, can be calculated from \begin{equation} \label{eq:ratio} \frac{N_{\rm X}}{N_{\rm Y}} = \frac{\int T_{\rm mb}^{\rm X} d\nu}{\int T_{\rm mb}^{\rm Y} d\nu} \times \frac{Q_{\rm rot}^{\rm X}(T)}{Q_{\rm rot}^{\rm Y}(T)} \times \frac{e^{E_{\rm u}^{\rm X}/T_{\rm ex}}}{e^{E_{\rm u}^{\rm Y}/T_{\rm ex}}} \times \frac{\nu_{\rm Y} S_{\rm Y}\mu^2_{\rm Y}}{\nu_{\rm X} S_{\rm X}\mu^2_{\rm X}}, \end{equation} \noindent where $N$ is the column density, $\int T_{\rm mb} d\nu$ is the integrated line emission in K km s$^{-1}$, $Q_{\rm rot}(T)$ the temperature dependent partition function, $E_{\rm u}$ the energy of the upper level in K, $T_{\rm ex}$ the excitation temperature in K, and $S_{\rm Y}\mu^2$ the line strength and dipole moment \citep[e.g.][]{Thi04}. Toward the same source, the integrated line flux in Jy and line intensity in K are related by $T_{\rm mb}[{\rm K}]\varpropto F[{\rm Jy}]\times\lambda^2[{\rm mm^2}]$. Using partition functions, level energies and line strengths from CDMS, and assuming the same excitation conditions for HCO$^+$ 3--2 and CO 2--1, the [HCO$^+$]/[CO] ratio is $1.0-1.3\times10^{-4}$ toward CQ Tau for excitation temperatures of 18--75~K. The N$_2$H$^+$ observations toward DM Tau, LkCa 15 and GM Aur (and tentatively toward AA Tau) provide unambiguous detections of this species in protoplanetary disks, confirming previous claims by \citet{Dutrey07}. Since N$_2$H$^+$ and DCO$^+$ both potentially trace the chemistry further toward the midplane compared to the more abundant molecules, their ratio provides important constraints on the cold chemistry in disks. The DCO$^+$/N$_2$H$^+$ line intensity ratio ranges from $<$0.1 toward GM Aur to 0.8 toward DM Tau (Fig. \ref{fig:ratios}). In contrast, there is no significant variation in the H$_2$CO/N$_2$H$^+$ ratio between the sources. Assuming that these molecules always reside in the colder regions of the disks, these differences in flux ratios suggest relative abundance variations of N$_2$H$^+$ and DCO$^+$ of an order of magnitude between the different sources. \subsection{Deuteration: DCO$^+$/HCO$^+$ and DCN/HCN} Because of the high optical depth of the HCO$^+$ line emission, the DCO$^+$/HCO$^+$ line intensity ratio can only be used to derive upper limits on the average HCO$^+$ deuteration fraction in the disk. The analysis is further complicated by evidence of different emission regions of DCO$^+$ and HCO$^+$ \citep{Qi08}. Assuming, however, the same emission region of DCO$^+$ and HCO$^+$ and optically thin emission for both ions, the upper limits on the deuteration fraction, calculated using Eq. \ref{eq:ratio}, are 0.32 toward DM Tau, 0.18 toward LkCa 15 and $<$0.07 toward GM Aur. An excitation temperature of 19 K is assumed, but the ratios are only marginally affected by excitation temperatures between 10 and 50~K. The variable ratios hint at differences in deuterium fractionation among the sources, especially since GM Aur has both the lowest DCO$^+$/HCO$^+$ and DCO$^+$/N$_2$H$^+$ ratios, but radiative transfer modeling is needed to confirm this result. DCN is only detected toward LkCa 15. The detection is at the $>$5-$\sigma$ level, the moment maps show a velocity field almost perfectly aligned with those of CO and HCO$^+$, and the detection appears secure.
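For reference, the ratio estimates in this section amount to a direct evaluation of Eq.~\ref{eq:ratio}; a short Python sketch is given below. The argument names are ours, and the spectroscopic inputs (partition functions, level energies, line strengths) must be taken from a catalog such as CDMS -- no real values are hard-coded here.
\begin{verbatim}
import numpy as np

def column_density_ratio(int_x, int_y,    # integrated T_mb [K km/s]
                         q_x, q_y,        # partition functions Q_rot(T_ex)
                         eu_x, eu_y,      # upper level energies E_u [K]
                         nu_x, nu_y,      # line frequencies
                         smu2_x, smu2_y,  # line strengths S mu^2
                         t_ex):           # excitation temperature [K]
    # N_X / N_Y following Eq. (ratio); the frequencies and S mu^2 enter
    # through the last factor, nu_Y S_Y mu_Y^2 / (nu_X S_X mu_X^2).
    return ((int_x / int_y) * (q_x / q_y)
            * np.exp(eu_x / t_ex - eu_y / t_ex)
            * (nu_y * smu2_y) / (nu_x * smu2_x))

# Fluxes in Jy can first be converted via T_mb [K] ~ F [Jy] * lambda^2 [mm^2]
# for lines observed toward the same source with the same beam.
\end{verbatim}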
From the CN analysis above and previous single-dish observations, HCN line emission is expected to be much less optically thick than HCO$^+$ emission, and the DCN/HCN ratio should provide stricter limits on the deuteration level in the disk. Assuming optically thin emission, the same emission region for DCN and HCN and an excitation temperature of $\sim$40~K, the upper limit on average deuteration in the disk around LkCa 15 is 0.06, a factor of three lower than the estimate from DCO$^+$/HCO$^+$. \section{Discussion} \label{sec:disc} \subsection{Detection rates and comparison with previous studies} The reported images and spectra were acquired with only 3--7 hours of on-source integration, with the shorter time spent in the 1.1 mm setting. The detection rate of N$_2$H$^+$ and H$_2$CO in the 1.1 mm setting is then quite remarkable and can be attributed to the advantages of targeting higher $J$ lines when probing disks. For comparison, N$_2$H$^+$ was previously detected toward DM Tau and LkCa 15 through its 1--0 line using the Plateau de Bure Interferometer, with peak fluxes $<$0.02 Jy \citep{Dutrey07}, an order of magnitude or more lower than the 3--2 line peak fluxes reported here. The 1--0 line in LkCa 15 was also detected by \citet{Qi03} using the Owens Valley Radio Observatory Millimeter Array, with an integrated intensity four times larger than that measured with the PdBI. The agreement with previous single-dish observations is generally good. All targeted molecular lines that have been previously reported in the single-dish studies of DM Tau, LkCa 15 and MWC 480 are also detected with the SMA \citep{Dutrey97,Thi04, Guilloteau06}. Where the same lines have been studied, most integrated intensities agree. HCN $J=3-2$ toward DM Tau is an exception, where the reported upper limit in \citet{Dutrey97} is a factor of two below the intensity observed with the SMA. Using their detections, \citet{Dutrey07} derived [N$_2$H$^+$]/[HCO$^+$] ratios of 0.02--0.03 for DM Tau and LkCa 15 by fitting the line emission to disk models. Without such modeling we can only derive upper limits on the [N$_2$H$^+$]/[HCO$^+$] ratio of 0.13--0.19 for the two disks because of the HCO$^+$ line optical depth. Considering that the HCO$^+$ abundance may be underestimated by up to an order of magnitude, the two data sets are consistent. Within the same observational program, \citet{Chapillon08} searched for CO 2--1 and HCO$^+$ 1--0 emission toward CQ Tau and used the data to derive an upper limit on the [HCO$^+$]/[CO] abundance ratio. Assuming the same CO and HCO$^+$ distribution and excitation conditions and optically thin CO emission, they find [CO]/[HCO$^+$]$>$4$\times10^3$. This is consistent with the abundance ratio of 10$^4$ reported above, which is calculated making similar assumptions, but without the detailed modeling in \citet{Chapillon08}. CQ Tau is by far the most chemically poor of the investigated disks. It is interesting that despite the low abundances, the chemistry appears ``normal'': the ratios of the integrated intensities toward MWC 480 and CQ Tau are the same within a factor of two, including the CN/HCN emission ratio. The only difference is that, overall, the gas toward CQ Tau is probably richer in CN and HCN with respect to CO, taking into account the large optical depth of the CO emission toward MWC 480 \citep{Thi04}, as might be expected for a smaller disk that is completely exposed to UV radiation.
The upper limits on the [DCO$^+$]/[HCO$^+$] abundance ratio of $<$0.07--0.32 found toward the T Tauri systems in DISCS are consistent with the ratio of 0.035--0.05 observed toward TW Hydrae \citep{vanDishoeck03,Qi08}. The better constrained [DCN]/[HCN] ratio of 0.06 toward LkCa 15 is also consistent with the value of 0.02--0.05 toward TW Hydrae. While this ratio may be overestimated by a factor of a few, high levels of deuterium fractionation seem common toward T Tauri systems. As found in single-dish studies, the CN/HCN line intensity ratios toward the Taurus disks are high compared to interstellar clouds and cores. The line ratios all fall within the range of measurements toward other disks, where the total integrated flux ratio of CN/HCN is $\sim$1--5 \citep[see][for a compilation]{Kastner08}. A more quantitative comparison with previous observations is difficult without detailed modeling because different studies observed different transitions of CN and HCN. \subsection{T Tauri vs. Herbig Ae stars} In agreement with previous studies, we find that the disks surrounding T Tauri stars are more chemically rich in species with strong mm-transitions compared to disks around Herbig Ae stars \citep{Chapillon08,Schreyer08,Henning10}. The observed chemical poverty in the outer disks of Herbig Ae stars has been attributed to the more intense UV field around Herbig Ae stars compared to T Tauri stars, which may efficiently photodissociate most targeted molecules. In this sample, the most obvious difference between T Tauri and Herbig Ae stars is the lack of the cold chemistry tracers N$_2$H$^+$, DCO$^+$, DCN and H$_2$CO toward CQ Tau and MWC 480, while they are detected in 3/4, 2/4, 1/4 and 3/4 of the T Tauri systems. The upper limits toward CQ Tau are less informative because of its weak CO emission, and observations toward more Herbig Ae stars are required to confirm that this difference between disks around low- and medium-mass stars is general. In contrast, the CN and HCN emission is similar toward the lowest and highest luminosity stars in the sample, DM Tau and MWC 480. CN and HCN emission are modeled to originate mainly from the outer layers of the disk \citep{Willacy07}, and this chemistry thus seems equally active toward low- and intermediate-mass pre-main-sequence stars. \subsection{CN and HCN} CN is a photodissociation product of HCN, and the CN/HCN ratio has been put forward to trace several different aspects of the UV field. The CN/HCN ratio is proposed to increase with the strength of the UV field \citep{vanZadelhoff03}, and it will be further enhanced if the UV radiation is dominated by line emission from accretion, since HCN is dissociated by Ly-$\alpha$ photons while CN is not \citep{Bergin03}. Dust settling or coagulation allows radiation to penetrate deeper into the disk, which is also predicted to enhance the CN/HCN ratio \citep{Jonkheid07}. The quiescent UV luminosity increases with stellar mass. There is, however, no visible trend in the emission ratio of CN/HCN with spectral type. In fact, all CN/HCN ratios are the same within a factor of two, except toward AA Tau, which has a factor of a few higher intensity ratio. This suggests that the CN/HCN ratio is not set by the stellar luminosity, though there are complications in comparing CN/HCN ratios toward disks around low- and intermediate-mass stars because of potentially different excitation conditions for HCN in the two sets of disks \citep{Thi04}.
Within this sample the CN/HCN ratio also does not trace accretion luminosity; AA Tau has a comparable accretion rate to LkCa 15, and among the sources with comparable CN/HCN ratios the accretion rate varies by an order of magnitude. AA Tau is reported to have a lower power-law index of the opacity spectrum, $\beta$, compared to the other disks, indicative of dust growth, and it may be the dust properties rather than the stellar or accretion luminosities that govern the importance of photochemistry in disks. A larger sample that spans a wider range of accretion rates and dust properties is clearly required to give a more definitive answer. An additional complication is that the high CN/HCN ratio toward AA Tau may be a geometric effect. Compared to the other disks, AA Tau is almost edge-on \citep{Menard03}, which may result in preferential probing of the disk atmosphere compared to less inclined disks. Disk chemistry models (Fogel et al. submitted to ApJ) show that CN mainly emits from the disk surface, while HCN emission originates further into the molecular disk layer; the more inclined disk may thus offer a viewing angle that is biased toward CN emission. Estimating the effect of disk inclination on CN/HCN flux variations therefore requires a combination of chemical modeling and radiative transfer models. In terms of absolute flux intensities, the weak CN and HCN emission toward GM Aur compared to LkCa 15 and DM Tau stands out. The difference between GM Aur and LkCa 15 may be due to the higher accretion rate and more intense FUV field toward LkCa 15. The difference between DM Tau and GM Aur is, however, difficult to explain in terms of UV flux, since DM Tau is a weaker accretor than GM Aur. There is some evidence for significantly more dust settling toward LkCa 15 and DM Tau compared to GM Aur \citep{Chiang01,Espaillat07,Hughes09}. This may expose more of the gas in the LkCa 15 and DM Tau disks to high-energy radiation, enhancing the photoproduction of CN and HCN as well as the ion chemistry deeper in toward the disk midplane. \subsection{Cold chemistry tracers} Lower abundances of DCO$^+$, DCN, N$_2$H$^+$ and H$_2$CO toward more luminous stars are qualitatively consistent with our current chemical understanding. DCO$^+$ forms efficiently from gas-phase reactions with H$_2$D$^+$, which is only enhanced at low temperatures \citep{Roberts00,Willacy07}, and should thus be enhanced toward colder disks. Among the T Tauri stars the brightest DCO$^+$ emission is observed toward the disk around the least luminous star, DM Tau, consistent with a higher degree of deuterium fractionation around colder stars. The difference in DCO$^+$ line flux around GM Aur and LkCa 15 is more difficult to explain. GM Aur has a more massive dust disk than LkCa 15, and the two stars have similar luminosities. Naively, GM Aur should then be surrounded by at least as much cold disk material as LkCa 15. Instead, the upper limit on the DCO$^+$ flux is a factor of three lower toward GM Aur compared to LkCa 15. There is thus no one-to-one correlation between the ratio of disk dust mass over quiescent stellar luminosity and DCO$^+$ column densities. N$_2$H$^+$ forms from protonation of N$_2$ by H$_3^+$ and is mainly destroyed by reactions with CO \citep{Bergin02}. Abundant N$_2$H$^+$ is therefore only expected where CO is depleted onto grains toward the disk midplane.
N$_2$ freezes onto grains a few degrees below CO \citep{Oberg05}, and in cold disks the N$_2$H$^+$ abundance should peak in a narrow region where the temperature is between the N$_2$ sublimation temperature of $\sim$16~K and the CO sublimation temperature of $\sim$19~K. As long as a cold region exists in the disk, the N$_2$H$^+$ abundances may be quite independent of the total amount of cold disk material. The observations are consistent with this disk abundance structure; within the T Tauri sample the N$_2$H$^+$ emission only varies by a factor of two, increasing slightly with increasing disk mass. The variation in DCO$^+$/N$_2$H$^+$ flux ratios over the sample suggests that while both molecules trace a cold chemistry, their dependences on the physical environment are considerably different. The $>$8 times higher DCO$^+$/N$_2$H$^+$ flux ratios toward DM Tau and LkCa 15 compared to GM Aur may be related to the 5--6 times higher fluxes of CN and HCN toward DM Tau and LkCa 15 compared to GM Aur. This would suggest that both ratios depend on the amount of dust settling and that DCO$^+$ traces a cold, radiation-driven chemistry. Considering the ions involved in forming DCO$^+$ and DCN, it seems reasonable that their formation will be enhanced in regions that are irradiated, but not heated, by FUV photons or X-rays. To test this hypothesis requires H$^{13}$CO$^+$ abundances toward both systems (to measure whether the DCO$^+$/HCO$^+$ abundance ratio varies as well) in combination with a model that simultaneously treats deuterium chemistry and UV and X-ray radiative transfer. H$_2$CO can form through both gas-phase and grain-surface processes. The gas-phase process starts with CH$_3^+$ reacting with H$_2$ \citep{Roberts07b} and is expected to be at least as efficient in disks around low- and intermediate-mass stars. In contrast, H$_2$CO formation on grains requires the freeze-out of CO, which is only efficient at low temperatures. The absence of H$_2$CO toward the more luminous stars suggests that the grain-surface formation mechanism dominates in disks, and it is also another indication of the lack of a large cold chemistry reservoir toward disks around intermediate-mass stars. It also suggests that the organic molecules formed in the protostellar stage, where H$_2$CO is common, do not survive in the gas phase in mature disks. In summary, all potential tracers of cold chemistry imply the same lack of cold disk material around Herbig Ae stars, which is in agreement with a recent survey of CO gas toward Herbig Ae/Be stars \citep{Panic09}. In contrast, \citet{Pietu07} find that the disk around the Herbig Ae star MWC 480 contains large amounts of cold CO gas, below 17~K, indicative of cold material outside of 200 AU. At these temperatures CO should not be in the gas phase at all, however, since it is below the sublimation point of CO ice. Its presence is a sign of either efficient mixing in the disk or efficient non-thermal ice evaporation, perhaps through photodesorption \citep{Oberg07b,Hersant09}. Mixing may drag up material from the midplane on shorter timescales than the cold chemistry timescales, explaining the lack of cold chemistry tracers. Efficient photodesorption of CO into the gas phase would also explain the lack of N$_2$H$^+$ and H$_2$CO, while its impact on the deuterium fractionation is harder to assess.
The same processes are probably present in disks around T Tauri stars as well, but because their disks are overall colder there is still enough material protected from vertical mixing and photodesorption on long enough timescales for large amounts of N$_2$H$^+$, DCO$^+$ and H$_2$CO to form. The lack of CH$_3$OH detections does not put strong constraints on the CH$_3$OH/H$_2$CO abundance ratio, since H$_2$CO is barely detected and the CH$_3$OH transitions in this spectral region are more than an order of magnitude weaker than the observed H$_2$CO transitions. To put stronger constraints on CH$_3$OH abundances in disks instead requires targeted observations of the most intense CH$_3$OH lines. \section{Conclusions} Protoplanetary disks exhibit a rich chemistry that varies significantly between different objects within the same star forming region. Some of this variation can be understood in terms of the central star and its heating of the disk -- the cold chemistry tracers N$_2$H$^+$, DCO$^+$, DCN and H$_2$CO are only detected toward T Tauri stars in our disk sample of four T Tauri stars and two Herbig Ae stars. Tracers of photochemistry, especially CN and HCN, show no clear dependence on quiescent stellar luminosity within the sample. Deuterium fractionation also seems to depend on parameters other than the disk temperature structure. For these chemical systems, the impact of other sources of irradiation, e.g. accretion shocks and X-rays, as well as the disk structure and grain characteristics may all be more important for the chemical evolution than the quiescent stellar luminosity. Investigating the relative importance of these different disk and star characteristics requires a combination of detailed modeling of the current sample, an increase in the number of sources to boost the statistics and span more parameters -- especially a larger range of accretion rates and disks around intermediate F-type stars -- and targeted observations of rare isotopes of CO and HCO$^+$ to extract accurate abundance ratios. While the chemical evolution in protoplanetary disks is clearly complex, the qualitative agreement between at least parts of the early DISCS results and our current chemical understanding is promising for the ongoing modeling of these objects. The key results so far are listed below. \begin{enumerate} \item Six disks in Taurus (DM Tau, AA Tau, LkCa 15, GM Aur, CQ Tau and MWC 480) have been surveyed for 10 molecules, CO, HCO$^+$, DCO$^+$, CN, HCN, DCN, H$_2$CO, N$_2$H$^+$, CH$_3$OH and c-C$_3$H$_2$, with a high detection rate and large chemical variability. \item The brightest molecular lines, CO 2-1, HCO$^+$ 3-2, CN 3-2 and HCN 3-2, are detected toward all disks, except for HCN toward CQ Tau. Other molecular lines tracing different types of cold chemistry, N$_2$H$^+$, DCO$^+$, DCN and H$_2$CO, are only detected toward disks around T Tauri stars, indicative of a lack of cold regions around Herbig Ae stars persisting for long enough timescales. \item Both the absolute CN flux and the CN/HCN ratio vary significantly among the observed disks, and their variation seems independent of stellar luminosity, suggesting that other parameters, such as accretion luminosity, dust growth and dust settling, play an important role for the chemical evolution in disks.
\item Among the cold chemistry tracers the DCO$^+$/N$_2$H$^+$ ratio varies by an order of magnitude, suggesting that the deuterium fractionation depends on other parameters, including the radiation field, beyond the amount of cold material present in the disk. \end{enumerate} {\it Facilities:} \facility{SMA} \acknowledgments This work has benefitted from discussions with and comments from Ewine van Dishoeck, Geoffrey Blake and Michiel Hogerheijde, and from a helpful review by an anonymous referee. The SMA is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica. Support for K.~I.~O. and S.~M.~A. is provided by NASA through Hubble Fellowship grants awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. C.~E. was supported by the National Science Foundation under Award No.~0901947. E.~A.~B. acknowledges support by NSF Grant \#0707777.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} {\em Racks and quandles} were introduced independently in the 1980s by Joyce and Matveev~\cites{Joyce:Thesis, Matveev} mainly in order to provide algebraic machinery for constructing invariants of knots and links. Racks are sets equipped with non-associative structures which, to some extent, share much with the theory of groups, Hopf algebras, Lie algebras, etc. More precisely, a {\em rack} is a set $X$ equipped with a binary operation $$X\times X\ni (x,y)\mto x\rRack y \in X$$ which is bijective with respect to the left variable and satisfies \[ (x\rRack y)\rRack z=(x\rRack z)\rRack (y\rRack z) \] for all $x,y,z\in X$. If in addition $x\rRack x= x$ for each $x\in X$, then $(X,\rRack)$ is called a {\em quandle}. A rack or a quandle $X$ is {\em trivial} if $x\rRack y=x$ for all $x,y\in X$. Unless otherwise specified, all our racks and quandles will be assumed nontrivial. The most natural quandles are the {\em conjugation} and the {\em core} quandles of a group. Specifically, let $G$ be a group. Its conjugation quandle $Conj(G)$ is the set $G$ together with the operation $g\rRack h:=hgh^{-1}$, while its {\em core quandle} $Core(G)$ is the set $G$ equipped with the operation $g\rRack h:=hg^{-1}h$. We refer to \cite{Joyce:Thesis, Matveev,Elhamdadi-Nelson:Quandles_An_Introduction} for more details about racks and quandles. In this paper we treat racks and quandles purely as algebraic objects in their own right rather than through their connections with knot theory. We introduce and investigate several new notions and algebraic properties in the category of racks. In~\cite{Elhamdadi-Moutuou:Foundations_Topological_Racks} we defined a {\em stabilizer} in a rack $X$ to be an element $u\in X$ such that $x\rRack u = x$ for all $x\in X$. For example, given a group $G$, the stabilizers of Conj$(G)$ are exactly the elements of the center $Z(G)$ of $G$. On the other hand, the associated {\em core quandle} of $G$ has no stabilizers if $G$ has no non-trivial $2$--torsions. This observation suggests that the property of having stabilizers is too strong to capture the identity and center of a group in the category of racks and quandles. We have weakened this property by introducing the notion of {\em stabilizing families} in a rack. A finite subset $\{u_1, \ldots, u_n\}$ of a rack $X$ is a {\em stabilizing family of order $n$} for $X$ if $(\cdots (x\rRack u_1)\rRack \cdots )\rRack u_n = x$, for all $x\in X$. If such a family exists, $X$ is said to be {\em finitely stable} or $n$--{\em stable}. The (possibly empty) set of all stabilizing families of order $n$ is denoted by $\cal S^n(X)$. Let, for instance, $\{x_1,\ldots, x_k\}$ be a finite subset of a group $G$. Then we get a stabilizing family of order $2k$ for $Core(G)$ by duplicating each element in the first set; {\em i.e.}, $\{x_1, x_1, x_2, x_2, \ldots, x_k, x_k\}$ is a stabilizing family of order $2k$. It follows that $Core(G)$ is $2k$--stable, for all $k\leq |G|$ (cf. Proposition~\ref{pro:Core-2k}). However, for the odd case, we have the following result (cf. Theorem~\ref{thm:Core-odd}): \medskip \noindent {\em Let $G$ be a group. Then $Core(G)$ is $(2k+1)$--stable if and only if $G$ is isomorphic to a direct sum $\oplus_I\bb Z_2$ of copies of the cyclic group of order $2$.} \\ We have paid attention to the interesting class of {\em Alexander quandles} and established a general criterion for them to be finitely stable.
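Before turning to Alexander quandles, we illustrate stabilizing families with the simplest instance of the duplication trick above: in $Core(\bb Z)$, where $m\rRack n=2n-m$, no single element is a stabilizer, but every pair $\{a,a\}$ is a stabilizing family of order $2$, since \[ (m\rRack a)\rRack a = 2a-(2a-m)=m \] for all $m\in \bb Z$.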
Recall~\cites{Joyce:Thesis, And-Grana} that given a group $\Gamma$ and a $\bb Z[\Gamma]$--module $M$, each $\gamma\in \Gamma$ provides $M$ with the quandle structure $\rRack^\gamma$ defined by $x\rRack^\gamma y:= (x-y)\cdot \gamma + y, \ x,y\in M$. For every non-negative integer $n$ and each $\gamma\in \Gamma$, let $F_\gamma: M^n \rTo M$ be the function given by $F_\gamma(x_1,\ldots, x_n):= \sum_{i=1}^nx_i\cdot \gamma^{n-i}$. Then we have the following result (Theorem~\ref{thm:Alexander}):\\ \noindent {\em The quandle $(M, \rRack^\gamma)$ is $n$--stable if and only if $\gamma$ is an $n$--torsion. Furthermore, $\cal S^n(M)$ is the abelian group of all solutions of the equation $$F_\gamma(x_1,\ldots, x_n)=0.$$} We have extended the notion of dynamical cocycles introduced in~\cite{And-Grana} to that of {\em twisted rack dynamical systems}, which are triples $(Q,X,\partial)$ where $X$ and $Q$ are racks, together with an action by rack automorphisms of $X$ on $Q$, and a family $\{\partial_{x,y}\}_{X \times X}$ of functions $Q\times Q\rTo Q$ satisfying some compatibility conditions with respect to the rack structure of $X$. When the maps $\partial_{x,y}$ happen to be rack structures on $Q$, we say that $(Q,\partial_{x,y})_X$ is an {\em $X$--bundle of racks}. In particular, for a given group $G$, $G$--{\em families of quandles} studied in~\cites{Ishii:G-family_Quandles, Nosaka:Quandles_Cocycles} are special cases of bundles of racks. Associated to a twisted rack dynamical system $(Q,X,\partial)$ there is a rack $Q\rtimes_\partial X$, its {\em cross--product}, which is the Cartesian product $Q\times X$ equipped with the operation $(p,x)\rRack_\partial (q,y):=(\partial_{x,y}(p,q),x\rRack y)$. \\ Further, elements of rack representation theory are introduced and general properties are established. Specifically, we show analogues of Schur's lemma for racks and quandles, and prove that (cf. Theorem~\ref{thm:strong_rep_connected}) \\ \noindent {\em Every strong irreducible representation of a finite connected involutive rack is one--dimensional.} \\ Finally, an analogous notion of the Pontryagin dual is defined for racks. The Pontryagin dual of a rack $X$, denoted by $D_qX$, is an Abelian group; in fact it is a direct sum of copies of the unitary group $U(1)$, one for each orbit. \\ The article is organized as follows. In Section~\ref{fsr}, we introduce the concept of finitely stable racks which presents itself as an appropriate analogue of ``Abelianity'' in the category of racks and quandles. We show the existence of non-trivial stabilizing families for the core quandle $Core(G)$ and characterize stabilizing families for the $Conj_{\varphi}(G)$ quandle, where $\varphi$ is an automorphism of $G$. In Section~\ref{SAQ}, we give necessary and sufficient conditions for Alexander quandles to be finitely stable, and provide a general algorithm to construct stabilizing families. We introduce and study in Section~\ref{npivot} the $n$-pivot of a group as a generalization of the $n$-core defined by Joyce, {\em approximate units} in Section~\ref{rackaction}, twisted rack dynamical systems and their cross-products in Section~\ref{twist}, and $X$--bundles in Section~\ref{Xbundle}. Section~\ref{rackrep} presents elements of representation theory of racks and quandles. Section~\ref{strong_rep} is mainly devoted to strong irreducible representations of finite connected involutive racks, and in Section~\ref{duality} we define characters of a rack and duality.
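As a quick illustration of the criterion for Alexander quandles, take $\Gamma=\{\pm 1\}$ acting on an abelian group $M$ by $x\cdot(\pm 1)=\pm x$ and $\gamma=-1$, so that $x\rRack^\gamma y=2y-x$ is the core operation. Here $\gamma$ is a $2$--torsion, so the quandle is $2$--stable, and $F_\gamma(x_1,x_2)=-x_1+x_2$, whose vanishing recovers exactly the duplicated families $\{u,u\}$ described above.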
\section{Finitely stable racks}\label{fsr} Throughout, we will use the following notations: if $u_1,..., u_n$ are elements in the rack $X$, then for $x\in X$ we will write \[ x\rRack (u_i)_{i=1}^n := (\cdots (x\rRack u_1)\rRack \cdots )\rRack u_n. \] In particular, if $u_i=u$ for all $i=1,...,n$, then we will write $x\rRack^nu$ for $x\rRack(u)_{i=1}^n$. Recall from~\cite{Elhamdadi-Moutuou:Foundations_Topological_Racks} that a \emph{stabilizer} in the rack $X$ is an element $u\in X$ such that $x\rRack u = x$ for all $x\in X$. This notion can be generalised as follows. \begin{df} Let $X$ be a rack or quandle. \begin{enumerate} \item A \emph{stabilizing family of order} $n$ for $X$ is a finite subset $\{u_i, i=1, \ldots, n\}$ of $X$ such that \[ x\rRack (u_i)_{i=1}^n = x, \] for all $x\in X$. In other words, $R_{u_n}R_{u_{n-1}}\cdots R_{u_1} = Id_X$. \item An $n$--{\em stabilizer} of $X$ is a stabilizing family of order $n$ of the type $\{u\}_{i=1}^n$. \item For any positive integer $n$, we define the \emph{$n$--center} of $X$, denoted by $\cal S^n(X)$, to be the (possibly empty) collection of all stabilizing families of order $n$ for $X$. \item The collection $$\cal S(X):= \bigcup_{n\in \bb N}\cal S^n(X)$$ of all stabilizing families for $X$ is called the \emph{center} of the rack $X$. \end{enumerate} \end{df} \begin{lem} Let $X$ be a rack and $\{u_i\}_{i=1}^n\in \cal S^n(X)$. Let $\sigma$ be an element in the symmetric group $\fr S_n$. Then $\{u_{\sigma(i)}\}_{i=1}^n\in \cal S^n(X)$ if $\sigma$ is an element of the cyclic group of order $n$ generated by the permutation $(2\ \ldots\ n\ 1)$. \end{lem} \begin{proof} Straightforward. \end{proof} \begin{df} A rack $X$ is said to be \emph{finitely stable} if $\cal S(X)$ is non-empty. It will be called $n$--\emph{stable} if it has a stabilizing family of order $n$. \end{df} \begin{rmk} Notice that any stabilizer $u$ in $X$ is obviously a $1$--stabilizer. Moreover, if in particular every element of $X$ is an $n$--stabilizer, then we recover the definition of an $n$--{\em quandle} of Joyce~\cite{Joyce:Thesis} (see also~\cite{Hoste-Shanahan:n-Quandles}). \end{rmk} \begin{ex} Any finite rack $X$ is finitely stable since the symmetric group of $X$ is finite. \end{ex} We will see later that a rack might have only stabilizing families of higher orders. \begin{ex} Consider the real line $\mathbb{R}$ with the usual rack structure $$x\rRack y = 2y - x, \ \ x, y\in \mathbb{R}.$$ Given any family $\{x_1, \ldots, x_n\}$ of real numbers we get the formula \[ t\rRack(x_i)_{i=1}^n = 2 \sum_{i=0}^{n-1} (-1)^ix_{n-i} +(-1)^nt, \ \forall t\in \mathbb{R}. \] We then form the stabilizing family $\{u_1, \cdots, u_{2n}\}$ of order $2n$ by setting \[ u_{2i-1} = x_i = u_{2i}, i=1, ..., n. \] Hence, $\mathbb{R}$ admits infinitely many stabilizing families of even orders. \end{ex} A similar construction can easily be done for the core quandle of any group. Precisely, we have the following. \begin{pro}\label{pro:Core-2k} Let $G$ be a non-trivial group. Then for all even natural numbers $2k\leq |G|$, there exists a non-trivial stabilizing family of order $2k$ for the \emph{core} quandle $Core(G)$. In particular, if $G$ is infinite, $Core(G)$ admits infinitely many stabilizing families. \end{pro} \begin{proof} Recall that $Core(G)$ is $G$ as a set with the quandle structure defined by \[ g\rRack h = hg^{-1}h, \ \ g, h\in G.
\] Thus $(g\rRack h_1)\rRack h_2 = h_2h_1^{-1}gh_1^{-1}h_2$, and more generally \begin{eqnarray}\label{eq:takasaki} g\rRack (h_i)_{i=1}^n= \prod_{i=0}^{n-1}h_{n-i}^{(-1)^i}g^{(-1)^n} \prod_{j=1}^{n}h_{j}^{(-1)^{n-j}} \end{eqnarray} Then for any subset $\{x_1, \ldots ,x_n\} \subset G$ of cardinality $n$, the family \[ \{x_1,x_1, x_2, x_2, \ldots , x_n, x_n\} \] is a stabilizing family of order $2n$ for the \emph{core} quandle $Core(G)$. \end{proof} We moreover have the following complete characterization of groups whose core quandle has stabilizing families of odd order. \begin{thm}\label{thm:Core-odd} Let $G$ be a group. The core quandle of $G$ is $(2k+1)$--stable if and only if all elements of $G$ are $2$--torsions; in other words $G$ is isomorphic to $\bigoplus_{i\in I}\bb Z_2$, for a certain finite or infinite set $I$. \end{thm} \begin{proof} If $\{u_i \}_i \in \cal{S}^{2k+1}(Core(G))$, then it is easy to see that equation~\eqref{eq:takasaki} implies that the inversion map $g \mapsto g^{-1}$ in $G$ is an inner automorphism, therefore $G$ is abelian. Now, since $G$ is an abelian group, \eqref{eq:takasaki} applied to the family $\{u_i\}_{i=1}^{2k+1}$ gives \begin{eqnarray}\label{eq2:takasaki} g\rRack(u_i)_i = 2 \sum_{i=0}^{2k} (-1)^iu_{2k+1-i} - g = g, \ \forall g\in G; \end{eqnarray} which implies in particular that \[ 2\sum _{i=0}^{2k} (-1)^i u_{2k+1-i}=0, \] therefore $2g=0, \forall g \in G$. Conversely, if all elements of $G$ are of order $2$, then it is obvious that each element of $G$ is a stabilizer of $Core(G)$, hence $Core(G)$ is $1$--stable. \end{proof} Assume $f:X\rTo Y$ is a rack homomorphism; {\em i.e.}, $f(x\rRack y)=f(x)\rRack f(y)$ for all $x,y\in X$. If $\{u_i\}_{i=1}^s\in \cal S(X)$, then $\{f(u_i)\}_{i=1}^s$ is a stabilizing family of Im$(f)$, which is a subrack of $Y$. In particular, if $f$ is onto and if $X$ is finitely stable, then $Y$ is finitely stable. \section{Stable Alexander quandles}\label{SAQ} In this section we give necessary and sufficient conditions for Alexander quandles to be stable, and provide a general algorithm to construct stabilizing families. More precisely, we prove the following result. \begin{thm}\label{thm:Alexander} Let $\Gamma$ be a group, and let $M$ be a right $\bb Z[\Gamma]$--module. For each non-trivial $\gamma \in \Gamma$, define the (Alexander) quandle structure on $M$ given by \[ x\rRack^\gamma y :=(x-y)\cdot \gamma + y, \ \ x, y\in M. \] For $n\in \bb N$, let $F_\gamma:M^n\rTo M$ be the function defined by \[ F_\gamma(x_1,\ldots, x_n)=\sum_{i=1}^nx_i\cdot\gamma^{n-i}. \] Then $(M,\rRack^\gamma)$ is $n$--stable if and only if $\gamma$ is of order $n$. Furthermore, if this condition is satisfied, $\cal S^n(M)$ is exactly the linear space of all solutions of the equation \begin{eqnarray}\label{eq:Alexander} F_\gamma(x_1,\ldots, x_n)=0. \end{eqnarray} \end{thm} Instead of proving the theorem directly, we are going to prove a more general result. But first let us give a few consequences. \begin{cor} Let $V$ be a complex vector space equipped with the Alexander quandle structure given by \[ x\rRack y = \alpha x + \beta y, \ \ x, y\in V, \] where $\alpha\neq 0, \alpha\neq 1$, and $\beta$ are fixed scalars such that $\alpha+\beta = 1$. Then $V$ admits a stabilizing family $\{u_i\}_{i=1}^n$ of order $n$ if and only if the following hold: \begin{itemize} \item[(i)] $\alpha$ is an $n^{th}$ root of unity; \item[(ii)] there exists an integer $0<l< n$, such that $\sum_{k=1}^{n}e^{-\frac{2i\pi kl}{n}}u_k = 0$.
\end{itemize} \end{cor} \begin{proof} Consider the multiplicative group $\bb C^{\times}$ and think of the complex vector space $V$ as a $\bb Z[\bb C^{\times}]$--module. Then Theorem~\ref{thm:Alexander} applies. \end{proof} \begin{cor} Let $V$ be a vector space equipped with the quandle structure as above. Then, for all $n\in \bb N$ such that $\alpha$ is an $n^{th}$ root of unity, each $u\in V$ is an $n$--stabilizer. In other words, $V$ is an $n$--quandle. \end{cor} The following example shows that a quandle may have no stabilizing family at all. \begin{ex} Let $V$ be a vector space equipped with the quandle structure $$x\rRack y = (x+y)/2.$$ Then $\cal S(V)=\emptyset$. \end{ex} \begin{df} Let $G$ be a group and $\varphi$ be an automorphism of $G$. We define the \emph{$\varphi$-conjugate} of $G$, denoted by $Conj_{\varphi}(G)$, to be the set $G$ with the quandle operation \[ g \rRack h:=h \varphi(g)\varphi(h^{-1}), \ \ \forall g, h \in G. \] \end{df} In particular, if $\varphi$ is the identity map of $G$, $Conj_{\operatorname{id}}(G)$ is the usual \emph{conjugate quandle} $Conj(G)$; {\em i.e.}, the quandle operation is given by \[ g\rRack h:= hgh^{-1}, \ \ g, h\in G. \] \begin{pro}\label{pro:Conj_phi} Let $G$ and $\varphi$ be as above. Then $Conj_{\varphi}(G)$ is finitely stable if and only if there exists an integer $n$ such that $\varphi ^n$ is an inner automorphism. Furthermore, $\{u_i\}_{i=1}^n\in \cal S^n(Conj_{\varphi}(G))$ if and only if \begin{equation}\label{eq:Conj_phi} \varphi^n = Ad_{u_n\varphi(u_{n-1})\cdots \varphi^{n-1}(u_1)}. \end{equation} \end{pro} \begin{proof} Suppose that $Conj_{\varphi}(G)$ is $n$--stable and $\{u_i\}_i\in \cal S^n(Conj_{\varphi}(G))$. Then for all $g\in G$, we have \[ \begin{array}{ll} g\rRack (u_i) & = \left[\prod_{i=0}^{n-1}\varphi^i(u_{n-i}) \right]\varphi^n(g)\left[\prod_{i=0}^{n-1}\varphi^{n-i}(u_{i+1})^{-1}\right] \\ & = g \end{array} \] so that $\varphi^n(g) = Ad_{\prod_{i=0}^{n-1}\varphi^i(u_{n-i})}(g)$; therefore $\varphi^n$ is an inner automorphism. Conversely, if $\varphi^n=Ad_u$ for some $u\in G$, then it is clear that the family $\{1, \ldots, 1, u\}$ is in $\cal S^n(Conj_{\varphi}(G))$. \end{proof} As a consequence we have the following. \begin{cor} For any group $G$, the conjugation quandle $Conj(G)$ is $n$--stable for all $n\geq 1$. \end{cor} We shall note that the abelian case is related to the torsion in the automorphism group, as shown by the following. \begin{pro} Let $G$ be a non-trivial abelian group and $\varphi$ an automorphism of $G$. Define the quandle $G_{\varphi}$ as the set $G$ together with the operation \[ g\rRack h = \varphi(g) + (\operatorname{id} - \varphi)(h), \ g, h \in G. \] Then the following are equivalent. \begin{itemize} \item[(i)] $G_{\varphi}$ is finitely stable. \item[(ii)] $\varphi$ is a torsion element of $Aut(G)$, the automorphism group of $G$. \end{itemize} \end{pro} \begin{proof} For every finite subset $\{u_i\}_{i=1}^n$ of $G$ and every $g\in G$, we have \[ g\rRack (u_i)_{i=1}^n = \varphi^n(g) + (\operatorname{id}-\varphi)\sum_{i=1}^n\varphi^{n-i}(u_i). \] It follows that $\{u_i\}_{i=1}^n$ is a stabilizing family of order $n$ for $G_{\varphi}$ if and only if $\sum_{i=1}^n\varphi^{-i}(u_i) = 0$ and $\varphi^n=\operatorname{id}$. \end{proof} \section{The $n$-pivot of a group}\label{npivot} Let $G$ be a group and $n$ a positive integer.
The $n$--\emph{core} of $G$ was defined by Joyce in his thesis~\cite{Joyce:Thesis} as the subset of the Cartesian product $G^n$ consisting of all tuples $(x_1, \ldots, x_n)$ such that $x_1\cdots x_n = 1$. Moreover, the $n$-core of $G$ has a natural quandle structure defined by the formula \begin{eqnarray}\label{eq1:n-core} (x_1, \ldots, x_n)\rRack (y_1, \ldots, y_n) := (y_n^{-1}x_ny_1, y_1^{-1}x_1y_2, \ldots, y_{n-1}^{-1}x_{n-1}y_n) \end{eqnarray} In this section we define the $n$-\emph{pivot} of a group as a generalisation of the $n$-core and investigate its relationship with the center of racks. \begin{df} Let $G$ be a group. For $n\in \bb N$, we define the \emph{$n$-pivot} of $G$ to be the subset of $G^n$ given by \[ \cal P^n(G) = \{(x_1, \ldots, x_n)\in G^n \mid x_1\cdots x_n \in Z(G)\}, \] where, as usual, $Z(G)$ is the center of $G$. \end{df} It is straightforward to see that the formula~\eqref{eq1:n-core} defines a quandle structure on $\cal P^n(G)$. Moreover, we have the following result. \begin{pro}\label{pro:core_n} Let $G$ be a non-trivial group. Then for all $n\in \bb N$, there is a bijection $$\cal S^n(Conj(G)) \cong \cal P^n(G).$$ In particular, $\cal S^n(Conj(G))$ is naturally equipped with a quandle structure. Furthermore, if $G$ has a trivial center, then $\cal S^2(Conj(G)) = Core(G)$. \end{pro} \begin{proof} We have already noticed that $Conj(G) = Conj_{\operatorname{id}}(G)$. It follows from Proposition~\ref{pro:Conj_phi} that $\{u_i\}_{i=1}^n\in \cal S^n(Conj(G))$ if and only if \[ \operatorname{id} = Ad_{u_n\cdots u_1}, \] which is equivalent to $(u_1,\ldots, u_n)\in \cal P^n(G)$; hence the bijection $$\cal S^n(Conj(G)) \cong \cal P^n(G).$$ Next, if $G$ has trivial center, then for all $n\geq 2$, $\cal S^n(Conj(G))$ coincides with the $n$--core of $G$. In particular we have $\cal S^2(Conj(G)) = Core(G)$. \end{proof} \section{Rack actions and approximate units} \label{rackaction} In this section we give a more elaborate definition of rack actions and investigate some of their properties. Recall that (see for instance~\cite{Elhamdadi-Moutuou:Foundations_Topological_Racks}) a \emph{rack action} of a rack $X$ on a space $M$ consists of a map $M\times X \ni (m,x) \mto m\cdot x \in M$ such that \begin{itemize} \item[(i)] for all $x\in X$, the induced map $M\ni m \mto m\cdot x \in M$ is a bijection; and \item[(ii)] for all $m\in M, x,y\in X$ we have \begin{equation}\label{eq1:rack_action} (m\cdot x)\cdot y = (m\cdot y)\cdot (x\rRack y). \end{equation} \end{itemize} If $\{x_i\}_{i=1}^s$ is a family of elements in $X$, we will write $m\cdot(x_i)_i$ for \[ (\cdots (m\cdot x_1)\cdot \ \cdots )\cdot x_s. \] In this work, we require additional axioms that generalize in an appropriate way the concept of a group action. Precisely, we give the following. \begin{df}\label{df:rack_action} An action of the rack $(X,\rRack)$ on the space $M$ consists of a map $M\times X \ni (m, x)\mto m\cdot x\in M$ satisfying equation~\eqref{eq1:rack_action} and such that for all $\{u_1, \ldots, u_s\} \in \cal S(X)$, \begin{equation} m\cdot (u_i)_i = m\cdot (u_{\sigma(i)})_i, \ \ \forall m\in M, \end{equation} for every cycle $\sigma$ in the subgroup of $\fr S_s$ generated by the cycle $(2\ \ldots\ s\ 1)$. \end{df} We now illustrate this definition; in particular, the following example gives the main motivation behind the added axiom. \begin{ex} Let $G$ be a group acting (on the right) on a space $M$. We then get a rack action of $Conj(G)$ on $M$ by setting $m \cdot g:=mg^{-1}$.
It is easy to check that for all $\{g_i\}_i \in \cal P^n(G)$, we have \[ m\cdot (g_i)_i= m \cdot (g_{\sigma(i)})_i, \] for all $m \in M$ and $\sigma$ in the subgroup generated by the cycle $(2\ \ldots\ n\ 1)$. \end{ex} Note however that a rack action of $Conj(G)$ does not necessarily define a group action of $G$. \begin{lem}\label{lem:action_stabilizing} Let $X$ be a finitely stable rack acting on a set $M$. Then for any stabilizing family $\{u_i\}_{i=1}^s$, we have \[ (m\cdot x)\cdot (u_i)_i = (m\cdot (u_i)_i)\cdot x, \] for all $m\in M, x\in X$. \end{lem} \begin{proof} It is easy to verify that, thanks to~\eqref{eq1:rack_action}, for every finite subset $\{x_i\}_i \subset X$, we have \begin{eqnarray}\label{eq:action_family} (m\cdot x)\cdot (x_i)_i = (m\cdot (x_i)_i)\cdot (x\rRack (x_i)_i), \end{eqnarray} for all $m\in M, x\in X$. The result is therefore obvious if we take a stabilizing family $\{u_i\}_i$. \end{proof} \begin{df} Let $X$ be a rack acting on a non-empty set $M$. For $\{x_i\}_{i=1}^s\subset X$, we define \begin{itemize} \item[(i)] the \emph{orbit} of $\{x_i\}_i$ as \[ M\cdot (x_i)_i = \{m\cdot (x_i)_{i=1}^s \mid m\in M\}. \] \item[(ii)] the \emph{fibre} of $M$ at $\{x_i\}_i$ to be \[ M_{\{x_i\}_i}=\{m\in M \mid m\cdot (x_i)_i = m\} \subset M\cdot (x_i)_i. \] \end{itemize} \end{df} \begin{df} Let $X$ and $M$ be as above. For $m\in M$, we define the \emph{stabilizer} of $m$ to be \[ X[m] = \{x\in X \mid m\cdot x = m\} \] \end{df} The following is straightforward. \begin{lem} Let $X$ and $M$ be as above. Then all stabilizers are subracks of $X$. \end{lem} \begin{df} A rack action of $X$ on a nonempty set $M$ is {\em faithful} if for each $m\in M$, the map $X\ni x\mto m\cdot x\in M$ is one-to-one. \end{df} \begin{lem}\label{lem:faithful_action} Let $X$ be a nonempty set. A rack structure $\rRack$ on $X$ is trivial if and only if $(X,\rRack)$ acts faithfully on a nonempty set $M$ and the action satisfies $(m\cdot x)\cdot y = (m\cdot y)\cdot x$ for all $x,y\in X, m\in M$. \end{lem} \begin{proof} If $X$ acts faithfully on a nonempty set $M$ and $(m\cdot x)\cdot y= (m\cdot y)\cdot x$, then \[ (m\cdot x)\cdot y = (m\cdot y)\cdot x = (m\cdot x)\cdot (y\rRack x ), \ \forall x,y\in X, m \in M; \] therefore, by faithfulness, $y\rRack x=y$ for all $x,y\in X$, hence the rack structure is trivial. Conversely, suppose the rack structure on $X$ is trivial. Then one defines a faithful rack action of $X$ on the set $\bb CX$ of all complex-valued functions $\alpha:X\rTo \bb C$ by setting \[ \alpha\cdot x:= \alpha +\delta_x, \ \alpha\in \bb CX, x\in X, \] where, as usual, $\delta_x$ is the characteristic function of $x$. \end{proof} \begin{df} Let $X$ be a rack acting on a non-empty set $M$. \begin{itemize} \item[(i)] An {\em approximate unit} for the rack action is a finite subset $\{t_i\}_{i=1}^r\subset X$ such that $m\cdot (t_i)_i=m$ for all $m\in M$. \item[(ii)] An element $t\in X$ is an $r$--{\em unit} for the rack action if the family $\{t\}_{i=1}^r$ is an approximate unit for the rack action. \item[(iii)] The rack action of $X$ on $M$ is called {\em $r$--periodic} if each $t\in X$ is an $r$--unit for the rack action. \item[(iv)] A rack action is said to be {\em strong} if every stabilizing family of the rack is an approximate unit. \end{itemize} \end{df} \begin{ex} Let $(X, \rRack)$ be a rack. Then we get a rack action of $X$ on its underlying set by defining $m\cdot x := m\rRack x, \ m, x\in X$.
It is immediate that a finite subset $\{t_i\}_i\subset X$ is an approximate unit for this rack action if and only if it is a stabilizing family for the rack structure; hence it is a strong action. Furthermore, this action is $r$--periodic if and only if $X$ is an $r$--rack in the sense of Joyce~\cite{Joyce:Thesis}. \end{ex} \begin{lem}\label{lem:strong_action} Let $X$ be a finite rack acting strongly on the set $M$. Let $x\in X$ be of order $k$ in $X$. Then, every element $m\cdot (x_i)_{i=1}^r$, where $x$ appears $k$ times in the sequence $x_1,\ldots, x_r$, can be written as $m\cdot (y_j)_{j=1}^{r-k}$. \end{lem} \begin{proof} Since $X$ is finite, every element $x\in X$ has a finite order; that is, there exists $k$ such that $x$ is a $k$--stabilizer for $X$. Hence, since the action is strong, $m\cdot ^kx=m$ for all $m\in M$. Now the result follows by applying~\eqref{eq1:rack_action} to $m\cdot (x_i)_{i=1}^r$ repeatedly until the $x$'s appear next to each other. \end{proof} \section{Twisted actions and rack cross-products}\label{twist} In this section we examine the case of rack actions on racks. This generalizes the construction of extensions of racks using the notion of dynamical cocycles \cite{And-Grana}. Specifically, let $X$ and $Q$ be racks whose rack structures are indistinguishably denoted by $\rRack$. A rack action $Q\times X \ni (p,x)\mto p\cdot x \in Q$ is called an {\em $X$-action by rack automorphisms} on $Q$ if for each $x\in X$, the induced bijection \[ Q\ni p\mto p\cdot x\in Q \] is a rack automorphism of $Q$. Throughout, given two sets $A$ and $B$, $\cal F(A,B)$ will denote the set of all maps from $A$ to $B$. \begin{df}\label{def:X-cocycle} Let $X$ be a rack acting by automorphisms on the rack $Q$. An $X$--{\em cocycle} on $Q$ is a map \[ \begin{array}{lccc} \partial :& X\times X & \rTo & \cal F(Q\times Q, Q)\\ & (x,y) & \mto & \partial_{x,y} \end{array} \] such that \begin{itemize} \item[(i)] for each $x,y\in X$ and each $q\in Q$, the induced map $$\partial_{x,y}(-, q):Q \ni p\mto \partial_{x,y}(p,q)\in Q$$ is a rack automorphism; \item[(ii)] (Equivariance) for all $x,y\in X$ and $p,q\in Q$, we have \begin{eqnarray}\label{eq1:X-cocycle} \partial_{x,y}(p\cdot t, q)=\partial_{x,y}(p,q)\cdot (t\rRack y), \end{eqnarray} for all $t\in X$; \item[(iii)](Cocycle condition) for all $x,y,z\in X$ and all $p,q,r\in Q$, the following relation holds \begin{eqnarray}\label{eq2:X-cocycle} \partial_{x\rRack y, z}(\partial_{x,y}(p,q),r) = \partial_{x\rRack z, y\rRack z}(\partial_{x,z}(p,r),\partial_{y,z}(q,r)). \end{eqnarray} \end{itemize} \end{df} \begin{df} Let $X$ be a rack. A {\em twisted $X$--action} on a rack $Q$ is a rack action of $X$ by automorphisms on $Q$ together with an $X$--cocycle on $Q$. The triple $(Q,X,\partial)$ will be called a {\em twisted rack dynamical system}. \end{df} We shall notice that the idea of {\em extension by dynamical cocycle} introduced by Andruskiewitsch and Gra\~{n}a in~\cite{And-Grana} is a special case of a twisted dynamical system in our sense. Indeed, when $Q$ is endowed with the trivial rack structure ({\em i.e.}, $p\rRack q=p$) and the trivial $X$--action, relation~\eqref{eq1:X-cocycle} is trivially satisfied. \begin{ex}\label{eq:twisted_cocycle} If $X$ acts by rack automorphisms on the rack $Q$, then define the map $\partial:X\times X\rTo \cal F(Q\times Q, Q)$ by \[ \partial_{x,y}(p,q):=p\cdot y, \ {\rm for \ } p,q\in Q. \] It is easy to check that $(Q,X,\partial)$ is a twisted rack dynamical system.
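Indeed, the equivariance condition~\eqref{eq1:X-cocycle} holds because $\partial_{x,y}(p\cdot t,q)=(p\cdot t)\cdot y=(p\cdot y)\cdot (t\rRack y)=\partial_{x,y}(p,q)\cdot (t\rRack y)$ by~\eqref{eq1:rack_action}, and the cocycle condition~\eqref{eq2:X-cocycle} reduces to the identity $(p\cdot y)\cdot z=(p\cdot z)\cdot (y\rRack z)$, which is again~\eqref{eq1:rack_action}.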
\end{ex} \begin{df} Let $(Q,X,\partial)$ be a twisted rack dynamical system. The {\em rack cross-product} $Q\rtimes_\partial X$ of $Q$ by $X$ is the Cartesian product $Q\times X$ together with the binary operation \begin{eqnarray}\label{eq:semi_direct} (p,x)\rRack_\partial (q,y) = (\partial_{x,y}(p,q),x\rRack y), \end{eqnarray} for $(p,x), (q,y)\in Q\times X$. \end{df} \begin{lem} Let $X$ and $Q$ be as above. Then, together with the operation~\eqref{eq:semi_direct}, $Q\rtimes_\partial X$ is a rack. \end{lem} \begin{proof} Left to the reader. \end{proof} One can easily verify that, given a twisted rack dynamical system $(Q,X,\partial)$, if the cross-product $Q\rtimes_\partial X$ is finitely stable, then so is $X$, but $Q$ may not be so. Precisely, we have the following. \begin{pro} Let $(Q,X,\partial)$ be a twisted rack dynamical system. Then the rack cross-product $Q\rtimes_\partial X$ is finitely stable if and only if there exists a finite family $(\xi_i, t_i)_{i=1}^n$ of elements in $Q\times X$ such that $\{t_i\}_i\in \cal S(X)$ and such that the function \[ p\mto \partial_{x\rRack (t_i)_{i=1}^{n-1},t_n}(\partial_{x\rRack(t_i)_{i=1}^{n-2},t_{n-1}}(\ldots (\partial_{x\rRack t_1,t_2}(\partial_{x,t_1}(p,\xi_1),\xi_2))\ldots),\xi_n) \] is the identity function on $Q$, for all $x\in X$. \end{pro} \begin{ex} Let $(\cal H,\pi)$ be a representation of a finite group $G$ with non-trivial center. Denote $\xi g:=\pi_g(\xi)$ for $\xi\in \cal H, g\in G$. Equip $\cal H$ with the quandle structure $\xi\rRack \eta:=\frac{\xi + \eta}{2}$. Then $Conj(G)$ acts by rack automorphisms on $\cal H$ under the operation $\xi\cdot g:=\xi g^{-1}$. Fix a non-zero vector $v_0\in \cal H$, and let $\zeta_0:=\frac{1}{\#G}\sum_{g\in G}v_0g$. Then $\zeta_0\cdot h = \zeta_0$ for all $h\in G$. Let $t\neq 1$ be a solution of $x^3-1=0$ in $\bb C$, and for $g,h\in G$, let $\partial_{g,h}:\cal H\times \cal H\rTo \cal H$ be the function given by \[ \partial_{g,h}(\xi,\eta) :=t\cdot \xi h^{-1} + (1-t)\zeta_0, \ \xi,\eta\in \cal H. \] One can verify that the triple $(\cal H, Conj(G), \partial)$ satisfies all the axioms of a twisted rack dynamical system. Furthermore, $\cal H\rtimes_\partial Conj(G)$ is finitely stable, although $\cal H$ is not finitely stable, thanks to Theorem~\ref{thm:Alexander}. To see this, let $g_1, g_2\in Z(G)$ and $g_3=(g_1g_2)^{-1}$, so that $(g_1,g_2,g_3)\in \cal S^3(Conj(G))$. Let $\eta_1,\eta_2,\eta_3\in \cal H$. Then for all $g\in G$, we get \[ \begin{array}{lcl} \partial_{(g\rRack g_1)\rRack g_2,g_3}(\partial_{g\rRack g_1,g_2}(\partial_{g,g_1}(\xi,\eta_1),\eta_2),\eta_3) & = & \xi + (1-t)(t^2+t+1)\zeta_0 \\ & = & \xi \end{array} \] for all $\xi \in \cal H$; which implies that $(\eta_i,g_i)_i\in \cal S^3(\cal H\rtimes_\partial Conj(G))$. \end{ex} As a consequence of the above proposition, we have \begin{cor} Let $X$ be a rack acting by rack automorphisms on the rack $Q$. Let $Q$ be equipped with the $X$-cocycle $\partial$ of Example~\ref{eq:twisted_cocycle} ({\em i.e.} $\partial_{x,y}(p,q)=p \cdot y$). Then $Q\rtimes_\partial X$ is finitely stable if and only if there exists $\{t_i\}_i\in \cal S(X)$ which is an approximate unit for the rack action of $X$ on $Q$. \end{cor} \begin{ex} Recall from~\cite{Elhamdadi-Moutuou:Foundations_Topological_Racks} that a left ideal in a rack $X$ is a subrack $T$ of $X$ such that $T\rRack X\subseteq T$. Let $T$ be such a left ideal. Then the operation $T\times X\ni (t,x)\mto t\rRack x\in T$ defines an action of $X$ by rack automorphisms on $T$.
Let $\partial_{x,y}(t,s):=t\rRack y$. Then the cross-product $T\rtimes_\partial X$ is finitely stable if and only if $X$ is. \end{ex} \section{$X$--bundle of racks}\label{Xbundle} In this section we examine the case of families of rack structures on a set parametrized by a given rack. More specifically, we introduce bundles of racks, which are a generalization of the notion of $G$--{\em families of quandles} as defined in~\cites{Ishii:G-family_Quandles, Nosaka:Quandles_Cocycles}. \begin{df} Let $X$ be a rack. An $X$--{\em bundle of racks} $(\cal A,\star_x^y)_{x,y\in X}$ consists of a set $\cal A$ and a family of rack structures $\star_x^y$ on $\cal A$ such that \begin{eqnarray}\label{eq:X-bundle} (a\star_x^yb)\star_{x\rRack y}^z c = (a\star_x^zc)\star_{x\rRack z}^{y\rRack z} (b\star_y^z c), \end{eqnarray} for all $a,b,c\in \cal A$ and all $x,y,z\in X$. \end{df} \begin{lem} Let $(Q,X,\partial)$ be a twisted rack dynamical system. Then the family $(Q,\partial_{x,y}(-, -))_{x,y\in X}$ is an $X$--{\em bundle of racks} if and only if for all $x,y\in X$ we have \begin{eqnarray}\label{eq:Cocycle-vs-X-bundle} \partial_{x,y}(\partial_{x,y}(p,q),r) = \partial_{x,y}(\partial_{x,y}(p,r), \partial_{x,y}(q,r)), \forall p,q,r\in Q. \end{eqnarray} \end{lem} \begin{proof} The identity~\eqref{eq:Cocycle-vs-X-bundle} means that the operation $p\star_x^yq:=\partial_{x,y}(p,q)$ defined on $Q$ is self-distributive. Moreover, the invertibility of the right multiplication $$-\star_x^y q: Q\ni p \mto p\star_x^y q\in Q$$ is automatic from Definition~\ref{def:X-cocycle}. Also, notice that equation~\eqref{eq:X-bundle} is satisfied through~\eqref{eq2:X-cocycle}. \end{proof} \begin{df} Let $X$ and $Y$ be racks, and $(\cal A, \star_x^y)_{x,y\in X}$ be an $X$--bundle of racks. For any rack morphism $f:Y\rTo X$, we define the {\em pull-back} of $(\cal A, \star_x^y)_{x,y\in X}$, denoted by $(f^\ast \cal A, \star_u^v)_{u,v\in Y}$, as the set $\cal A$ equipped with the family of binary operations $\star_u^v, u,v\in Y$, \[ a\star_u^vb:=a\star_{f(u)}^{f(v)}b, \ \ u, v\in Y, a,b\in \cal A. \] \end{df} \begin{lem} Let $X$, $Y$, $(\cal A, \star_x^y)_{x,y\in X}$, and $f:Y\rTo X$ be as above. Then the pull-back $(f^\ast \cal A, \star_u^v)_{u,v\in Y}$ is a $Y$--bundle of racks. \end{lem} As we already mentioned at the beginning of the section, $G$--families of quandles are special cases of our bundles of racks. Specifically, we have the following example. \begin{ex} Recall from~\cite{Ishii:G-family_Quandles} that given a group $G$, a $G$--{\em family of quandles} is a non-empty set $X$ together with a family of quandle structures $\rRack^g$, indexed by $G$, on $X$, satisfying the following equations: \begin{itemize} \item $x\rRack^{gh}y = (x\rRack^gy)\rRack^hy$, and $x\rRack^ey=x$, for all $x,y\in X, \ g,h\in G$; \item $(x\rRack ^gy)\rRack^hz = (x\rRack^hz)\rRack^{h^{-1}gh}(y\rRack^hz)$, for all $x,y,z\in X, g,h\in G$. \end{itemize} Now, given such a $G$--family of quandles, we construct the $Conj(G)$--bundle of racks $(X,\star_g^h)_{g,h\in G}$ by letting \[ x\star_g^hy:= x\rRack^hy, x,y\in X, g,h\in G. \] \end{ex} We shall also note that our definition of bundles of racks generalizes the notion of $Q$--family of quandles (where $Q$ is a quandle) defined in~\cite[p.819]{Ishii:G-family_Quandles}. Indeed, by using the same construction as in the above example, one can easily show that any $Q$--family of quandles defines a $Q$--bundle of racks (quandles).
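We note that the pull-back lemma above is immediate: since $f$ is a rack morphism, $f(u\rRack v)=f(u)\rRack f(v)$ for all $u,v\in Y$, so condition~\eqref{eq:X-bundle} for $(f^\ast \cal A, \star_u^v)_{u,v\in Y}$ follows by evaluating the corresponding condition for $(\cal A, \star_x^y)_{x,y\in X}$ at the points $f(u), f(v), f(w)\in X$.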
\section{Rack representations}\label{rackrep} In this section we introduce the first elements of representation theory for abstract racks and quandles. \begin{df} A \emph{representation} of a rack $X$ is a vector space $V$ equipped with a rack action of $X$ such that for all $x \in X$, the induced map $V \ni v \mapsto v \cdot x \in V$ is an isomorphism of vector spaces. Equivalently, a representation of $X$ consists of a vector space $V$ and a rack homomorphism \[ \pi: X\rTo Conj(GL(V)); \] {\em i.e.}, $\pi_{x\rRack y} = \pi_y\pi_x\pi_y^{-1}, \ \forall x, y\in X$. The map $\pi$ will be denoted by $\pi^V$ when there is likely to be confusion. \end{df} \begin{ex} Every representation $(V,\rho)$ of a group $G$ naturally defines a representation of the quandle $Conj(G)$. \end{ex} Let us give an example of a rack representation analogous to the regular representation in group theory. \begin{ex}\label{ex:regular} Let $X$ be a finite rack and let $\bb C X$ be the vector space of complex valued functions on $X$, seen as the space of formal sums $f = \sum_{x\in X} a_xx$, where $a_x\in \bb C, x\in X$. We construct the {\em regular representation} of the rack $X$ \[ \lambda : X\rTo Conj(GL(\bb CX)) \] by $\lambda_t(f)(x):=f(R_t^{-1}(x))$, where as usual $R_t$ is the right ``translation'' $X\ni y\rTo y\rRack t\in X$. \end{ex} \begin{df} Let $V$ and $W$ be representations of the rack $X$. A linear map $\phi: V\rTo W$ is called $X$-\emph{linear} if for each $x\in X$ the following diagram commutes \[ \xymatrix{ V \ar[r]^{\pi^V_x} \ar[d]_{\phi} & V \ar[d]^{\phi} \\ W \ar[r]^{\pi^W_x} & W } \] If in addition $\phi$ is an isomorphism, the two representations $V$ and $W$ are said to be \emph{equivalent}, in which case $\phi$ is called an {\em equivalence of rack representations}. \end{df} If $V$ is a representation of the rack $X$, a subspace $W\subset V$ is called a {\em subrepresentation} if for every finite family $\{x_i\}_i\subseteq X$, $W\cdot (x_i)_i \subseteq W$. We have the following immediate observation. \begin{lem}\label{lem:subrep_Ker_Im} Let $V$ and $W$ be representations of the rack $X$ and let $\phi:V\rTo W$ be a linear map. If $\phi$ is $X$-linear, then $\ker \phi$ and ${\rm Im\ } \phi$ are subrepresentations of $V$ and of $W$, respectively. \end{lem} \begin{df} A rack representation $V$ of $X$ is said to be {\em irreducible} if it has no proper subrepresentation; {\em i.e.}, if $W$ is a subrepresentation of $V$, then either $W=\{0\}$ or $W=V$. \end{df} \section{Strong Representations}\label{strong_rep} \begin{df} A {\em strong representation} of $X$ is a representation $V$ such that the rack action is strong. We denote by $Rep_s(X)$ the set of equivalence classes of strong irreducible finite dimensional representations of $X$. \end{df} One can check that, as in the group case, $Rep_s(X)$ is an abelian group under tensor product of strong irreducible representations. \begin{rmk} In view of Proposition~\ref{pro:core_n}, we notice that if $G$ is an abelian group, the only strong representation of $Conj(G)$ is the trivial one. \end{rmk} \begin{ex} The regular representation $ (\bb{C}X, \lambda)$ of a rack $X$ defined in Example \ref{ex:regular} is clearly strong. \end{ex} \begin{ex} Let $(\bb{Z}_3, \rRack)$, with $x \rRack y=2y-x$, be the dihedral quandle. Define $\rho: \bb{Z}_3 \rTo Conj(GL(\bb{C}^3))$ as the rack representation induced by the reflections on $\bb C^3=Span\{e_1,e_2,e_3\}$ \[ \rho_0= (2 \; 3), \; \rho_1=(1 \; 3),\; \text{and}\; \rho_2=(1 \; 2).
\] Then $\rho$ is a strong representation of $\bb{Z}_3$ as a quandle, although it is clearly not a group representation. Note, however, that this is a reducible representation; indeed, the one-dimensional subspace spanned by $(1,1,1)$ is a subrepresentation. \end{ex} \begin{ex}\label{ex:involution_rep} Suppose $X$ is an involutive rack, that is, $(x\rRack y)\rRack y=x$ for all $x,y\in X$; which means each $y\in X$ is a $2$--stabilizer for $X$. Then every pair $(V,\tau)$, where $V$ is a vector space and $\tau:V\rTo V$ is a linear involution, gives rise to a strong representation $\tilde{\tau}:X\rTo Conj(GL(V))$ by setting $\tilde{\tau}_x(v)=\tau(v)$ for all $x\in X, v\in V$. \end{ex} In fact we have the following. \begin{lem}\label{lem:inv_rep} Let $X$ be an involutive rack. The assignment $(V,\tau)\mto \tilde{\tau}$ defines a covariant functor from the category $\cal V_{inv}$, whose objects are pairs $(V,\tau)$ of vector spaces equipped with involutions and whose morphisms are vector space morphisms intertwining the involutions, to the category of strong representations of $X$. \end{lem} \begin{pro} Suppose the rack $X$ is finite, involutive, and connected; that is, $X$ has only one orbit. Then the regular representation of $X$ corresponds to a conjugacy class of the symmetric group $\frak S_n$, where $n={\#X}$. More generally, if $X$ has $k$ connected components, then the regular representation corresponds to $k$ conjugacy classes in $\frak S_n$. \end{pro} \begin{proof} Since the representation $\lambda\colon X\rTo Conj(GL(\bb CX))$ is strong and $X$ is connected, the involutions $\lambda_t:\bb CX\rTo \bb CX$ are all conjugate, hence they define a partition of the dimension $n$ of $\bb CX$. The result then follows from the correspondence between partitions of $n$ and conjugacy classes of the symmetric group. \end{proof} \begin{thm}\label{thm:irr-stable} Let $X$ be a finite rack. Then every irreducible strong representation of $X$ is either trivial or finite dimensional. \end{thm} \begin{proof} Let $X$ be a finite rack of cardinality $n$ and let $(V,\pi)$ be a nontrivial irreducible strong representation of $X$. Then, since every element $x\in X$ is a $k$--stabilizer for $X$ where $k$ is a factor of $n!$, we have $\pi_x^{n!}=\operatorname{id}_V$ for each $x\in X$. Fix a non-zero vector $v\in V$ and let $E_v$ be the subspace \[ {\rm Span}\left\{v\cdot (x_i)_{i=1}^s\mid s=0,\ldots, (n+1)!, \{x_i\}_{i=1}^s\subseteq X\right\} \] of $V$, where we have used the convention that $v\cdot \emptyset =v$. Then $E_v$ is a finite dimensional complex vector space. Furthermore, $E_v$ is invariant under the rack $X$--action, thanks to Lemma~\ref{lem:strong_action} and to the fact that any sequence of $(n+1)!$ elements in $X$ has at least one element repeated at least $n!$ times. This means that $E_v$ is a subrepresentation of $V$. Therefore, since $V$ is irreducible and $E_v\neq \{0\}$, we have $V=E_v$. \end{proof} We shall observe that in the case of involutive racks the above result becomes more precise with regard to the dimension of the irreducible representations. \begin{lem}\label{lem:involutive_rack} Suppose $X$ is an involutive rack (finite or infinite). Every irreducible strong representation of the form $(V,\tilde{\tau})$ coming from the category $\cal V_{inv}$ is one-dimensional. \end{lem} \begin{proof} Let $V_+:=\{v\in V\mid \tau(v)=v\}$.
If $V_+$ is a nonzero space and $v_0$ is a nonzero vector in $V_+$, then the subspace $Span\{v_0\}$ is clearly a subrepresentation of $V$, hence $V$ is one-dimensional. On the other hand, if $V_+=\{0\}$, then $\tau(v)=-v$ for all $v\in V$. Hence each nonzero vector $v \in V$ spans a subrepresentation of $V$. Therefore, since $V$ is irreducible, it spans $V$. \end{proof} \begin{cor}\label{cor1:involutive_rack} Let $X$ be a connected involutive rack. If $(V,\pi)$ is a strong irreducible representation satisfying the property that there exists an $x_0\in X$ such that $V^{x_0}_+=\{v\in V\mid \pi_{x_0}(v)=v\}=0$, then $V$ is one-dimensional. \end{cor} \begin{proof} We write $V=V^{x_0}_+\oplus V^{x_0}_-$, where the second summand is the subspace of $V$ consisting of vectors $v$ such that $\pi_{x_0}(v)=-v$. The assumption amounts to saying $\pi_{x_0}=-\id_V$. Now, since $X$ is connected, every $z\in X$ can be written as $z=x_0\rRack y$ for some $y\in X$. Hence $\pi_z=\pi_y\pi_{x_0}\pi_y$, which means $\pi_z$ and $\pi_{x_0}$ are conjugate involutions on $V$. Therefore the representation $(V,\pi)$ comes from a fixed involution as in the lemma. \end{proof} \begin{thm}\label{thm:strong_rep_connected} Every irreducible strong representation of an involutive connected finite rack $X$ is one-dimensional. \end{thm} \begin{proof} Write $X=\{x_1,\ldots, x_n\}$. Let $(V,\pi)$ be a strong irreducible representation of $X$, and $\{e_1,\cdots, e_m\}$ be a basis for $V$. As in the proof of Corollary~\ref{cor1:involutive_rack}, since $\pi_x:V\rTo V$ is an involution for each $x\in X$, there are subspaces $V^{x_i}_+, i=1,\ldots, n$ of $V$ with the property that for every $i=1,\ldots,n$, $\pi_{x_i}(v)=v, \forall v\in V^{x_i}_+$; specifically, $V^{x_i}_+$ is the eigenspace of $\pi_{x_i}$ associated to the eigenvalue $1$. If one of the $V^{x_i}_+$'s is trivial, then so are all of the others, and $V$ is one-dimensional, thanks to Corollary~\ref{cor1:involutive_rack}. Suppose then that $V^{x_i}_+\neq \{0\}$ for all $i=1,\ldots, n$. Let $V_+:=\sum_{i=1}^nV^{x_i}_+$ be the subspace of $V$ spanned by $\bigcup_iV^{x_i}_+$. Then $V_+$ is invariant under the rack action by $X$. Indeed, for all $t\in X$, the restriction map $\pi_t:V^{x_i}_+\rTo V^{x_i\rRack t}_+$ is an isomorphism of vector spaces. It follows that $V_+$ is a subrepresentation of $V$. Therefore, since $V$ is irreducible and $V_+\neq \{0\}$, we have $V=V_+$. We thus have that $\cap_{k=1}^nV^{x_k}_+ \neq \{0\}$. Indeed, since $X$ is connected, we may fix $t_1, \ldots, t_{n-1}\in X$ such that $x_{i+1}=x_i\rRack t_i,\ i=1, \ldots, n-1$. Let $\varphi\colon V\rTo V$ be the morphism of vector spaces given by \[ \varphi:=\operatorname{id}+\pi_{t_1}+\pi_{t_2}\pi_{t_1}+ \cdots + \pi_{t_{k-1}}\cdots \pi_{t_1} +\cdots + \pi_{t_{n-1}}\cdots \pi_{t_1} \] Then for $k=1, \ldots, n$, we have \[ \varphi(V^{x_k}_+) = V^{x_k}_+ + \cdots + V^{x_1}_+ + \cdots + V^{((x_1\rRack t_2)\rRack \cdots )\rRack t_{n-1}}_+ \] which means that $V^{x_1}_+$ is in the intersection of all the spaces $\varphi(V^{x_k}_+)$; hence $\cap_k\varphi(V^{x_k}_+) \neq \{0\}$. Now, any nonzero vector $v_0\in \cap_kV^{x_k}_+$ will be invariant under $\pi$, therefore it will span $V$. \end{proof} \begin{cor} If $G$ is a finite group, then every nontrivial irreducible strong representation of $Core(G)$ is one-dimensional. \end{cor} \section{Rack characters and duality}\label{duality} In this section we give an analogous construction of the {\em Pontryagin dual} for racks and quandles.
\begin{df} Let $X$ be a rack. A {\em character} of $X$ is a map $\phi$ on $X$ with values in the unitary group $U(1)$ such that $\phi(x\rRack y) = \phi(x)$; in other words, $\phi$ is a function from $X$ to $U(1)$ that is constant on the orbits of $X$. The set of all characters of $X$, called the {\em Pontryagin dual} of $X$, will be denoted by $D_qX$. \end{df} \begin{ex} As in the finite group case, any representation of a finite rack gives rise to a character of the rack. Indeed, suppose $(V,\pi)$ is a representation of a finite rack $X$. One defines $\phi_\pi:X\rTo U(1)$ by $\phi_\pi(x):=Tr(\pi_x)$. Then for all $x,y\in X$, we have $\phi_\pi(x\rRack y)=Tr(\pi_y\pi_x\pi_y^{-1})=Tr(\pi_x)=\phi_\pi(x)$. \end{ex} We shall observe the following. \begin{lem} For any rack $X$, $D_qX$ is an Abelian group under point-wise product, with inverse given by the point-wise conjugate. Moreover \[ D_qX\cong \bigoplus_{\# orb(x)}U(1), \] where for $x\in X$, $orb(x)$ is the set of all $z\in X$ such that $z=x\rRack y$ for some $y\in X$. \end{lem} \begin{thm} If $X$ is a finite involutive and connected rack, then there is an isomorphism of Abelian groups $Rep_s(X)\cong D_qX$. \end{thm} \begin{proof} Since $X$ is finite, involutive, and connected, by Theorem~\ref{thm:strong_rep_connected} a strong irreducible representation of $X$ is nothing but a character of $X$. Now, the map $(V,\pi) \mto \phi_\pi$ from the collection of all strong irreducible representations of $X$ to $D_qX$ satisfies $\phi_{\pi_1\otimes \pi_2}=\phi_{\pi_1}\phi_{\pi_2}$ and induces the desired isomorphism of Abelian groups. \end{proof} \begin{ex} Since $\bb Z$ is abelian, the rack structure of $Conj(\bb Z)$ is trivial, so $D_qConj(\bb Z)$ is the set of all $U(1)$--valued sequences. \end{ex} \begin{ex} Let $\bb Z_4$ be the dihedral quandle with two components $\{0,2\}$ and $\{1,3\}$. Then $D_q\bb Z_4 \cong U(1)\oplus U(1)$. \end{ex} \begin{ex} Let $G$ be an Abelian group. Then, since $Core(G[{\frac 12}])$ is a connected quandle, we have $$D_qCore(G[{\frac 12}]) \cong U(1).$$ \end{ex} More generally, we have the following for Alexander quandles. \begin{ex} Let $M$ be a $\bb Z[t,t^{-1}]$--module equipped with the Alexander quandle structure $x\rRack^ty:=(x-y)t + y$. Then $D_qM[\frac{1}{1-t}]\cong U(1)$. In particular, if $E$ is a complex or real vector space equipped with the Alexander quandle operation $\xi \rRack \eta = {\frac 12}(\xi+\eta)$, then $D_qE=U(1)$. \end{ex} \begin{pro} Let $G$ be an Abelian group generated by a finite set of cardinality $n$. Then \[ D_qCore(G) \cong U(1)^{2^n}. \] \end{pro} \begin{proof} Indeed, for all $k\in \bb Z$ and all $x\in G$, we have $$\phi((2k+1)x) = \phi(-x\rRack kx) = \phi(-x), \ {\rm and \ } \phi(-x)=\phi(x\rRack 0)=\phi(x).$$ Hence, $\phi((2k+1)x) = \phi((2k-1)x) = \phi(x)$ for all $k\in \bb Z$ and all $x\in G$. Furthermore, $\phi(2kx) = \phi(0\rRack kx) = \phi(0), \forall k\in \bb Z$. It follows that if $G = \langle S\rangle\cup \langle -S\rangle$ and $S=\{s_1, \ldots, s_n\}$, then one can check that every character $\phi \in D_qCore(G)$ is completely determined by its $2^n$ values \begin{itemize} \item $\phi(0), \phi(s_1), \ldots, \phi(s_n)$; \item $ \phi(s_1+s_2), \ldots, \phi(s_1+s_n), \ldots, \phi(s_{n-1}+s_n)$; \item $\phi(s_1+s_2+s_3), \ldots, \phi(s_1 + s_2 + s_n)$; \item ... \item $\phi(s_1+ s_2 + \ldots +s_n)$. \end{itemize} in $U(1)$. This gives a bijection between $D_qCore(G)$ and the set of all maps from the power set of $S$ to $U(1)$.
\end{proof} \begin{ex}\label{pro:dual_Z_ce} Let $\bb Z_{ce}$ be $\bb Z$ equipped with the core structure $m\rRack n = 2n-m, \ m,n\in \bb Z$. Then $D_q\bb Z_{ce} = U(1)\oplus U(1)$. In particular, every character $\phi$ of $\bb Z_{ce}$ is completely determined by its two values $\phi(0)$ and $\phi(1)$. \end{ex} \begin{bibdiv} \begin{biblist} \bib{And-Grana}{article}{ author={Andruskiewitsch, N.}, author={Gra{\~n}a, M.}, title={From racks to pointed Hopf algebras}, journal={Adv. Math.}, volume={178}, date={2003}, number={2}, pages={177--243}, issn={0001-8708}, review={\MR {1994219 (2004i:16046)}}, doi={10.1016/S0001-8708(02)00071-3}, } \bib{Elhamdadi-Moutuou:Foundations_Topological_Racks}{article}{ author={Elhamdadi, M.}, author={Moutuou, E. M.}, title={Foundations of topological racks and quandles}, journal={J. Knot Theory Ramifications}, volume={25}, date={2016}, number={3}, pages={1640002, 17}, issn={0218-2165}, review={\MR {3475069}}, doi={10.1142/S0218216516400022}, } \bib{Elhamdadi-Nelson:Quandles_An_Introduction}{book}{ author={Elhamdadi, M.}, author={Nelson, S.}, title={Quandles---an introduction to the algebra of knots}, series={Student Mathematical Library}, volume={74}, publisher={American Mathematical Society, Providence, RI}, date={2015}, pages={x+245}, isbn={978-1-4704-2213-4}, review={\MR {3379534}}, } \bib{Hoste-Shanahan:n-Quandles}{article}{ author={Hoste, J.}, author={Shanahan, P. D.}, title={Links with finite $n$-quandles}, status={eprint}, note={\arxiv {math.GT/1606.08324}}, date={2016}, } \bib{Ishii:G-family_Quandles}{article}{ author={Ishii, A.}, author={Iwakiri, M.}, author={Jang, Y.}, author={Oshiro, K.}, title={A $G$-family of quandles and handlebody-knots}, journal={Illinois J. Math.}, volume={57}, date={2013}, number={3}, pages={817--838}, issn={0019-2082}, review={\MR {3275740}}, } \bib{Joyce:Thesis}{thesis}{ author={Joyce, D.}, title={An Algebraic Approach to Symmetry with Applications to Knot Theory}, type={phdthesis}, date={1979}, note={available electronically at \url {http://aleph0.clarku.edu/~djoyce/quandles/aaatswatkt.pdf}}, } \bib{Matveev}{article}{ author={Matveev, S. V.}, title={Distributive groupoids in knot theory}, language={Russian}, journal={Mat. Sb. (N.S.)}, volume={119(161)}, date={1982}, number={1}, pages={78--88, 160}, issn={0368-8666}, review={\MR {672410 (84e:57008)}}, } \bib{Nosaka:Quandles_Cocycles}{article}{ author={Nosaka, T.}, title={Quandle cocycles from invariant theory}, journal={Adv. Math.}, volume={245}, date={2013}, pages={423--438}, issn={0001-8708}, review={\MR {3084434}}, doi={10.1016/j.aim.2013.05.022}, } \end{biblist} \end{bibdiv} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Preliminaries}\label{prelim} First we establish notation, then we introduce the function spaces as well as the estimates used. \subsection{Notation} $a \lesssim b$ means $a \leq Cb$ for some positive constant $C$. A point in the $2+1$ dimensional Minkowski space is written as $(t,x)=(x^\alpha)_{0 \leq \alpha \leq 2}.$ Greek indices range from $0$ to $2$, and Roman indices range from $1$ to $2$. We raise and lower indices with the Minkowski metric $\diag(-1,1,1)$. We write $\partial_\alpha=\partial_{x^\alpha}$ and $\partial_t=\partial_0$, and we also use the Einstein summation convention. Therefore, $\partial^i \partial_i =\triangle,$ and $\partial^\alpha \partial_\alpha=-\partial^2_t +\triangle=\square$. When we refer to spatial and time derivatives of a function $f$, we write $\partial f$, and when we consider only spatial derivatives of $f$, we write $\nabla f$. Finally, $d$ denotes the exterior differentiation operator and $d^\ast$ its dual given by $d^\ast=(-1)^k\ast\ast\ast d \ast$, where $\ast$ is the Hodge $\ast$ operator (see for example \cite{Roe}) and $k$ comes from $d^\ast$ acting on some given $k$-form. It will be clear from the context when the $\ast$ and $d^\ast$ operators act with respect to the Minkowski metric and when with respect to the euclidean metric. For the convenience of the reader we include the following: with respect to the euclidean metric on $\mathbb R^2$ we have \[ \ast dx=dy,\quad \ast dy=-dx,\quad \ast 1=dx\wedge dy, \] and with respect to the $\diag(-1,1,1)$ metric on $\mathbb R^{2+1}$ \[ \ast dt=dx\wedge dy,\quad \ast dx=dt\wedge dy,\quad \ast dy=-dt\wedge dx. \] \subsection{Function Spaces}\label{spaces} We use Picard iteration. Here we introduce the spaces in which we are going to perform the iteration\footnote{We are also going to employ a combination of the standard $L^p_tW^{s,q}_x$ spaces for $A_0$. See Section \ref{ellipticpiece1}.}. First we define the following Fourier multiplier operators \begin{align} \widehat{\Lambda^\alpha f}(\xi)&=(1 + |\xi|^2)^\frac{\alpha}{2}\hat f(\xi),\\ \widehat{\Lambda^\alpha_+ u}(\tau,\xi)&=(1 + \tau^2+|\xi|^2)^\frac{\alpha}{2}\hat u(\tau,\xi),\\ \widehat{\Lambda^\alpha_- u}(\tau,\xi)&=\left(1 +\frac{ (\tau^2-|\xi|^2)^2}{1+\tau^2+|\xi|^2}\right)^\frac{\alpha}{2}\hat u(\tau,\xi), \end{align} where the symbol of $\Lambda^\alpha_-$ is comparable to $(1 + \big||\tau|-|\xi|\big|)^\alpha$. The corresponding homogeneous operators are denoted by $D^\alpha,D_+^\alpha,D_-^\alpha$ respectively. \newline\indent Now, the spaces of interest are the Wave-Sobolev spaces, $H^{s,\theta}$ and ${\mathcal H^{s,\theta}}$, given by\footnote{These spaces, together with results in \cite{SelbergEstimates}, allowed Klainerman and Selberg to present a unified approach to local wellposedness for Wave Maps, Yang-Mills and Maxwell-Klein-Gordon types of equations in \cite{KS}, and are now the natural choice for low regularity subcritical local wellposedness for wave equations. Also see \cite{taobook}.} \begin{align} \|u\|_{H^{s,\theta}}&=\| \Lambda^s\Lambda^\theta_-u \|_{L^2(\mathbb R^{2+1})},\\ \|u\|_{\mathcal H^{s,\theta}}&=\|u\|_{H^{s,\theta}}+\|\partial_tu\|_{H^{s-1,\theta}}. \end{align} An equivalent norm for ${\mathcal H^{s,\theta}}$ is $\|u\|_{{\mathcal H^{s,\theta}}}=\| \Lambda^{s-1}\Lambda_+\Lambda^\theta_-u \|_{L^2(\mathbb R^{2+1})}$.
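We remark that the stated comparability of the symbol of $\Lambda^\alpha_-$ can be checked directly: writing $\tau^2-|\xi|^2=(|\tau|-|\xi|)(|\tau|+|\xi|)$ and noting that $1+\tau^2+|\xi|^2$ is comparable to $1+(|\tau|+|\xi|)^2$, we get \[ 1 +\frac{(\tau^2-|\xi|^2)^2}{1+\tau^2+|\xi|^2}\sim 1+\big||\tau|-|\xi|\big|^2\,\frac{(|\tau|+|\xi|)^2}{1+(|\tau|+|\xi|)^2}\sim \left(1 + \big||\tau|-|\xi|\big|\right)^2, \] where in the last step the region $|\tau|+|\xi|\leq 1$ is handled using $\big||\tau|-|\xi|\big|\leq |\tau|+|\xi|$.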
By results in \cite{SelbergT}, if $\theta > \frac{1}{2}$ we have
\begin{eqnarray}
H^{s,\theta} &\hookrightarrow & C_b(\mathbb R,H^s),\label{embed}\\
{\mathcal H^{s,\theta}} &\hookrightarrow & C_b(\mathbb R,H^s)\cap C_b^1(\mathbb R,H^{s-1}).
\end{eqnarray}
This is a crucial fact needed to localize our solutions in time. We denote the restrictions to the time interval $[0,T]$ by
\[
H^{s,\theta}_T\quad\mbox{and}\quad \mathcal H^{s,\theta}_T,
\]
respectively.

\subsection{Estimates Used}
Throughout the paper we use the following estimates:
\begin{align}
\|D^{-\sigma} (uv) \|_{L^p_tL^q_x} &\lesssim \|u\|_{H^{s,\theta}} \|v\|_{H^{s,\theta}},\label{A}\\
\|u\|_{L^{p}_tL^2_x} &\lesssim \|u\|_{H^{0,\theta}},\quad 2\leq p \leq \infty,\ \theta >\frac{1}{2},\label{C}\\
\|u\|_{L^{p}_tL^q_x} &\lesssim \|u\|_{H^{1-\frac{2}{q}-\frac{1}{p},\theta}},\quad 2\leq p \leq \infty,\ 2\leq q < \infty,\ \frac{2}{p}\leq \frac{1}{2}-\frac{1}{q},\ \theta>\frac{1}{2},\label{D}\\
\|uv\|_{L^2_{t,x}}&\lesssim\|u\|_{H^{a,\alpha}}\|v\|_{H^{b,\beta}},\quad a,b,\alpha,\beta\geq0,\ a+b>1,\ \alpha+\beta>\frac{1}{2}.\label{E}
\end{align}
Estimate \eqref{A} is a theorem of Klainerman and Tataru established in \cite{KlainermanTataru} for the space-time operator $D_+$. The proof for the spatial operator $D$ was given by Selberg in \cite{SelbergT}. There are several conditions that $\sigma, p, q$ have to satisfy; they are listed in Section \ref{KTT}, where we discuss the application of the estimate. Estimate \eqref{C} can be proved by interpolation between $H^{0,\theta} \hookrightarrow L^2_{t,x}$ and \eqref{embed} with $s=0$. Estimate \eqref{D} is a two dimensional case of Theorem D in Klainerman and Selberg \cite{KS}. Finally, \eqref{E} is a special case of the proposition in Appendix A.2, also in \cite{KS}.
\end{section}

\begin{section}{A Closer Look at the Monopole Equations}\label{closer}
\begin{subsection}{Derivation and Background}
Electric charge is quantized: it appears in integer multiples of the charge of the electron. This is called the principle of quantization and has been observed in nature. The only theoretical proof so far was presented by Paul Dirac in 1931 \cite{Dirac}. In the proof Dirac introduced the concept of a magnetic monopole: an isolated point source of magnetic charge. Despite extensive research, magnetic monopoles have not (yet) been found in nature. We refer to these magnetic monopoles as euclidean monopoles. The euclidean monopole equation has exactly the same form as our space-time monopole equation (ME),
\[
F_A=\ast D_A\phi,
\]
with the exception that $\ast$ acts here with respect to the euclidean metric and the base manifold is $\mathbb R^3$ instead of $\mathbb R^{2+1}$. The euclidean monopole equations are also referred to as Bogomolny equations. For more on euclidean monopoles we refer the interested reader to the books by Jaffe and Taubes \cite{JaffeTaubes} and Atiyah and Hitchin \cite{AtiyahHitchin}. In this paper we study the space-time monopole equation, which was first introduced by Ward \cite{Ward89}. Both the euclidean and the space-time monopole equations are examples of integrable systems and have an equivalent formulation as a Lax pair. This and much more can be found in \cite{DCU}.\\
\indent Given the space-time monopole equation
\begin{equation}\tag{ME}
F_A=\ast D_A\phi,
\end{equation}
the unknowns are a pair $(A,\phi)$. $A$ is a connection $1$-form given by
\begin{equation}
A=A_0dt+A_1dx+A_2dy,\quad\mbox{where}\quad A_\alpha: \mathbb R^{2+1} \rightarrow \mathfrak{g}.
\end{equation}
$\mathfrak{g}$ is the Lie algebra of a Lie group $G$, which is typically taken to be a matrix group $SU(n)$ or $U(n)$. In this paper we consider $G=SU(n)$, but everything we say here should generalize to any compact Lie group.\\
\indent More generally, we could say $A$ is a connection on a principal $G$-bundle; the $G$-bundle we deal with here is the trivial bundle $\mathbb R^{2+1} \times G$.
\newline \indent Next, $\phi$ is a section of a vector bundle associated to the $G$-bundle by a representation. We use the adjoint representation. Since we have a trivial bundle, we can just think of the Higgs field $\phi$ as a map $\phi: \mathbb R^{2+1} \rightarrow \mathfrak{g}$.\\
\indent $F_A$ is the curvature of $A$. It is a Lie algebra valued $2$-form on $\mathbb R^{2+1}$
\begin{equation}
F_A=\frac{1}{2}F_{\alpha\beta}dx^\alpha\wedge dx^\beta,\quad\mbox{where}\quad F_{\alpha\beta}=\partial_\alpha A_\beta -\partial_\beta A_\alpha +[A_\alpha,A_\beta]. \label{curv}
\end{equation}
$[\cdot,\cdot]$ denotes the Lie bracket, which for matrices can be thought of simply as $[X,Y]=XY-YX$. When we write $[\phi,B]$, where $B$ is a $1$-form, we mean
\begin{equation}
[\phi,B]=[\phi,B_i]dx^i\quad\mbox{and}\quad [B,C]=\frac{1}{2}[B_i,C_j]dx^i\wedge dx^j,\mbox{ for two $1$-forms $B,C$.}
\end{equation}
In the language of physics, frequently adopted by mathematicians, $A$ is called a gauge potential, $\phi$ a scalar field, and $F_A$ an electromagnetic field.
\newline\indent Next, $D_A$ is the covariant exterior derivative associated to $A$, and $D_A\phi$ is given by
\begin{equation}
D_A \phi=D_\alpha \phi dx^\alpha,\quad\mbox{where}\quad D_\alpha\phi=\partial_\alpha \phi + [A_\alpha,\phi].
\end{equation}
\indent The space-time monopole equation (ME) is obtained by a dimensional reduction of the anti-self-dual Yang-Mills equations on $\mathbb R^{2+2}$ given by
\begin{equation}\tag{ASDYM}
F_A=-\ast F_A.
\end{equation}
We now present the details of the derivation of (ME) from (ASDYM) outlined in \cite{DCU}. Let
\[
dx_1^2 + dx_2^2 - dx_3^2 - dx_4^2
\]
be the metric on $\mathbb R^{2+2}$; then in coordinates (ASDYM) reads
\begin{equation}
F_{12}=-F_{34},\quad F_{13}=-F_{24},\quad F_{23}=F_{14}.\label{asdymcoor}
\end{equation}
The next step is the dimensional reduction: we assume the connection $A$ is independent of $x_3$, and we let $A_3=\phi$. Then \eqref{asdymcoor} becomes
\begin{equation}
D_0 \phi=F_{12}, \quad D_1 \phi=F_{02}, \quad D_2 \phi=F_{10},
\end{equation}
where we use the index $0$ instead of $4$. This is exactly (ME) written out in components.
\begin{remark}
Equivalently, we could write (ME) as
\begin{equation}
F_{\alpha\beta}=-\epsilon_{\alpha\beta\gamma}D^\gamma \phi,
\end{equation}
where $\epsilon_{\alpha\beta\gamma}$ is the completely antisymmetric tensor with $\epsilon_{012}=1$, and where we raise the index $\gamma$ using the Minkowski metric. We choose to work with the Hodge operator $\ast$ as it simplifies our task in Section \ref{mesys}.
\end{remark}
\indent There is another way to write (ME) \cite{DCU}, which is very useful for computations. (ME) is an equation involving $2$-forms on both sides.
By taking the parts corresponding to $dt\wedge dx$ and $dt\wedge dy$, and the part corresponding to $dx\wedge dy$, we obtain the following two equations, respectively:
\begin{align}
\partial_t A + [A_0, A]-dA_0&=\ast d\phi + [\ast A, \phi],\label{m1a}\\
dA + [A,A]&=\ast(\partial_t \phi + [A_0, \phi]).\label{m2a}
\end{align}
Observe that now the operators $d$ and $\ast$ act only with respect to the spatial variables. Similarly, $A$ now denotes only the spatial part of the connection, i.e., $A=(A_1,A_2)$. Moreover, \eqref{m1a} is an equation involving $1$-forms, and \eqref{m2a} involves $2$-forms.\\
\end{subsection}

\subsection{Gauge Transformations}\label{gaugesection}
(ME) is invariant under gauge transformations. Indeed, if we have a smooth map $g$ with compact support such that $g: \mathbb R^{2+1} \rightarrow G$, and
\begin{align}
A \rightarrow A_g&=gAg^{-1} + gdg^{-1},\\
\phi \rightarrow \phi_g&=g\phi g^{-1},
\end{align}
then a computation shows $F_A \rightarrow g F_A g^{-1}$ and $D_A\phi \rightarrow gD_A\phi g^{-1}$. (For the Higgs field, for instance, using $\partial_\alpha g^{-1}=-g^{-1}(\partial_\alpha g) g^{-1}$ one checks directly that $\partial_\alpha(g\phi g^{-1})+[(A_g)_\alpha, g\phi g^{-1}]=g(\partial_\alpha\phi+[A_\alpha,\phi])g^{-1}$, i.e., $D_{A_g}\phi_g=gD_A\phi\, g^{-1}$.) Therefore if a pair $(A,\phi)$ solves (ME), so does $(A_g,\phi_g)$.\\
\indent We would like to discuss the regularity of the gauge transformations. If $A\in X$ and $\phi \in Y$, where $X, Y$ are some Banach spaces, the smoothness and compact support assumptions on $g$ can be relaxed just enough so that the gauge transformation defined above is a continuous map from $X$ back into $X$, and from $Y$ back into $Y$. First note that since we are mapping into a compact Lie group, we can assume $g\in L^\infty_{t,x}$ and $\|g\|_{L^\infty_{t,x}}=\|g^{-1}\|_{L^\infty_{t,x}}.$ Next, note that the Main Theorem produces a solution so that $\phi$ and the spatial parts of the connection $A_1,A_2 \in C_b(I,H^s), \frac{1}{4}<s<\frac{1}{2}$, and $A_0 \in C_b(I,\dot H^r), r \in (0, 2s]$. We have the following
\begin{lem}\label{gaugeaction1}
Let $0<\alpha<1$ and $Y=C_b(I,\dot H^{1}\cap\dot H^{\alpha+1})\cap L^\infty$. Then the gauge action is a continuous map from
\begin{equation}
\begin{split}
C_b(I, H^\alpha) &\times Y \rightarrow C_b(I, H^\alpha)\\
&(h,g) \mapsto ghg^{-1} + gdg^{-1},
\end{split}
\end{equation}
and the following estimate holds:
\begin{equation}
\|h_g\|_{C_b(I, H^\alpha)} \lesssim (\|h\|_{C_b(I, H^\alpha)}+1)\|g\|^2_Y.\label{hg}
\end{equation}
\end{lem}
\begin{proof}
The continuity of the map follows easily from the inequalities we obtain below. Next, for fixed $t$ we have
\[
\|g(t)h(t)g^{-1}(t)+g(t)dg^{-1}(t)\|_{H^\alpha}\lesssim \|ghg^{-1}\|_{L^2}+\|D^\alpha(ghg^{-1})\|_{L^2} + \|gdg^{-1}\|_{H^\alpha},
\]
where for ease of notation we suppress the variable $t$ on the right hand side of the inequality. The first term is bounded by $\|h(t)\|_{H^\alpha}\|g\|_{L^\infty}^2$. For the second one we have
\begin{equation}\nonumber
\|D^\alpha(ghg^{-1})\|_{L^2} \lesssim \|D^\alpha gh\|_{L^2}\|g\|_{L^\infty}+\|hD^\alpha g^{-1}\|_{L^2}\|g\|_{L^\infty} +\|h\|_{\dot H^\alpha}\|g\|^2_{L^\infty}.
\end{equation}
It is enough to consider only the first term since $g$ and $g^{-1}$ have the same regularity.
By H\"older's inequality and Sobolev embedding \begin{equation} \|D^\alpha gh\|_{L^2}\leq \|D^\alpha g\|_{L^{2/\alpha}}\|h\|_{L^{(1/2-\alpha/2)^{-1}}}\lesssim \|g\|_{\dot H^1}\|h\|_{\dot H^\alpha},\label{gA} \end{equation} where we use that $\frac{\alpha}{2}=\frac{1}{2}-\frac{1-\alpha}{2}.$ Finally for the last term we have \begin{equation} \|gdg^{-1}\|_{H^\alpha}\lesssim \|g\|_{\dot H^1}\|g\|_{L^\infty}+\|D^\alpha gdg^{-1}\|_{L^2}+\|g\|_{\dot H^{\alpha+1}}\|g\|_{L^\infty}, \end{equation} and we are done if we observe that the second term can be handled exactly as in \eqref{gA}. \end{proof} \begin{remark} We assume $0<\alpha<1$ since this is the case we need. However it is not difficult to see the lemma still holds with $\alpha=0$ or $\alpha \geq 1$; see \cite{Czubak}. \end{remark} From the lemma, we trivially obtain the following corollary. \begin{cor}\label{welld} Let $0<r,s<1$, $X=C_b(I,\dot H^r)\times C_b(I, H^s)\times C_b(I,H^s)$ and $Y=C_b(I,\dot H^{1}\cap \dot H^{s+1}\cap \dot H^{r+1})\cap L^\infty.$ Then the gauge action is a continuous map from \begin{equation} \begin{split} X \times Y &\rightarrow X\\ (A_0,A_1,A_2) & \mapsto A_g, \end{split} \end{equation} as well as from \begin{equation} \begin{split} C_b(I,H^s) \times Y &\rightarrow C_b(I,H^s)\\ \phi & \mapsto \phi_g=g\phi g^{-1}, \end{split} \end{equation} and the following estimates hold \begin{align} \|A_g\|_X \lesssim (1+\|A\|_X)\|g\|^2_Y,\\ \|\phi_g\|_{C_b(I,H^s)} \lesssim \|\phi\|_{C_b(I,H^s)}\|g\|^2_Y. \end{align} \end{cor} Since in this paper we work in the Coulomb gauge, we ask: given any initial data $a_1, a_2, \phi_0 \in H^s(\mathbb R^2)$, can we find a gauge transformation so that the initial data is placed in the Coulomb gauge? Dell'Antonio and Zwanziger produce a global $\dot H^1$ Coulomb gauge using variational methods \cite{DZ}. Here, we also require $g \in \dot H^{s+1}$, and two dimensions are tricky. Fortunately, if the initial data is small, we can obtain a global gauge with the additional regularity as needed. This is considered by the author and Uhlenbeck for two dimensions and higher in \cite{CzubakUhlenbeck}. The result in two dimensions is the following \begin{thm}(\cite{CzubakUhlenbeck})\label{coulombg} Given $A(0)=a$ sufficiently small in $H^s(\mathbb R^2)\times H^s(\mathbb R^2)$, there exists a gauge transformation $g \in \dot H^{s+1}(\mathbb R^2) \cap \dot H^{1}(\mathbb R^2)\cap L^\infty$ so that $\partial^i(ga_ig^{-1}+g\partial_ig^{-1})=0$. \end{thm} \end{section} \section{The Monopole Equation in the Coulomb Gauge as a system of Wave \& Elliptic Equations}\label{mesys} We begin with a proposition, where we show how we can rewrite the monopole equation in the Coulomb gauge as a system of wave equations coupled with an elliptic equation, to which we refer as the auxiliary monopole equations (aME). Then we have an important result that states that local wellposedness (LWP) for (ME) in the Coulomb gauge can be obtained from LWP of (aME). \begin{prop} The monopole equation, $F_A=\ast D_A \phi$ on $\mathbb R^{2+1}$ in the Coulomb gauge with initial data \begin{equation}\label{id0} A_i|_{t=0}=a_i,\quad i=1,2 \quad\mbox and \mbox\quad \phi|_{t=0}=\phi_0 \end{equation} with $\partial^i a_i=0$ can be rewritten as the following system \[ \mbox{(aME)}\quad \left\{ \begin{array}{l} \begin{split} \square u &= \mathcal B_+(\phi,\nabla f,A_0),\nonumber\\ \square v &= \mathcal B_-(\phi,\nabla f,A_0),\nonumber\\ \triangle A_0 &=\mathcal C(\phi,\nabla f,A_0),\nonumber \end{split} \end{array} \right. 
\]
where
\begin{align}
&\mathcal C=-\partial_1[A_0,\partial_2 f]+\partial_2[A_0,\partial_1 f]+\partial_i[\partial_if,\phi],\\
&\mathcal B_\pm=-\mathcal B_1 \pm\mathcal B_2+\mathcal B_3\pm\mathcal B_4,\label{Bpm}
\end{align}
and
\begin{equation}\label{bis}
\begin{split}
\mathcal B_1&=[\partial_1f,\partial_2f],\\
\mathcal B_2&=R_1[\partial_2f,\phi]-R_2[\partial_1f,\phi],\\
\mathcal B_3&=[A_0,\phi],\\
\mathcal B_4&=R_j[A_0,\partial_j f],
\end{split}
\end{equation}
with $R_j$ denoting the Riesz transform $(-\triangle)^{-\frac{1}{2}}\partial_j$. The initial data for (aME) is given by
\begin{equation}\label{id}
\begin{split}
u(0)&=v(0)=0,\\
\partial_t u(0)&=\phi_0 + h,\\
\partial_t v(0)&=\phi_0 - h,
\end{split}
\end{equation}
where $h=R_1 a_2-R_2 a_1$.
\end{prop}
\begin{remark}\label{comparison}
(aME) bears some resemblance to a system considered by Selberg in \cite{Selberg} for the Maxwell-Klein-Gordon (MKG) equations, where he successfully obtains almost optimal local wellposedness in dimensions $1+4$. Besides the dimension considered, there are two fundamental technical differences applicable to our problem. The first comes from the fact that the monopole equation we consider here is an example of a system in non-abelian gauge theory, whereas (MKG) is an example of a system in abelian gauge theory. In non-abelian gauge theories the existence of a global Coulomb gauge requires smallness of the initial data, which is not needed in the abelian theories. The other technical difference arises from Selberg being able to solve the elliptic equation for his temporal variable $A_0$ using the Riesz representation theorem, without requiring smallness of the initial data. The elliptic equation in (aME) is more difficult, so we include $A_0$ in the Picard iteration. As a result, we are not able to allow large data by taking a small time interval, which we could do if we only had the two wave equations. Finally, we point out that the proof of our estimates involving $A_0$ is modeled after Selberg's proof in \cite{Selberg} (see Remark \ref{estinMKG} and Section \ref{ellipticpiece1}).
\end{remark}
\begin{proof}
Recall equations \eqref{m1a} and \eqref{m2a},
\begin{eqnarray}
\partial_t A + [A_0, A]-dA_0=\ast d\phi + [\ast A, \phi],\label{m1}\\
dA + [A,A]=\ast(\partial_t \phi + [A_0, \phi]),\label{m2}
\end{eqnarray}
where $d$ and $\ast$ act only with respect to the spatial variables, and $A$ denotes only the spatial part of the connection. If we impose the Coulomb gauge condition, then
\begin{equation}
d^\ast A=0.
\end{equation}
By the equivalence of closed and exact forms on $\mathbb R^n$, we can further suppose that
\begin{equation}\label{aisdf}
A=\ast df,
\end{equation}
for some $f:\mathbb R^{2+1} \rightarrow \mathfrak g$. Observe
\begin{equation}
d \ast d f=\triangle f dx\wedge dy, \quad [\ast df, \ast df]=[df,df]=\frac{1}{2}[\partial_if,\partial_jf]dx^i\wedge dx^j,
\end{equation}
and $\ast\ast \omega=-\omega$ for a one-form $\omega$ on $\mathbb R^2$. It follows that \eqref{m1} and \eqref{m2} become
\begin{eqnarray}
\partial_t \ast df + [A_0, \ast df]-dA_0=\ast d\phi - [df, \phi],\label{e1'}\\
\triangle f + [\partial_1f, \partial_2f]=\partial_t \phi + [A_0, \phi].\label{e2'}
\end{eqnarray}
Take $d^\ast$ of \eqref{e1'} to obtain
\[
\triangle A_0= d^\ast [A_0, \ast df]+d^\ast[df, \phi].
\]
This is the elliptic equation in (aME).
Now take $d$ of \eqref{e1'}:
\begin{equation}
\partial_t \triangle f + \partial^j[A_0, \partial_jf]=\triangle \phi + \partial_2[\partial_1f,\phi]- \partial_1[\partial_2f,\phi].\label{e1'''}
\end{equation}
Consider \eqref{e1'''} and \eqref{e2'} on the spatial Fourier transform side:
\begin{align}
-\partial_t |\xi|^2 \hat f +|\xi|^2 \hat \phi & = i (\xi_2\widehat{[\partial_1f,\phi]}- \xi_1\widehat{[\partial_2f,\phi]} -\xi_j\widehat{[A_0, \partial_jf]}), \label{eI}\\
-|\xi|^2 \hat f -\partial_t \hat \phi &=- \widehat{[\partial_1f, \partial_2f]} + \widehat{[A_0, \phi]}.\label{eII}
\end{align}
This allows us to write \eqref{eI} and \eqref{eII} as a system for $\phi$ and $df$:
\begin{eqnarray}
(\partial_t - i |\xi|)(\hat \phi + i |\xi|\hat f)= -\hat {\mathcal B}_{+}(\phi,df,A_0),\label{system1}\\
(\partial_t + i |\xi|)(\hat \phi - i |\xi|\hat f)= -\hat {\mathcal B}_{-}(\phi,df,A_0),\label{system2}
\end{eqnarray}
where
\begin{equation}
\hat {\mathcal B}_{\pm}=-\widehat{[\partial_1f, \partial_2f]} + \widehat{[A_0,\phi]} \pm \left(\frac{\xi_1}{|\xi|}\widehat{[\partial_2f,\phi]}-\frac{\xi_2}{|\xi|}\widehat{[\partial_1f,\phi]}+\frac{\xi_j}{|\xi|}\widehat{[A_0, \partial_jf]}\right).
\end{equation}
Indeed, multiply \eqref{eI} by $\frac{i}{|\xi|}$, and first add the resulting equation to \eqref{eII} to obtain \eqref{system1}, and then subtract it from \eqref{eII} to obtain \eqref{system2}. To uncover the wave equation, we let
\begin{equation}
\hat \phi + i |\xi|\hat f=(\partial_t + i|\xi|)\hat u \quad\mbox{and}\quad \hat \phi - i |\xi|\hat f=(\partial_t - i|\xi|)\hat v\label{two},
\end{equation}
where $u,v: \mathbb R^{2+1} \rightarrow \mathfrak g$. See Remark \ref{iremark} below.\\
\indent Now we discuss the initial data. From \eqref{two},
\begin{equation}
\partial_t \widehat {u(0)}=\hat \phi_0 + i|\xi|\widehat {f(0)} - i|\xi|\widehat {u(0)},
\end{equation}
and
\begin{equation}
\partial_t \widehat {v(0)}=\hat \phi_0 - i|\xi|\widehat {f(0)} + i|\xi|\widehat {v(0)}.
\end{equation}
Note that we are free to choose any data for $u$ and $v$ as long as in the end we can recover the original data for $\phi$ and $A$. Hence we just let $u(0)=v(0)=0$. We still need to say what $|\xi|\widehat{f(0)}$ is. Let $\hat h=i|\xi|\widehat {f(0)}$. By \eqref{id0} and \eqref{aisdf},
\[
a_1=A_1(0)=-\partial_2 f(0),\quad a_2=A_2(0)=\partial_1 f(0),
\]
so we need
\[
R_2h=-a_1,\quad R_1h=a_2.
\]
Differentiate the first equation with respect to $y$ and the second with respect to $x$, and add them together to obtain
\begin{equation}
\triangle D^{-1} h=\partial_1 a_2 - \partial_2 a_1,
\end{equation}
as needed.
\end{proof}
\begin{remark}\label{iremark}
$u$ and $v$ are our new unknowns, but we are really interested in $\phi$ and $df$. Therefore, we observe that once we know what $u$ and $v$ are, we can determine $\phi$ and $df$ by using
\begin{equation}\label{phidf}
\begin{split}
\hat \phi=\frac{(\partial_t + i|\xi|)\hat u + (\partial_t - i|\xi|)\hat v}{2},\\
i|\xi| \hat f=\frac{(\partial_t + i|\xi|)\hat u - (\partial_t - i|\xi|)\hat v}{2},
\end{split}
\end{equation}
or equivalently
\begin{equation}\label{newphidf}
\begin{split}
\phi&=\frac{(\partial_t + iD)u+(\partial_t - iD)v}{2},\\
\partial_j f&=R_j\left(\frac{(\partial_t + iD)u-(\partial_t - iD)v}{2}\right).
\end{split}
\end{equation}
From $df$ we get $A$ by letting $A=\ast df$. Finally, when we discuss our estimates in Section \ref{main}, for simplicity we keep the nonlinearities in terms of $\phi$ and $df$, with the exception of the nonlinearity $\mathcal B_2$.
However, since $\phi$ and $df$ can be written in terms of derivatives of $u$ and $v$, we sometimes write ${\mathcal B}_\pm(\phi,df,A_0)$ as ${\mathcal B}_\pm(\partial u,\partial v,A_0)$.
\end{remark}
Next we have a theorem, where we show how LWP for (aME) implies LWP for (ME) in the Coulomb gauge. For completeness, we first state exactly what we mean by LWP of (aME).\\
\indent Let $r\in (0,\min(2s,1+s)]$, $s>0$. Consider the system (aME) with initial data
\[
(u,u_t)|_{t=0}=(u_0,u_1)\quad\mbox{and}\quad(v,v_t)|_{t=0}=(v_0,v_1)
\]
in $H^{s+1}\times H^s$. Then (aME) is LWP if:
\newline\noindent \textit{\textbf{(Local Existence)}} There exists $T>0$, depending continuously on the norm of the initial data, and functions
\begin{align*}
A_0 \in C_b([0,T],\dot H^{r}),\quad u,v \in \mathcal H_T^{s+1,\theta}\hookrightarrow C_b([0,T],H^{s+1})\cap C^1_b([0,T],H^s),
\end{align*}
which solve (aME) on $[0,T] \times \mathbb R^2$ in the sense of distributions and such that the initial conditions are satisfied.
\newline\noindent \textit{\textbf{(Uniqueness)}} If $T>0$ and $(A_0,u,v)$ and $(A_0',u',v')$ are two solutions of (aME) on $(0,T)\times \mathbb R^2$ belonging to
\[
C_b([0,T],\dot H^{r})\times \mathcal H_T^{s+1,\theta}\times \mathcal H_T^{s+1,\theta},
\]
with the same initial data, then $(A_0,u,v)=(A_0',u',v')$ on $(0,T)\times \mathbb R^2$.
\newline\noindent \textit{\textbf{(Continuous Dependence on Initial Data)}} For any $(u_0,u_1), (v_0,v_1) \in H^{s+1}\times H^s$ there is a neighborhood $U$ of the initial data such that the solution map $(u_0,u_1),(v_0,v_1) \rightarrow (A_0,u,v)$ is continuous from $U$ into $C_b([0,T],\dot H^{r})\times \big(C_b([0,T],H^{s+1})\cap C^1_b([0,T],H^s)\big)^2$.
\newline\indent In fact, by the results in \cite{SelbergEstimates} combined with estimates for the elliptic equation, we can show the stronger estimate
\begin{equation}\label{cdimp}
\begin{split}
\|u-&u'\|_{\mathcal H_T^{s+1,\theta}}+\|v-v'\|_{\mathcal H_T^{s+1,\theta}}+\|A_0-A_0'\|_{C_b([0,T],\dot H^{r})}\\
&\lesssim \|u_0-u_0'\|_{H^{s+1}}+\|u_1-u_1'\|_{H^{s}} +\|v_0-v_0'\|_{H^{s+1}}+\|v_1-v_1'\|_{H^{s}},
\end{split}
\end{equation}
where $(u_0',u_1'),(v_0',v_1')$ are sufficiently close to $(u_0,u_1),(v_0,v_1)$.
\begin{remark}
Note that below we have no restriction on $s$, i.e., if we \emph{could} show (aME) is LWP in $H^{s+1}\times H^s$, $s>0$, we would get LWP of (ME) in the Coulomb gauge in $H^s$ for $s>0$ as well.
\end{remark}
\begin{thm}(\textbf{Return to the Monopole Equation})\label{returnME}
Consider (ME) in the Coulomb gauge with the following initial data in $H^s$ for $s>0$:
\begin{equation}\label{id1}
A_i|_{t=0}=a_i,\quad i=1,2, \quad\mbox{and}\quad \phi|_{t=0}=\phi_0,
\end{equation}
with $\partial^i a_i=0$. Then local wellposedness of (aME) with initial data as in \eqref{id} implies local wellposedness of (ME) in the Coulomb gauge with initial data given by \eqref{id1}.
\end{thm}
\begin{proof}
Begin by observing that given initial data in the Coulomb gauge, the solutions of (aME) keep $A$ in the Coulomb gauge. Indeed, solutions of (aME) produce $df$ via \eqref{newphidf}, so we get $A=\ast df$, and $d^\ast A=\ast d \ast(\ast df)=0$ as claimed.
\newline\noindent \textit{\textbf{(Local Existence)}} From \eqref{newphidf}, if
\[
u,v \in \mathcal H_T^{s+1,\theta},\quad\mbox{then}\quad \phi, A=\ast df \in H_T^{s,\theta},
\]
as needed. Next we verify that if $(u,v,A_0)$ solve (aME), then $(\phi,df,A_0)$ solve (ME) in the Coulomb gauge.
Note that the monopole equation in the Coulomb gauge is equivalent to \eqref{e1'} and \eqref{e2'}. Suppose $u,v,A_0$ solve (aME). It follows that $(df,\phi)$ solve \eqref{system1} and \eqref{system2}. Add \eqref{system1} to \eqref{system2} to recover \eqref{eII}, which is equivalent to \eqref{e2'}.\\
\indent Next, given (aME), we need to show \eqref{e1'} holds. Write \eqref{e1'} in coordinates:
\begin{eqnarray}
\partial_1 A_0 -\partial_2\phi +\partial_t\partial_2f=[\partial_1f,\phi]-[A_0,\partial_2f],\label{something}\\
\partial_2 A_0+\partial_1\phi -\partial_t\partial_1f=[\partial_2f,\phi]+[A_0,\partial_1f].\label{somethingelse}
\end{eqnarray}
From the elliptic equation in (aME) we have
\begin{equation}
\begin{split}\label{anot}
A_0=\triangle^{-1}(-\partial_1[A_0,\partial_2f]+\partial_2[A_0,\partial_1f] +\partial_1[\partial_1f,\phi]+\partial_2[\partial_2f,\phi]).
\end{split}
\end{equation}
Also, subtract \eqref{system1} from \eqref{system2} and multiply by $|\xi|$ on both sides to obtain \eqref{e1'''}, which implies
\begin{equation}\label{difference}
\begin{split}
\phi-\partial_tf=\triangle^{-1}(\partial_i[A_0,\partial_if]-\partial_2[\partial_1f,\phi]+\partial_1[\partial_2f,\phi]).
\end{split}
\end{equation}
In order to recover \eqref{something}, first use \eqref{anot} to get $\partial_1 A_0$:
\begin{equation}\label{something2}
\partial_1 A_0=\triangle^{-1}(-\partial^2_1[A_0,\partial_2f]+\partial_1\partial_2[A_0,\partial_1f] +\partial^2_1[\partial_1f,\phi]+\partial_1\partial_2[\partial_2f,\phi]).
\end{equation}
Next use \eqref{difference} to get $\partial_2(\phi-\partial_tf)$:
\begin{equation}\label{something3}
\partial_2(\phi-\partial_tf)=\triangle^{-1}(\partial_2\partial_1[A_0,\partial_1f]+\partial_2^2[A_0,\partial_2f] -\partial_2^2[\partial_1f,\phi]+\partial_2\partial_1[\partial_2f,\phi]),
\end{equation}
and subtract it from \eqref{something2} to get \eqref{something} as needed. We recover \eqref{somethingelse} in exactly the same way.
\newline\noindent \textit{\textbf{(Continuous Dependence on Initial Data)}} We would like to show
\begin{equation}\label{cdoid}
\begin{split}
\|A_0-A_0'\|_{C_b([0,T],\dot H^{r})}+&\|A_1-A_1'\|_{H_T^{s,\theta}}+\|A_2-A_2'\|_{H_T^{s,\theta}}+\|\phi-\phi'\|_{H_T^{s,\theta}}\\
&\lesssim \|a_1-a_1'\|_{H^s}+\|a_2-a_2'\|_{H^s}+\|\phi_0-\phi'_0\|_{H^s}
\end{split}
\end{equation}
for any $a_1',a_2',\phi_0'$ sufficiently close to $a_1,a_2,\phi_0$. In view of LWP for (aME) with data given by
\[
u(0)=v(0)=0,\quad \partial_t u(0)=\phi_0 + h,\quad\mbox{and}\quad \partial_t v(0)=\phi_0 - h,\quad h=R_1 a_2-R_2 a_1,
\]
and by \eqref{cdimp}, we have
\begin{equation}\label{cdimp1}
\begin{split}
\|u-&u'\|_{\mathcal H_T^{s+1,\theta}}+\|v-v'\|_{\mathcal H_T^{s+1,\theta}}+\|A_0-A_0'\|_{C_b([0,T],\dot H^{r})}\\
&\lesssim \|u_0'\|_{H^{s+1}}+\|\phi_0 + h-u_1'\|_{H^{s}} +\|v_0'\|_{H^{s+1}}+\|\phi_0 - h-v_1'\|_{H^{s}},
\end{split}
\end{equation}
for all $u_0',v_0',u_1',v_1'$ satisfying
\begin{equation}\label{cdimp2}
\|u_0'\|_{H^{s+1}}+\|\phi_0 + h-u_1'\|_{H^{s}} +\|v_0'\|_{H^{s+1}}+\|\phi_0 - h-v_1'\|_{H^{s}}\leq\delta,
\end{equation}
for some $\delta >0$.
In particular, choose
\begin{equation}\label{cdimp3}
u_0'=v_0'=0,\quad u_1'=\phi_0'+h', \quad\mbox{and}\quad v_1'=\phi_0'-h',\quad h'=R_1 a_2'-R_2 a_1',
\end{equation}
such that
\begin{equation}\label{cdimp4}
\begin{split}
\|\phi_0+h-\phi_0'-h'\|_{H^s}& + \|\phi_0-h-\phi_0'+h'\|_{H^s}\\
&\quad \lesssim \|\phi_0-\phi_0'\|_{H^s} + \|R_1(a_2-a_2')\|_{H^s}+\|R_2(a_1-a_1')\|_{H^s}\\
&\quad\leq \|\phi_0-\phi'_0\|_{H^s}+\|a_1-a_1'\|_{H^s}+\|a_2-a_2'\|_{H^s}\\
&\quad \leq \delta.
\end{split}
\end{equation}
Then by \eqref{cdimp1}-\eqref{cdimp4}, $\|A_0-A_0'\|_{C_b([0,T],\dot H^{r})}$ is bounded by the right hand side of \eqref{cdoid}. Next observe
\begin{align*}
\|A_1-A_1'\|_{H_T^{s,\theta}} &\lesssim\|R_2(\partial_t+iD)(u-u')\|_{H_T^{s,\theta}}+\|R_2(\partial_t-iD)(v-v')\|_{H_T^{s,\theta}}\\
& \leq \|u-u'\|_{\mathcal H_T^{s+1,\theta}} + \|v-v'\|_{\mathcal H_T^{s+1,\theta}}.
\end{align*}
So again by \eqref{cdimp1}-\eqref{cdimp4}, $\|A_1-A_1'\|_{H_T^{s,\theta}}$ is bounded by the right hand side of \eqref{cdoid}. We bound the differences for $A_2$ and $\phi$ in a similar fashion.
\newline\noindent \textit{\textbf{(Uniqueness)}} By LWP of (aME), $A_0$ is unique in the required class. We need to show $A$ and $\phi$ are unique in $H^{s,\theta}_T$. However, by \eqref{cdoid} this is obvious.
\end{proof}
\begin{section}{Proof of the Main Theorem}\label{main}
By Theorem \ref{returnME} it is enough to show LWP for (aME). We start by explaining how we are going to perform our iteration.
\begin{subsection}{Set up of the Iteration}
Equations (aME) are written for the functions $u$ and $v$. Nevertheless, $u$ and $v$ are only auxiliary functions, and we are really interested in solving for $df$ and $\phi$. In addition, the nonlinearities $\mathcal B_\pm$ are a linear combination of the $\mathcal B_i$'s, $i=1,2,3,4$, given by \eqref{bis}, and the $\mathcal B_i$'s are written in terms of $\phi, df$ and $A_0$. Also, when we do our estimates, it is easier to keep the ${\mathcal B}_i$'s in terms of $\phi$ and $df$, with the exception of ${\mathcal B}_2$, which we rewrite in terms of $\partial u$ and $\partial v$\footnote{See Section \ref{nullformq} for the details.}. These comments motivate the following procedure for our iteration. Start with $\phi_{-1}=df_{-1}=0$. Then ${\mathcal B}_\pm\equiv 0$. Solve the homogeneous wave equations for $u_0,v_0$ with the initial data given by \eqref{id}. Then, to solve for $df_0, \phi_0$, use \eqref{newphidf}. Then feed $\phi_0$ and $df_0$ into the elliptic equation,
\begin{equation}
\triangle A_{0,0}=d^\ast([A_{0,0}, \ast df_0]+ [df_0, \phi_0]),
\end{equation}
and solve for $A_{0,0}$. Next we take $df_0, \phi_0$ and $A_{0,0}$ and plug them into $\mathcal B_1, \mathcal B_3, \mathcal B_4$, but rewrite $\mathcal B_2$ in terms of $\partial u_0, \partial v_0$. We continue in this manner, so at the $j$'th step of the iteration, $j \geq 1$, we solve
\begin{equation}\nonumber
\begin{split}
\square u_j&= -{\mathcal B}_1(\nabla f_{j-1})+ {\mathcal B}_2(\partial u_{j-1}, \partial v_{j-1}) +{\mathcal B}_3(A_{0,j-1},\phi_{j-1})+{\mathcal B}_4(A_{0,j-1},\nabla f_{j-1}),\\
\square v_j&= -{\mathcal B}_1(\nabla f_{j-1})-{\mathcal B}_2(\partial u_{j-1}, \partial v_{j-1})+{\mathcal B}_3(A_{0,j-1},\phi_{j-1})-{\mathcal B}_4(A_{0,j-1},\nabla f_{j-1}),\\
\triangle A_{0,j}&=d^\ast([A_{0,j}, \ast df_j]+ [df_j, \phi_j]).
\end{split}
\end{equation}
\end{subsection}
\begin{subsection}{Estimates Needed}
The elliptic equation is discussed in Section \ref{ellipticpiece1}.
Therefore we begin by discussing the inversion of the wave operator in the ${\mathcal H^{s+1,\theta}}$ spaces. The main idea is that for the purposes of local in time estimates $\square^{-1}$ can be replaced with $\Lambda^{-1}_+\Lambda^{-1}_-$. The first estimates, leading to wellposedness for small initial data, were proved by Klainerman and Machedon in \cite{KlainermanMachedon95}. The small data assumption was removed by Selberg in \cite{SelbergEstimates}, where he showed that by introducing a small enough $\epsilon$ in the invertible version of the wave operator, i.e., $\Lambda^{-1}_+\Lambda^{-1+\epsilon}_-$, we can use initial data as large as we wish\footnote{See also \cite{KS} Section 5 for an excellent discussion and motivation of the issues involved in the Picard iteration.}. In \cite{SelbergEstimates} Selberg also gave a very useful, general framework for local wellposedness of wave equations, which reduces the proof of the Main Theorem to establishing the estimates below for the nonlinearities $\mathcal B_\pm$, and to combining them with appropriate elliptic estimates from Section \ref{ellipticpiece1}. The needed estimates for $\mathcal B_\pm$ are
\begin{equation}
\|\Lambda^{-1}_+\Lambda^{-1+\epsilon}_-\mathcal B_\pm(\partial u, \partial v,A_{0})\|_{\mathcal H^{s+1,\theta}} \lesssim \|u\|_{\mathcal H^{s+1,\theta}} + \|v\|_{\mathcal H^{s+1,\theta}},\label{bound}
\end{equation}
\begin{equation}\label{lip}
\begin{split}
\|\Lambda^{-1}_+\Lambda^{-1+\epsilon}_- \bigl(\mathcal B_\pm(\partial u,\partial v,A_0)-\mathcal B_\pm(\partial u',&\partial v',A_0')\bigr)\|_{\mathcal H^{s+1,\theta}} \\
&\lesssim \|u-u'\|_{\mathcal H^{s+1,\theta}}+\|v-v'\|_{\mathcal H^{s+1,\theta}},
\end{split}
\end{equation}
where the suppressed constants depend continuously on the ${\mathcal H^{s+1,\theta}}$ norms of $u,u',v,v'$. Since $\mathcal B_\pm$ are bilinear, \eqref{lip} follows from \eqref{bound}. In this paper small initial data is necessary\footnote{See Theorem \ref{coulombg} and Section \ref{ellipticpiece1}.}, so we do not need $\epsilon$, but we keep it to make the estimates general. Let $\frac{1}{4}<s<\frac{1}{2}$ and set $\theta,\epsilon$ as follows:
\begin{align*}
\frac{3}{4}-\frac{\epsilon}{2}<\theta\leq s+\frac{1}{2}-\epsilon,\quad\mbox{and}\quad \theta<1-\epsilon,\quad 0\leq \epsilon<\min\left( 2s-\frac{1}{2},\frac{1}{2}\right).
\end{align*}
Next observe that $\Lambda_+\Lambda_-^{1-\epsilon} {\mathcal H^{s+1,\theta}}=H^{s,\theta-1+\epsilon}$, as well as that
\[
\|\nabla f\|_{H^{s,\theta}}, \|\phi\|_{H^{s,\theta}} \lesssim \|u\|_{\mathcal H^{s+1,\theta}} + \|v\|_{\mathcal H^{s+1,\theta}}.
\]
Therefore, using \eqref{Bpm} and \eqref{bis}, it is enough to prove the following:
\begin{align}
\|\mathcal B_1\|_{H^{s,\theta-1+\epsilon}}&=\|[\partial_1f,\partial_2f]\|_{H^{s,\theta-1+\epsilon}} \lesssim \|\nabla f\|_{H^{s,\theta}}^2\label{M1}, \\
\|\mathcal B_2\|_{H^{s,\theta-1+\epsilon}}&\lesssim\|[\partial_jf,\phi]\|_{H^{s,\theta-1+\epsilon}} \lesssim \|\partial_j f\|_{H^{s,\theta}}\|\phi\|_{H^{s,\theta}},\quad j=1,2, \label{M3} \\
\|\mathcal B_3\|_{H^{s,\theta-1+\epsilon}}&\lesssim\|A_0\phi\|_{H^{s,\theta-1+\epsilon}} \lesssim \|A_0\| \|\phi\|_{H^{s,\theta}} \label{M2}, \\
\|\mathcal B_4\|_{H^{s,\theta-1+\epsilon}}&\lesssim\|A_0 \partial_jf\|_{H^{s,\theta-1+\epsilon}} \lesssim \|A_{0}\| \|\partial_j f\|_{H^{s,\theta}}, \quad j=1,2,\label{M4}
\end{align}
where the norm we use for $A_0$ is immaterial, mainly because we show in Section \ref{ellipticpiece1} that
\begin{equation}
\|A_0\|\lesssim \|\nabla f\|_{H^{s,\theta}}\|\phi\|_{H^{s,\theta}}\label{ell}.
\end{equation}
A few remarks are in order. Estimate \eqref{M1} corresponds to estimates for the null form $Q_{ij}$, and estimate \eqref{M3} gives rise to a new null form $Q$ (this is discussed in the next two sections). $A_0$ in estimates \eqref{M2} and \eqref{M4} solves the elliptic equation in (aME), which results in quite good regularity for $A_0$. As a result, we do not have to look for any special structures to get \eqref{M2} and \eqref{M4} to hold, so we can drop the brackets, and we can also treat these estimates as equivalent since $\phi$ and $df$ exhibit the same regularity. Finally, since Riesz transforms are clearly bounded on $L^2$, we ignore them in the estimates needed in \eqref{M3} and \eqref{M4}. The estimates \eqref{M1} and \eqref{M3} for the null forms are the most interesting. Hence we discuss them first, and then we consider the elliptic terms.
\begin{subsubsection}{Null Forms--Proof of Estimate (\ref{M1})}\label{nullformqij}
$[\partial_1f,\partial_2f]$ has the structure of a null form $Q_{ij}$:
\[
[\partial_1f, \partial_2f]=\partial_1f\partial_2f-\partial_2f\partial_1f=Q_{12}(f,f).
\]
It follows that \eqref{M1} is equivalent to
\[
\|Q_{12}(f,f)\|_{H^{s,\theta-1+\epsilon}} \lesssim \|\nabla f\|_{H^{s,\theta}}^2.
\]
Fortunately, the hard work for null forms of type $Q_{\alpha\beta}$ in two dimensions has already been carried out by Zhou in \cite{Zhou}. His proof is done using the spaces $N^{s+1,\theta}$ with the norm given by\footnote{See \cite{SelbergT} Section 3.5 for a comparison with the ${\mathcal H^{s+1,\theta}}$ spaces.}
\begin{equation}
\|u\|_{N^{s+1,\theta}}=\|\Lambda_+^{s+1}\Lambda^\theta_-u\|_{L^2}\label{xsth}.
\end{equation}
In his work $\theta=s+\frac{1}{2}$. We state Zhou's result.
\begin{theorem*}(\cite{Zhou})
Consider in $\mathbb R^{2+1}$ the space-time norms \eqref{xsth}. For functions $\varphi, \psi$ defined on $\mathbb R^{2+1}$, the estimates
\[
\|Q_{\alpha\beta}(\varphi,\psi)\|_{N^{s,s-\frac{1}{2}}}\lesssim \|\varphi\|_{N^{s+1,s+\frac{1}{2}}}\|\psi\|_{N^{s+1,s+\frac{1}{2}}}
\]
hold for any $\frac{1}{4} < s < \frac{1}{2}$.
\end{theorem*}
Our iteration is done using the spaces ${\mathcal H^{s+1,\theta}}$. Inspection of Zhou's proof shows that it could easily be modified to be placed in the context of the ${\mathcal H^{s+1,\theta}}$ spaces.
However, even though our auxiliary functions' iterates $u_j$ and $v_j$ belong to ${\mathcal H^{s+1,\theta}}$, from \eqref{newphidf} we only have
\begin{equation}
df \in H^{s,\theta} \Rightarrow \|\Lambda^s\Lambda^\theta_-Df\|_{L^2(\mathbb R^{2+1})} < \infty,\label{df}
\end{equation}
but again inspection of Zhou's proof shows we can still handle $Q_{12}(f,f)$ given only that \eqref{df} holds. Moreover, Zhou's proof works for $\frac{1}{4}<s<\frac{1}{2}$, but a study of his proof motivated an alternate proof that uses ${\mathcal H^{s+1,\theta}}$ and works for all values of $s>\frac{1}{4}$. The proof is closely related to the original proof in \cite{Zhou}, but on the surface it seems more concise. The reason for this is that we use Theorem F from \cite{KS}, which involves all the technicalities. See \cite{Czubak} for the details.
\end{subsubsection}
\begin{subsubsection}{Null Forms--Proof of Estimate (\ref{M3})}\label{nullformq}
We need
\[
\|[\partial_j f,\phi]\|_{H^{s,\theta-1+\epsilon}} \lesssim \|\partial_j f\|_{H^{s,\theta}}\|\phi\|_{H^{s,\theta}}, \quad j=1,2.
\]
However, analysis of the first iterate shows that for this estimate to hold we would need $s > \frac{3}{4}$, so we have to work a little harder and use \eqref{newphidf}\footnote{The obvious way is to just substitute for $\phi$ and leave $\partial_jf$ the same, but it is an exercise to see that this does not work (for several reasons!).}:
\begin{equation}\label{dfphibracket}
[\partial_j f, \phi] =\frac{1}{4}[R_j (\partial_t u + iDu -\partial_t v + iDv),\partial_t u + iDu +\partial_t v - iDv].
\end{equation}
If we use the bilinearity of the bracket, we can group \eqref{dfphibracket} into terms involving brackets of $u$ with itself, of $v$ with itself, and mixed terms, i.e., those involving both $u$ and $v$. So we have
\begin{align*}
4[\partial_j f, \phi]&=[R_j(\partial_t + iD)u,(\partial_t + iD)u]-[R_j(\partial_t - iD)v,(\partial_t - iD)v]\\
&\quad +[R_j(\partial_t +iD)u,(\partial_t - iD)v]-[R_j(\partial_t - iD)v,(\partial_t + iD )u].
\end{align*}
Since $u$ and $v$ are matrix-valued and do not commute, we need to combine the last two brackets to take advantage of a null form structure. This corresponds to \eqref{newnullformplus2} below (note the plus sign in the formula).\newline\indent
The needed estimates are contained in the following theorem.
\begin{thm}
Let $s>\frac{1}{4}$,
\begin{align*}
&\frac{3}{4}-\frac{\epsilon}{2}<\theta\leq s+\frac{1}{2},\quad\mbox{and}\quad \theta< 1-\epsilon,\\
&0\leq \epsilon<\min\left( 2s-\frac{1}{2},\frac{1}{2}\right),
\end{align*}
and let $Q(\varphi,\psi)$ be given by
\begin{align}
Q(\varphi,\psi)&=(\partial_t \pm iD)R_j\varphi(\partial_t \pm iD)\psi - (\partial_t \pm iD)\varphi (\partial_t\pm iD)R_j\psi\\
\mbox{or}&\nonumber\\
Q(\varphi,\psi)&=(\partial_t \pm iD)R_j\varphi(\partial_t \mp iD)\psi + (\partial_t \pm iD)\varphi (\partial_t\mp iD)R_j\psi.\label{newnullformplus2}
\end{align}
Then
\begin{equation}
Q({\mathcal H^{s+1,\theta}},{\mathcal H^{s+1,\theta}}) \hookrightarrow H^{s,\theta-1+\epsilon},
\end{equation}
or equivalently, the following estimate holds:
\begin{equation}\label{nullformest}
\|Q(\varphi,\psi)\|_{H^{s,\theta-1+\epsilon}} \lesssim \|\varphi\|_{\mathcal H^{s+1,\theta}}\|\psi\|_{\mathcal H^{s+1,\theta}}.
\end{equation}
\end{thm}
\begin{proof}
We show the details only for
\[
(\partial_t + iD)R_j\varphi(\partial_t - iD)\psi + (\partial_t + iD)\varphi (\partial_t- iD)R_j\psi,
\]
as the rest follows similarly.
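Before estimating, we record the elementary correspondence behind the symbol computation below; the particular normalization of the Fourier transform is immaterial, since constants play no role in the estimates. On the Fourier side, $\partial_t$, $D$ and $R_j$ act as multiplication by
\[
i\tau,\qquad |\xi|,\qquad \frac{i\xi_j}{|\xi|},
\]
respectively, so, writing $(\tau,\xi)$ for the frequency variables of $\varphi$ and $(\lambda,\eta)$ for those of $\psi$, each factor $(\partial_t\pm iD)\varphi$ contributes $i(\tau\pm|\xi|)$ and each factor $(\partial_t\pm iD)\psi$ contributes $i(\lambda\pm|\eta|)$.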
Observe that, up to a constant factor, the symbol of $Q$ is
\[
q(\tau,\xi,\lambda,\eta)=\big(\frac{\xi_j}{|\xi|}+\frac{\eta_j}{|\eta|}\big)(\tau +|\xi|)(\lambda-|\eta|).
\]
Suppose $\tau \lambda \geq 0$; then
\[
|q|\leq 2\big|(\tau +|\xi|)(\lambda-|\eta|)\big|\leq \left\{\begin{array}{l} 2\big||\tau|+|\xi|\big|\big||\lambda|-|\eta|\big|\quad\mbox{if}\quad \tau, \lambda \geq 0,\\ 2\big||\tau|-|\xi|\big|\big||\eta| +|\lambda| \big|\quad\mbox{if}\quad \tau,\lambda\leq 0. \end{array}\right.
\]
It follows that
\begin{equation}\label{someestimate}
\iint_{\tau\lambda\geq 0}|\Lambda^s\Lambda_-^{\theta-1+\epsilon}Q(\varphi,\psi)|^2d\tau d\xi \lesssim \| D_+\varphi D_-\psi\|^2_{H^{s,\theta-1+\epsilon}} +\|D_-\varphi D_+\psi\|^2_{H^{s,\theta-1+\epsilon}},
\end{equation}
and the estimate follows by Theorem \ref{dudv} below.\newline\indent
Suppose $\tau \lambda < 0$. We break down the computations into two regions:
\begin{equation}
\{ (\tau,\xi),(\lambda,\eta) : |\tau| \geq 2|\xi| \quad\mbox{or}\quad |\lambda| \geq 2 |\eta|\} \quad\mbox{and}\quad \mbox{otherwise}.\label{regions}
\end{equation}
In the first region we bound $|q|$ by
\[
|q|\leq 2(|\tau| +|\xi|)(|\lambda|+|\eta|),
\]
since there we do not need any special structure\footnote{It is a simple exercise in the first region. See Appendix B in \cite{Czubak}.}.\newline\indent
In the second region, we have
\[
|q|\leq 4 |\xi||\eta|\left|\frac{\xi_j}{|\xi|}+\frac{\eta_j}{|\eta|}\right|,
\]
which is the absolute value of the symbol of the null form $Q_{tj}$ in the first iterate. It has received a lot of attention, but we have not seen a reference where it was discussed in a context other than that of initial data in $H^{s+1}\times H^{s}$. This may be because it has not come up as a nonlinearity before, and/or because it can be handled in the same way as the null form $Q_{ij}$. The details are in \cite{Czubak}.
\end{proof}
Now we prove an estimate needed to show that \eqref{someestimate} is bounded by the square of the right hand side of \eqref{nullformest}.
\begin{thm}\label{dudv}
Let $s>0$ and
\begin{align*}
&\max\left(\frac{1}{2},1-s\right)<\theta<1,\\
&0\leq \epsilon\leq 1-\theta;
\end{align*}
then
\[
\|D_+\varphi D_-\psi\|_{H^{s,\theta-1+\epsilon}}\lesssim\|\varphi\|_{\mathcal H^{s+1,\theta}}\|\psi\|_{\mathcal H^{s+1,\theta}}.
\]
\end{thm}
\begin{proof}
We would like to show
\[
\|\Lambda^s \Lambda^{\theta-1+\epsilon}_- (D_+\varphi D_-\psi)\|_{L^2(\mathbb R^{2+1})} \lesssim\|\varphi\|_{\mathcal H^{s+1,\theta}}\|\psi\|_{\mathcal H^{s+1,\theta}}.
\]
This follows from showing
\[
H^{s,\theta} \cdot \mathcal H^{s+1,\theta-1} \hookrightarrow H^{s,\theta-1+\epsilon},
\]
which by the product rule\footnote{On $L^2$ this is very easy to establish using the triangle inequality. See \cite{KS}.} for the operator $\Lambda^s$ in turn follows from
\begin{align*}
H^{0,\theta} \cdot \mathcal H^{s+1,\theta-1} \hookrightarrow H^{0,\theta-1+\epsilon},\\
H^{s,\theta} \cdot \mathcal H^{1,\theta-1} \hookrightarrow H^{0,\theta-1+\epsilon}.
\end{align*}
It is easy to check that
\[
\mathcal H^{s+1,\theta-1}\hookrightarrow H^{s+1+\theta-1,0} \quad\mbox{and}\quad \mathcal H^{1,\theta-1} \hookrightarrow H^{\theta,0},
\]
so we just need to show
\begin{align*}
H^{0,\theta}\cdot H^{s+\theta,0}\hookrightarrow H^{0,\theta-1+\epsilon},\\
H^{s,\theta} \cdot H^{\theta,0}\hookrightarrow H^{0,\theta-1+\epsilon},
\end{align*}
which are weaker than
\begin{align*}
H^{0,\theta}\cdot H^{s+\theta,0}\hookrightarrow L^2,\\
H^{s,\theta} \cdot H^{\theta,0}\hookrightarrow L^2,
\end{align*}
but those follow from the Klainerman-Selberg estimate \eqref{E} as long as $s+\theta > 1$, which holds by the conditions we impose on $s$ and $\theta$.
\end{proof}
An alternate approach could be to follow the setup used in \cite{KlainermanMachedon95} and estimate the integral directly.
\end{subsubsection}
\begin{subsubsection}{Elliptic Piece: Proof of Estimate (\ref{M2})}\label{ellpiece}
Recall we wish to show
\begin{equation}
\|A_0w\|_{H^{s,\theta-1+\epsilon}} \lesssim \|A_{0}\| \|w\|_{H^{s,\theta}}.\label{M21}
\end{equation}
We need this estimate during our iteration, so we really mean $A_{0,j}$, but for simplicity we omit the index $j$. Now we choose a norm for $A_0$ to be anything that makes \eqref{M21} possible to establish. This results in
\[
\|A_0\|=\|A_0\|_{L^{{\tilde p}}_tL^\infty_x}+\|D^sA_0\|_{L^{p}_tL^q_x},
\]
where
\begin{equation}\label{aspqs}
\begin{split}
&\frac{1}{\tilde p} \in \left(1-2s,\frac{1}{2}\right),\\
\frac{2}{p}=1-\frac{1}{q}, \quad &\max\left(\frac{1}{3}(1-2s),\frac{s}{2}\right) < \frac{1}{q} < \frac{2}{3}s.
\end{split}
\end{equation}
For now we assume we can show $A_0 \in L^{{\tilde p}}_tL^\infty_x \cap L^{p}_t\dot W^{s,q}_x$ and delay the proof to Section \ref{ellipticpiece1}, where the reasons for our choices of ${\tilde p}, p, q$ should become clear. We start by using $\theta-1+\epsilon < 0$:
\begin{equation}\label{beginning}
\|A_{0}w\|_{H^{s,\theta-1+\epsilon}} \leq \|\Lambda^s (A_{0}w) \|_{L^2(\mathbb R^{2+1})} \lesssim \|A_{0}w \|_{L^2(\mathbb R^{2+1})} + \|D^s(A_{0}w) \|_{L^2(\mathbb R^{2+1})}.
\end{equation}
For the first term, by H\"older's inequality,
\begin{equation}\label{firstT}
\begin{split}
\|A_{0}w \|_{L^2(\mathbb R^{2+1})}&\leq \|A_0\|_{L^{\tilde p}_tL^\infty_x}\|w\|_{L^{{\tilde p}'}_tL^{2}_x},\quad \frac{1}{{\tilde p}}+\frac{1}{{\tilde p}'}=\frac{1}{2},\ {\tilde p} \mbox{ as in }\eqref{aspqs}, \\
&\lesssim\|A_0\|\|w\|_{H^{0,\theta}},\quad\mbox{by \eqref{C}},\\
&\leq \|A_0\|\|w\|_{H^{s,\theta}}.
\end{split}
\end{equation}
We bound the second term in \eqref{beginning} by
\[
\|D^s(A_{0}w) \|_{L^2(\mathbb R^{2+1})}\lesssim \underbrace{\|A_0\|_{L^{\tilde p}_tL^\infty_x}\|D^s w\|_{L^{{\tilde p}'}_tL^{2}_x}}_{I}+ \underbrace{\|D^s A_0\|_{L^{p}_tL^q_x}\|w\|_{L^{p'}_tL^{q'}_x}}_{II},
\]
where $\frac{1}{p}+\frac{1}{p'}=\frac{1}{2}=\frac{1}{q}+\frac{1}{q'}$, $p,q$ are as in \eqref{aspqs}, and ${\tilde p}$ is as in \eqref{firstT}. $I$ is handled similarly to \eqref{firstT} as follows. Apply \eqref{C} with $u=D^sw$ to obtain\footnote{Or we could bound $\|D^sw\|_{L^{{\tilde p}'}_tL^{2}_x}$ by $\|\Lambda^sw\|_{L^{{\tilde p}'}_tL^{2}_x}$ and apply \eqref{C} with $u=\Lambda^s w$.}
\begin{equation}\label{usingC}
I\lesssim \|A_0\|\|D^s w\|_{H^{0,\theta}}\leq \|A_0\|\|w\|_{H^{s,\theta}}.
\end{equation}
We now consider $II$.
By the choices of $p,q$, the Klainerman-Selberg estimate \eqref{D} applies\footnote{See the discussion in Section \ref{Aestneeded} for an explanation.} and gives
\begin{equation}\label{usingD2}
II \leq \|A_0\|\|w\|_{L^{p'}_tL^{q'}_x}\lesssim \|A_0\|\|w\|_{H^{1-\frac{2}{q'}-\frac{1}{p'},\theta}}.
\end{equation}
From \eqref{aspqs} we also have
\begin{equation}\label{usingD3}
II \lesssim\|A_0\|\|w\|_{H^{1-\frac{2}{q'}-\frac{1}{p'},\theta}}\lesssim \|A_0\|\|w\|_{H^{s,\theta}},
\end{equation}
and \eqref{M21} now follows from \eqref{firstT}, \eqref{usingC} and \eqref{usingD3}.
\begin{remark}\label{estinMKG}
The above proof illustrates other difficulties due to working in $2$ dimensions. Initially, we wanted to follow Selberg's proof of estimate (38) in \cite{Selberg} and just use the norm $\|\Lambda^s A_0\|_{L^p_tL^q_x}$. Unfortunately, in $2D$ the condition $sq>2$ needed to show $A_0 \in L^p_tL^\infty_x$ is disjoint from the conditions needed to use the Klainerman-Tataru estimate \eqref{A} and establish that $\Lambda^s A_0 \in {L^p_tL^q_x}$ in the first place. This resulted in the space $L^{{\tilde p}}_tL^\infty_x \cap L^{p}_t\dot W^{s,q}_x$ above, and also in having to employ the Klainerman-Selberg estimate \eqref{D}, which was not needed in \cite{Selberg} for the proof of (38).
\end{remark}
\end{subsubsection}
\end{subsection}
\begin{subsection}{Elliptic Regularity: Estimates for $A_0$.}\label{ellipticpiece1}
Here we present a variety of a priori estimates for the nondynamical variable $A_0$. At each point we could add the index $j$ to $A_0, df$ and $\phi$; therefore the presentation also applies to the iterates $A_{0,j}$. It is an exercise to show that the estimates we obtain here are enough to solve for $A_{0,j}$ at each step as well as to close the iteration for $A_0$. Let $A_0$ solve
\begin{align*}
\triangle A_0 =d^\ast [A_0,\ast df]+ d^\ast[df, \phi ]=-\partial_1[A_0,\partial_2 f]+\partial_2[A_0,\partial_1 f]+\partial_i[\partial_if,\phi].
\end{align*}
There is a wide range of estimates $A_0$ satisfies. Nevertheless, the two spatial dimensions limit our ``range of motion.'' For example, it does not seem possible to place $A_0(t)$ in $L^2$. We state the general results and only show the cases we need to prove $A_0 \in L^{\tilde p}_tL^\infty_x \cap L^p_t\dot W^{s,q}_x$, as required in the last section. The rest of the cases can be found in \cite{Czubak}. We add that the proofs of both of the following theorems were originally inspired by Selberg's proof of his estimate (45) in \cite{Selberg}. We start with the homogeneous estimates.
\begin{thm}\label{thm1}
Let $s > 0$, let $0\leq a\leq s+1$ be given, and suppose $1\leq p\leq \infty$ and $1<q<\infty$ satisfy
\begin{align}
\max\left(\frac{1}{3}(1+2a-4s), \frac{1}{2}(1+a-4s), \frac{1}{2}\min(a,1)\right) < \frac{1}{q} < \frac{1+a}{2},\label{cq1}\\
1-\frac{2}{q}+a-2s \leq \frac{1}{p} \leq \frac{1}{2}\left(1-\frac{1}{q}\right), \quad \frac{1}{p}<\left(1-\frac{2}{q}+a\right)\label{cp1}.
\end{align}
\begin{itemize}
\item[i)] If $0\leq a\leq 1$ and the ${H^{s,\theta}}$ norm of $\nabla f$ is sufficiently small, then $A_0 \in L^p_t\dot W^{a,q}_x$ and we have the following estimate:
\begin{equation}
\|A_0\|_{L^p_t\dot W^{a,q}_x}\lesssim \|\phi\|_{H^{s,\theta}}\|\nabla f\|_{H^{s,\theta}}.
\end{equation}
\item[ii)] If $1<a\leq s+1$ and $A_0 \in L^p_tL^{(1/q-1/2)^{-1}}_x$, then $A_0 \in L^p_t\dot W^{a,q}_x$ and we have the following estimate:
\begin{equation}
\|A_0\|_{L^p_t\dot W^{a,q}_x}\lesssim (\|A_0\|_{L^p_tL^{(1/q-1/2)^{-1}}_x}+\|\phi\|_{H^{s,\theta}})\|\nabla f\|_{H^{s,\theta}}.
\end{equation}
\end{itemize}
\end{thm}
\begin{cor}\label{corr1}
Let $s>0$; then $A_0 \in C_b(I:\dot H^a_x)$, where
\[
0 < a \leq \left\{ \begin{array}{l} \begin{split} 2s&\quad\mbox{if}\quad 0<s\leq 1,\\ 1+s&\quad\mbox{if}\quad 1< s. \end{split} \end{array} \right.
\]
\end{cor}
\begin{proof}[Proof of Corollary \ref{corr1}]
Suppose $0<s<\frac{1}{2}$. Then use part i) of the theorem with $q=2$ and $p=\infty$ to obtain $A_0 \in L^\infty_t\dot H^a_x$ for $a\leq 2s$. Continuity of $A_0$ as a function of time easily follows from a contraction argument in $C_b(I:\dot H^a_x)$ using the $L^\infty_t\dot H^a_x$ estimates. The case $s\geq\frac{1}{2}$ is considered in \cite{Czubak}.
\end{proof}
So far we just need $s>0$ in order to make the estimates work. The requirement $s>\frac{1}{4}$ does not come in until we start looking at the nonhomogeneous spaces, where the range of $p$ and $q$ is also smaller. Here we distinguish two cases: $aq<2$ and $aq > 2$.
\begin{thm}\label{thm2}
Let $s > 0$, and suppose the ${H^{s,\theta}}$ norm of $\nabla f$ is sufficiently small.
\begin{itemize}
\item[i)] If $aq < 2$ for $0< a <\min(2s,1)$, and if $p$ and $q$ satisfy
\begin{align}
\max\left(\frac{1}{2}+a-2s, \frac{a}{2}\right) < &\frac{1}{q} < \frac{1}{2},\label{cq2}\\
1-\frac{2}{q}+a-2s \leq &\frac{1}{p} < \frac{1}{2} -\frac{1}{q},\label{cp2}
\end{align}
then $A_0 \in L^p_tW^{a,q}_x$ and we have the following estimate:
\begin{equation}
\|A_0\|_{L^p_tW^{a,q}_x}\lesssim \|\phi\|_{H^{s,\theta}}\|\nabla f\|_{H^{s,\theta}}.
\end{equation}
\item[ii)] If $aq > 2$, then we need $s>\frac{1}{4}$ and $0<a<\min(4s-1,1+s,2s)$. Suppose $p$ and $q$ also satisfy
\begin{align}
\max\left(\frac{a-s}{2},\frac{1}{2}+a-2s\right) &< \frac{1}{q} < \frac{1}{2}\min(a,1),\label{cq3}\\
1-\frac{2}{q}+a-2s &\leq \frac{1}{p} < \frac{1}{2} -\frac{1}{q};\label{cp3}
\end{align}
then $A_0 \in L^p_tW^{a,q}_x$ and we have the following estimate:
\begin{equation}
\|A_0\|_{L^p_tW^{a,q}_x}\lesssim \|\phi\|_{H^{s,\theta}}\|\nabla f\|_{H^{s,\theta}}.
\end{equation}
\end{itemize}
\end{thm}
\begin{cor}\label{corr2}
If $s>\frac{1}{4}$ and the ${H^{s,\theta}}$ norm of $\nabla f$ is sufficiently small, then in particular $A_0 \in L^p_tL^\infty_x$ for $p$ satisfying
\begin{equation}
1-2s < \frac{1}{p}< \frac{1}{2},
\end{equation}
and we have the following estimate:
\begin{equation}
\|A_0\|_{L^p_tL^\infty_x}\lesssim \|\phi\|_{H^{s,\theta}}\|\nabla f\|_{H^{s,\theta}}.
\end{equation}
\end{cor}
\begin{proof}[Proof of Corollary \ref{corr2}]
For each $\frac{1}{p} \in (1-2s,\frac{1}{2})$ we can find some $a$ and $q$ which satisfy the conditions of Theorem \ref{thm2}, part ii). The corollary then follows from the Sobolev embedding $W^{a,q}(\mathbb R^2) \hookrightarrow L^\infty(\mathbb R^2)$ for $aq > 2$.
\end{proof}
\begin{remark}
Here we would also like to emphasize where the necessity of $s>\frac{1}{4}$ arises. The conditions on $\frac{1}{p}$ in \eqref{cp3} are needed so that we can use the Klainerman-Tataru estimate \eqref{A} below. In order to be able to choose such $\frac{1}{p}$, obviously $1-\frac{2}{q}+a-2s$ must be strictly less than $\frac{1}{2} -\frac{1}{q}$. This forces $\frac{1}{q}$ to be strictly greater than $\frac{1}{2}+a-2s$.
We also need $aq>2$ to use the Sobolev embedding in Corollary \ref{corr2}, so if we want to be able to find $\frac{1}{q}$ between $\frac{1}{2}+a-2s$ and $\frac{a}{2}$, then $a$ is forced to be strictly less than $4s-1$. Therefore $s$ must be greater than $\frac{1}{4}$. See below for another instance of requiring $s>\frac{1}{4}$.
\end{remark}
\subsection{Proof of the estimates needed in Section \ref{ellpiece}}\label{Aestneeded}
Recall we would like to show $A_0 \in L^{{\tilde p}}_tL^\infty_x \cap L^p_t\dot W^{s,q}_x$. Therefore, we are interested in part i) of Theorem \ref{thm1} and part ii) of Theorem \ref{thm2}, so we can conclude Corollary \ref{corr2}. Moreover, we need a specific case of part i) of Theorem \ref{thm1}, because we need $A_0 \in L^p_t\dot W^{s,q}_x$, where $p, q$ in addition satisfy
\begin{equation}\label{newpq}
1-\frac{2}{p} \leq \frac{1}{q}<\frac{1}{2}, \quad\mbox{and}\quad \frac{2}{q}-\frac{1}{2}+\frac{1}{p}\leq s,
\end{equation}
so we can use
\begin{equation}
H^{s,\theta} \hookrightarrow H^{1-(1-\frac{2}{q})-(\frac{1}{2}-\frac{1}{p}),\theta}(\mathbb R^{2+1}) \hookrightarrow L^{(1/2-1/p)^{-1}}_tL^{(1/2-1/q)^{-1}}_x
\end{equation}
in \eqref{usingD2} and \eqref{usingD3}. When we put \eqref{newpq} together with \eqref{cq1} and \eqref{cp1} with $a=s$, we obtain the second line of \eqref{aspqs}, namely
\begin{equation}\label{aspqs1}
\frac{2}{p}=1-\frac{1}{q}, \quad \max\left(\frac{1}{3}(1-2s),\frac{s}{2}\right) < \frac{1}{q} < \frac{2}{3}s.
\end{equation}
\begin{remark}
Observe that in order to be able to find such $q$ we must have $s>\frac{1}{4}$.
\end{remark}
Consider
\begin{equation}\label{e3}
\begin{split}
\|A_0\|_{L^p_t\dot W^{s,q}_x}&=\|\triangle^{-1}(d^\ast [A_0,\ast df]+ d^\ast[df, \phi ])\|_{L^p_t\dot W^{s,q}_x}\\
&\lesssim\|D^{-1}(A_0\nabla f)\|_{L^p_t\dot W^{s,q}_x}+ \|D^{-1}(\nabla f\phi)\|_{L^p_t\dot W^{s,q}_x}\\
&\lesssim\|D^{s-1}(A_0\nabla f)\|_{L^p_tL^q_x}+ \|D^{s-1}(\nabla f\phi)\|_{L^p_tL^q_x}\\
&\lesssim\|A_0\nabla f\|_{L^p_tL^r_x}+ \|D^{s-1}(\nabla f\phi)\|_{L^p_tL^q_x},
\end{split}
\end{equation}
where we use the Sobolev embedding with $\frac{1}{q}=\frac{1}{r}-\frac{1-s}{2}$. The latter term is bounded by $\|\nabla f\|_{H^{s,\theta}}\|\phi\|_{H^{s,\theta}}$ using the Klainerman-Tataru estimate \eqref{A}, whose application we discuss in the section below. For the former we use $\frac{1}{r}=\frac{1}{q}+\frac{1-s}{2}=(\frac{1}{q}-\frac{s}{2})+\frac{1}{2}$:
\begin{equation}
\|A_0\nabla f\|_{L^p_tL^r_x} \leq \|A_0\|_{L^p_tL_x^{(1/q-s/2)^{-1}}}\|\nabla f\|_{L^\infty_tL^2_x}\lesssim \|A_0\|_{L^p_t\dot W^{s,q}_x}\|\nabla f\|_{{H^{s,\theta}}}.
\end{equation}
Then if the ${H^{s,\theta}}$ norm of $\nabla f$ is sufficiently small, we obtain
\begin{equation}
\|A_0\|_{L^p_t\dot W^{s,q}_x}\lesssim \|\nabla f\|_{H^{s,\theta}}\|\phi\|_{H^{s,\theta}},
\end{equation}
as needed.
\newline\indent For the non-homogeneous estimate, since here $\frac{1}{4}<s<\frac{1}{2}$, the upper bound for $a$ is simply $4s-1$. In addition, for our purposes right now it suffices to show the estimate for one particular $a$. Therefore we set $0<a<\min(s,4s -1)$ for $\frac{1}{4}<s<\frac{1}{2}$, and we let $p,q$ satisfy \eqref{cq3} and \eqref{cp3}. We have
\begin{equation}\label{e4}
\begin{split}
\|A_0\|_{L^p_t W^{a,q}_x}&\lesssim\|D^{-1}(A_0\nabla f)\|_{L^p_t W^{a,q}_x}+ \|D^{-1}(\nabla f\phi)\|_{L^p_t W^{a,q}_x}\\
&\lesssim\|D^{-1}(A_0\nabla f)\|_{L^p_tL^q_x}+ \|D^{-1}(\nabla f\phi)\|_{L^p_tL^q_x}\\
&\quad +\|D^{a-1}(A_0\nabla f)\|_{L^p_tL^q_x}+\|D^{a-1}(\nabla f\phi)\|_{L^p_tL^q_x}.
\end{split}
\end{equation}
The Klainerman-Tataru estimate \eqref{A} handles the second and the last terms (see below). Consider the first term:
\begin{equation}
\begin{split}
\|D^{-1}(A_0\nabla f)\|_{L^p_tL^q_x} &\lesssim\|A_0\nabla f\|_{L^p_tL^r_x},\quad \frac{1}{q}=\frac{1}{r}-\frac{1}{2},\\
&\leq \|A_0\|_{L^p_tL^q_x}\|\nabla f\|_{L^\infty_tL^2_x}\\
&\leq \|A_0\|_{L^p_tW^{a,q}_x}\|\nabla f\|_{{H^{s,\theta}}}.
\end{split}
\end{equation}
For the third term we have
\begin{equation}\nonumber
\begin{split}
\|D^{a-1}(A_0\nabla f)\|_{L^p_tL^q_x}&\lesssim \|A_0\nabla f\|_{L^p_tL^r_x},\quad \frac{1}{q}=\frac{1}{r}-\frac{1-a}{2},\\
&\lesssim \|A_0\|_{L^p_tL^q_x}\|D^a\nabla f\|_{L^\infty_tL^2_x}, \quad \frac{1}{r}=\frac{1}{q}+\left(\frac{1}{2} -\frac{a}{2}\right),\\
&\lesssim \|A_0\|_{L^p_tW^{a,q}_x}\|\nabla f\|_{{H^{s,\theta}}}.
\end{split}
\end{equation}
Then, as before, this completes the proof if the ${H^{s,\theta}}$ norm of $\nabla f$ is sufficiently small.
\end{subsection}
\end{section}
\subsection{Applying the Klainerman-Tataru Theorem}\label{KTT}
We said that several estimates above follow from the Klainerman-Tataru estimate \eqref{A}. We need to check that this is in fact the case. We begin by stating the theorem. We state it for two dimensions only, and as it was given in \cite{KS} (the original result holds for $n\geq 2$).
\begin{theorem*}(\cite{KlainermanTataru})
Let $1 \leq p \leq \infty$, $ 1 \leq q < \infty$. Assume that
\begin{eqnarray}
\frac{1}{p} \leq \frac{1}{2}\left(1-\frac{1}{q}\right),\label{c1}\\
0 < \sigma < 2\left(1-\frac{1}{q} -\frac{1}{p}\right),\label{c2}\\
s_1, s_2 < 1 - \frac{1}{q}-\frac{1}{2p},\label{c3}\\
s_1+s_2+\sigma=2(1- \frac{1}{q}-\frac{1}{2p}).\label{c4}
\end{eqnarray}
Then
\[
\|D^{-\sigma}(uv)\|_{L^p_tL^q_x(\mathbb R^2)} \lesssim \|u\|_{H^{s_1,\theta}} \|v\|_{H^{s_2,\theta}},
\]
provided $\theta > \frac{1}{2}$.
\end{theorem*}
The first time we use the theorem is in \eqref{e3} for the term $\|D^{s-1}(\nabla f\phi)\|_{L^p_tL^q_x}$. Note $\sigma=1-s$. Clearly $1 \leq p \leq \infty$, $ 1 \leq q < \infty$. Next, by \eqref{aspqs1} $\frac{2}{p}=1-\frac{1}{q}$, so \eqref{c1} holds. Since $s<\frac{1}{2}$, $\sigma>0$, and we can see \eqref{c2} holds when we substitute $\frac{1}{2}-\frac{1}{2q}$ for $\frac{1}{p}$ on the right hand side and use $\frac{1}{q}<\frac{2}{3}s$. Next we let $s_1=s_2$, and with $\sigma=1-s>0$, \eqref{c4} implies \eqref{c3}, so we only check \eqref{c4}. To that end we must be able to choose $s_1$ so that $2s_1=1-\frac{2}{q}-\frac{1}{p}+s\leq 2s,$ which is equivalent to our condition on $p$ and one of the lower bounds on $\frac{1}{q}$.\\
\indent The next place we use the theorem is in \eqref{e4} for $\|D^{-1}(\nabla f\phi)\|_{L^p_tL^q_x}$ and $\|D^{a-1}(\nabla f\phi)\|_{L^p_tL^q_x}$, where $p$ and $q$ are as in \eqref{cq3} and \eqref{cp3} with $0<a<\min(s,4s -1)<1$. Then for $\sigma=1$, by the right hand side of \eqref{cp3}, \eqref{c2} holds and implies \eqref{c1}. Note that since \eqref{c2} is true with $\sigma=1$, it is true with $\sigma=1-a$. Next, for $\sigma=1$, \eqref{c4} gives \eqref{c3}, and the same holds for $\sigma=1-a$ as long as $0<a<1$. So again it is sufficient to see that we can have $s_1$ defined by \eqref{c4} such that $s_1\leq s$; for $\sigma=1-a$ that follows from the left hand side of \eqref{cp3}, and this shows we can find it for $\sigma=1$ as well.
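For orientation, we verify the conditions with one concrete choice of exponents; the numbers are purely illustrative, and any admissible choice works equally well. Take $s=\frac{3}{8}$, so $\sigma=1-s=\frac{5}{8}$. Condition \eqref{aspqs1} asks for $\max\left(\frac{1}{12},\frac{3}{16}\right)<\frac{1}{q}<\frac{1}{4}$, so we may take $\frac{1}{q}=\frac{1}{5}$, and then $\frac{1}{p}=\frac{1}{2}\left(1-\frac{1}{q}\right)=\frac{2}{5}$. Now \eqref{c1} holds with equality, \eqref{c2} reads $\frac{5}{8}<2\left(1-\frac{1}{5}-\frac{2}{5}\right)=\frac{4}{5}$, and \eqref{c4} with $s_1=s_2$ gives
\[
2s_1=2\left(1-\frac{1}{5}-\frac{1}{5}\right)-\frac{5}{8}=\frac{23}{40}\leq 2s=\frac{3}{4},
\]
so $s_1=s_2=\frac{23}{80}\leq s$, and \eqref{c3} holds as well since $\frac{23}{80}<1-\frac{1}{5}-\frac{1}{5}=\frac{3}{5}$.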
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The cosmic star formation rate (SFR) is observed to peak between redshifts $z\approx3$ and $z\approx2$ after which it decreases by an order of magnitude \citep[e.g.][]{Hopkins2006}. Its evolution is thought to be determined by the combination of the formation and growth of (dark matter) haloes, which depends on cosmology, and the distribution of SFRs in haloes as a function of halo mass and redshift. The latter depends on processes such as gas accretion, stellar mass loss, radiative cooling, (re-)ionization, and feedback from star formation and active galactic nuclei (AGN) \citep[e.g.][]{Schaye2010}. The observed rates of star formation in galaxies can only be sustained for long periods of time if the galaxies are being fed continuously \citep[e.g.][]{Bauermeister2010}. This feeding happens through the accretion of gas from the intergalactic medium. In the simplest picture of spherical collapse, it is assumed that all gas accreting onto a halo is shock-heated to the virial temperature of the halo, reaching a quasi-static equilibrium supported by the pressure of the hot gas. If the cooling time of the halo gas is sufficiently short, then it may subsequently enter the central galaxy. Whether or not the halo gas can accrete onto the galaxy therefore depends on both the temperature and the density of the gas \citep{Rees1977}. However, if the cooling time of gas that has gone through a (hypothetical) accretion shock at the virial radius is short compared with the age of the Universe, then there can be no hydrostatic halo and hence also no virial shock. This will be the case for low-mass haloes \citep{Rees1977, White1978}. The accretion rate onto the central galaxy then depends on the infall rate, but not on the cooling rate \citep{White1991}. Indeed, \citet{Birnboim2003} showed that a virial shock is absent for low-mass haloes in a spherically symmetric simulation. However, cosmological simulations show significant deviations from spherical symmetry. Galaxies form inside the filaments that make up the cosmic web and these filaments contribute significantly to, or even dominate, the gas supply of galaxies \citep{Dekel2009a}. Because the density inside filaments is higher than that of the ambient gas, the cooling time of the gas is much shorter and the gas can accrete cold onto haloes of higher masses than predicted by spherically symmetric models \citep[e.g.][]{Keres2005, Dekel2006}. This cold gas can reach the high densities of the interstellar medium (ISM) much more efficiently than the tenuous, hot gas in the halo. It is this `cold mode' of accretion that predominantly feeds the galaxy and powers star formation \citep[][hereafter V10]{Keres2009a, Brooks2009, Voort2010}. The fraction of the accreted gas that is accreted cold, i.e.\ the fraction of the gas that did not pass through a virial shock, depends on both halo mass and redshift (\citealt{Keres2005, Dekel2006, Ocvirk2008, Keres2009a, Brooks2009}; V10). In V10 we used simulations to show that the rate at which gas accretes onto central galaxies is generally much lower than the rate at which gas accretes onto their host haloes. Furthermore, we found that while halo accretion rates are determined by the depth of the gravitational potentials, galaxy accretion rates are also sensitive to processes such as metal-line cooling and feedback from star formation and AGN. 
Here we use a large cosmological hydrodynamical simulation that includes radiative cooling (computed element by element and thus including metal lines), star formation, stellar mass loss, and outflows driven by both supernovae and AGN, to determine how hot and cold accretion influence the global star formation history (SFH). We calculate global accretion rate densities, both for hot- and cold-mode accretion, and for accretion onto haloes as well as accretion onto galaxies. We will compare the evolution of these different global accretion rates to the resulting global SFH and learn how they all link together. We will show that the sharp drop in the global SFH after $z\approx2$ reflects the corresponding sharp drop in the rate of cold-mode accretion onto haloes. Motivated by semi-analytic models and simulations that have shown AGN feedback to be crucial for the suppression of star formation in high-mass haloes \citep[e.g.][]{Springeletal2005, Bower2006, Croton2006, Lagos2008, Booth2009, McCarthy2010}, we use an identical simulation, except for the omission of feedback from AGN, to investigate the effect of AGN feedback on the global accretion rates. We will show that the hot accretion mode is more strongly suppressed by AGN feedback than the cold mode. Without AGN feedback, low-redshift star formation would not be predominantly fuelled by the cold accretion mode and the drop in the cosmic SFR would be much less strong. The outline of this paper is as follows. In Section~\ref{sec:sim} we briefly describe the simulations from which we derive our results and discuss our method for selecting recently accreted gas. In Section~\ref{sec:global} we compare global accretion rates to the cosmic SFR and show which haloes contribute most to the global accretion rates and the cosmic SFH. In Section~\ref{sec:REFAGN} we investigate the effect of AGN feedback on the hot and cold accretion rate densities. Finally, we summarize and discuss our results in Section~\ref{sec:concl}. \section{Simulations} \label{sec:sim} We use a modified version of \textsc{gadget-3} \citep[last described in][]{Springel2005}, a smoothed particle hydrodynamics (SPH) code that uses the entropy formulation of SPH \citep{Springel2002}, which conserves both energy and entropy where appropriate. This work is part of the OverWhelmingly Large Simulations (OWLS) project \citep{Schaye2010}, which consists of a large number of cosmological simulations with varying (subgrid) physics. Here we make use of the so-called `AGN' and `reference' models, which are identical except that only model AGN includes supermassive black holes and AGN feedback. The AGN simulation will be our fiducial model, but we will compare it with the OWLS reference model in Section~\ref{sec:REFAGN}. As the simulations are fully described in \citet{Schaye2010}, we will only summarize their main properties here. The simulations described here assume a $\Lambda$CDM cosmology with parameters $\Omega_\mathrm{m} = 1 - \Omega_\Lambda = 0.238$, $\Omega_\mathrm{b} = 0.0418$, $h = 0.73$, $\sigma_8 = 0.74$, $n = 0.951$. These values are consistent\footnote{The only significant discrepancy is in $\sigma_8$, which is 8 per cent, or 2.3$\sigma$, lower than the value favoured by the WMAP 7-year data.} with the WMAP year~7 data \citep{Komatsu2011}. A cubic volume with periodic boundary conditions is defined, within which the mass is distributed over $N^3$ dark matter and as many gas particles. 
The box size (i.e.\ the length of a side of the simulation volume) of the simulations used in this work is 100~$h^{-1}$ comoving Mpc, with $N=512$. The (initial) particle masses for baryons and dark matter are $1.2\times10^8$~M$_\odot$ and $5.6\times10^8$~M$_\odot$, respectively. The gravitational softening length is 7.8~$h^{-1}$~comoving kpc, i.e.\ 1/25 of the mean dark matter particle separation, but we imposed a maximum of 2~$h^{-1}$~proper kpc, which is reached at $z=2.91$. The abundances of eleven elements released by massive stars and intermediate-mass stars are followed as described in \citet{Wiersma2009b}. We assume the stellar initial mass function (IMF) of \citet{Chabrier2003}, ranging from 0.1 to 100~M$_\odot$. As described in \citet{Wiersma2009a}, radiative cooling and heating are computed element by element in the presence of the cosmic microwave background radiation and the \citet{Haardt2001} model for the UV/X-ray background from galaxies and quasars. Star formation is modelled according to the recipe of \citet{Schaye2008}. A polytropic equation of state $P_\mathrm{tot}\propto\rho_\mathrm{gas}^{4/3}$ is imposed for densities exceeding $n_\mathrm{H}=0.1$~cm$^{-3}$, where $P_\mathrm{tot}$ is the total pressure and $\rho_\mathrm{gas}$ the density of the gas. Gas particles with proper densities $n_\mathrm{H}\ge0.1$~cm$^{-3}$ and temperatures $T\le10^5$~K are moved onto this equation of state and can be converted into star particles. The star formation rate (SFR) per unit mass depends on the gas pressure and is set to reproduce the observed Kennicutt-Schmidt law \citep{Kennicutt1998}. Feedback from star formation is implemented using the prescription of \citet{Vecchia2008}. About 40 per cent of the energy released by type II SNe is injected locally in kinetic form. The rest of the energy is assumed to be lost radiatively. The initial wind velocity is 600 km s$^{-1}$. Our fiducial simulation includes AGN feedback. The model, which is a modified version of the model introduced by \citet{Springeletal2005}, is fully described and tested in \citet{Booth2009}. In short, a seed mass black hole is placed in every resolved halo. These black holes grow by accretion of gas, which results in the injection of energy in the surrounding medium, and by mergers. The accretion rate onto the black hole equals the so-called Bondi-Hoyle accretion rate \citep{Bondi1944} if the gas density is low ($n_{\rm H} < 10^{-1}\,{\rm cm}^{-3}$). However, in dense, star-forming gas, where the accretion rate would be severely underestimated because the simulations do not include a cold, interstellar gas phase and because the Jeans scales are unresolved, the accretion rate is multiplied by an efficiency parameter, $\alpha$, given by $\alpha=(n_\mathrm{H}/n^*_\mathrm{H})^\beta$, where $n^*_\mathrm{H}=0.1$~cm$^{-3}$ is the threshold density for star formation and $\beta=2$. Note, however, that our results are insensitive to the choice for $\beta$ as long as it is chosen to be large (see \citealt{Booth2009}). A fraction of 1.5 per cent of the rest-mass energy of the accreted gas is injected into the surrounding medium in the form of heat, by increasing the temperature of at least one neighbouring gas particle by at least $10^8$~K. The minimum temperature increase ensures that the feedback is effective, because the radiative cooling time of the heated gas is sufficiently long, and results in fast outflows.
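Before turning to the effects of this feedback, we note as a consistency check that the particle masses and softening length quoted above follow directly from the adopted cosmology and box size. A minimal Python sketch (an illustration, not part of the simulation code; the numerical value of the critical density in these units is standard):
\begin{verbatim}
# Consistency check of the simulation parameters quoted above.
h, Omega_m, Omega_b = 0.73, 0.238, 0.0418
N = 512                            # particles per dimension
L_box = 100.0 / h                  # box size [comoving Mpc]
rho_crit = 2.775e11 * h**2         # critical density [M_sun Mpc^-3]

V = L_box**3
m_gas = Omega_b * rho_crit * V / N**3             # ~1.2e8 M_sun
m_dm = (Omega_m - Omega_b) * rho_crit * V / N**3  # ~5.6e8 M_sun
soft = 100.0 / N / 25.0 * 1e3                     # ~7.8 h^-1 comoving kpc

print(m_gas, m_dm, soft)
\end{verbatim}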
When AGN feedback is included, the SFR is reduced for haloes with $M_\mathrm{halo}\gtrsim10^{12}$~M$_\odot$ \citep{Booth2009}. The AGN simulation reproduces the observed mass density in black holes at $z=0$ and the black hole scaling relations \citep{Booth2009} and their evolution \citep{Booth2010} as well as the observed optical and X-ray properties, stellar mass fractions, SFRs, stellar age distributions and the thermodynamic profiles of groups of galaxies \citep{McCarthy2010}. The Lagrangian nature of the simulation is exploited by tracing each fluid element back in time, which is particularly convenient for this work, as it allows us to study the temperature history of accreted gas. During the simulations the maximum past temperature, $T_\mathrm{max}$, was stored in a separate variable. The variable was updated for each SPH particle at every time step for which the temperature exceeded the previous maximum temperature. The artificial temperature the particles obtain when they are on the equation of state (i.e.\ when they are part of the unresolved multiphase ISM) was ignored in this process. This may cause us to underestimate the maximum past temperature of gas that experienced an accretion shock at higher densities. Ignoring such shocks is, however, consistent with our aims, as we are interested in the maximum temperature reached \emph{before} the gas entered the galaxy. Resolution tests are not included in this paper. However, \citet{Schaye2010} have shown that the box size and resolution of the reference simulation used in this paper suffice to obtain nearly converged results for the cosmic SFH at $z<3$, which changes by much less than a factor of 2 when changing the resolution by a factor of 8. At higher redshifts the global SFR density is, however, underestimated as a result of the finite resolution. Because AGN feedback moves the peak star formation activity to lower-mass haloes, the convergence of the AGN simulation may be slightly worse. V10 tested the convergence of accretion rates and the fraction of the accretion due to the hot mode as a function of halo mass. They found that increasing the resolution gives slightly higher cold accretion fractions, which would only strengthen the conclusions of this work. For quantities averaged over the entire simulation volume, we have to keep in mind that the minimum halo mass that is included depends on the resolution. The global fraction of gas accreted in the cold mode may therefore also depend on the resolution, because the cold fraction increases with decreasing halo mass. Increasing the resolution would allow us to include lower-mass haloes, thus increasing the global cold accretion fraction. Again, this would only strengthen our conclusions. \subsection{Accretion and mergers} \label{sec:acc} In this section we summarize our method for determining the gas accretion rates onto haloes and galaxies and the amount of star formation within haloes. Our method is described in more detail in V10. We use \textsc{subfind} \citep{Dolag2009} to identify gravitationally bound haloes and subhaloes. In this work we only investigate accretion onto haloes that are not embedded inside larger haloes. In other words, we do not consider subhaloes. The simulations are saved at discrete output redshifts, called snapshots. The redshift intervals between snapshots are $\Delta z=0.25$ for $0< z\le 4$, and $\Delta z=0.5$ for $4< z\le 6$. This is the time resolution for determining accretion rates. 
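To translate these redshift intervals into a time resolution, one can difference the ages of the Universe on the snapshot grid. A short sketch using astropy (an illustration, not the analysis pipeline itself), with the cosmological parameters of Section~\ref{sec:sim}:
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=73.0, Om0=0.238, Ob0=0.0418)

# Snapshot grid: dz = 0.25 for 0 < z <= 4 and dz = 0.5 for 4 < z <= 6
z_grid = np.concatenate([np.arange(0.0, 4.001, 0.25),
                         np.arange(4.5, 6.001, 0.5)])

ages = cosmo.age(z_grid)   # age of the Universe at each snapshot [Gyr]
dt = -np.diff(ages)        # time between consecutive snapshots
print(dt.min(), dt.max())
\end{verbatim}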
For all haloes, called descendants, with more than 100 dark matter particles (i.e.\ with $M_\mathrm{halo}\gtrsim10^{10.8}$~M$_\odot$), we find the progenitor at the previous output redshift. We do this by finding the group containing most of the 25 most bound dark matter particles of the descendant. Haloes without a well-defined progenitor are discarded from our analysis. All gas entering a halo, both from accretion and from mergers, contributes to its growth. We therefore identify all the gas and recently formed star particles that are in the descendant, but not in its progenitor. To participate in star formation, gas has to enter not only the halo, but also the ISM of the galaxy. For the same haloes as we have selected above, we identify all particles entering the ISM (i.e.\ moving onto the equation of state) of the descendant galaxy between the two consecutive output redshifts. Galaxy mergers are therefore automatically included. To make sure we are not looking at accretion onto substructures within the main halo, we only consider accretion taking place within the inner 15 per cent of the virial radius. This corresponds to the growth of the central galaxy. Finally, we identify all the stars formed in the same haloes and redshift bins to calculate the corresponding SFRs. Again, unresolved haloes and satellites are excluded. We decided to include gas contributed by mergers, as well as smoother accretion flows, because both are mechanisms by which haloes and galaxies grow and both can provide fuel for star formation. We note, however, that the contribution from mergers is much smaller than that from smooth accretion. Mergers with mass ratios greater than 1:10 account for only $\la 10$ per cent of the total halo growth, except in groups and clusters at low redshift (see V10). The simulations used for this study do not resolve haloes below $M_\mathrm{halo}=10^{10.8}$~M$_\odot$. We are therefore missing contributions to the global accretion rates and SFR from lower mass haloes. Because we use a fixed maximum past temperature threshold of $10^{5.5}$~K to distinguish hot from cold accretion (see below), small haloes with virial temperatures below this threshold will by definition accrete almost all gas in the cold mode. The mass regime that we investigate here is therefore the most interesting regime, where changes in accretion mode are expected to occur \citep{Dekel2006}. \section{Global accretion and star formation} \label{sec:global} \citet{Schaye2010} showed that without feedback from supernovae and AGN, the SFR density is overpredicted by a large factor at $z\lesssim2$. By including supernova feedback they could lower the SFR density, but if the predicted SFR density matched the observed peak at $z\approx2$, then it overpredicted the SFR density at $z=0$. The drop in the global SFR below $z=2$ is much closer to the observed slope if AGN feedback is included, but in that case the SFR density may be slightly too low \citep{Schaye2010}. This discrepancy could be solved by decreasing the efficiency of feedback from star formation, which had been set to reproduce the observed peak in the SFH using models without AGN feedback. It is, however, not clear how seriously we should take the discrepancy, since the AGN simulation does reproduce the observed masses and ages of the central galaxies of groups \citep{McCarthy2010}. 
This would suggest that the problem may be solved by increasing $\sigma_8$ from 0.74 to the currently favoured value of 0.81, which would have a relatively strong effect on global star formation rates \citep[see][]{Schaye2010} while leaving the evolution of haloes of a given mass nearly unchanged. Because, for the purposes of this work, the AGN simulation is the most realistic run from the OWLS suite, we will adopt it as our fiducial model. In Section~\ref{sec:REFAGN} we will discuss how its predictions differ from those of a simulation without AGN feedback. \begin{figure*} \center \includegraphics[scale=0.58]{globalaccrate.eps} \caption {\label{fig:global} Evolution of global accretion rate and SFR densities in resolved haloes with well-defined progenitors. The left, middle, and right panels show global accretion rate densities onto haloes, onto central galaxies, and global SFR densities, respectively. Red and blue curves show accretion rate densities (left and middle panels) and SFR densities (right panel) resulting from hot- and cold-mode accretion, respectively. In all panels, the black curve is the sum of the red and blue curves, the green curve shows the global SFR density in the selected haloes, and the grey curve shows the global SFR density in the entire simulation box. The small `step' visible at $z\approx4$ is caused by the sudden increase in the time resolution for determining accretion, i.e.\ $\Delta z$ between snapshots decreases by a factor of two at $z=4$. Galaxies accrete most of their gas in the cold mode and this mode is responsible for an even larger fraction of the star formation. Because of outflows driven by supernovae and AGN, the SFR density is generally lower than the galaxy accretion rate density. The global SFR declines more rapidly than either the total or the hot-mode accretion rate densities. This decline must therefore be caused by the drop in the global cold-mode accretion rate, though with a delay.} \end{figure*} \begin{figure*} \center \includegraphics[scale=0.58]{globalaccratemassbin.eps} \caption {\label{fig:globalmass} Evolution of global accretion rate densities onto haloes (left column), galaxies (middle column), and global SFR densities (right column) for different halo mass bins. From top to bottom, only haloes have been included with: $10^{11}\le M_\mathrm{halo}<10^{12}$~M$_\odot$, $10^{12}\le M_\mathrm{halo}<10^{13}$~M$_\odot$, and $M_\mathrm{halo}\ge10^{13}$~M$_\odot$. The curves show the same quantities as in Figure~\ref{fig:global}. Above $z\approx3.5$ the highest mass bin contains no haloes. At all redshifts most of the cold halo accretion, galaxy accretion, and star formation happens in low-mass haloes (i.e.\ $M_\mathrm{halo}<10^{12}$~M$_\odot$). At $z\gtrsim2$ low-mass haloes also dominate the global hot halo accretion rates. At $z\lesssim1$ the total halo accretion rate is dominated by high-mass haloes, but nearly all of the gas is accreted hot and unable to accrete onto galaxies.} \end{figure*} \begin{figure*} \center \includegraphics[scale=0.58]{globalaccrateREFAGN.eps} \caption {\label{fig:globalREFAGN} Evolution of global accretion rate densities onto haloes (left column), galaxies (middle column), and global SFR densities (right column) for simulations with (solid curves, same as in Figure \ref{fig:global}) and without (dashed curves) AGN feedback. The black, red, and blue curves show the global accretion and SFR densities from all, hot-mode, and cold-mode accretion, respectively.
AGN feedback suppresses halo accretion only slightly, but the effect on galaxy accretion and star formation is large, up to an order of magnitude. AGN feedback preferentially suppresses \textsl{hot}-mode galaxy accretion and star formation from gas accreted in the \textsl{hot} mode.} \end{figure*} The temperature of accreting gas has been found to follow a bimodal distribution (e.g.\ \citealt{Keres2005}; V10). Therefore, two modes of accretion are considered: the hot and the cold mode. The cooling function peaks at about $T=10^{5-5.5}$~K \citep[e.g.][]{Wiersma2009a}, so there is little gas around this temperature. We therefore define hot (cold) accretion as accreted gas with maximum past temperatures above (below) $10^{5.5}$~K. Much of the gas that is shock-heated to higher temperatures is expected to stay hot for a long time, whilst gas that is heated to lower temperatures can more easily cool and condense onto the galaxy. When considering accretion onto galaxies, it is important to note that the terms `hot' and `cold' refer to the maximum past temperature rather than the current temperature. In fact, gas that has been shock-heated to temperatures in excess of $10^{5.5}$~K \emph{must} cool down before it can accrete onto a galaxy, as we identify this type of accretion as gas joining the ISM, which in our simulations means that the gas density crosses the threshold of $n_\mathrm{H}=0.1$~cm$^{-3}$ while the temperature satisfies $T\le10^5$~K. Thus, gas that is accreted hot onto galaxies according to our terminology (which is consistent with that used in the literature) has been hot in the past, but it is cold at the time of accretion. In V10 we showed that the maximum past temperature is usually reached around the time the gas accreted onto the halo. The global accretion rate density onto \textit{haloes}, i.e.\ the gas mass accreting onto resolved haloes per year and per comoving Mpc$^3$, is shown in the left panel of Figure~\ref{fig:global} by the solid, black curve. The solid, red and blue curves show global accretion rates for hot- and cold-mode accretion, respectively. These global accretion rate densities are averaged over the time interval between two snapshots. The SFR density in the haloes we consider is shown by the green curve. The SFH in the entire box is shown by the dashed, grey curve. It is higher than the dashed, green curve, because the latter excludes star formation in subhaloes and unresolved haloes (i.e.\ $M_\mathrm{halo}<10^{10.8}$~M$_\odot$). The global accretion rate onto resolved main haloes (solid, black curve) peaks at $z\approx3$. It is fairly constant, varying only by about a factor of two from $z\approx4$ down to $z\approx0$. The average accretion rate onto haloes of a given mass decreases more strongly towards lower redshift (V10). However, the number of haloes at a fixed mass increases and higher mass haloes form with decreasing redshift. The combination of these effects results in an almost constant global accretion rate density. We note, however, that the normalization and shape are not fully converged with respect to resolution: higher resolution simulations may find higher total and cold accretion rate densities. Increasing the resolution would allow us to include haloes with lower masses, which would boost the cold accretion rate density by up to a factor of $\sim2-3$.
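As an aside, the mode assignment used throughout this section reduces in code to a threshold on the stored maximum past temperature, and accretion onto a galaxy corresponds to the ISM-joining condition defined above. A minimal sketch (the array names are hypothetical):
\begin{verbatim}
import numpy as np

T_SPLIT = 10**5.5   # K; separates hot- from cold-mode accretion

def accretion_mode(T_max):
    # Hot mode if the maximum past temperature exceeds 10^5.5 K.
    return np.where(T_max > T_SPLIT, "hot", "cold")

def joins_ism(n_H, T):
    # Gas joins the ISM (moves onto the equation of state) when its
    # density crosses n_H = 0.1 cm^-3 while T <= 1e5 K.
    return (n_H >= 0.1) & (T <= 1e5)
\end{verbatim}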
The global accretion rate is an order of magnitude higher than the global SFR in the same haloes (green, dashed curve), indicating that most of the gas that accretes onto haloes never forms stars. The global growth of haloes is dominated by cold accretion for $z>2$, but by $z=0$ the contribution of the hot mode exceeds that of the cold mode by an order of magnitude. The cold accretion rate density peaks around $z=3$ and falls rapidly thereafter, while the hot accretion rate density increases down to $z\approx2$ and flattens off at lower redshifts. The global SFR peaks around $z=2$ and declines by an order of magnitude to $z=0$. An order of magnitude drop is also visible for the global cold accretion rate from $z=3$ to $z=0$. Neither the total nor the hot accretion rate histories can explain the drop in the cosmic SFH, which must therefore be driven by the drop in the cold accretion rate. The middle panel of Figure~\ref{fig:global} shows the global accretion rates onto central \textit{galaxies}. The green and grey curves are identical to the ones shown in the left panel. The black, red, and blue curves describe the total, hot, and cold accretion rate densities, respectively. The total accretion rate shows the amount of gas that joins the ISM. The gas can, however, be removed from the ISM by supernova and AGN feedback, as well as by dynamical processes. This is why the overall normalization of the SFR is generally lower than the ISM accretion rate. The global SFR peaks later than the global galaxy accretion rate ($z\approx 2$ versus $z\approx 3$, which corresponds to a difference of $\sim1$~Gyr in the age of the Universe). This delay probably results from the time it takes to convert interstellar gas into stars. The long gas consumption time scale implied by the assumed Kennicutt star formation law ($\sim1$~Gyr for typical densities) allows the existence of reservoirs of accreted gas. The SFR can therefore temporarily be higher than the galaxy accretion rate, as happens for $z<0.5$ (compare the dashed, green curve with the solid, black curve, which include the same sample of resolved haloes). Gas returned to the ISM by stellar mass loss, a process that is included in our simulations and that becomes important for $z<1$ \citep{Schaye2010, Leitner2010}, also increases the SFR relative to the accretion rate. As we did in the left panel of Figure~\ref{fig:global} for accretion onto haloes, we split the global accretion rate onto galaxies into separate contributions from the hot and cold modes. The global hot accretion rate peaks around $z=2$, as does the SFR density. The hot accretion rate is, however, not nearly enough to account for all the star formation in these galaxies, falling short by at least a factor of 2 at all redshifts. At $z>3$, the global cold accretion rate is an order of magnitude higher than the global hot accretion rate. This difference decreases to $\sim0.25$~dex by $z=0$. At all redshifts, it is mostly cold accretion that allows for the growth of \textit{galaxies}, even though hot accretion dominates the growth of \textit{haloes} below $z\approx2$. Comparing the middle panel to the left panel, we notice that the global cold accretion rate onto galaxies is a factor of $\sim3-4$ lower than the cold accretion rate onto haloes. Not all cold gas that accretes onto haloes makes it into the central galaxy to form stars (see also V10).
The shapes of the blue curves are, however, similar, indicating that the fraction of the gas accreting cold onto the halo that proceeds to accrete onto the galaxy is roughly constant with time. The situation is very different for global hot accretion rates (red curves). Hot accretion onto the ISM has already peaked at $z\approx2.5$, while hot accretion onto haloes continues to increase down to $z=0$. This can be explained by noting that as the Universe evolves, gas is heated to higher temperatures, because haloes become more massive. In addition, the average density of the Universe goes down. The lower densities and higher temperatures give rise to longer cooling times. Moreover, winds from supernovae and AGN eject low-entropy gas at high redshift, raising the entropy of the gas in haloes at low redshift \citep{Crain2010, McCarthy2011}. Hence, as time goes on, more of the halo gas is unable to cool and reach the central galaxy. While the gravitational potential is the most important factor for the growth of haloes, for the growth of galaxies, the cooling function and feedback processes also come into play. The right panel of Figure \ref{fig:global}, which shows the global SFR densities due to hot and cold accretion, confirms that the main fuel for star formation is gas accreted in the cold mode. The difference becomes smaller towards lower redshift. However, even at $z=0$ hot mode gas contributes 0.3~dex less than cold mode gas. To investigate which haloes contribute most to the global accretion rates and SFHs, we show the same quantities as in Figure~\ref{fig:global} for three different halo mass bins in Figure~\ref{fig:globalmass}. From top to bottom, the mass ranges are $10^{11}\le M_\mathrm{halo}<10^{12}$~M$_\odot$, $10^{12}\le M_\mathrm{halo}<10^{13}$~M$_\odot$, and $M_\mathrm{halo}\ge10^{13}$~M$_\odot$, which contain 21813, 2804, and 285~haloes at $z=0$. The shape of the total halo accretion rate density in the lowest mass range is in agreement with that found by \citet{Bouche2010} based on an extended Press-Schechter formalism and a fit to dark matter accretion rates in N-body simulations. For $z>2$ the global accretion rate densities onto haloes (left column), galaxies (middle column), and the global star formation rate density (right column) are all dominated by haloes with $M_\mathrm{halo}<10^{12}$~M$_\odot$ (top row). At that time, higher-mass haloes are still too rare to contribute significantly. Below $z\approx2$, haloes with $M_\mathrm{halo}=10^{12-13}$~M$_\odot$ (middle row) begin to contribute significantly, and, for accretion onto haloes (but not for accretion onto galaxies or star formation), their contribution is overtaken by $M_\mathrm{halo}=10^{13-14}$~M$_\odot$ haloes (bottom row) around $z=1$. Observe that the global halo accretion rate density starts to decline at $z\approx3$ and $z\approx2$ for the low and middle halo mass bins, respectively, and that it keeps increasing down to $z=0$ for $M_\mathrm{halo}\ge10^{13}$~M$_\odot$. The global cold accretion rate density decreases with time for $z<3$, $z<2.5$, and $z<1$ for the low, middle, and high halo mass bins, respectively. Both the galaxy accretion rate density and the SFR density are dominated by $M_\mathrm{halo}<10^{12}$~M$_\odot$ haloes at all redshifts. Towards lower redshifts, high-mass haloes account for larger fractions of the total galaxy accretion rate density and SFR density, though they never dominate.
High-mass haloes do dominate the halo accretion rate at low redshift, but nearly all of the gas is accreted hot and only a very small fraction of this gas is subsequently able to cool down onto galaxies. \section{Effect of AGN feedback} \label{sec:REFAGN} It is interesting to see what influence AGN feedback has on our results. Because AGN feedback is more important in higher-mass haloes, for which hot-mode accretion is more important, we expect it to have a larger effect on the hot-mode accretion rate density. Moreover, it has been hypothesized \citep[e.g.][]{Keres2005, Dekel2006} that hot, dilute gas may be more vulnerable to AGN feedback than cold streams and may therefore be preferentially prevented from accreting onto galaxies. Indeed, \citet{Theuns2002} had already demonstrated that supernova-driven outflows follow the path of least resistance, leaving the cold filaments that produce HI absorption intact. \citet{McCarthy2011} have shown that feedback from AGN at high redshift increases the entropy of the halo gas at low redshift. The hot gas will therefore be even hotter and less dense at low redshift than it would be in the absence of AGN feedback, making it more susceptible to being heated or entrained in an outflow, and thus to being prevented from accreting. We compare our fiducial simulation, which includes AGN feedback, to the OWLS `reference model', which is identical to our fiducial run except that it does not include black holes and AGN feedback. This allows us to assess the effect of AGN feedback on the global hot and cold accretion rates. Figure~\ref{fig:globalREFAGN} shows the same solid curves as were shown in Figure~\ref{fig:global}. They indicate the total, hot, and cold accretion rate densities onto haloes (left panel) and onto galaxies (middle panel) and the star formation rate density resulting from all, hot, and cold accretion (right panel). The dashed curves show the same global accretion rates and SFHs for the simulation without AGN feedback. For accretion onto galaxies and for star formation the differences are striking. When AGN feedback is excluded, late-time star formation is no longer predominantly fuelled by gas accreted in the cold mode. As expected, all accretion rate densities are reduced by the inclusion of AGN feedback. The effect on halo accretion is, however, small, as was also shown by V10. The hot and cold halo accretion rate densities are reduced by at most 0.2 and 0.1~dex, respectively. This reduction implies that AGN feedback also affects some gas outside of haloes. Even though the effect is small, AGN feedback reduces hot halo accretion more than cold halo accretion. The differential effect of AGN feedback on hot and cold accretion is much more pronounced for accretion onto galaxies than for accretion onto haloes and it increases towards lower redshift. At very high redshift ($z=9$), \citet{Powell2010} have shown that outflows (driven by supernova feedback) do not affect the galaxy inflow rates. Our results indicate that this may change towards lower redshifts, when densities are much lower. While AGN feedback reduces cold accretion rate densities by up to 0.4~dex (at $z=0$), the hot accretion rate densities decrease by up to 0.8~dex (also at $z=0$). The SFR densities are reduced by up to 0.6~dex for star formation powered by cold-mode accretion, but by 1~dex for hot-mode accretion. The reduction due to AGN feedback is thus $\sim0.4$~dex greater for the hot mode than for the cold mode, both for galaxy accretion and for star formation.
The larger reduction indicates that AGN feedback preferentially, but not exclusively, prevents hot mode gas from accreting onto galaxies and participating in star formation. Hence, the inclusion of AGN feedback strongly boosts the size of the drop in the cosmic SFR at late times. This preferential suppression of hot accretion is the result of two effects, namely the differential effect at a fixed halo mass, indicating that hot-mode gas is more vulnerable to feedback than cold-mode gas, and the fact that AGN feedback is effective only in massive haloes (with $M_\mathrm{halo}\gtrsim10^{12}$~M$_\odot$), for which hot accretion is important. The latter is the dominant effect. \section{Conclusions} \label{sec:concl} We have investigated the evolution of the global gas accretion rate densities onto haloes and onto their central galaxies and we have done so for both the hot and cold accretion modes. In addition, we studied the contributions from gas accreted through the cold and hot modes to the cosmic star formation history. We made use of a 100~$h^{-1}$~Mpc, $2\times 512^3$ particle SPH simulation from the OWLS project that includes radiative cooling (computed element by element and thus including metal lines), star formation, stellar mass loss, supernova feedback, and AGN feedback. We isolated the effect of AGN feedback by comparing to a second simulation that did not include AGN, but which was otherwise identical. The hot and cold accretion modes were separated by using a fixed maximum past temperature threshold of $T_\mathrm{max}=10^{5.5}$~K. The global gas accretion rate density onto haloes is much higher than that onto galaxies and both rates exceed the cosmic SFR density. This confirms the finding of V10 that most of the gas accreting onto haloes does not result in star formation. This is the case for both accretion modes, but the differences are larger for the hot mode. The global SFR declines after $z\approx2$, whereas the global hot-mode accretion rate onto haloes shows no such trend. From this, we conclude that the global SFR follows the drop in the global cold-mode accretion rate onto haloes, which sets in at $z\approx3$, but with a delay of order the gas consumption time scale in the ISM. Star formation tracks cold-mode accretion rather than hot-mode accretion because cold streams can reach the central galaxy, where star formation takes place, much more easily than gas that is shock-heated to high temperatures near the virial radius. Much of the hot gas cannot cool within a Hubble time and therefore cannot accrete onto the central galaxy. In addition, we demonstrated the importance of the fact that hot gas is more susceptible than cold streams to removal by outflows driven by AGN feedback. Without AGN feedback, gas accreted in the hot mode would contribute significantly to the cosmic SFR below $z=1$ and the drop in the SFR below $z=2$ would be much smaller. For the hot mode the difference between the accretion rates onto haloes and onto galaxies is larger at lower redshifts. While the hot accretion mode dominates the growth of haloes by an order of magnitude at $z\approx0$, it is still less important than cold accretion for the growth of the central galaxies. At $z > 2$, cold accretion even dominates the global accretion rate onto haloes. We demonstrated that AGN feedback suppresses accretion onto galaxies and that it does so much more efficiently for the hot mode than for the cold mode.
This happens because AGN feedback only becomes more efficient than feedback from star formation in high-mass haloes, which are dominated by hot accretion, and because hot-mode gas is more dilute and therefore more vulnerable to feedback. In addition, as demonstrated by \citet{McCarthy2011}, by ejecting low-entropy halo gas at high redshift ($z \ga 2$), AGN feedback results in an increase of the entropy, and thus a reduction of the cooling rates, of hot halo gas at low redshift. While \citet{Keres2009a} did not investigate accretion onto haloes, they did also find that cold accretion is most important for the growth of galaxies, with hot accretion becoming increasingly important towards lower redshifts (see also \citealt{Keres2005,Ocvirk2008,Brooks2009}; V10). However, their simulation included neither winds from supernovae nor feedback from AGN. AGN feedback was in fact ignored by all previous cosmological simulations investigating gas accretion except for \citet{Khalatyan2008}, who simulated a single object, and except for V10. Our results suggest that the neglect of this important process leads to a strong overestimate of the global accretion rate and SFR densities and of the importance of the hot accretion mode for galaxy accretion and star formation. In summary, the rapid decline in the cosmic SFR density below $z=2$ is driven by the corresponding drop in the cold accretion rate density onto haloes. The total accretion rate onto haloes falls off much less rapidly because the hot mode becomes increasingly important. AGN feedback, which acts preferentially on gas accreted in the hot mode, prevents the hot halo gas from accreting onto galaxies and forming stars and is therefore a crucial factor in the steep decline of the cosmic SFR density. \section*{Acknowledgements} We would like to thank Avishai Dekel and all the members of the OWLS team for valuable discussions and the anonymous referee for useful comments. The simulations presented here were run on Stella, the LOFAR BlueGene/L system in Groningen, on the Cosmology Machine at the Institute for Computational Cosmology in Durham as part of the Virgo Consortium research programme, and on Darwin in Cambridge. This work was sponsored by the National Computing Facilities Foundation (NCF) for the use of supercomputer facilities, with financial support from the Netherlands Organization for Scientific Research (NWO), also through a VIDI grant, and from the Marie Curie Initial Training Network CosmoComp (PITN-GA-2009-238356). \bibliographystyle{mn2e}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In Part 1 \cite{Part1}, we formulated the boundary--value problem of the reflection and transmission of an arbitrary plane wave due to a slab of an electro--optic structurally chiral material (SCM) in terms of a 4$\times$4 matrix ordinary differential equation. A SCM slab is helicoidally nonhomogeneous in the thickness direction, and therefore must exhibit the circular Bragg phenomenon (CBP). Endowed with one of 20 classes of point group symmetry, the SCM slab was subjected in Part 1 to a dc electric field parallel to its axis of nonhomogeneity. The enhancement of the CBP by the application of the axial dc electric field has either switching or circular--polarization--rejection applications in optics. The twin possibilities of thinner filters and electrical control of the CBP, depending on the local crystallographic class as well as the constitutive parameters of the SCM, emerged. Our objective here is to generalize the theory of Part 1 to the application of an arbitrarily oriented dc electric field in order to control the CBP. The matrix ordinary differential equation then becomes more complicated, even if the plane wave is normally incident. However, the exhibition of the CBP is not in doubt, in general, as it depends solely on the structural chirality of the SCM. The plan of this paper is as follows: Section \ref{the} contains a brief description of the optical permittivity matrix of a SCM, and the Oseen transformation is employed to derive the 4$\times$4 matrix ordinary differential equation. Section \ref{numres} contains an account of numerical results and the conclusions drawn therefrom on the alignment of the dc electric field in relation to the exhibition of the CBP. The notation is the same as for Part 1. Vectors are denoted in boldface; the cartesian unit vectors are represented by $\hat{\mathbf{u}}_x$, $\hat{\mathbf{u}}_y$, and $\hat{\mathbf{u}}_z$; symbols for column vectors and matrixes are decorated by an overbar; and an $\exp(-i\omega t)$ time--dependence is implicit with $\omega$ as the angular frequency. \section{Theoretical formulation \label{the}} We are interested in the reflection and transmission of plane waves due to a SCM slab of thickness $L$. The axis of helicoidal nonhomogeneity of the SCM is designated as the $z$ axis, and the SCM is subjected to a uniform dc electric field $\mathbf{E}^{dc}$. The half--spaces $z\leq 0$ and $z\geq L$ are vacuous. An arbitrarily polarized plane wave is obliquely incident on the SCM from the half--space $z\leq 0$. As a result, reflected and transmitted plane waves exist in the half--spaces $z\leq 0$ and $z\geq L$, respectively. A boundary--value problem has to be solved in order to determine the reflection and transmission coefficients. \subsection{Structurally chiral material} As the electro--optic SCM has the $z$ axis as its axis of helicoidal nonhomogeneity and is subjected to a dc electric field $\mathbf{E}^{dc}$, the optical relative permittivity matrix of this material may be stated as \begin{equation} \bar{\epsilon}^{SCM}(z) = \bar{S}_{z}\left(\frac{h\pi z}{\Omega}\right)\cdot\bar{R}_{y}(\chi) \cdot\bar{\epsilon}_{PE}(z) \cdot\bar{R}_{y}(\chi)\cdot \bar{S}_{z}\left(-\,\frac{h\pi z}{\Omega}\right)\,. \label{AAepsr} \end{equation} The matrix $\bar{\epsilon}_{PE}(z)$ incorporates both the Pockels effect \cite{Boyd} and the arbitrarily oriented but uniform $\mathbf{E}^{dc}$.
Correct to the first order in the components of the dc electric field, this matrix is given by \begin{equation} \displaystyle{\ \bar{\epsilon}_{PE}\approx\left( \begin{array}{ccc} \epsilon _{1}^{(0)}(1-\epsilon _{1}^{(0)}\sum_{K=1}^3 r_{1K}E_{K}^{dc} ) & -\epsilon _{1}^{(0)}\epsilon _{2}^{(0)}\sum_{K=1}^3 r_{6K}E_{K}^{dc} & -\epsilon _{1}^{(0)}\epsilon _{3}^{(0)}\sum_{K=1}^3 r_{5K}E_{K}^{dc} \\[5pt] -\epsilon _{2}^{(0)}\epsilon _{1}^{(0)}\sum_{K=1}^3 r_{6K}E_{K}^{dc} & \epsilon _{2}^{(0)}(1-\epsilon _{2}^{(0)}\sum_{K=1}^3 r_{2K}E_{K}^{dc} ) & -\epsilon _{2}^{(0)}\epsilon _{3}^{(0)}\sum_{K=1}^3 r_{4K}E_{K}^{dc} \\[5pt] -\epsilon _{3}^{(0)}\epsilon _{1}^{(0)}\sum_{K=1}^3 r_{5K}E_{K}^{dc} & -\epsilon _{3}^{(0)}\epsilon _{2}^{(0)}\sum_{K=1}^3 r_{4K}E_{K}^{dc} & \epsilon _{3}^{(0)}(1-\epsilon _{3}^{(0)}\sum_{K=1}^3 r_{3K}E_{K}^{dc} ) \end{array} \right) }\,, \label{PocEps} \end{equation} where \begin{equation} \left( \begin{array}{l} E_{1}^{dc}(z) \\[5pt] E_{2}^{dc}(z) \\[5pt] E_{3}^{dc}(z) \end{array} \right) = \bar{R}_{y}(\chi)\cdot\bar{S}_{z}\left(-\,\frac{h\pi z}{\Omega}\right)\cdot\mathbf{E}^{dc} \,, \end{equation} $\epsilon _{1,2,3}^{(0)}$ are the principal relative permittivity scalars in the optical regime, whereas $r_{JK}$ (with $1\leq J\leq 6$ and $1\leq K\leq 3$) are the electro--optic coefficients \cite{Part1,Boyd}. The SCM can be locally isotropic, uniaxial, or biaxial, depending on the relative values of $\epsilon_1^{(0)}$, $\epsilon_2^{(0)}$, and $\epsilon_3^{(0)}$. Furthermore, the SCM may belong to one of 20 crystallographic classes of local point group symmetry, in accordance with the relative values of the electro--optic coefficients $r_{JK}$. The tilt matrix \begin{equation} \bar{R}_{y}(\chi )=\left( \begin{array}{ccc} -\sin \chi & 0 & \cos \chi \\ 0 & -1 & 0 \\ \cos \chi & 0 & \sin \chi \end{array} \right) \end{equation} involves the angle $\chi \in\left[0,\pi/2\right]$ with respect to the $x$ axis in the $xz$ plane. The use of the rotation matrix \begin{equation} \bar{S}_z(\zeta)=\left( \begin{array}{ccc} \cos \zeta & -\,\sin\zeta & 0 \\ \sin\zeta & \cos \zeta & 0 \\ 0 & 0 & 1 \end{array} \right) \end{equation} in (\ref{AAepsr}) involves the half--pitch $\Omega$ of the SCM along the $z$ axis. In addition, the handedness parameter $h=1$ for structural right--handedness and $h=-1$ for structural left--handedness. Without significant loss of generality, we chose \begin{equation} \mathbf{E}^{dc} = E^{dc} (\hat{\mathbf{u}}_x \cos\chi_{dc} +\hat{\mathbf{u}}_z \sin\chi_{dc})\,,\quad \chi_{dc}\in\left[0,\pi/2\right]\,, \end{equation} and we note that the case $\chi_{dc}=\pi/2$ has been tackled in Part 1 \cite{Part1}. \subsection{Propagation in the SCM} The Maxwell curl postulates for the chosen SCM slab are given by \begin{eqnarray} &&\left. \begin{array}{l} \nabla \times \mathbf{E}(x,y,z)=i\omega\mu_o\mathbf{H}(x,y,z) \\[5pt] \nabla \times \mathbf{H}(x,y,z)=-i\omega\epsilon_o\bar{\epsilon}^{SCM}(z)\cdot \mathbf{E}(x,y,z) \end{array} \right\} \,, \nonumber \\ &&\qquad\qquad 0<z<L\,, \end{eqnarray} where $\epsilon_o$ and $\mu_o$ are the permittivity and the permeability of free space (i.e., vacuum). As a plane wave is incident obliquely on the SCM, $\forall z$ we set \cite{Part1} \begin{equation} \left.
\begin{array}{l} \mathbf{E}(x,y,z)= \mathbf{e}(z)\, \exp\left[ i\kappa(x\cos\phi+y\sin\phi)\right] \\[5pt] \mathbf{H}(x,y,z)= \mathbf{h}(z)\, \exp\left[ i\kappa(x\cos\phi+y\sin\phi)\right] \end{array} \right\}\,, \end{equation} where the wavenumber $\kappa$ and the angle $\phi$ are determined by the incidence conditions. The essential part of the Maxwell curl postulates can then be stated in terms of the column vector \begin{equation} {\bar{\psi}}\left( z\right) =\left( \begin{array}{c} e_{x}(z) \\ e_{y}(z) \\ h_{x}(z) \\ h_{y}(z) \end{array} \right) \, . \label{campoe_h} \end{equation} As in Part 1 \cite{Part1}, it is advantageous to exploit the Oseen transformation by defining the column vector \begin{equation} {\bar{\psi}}^{\prime }(z)=\bar{M}\left(\frac{h\pi z}{\Omega}\right)\cdot {\bar{\psi}}(z)\,, \end{equation} where the unitary 4$\times$4 matrix \begin{equation} \bar{M}(\zeta)=\left( \begin{array}{cccc} \cos \zeta & \sin \zeta & 0 & 0 \\ -\sin \zeta & \cos \zeta & 0 & 0 \\ 0 & 0 & \cos \zeta & \sin \zeta \\ 0 & 0 & -\sin \zeta & \cos \zeta \end{array} \right) \,. \end{equation} The column vector ${\bar{\psi}}^{\prime }(z)$ satisfies the 4$\times$4 matrix ordinary differential equation \begin{equation} \label{oblique} \frac{d}{dz}{\bar{\psi}}^{\prime }(z)= i \bar{A}^\prime(z)\cdot{\bar{\psi}}^{\prime }(z)\,, \qquad 0 < z <L\,, \end{equation} where the decomposition \begin{equation} \label{defineA} \bar{A}^\prime(z) =\bar{A}_0^\prime(u) + \bar{A}_s^\prime(u)\,\sin\chi_{dc} + \left[\bar{A}_{cs}^\prime(u)\sin\left(\frac{h\pi z}{\Omega}\right) + \bar{A}_{cc}^\prime(u)\cos\left(\frac{h\pi z}{\Omega}\right) \right]\cos\chi_{dc} \, \end{equation} clarifies the significance of the orientation of $\mathbf{E}^{dc}$, and is correct to the first order in $E^{dc}$.
The various quantities appearing on the right side of (\ref{defineA}) are as follows: \begin{eqnarray} \bar{A}_0^\prime(u) &=& \left( \begin{array}{cccc} 0 & -i\frac{h\pi}{\Omega} & 0 & \omega\mu_o \\ i\frac{h\pi}{\Omega} & 0 & -\omega\mu_o & 0 \\ 0 & -\omega\epsilon_o\epsilon_2^{(0)} & 0 & -i\frac{h\pi}{\Omega} \\ \omega\epsilon_o\epsilon_d & 0 & i\frac{h\pi}{\Omega} & 0 \end{array} \right) \nonumber \\[6pt] &+& \kappa\alpha_3\, \bar{C}_1^\prime(u) +\frac{\kappa^2}{\omega\epsilon_o}\,\frac{\epsilon_d}{\epsilon_1^{(0)}\epsilon_3^{(0)}} \,\bar{C}_3^\prime(u)- \frac{\kappa^2}{\omega\mu_o} \,\bar{C}_4^\prime(u)\,, \end{eqnarray} \begin{eqnarray} \bar{A}_s^\prime(u) &=&-\,\omega\epsilon_o\frac{\epsilon_2^{(0)}}{\epsilon_1^{(0)}} \left( \begin{array}{cccc} 0 & \quad0 & \quad0 & \quad0 \\ 0 & \quad0 & \quad0 & \quad0 \\ \epsilon_e+\epsilon_h & \quad -\epsilon_m & \quad0 & \quad 0 \\ \epsilon_\iota\cos\chi+(\epsilon_j+\epsilon_\ell) \frac{\sin 2\chi}{2}+\epsilon_k\sin\chi & \quad-(\epsilon_e+\epsilon_h) & \quad0 & \quad0 \end{array} \right) \nonumber \\[6pt] &+&\kappa\frac{\epsilon_2^{(0)}}{\epsilon_1^{(0)}\epsilon_3^{(0)}} \left[-\,\frac{\alpha_1}{\epsilon_1^{(0)}}\,\bar{C}_1^\prime(u) +(\epsilon_f+\epsilon_g)\,\bar{C}_2^\prime(u)\right]+ \frac{\kappa^2}{\omega\epsilon_o}\, \left(\frac{\epsilon_d}{\epsilon_1^{(0)}\epsilon_3^{(0)}}\right)^2\,\frac{\alpha_2}{\epsilon_d}\,\bar{C}_3^\prime(u) \,, \end{eqnarray} \begin{eqnarray} \bar{A}_{cs}^\prime(u)&=& \omega\epsilon_o \left( \begin{array}{cccc} 0 & \quad 0 & \quad0 & \quad 0 \\ 0 & \quad0 & \quad0 & \quad 0 \\ -\delta_c & \quad E^{dc}\,\left(\epsilon_2^{(0)}\right)^2 r_{22} & \quad 0 & \quad 0 \\ \delta_\iota & \quad \delta_c & \quad0 & \quad0 \end{array} \right) \nonumber \\[6pt] &+& \frac{\kappa}{\epsilon_1^{(0)}\epsilon_3^{(0)}} \left[ \delta_j\epsilon_d\,\bar{C}_1^\prime(u)+ \delta_d\epsilon_2^{(0)}\,\bar{C}_2^\prime(u)\right] +\frac{\kappa^2}{\omega\epsilon_o}\, \left(\frac{\epsilon_d}{\epsilon_1^{(0)}\epsilon_3^{(0)}}\right)^2\,\delta_k\,\bar{C}_3^\prime(u)\,, \end{eqnarray} \begin{eqnarray} \bar{A}_{cc}^\prime(u)&=& \omega\epsilon_o \left( \begin{array}{cccc} 0 & \quad 0 & \quad0 & \quad 0 \\ 0 & \quad0 & \quad0 & \quad 0 \\ -(\delta_e-\delta_h) & \delta_\ell & \quad 0 & \quad 0 \\ \delta_m & \quad \delta_e-\delta_h & \quad0 & \quad0 \end{array} \right) \nonumber \\[6pt] &+& \frac{\kappa}{\epsilon_1^{(0)}\epsilon_3^{(0)}} \left[\delta_n\epsilon_d\,\bar{C}_1^\prime(u) + (\delta_f-\delta_g)\epsilon_2^{(0)}\,\bar{C}_2^\prime(u)\right] +\frac{\kappa^2}{\omega\epsilon_o}\, \left(\frac{\epsilon_d}{\epsilon_1^{(0)}\epsilon_3^{(0)}}\right)^2\,\delta_p\,\bar{C}_3^\prime(u)\,, \end{eqnarray} \begin{equation} \bar{C}_1^\prime(u)= \left( \begin{array}{cccc} \cos u & 0 & 0 & 0 \\ -\sin u & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & \sin u & \cos u \end{array} \right)\,, \end{equation} \begin{equation} \bar{C}_2^\prime(u)= \left( \begin{array}{cccc} 0 & -\cos u & 0 & 0 \\ 0 & \sin u & 0 & 0 \\ 0 & 0 & \sin u & \cos u \\ 0 & 0 & 0 & 0 \end{array} \right)\,, \end{equation} \begin{equation} \bar{C}_3^\prime(u)= \left( \begin{array}{cccc} 0 & \quad0 & \quad -\sin u\cos u & - \cos^2u \\ 0 & \quad0 & \quad\sin^2u & \sin u \cos u \\ 0 & \quad0 & \quad 0 & 0 \\ 0 & \quad0 & \quad0 & 0 \end{array} \right)\,, \end{equation} \begin{equation} \bar{C}_4^\prime(u)= \left( \begin{array}{cccc} 0 & 0 & \quad 0 & \quad0 \\ 0 & 0 & \quad0 & \quad 0 \\ -\sin u\cos u & - \cos^2u & \quad0 & \quad0 \\ \sin^2u & \sin u \cos u & \quad0 & \quad0 \end{array}
\right)\,, \end{equation} \begin{eqnarray} &&\alpha_1= \epsilon _{1}^{(0)}\epsilon_j\cos^2\chi-\epsilon _{3}^{(0)}\epsilon_\ell\sin^2\chi +\epsilon _{1}^{(0)}\epsilon_k\cos\chi \nonumber \\ &&\qquad\quad -\epsilon _{3}^{(0)}\epsilon_\iota\sin\chi\,, \\ &&\alpha_2=\left(\epsilon _{1}^{(0)}\epsilon_n+\epsilon _{3}^{(0)}\epsilon_p\right)\cos\chi \nonumber \\ &&\qquad\quad+\left(\epsilon _{1}^{(0)}\epsilon_s+\epsilon _{3}^{(0)}\epsilon_q\right)\sin\chi\,, \\[6pt] &&\alpha_3 =\epsilon _{d}\sin 2\chi \frac{\left( \epsilon _{1}^{(0)}-\epsilon _{3}^{(0)}\right) }{2\epsilon _{1}^{(0)}\epsilon _{3}^{(0)}}\,, \end{eqnarray} \begin{eqnarray} &&\epsilon _{d}=\frac{\epsilon _{1}^{(0)}\epsilon _{3}^{(0)}}{\epsilon _{1}^{(0)}\cos ^{2}\chi +\epsilon _{3}^{(0)}\sin ^{2}\chi }\,, \\[9pt] &&\epsilon_{e} = E^{dc}\, \epsilon_1^{(0)} \epsilon_d (r_{41}\cos^2\chi-r_{63}\sin^2\chi)\,, \\[9pt] &&\epsilon_{f}=E^{dc}\,\epsilon_d\sin\chi\,\cos\chi(r_{41}\epsilon_3^{(0)}+r_{63}\epsilon_1^{(0)})\,, \\ &&\epsilon_{g} = E^{dc}\, \epsilon_d (r_{43}\epsilon_3^{(0)}\sin^2\chi+r_{61}\epsilon_1^{(0)}\cos^2\chi)\,, \\ &&\epsilon_{h}=E^{dc}\, \epsilon_1^{(0)}\epsilon_d\sin\chi\,\cos\chi(r_{43}-r_{61})\,, \end{eqnarray} \begin{eqnarray} &&\epsilon_\iota=E^{dc} \,\frac{\epsilon_1^{(0)}}{\epsilon_2^{(0)}}\,\epsilon_d^2 (r_{31}\cos^2\chi-r_{53}\sin^2\chi)\,, \\[9pt] &&\epsilon_{j}=E^{dc} \frac{\epsilon_1^{(0)}}{\epsilon_2^{(0)}}\,\epsilon_d^2 \sin\chi (r_{11} -r_{53} )\,, \\[9pt] &&\epsilon_{k}=E^{dc}\, \frac{\epsilon_1^{(0)}}{\epsilon_2^{(0)}}\,\epsilon_d^2 (r_{13}\sin^2\chi-r_{51}\cos^2\chi)\,, \\[9pt] &&\epsilon_{\ell}=E^{dc}\, \frac{\epsilon_1^{(0)}}{\epsilon_2^{(0)}}\,\epsilon_d^2 \cos\chi (r_{33} -r_{51} )\,, \end{eqnarray} \begin{eqnarray} &&\epsilon_{m}=E^{dc}\,\epsilon_1^{(0)}\epsilon_2^{(0)} (r_{21}\cos\chi+r_{23}\sin\chi)\,, \\ &&\epsilon_{n} = E^{dc}\, \epsilon_d (r_{53}\epsilon_3^{(0)}\sin^2\chi+r_{11}\epsilon_1^{(0)}\cos^2\chi)\,, \\ &&\epsilon_{p}=E^{dc}\,\epsilon_d\sin^2\chi\, (r_{31}\epsilon_3^{(0)}+r_{53}\epsilon_1^{(0)})\,, \\ &&\epsilon_{q} = E^{dc}\, \epsilon_d (r_{33}\epsilon_3^{(0)}\sin^2\chi+r_{51}\epsilon_1^{(0)}\cos^2\chi)\,, \\ &&\epsilon_{s}=E^{dc}\,\epsilon_d\cos^2\chi\, (r_{51}\epsilon_3^{(0)}+r_{13}\epsilon_1^{(0)})\,, \end{eqnarray} \begin{eqnarray} \delta_c &=& E^{dc}\, \epsilon_d \,\epsilon_2^{(0)} (r_{42}\cos\chi - r_{62}\sin\chi)\,, \\ \delta_d&=&E^{dc}\, \epsilon_d (r_{42}\epsilon_3^{(0)}\,\sin\chi+r_{62}\epsilon_1^{(0)}\,\cos\chi)\,, \\ \delta_e &=& E^{dc} \,\epsilon_d \,\epsilon_2^{(0)} (r_{43}\cos^2\chi+r_{61}\sin^2\chi)\,, \\ \delta_f &=& E^{dc}\, \epsilon_d \sin\chi\,\cos\chi\,(r_{43}\epsilon_3^{(0)}-r_{61}\epsilon_1^{(0)})\,, \\ \delta_g&=&E^{dc}\, \epsilon_d (r_{41}\epsilon_3^{(0)}\,\sin^2\chi-r_{63}\epsilon_1^{(0)}\,\cos^2\chi)\,, \\ \delta_h &=& E^{dc}\, \epsilon_d\, \epsilon_2^{(0)}\,\sin\chi\,\cos\chi\,(r_{41}+r_{63})\,, \end{eqnarray} \begin{eqnarray} \delta_\iota &=& E^{dc}\,\epsilon_d^2\left[ \sin\chi\,(r_{52}\cos\chi-r_{12}\sin\chi)\right. \nonumber \\ &&\quad\qquad + \left.\cos\chi\,(r_{52}\sin\chi-r_{32}\cos\chi)\right]\,, \\ \delta_j &=& E^{dc}\,\epsilon_d\left[\epsilon_1^{(0)} \cos\chi\,(r_{52}\cos\chi-r_{12}\sin\chi) \right. \nonumber \\ &&\quad\qquad-\left.\epsilon_3^{(0)} \sin\chi\,(r_{52}\sin\chi-r_{32}\cos\chi)\right]\,, \\ \delta_k&=& E^{dc} \,\left[\epsilon_1^{(0)} \cos\chi\,(r_{52}\epsilon_3^{(0)}\sin\chi+r_{12}\epsilon_1^{(0)}\cos\chi) \right.
\nonumber \\ &&\quad\qquad+\left.\epsilon_3^{(0)} \sin\chi\,(r_{52}\epsilon_1^{(0)}\cos\chi+r_{32}\epsilon_3^{(0)}\sin\chi)\right]\,, \end{eqnarray} \begin{eqnarray} \delta_\ell &=& E^{dc}\,\left(\epsilon_2^{(0)}\right)^2\, (r_{23}\cos\chi-r_{21}\sin\chi)\,, \\ \delta_m &=& E^{dc}\,\epsilon_d^2\, \left[\sin^2\chi\, (r_{11}\sin\chi-r_{13}\cos\chi)\right. \nonumber \\ &&\quad\qquad+ \cos^2\chi\,(r_{31}\sin\chi-r_{33}\cos\chi) \nonumber \\ &&\quad\qquad\left.-2\sin\chi\cos\chi\,(r_{51}\sin\chi-r_{53}\cos\chi)\right]\,, \\ \delta_n &=& E^{dc}\,\epsilon_d\, \left[\sin^2\chi\cos\chi\, (r_{11}\epsilon_1^{(0)}-r_{31}\epsilon_3^{(0)})\right. \nonumber \\ &&\quad\qquad-\sin\chi\cos^2\chi\, (r_{13}\epsilon_1^{(0)}-r_{33}\epsilon_3^{(0)}) \nonumber \\ &&\quad\qquad\left.-(r_{51}\sin\chi-r_{53}\cos\chi)(\epsilon_1^{(0)}\,\cos^2\chi-\epsilon_3^{(0)}\,\sin^2\chi)\right]\,, \\ \delta_p&=&-E^{dc}\left[ \left(\epsilon_1^{(0)}\cos\chi\right)^2(r_{11}\sin\chi-r_{13}\cos\chi) \right. \nonumber \\ &&\quad\qquad+ \left(\epsilon_3^{(0)}\sin\chi\right)^2(r_{31}\sin\chi-r_{33}\cos\chi) \nonumber \\ &&\quad\qquad\left.+2\epsilon_1^{(0)}\epsilon_3^{(0)}\sin\chi\cos\chi\, (r_{51}\sin\chi-r_{53}\cos\chi)\right]\,, \\ u&=& \frac{h\pi z}{\Omega}\,-\phi\,. \end{eqnarray} By virtue of linearity, the solution of the 4$\times$4 matrix ordinary differential equation (\ref{oblique}) must be of the form \begin{equation} \label{oblique-soln1} {\bar{\psi}}^{\prime }(z_2)= \bar{U}^\prime(z_2-z_1)\cdot{\bar{\psi}}^{\prime }(z_1)\,, \end{equation} whence \begin{eqnarray} \bar{\psi}(z_2)&=&\bar{M}\left(-\,\frac{h\pi z_2}{\Omega}\right)\cdot\bar{U}^\prime(z_2-z_1)\cdot\bar{M}\left(\frac{h\pi z_1}{\Omega}\right)\cdot\bar{\psi}(z_1) \nonumber \\[4pt] &\equiv&\bar{U}(z_2-z_1)\cdot\bar{\psi}(z_1)\,, \nonumber \\ &&\qquad \quad0\leq z_\ell\leq L\,,\quad \ell=1,2\,. \label{oblique-soln2} \end{eqnarray} Just as for Part 1 \cite{Part1}, we chose to implement the piecewise homogeneity approximation method \cite{LakhtakiaB} to calculate $\bar{U}^\prime(z)$. \subsection{Reflection and transmission} The incident plane wave is delineated by the electric field phasor \begin{equation} {\bf e}_{inc}(z)= \left( a_L\,\frac{i\bf s-{\bf p}_+}{\sqrt{2}} - a_R\,\frac{i\bf s+{\bf p}_+}{\sqrt{2}} \right)\,e^{ik_o z\cos\theta} \,, \qquad z \leq 0\,, \label{eq9.50} \end{equation} where $a_L$ and $a_R$ are the amplitudes of the LCP and RCP components, respectively. The electric field phasors associated with the reflected and transmitted plane waves, respectively, are given as \begin{equation} {\bf e}_{ref}(z)= \left( -r_L\,\frac{i\bf s-{\bf p}_-}{\sqrt{2}} + r_R\,\frac{i\bf s+{\bf p}_-}{\sqrt{2}} \right)\,e^{-ik_o z\cos\theta} \,,\qquad z \leq 0\,, \label{eq9.53} \end{equation} and \begin{equation} {\bf e}_{tr}(z)= \left( t_L\,\frac{i\bf s-{\bf p}_+}{\sqrt{2}} - t_R\,\frac{i\bf s+{\bf p}_+}{\sqrt{2}} \right)\,e^{ik_o (z-L)\cos\theta} \,,\qquad \quad z \geq L\,. \label{eq9.54} \end{equation} The amplitudes $r_{L,R}$ and $t_{L,R}$ indicate the as--yet unknown strengths of the LCP and RCP components of the reflected and transmitted plane waves, both of which are elliptically polarized in general.
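Parenthetically, the piecewise homogeneity approximation mentioned above amounts to slicing the SCM into thin, electrically homogeneous sublayers and composing matrix exponentials. A minimal Python sketch (illustrative only; \texttt{A\_prime} stands for a user-supplied routine that evaluates the 4$\times$4 kernel matrix of (\ref{defineA}) at height $z$):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def U_prime(A_prime, L, n_slices=2000):
    # Piecewise homogeneity approximation to U'(L): freeze A'(z) at
    # the midpoint of each thin sublayer and compose the exponentials.
    dz = L / n_slices
    U = np.eye(4, dtype=complex)
    for j in range(n_slices):
        A = A_prime((j + 0.5) * dz)   # 4x4 kernel at the midpoint
        U = expm(1j * A * dz) @ U     # later sublayers act on the left
    return U
\end{verbatim}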
The propagation vector of the incident plane wave makes an angle $\theta \in \left[ 0,\,\pi/2\right)$ with respect to the $+z$ axis, and is inclined to the $x$ axis in the $xy$ plane by an angle $\phi \in\left[ 0,\,2\pi\right]$; accordingly, the transverse wavenumber $ \kappa = k_o\,\sin\theta$, where $k_o=\omega\sqrt{\epsilon_o\mu_o}$ is the wavenumber in free space. The free--space wavelength is denoted by $\lambda_o=2\pi/k_o$. The vectors \begin{eqnarray} \label{eq9.51} &&\bf s=-\hat{\bf{u}}_x\sin\phi + \hat{\bf{u}}_y \cos\phi\,, \\ \label{eq9.52} &&{\bf p}_\pm=\mp\left( \hat{\bf{u}}_x \cos\phi + \hat{\bf{u}}_y \sin\phi \right) \cos\theta + \hat{\bf{u}}_z \sin\theta\, \end{eqnarray} are of unit magnitude. The reflection--transmission problem amounts to four simultaneous, linear algebraic equations \cite{Part1,LakhtakiaB}, which can be solved by standard matrix manipulations. It is usually convenient to define reflection and transmission coefficients, which appear as the elements of the 2$\times$2 matrices in the following relations: \begin{equation} \label{eq9.55} \left[\begin{array}{c}r_L\\r_R\end{array}\right] = \left[\begin{array}{cc}r_{LL} & r_{LR}\\r_{RL} & r_{RR}\end{array}\right] \, \left[\begin{array}{c}a_L\\a_R\end{array}\right]\,, \end{equation} \begin{equation} \label{eq9.56} \left[\begin{array}{c}t_L\\t_R\end{array}\right] = \left[\begin{array}{cc}t_{LL} & t_{LR}\\t_{RL} & t_{RR}\end{array}\right] \, \left[\begin{array}{c}a_L\\a_R\end{array}\right]\,. \end{equation} Co--polarized coefficients have both subscripts identical, but cross--polarized coefficients do not. The square of the magnitude of a reflection or transmission coefficient is the corresponding reflectance or transmittance; thus, $R_{LR} = \vert r_{LR}\vert^2$ is the reflectance corresponding to the reflection coefficient $r_{LR}$, and so on. \section{Numerical results and conclusion}\label{numres} With respect to the orientation of ${\bf E}^{dc}$, the right side of (\ref{defineA}) can be divided into three parts. The first part is indifferent to ${\bf E}^{dc}$ and therefore to $\chi_{dc}$; the second shows itself at maximum advantage for \emph{axial} dc electric fields (i.e., when $\chi_{dc}=90^\circ$), whereas the third is most effective for \emph{transverse} dc electric fields (i.e., when $\chi_{dc}=0^\circ$). The effects of the first part have been studied extensively already \cite{LakhtakiaB}, and those of the second part have been the focus of Part 1 as well as of other papers \cite{RL06,RLno2}. When considering the effects of the third part as well as the interplay of the second and the third parts, we must keep in mind that the number of variables for a comprehensive parametric study is large. These variables include the local isotropy, uniaxiality, or biaxiality, as determined by the relative values of $\epsilon_{1,2,3}^{(0)}$; the local point group symmetry, of which there are 20 classes, as determined by the relative values of $r_{JK}$; the two angles of incidence $\theta$ and $\phi$; the angle $\chi$ of the tilt dyadic, the half--pitch $\Omega$, and the normalized thickness $L/\Omega$; and the angle $\chi_{dc}$. Given this plethora of variables, we had to restrict the scope of our investigation. With guidance from the results reported for Part 1, we chose to focus on a locally biaxial SCM, since such materials can offer high electro--optic coefficients which would lower the magnitude of the applied dc electric field.
In particular, we opted for the orthorhombic $mm2$ class, choosing the relative permittivity scalars and the electro--optic coefficients the same as for potassium niobate \cite{ZSB}. Furthermore, normal incidence is the most common condition for using planar optical devices, and so we set $\theta=0^\circ$. Finally, since the effect of $\phi$ on the exhibition of the CBP is not significant \cite{Part1}, we set $\phi=0^\circ$. Figure \ref{orthomm2-1} shows the reflectance and transmittance spectra of a structurally right--handed SCM with half--pitch $\Omega=150$~nm and tilt angle $\chi=90^\circ$, when $E^{dc}=10^7$~V~m$^{-1}$ and $\chi_{dc}\in \left[0^\circ,90^\circ\right]$. The absence of any dependence on $\chi_{dc}$ in the six plots presented indicates that the magnitude of the dc electric field is too low to have any significant effect; indeed, the spectra are virtually the same as for $E^{dc}=0$. The high ridge in the plot of $R_{RR}$ located at $\lambda_o\approx 667$~nm, and its absence in the plot of $R_{LL}$, are signatures of the CBP, along with the trough in the plot of $T_{RR}$. Figure~\ref{orthomm2-2} contains the same plots as the previous figure, but for $E^{dc}=0.67\times10^9$~V~m$^{-1}$~---~the same value as used for Fig.~8 of Part 1. This magnitude is high enough to have an effect on the CBP, which also means that the reflectance and the transmittance spectra change with $\chi_{dc}$. The center--wavelength of the Bragg regime is 646~nm and the full--width--at--half--maximum bandwidth is 69~nm for $\chi_{dc}=90^\circ$, but the corresponding quantities are 667~nm and 40~nm for $\chi_{dc}=0^\circ$. In addition, the peak value of $R_{RR}$ diminishes by about 10\% as $\chi_{dc}$ changes from $90^\circ$ to $0^\circ$. The situation changes significantly when the sign of $E^{dc}$ is altered, as exemplified by Fig.~\ref{orthomm2-3} for $E^{dc}=-0.67\times10^9$~V~m$^{-1}$. The center--wavelength of the Bragg regime is 688~nm and the full--width--at--half--maximum bandwidth is 15~nm for $\chi_{dc}=90^\circ$, but the corresponding quantities remain at 667~nm and 40~nm for $\chi_{dc}=0^\circ$. In addition, the peak value of $R_{RR}$ increases by about 600\% as $\chi_{dc}$ changes from $90^\circ$ to $0^\circ$. Thus, the exhibition of the CBP~---~its center--wavelength, its bandwidth, and the peak co--handed and co--polarized reflectance~---~is affected dramatically by the sign of $E^{dc}$ as well as by the orientation angle $\chi_{dc}$. Whereas Figs.~\ref{orthomm2-2} and \ref{orthomm2-3} were drawn for SCMs with $\chi=90^\circ$, calculations for Figs.~\ref{orthomm2-4} and \ref{orthomm2-5} were made for $\chi=45^\circ$. These two figures indicate a blue--shifting of the CBP on the order of 100~nm as $\chi_{dc}$ changes from $90^\circ$ to $0^\circ$. Furthermore, the bandwidth is greatly affected by the value of $\chi_{dc}$ and the sign of $E^{dc}$; indeed, the CBP vanishes for $\chi_{dc}$ in the neighborhood of $50^\circ$ when $E^{dc}=0.67\times10^9$~V~m$^{-1}$. Thus, the CBP is exhibited in two different ranges of $\chi_{dc}$ that do not overlap but lie in proximity to each other. Other types of Bragg phenomena may appear in the spectral response characteristics. For example, Fig.~\ref{orthomm2-4} shows a high--$R_{RL}$ ridge which suggests that the electro--optic SCM can be made to function like a normal mirror (high $R_{RL}$ and $R_{LR}$) in a certain spectral regime rather than like a structurally right--handed chiral mirror (high $R_{RR}$ and low $R_{LL}$) \cite{LX}.
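The center--wavelengths and bandwidths quoted above were read off the computed spectra. A minimal sketch of how such quantities can be extracted from a sampled co--polarized reflectance spectrum is given below; the array names are illustrative, and the Bragg regime is taken as the contiguous run of samples, containing the peak, on which $R_{RR}$ exceeds half of its peak value.
\begin{verbatim}
import numpy as np

def bragg_center_and_fwhm(lam, R_RR):
    # lam  : sampled free-space wavelengths (nm), ascending
    # R_RR : co-handed, co-polarized reflectance at each wavelength
    peak = int(np.argmax(R_RR))
    half = 0.5 * R_RR[peak]
    lo = peak
    while lo > 0 and R_RR[lo - 1] >= half:
        lo -= 1
    hi = peak
    while hi < len(R_RR) - 1 and R_RR[hi + 1] >= half:
        hi += 1
    # center-wavelength and full-width-at-half-maximum bandwidth
    return 0.5 * (lam[lo] + lam[hi]), lam[hi] - lam[lo]
\end{verbatim}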
We conclude that the exhibition of the circular Bragg phenomenon by an electro--optic structurally chiral material can be controlled not only by the sign and the magnitude of a dc electric field but also by its orientation in relation to the axis of helicoidal nonhomogeneity. Although we decided to present numerical results here only for normal incidence, several numerical studies confirm that our conclusions also apply to oblique incidence. Thus, the possibility of electrical control of circular--polarization filters, which emerged in Part 1, has been reaffirmed and extended. Theoretical studies on particulate composite materials with electro--optic inclusions \cite{LM2006} suggest the attractive possibility of fabricating porous SCMs with sculptured--thin--film technology \cite{LakhtakiaB}.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} In potential theory the notion of subharmonic functions, $\mathcal{SH}$, is of fundamental importance, and in pluripotential theory the notion of plurisubharmonic functions, $\mathcal{PSH}$, plays the same role. In 1985, Caffarelli et al.~\cite{CHS} proposed a model that makes it possible to study the common properties of potential and pluripotential theories, as well as the transition between them. It also gives a splendid tool for geometric constructions. The core focus of the Caffarelli-Nirenberg-Spruck framework is what is known today as $m$-subharmonic functions, $\mathcal{SH}_m$. These functions are considered in complex space, and on different types of complex manifolds. If $n$ is the underlying dimension, then it holds that \[ \mathcal{PSH}=\mathcal{SH}_n \subset\cdots\subset \mathcal{SH}_1=\mathcal{SH}\, . \] For a few references related to the Caffarelli-Nirenberg-Spruck model, see~\cite{Blocki_weak, huang, Li, phong, WanWang}. Let $\Omega\subset \mathbb C^n$ be a bounded domain, and let $f$ be a real-valued function defined on the topological boundary $\partial \Omega$. It is well-known that one cannot always extend $f$ to an $m$-subharmonic function on the inside. This is not possible even in the cases $m=1$, and $m=n$. The aim of this paper is to find a characterization of the functions $f$ that have this classical extension property, but in the process we shall also be interested in when this extension can be approximated in neighborhoods of $\bar\Omega$. The first obstruction is that $\Omega$ is only assumed to be a bounded domain. This does not yield a sufficient supply of $m$-subharmonic functions. Therefore, we assume that there exists at least one non-constant and negative $m$-subharmonic function $\psi:\bar{\Omega}\to\RE$ such that for any $c\in\RE$ the set $\{x\in\Omega:\psi(x)<c\}$ is relatively compact in $\Omega$ (see Definition~\ref{def_msubkomp} for the meaning of being $m$-subharmonic on $\bar\Omega$). A bounded domain in $\C^n$ that satisfies this condition is called $P_m$-hyperconvex. More on this in Section~\ref{sec Pmhxdomains}. Inspired by the work of Poletsky~\cite{Po,PO2}, and Poletsky and Sigurdsson~\cite{PS}, we use ideas from the theory of function algebras defined on a compact set. In the mentioned references, the authors use the beautiful and intricate holomorphic disk-theory. Within the Caffarelli-Nirenberg-Spruck framework there are no Poletsky disks except in the case $m=n$. Therefore, we use the idea of duality between functions and their corresponding Jensen measures. In Section~\ref{sec Jensen and msh}, we introduce and study necessary properties of $m$-subharmonic functions defined on a compact set in $\C^{n}$, and with the help of those results we arrive in Section~\ref{sec_extention} at the following theorem: \bigskip \noindent \textbf{Theorem~\ref{ext_in_pm_hyp}.} \emph{Let $\Omega$ be a bounded $P_m$-hyperconvex domain in $\mathbb C^n$, $1\leq m\leq n$, and let $f$ be a real-valued function defined on $\partial \Omega$. Then the following are equivalent:} \medskip \begin{enumerate}\itemsep2mm \item \emph{there exists $F\in \mathcal{SH}_m(\bar \Omega)$ such that $F=f$ on $\partial \Omega$;} \item $f\in \mathcal{SH}_m(\partial \Omega)$.
\end{enumerate} \medskip \noindent\emph{Furthermore, if $f$ is continuous on $\partial \Omega$, then the function $F$ can be chosen to be continuous on $\bar{\Omega}$.} \bigskip Theorem~\ref{ext_in_pm_hyp} is the first result of this kind within the Caffarelli-Nirenberg-Spruck model. It should be emphasized that this is not only the classical Dirichlet problem, which in the case $m=1$ can be traced back to the work of Brelot, Lebesgue, Perron, Poincar\'{e}, Wiener, and others. This is because $F\in \mathcal{SH}_m(\bar \Omega)$, and by Theorem~\ref{cont} these functions can be characterized by approximation on neighborhoods of $\bar \Omega$. If $m=n$, then Theorem~\ref{ext_in_pm_hyp} was proved in~\cite{HP}. A natural question that arises from Theorem~\ref{ext_in_pm_hyp} is how to decide whether a function $u$ is in $\mathcal{SH}_m(\bar \Omega)$ or not. From Theorem~\ref{thm_pmboundary} it follows that under the assumption that $\Omega\subset\C^n$ is a bounded open set, and that $u\in\mathcal{SH}_m(\bar \Omega)$, then $u\in \mathcal{SH}_m(\Omega)$, and $u\in\mathcal{SH}_m(\partial\Omega)$. The converse statement is not always true. But if we assume that $\Omega$ is $P_m$-hyperconvex, then we prove in Theorem~\ref{cor_msubbdvalue} that \[ u\in\mathcal{SH}_m(\bar \Omega) \quad\Leftrightarrow\quad u\in\mathcal{SH}_m(\Omega) \text{ and } u\in\mathcal{SH}_m(\partial\Omega)\, . \] This further justifies the study of the geometry of domains that admit a negative exhaustion function belonging to $\mathcal{SH}_m(\bar \Omega)$. This is done in Section~\ref{sec Pmhxdomains}. We end this note with some concluding remarks on uniform approximation of $m$-subharmonic functions (Section~\ref{sec approx}). \bigskip Background information on potential theory can be found in~\cite{armitage,doob,landkof}, and more information about pluripotential theory in~\cite{demailly_bok,K}. A beautiful treatise on subharmonic and plurisubharmonic functions is the monograph~\cite{hormander} written by H\"ormander. Definition and basic properties of $m$-subharmonic functions can be found in~\cite{SA}. \bigskip One concluding remark is in order. There are well-developed axiomatic, and algebraic, potential theories that could have been deployed in connection with this paper. We have chosen not to do so, and leave it for others to draw full benefit of these abstract models in order to learn more about the Caffarelli-Nirenberg-Spruck framework on compact sets. We want to mention the references~\cite{ArsoveLeutwiler, BlietnerHansen3, Constantinescu, Gamelin}. \section{Jensen measures and $m$-subharmonic functions}\label{sec Jensen and msh} In this section we will define the class of $m$-subharmonic functions defined on a compact set, $X\subset \mathbb C^n$, and we will prove some properties of such functions. Among other things, we shall show that these functions are closely connected to approximation by $m$-subharmonic functions defined on strictly larger domains. But first we need some notions and definitions. Let $\mathcal{SH}_m^o(X)$ denote the set of functions that are the restriction to $X$ of functions that are $m$-subharmonic and continuous on some neighborhood of $X$. Furthermore, let $\mathcal{USC}(X)$ be the set of upper semicontinuous functions defined on $X$. Next, we define a class of Jensen measures. \begin{definition}\label{def_JzmK} Let $X$ be a compact set in $\C^n$, $1\leq m\leq n$, and let $\mu$ be a non-negative regular Borel measure defined on $X$ with $\mu(X)=1$.
We say that $\mu$ is a \emph{Jensen measure with barycenter} $z\in X$ \emph{w.r.t.} $\mathcal{SH}_m^o(X)$ if \[ u(z)\leq \int_{X} u \, d\mu \qquad\qquad \text{for all } u \in \mathcal{SH}_m^o(X)\, . \] The set of such measures will be denoted by $\mathcal{J}_z^m(X)$. \end{definition} \begin{remark} If $X_1\subset X_2$ are compact sets in $\mathbb C^n$, then for every $z\in X_1$ it holds \[ \mathcal{J}_z^m(X_1)\subset \mathcal{J}_z^m(X_2)\, . \] \end{remark} We shall need the following convergence result in $\mathcal{J}_z^m(X)$. It is obtained in a standard way using the Banach-Alaoglu theorem, and therefore the proof is omitted. \begin{theorem} \label{thm_convmeasures} Let $X$ be a compact set in $\C^n$. Let $\{z_k\} \subset X$ be a sequence that is converging to $z$, as $k\to\infty$. For each $k$, let $\mu_k \in \mathcal{J}_{z_k}^m(X)$. Then there is a subsequence $\{\mu_{k_j}\}$, and a measure $\mu \in \mathcal{J}_z^m(X)$ such that $\mu_{k_j}$ converges weak-$^\ast$ to $\mu$. \end{theorem} Using the Jensen measures in Definition~\ref{def_JzmK} we shall now define what it means for a function to be $m$-subharmonic on a compact set. \begin{definition} \label{def_msubkomp} Let $X$ be a compact set in $\C^n$. An upper semicontinuous function $u$ defined on $X$ is said to be \emph{$m$-subharmonic on $X$}, $1\leq m\leq n$, if \[ u(z) \leq \int_X u \, d\mu\, , \ \text { for all } \ z \in X \ \text { and all }\ \mu \in \mathcal{J}_z^m(X)\, . \] The set of $m$-subharmonic functions defined on $X$ will be denoted by $\mathcal{SH}_m(X)$. \end{definition} \begin{remark} By definition, we see that $\mathcal{SH}_m^o(X) \subset \mathcal{SH}_m(X)$. \end{remark} It is easy to see that $m$-subharmonic functions on compact sets share a lot of basic properties with $m$-subharmonic functions on open sets. Some of these properties are listed below. \begin{theorem} \label{thm_basicprop2} Let $X$ be a compact set in $\mathbb C^n$, and $1\leq m\leq n$. Then \medskip \begin{enumerate}\itemsep2mm \item if $u,v \in \mathcal{SH}_m(X)$, then $su+kv \in \mathcal{SH}_m(X)$ for $s,k\geq 0$; \item if $u,v \in \mathcal{SH}_m(X)$, then $\max\{u,v\}\in \mathcal{SH}_m(X)$; \item if $u_j\in \mathcal{SH}_m(X)$ is a decreasing sequence, then $u=\lim_{j\to \infty} u_j\in \mathcal{SH}_m(X)$, provided $u(z)>-\infty$ for some point $z\in X$; \item if $u\in \mathcal{SH}_m(X)$ and $\gamma:\mathbb R\to\mathbb R$ is a convex and nondecreasing function, then $\gamma\circ u\in\mathcal{SH}_m(X)$. \end{enumerate} \end{theorem} \begin{proof} Properties $(1)$ and $(2)$ follow from Definition \ref{def_msubkomp}. To prove $(3)$, let $u_j\searrow u$. Then we have that $u\in \mathcal{USC}(X)$. For $z\in X$, $\mu\in \mathcal J_z^m(X)$, we have by the monotone convergence theorem that \[ u(z)=\lim_{j\to \infty}u_j(z)\leq \lim_{j\to \infty}\int u_j\,d\mu=\int\lim_{j\to \infty}u_j\,d\mu=\int u\,d\mu\, . \] Part $(4)$ is a consequence of Jensen's inequality. \end{proof} The set $\mathcal{SH}_m^o(X)$ is a convex cone of continuous functions containing the constants and separating points, and therefore we can apply the techniques of Choquet theory to get the following two versions of Edwards' duality theorem. Generalizations of Edwards' Theorem can be found in~\cite{GogusPerkinsPoletsky}. \begin{theorem}\label{thm_edwards} Let $X$ be a compact subset in $\mathbb C^n$, $1\leq m\leq n$, and let $\phi$ be a real-valued lower semicontinuous function defined on $X$.
Then we have \begin{enumerate} \item[(a)] \[ \sup\left \{\psi(z):\psi \in \mathcal{SH}_m^o(X) , \psi \leq \phi \right\}=\inf\left \{\int \phi \, d\mu : \mu \in \mathcal{J}_z^m(X)\right\}\, , \ \text {and} \] \item[(b)] \begin{multline*} \sup\left\{\psi (z) : \psi \in \mathcal{SH}_m(X)\cap \mathcal{C}(X), \psi \leq \phi\right\}\\=\sup\left\{\psi (z) : \psi \in \mathcal{SH}_m(X), \psi \leq \phi\right\} =\inf\left\{\int \phi \, d\mu: \mu \in \mathcal{J}_z^m(X)\right\}\, . \end{multline*} \end{enumerate} \end{theorem} \begin{proof} Part (a) is a direct consequence of Edwards' Theorem, and the proof of part (b) is postponed until after Theorem~\ref{cont} is proved. \end{proof} One important reason to study $m$-subharmonic functions on compact sets is that they are connected to approximation. In the case $m=1$, Theorem~\ref{cont} goes back to Debiard and Gaveau~\cite{DebiardGaveau}, and Bliedtner and Hansen~\cite{BlietnerHansen1,BlietnerHansen2} (see also~\cite{perkins,perkins2}). In the case $m=n$, part $(a)$ was shown by Poletsky in \cite{Po}, and part $(b)$ in~\cite{CHP}. In Section~\ref{sec approx}, we shall have some concluding remarks in connection with this type of approximation. \begin{theorem}\label{cont} Let $X \subset \C^n$ be a compact set, and $1 \leq m \leq n$. \begin{itemize}\itemsep2mm \item[$(a)$] Let $u\in \mathcal {USC}(X)$. Then $u \in \mathcal{SH}_m(X)\cap \mathcal{C}(X)$ if, and only if, there is a sequence $u_j \in \mathcal{SH}_m^o(X)$ such that $u_j \nearrow u$ on $X$. \item[$(b)$] Let $u\in \mathcal {USC}(X)$. Then $u \in \mathcal{SH}_m(X)$ if, and only if, there is a sequence $u_j \in \mathcal{SH}_m^o(X)$ such that $u_j \searrow u$. \end{itemize} \end{theorem} \begin{proof} \emph{Part $(a)$:} Let $u \in \mathcal{SH}_m(X)\cap \mathcal{C}(X)$. Since the Dirac measure $\delta_z$ is in $\mathcal{J}_z^m(X)$, we have that \[ u(z)=\inf\left\{\int u \, d\mu: \mu \in \mathcal{J}_z^m(X)\right\}\, . \] Theorem~\ref{thm_edwards} part (a) now yields that \[ u(z)=\inf\left \{\int u \, d\mu: \mu \in \mathcal{J}_z^m(X)\right\}=\sup\left\{\varphi(z): \varphi \in \mathcal{SH}_m^o(X), \varphi \leq u\right\}\, . \] Since the functions in $\mathcal{SH}_m^o(X)$ are continuous, Choquet's lemma (see e.g. Lemma~2.3.4 in~\cite{K}) says that there exists a sequence $u_j \in \mathcal{SH}_m^o(X)$ such that $u_j \nearrow u$. Now assume that there exists a sequence $u_j \in \mathcal{SH}_m^o(X)$ such that $u_j \nearrow u$. Then $u$ can be written as the supremum of continuous functions, and hence $u$ is lower semicontinuous. Since $u$ is also upper semicontinuous, it is continuous. Let $z \in X$, and $\mu \in \mathcal{J}_z^m(X)$, then \[ u(z)=\lim_j u_j(z)\leq \lim_j \int u_j \, d\mu =\int \lim_j u_j \, d\mu =\int u \, d\mu\, . \] By Definition~\ref{def_msubkomp} we know that $u \in \mathcal{SH}_m(X)$. \bigskip \emph{Part $(b)$:} First assume that $u$ is the pointwise limit of a decreasing sequence of functions $u_j \in \mathcal{SH}_m^o(X)$. Then we have that $u\in \mathcal {USC}(X)$. Let $z \in X$ and $\mu \in \mathcal{J}_z^m(X)$, then it follows that \[ u(z)=\lim_j u_j(z)\leq \lim_j \int u_j \, d\mu= \int\lim_j u_j\,d\mu=\int u \, d\mu\, . \] Hence $u \in \mathcal{SH}_m(X)$. For the converse, assume that $u \in \mathcal{SH}_m(X)$. We now want to show that there is a sequence of functions $u_j \in \mathcal{SH}_m^o(X)$ such that $u_j \searrow u$ on $X$. We begin by showing that for every $f \in \mathcal{C}(X)$ with $u < f$ on $X$, we can find $v \in \mathcal{SH}_m^o(X)$ such that $u < v \leq f$.
Let \[ F(z)=\sup\{\varphi(z): \varphi \in \mathcal{SH}_m^o(X), \varphi \leq f\} \, . \] From Theorem \ref{thm_edwards} part (a) it follows now that \[ F(z)=\inf\left\{\int f \, d\mu : \mu \in \mathcal{J}_z^m(X)\right\}\, . \] From the Banach-Alaoglu theorem we know that $\mathcal{J}_z^m(X)$ is weak-$^\ast$ compact, hence for all $z \in X$ we can find $\mu_z \in \mathcal{J}_z^m(X)$ such that \[ F(z)=\int f \, d\mu_z\, . \] We have \[ F(z)=\int f \, d\mu_z > \int u \, d\mu_z \geq u(z)\, . \] Hence, $u < F$. By the construction of $F$ we know that for every given $z \in X$, there exists a function $v_z \in \mathcal{SH}_m^o(X)$ such that $v_z \leq F$ and $u(z)<v_z(z) \leq F(z)$. Since the function $u-v_z$ is upper semicontinuous, the set \[ U_z=\{y\in X: u(y)-v_z(y)<0\} \] is open in $X$. By assumption $X$ is compact, and therefore there are finitely many points $z_1,\ldots,z_k$ with corresponding functions $v_{z_1},\ldots,v_{z_k}$, and open sets $U_{z_1},\ldots, U_{z_k}$, such that $u < v_{z_j}$ on $U_{z_j}$. Furthermore, \[ X=\bigcup_{j=1}^k U_{z_j}\, . \] The function $v=\max\{v_{z_1},\ldots,v_{z_k}\}$ is in $\mathcal{SH}_m^o(X)$, and $u < v \leq f$. We are now ready to prove that $u$ can be approximated as in the statement of the theorem. The function $u$ is upper semicontinuous, and therefore it can be approximated by a decreasing sequence $\{f_j\}$ of continuous functions. We can then find $v_1 \in \mathcal{SH}_m^o(X)$ such that $u < v_1 \leq f_1$. If we now assume that we can find a decreasing sequence of functions $\{v_1,\ldots,v_k\}$ such that $v_j \in \mathcal{SH}_m^o(X)$, and $u < v_j$ for $j=1,\ldots,k$, then we can find a function $v_{k+1} \in \mathcal{SH}_m^o(X)$ such that $u<v_{k+1}$ and $v_{k+1} \leq \min\{f_{k+1},v_k\}$. Now the conclusion of the theorem follows by induction. \end{proof} \begin{remark} In Theorem~\ref{cont} part $(a)$ we have uniform approximation on $X$. One can assume that the decreasing sequence in Theorem~\ref{cont} part $(b)$ consists of smooth functions. This follows from a standard diagonalization argument. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm_edwards} part (b).] Let us define the following families of probability measures defined on $X$ \[ \begin{aligned} &\mathcal{M}_z^m(X)=\left \{\mu: \, u(z)\leq \int u\, d\mu\, , \,\, \forall u\in \mathcal{SH}_m(X)\cap \mathcal C(X)\right \}, \\ &\mathcal{N}_z^m(X)=\left \{\mu: \, u(z)\leq \int u\, d\mu\, , \,\, \forall u\in \mathcal{SH}_m(X) \right \}\, . \\ \end{aligned} \] We have \[ \mathcal{N}_z^m(X)\subset \mathcal{M}_z^m(X)\subset \mathcal{J}_z^m(X)\, , \] since $\mathcal{SH}_m^o(X)\subset\mathcal{SH}_m(X)\cap \mathcal C(X)\subset \mathcal{SH}_m(X)$. On the other hand, let $z\in X$, $\mu \in \mathcal{J}_z^m(X)$, and let $\varphi\in \mathcal{SH}_m(X)$, then by Theorem~\ref{cont} part (b) there exists a decreasing sequence $u_j\in \mathcal{SH}_m^o(X)$ such that $u_j\to \varphi$, when $j\to \infty$, and then we have \[ \varphi(z)=\lim_{j\to \infty}u_j(z)\leq \lim_{j\to \infty}\int u_j\,d\mu=\int\lim_{j\to \infty}u_j\,d\mu=\int\varphi\, d\mu\, . \] Hence, $\mu\in \mathcal{N}_z^m(X)$, and therefore $\mathcal{J}_z^m(X)\subset \mathcal{N}_z^m(X)$. \end{proof} A direct consequence of Theorem~\ref{cont} part $(b)$ is the following corollary. \begin{corollary}\label{cor} If $X_1\subset X_2$ are compact sets in $\mathbb C^n$, then $\mathcal{SH}_m(X_2)\subset \mathcal{SH}_m(X_1)$.
\end{corollary} \begin{proof} Take $u\in \mathcal{SH}_m(X_2)$. By Theorem~\ref{cont} part $(b)$ there exists a sequence $u_j\in \mathcal{SH}_m^o(X_2)$ decreasing to $u$. Since each $u_j$ also belongs to $\mathcal{SH}_m^o(X_1)$, it follows that $u\in \mathcal{SH}_m(X_1)$. \end{proof} In Theorem~\ref{localization_Pm}, we shall need the following localization theorem. The case $m=n$ is Gauthier's localization theorem from~\cite{Gauthier}. For the proof of the following theorem, and for later sections, we need to recall the following definition. A function $u$ is said to be \emph{strictly $m$-subharmonic} on $\Omega$ if for every $p \in \Omega$ there exists a constant $c_p >0$ such that $u(z)-c_p |z| ^2$ is $m$-subharmonic in a neighborhood of $p$. \begin{theorem}\label{localization} If $X \subset \C^n$ is a compact set, then $u \in \mathcal{SH}_m(X)\cap \mathcal{C}(X)$ if, and only if, for each $z \in X$, there is a neighborhood $B_z$ such that $u |_{X\cap \bar B_z}\in \mathcal{SH}_m(X\cap \bar B_z)\cap \mathcal{C}(X\cap \bar B_z)$. \end{theorem} \begin{proof} This proof is inspired by~\cite{Gauthier}. First we see that the restriction of a function $u \in \mathcal{SH}_m(X)\cap \mathcal C(X)$ to $X \cap \bar B$ is $m$-subharmonic on that set. This follows from Corollary~\ref{cor}. Now we show the converse statement. Since $X$ is compact, the covering $\{B_z\}_{z\in X}$ provided by the assumption admits a finite subcover $\{B_j\}$ of $X$. Assume that $u|_{X \cap \bar B_j} \in \mathcal{SH}_m(X \cap \bar B_j)\cap \mathcal C(X \cap \bar B_j)$ for all $j$. For every $j$, we can find compact sets $K_{j,k}$ such that $K_{j,k} \subset B_k$ and \[ \partial B_j \cap X \subset \bigcup_{k \neq j}K_{j,k}\, . \] Let $K_k = \bigcup_j K_{j,k}$, and note that $K_k \subset B_k$. Set \[ d_k = \mbox{dist}(K_k, \partial B_k)\, . \] For every $k$ there exists a function $\chi_k$ that is smooth on $\C^n$, $-1 \leq \chi_k \leq 0$, $\chi_k(z)=0$ when $\mbox{dist}(z, K_k) \leq \frac{d_k}{2}$, and $\chi_k=-1$ outside of $B_k$. Choose an arbitrary constant $c > 0$. The function $|z|^2$ is strictly $m$-subharmonic, so there exists a constant $\eta^0_k>0$ such that for every $0 < \eta_k<\eta_k^0$, the function $\eta_k \chi_k +c|z|^2$ is $m$-subharmonic and continuous on an open set $V_k$, $B_k \Subset V_k$. Choose a sequence $\{\varepsilon_j\}$ of positive numbers such that \begin{equation}\label{eq:etaepsilon} 2 \varepsilon_j < \eta_j\, , \end{equation} for every $j$. The reason for this will be clear later. By the assumption that $u|_{X \cap \bar B_j} \in \mathcal{SH}_m(X \cap \bar B_j)\cap \mathcal C(X \cap \bar B_j)$ for every $j$, Theorem~\ref{cont} part (b) says that there exist open sets $U_j$, $(X \cap B_j) \Subset U_j \Subset V_j$ and functions $u_j \in \mathcal{SH}_m(U_j)\cap \mathcal C(U_j)$ such that \begin{equation} \label{eq:epsilon_j} |u-u_j|<\varepsilon_j \ \text{on} \ X \cap \bar B_j\, . \end{equation} For $z \in (U_j \setminus X)\cup (X\cap \bar B_j)$ set \[ f_j(z)=u_j(z) + \eta_j \chi_j(z) + c|z|^2\, , \] and elsewhere set $f_j=-\infty$. Now define the function \[ v(z)=\max_j f_j(z)\, . \] It remains to show that $v$ approximates $u$ uniformly on $X$, and that $v \in \mathcal{SH}_m^o(X)$. For $z \in X$ we have \begin{multline}\label{exp} |u(z)-v(z)| = \Bigl|u(z)-\max_j f_j(z)\Bigr|\\ = \Bigl|u(z)-\max_{j:\, z \in X\cap \bar B_j}\bigl(u_j(z)+\eta_j \chi_j(z) + c|z|^2\bigr)\Bigr|\\ \leq \max_{j:\, z \in X\cap \bar B_j} \eta_j + \Bigl|u(z)-\max_{j:\, z \in X\cap \bar B_j} u_j(z)\Bigr| + c|z|^2\,.
\end{multline} By choosing the constants $c$, $\eta_j$, and $\varepsilon_j$ in the right order and small enough, the right-hand side of~(\ref{exp}) can be made arbitrarily small. Hence, $v$ approximates $u$ uniformly on $X$. To prove that $v \in \mathcal{SH}_m^o(X)$, first take $z \in X$ that does not lie on the boundary of any $B_j$. The functions $f_k$ that are not $-\infty$ at $z$ are finitely many, and they are continuous and $m$-subharmonic in a neighborhood of $z$. If $z \in \partial B_j \cap X$, then there exists a $k$ such that $z \in (X \cap K_k) \subset (X \cap B_k)$. For this $j$ and $k$ we have \begin{multline*} f_j(z)=u_j(z)+\eta_j \chi_j(z) + c|z|^2 = u_j(z) - \eta_j +c|z|^2\\ =\bigl(u_j(z)-u_k(z)\bigr)+\bigl(u_k(z)+\eta_k\cdot 0+c|z|^2\bigr) - \eta_j\\ =f_k(z)+\bigl(u_j(z)-u_k(z)\bigr)-\eta_j \leq f_k(z) \end{multline*} where the last inequality follows from assumption (\ref{eq:etaepsilon}) together with (\ref{eq:epsilon_j}) (which makes sure that $|u_j(z)-u_k(z)|<\varepsilon_j+ \varepsilon_k$). This means that locally, near $z$, we can assume that the function $v$ is the maximum of functions $f_k$, $k \neq j$, where the functions $f_k$ are continuous and $m$-subharmonic in a neighborhood of $z$. This concludes the proof. \end{proof} As an immediate consequence we get the following gluing theorem for $m$-sub\-har\-mo\-nic functions on compact sets. \begin{corollary} Let $\omega \Subset \Omega$ be open sets, let $u\in \mathcal{SH}_m(\bar \omega)\cap \mathcal C(\bar \omega)$, $v\in \mathcal{SH}_m(\bar\Omega)\cap\mathcal C(\bar\Omega)$, and assume that $u(z)\leq v(z)$ for $z\in \partial \omega$. Then the function \[ \varphi=\begin{cases} v, \, \text { on } \, \bar\Omega\setminus \omega,\\ \max\{u,v\}, \; \text { on } \, \omega, \end{cases} \] belongs to $\mathcal{SH}_m(\bar \Omega)\cap\mathcal C(\bar \Omega)$. \end{corollary} \begin{proof} Let $\varepsilon >0$ and define \[ \varphi_{\varepsilon}=\begin{cases} v+\varepsilon, \, \text { on } \, \bar\Omega\setminus \omega,\\ \max\{u,v+\varepsilon\}, \; \text { on } \, \omega. \end{cases} \] Then by Theorem~\ref{localization} we get that $\varphi_{\varepsilon}\in \mathcal{SH}_m(\bar \Omega)\cap\mathcal C(\bar \Omega)$ and $\varphi_{\varepsilon}\searrow \varphi$, as $\varepsilon \to 0$. By Theorem~\ref{thm_basicprop2} we conclude that $\varphi\in \mathcal{SH}_m(\bar \Omega)\cap\mathcal C(\bar \Omega)$. \end{proof} Let us now look at a bounded domain $\Omega$ in $\mathbb C^n$. We want to investigate what the connection is between $\mathcal{SH}_m(\bar \Omega)$ and $\mathcal{SH}_m(\Omega)$. It is easy to show that $\mathcal{SH}_m(\bar \Omega) \subset \mathcal{SH}_m(\Omega)$. Using Definition \ref{def_msubkomp} we know that a function $\varphi \in \mathcal{SH}_m(\Omega)\cap \mathcal{USC}(\bar \Omega)$ is in $\mathcal{SH}_m(\bar \Omega)$ if $\varphi(z) \leq \int \varphi \, d\mu$ for all $\mu \in \mathcal{J}_z^m(\bar \Omega)$, where $z \in \bar \Omega$. In the same way as in \cite{HP} we can show that it is enough to look at the measures in $\mathcal{J}_z^m(\bar \Omega)$ for $z \in \partial \Omega$. \begin{theorem} \label{thm_pmboundary} Let $\Omega$ be a bounded open set in $\C^n$, and $1 \leq m \leq n$. \medskip \begin{enumerate}\itemsep2mm \item[$(1)$] If $\varphi \in \mathcal{SH}_m(\bar \Omega)$, then $\varphi \in \mathcal{SH}_m(\Omega)$ and $\varphi \in \mathcal{SH}_m(\partial\Omega)$.
\item[$(2)$] If $\varphi \in \mathcal{SH}_m(\Omega)\cap \mathcal{USC}(\partial \Omega)$, and \[ \varphi(z) \leq \int \varphi \, d\mu\, , \ \text { for all } \ z \in \partial\Omega \ \text { and all }\ \mu \in \mathcal{J}_z^m(\bar \Omega)\, , \] then $\varphi \in \mathcal{SH}_m(\bar \Omega).$ \end{enumerate} \end{theorem} \begin{proof} \emph{Part} $(1):$ By Theorem~\ref{cont} part $(b)$ there exists a sequence $\varphi_j\in \mathcal{SH}_m^o(\bar \Omega)$ decreasing to $\varphi$. Then $\varphi_j\in \mathcal{SH}_m(\Omega)$, so $\varphi \in \mathcal{SH}_m(\Omega)$. The fact that $\varphi \in \mathcal{SH}_m(\partial\Omega)$ follows from Corollary~\ref{cor}. \bigskip \emph{Part} $(2):$ By Theorem \ref{cont} part $(b)$, it suffices to prove that there is a decreasing sequence of functions $\varphi_j$ in $\mathcal{SH}_m^o(\bar \Omega)$ such that $\varphi_j \rightarrow \varphi$ on $\bar \Omega$. Since $\varphi$ is upper semicontinuous we can find $\{u_j\} \subset \mathcal{C}(\bar \Omega)$ such that $u_j \searrow \varphi$ on $\bar \Omega$. We are going to show that we can find functions $\{v_j\} \subset \mathcal{SH}_m^o(\bar \Omega)$ such that $v_j \leq u_j$ and $v_j(z) \searrow \varphi(z)$ for every $z \in \partial \Omega$. From this it will follow that the functions \[ \varphi_j=\begin{cases} \max\{\varphi(z),v_j(z)\} & \text{if} \ z \in \bar \Omega\\ v_j(z) & \text{otherwise}\\ \end{cases} \] will belong to $\mathcal{SH}_m^o(\bar \Omega)$, and $\varphi_j \searrow \varphi$ on $\bar \Omega$. To construct the approximating sequence $\{v_j\}$, first define \[ F_j(z):=\sup\left \{v(z): v \in \mathcal{SH}_m^o(\bar \Omega), v \leq u_j\right \}=\inf\left \{\int u_j \, d\mu: \mu \in \mathcal{J}_z^m(\bar \Omega)\right\}. \] Since $\mathcal{J}_z^m(\bar \Omega)$ is compact in the weak$^\ast$-topology, we can, for all $z\in \bar \Omega$, find $\mu_z \in \mathcal{J}_z^m(\bar \Omega)$ such that $F_j(z)=\int u_j \, d\mu_z$. We know, by the construction of $F_j$, that $F_j \leq u_j$, and \[ F_j(z)=\int u_j \, d\mu_z > \int \varphi \, d\mu_z \geq \varphi(z) \ \text{ for all }z \in \partial \Omega\, . \] By the construction of $F_j$ we know that for every $z\in \partial \Omega$ we can find $v_z \in \mathcal{SH}_m^o(\bar \Omega)$ such that $v_z \leq F_j$ and $\varphi(z)< v_z(z)\leq F_j(z)$. The function $\varphi - v_z$ is upper semicontinuous and therefore the set \[ U_z=\{w \in \partial \Omega: \varphi(w)-v_z(w)<0\} \] is open in $\partial \Omega$. It now follows from the compactness of $\partial \Omega$ that there are finitely many points $z_1,\ldots,z_k$ with corresponding functions $v_{z_1},\ldots,v_{z_k}$ and open sets $U_{z_1},\ldots,U_{z_k}$ such that $\varphi < v_{z_j}$ in $U_{z_j}$ and $\partial \Omega=\cup_{j=1}^kU_{z_j}$. The function $v_j=\max\{v_{z_1},\ldots,v_{z_k}\}$ belongs to $\mathcal{SH}_m^o(\bar \Omega)$ and $\varphi(z)< v_j(z) \leq u_j(z)$ for $z \in \partial \Omega$. This completes the proof. \end{proof} \section{$P_m$-hyperconvex domains}\label{sec Pmhxdomains} Assume that $\Omega\subset\C^n$ is a bounded open set, and let $1 \leq m \leq n$. Theorem~\ref{cont} gives rise to the question of how to decide if $u$ is in $\mathcal{SH}_m(\bar \Omega)$. From Theorem~\ref{thm_pmboundary} it follows that if $u\in\mathcal{SH}_m(\bar \Omega)$, then $u\in \mathcal{SH}_m(\Omega)$, and $u\in\mathcal{SH}_m(\partial\Omega)$. The converse statement is not true, not even under the assumption that $\Omega$ is $m$-hyperconvex (see Definition~\ref{def_mhx}).
But if we assume that $\Omega$ admits a negative exhaustion function in $\mathcal{SH}_m(\bar \Omega)$ (notice here that $\bar\Omega$ is a compact set), then we shall prove in Theorem~\ref{cor_msubbdvalue} that \[ u\in\mathcal{SH}_m(\bar \Omega) \quad\Leftrightarrow\quad u\in\mathcal{SH}_m(\Omega) \text{ and } u\in\mathcal{SH}_m(\partial\Omega)\, . \] First we shall recall the definition of an $m$-hyperconvex domain. \begin{definition}\label{def_mhx} Let $\Omega$ be a domain in $\C^n$, and $1 \leq m \leq n$. We say that $\Omega$ is \emph{$m$-hyperconvex} if it admits an exhaustion function that is negative and in $\mathcal{SH}_m(\Omega)$. \end{definition} Let us now make a formal definition of $P_m$-hyperconvex domains. \begin{definition}\label{def_Pmhx} Let $\Omega$ be a domain in $\C^n$, and let $1 \leq m \leq n$. We say that $\Omega$ is \emph{$P_m$-hyperconvex} if it admits an exhaustion function that is negative, and in $\mathcal{SH}_m(\bar \Omega)$. \end{definition} From Theorem~\ref{thm_pmboundary} it follows that a $P_m$-hyperconvex domain is also $m$-hyper\-con\-vex. The converse is not true. The case $m=n$ was studied in~\cite{HP}, and discussed in~\cite{PW}. A $P_n$-hyperconvex domain is $P_m$-hyperconvex for every $m=1,\ldots,n$, and as observed in~\cite{HP}, the notion of $P_n$-hyperconvexity is strictly weaker than the notion of \emph{strict hyperconvexity} that has been studied and used by for example Bremermann~\cite{bremermann}, and Poletsky~\cite{PO3}. Furthermore, a $P_m$-hyperconvex domain is fat in the sense $\Omega=(\bar{\Omega})^{\circ}$. It is straightforward to see that if $\Omega_1$ and $\Omega_2$ are $P_m$-hyperconvex domains in $\C^n$, then $\Omega_1 \cap \Omega_2$ is $P_m$-hyperconvex in $\C^n$, and $\Omega_1 \times \Omega_2$ is $P_m$-hyperconvex in $\mathbb C^{2n}$. \bigskip As in the case of $m$-hyperconvex domains, we have in Theorem~\ref{thm_charPmhx} several nice characterizations of $P_m$-hyperconvex domains in terms of barrier functions and Jensen measures. The property that a domain is (globally) $P_m$-hyperconvex if, and only if, it is locally $P_m$-hyperconvex we leave to Theorem~\ref{localization_Pm}. \begin{theorem}\label{thm_charPmhx} Let $\Omega$ be a bounded domain in $\C^n$. Then the following assertions are equivalent: \medskip \begin{enumerate}\itemsep2mm \item $\Omega$ is $P_m$-hyperconvex in the sense of Definition~\ref{def_Pmhx}; \item $\Omega$ admits a negative exhaustion function that is in $\mathcal{SH}_m(\bar \Omega)\cap \mathcal{C}(\bar \Omega)$; \item $\partial \Omega$ has a weak barrier at every point $z_0\in\partial \Omega$ that is in $\mathcal{SH}_m(\bar \Omega)$, i.e. there exists a function $u\in\mathcal{SH}_m(\bar \Omega)$, such that $u<0$ on $\Omega$ and \[ \lim_{z\to z_0\atop z\in\Omega} u(z)=0\, ; \] \item for every $z\in \partial \Omega$, and every $\mu \in \mathcal{J}_z^m(\bar \Omega)$, we have that $\operatorname{supp} (\mu) \subseteq \partial \Omega$; \item $\Omega$ admits a continuous negative exhaustion function which is $m$-subharmonic on $\bar \Omega$, smooth and strictly $m$-subharmonic on $\Omega$. \end{enumerate} \end{theorem} \begin{proof} $(1) \Rightarrow (4):$ Assume that $\Omega$ is $P_m$-hyperconvex, so that there exists a negative exhaustion function $\psi \in \mathcal{SH}_m(\bar \Omega)$. Take $z \in \partial \Omega$ and let $\mu \in \mathcal{J}_z^m(\bar \Omega)$, then \[ 0=\psi(z)\leq \int \psi \, d\mu \leq 0\, .
\] Since $\psi <0$ on $\Omega$, we have that $\operatorname{supp}(\mu) \subseteq \partial \Omega$. \medskip $(2) \Rightarrow (1):$ Follows by Definition \ref{def_Pmhx}. \medskip For the implications $(4)\Rightarrow (3)$, $(4) \Rightarrow (2)$, and $(4)\Rightarrow (1)$, assume that for all $w \in \partial \Omega$, every measure $\mu \in \mathcal{J}_w^m(\bar \Omega)$ satisfies $\operatorname{supp} (\mu) \subseteq \partial \Omega$. Fix $w_0\in \Omega$ and $r>0$ such that $B(w_0,r)\Subset \Omega$, and define, for $\xi\in\bar\Omega$, \[ u(\xi)= \sup\{\varphi(\xi): \varphi \in \mathcal{SH}_m(\bar\Omega)\cap \mathcal{C}(\bar \Omega),\ \varphi \leq 0,\ \varphi \leq -1 \ \text{on} \ B(w_0,r)\}\, . \] Then $u$ is lower semicontinuous, and by Theorem~\ref{thm_edwards} part (b), we have that \[ u(\xi)=\inf\left\{\int -\chi_{B(w_0,r)} \, d\mu : \mu \in \mathcal{J}_\xi^m(\bar\Omega)\right\}=-\sup\left\{\mu(B(w_0,r)): \mu \in \mathcal{J}_\xi^m(\bar\Omega)\right\}\, . \] We shall prove that $\lim_{\xi\to \partial \Omega}u(\xi)=0$. Assume the contrary, i.e. that there is a point $z \in \partial \Omega$ such that $\liminf_{\xi \rightarrow z}u(\xi)<0$. Then there exist $\varepsilon>0$ and a sequence $z_n \rightarrow z$ such that $u(z_n)<-\varepsilon$ for every $n$. We can find corresponding measures $\mu_n \in \mathcal{J}_{z_n}^m(\bar\Omega)$ such that $\mu_n(B(w_0,r))>\varepsilon$. By Theorem~\ref{thm_convmeasures} we can (by passing to a subsequence) assume that $\mu_n$ converges weak-$^\ast$ to a measure $\mu \in \mathcal{J}_z^m(\bar \Omega)$. Then, using Lemma~2.3 in \cite{CCW}, we have that \[ \mu(\overline {B(w_0,r)})=\int \chi_{\overline {B(w_0,r)}} \, d\mu \geq \limsup_{n \rightarrow \infty} \int \chi_{\overline {B(w_0,r)}} \, d\mu_n =\limsup_{n \rightarrow \infty} \mu_n(\overline {B(w_0,r)}) \geq\varepsilon> 0. \] This contradicts the assumption that $\mu \in \mathcal{J}_z^m(\bar \Omega)$ only has support on the boundary. It remains to show that $u \in \mathcal{SH}_m(\bar \Omega)\cap \mathcal{C}(\bar \Omega)$. We have that $u^*\in \mathcal {SH}_m(\Omega)\cap \mathcal {USC}(\bar \Omega)$ and $\lim_{\xi\to \partial \Omega} u^*(\xi)=0$, so by the generalized Walsh theorem (Proposition~3.2 in~\cite{Blocki_weak}) we get that $u^*\in \mathcal C(\bar\Omega)$. This means that $u=u^*$, and $u$ is a continuous function. Finally, Theorem~\ref{thm_pmboundary} gives us that $u\in \mathcal {SH}_m(\bar \Omega)$. Note that $u$ is a continuous exhaustion function for $\Omega$. \medskip $(3) \Rightarrow (4):$ Let $z \in \partial \Omega$, and let $\varphi \in \mathcal{SH}_m(\bar \Omega)$ be a weak barrier at $z$, so that $\varphi < 0$ on $\Omega$ and $\varphi(z)=0$. Let $\mu \in \mathcal{J}_z^m(\bar \Omega)$, then \[ 0=\varphi(z)\leq \int \varphi \, d\mu \leq 0\, . \] Hence $\operatorname{supp}(\mu) \subseteq \partial \Omega$. \medskip The proof of the equivalence (1)$\Leftrightarrow$(5) is postponed to Corollary~\ref{smooth}. \end{proof} \section{An extension theorem}\label{sec_extention} In this section we shall prove the extension theorem discussed in the introduction (Theorem~\ref{ext_in_pm_hyp}). We also provide two new characterizations of $P_m$-hyperconvex domains (Corollary~\ref{cor2} and Theorem~\ref{localization_Pm}), and finally we prove that for a $P_m$-hyperconvex domain $\Omega$ one can find a continuous $m$-subharmonic exhaustion function on $\bar \Omega$ which is strictly $m$-subharmonic and smooth in $\Omega$ (Corollary~\ref{smooth}). We shall need the following lemma.
\begin{lemma}\label{lem} Assume that $\Omega$ is a $P_m$-hyperconvex domain in $\C^n$, $1 \leq m \leq n$, and let $U$ be an open neighborhood of $\partial \Omega$. If $f\in\mathcal{SH}_m(U)\cap C^{\infty}(U)$, then there is a function $F \in \mathcal{SH}_m(\bar \Omega) \cap \mathcal{C}(\bar \Omega)$ such that $F=f$ on $\partial \Omega$. \end{lemma} \begin{proof} Let $\psi\in \mathcal {SH}_m(\bar \Omega)\cap \mathcal C(\bar \Omega)$ be an exhaustion function for $\Omega$ (see Theorem~\ref{thm_charPmhx}). Let $U$ be an open set such that $\partial \Omega \subseteq U$ and $f\in \mathcal{SH}_m(U)\cap \mathcal C^{\infty}(U)$, and let $V$ be an open set such that $\partial \Omega \subseteq V \Subset U$. Moreover, let $K\subseteq \Omega$ be a compact set such that $\bar\Omega \subseteq K \cup U$ and $\partial K \subseteq V$. Since $\Omega$ is also $m$-hyperconvex, there exists a smooth and strictly $m$-subharmonic exhaustion function $\varphi$ for $\Omega$ (see~\cite{ACH}). Let $M>1$ be a constant large enough so that for all $z\in K$ \[ \varphi(z) - 1 > M \psi(z)\, . \] From Theorem~\ref{cont} part $(a)$ there exists an increasing sequence $\psi_j\in \mathcal{SH}_m(\Omega_j)\cap \mathcal{C}(\bar \Omega_j)$, where $\bar \Omega \subseteq \Omega_j\Subset \Omega \cup V$, such that $\psi_j\to \psi$ uniformly on $\bar \Omega$ with \[ \psi-\psi_j<\frac {1}{Mj}\, . \] Let us define \[ \varphi_j := \begin{cases} \max\left\{\varphi-\frac{1}{j}, M \psi_j\right\}, & \text{ if } z \in \Omega\, ,\\ M\psi_j, & \text{ if } z \in \Omega_j \setminus \Omega\, . \end{cases} \] Note that the function $\varphi_j$ is $m$-subharmonic and continuous on $\Omega_j$, and $\varphi_j=\varphi-\frac{1}{j}$ on $K$. Next let $g$ be a smooth function such that $g=1$ on $V$, and $\operatorname{supp}(g)\subseteq U$. Since $\varphi_j$ is strictly $m$-subharmonic on the set where $g$ is non-constant, we can choose a constant $C$ so large that the function \[ F_j:=C\varphi_j+gf \] belongs to $\mathcal{SH}_m^o(\bar\Omega)$. Observe that \[ \max \{\varphi,M\psi\}-\max\left\{\varphi-\frac{1}{j}, M\psi_j\right\} \leq \frac{1}{j}\, , \] and define \[ F:=C\max\{\varphi,M\psi\}+gf. \] Then we have that \[ F \geq F_j \geq F-\frac{C}{j}\, , \] and therefore, by uniform convergence, we get that $F\in \mathcal{SH}_m(\bar\Omega)\cap \mathcal{C}(\bar\Omega)$. Furthermore, for $z \in \partial \Omega$, we have that \[ 0 \leq f(z)-F_j(z)=-C\varphi_j(z)=-CM\psi_j(z)\to -CM\psi (z)=0\,\text{ as } j\to \infty\, , \] and we see that $F=f$ on $\partial \Omega$. \end{proof} Now we state and prove the main theorem of this section. \begin{theorem}\label{ext_in_pm_hyp} Let $\Omega$ be a bounded $P_m$-hyperconvex domain in $\mathbb C^n$, $1 \leq m \leq n$, and let $f$ be a real-valued function defined on $\partial \Omega$. Then the following are equivalent: \medskip \begin{enumerate}\itemsep2mm \item there exists $F\in \mathcal{SH}_m(\bar \Omega)$ such that $F=f$ on $\partial \Omega$; \item $f\in \mathcal{SH}_m(\partial \Omega)$. \end{enumerate} Furthermore, if $f$ is continuous on $\partial \Omega$, then the function $F$ can be chosen to be continuous on $\bar\Omega$. \end{theorem} \begin{proof} (1)$\Rightarrow$(2): Follows immediately from Corollary~\ref{cor}.
\bigskip (2)$\Rightarrow$(1): Let $f\in \mathcal{SH}_m(\partial \Omega)$, then by Theorem~\ref{cont} part $(b)$ there exists a decreasing sequence $u_j\in\mathcal{SH}_m^o(\partial \Omega)$ of smooth functions such that $u_j\to f$ as $j\to \infty$. By assumption $\Omega$ is in particular a regular domain, and therefore there is a sequence of harmonic functions $h_j$, defined on $\Omega$ and continuous on $\bar \Omega$, such that $h_j=u_j$ on $\partial \Omega$. Define \[ Sh_j=\sup\left \{v\in \mathcal{SH}_m(\bar \Omega): v\leq h_j\right\}\, ; \] then by Theorem~\ref{thm_edwards} we have that \[ Sh_j=\sup\left \{v\in \mathcal{SH}_m(\bar \Omega)\cap\mathcal{C}(\bar \Omega): v\leq h_j\right\}\, . \] Hence, $Sh_j$ is lower semicontinuous. Next we shall prove that $Sh_j$ is in fact continuous. By Lemma~\ref{lem} there exists $H_j\in \mathcal{SH}_m(\bar \Omega)\cap\mathcal C (\bar \Omega)$ such that $H_j=h_j$ on $\partial \Omega$. This implies that $H_j\leq Sh_j\leq (Sh_j)^*\leq h_j$, so $(Sh_j)^*=h_j=H_j$ on $\partial \Omega$. Note also that for all $z\in \partial \Omega$, and all $\mu\in \mathcal J_z^m(\bar \Omega)$, it holds that $\operatorname{supp}(\mu)\subseteq \partial \Omega$ by Theorem~\ref{thm_charPmhx}, and then \[ \int_{\bar \Omega}(Sh_j)^*\,d\mu=\int_{\partial \Omega}(Sh_j)^*\,d\mu=\int_{\partial \Omega}H_j\,d\mu=\int_{\bar \Omega}H_j\,d\mu\geq H_j(z)=(Sh_j)^*(z)\, . \] Therefore, by Theorem~\ref{thm_pmboundary}, $(Sh_j)^*\in \mathcal{SH}_m(\bar \Omega)$, so $(Sh_j)^*=Sh_j$, and finally $Sh_j\in \mathcal{SH}_m(\bar \Omega)\cap\mathcal{C}(\bar \Omega)$. Now let \[ F=\lim_{j\to \infty}Sh_j\, . \] Observe that $F=f$ on $\partial \Omega$, and $F\in \mathcal{SH}_m(\bar \Omega)$, since it is the limit of a decreasing sequence $Sh_j\in \mathcal{SH}_m(\bar \Omega)\cap\mathcal C(\bar \Omega)$. \bigskip To prove the last statement of the theorem, assume that $f\in \mathcal C(\partial \Omega)$. Let $h$ be a harmonic function on $\Omega$ that is continuous on $\bar \Omega$ with boundary values $f$. As in the previous part of the proof, define \[ Sh=\sup\left \{v\in \mathcal{SH}_m(\bar \Omega): v\leq h\right\}=\sup\left \{v\in \mathcal{SH}_m(\bar \Omega)\cap\mathcal C(\bar \Omega): v\leq h\right\}\, , \] so $Sh$ is lower semicontinuous. Furthermore, since $Sh\leq Sh_j$, we have $(Sh)^*\leq (Sh_j)^*=Sh_j$, and since \[ (Sh)^*\leq \lim_{j\to \infty}Sh_j=F\leq Sh\, , \] we conclude that $(Sh)^*=Sh=F\in \mathcal{SH}_m(\bar \Omega)\cap\mathcal C(\bar \Omega)$. \end{proof} Earlier we saw that if $\Omega$ is $P_m$-hyperconvex and if $z \in \partial \Omega$, then the measures in $\mathcal{J}_z^m(\bar \Omega)$ only have support on $\partial \Omega$. Following the line of \cite{HP}, we will now see that, when $\Omega$ is $P_m$-hyperconvex, we actually have that $\mathcal J_z^m(\bar \Omega)=\mathcal J_z^m(\partial \Omega)$ for $z \in \partial \Omega$. This gives us another characterization of $P_m$-hyperconvex domains. \begin{corollary}\label{cor2} Let $\Omega$ be a bounded domain in $\C^n$. Then $\Omega$ is $P_m$-hyperconvex if, and only if, for all $z\in \partial \Omega$ we have $\mathcal J_z^m(\bar \Omega)=\mathcal J_z^m(\partial \Omega)$. \end{corollary} \begin{proof} First assume that $\Omega$ is $P_m$-hyperconvex. It is clear that $\mathcal J_z^m(\partial \Omega)\subseteq\mathcal J_z^m(\bar \Omega)$.
To prove the converse inclusion, take $z\in \partial \Omega$, $\mu \in \mathcal J_z^m(\bar\Omega)$, and $f\in \mathcal{SH}_m^o(\partial \Omega)$. Then $f\in \mathcal{SH}_m(\partial \Omega)\cap \mathcal C(\partial \Omega)$, and by Theorem~\ref{ext_in_pm_hyp} there exists $F\in \mathcal{SH}_m(\bar\Omega)$ such that $F=f$ on $\partial \Omega$. Since $\operatorname{supp}(\mu)\subseteq \partial \Omega$, we have \[ f(z)=F(z)\leq \int_{\bar\Omega}F\,d\mu=\int_{\partial \Omega}F\,d\mu=\int_{\partial\Omega}f\,d\mu, \] which means that $\mu\in \mathcal J_z^m(\partial \Omega)$. For the converse implication, assume that for all $z\in \partial \Omega$ we have $\mathcal J_z^m(\bar \Omega)=\mathcal J_z^m(\partial \Omega)$. Then for all $z\in \partial \Omega$ and all $\mu \in \mathcal J_z^m(\bar \Omega)$ we have $\operatorname{supp}(\mu)\subseteq \partial \Omega$, so by Theorem~\ref{thm_charPmhx} the domain $\Omega$ is $P_m$-hyperconvex. \end{proof} On $P_m$-hyperconvex domains, we can now characterize the functions $u \in \mathcal{SH}_m(\bar \Omega)$ as those functions that are in $\mathcal{SH}_m(\Omega)$ and whose restriction $u|_{\partial \Omega}$ is in $\mathcal{SH}_m(\partial \Omega)$. \begin{theorem}\label{cor_msubbdvalue} Let $\Omega$ be a bounded $P_m$-hyperconvex domain in $\mathbb C^n$. Then $u\in \mathcal{SH}_m(\bar \Omega)$ if, and only if, $u\in \mathcal{SH}_m(\Omega)$, and $u\in \mathcal{SH}_m(\partial \Omega)$. \end{theorem} \begin{proof} This follows from Theorem~\ref{thm_pmboundary} and Corollary~\ref{cor2}. \end{proof} As a corollary we obtain that for $P_m$-hyperconvex domains the exhaustion function can be chosen to be strictly $m$-subharmonic and smooth, as was announced in Theorem~\ref{thm_charPmhx}. \begin{corollary}\label{smooth} Let $\Omega$ be a bounded $P_m$-hyperconvex domain in $\mathbb C^n$. Then $\Omega$ admits a continuous negative exhaustion function which is $m$-subharmonic on $\bar \Omega$, smooth and strictly $m$-subharmonic on $\Omega$. \end{corollary} \begin{proof} Since $\Omega$ is also $m$-hyperconvex, there exists a negative exhaustion function $\varphi\in \mathcal {SH}_m(\Omega)\cap \mathcal C^{\infty}(\Omega)$ which is strictly $m$-subharmonic on $\Omega$. Now it follows from Theorem~\ref{cor_msubbdvalue} that $\varphi \in \mathcal {SH}_m(\bar\Omega)$. \end{proof} Finally, we can prove that if a domain is locally $P_m$-hyperconvex, then it is globally $P_m$-hyperconvex. \begin{theorem}\label{localization_Pm} Let $\Omega$ be a bounded domain in $\mathbb C^n$ such that for every $z\in \partial \Omega$ there exists a neighborhood $U_z$ for which $\Omega\cap U_z$ is $P_m$-hyperconvex. Then $\Omega$ is $P_m$-hyperconvex. \end{theorem} \begin{proof} Assume that $\Omega$ is locally $P_m$-hyperconvex. Then it is also locally $m$-hyper\-con\-vex. By Theorem~3.3 in~\cite{ACH}, we know that $\Omega$ must be globally $m$-hyperconvex. Thus, there exists $\psi \in \mathcal{SH}_m(\Omega)\cap \mathcal{C}(\bar \Omega)$, $\psi \not\equiv 0,$ such that $\psi|_{\partial \Omega}=0$. We shall now show that $\psi \in \mathcal{SH}_m(\bar \Omega)$. Thanks to Theorem~\ref{localization}, it is enough to show that for every $z \in \bar \Omega$ there is a ball $B_z$ such that $\psi|_{\bar \Omega \cap \bar B_z} \in \mathcal{SH}_m(\bar \Omega \cap \bar B_z)$. For $z \in \Omega$ there exists $r>0$ such that $B(z,r)\Subset \Omega$, and then $\psi|_{B(z,r)}\in \mathcal {SH}_m(B(z,r))\cap \mathcal C(\bar B(z,r))$.
Since, for every $w\in \partial B(z,r)$, \begin{equation}\label{ball} \mathcal J^m_w(\bar B(z,r))=\mathcal J^m_w(\partial B(z,r))=\{\delta_w\}\, , \end{equation} we have that $\psi\in\mathcal {SH}_m(\partial B(z,r))$, and therefore by Corollary~\ref{cor_msubbdvalue} we have that $\psi\in\mathcal {SH}_m(\bar B(z,r))$. Now it is sufficient to look at $z \in \partial \Omega$. Fix $z_0 \in \partial \Omega$, and a small ball $B_{z_0}$ around $z_0$. Without loss of generality assume that $B_{z_0} \Subset U_{z_0}$, so that $\Omega \cap B_{z_0}$ is $P_m$-hyperconvex. Once again, by Corollary \ref{cor_msubbdvalue} it is enough to show that $\psi \in \mathcal{SH}_m\bigl(\partial (\Omega \cap B_{z_0})\bigr)$, i.e. for every $z \in \partial (\Omega \cap B_{z_0})$, and every $\mu \in \mathcal{J}_{z}^m(\partial (\Omega \cap B_{z_0}))$, it holds that \begin{equation}\label{eq:locally} \psi(z)\leq \int \psi \, d\mu\, . \end{equation} Suppose that $z \in \partial \Omega \cap B_{z_0}\setminus \partial B_{z_0}$. First we shall show that if $\mu \in \mathcal{J}_z^m(\partial(\Omega \cap B_{z_0}))$, then $\mu$ has support on $\partial \Omega$, and therefore condition $(\ref{eq:locally})$ will be fulfilled. Since $\Omega \cap U_{z_0}$ is $P_m$-hyperconvex, it has an exhaustion function $\varphi \in \mathcal{SH}_m\bigl(\overline{\Omega \cap U_{z_0}}\bigr)$, and in particular $\varphi \in \mathcal{SH}_m\left(\partial(\Omega \cap B_{z_0})\right)$. Let $\mu \in \mathcal{J}_z^m\left(\partial(\Omega \cap B_{z_0})\right)$, then we have \[ 0=\varphi(z)\leq \int \varphi \, d\mu \leq 0\, , \] which means that $\mu$ is supported on the set where $\varphi=0$, i.e. on $\partial \Omega$. Next, suppose that $z \in \bar\Omega \cap \partial B_{z_0}$. We claim that \[ \mathcal{J}_{z}^m\left(\partial(\Omega \cap B_{z_0})\right)=\{\delta_z\}\, , \] which ensures that $(\ref{eq:locally})$ holds. From (\ref{ball}) and from Theorem~\ref{ext_in_pm_hyp} we know that for every $z \in \bar \Omega \cap \partial B_{z_0}$ there exists a function $\varphi \in \mathcal {SH}_m(\bar B_{z_{0}})\subseteq\mathcal{SH}_m\left(\bar\Omega \cap \bar B_{z_0}\right)$ such that $\varphi(z)=0$ and $\varphi(\xi)<0$ for every $\xi \neq z$. By the same argument as above, we see that $\mathcal{J}_{z}^m\left(\partial(\Omega \cap B_{z_0})\right)=\{\delta_z\}$. \end{proof} \section{Some concluding remarks on approximation}\label{sec approx} Approximation is a central part of analysis. The type of approximation needed depends obviously on the situation at hand. In connection with Theorem~\ref{cont} one can ask the following question. Let $\Omega\subseteq\C^n$ be a bounded and open set, and let $1\leq m\leq n$. Under what assumptions on $\Omega$ do we have that \begin{equation}\label{question} \mathcal{SH}_m(\Omega) \cap \mathcal C(\bar \Omega)=\mathcal{SH}_m(\bar\Omega) \cap \mathcal C(\bar \Omega)? \end{equation} In the case when $m=1$, this type of theorem can be traced back to the work of Walsh~\cite{Walsh}, Keldysh~\cite{Keldych}, and Deny~\cite{deny, deny2}, where they considered harmonic functions. In the harmonic case, some call this theorem the approximation theorem of Keldysh-Brelot after the contributions~\cite{Keldych,Brelot, Brelot2}. For subharmonic functions this type of approximation is included in the inspiring work of Bliedtner and Hansen~\cite{BlietnerHansen2} (see also~\cite{BlietnerHansen1,Hansen}). The articles mentioned are in a very general setting. For us here it suffices to mention the following: \begin{theorem}\label{thm_question} Let $\Omega$ be a bounded domain in $\RE^n$.
The following assertions are then equivalent: \begin{enumerate}\itemsep2mm \item for each $u$ in $\mathcal{SH}(\Omega) \cap \mathcal C(\bar \Omega)$ and each $\varepsilon>0$ there is a function $v$ in $\mathcal{SH}(\bar\Omega) \cap \mathcal C(\bar \Omega)$ such that $|u-v|<\varepsilon$ on $\bar\Omega$; \item the sets $\RE^n\backslash\bar{\Omega}$, and $\RE^n\backslash \Omega$, are thin at the same points of $\bar\Omega$. \end{enumerate} \end{theorem} \noindent For further information on the case $m=1$ we refer to the inspiring survey written by Hedberg~\cite{Hedberg} (see also~\cite{gardiner}). If we look at the other end of the Caffarelli-Nirenberg-Spruck model, when $m=n$ and we are in the world of pluripotential theory, then our approximation question bears resemblance to the so-called Mergelyan approximation of holomorphic functions. Therefore, some call~(\ref{question}) the $\mathcal{PSH}$-Mergelyan property (see e.g.~\cite{H}). The first positive result for the $\mathcal{PSH}$-Mergelyan property is due to Sibony. In 1987, he proved in~\cite{S} that every smoothly bounded and pseudoconvex domain has this property. Later Forn\ae ss and Wiegerinck~\cite{FW} generalized this in their beautiful paper to arbitrary domains with $C^1$-boundary. Recently, Persson and Wiegerinck~\cite{PW} proved that a domain whose boundary is continuous, with the possible exception of a countable set of boundary points, has the $\mathcal{PSH}$-Mergelyan property (this generalizes~\cite{avelin2,H}). Furthermore, in~\cite{PW} they constructed very enlightening examples that show that there can be no corresponding Theorem~\ref{thm_question} in the case $m=n$. \bigskip At this point there is no satisfactory answer to question~(\ref{question}) within the Caffarelli-Nirenberg-Spruck framework that covers the knowledge of the end cases $m=1$ and $m=n$. Even so, in Theorem~\ref{bm-reg} we give a family of bounded domains that satisfy~(\ref{question}), and we prove several characterizations of this type of domain. Obviously, there are domains that satisfy~(\ref{question}) but are not covered by Theorem~\ref{bm-reg}. For further information, and inspiration, on approximation we refer to~\cite{gardiner,Gauthier2} and the references therein. \begin{theorem} \label{bm-reg} Assume that $\Omega$ is a bounded domain in $\C^n$, $n\geq 2$, $1\leq m\leq n$. Then the following assertions are equivalent: \medskip \begin{enumerate}\itemsep2mm \item for every continuous function $f:\partial\Omega\to \RE$ we have that \[ \operatorname{PB}^m_f\in\mathcal{SH}_m(\bar{\Omega})\cap \mathcal C(\bar {\Omega})\, , \text{ and } \operatorname{PB}^m_f=f \text{ on } \partial \Omega\, , \] where \[ \operatorname{PB}^m_f(z)=\sup\Bigg\{v(z): v\in\mathcal{SH}_m(\bar \Omega),\; v(\xi)\leq f(\xi)\, , \;\; \forall \xi\in\partial\Omega\Bigg\}\, ; \] \item $\partial\Omega$ has a strong barrier at every point $z_0\in\partial\Omega$ that is $m$-subharmonic on $\bar{\Omega}$, i.e. there exists an $m$-subharmonic function $u:\bar{\Omega}\to\RE$ such that \[ \lim_{x\to z_0\atop x\in\Omega} u(x)=0\, , \] and \[ \limsup_{x\to y\atop x\in\Omega} u(x)<0 \qquad \text{ for all } y\in\bar{\Omega}\backslash\{z_0\}\, ; \] \item $\Omega$ admits an exhaustion function $\varphi$ that is negative, continuous, $m$-subharmonic on $\bar{\Omega}$, smooth on $\Omega$, and such that \[ \left(\varphi(z)-|z|^2\right)\in \mathcal{SH}_m(\bar{\Omega})\, ; \] \item for every $z\in \partial \Omega$ we have that $\mathcal{J}_z^m(\bar \Omega)=\{\delta_z\}$.
\end{enumerate} \end{theorem} \begin{proof} $(1)\Rightarrow(4):$ Fix $z\in \partial \Omega$, $\mu\in \mathcal J_z^m(\bar \Omega)$, and let $f$ be a real-valued continuous function defined on $\partial \Omega$ such that $f(z)=0$ and $f(\xi)<0$ for $\xi \neq z$. Then it holds that \[ 0=\operatorname{PB}^m_f(z)\leq \int\operatorname{PB}^m_f\,d\mu\leq 0\, , \] and therefore it follows that $\operatorname{supp}(\mu)\subseteq \{z\}$. Thus, $\mu=\delta_z$. \medskip $(4)\Rightarrow(1):$ First note that it follows from (4) that every continuous function defined on the boundary $\partial\Omega$ is $m$-subharmonic on $\partial\Omega$ in the sense of Definition~\ref{def_msubkomp}. Let $f\in \mathcal C(\partial \Omega)$; then by Theorem~\ref{ext_in_pm_hyp} and Theorem~\ref{thm_charPmhx} there exists a function $F\in \mathcal {SH}_m(\bar \Omega)\cap \mathcal C(\bar \Omega)$ such that $F=f$ on $\partial \Omega$. Let us define \[ \textbf{S}_f(z)=\sup\Bigg\{v(z): v\in\mathcal{SH}_m(\Omega),\; \varlimsup_{\zeta\rightarrow\xi \atop \zeta\in\Omega}v(\zeta)\leq f(\xi)\, , \;\; \forall \xi\in\partial\Omega\Bigg\}\, . \] For $z\in \Omega$, we have that \[ F(z)\leq \operatorname{PB}^m_f(z)\leq \textbf{S}_f(z)\, , \] and therefore, since $F$ is continuous on $\bar\Omega$ with $F=f$ on $\partial\Omega$, \[ \lim_{\zeta\to \xi}\textbf{S}_f(\zeta)=f(\xi)\,\qquad \text{for all } \xi\in \partial \Omega\, . \] Thanks to the generalized Walsh theorem (Proposition~3.2 in~\cite{Blocki_weak}) we get that $\textbf{S}_f\in \mathcal {SH}_m(\Omega)\cap \mathcal C(\bar \Omega)$, and by Corollary~\ref{cor_msubbdvalue}, $\textbf{S}_f\in \mathcal {SH}_m(\bar \Omega)$. Hence, $\operatorname{PB}^m_f=\textbf{S}_f$, and the proof is finished. \medskip $(1)\Rightarrow(2):$ Fix $z\in \partial \Omega$, and let $f$ be a continuous function on $\partial \Omega$ such that $f(z)=0$ and $f(\xi)<0$ for $\xi \neq z$. Then the function $\operatorname{PB}^m_f$ is a strong barrier at $z$. \medskip $(2)\Rightarrow(4):$ Fix $z\in \partial \Omega$, $\mu\in \mathcal J_z^m(\bar \Omega)$, and let $u_z$ be a strong barrier at $z$. Then it holds that \[ 0=u_z(z)\leq \int u_z\,d\mu\leq 0\, , \] and therefore $\operatorname{supp}(\mu)\subseteq \{z\}$. Thus, $\mu=\delta_z$. \medskip $(4)\Rightarrow(3):$ Let $\mathcal J_z^{c,m}$ be the class of Jensen measures defined by continuous $m$-subharmonic functions on $\Omega$ (see~\cite{ACH}), i.e. $\mu\in \mathcal J_z^{c,m}$ if \[ u(z)\leq \int_{\bar{\Omega}} u \, d\mu\, , \text{ for all } u \in \mathcal{SH}_m(\Omega) \cap \mathcal C(\bar \Omega)\, . \] Let $z\in \partial \Omega$, and note that $\mathcal {SH}_m^o(\bar \Omega)\subseteq \mathcal{SH}_m(\Omega) \cap \mathcal C(\bar \Omega)$, so \[ \mathcal J_z^{c,m}\subseteq \mathcal J_z^m(\bar \Omega)=\{\delta_z\}\, . \] Therefore, by Theorem~4.3 in~\cite{ACH}, there exists an exhaustion function $\varphi$ that is negative, smooth, $m$-subharmonic on $\Omega$, continuous on $\bar \Omega$, and such that \[ \left(\varphi(z)-|z|^2\right)\in \mathcal{SH}_m(\Omega)\, . \] Condition (4) implies that every continuous function defined on the boundary is also $m$-subharmonic. This means that $(\varphi(z)-|z|^2)\in \mathcal{SH}_m(\partial\Omega)$. Finally, Corollary~\ref{cor_msubbdvalue} gives us that $(\varphi(z)-|z|^2)\in \mathcal {SH}_m(\bar \Omega)$. \medskip $(3)\Rightarrow(2):$ Condition (3) implies that for $z\in\partial\Omega$ we have that \[ \varphi(z)-|z|^2=-|z|^2\in \mathcal {SH}_m(\partial \Omega)\, . \] Take $z_0\in \partial \Omega$, and note that \[ -|z-z_0|^2=-|z|^2+z\bar z_0+\bar zz_0-|z_0|^2\in \mathcal {SH}_m(\partial \Omega) \] (see the remark following the proof). Theorem~\ref{ext_in_pm_hyp} and Theorem~\ref{thm_charPmhx} imply that there exists $F\in \mathcal {SH}_m(\bar \Omega)\cap \mathcal C(\bar \Omega)$ such that $F=-|z-z_0|^2$ on $\partial \Omega$. The function $F$ is a strong barrier at $z_0$. \end{proof}
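\medskip \noindent {\bf Remark.} For the reader's convenience we spell out the elementary computation behind the last step of the proof of $(3)\Rightarrow(2)$; this is only a verification added as a complement. For fixed $z_0\in\C^n$ we have \[ -|z-z_0|^2=-|z|^2+h(z)\, ,\qquad h(z)=z\bar z_0+\bar z z_0-|z_0|^2=2\operatorname{Re}\,(z\cdot\bar z_0)-|z_0|^2\, . \] The function $h$ is pluriharmonic on $\C^n$, so both $h$ and $-h$ are $m$-subharmonic, and therefore every measure $\mu\in \mathcal J_z^m(\partial\Omega)$ satisfies $\int h\,d\mu=h(z)$. Hence, $-|z-z_0|^2=-|z|^2+h$ satisfies the Jensen inequality defining membership in $\mathcal {SH}_m(\partial \Omega)$ as soon as $-|z|^2$ does.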
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} This report examines the property of memory under periodic driving in a classical nonlinear system, namely the complex Ginzburg-Landau (CGL) equation~\cite{RevModPhys.74.99}. Our aim is to demonstrate properties of memory and coherence, following measurement probes. This is an interesting topic in its own right but also illustrates classical precursors to phenomena in quantum information contexts. The interplay between nonlinearity and quantum mechanics is an important ingredient in the fascinating but complicated issue of quantum-classical correspondence \cite{pang2005quantum,bolivar2013quantum}. The intense recent focus on quantum information science and technology and associated potential physical sources of qubits provides new impetus for this topic. Clearly some properties are purely quantum in nature \cite{kotler2021direct,de2021quantum}. However, other important features have quantum parallels. Measurement rates and decoherence are major concerns for quantum information devices. Classical analogs appear in glassy and disordered systems and more generally in appropriate classes of nonlinear equations~\cite{PhysRevA.65.032321,PhysRevA.62.012307}, where analogs of coherence and decoherence, entangled states~\cite{PhysRevA.65.032321,spreeuw1998classical}, chimeric patterns \cite{yao2013robustness}, frustration, and complexity \cite{guastello2008chaos,guastello2009introduction} are very rich. Another prominent example where such analogies have been intensely pursued over the past decade concerns the emerging field of the so-called pilot-wave hydrodynamics~\cite{bush1,bush2}. Our purpose here is to explore a nonlinear example of ``measurement'' and ``memory'' from this perspective. Many nonlinear equations can be used to explore these parallels, such as the nonlinear Schr\"{o}dinger (NLS) model~\cite{sulem,ablowitz,Kevrekidis2015}, the sine-Gordon equation \cite{cuevas2014sine}, coupled double-well potentials \cite{christoffel1981quantum}, etc. Indeed, nonlinearity often arises as a semi-classical approximation to quantum many-body systems (e.g., in BCS superconductivity \cite{tinkham2004introduction}, charge-density waves, Josephson junction equations \cite{pino2016nonergodic,PhysRevLett.122.054102}, or the Gross-Pitaevskii equation (GPE)~\cite{pethick,stringari}). Typically, nonlinear equations are the result of slaving among coupled linear fields, and their properties can be controlled by a variety of external drives. In fact, the NLS has been studied as a result of a quantum feedback process upon identical quantum systems subject to weak measurements~\cite{PhysRevA.62.012307}, while the GPE emerges as a result of a mean-field approximation of cold, dilute atomic quantum gases. These lines of analogy between quantum systems and nonlinear classical ones have led to the study of classical analogues of entanglement \cite{PhysRevA.65.032321,spreeuw1998classical} and decoherence \cite{gong2003quantum}, the latter including a nanomechanical resonator \cite{maillet2016classical}. We have chosen here to consider the two-dimensional (2D) CGL, as a prototypical example of such connections \cite{RevModPhys.74.99}. The CGL is interesting because it represents a {\it generic} amplitude model in the presence of slow variation and weak nonlinearity, corresponding to a normal form-type partial differential equation near a primary bifurcation of a homogeneous state~\cite{RevModPhys.74.99}.
At the same time, it exhibits secondary excitations in the form of topological coherent structures (domain walls, solitons and vortices), contains explicit self-consistent dissipation, and has been extensively used to successfully model phenomena in a number of physical systems, extending from superconductivity and superfluidity to liquid crystals, Rayleigh-B\'enard convection and Bose-Einstein condensates~\cite{Bodenschatz_annurev,RevModPhys.74.99}. In addition, it provides an appropriate vehicle to explore our interests here, as it possesses certain instabilities and coherent structures, as well as highly nontrivial metastable states, such as a vortex glass~\cite{bohr,PhysRevE.65.016122}, that we will explore in what follows. As we will see, the topological excitations and their interactions are a key source of (space and time) multiscale patterns and transitions, including freezing and stretched exponential relaxation, which relate to our interest in entanglement and decoherence. Here we specifically consider the 2D CGL with a cubic nonlinearity and an external periodic driving coupled to the field amplitude. It is worthwhile to note here that periodically-forced variants of the CGL equation have been argued to offer a generic model of parametrically forced systems such as the Faraday wave experiment~\cite{alastair}. In this context of hydrodynamics, indeed, there exist well-documented examples of matching the CGL and its coefficients to concrete hydrodynamic experiments~\cite{LEGAL2003114}. By periodically driving the CGL equation, we induce an analogue of an external perturbation akin to a periodic measurement process on a quantum system. In this driven system we can anticipate a ``phase diagram'' relevant to a classical analog of decoherence: one axis is akin to the ``measurement rate'' and the other axis represents the effective ``mixing rate'' induced by the drive. We might anticipate a demarcation curve separating regions of maintaining and losing coherence, i.e., ones preserving the memory of the original state and ones losing it. Here, the measurement rate would correspond to a field pulse applied to the system periodically. This will mix the states of the system. After a pulse, if the system recovers, the scenario will be analogous to that of coherence in a quantum system. The mixing rate is proportional to the strength of the nonlinearity in the CGL equation. The stronger the nonlinearity, the higher the expected mixing rate. As perhaps the simplest case, we apply the driving field uniformly in space, at a periodic rate in time, and with periodic boundary conditions. We numerically follow a variety of diagnostics, including ones quantifying the induced vorticity and the compressible and incompressible energy spectra and the cascades that they reveal, and we use the creation and annihilation of the vortices as a prototypical illustration of maintaining coherence, or potentially losing memory of the initial condition of the system. We identify both of these regimes and delineate the transition between them. We believe that this type of phenomenology may be broadly applicable in other distributed nonlinear dynamical systems and that this study may pave the way for further efforts to identify features of quantum systems that have non-trivial classical precursors. Indeed, this and related systems could offer interesting additional paradigms to the rapidly developing area of pilot-wave hydrodynamics~\cite{bush1,bush2}. Our presentation is structured as follows.
In Sec.~II we present the basic features of the model. Section III encompasses the quench dynamics protocol and the associated observables. In Section IV we comment on our numerical findings, while Section V concludes the report, offering some perspectives for future work. % \section{The Model: the 2D CGL equation}\label{model} We consider the two-dimensional (2D) dimensionless CGL equation in the prototypical form~\cite{RevModPhys.74.99} \begin{equation} \label{eq:cg1} \begin{split} \frac{\partial}{\partial t} A &= A+ (1+i \beta) \Delta A -(1+i \alpha) |A|^2 A\\& +A_0 \delta (t- [T_0+l T]),~~l=0,1,2,\ldots \end{split} \end{equation} Here, $A$ is a complex field and $\alpha$ and $\beta$ are real parameters. $A_0$ represents the strength of the external field (associated with the mixing rate, as discussed above), while $T$ represents the period of the quenching (i.e., the measurement rate) and $T_0$ is the initial offset time which allows the system to relax before the quenching starts. For $\alpha = \beta =0$, this equation becomes the real Ginzburg-Landau equation~\cite{RevModPhys.74.99}. On the other hand, in the limit $\alpha, \beta \rightarrow \infty$, this equation reduces to the nonlinear Schr\"{o}dinger equation. This 2D model admits a variety of coherent structures, including vortices and domain walls \cite{RevModPhys.74.99,bohr,PhysRevE.65.016122,JANIAUD1992269}. The symmetry-breaking instabilities of the model are well studied: these include the Benjamin-Feir, Eckhaus, convective, phase and absolute 2D instabilities~\cite{PhysRevA.46.R2992}. A general overview of 2D results for the model can be found in \cite{RevModPhys.74.99}. The phase diagram associated with the system suggests that there exist three pattern-formation regimes, namely defect turbulence, phase turbulence and vortex glass, depending on the parameter values of the model \cite{RevModPhys.74.99,PhysRevE.65.016122}. The peculiarity of the vortex glass state is that in this regime the vortices are arranged in cellular structures thought of as frozen states, i.e., very long-lived metastable states~\cite{PhysRevE.65.016122}. It is the latter frozen type of extremely long-lived attractor of the system that we find particularly appealing for our explorations of (nonlinearity-induced) coherence vs. decoherence and would like to further explore. In what follows, we will use quench dynamics into this vortex glass regime and will subsequently perturb the relevant state using the external drive. \begin{figure*}[!htbp] \includegraphics[width = \textwidth]{fig1.pdf} \caption{Snapshots of the absolute value of the field ($|A|$) for $\alpha=0.7$ and $\beta=-0.7$ at different times. We consider the state at $t=1200$ as the initial condition for the rest of the numerical studies reported in this work. Movies are available at the link \url{https://www.dropbox.com/sh/65ze3j94l3bcw4v/AADF_UjJeM-EAIgUlno_noiia?dl=0}} \label{fig:fig1} \end{figure*} \subsection{Quench Dynamics} An absolute instability induces defect pair formation in the model as demonstrated, e.g., in \cite{PhysRevA.46.R2992}. This defect pair formation mechanism can be systematically controlled via the nonlinear model parameters $\alpha$ and $\beta$~\cite{bohr}. In this work we fix $\alpha=0.7$ and $\beta=-0.7$. For this set of parameter values, a random initial condition leads to the formation of a vortex glass state as shown in Fig.~\ref{fig:fig1}. For our numerical experiments, we consider a 2D plane of size $L_x \times L_y$ ($L_x =L_y=80$) with $512 \times 512$ grid points.
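A minimal sketch of the set-up described here and in the next paragraph is the following: a pseudo-spectral Laplacian on the periodic grid (which builds in the boundary conditions stated below), SciPy's DOP853 time stepper, and the delta-function drive of Eq.~(\ref{eq:cg1}) implemented as a uniform jump $A \to A + A_0$ at the kick times $t=T_0+lT$. The resolution, tolerances and kick count used in the sketch are our own illustrative choices, not the production settings.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N, L = 256, 80.0                 # grid points per side (512 in the text)
alpha, beta = 0.7, -0.7          # CGL parameters used in this work
k1 = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k1, k1, indexing="ij")
K2 = KX**2 + KY**2               # spectral symbol of -Laplacian

def rhs(t, y):
    """Right-hand side of the undriven CGL, flattened for solve_ivp."""
    A = y.reshape(N, N)
    lapA = np.fft.ifft2(-K2 * np.fft.fft2(A))
    return (A + (1 + 1j * beta) * lapA
              - (1 + 1j * alpha) * np.abs(A)**2 * A).ravel()

def evolve(A, t0, t1):
    # complex y0 is supported by SciPy's explicit solvers such as DOP853
    sol = solve_ivp(rhs, (t0, t1), A.ravel(), method="DOP853",
                    rtol=1e-6, atol=1e-8)
    return sol.y[:, -1].reshape(N, N)

rng = np.random.default_rng(0)
A = (0.001 * rng.uniform(-0.5, 0.5, (N, N))).astype(complex)

T0, T, A0 = 1200.0, 10.0, 3.6    # relaxation offset, kick period, strength
A = evolve(A, 0.0, T0)           # relax towards the vortex-glass state
for l in range(20):              # kick, then integrate one period
    A = A + A0
    A = evolve(A, T0 + l * T, T0 + (l + 1) * T)
\end{verbatim}
Integrating between kicks and adding $A_0$ by hand avoids asking the adaptive stepper to resolve the delta function itself; this is the natural discretization of the drive term in Eq.~(\ref{eq:cg1}).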
We solve Eq.~\eqref{eq:cg1} numerically by using an explicit Runge-Kutta algorithm of order 8, namely DOP853~\cite{freelyDOP853,mine-01-03-447}. We initialized the simulation with $A=0.001\,R$, where $R$ represents random numbers drawn from the uniform distribution $[-0.5,0.5]$, as shown in Fig.~\ref{fig:fig1}(a). Additionally, periodic boundary conditions $A_{i,0} = A_{i,M}$, $A_{i,M+1} = A_{i,1}$, for $1 \le i \le L$, and $A_{0,j} = A_{L,j}$, $A_{L+1,j} = A_{1,j}$, for $1 \le j \le M$, are considered, where $L=M=512$ is the number of grid points per direction. The snapshots of the field amplitude at different times in Fig.~\ref{fig:fig1}(b-d) show the vortex defects that are surrounded by shock lines \cite{chate1996phase}. Together they form a long-lived metastable cell structure. The vortex defects are known to produce spiral waves \cite{aranson1992stability}. In this work we are interested in performing amplitude quenches to excite the system and observe its potential return to equilibrium (or absence thereof). We consider the field shown in Fig.~\ref{fig:fig1}(d) at $t= 1200$, corresponding to $T_0$ in Eq.~(\ref{eq:cg1}), as the initial condition for the following numerical experiments. This state is relaxed in the sense that no more vortex-antivortex annihilations take place upon further evolution until at least $t=2000$. On the other hand, this is a metastable state in which (slow) vortex motion still persists. Hence, such a state is often referred to as a glassy state~\cite{PhysRevE.65.016122}. We now periodically drive the (glassy state of the) system with a fixed amplitude $A_0$ (taken real) and period $T$. We fix the time periods $T=10$ and $T=15$ for our numerical experiments. These time periods are relatively short compared to the time required for the system to relax. As a result, the periodic drive takes the system out of equilibrium by changing its total energy, and we then observe the subsequent response of the system to a single excitation but, importantly, also to a periodic sequence of such excitations. \subsection{Observables} We consider the normalized distribution $P_{i}=P(n_i)$, where $P(n_i)$ is the probability distribution of amplitudes (PDA) $n_i=|A_i|$. For the complex CGL system, we first define the observable Shannon entropy \begin{equation} S_v =-\sum_{i}\Big[P_{i}\log(P_{i})\Big]\, , \end{equation} as a standard information-theoretic diagnostic of the system. Moreover, since the topological excitations of the 2D CGL model are vortices, we additionally consider the following vortex-based observables. We measure the absolute value of the winding number \cite{Akhavan:2020} \begin{equation} \Gamma =\frac{1}{2\pi}\int |\omega|\,dx\, dy , \end{equation} where $\omega$ is the vorticity. The vorticity is related to the velocity field $\bm{v}$ as $\bm{\omega}=\nabla \times \bm{v}$. This diagnostic counts the total number of vortices in the system. The change in this number will be used to assess the potential loss of memory through the creation of additional excitations and the departure from the previously reached glassy state, upon introducing the relevant perturbation. Further, in order to follow the energy distribution in the system, we separate the field into compressible and incompressible parts.
The term $| \nabla A|^2 $ [i.e., the energetic contribution associated with the Laplacian in Eq.~\eqref{eq:cg1}] can be written as \begin{eqnarray}\label{eq:vort_sp1aa} |\nabla A|^2= \left( \rho |\bm{v}|^2+ \left| \nabla \sqrt{\rho} \right|^2 \right), \end{eqnarray} where the transformation $A=\sqrt{\rho} e^{i \phi}$ yields $\rho=|A|^2$ and the velocity $\bm{v}=\nabla \phi$. Here, following, e.g., the exposition of~\cite{bradley2012energy}, the first and second terms represent the density of the kinetic energy ($E_\text{ke}$) and the quantum pressure ($E^{q}$), respectively, where the energies are given by \begin{eqnarray}\label{eq:vort_sp1bb} E_\text{ke}=\int \rho|\bm{v}|^2 \,d^2 r,~~~E^q=\int |\nabla \sqrt{\rho}|^2 \,d^2 r. \end{eqnarray} The velocity vector $\bm{v}$ can now be written as a sum of a solenoidal (incompressible) part $\bm{v}^\text{ic} $ and an irrotational (compressible) part $\bm{v}^\text{c}$, \begin{eqnarray}\label{eq:vort_sp1} \bm{v}=\bm{v}^\text{ic}+\bm{v}^\text{c}, \end{eqnarray} such that $\nabla \cdot{\bm{v}^\text{ic}} = 0$ and $\nabla \times{\bm{v}^\text{c}} = 0$. The incompressible and compressible kinetic energies are then \cite{Mithun2021decay} \begin{eqnarray}\label{eq:vort_sp2} E^\text{ic,c} = \int d^2 r |\sqrt{\rho}\, \bm{v}^\text{ic,c}(\bm{r})|^2. \end{eqnarray} We additionally compute the vortex spectra of the system \cite{Eyink_2006,RevModPhys.71.S383,kraichnan1980two,Numasato_2010,Mithun2021decay}. In $k$-space (with wave vector $\bm{k}$), the incompressible and compressible kinetic energy spectra $E^\text{ic,c}(k)$ are given by \begin{eqnarray}\label{eq:vort_sp3} E^\text{ic,c}(k)= k\sum_{j=x,y}\int_0^{2\pi} d\phi_k |\mathcal{F}_j(\bm{k})^\text{ic,c}|^2, \end{eqnarray} where $\mathcal{F}_j(\bm{k})$ is the Fourier transform of $\sqrt{\rho}\, v_j$, the $j$-th component of the density-weighted velocity field $\sqrt{\rho}\,\bm{v}$, and ($k=\sqrt{k_x^2+k_y^2}$, $\phi_k$) are polar coordinates in the $(k_x,k_y)$ plane. Through the above Fourier-space diagnostics, we can assess the transfer of energy between different wavenumbers, as well as the fractions of energy that pertain primarily to the vortical excitations (associated with the incompressible part) and ones associated with sound waves (the compressible part), analogously to earlier works such as~\cite{bradley2012energy,Mithun2021decay}; minimal numerical sketches of these diagnostics are given below. \begin{figure*}[!hbtp] \includegraphics[width = \textwidth]{densitym.pdf} \caption{ Snapshots of the field amplitude $|A|$ for drive strengths (a-e) $A_0=2.0$, (f-j) $A_0=3.6$, and (k-o) $A_0=4.8$. The other parameters are: repetition time interval $T=10$, and initial perturbation time $T_0=1200$, for CGL model parameters $\alpha=0.7$ and $\beta=-0.7$.} \label{fig:fig2} \end{figure*} \begin{figure}[!htbp] \includegraphics[width = 0.48\textwidth]{entropy.pdf} \caption{Shannon entropy $S_v$ for $T=10$ for different values of drive amplitude $A_0$. } \label{fig:figS} \end{figure} \begin{figure*}[!htbp] \includegraphics[width = 8.cm]{L80_T10} \includegraphics[width = 8.cm]{L80_T15} \caption{Winding number $\Gamma$ for (a) $T=10$ and (b) $T=15$. The axes here are $A_{0}$ (the drive amplitude) and time $t$ (measured from the time of the first kick).} \label{fig:pbc_wt} \end{figure*} % \section{Numerical experiments} We begin the numerical experiments by considering different values of the driving amplitude $A_0$ for a fixed drive periodicity $T$. Fig.~\ref{fig:fig2} shows the field amplitude for $A_0=2,~3.6~\text{and}~4.8$, with the relevant ``kicks'' repeated every $T=10$ time units.
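Before turning to the figure, we record a minimal sketch of the two scalar diagnostics defined in the previous section, the amplitude-distribution entropy $S_v$ and the winding number $\Gamma$. The histogram bin count and the plaquette-circulation estimate of the vorticity are our own discretization choices.
\begin{verbatim}
import numpy as np

def shannon_entropy(A, bins=100):
    """S_v = -sum_i P_i log(P_i), from a histogram of |A|."""
    counts, _ = np.histogram(np.abs(A).ravel(), bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def _wrap(d):
    """Wrap phase differences into [-pi, pi)."""
    return (d + np.pi) % (2.0 * np.pi) - np.pi

def winding_number(A):
    """Gamma = (1/2pi) sum over plaquettes of |circulation of grad(phi)|;
    each vortex or antivortex contributes +1, so Gamma counts all defects."""
    phi = np.angle(A)
    d0 = _wrap(np.diff(phi, axis=0))   # phi(i+1,j) - phi(i,j)
    d1 = _wrap(np.diff(phi, axis=1))   # phi(i,j+1) - phi(i,j)
    circ = d0[:, :-1] + d1[1:, :] - d0[:, 1:] - d1[:-1, :]
    return np.abs(circ).sum() / (2.0 * np.pi)
\end{verbatim}
Evaluating these two functions on snapshots taken just before each kick yields time traces of the type shown in the figures of this section.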
The figure indicates the existence of three different regimes: (i) the system is nearly unaffected by the periodic driving: this is the regime where coherence is fully preserved and the relevant perturbation is weak. (ii) Nucleation of new vortex pairs and their subsequent annihilation: this is the intermediate regime where effective decoherence first emerges, substantially modifying the ultimate fate of the system, although the latter is still in a state bearing vortices with domain walls separating them. (iii) Finally, a constant-density regime emerges: here, the perturbation is strong and consequently it entirely collapses the system to a different state with no memory of the initial condition. In regime (i), despite the fact that no new vortices are generated, the imprint of the ``measurement'' (or, more accurately here, perturbation) process arises through the rapid emission of spiral waves, which can be clearly observed in Fig.~\ref{fig:fig2}(a-e). Regime (ii) can be seen in panels (f-l), while regime (iii) is shown in panels (m-o). We first characterize these observations using the defined observables. Fig.~\ref{fig:figS} shows the Shannon entropy based on the probability distribution of the amplitude $|A|$ for different values of $A_0$. The entropy initially increases with increasing $A_0$ due to the strong density fluctuations. At larger times, the entropy nearly saturates in regimes (i) and (ii), while it decreases in regime (iii). In regime (i), the saturation is rather natural to expect, as the system rapidly relaxes to the previous glassy state. Similarly, in regime (iii) the state evolves towards a constant amplitude (of unity); hence, naturally, the attraction to this state leads the entropy to decrease rapidly towards $0$. The most dynamic state is the intermediate one of regime (ii), where the relaxation is slow and therefore the entropy presents the oscillatory dynamics observed in Fig.~\ref{fig:figS}. The measurement based on the Shannon entropy provides a cumulative diagnostic (rather than a spatially distributed one) that clearly highlights the effect of the periodic driving on the amplitude. At the same time, the entropy does not immediately provide information about the vortex configurations (aside perhaps from regime (iii), where the tendency of the entropy to go to $0$ can lead us to infer the absence of vorticity). In that vein, we now characterize the dynamics based on the absolute value of the winding number, which provides us with a sense of the vortex creation and annihilation processes occurring in the system as a result of the drive. We consider two periods, $T=10$ and $T=15$. The results of both cases, shown in Fig.~\ref{fig:pbc_wt}, illustrate that as $A_0$ increases, the number of initially generated vortices also increases, in accordance with Fig.~\ref{fig:fig2}. The results of the $T=10$ case show that for the lower values $A_0=2$ and $2.4$, the system does not respond to the kicks by changing its vortical content; once again, this illustrates regime (i) of effective coherence preservation. For larger values of the kick amplitude, on the other hand, the number of initially generated vortices increases (in the presence of the perturbations), yet at the same time the rate of vortex-antivortex annihilations also progressively increases with $A_0$. On the other hand, for $T=15$ the rate of annihilation is decreased as compared to the $T=10$ case. This is due to the fact that the system has more time to relax in between the kicks as $T$ increases.
For intermediate values of $A_0$, we therefore observe that the system remains in a dynamic state in which significant (additional) vorticity is present, as shown by the colorbar of Fig.~\ref{fig:pbc_wt}, and has decohered from its original glassy state. For large values of $A_0$, eventually the vortex annihilation events take over and the system reaches the stable, locally attracting equilibrium state of $|A|=1$. \begin{figure*}[!htbp] \includegraphics[width = 8.cm]{L80_T10EIC.pdf} \includegraphics[width = 8.cm]{L80_T15EIC.pdf} \caption{ Incompressible energy as a function of time for (a) $T=10$ and (b) $T=15$. Colors represent different values of the drive strength $A_{0}$. } \label{fig:EIC} \end{figure*} \begin{figure*}[!htbp] \includegraphics[width = 8.cm]{L80_T10EC.pdf} \includegraphics[width = 8.cm]{L80_T15EC.pdf} \caption{Compressible energy as a function of time for (a) $T=10$ and (b) $T=15$. Colors represent different values of the drive strength $A_{0}$.} \label{fig:EC} \end{figure*} The measured incompressible and compressible energies for the cases of $T=10$ and $T=15$ are shown in Fig.~\ref{fig:EIC} and Fig.~\ref{fig:EC}, respectively. As expected, for $T=10$ the incompressible energy is nearly constant for $A_0=2$ and $2.4$ [see Fig.~\ref{fig:EIC}(a)], representing the invariance of the vortex configuration. For the range $A_0=2.6$--$2.8$, the incompressible energy initially decreases and then saturates at later times, corresponding to the reduction by one vortex pair shown in Fig.~\ref{fig:pbc_wt}(a). The increase in the rate of annihilation is well reflected in Fig.~\ref{fig:EIC}(a) for higher $A_0$, where the incompressible energy decreases. Similar results are seen in Fig.~\ref{fig:EIC}(b) for $T=15$, although the larger relaxation time shifts the associated phenomenology to larger values of $A_0$. For higher values of $A_0$, it is clear that the incompressible energy fluctuates in a way that parallels the winding number and reflects the potential emergence (for intermediate values of $A_0$) and disappearance (for higher values of $A_0$) of vortex-antivortex pairs through the relevant creation and annihilation processes. \begin{figure*}[!htbp] \includegraphics[width = 5.8cm]{C_L80_T10_A0_2.pdf} \includegraphics[width = 5.8cm]{C_L80_T10_A0_3_6.pdf} \includegraphics[width = 5.8cm]{C_L80_T10_A0_4_8.pdf} \includegraphics[width = 5.8cm]{IC_L80_T10_A0_2.pdf} \includegraphics[width = 5.8cm]{IC_L80_T10_A0_3_6.pdf} \includegraphics[width = 5.8cm]{IC_L80_T10_A0_4_8.pdf} \caption{(Top panels) Compressible energy spectra for (a) $A_0=2$, (b) $A_0=3.6$ and (c) $A_0=4.8$. (Bottom panels) The corresponding incompressible energy spectra for (d) $A_0=2$, (e) $A_0=3.6$ and (f) $A_0=4.8$. The vertical lines (from left to right) represent $k=1/\xi$ and $k=2 \pi/\xi$, where $\xi$ is the so-called healing length associated with the size of individual vortices. The other parameters are $T=10$, $\alpha=0.7$ and $\beta=-0.7$.} \label{fig:T10spectra} \end{figure*} We next analyze both the compressible, $E^\text{c}(k)$, and incompressible, $E^\text{ic}(k)$, energy spectra of the system, collected at different evolution times. This is typically done in order to appreciate the energy exchanges in a system and the possible connection of the associated mechanisms with potential turbulent or coherent-structure (such as vortex) induced dynamics~\cite{bradley2012energy}.
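Concretely, these spectra can be obtained from a Fourier-space Helmholtz decomposition of the density-weighted velocity field, as sketched below. The construction of the velocity from $\mathrm{Im}(A^*\nabla A)/\rho$, the shell width of $2\pi/L$, and the overall normalization are our own choices; on the log-log plots discussed next, the normalization only shifts the curves vertically and does not affect the power laws.
\begin{verbatim}
import numpy as np

def energy_spectra(A, L=80.0, eps=1e-12):
    N = A.shape[0]
    k1 = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    KX, KY = np.meshgrid(k1, k1, indexing="ij")
    rho = np.abs(A)**2 + eps           # regularized density

    def grad(f):                       # spectral gradient
        fh = np.fft.fft2(f)
        return np.fft.ifft2(1j * KX * fh), np.fft.ifft2(1j * KY * fh)

    Ax, Ay = grad(A)
    # density-weighted velocity: sqrt(rho) v = Im(A* grad A)/sqrt(rho)
    wx = np.imag(np.conj(A) * Ax) / np.sqrt(rho)
    wy = np.imag(np.conj(A) * Ay) / np.sqrt(rho)

    wxh, wyh = np.fft.fft2(wx), np.fft.fft2(wy)
    K2 = KX**2 + KY**2
    K2[0, 0] = 1.0                     # the k = 0 mode has no direction
    proj = (KX * wxh + KY * wyh) / K2  # longitudinal amplitude
    cxh, cyh = KX * proj, KY * proj    # compressible (irrotational) part
    ixh, iyh = wxh - cxh, wyh - cyh    # incompressible (solenoidal) part

    kmag = np.sqrt(KX**2 + KY**2)
    dk = 2.0 * np.pi / L
    edges = np.arange(0.0, kmag.max() + dk, dk)
    def shell(e2):                     # radial shell sums
        h, _ = np.histogram(kmag.ravel(), bins=edges, weights=e2.ravel())
        return h / N**4
    E_c = shell(np.abs(cxh)**2 + np.abs(cyh)**2)
    E_ic = shell(np.abs(ixh)**2 + np.abs(iyh)**2)
    return 0.5 * (edges[:-1] + edges[1:]), E_ic, E_c
\end{verbatim}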
The energy distribution from the smaller scales (corresponding to the vortex core size) to the larger scales (system size) can be inferred from such spectra. The upper and lower panels in Fig.~\ref{fig:T10spectra} show the compressible and incompressible energy spectra for three different values of $A_0~(=2.0,3.6,4.8)$ for fixed $T=10$. The compressible spectra shown in the upper panels of Fig.~\ref{fig:T10spectra} do not exhibit any convincing scaling laws. However, at the same time we note that for $1/\xi < k < 2\pi/\xi$, where $\xi$ represents the numerically measured healing length (vortex core radius), the spectra are close to the reference line drawn for $k^{-7/2}$. This power law is characteristic of superfluid turbulence corresponding to sound-wave equilibrium, as discussed, e.g., in~\cite{Mithun2021decay,PhysRevA.86.053621}. The decrease of the compressible energy with time for higher $A_0$ is due to the loss of small-amplitude fluctuations, as shown in Fig.~\ref{fig:fig2}. The incompressible energy spectra of the same case are depicted in the bottom panels of Fig.~\ref{fig:T10spectra}. The spectra for different values of $A_0$ show the existence of a $k^{-3}$ power law, associated with individual vortex cores~\cite{bradley2012energy}, for sufficiently large $k$. On the other hand, interestingly and somewhat unexpectedly, there exists a lower interval of wavenumbers where our observations are close to a Kolmogorov $k^{-5/3}$ power law, which in 2D settings is typically associated with the inverse energy cascade, as observed, e.g., in superfluid vortex turbulence \cite{bradley2012energy}. We conjecture that the existence of the $k^{-5/3}$ power law, even for smaller $A_0$, is due to the irregular arrangement and slow motion of the vortices, although this is clearly a topic that merits further investigation. The downward shift of the vortex spectra at larger times for larger values of $A_0$ is due to the decrease in the vortex numbers. We find similar results for the $T=15$ case as well (results omitted for brevity). In this section we have demonstrated the effect of a periodic drive on the defect dynamics of the CGL model. Our results suggest that there are cascading processes even in the weaker-amplitude cases, where the configuration, at the level of its density profile, appears to preserve its coherence and glassy state structure. For higher values of $A_0$, it is apparent (see, e.g., the middle and right panels of Fig.~\ref{fig:T10spectra}) that there are stronger energy exchange mechanisms at work, both ones mediated via the substantial additional compressible (sound) energy and ones realized via vortex-antivortex creation (hence an increase of the incompressible energy) or annihilation (hence a conversion of incompressible energy into sound waves). In this case of larger $A_0$, these processes result in the loss of memory and effective (nonlinearity-driven) decoherence of the dynamics. \section{Conclusions and Future Challenges} One of the motivations for this work has been to explore a classical analog of quantum effects such as decoherence via suitable nonlinear classical field-theoretic examples. The driven complex Ginzburg-Landau (CGL) equation in 2D offers an intriguing playground for the exploration of these effects, as it possesses certain instabilities and coherent topological structures such as vortices and domain walls, and features wide parametric regimes of metastable, long-lived states such as the vortex glasses utilized here~\cite{bohr,PhysRevE.65.016122}.
In the periodically driven version of this dissipative nonlinear system that we considered here, we have aimed to establish a phase diagram that offers a glimpse into a classical (nonlinearity-induced) analogue of the notion of decoherence. In particular, the axis of variation of the period of the drive $T$ is an analog of the ``measurement rate'', while the amplitude axis $A_0$ serves as an analog of the ``mixing rate''. As we have shown, we indeed numerically find a demarcation curve between the examined cases, with a parametric region representing decoherence and another preserving the system's statistical memory; the intermediate regime between these two features the most dynamical environment, where vortex creation-annihilation events most dramatically and persistently take place. We believe that this study poses a number of challenges that are worth pursuing in future work. Clearly, the form, in space and time, of the external field admits many potential variations beyond the spatially uniform, time-periodic one used here, analogous to the various measurement protocols employed in quantum technologies. A systematic comparison of quantum systems with classical nonlinear ones as regards quantum features such as entanglement and decoherence is called for, including both dissipative and non-dissipative (open and closed) cases. The Gross-Pitaevskii model and its many-body variants~\cite{lode1} could represent an ideal framework for exploring the analogies and differences between classical nonlinear and genuinely quantum systems \cite{PhysRevA.102.033301}. Moreover, the transverse-field Ising and similar models provide examples where coherence times can be equated with correlation lengths in equivalent higher-dimensional classical models \cite{Stinchcombe_1973}. Finally, in the present report, we have explored issues pertaining to driving in connection with coherence vs. decoherence and memory vs. memory loss, but have not examined aspects pertaining to entanglement. The latter may be an especially interesting topic (along with its similarities and differences to wave interactions) for future study. \section{Acknowledgements} This material is based upon work supported by the US National Science Foundation under Grants No. PHY-2110030 and DMS-1809074 (PGK). The work at Los Alamos National Laboratory was carried out under the auspices of the U.S. DOE and NNSA under Contract No. DEAC52-06NA25396. \bibliographystyle{apsrev4} \let\itshape\upshape \normalem
Classical analogs appear in glassy and disordered systems and more generally in appropriate classes of nonlinear equations ~\cite{PhysRevA.65.032321,PhysRevA.62.012307}, where analogs of coherence and decoherence, entangled states ~\cite{PhysRevA.65.032321,spreeuw1998classical}, chimeric patterns \cite{yao2013robustness}, frustration, and complexity \cite{guastello2008chaos,guastello2009introduction} are very rich. Another prominent example where such analogies have been intensely pursued over the past decade concerns the emerging field of the so-called pilot-wave hydrodynamics~\cite{bush1,bush2}. Our purpose here is to explore a nonlinear example of "measurement" and "memory" from this perspective. Many nonlinear equations can be used to explore these parallels, such as the nonlinear Schr\"{o}dinger (NLS) model~\cite{sulem,ablowitz,Kevrekidis2015}, the sine-Gordon equation \cite{cuevas2014sine}, coupled double-well potentials \cite{christoffel1981quantum}, etc. Indeed, nonlinearity often arises as a semi-classical approximation to quantum many-body systems (e.g., in BCS superconductivity \cite{tinkham2004introduction}, charge-density waves, Josephson junction equations \cite{pino2016nonergodic,PhysRevLett.122.054102}, or the Gross-Pitaevskii equation (GPE)~\cite{pethick,stringari}). Typically, nonlinear equations are the result of slaving among coupled linear fields, and their properties can be controlled by a variety of external drives. In fact, the NLS has been studied as a result of a quantum feedback process upon identical quantum systems subject to weak measurements~\cite{PhysRevA.62.012307}, while the GPE emerges as a result of a mean-field approximation of cold, dilute atomic quantum gases. These lines of analogy between quantum systems and nonlinear classical ones have led to the study of classical analogues of entanglement \cite{PhysRevA.65.032321,spreeuw1998classical} and decoherence \cite{gong2003quantum}, the latter including a nanomechanical resonator \cite{maillet2016classical}. We have chosen here to consider the two-dimensional (2D) CGL, as a prototypical example of such connections \cite{RevModPhys.74.99}. The CGL is interesting because it represents a {\it generic} amplitude model in the presence of slow variation and weak nonlinearity, corresponding to a normal form-type partial differential equation near a primary bifurcation of a homogeneous state~\cite{RevModPhys.74.99}. At the same time, it exhibits secondary excitations in the form of topological coherent structures (domain walls, solitons and vortices), contains explicit self-consistent dissipation, and has been extensively used to successfully model phenomena in a number of physical systems, extending from superconductivity and superfluidity to liquid crystals, Rayleigh-Benard convection and Bose-Einstein condensates~\cite{Bodenschatz_annurev,RevModPhys.74.99}. In addition, it provides an appropriate vehicle to explore our interests here as it possesses certain instabilities and coherent structures, as well as highly nontrivial metastable states, such as a vortex glass~\cite{bohr,PhysRevE.65.016122} that we will explore in what follows. As we will see, the topological excitations and their interactions are a key source of (space and time) multiscale patterns and transitions, including freezing and stretched exponential relaxation, which relate to our interest in entanglement and decoherence. 
Here we specifically consider the 2D CGL with a cubic nonlinearity and an external periodic driving coupled to the field amplitude. It is worthwhile to note here that periodically-forced variants of the CGL equation have been argued as offering a generic model of parametrically forced systems such as the Faraday wave experiment~\cite{alastair}. In this context of hydrodynamics, indeed, there exist well-documented examples of matching the CGL and its coefficients to concrete hydrodynamic experiments~\cite{LEGAL2003114}. By periodically driving the CGL equation, we induce an analogue of an external perturbation akin to a periodic measurement process on a quantum system. In this driven system we can anticipate a ``phase diagram" relevant to a classical analog of decoherence: one axis is akin to the ``measurement rate" and the other axis represents the effective ``mixing rate", induced by the drive. We might anticipate a demarcation curve, separating regions of maintaining and losing coherence, i.e., ones preserving the memory of the original state and ones losing it. Here, the measurement rate would correspond to a field pulse applied to the system periodically. This will mix the states of the system. After a pulse, if the system recovers, the scenario will be analogous to that of coherence in a quantum system. The mixing rate is proportional to the strength of the nonlinearity in the CGL equation. The stronger the nonlinearity, the higher the expected mixing rate. As perhaps the simplest case, we apply the driving field uniformly in space, at a periodic rate in time, and with periodic boundary conditions. We numerically follow a variety of diagnostics including ones quantifying the induced vorticity, the compressible and incompressible energy spectra and the cascades that they reveal and use the creation and annihilation of the vortices as a prototypical illustration of maintaining coherence, or potentially losing memory of the initial condition of the system. We identify both of these regimes and delineate the transition between them. We believe that this type of phenomenology may be broadly applicable in other distributed nonlinear dynamical systems and that this study may pave the way for further efforts to identify features of quantum systems that have non-trivial classical precursors. Indeed, this and related systems could offer interesting additional paradigms to the rapidly developing area of pilot-wave hydrodynamics~\cite{bush1,bush2}. Our presentation will be structured as follows. In Sec. II we present the basic features of the model. Section III encompasses the quench dynamics protocol and associated observables. In section IV, we comment on our numerical findings, while section V concludes the report offering some perspectives for future work. % \section{The Model: 2D-CGL equation.}\label{model} We consider the two-dimensional (2D) dimensionless CGL equation in the prototypical form~\cite{RevModPhys.74.99} \begin{equation} \label{eq:cg1} \begin{split} \frac{\partial}{\partial t} A &= A+ (1+i \beta) \Delta A -(1+i \alpha) |A|^2 A\\& +A_0 \delta (t- [T_0+l T]),~~l=0,1,2,... \end{split} \end{equation} Here, $A$ is a complex field and $\alpha$ and $\beta$ are real parameters. $A_0$ represents the strength of the external field (associated with the mixing rate, as discussed above), while $T$ represents the period of the quenching (i.e., the measurement rate) and $T_0$ is the initial offset time which allows the system to relax before the quenching starts. 
For $\alpha = \beta =0$, this equation becomes a regular CGL~\cite{RevModPhys.74.99}. On the other hand in the limit $\alpha, \beta \rightarrow \infty$, this equation reduces to the nonlinear Schr\"{o}dinger equation. This 2D model admits a variety of coherent structures, including vortices and domain walls \cite{RevModPhys.74.99,bohr,PhysRevE.65.016122,JANIAUD1992269}. The symmetry breaking instabilities of the model are well studied: these include the Benjamin-Feir, Eckhaus, convective, phase instability and absolute 2D instabilities~\cite{PhysRevA.46.R2992}. A general overview of the model 2D findings can be found in \cite{RevModPhys.74.99}. The phase diagram associated with the system suggests that there exist three pattern formation regimes, namely defect turbulence, phase turbulence and vortex glass, depending on the parameter values of the model \cite{RevModPhys.74.99,PhysRevE.65.016122}. The peculiarity of the vortex glass state is that in this regime the vortices are arranged in cellular structures thought of as frozen states, i.e., very long lived metastable states~\cite{PhysRevE.65.016122}. It is the latter frozen type of extremely long-lived attractor of the system that we find particularly appealing for our explorations of (nonlinearity induced) coherence vs. decoherence and would like to further explore. In what follows, we will use quench dynamics into this vortex glass regime and will subsequently perturb the relevant state using the external drive. \begin{figure*}[!htbp] \includegraphics[width = \textwidth]{fig1.pdf} \caption{Snapshots of the absolute value of the field ($|A|$) for $\alpha=0.7$ and $\beta=-0.7$ at different times. We consider the state at $t=1200$ as the initial condition for the rest of the numerical studies reported in this work. Movies are available at the link \url{https://www.dropbox.com/sh/65ze3j94l3bcw4v/AADF_UjJeM-EAIgUlno_noiia?dl=0}} \label{fig:fig1} \end{figure*} \subsection{Quench Dynamics} An absolute instability induces defect pair formation in the model as demonstrated, e.g., in \cite{PhysRevA.46.R2992}. This defect pair formation mechanism can be systematically controlled via the nonlinear model parameters $\alpha$ and $\beta$~\cite{bohr}. In this work we fix $\alpha=0.7$ and $\beta=-0.7$. For this set of parameter values, a random initial condition leads to the formation of a vortex glass state as shown in Fig.~\ref{fig:fig1}. For our numerical experiments, we consider a 2D plane of size $L_x \times L_y$, ($L_x =L_y=80$) with $512 \times 512$ grid points. We solve Eq.~\eqref{eq:cg1} numerically by using an explicit Runge-Kutta algorithm of order 8, namely DOP853~\cite{freelyDOP853,mine-01-03-447}. We initialized the simulation with $A=0.001*R$, where $R$ represents random numbers drawn from the uniform distribution $[-0.5,0.5]$ as shown in Fig.~\ref{fig:fig1}(a). Additionally, periodic boundary conditions $A_{i,0} = A_{i,M}$, $A_{i,M+1} = A_{i,1}$, for $1 \le i \le L$ and $A_{0,j} = A_{L,j}$, $A_{L+1,j} = A_{1,j}$, for $1 \le j \le M$ are considered. The snapshots of field amplitude at different times in Fig.~\ref{fig:fig1}(b-d) show the vortex defects that are surrounded by shock lines \cite{chate1996phase}. They form a long-lived metastable cell structure together. The vortex defects are known to produce spiral waves \cite{aranson1992stability}. In this work we are interested in performing amplitude quenches to excite the system and observe its potential return to equilibrium (or absence thereof). 
We consider the field shown in Fig.~\ref{fig:fig1}(d) at $t= 1200$, as corresponding to $T_0$ in Eq.~(\ref{eq:cg1}), i.e., as the initial condition for the following numerical experiments. This state is a relaxed state in the sense that no more vortex-antivortex annihilations take place upon further evolution until at least $T=2000$. On the other hand, this is a metastable state in which (slow) vortex motion still persists. Hence, such a state is often referred to as a glassy state~\cite{PhysRevE.65.016122}. We now periodically drive the (glassy state of the) system for a fixed amplitude $A_0$ (real variable) and the period $T$. We fix the time periods $T=10$ and $T=15$ for our numerical experiments. These time periods are relatively short as compared to the time required for the system to relax. As a result, a periodic drive takes the system out of equilibrium by changing its total energy and we then observe the subsequent response of the system to both a single excitation, but also importantly to a periodic sequence of such excitations. \subsection{Observables} We consider the normalized distribution $P_{i}=P(n_i)$, where $P(n_i)$ is the probability distribution of amplitudes (PDA) $n_i=|A_i|$. For the complex CGL system, we first define the observable Shannon entropy \begin{equation} S_v =-\sum_{i}\Big[P_{i}\log(P_{i})\Big]. \end{equation} as a standard information-theoretic diagnostic of the system. Moreover, since the topoplogical excitations in 2D of the CGL model are vortices, we additionally consider the following observables for the measurements. We measure the absolute value of the winding number \cite{Akhavan:2020} \begin{equation} \Gamma =\frac{1}{2\pi}\int |\omega|dx dy , \end{equation} where $\omega$ is the vorticity. The vorticity is related to the velocity field $\vect{v}$ as $\bf{\omega}=\nabla \times \bf{v}$. This diagnostic counts the total number of vortices in the system. The change in this number will be used to assess the potential loss of memory through the creation of additional excitations and the departure from the previously reached glassy state, upon introducing the relevant perturbation. Further, in order to follow the energy distribution in the system we separate the field into compressible and incompressible parts. The term $| \nabla A|^2 $ [i.e., the energetic contribution associated with the Laplacian in Eq.~\eqref{eq:cg1}] can be written as \begin{eqnarray}\label{eq:vort_sp1aa} |\nabla A|^2= \left( \rho |\bm{v}|^2+ \left| \nabla \sqrt{\rho} \right|^2 \right), \end{eqnarray} where the transformation $A=\sqrt{\rho} e^{i \phi}$ yields $\rho=|A|^2$ and the velocity $\bm{v}=\nabla \phi$. Here, following, e.g., the exposition of~\cite{bradley2012energy}, the first and second terms represent the density of the kinetic energy ($E_\text{ke}$) and the quantum pressure ($E^{q}$), respectively, where the energies are given by \begin{eqnarray}\label{eq:vort_sp1bb} E_\text{ke}=\int n|\bm{v}|^2 d^2 r,~~~E^q=\int |\nabla \sqrt{\rho}|^2 d^2 r. \end{eqnarray} The velocity vector $\bm{v}$ now can be written as a sum over a solenoidal part (incompressible) $\bm{v}^\text{ic} $ and an irrotational (compressible) part $\bm{v}^\text{c}$ as \begin{eqnarray}\label{eq:vort_sp1} \bm{v}=\bm{v}^\text{ic}+\bm{v}^\text{c}, \end{eqnarray} such that $\nabla \cdot{\bm{v}^\text{ic}} = 0$ and $\nabla \times{\bm{v}^\text{c}} = 0$. 
The incompressible and compressible kinetic energies are then \cite{Mithun2021decay} \begin{eqnarray}\label{eq:vort_sp2} E^\text{ic,c} = \int d^2 r |\sqrt{n} \bm{v}^\text{ic,c}(\bm{r})|^2. \end{eqnarray} We additionally find the vortex spectra of the system \cite{Eyink_2006,RevModPhys.71.S383,kraichnan1980two,Numasato_2010,Mithun2021decay}. In $k$-space ($k$-wave vector), the total incompressible and compressible kinetic energy $E_{ke}^\text{ic,c}$ is represented by \begin{eqnarray}\label{eq:vort_sp3} E^\text{ic,c}(k)= k\sum_{j=x,y}\int_0^{2\pi} d\phi_k |\mathcal{F}_j(\bm{k})^\text{ic,c}|^2, \end{eqnarray} where $\mathcal{F}_j(\bm{k})$ is the Fourier transform of $\sqrt{n} u_j$ of the $j$-th component of $\bm{u}= (u_x, u_y)$ and ($k=\sqrt{k_x^2+k_y^2}$, $\phi_k$) represents polar coordinates. Through the above Fourier space diagnostics, we can assess the transfer of energy between different wavenumbers, as well as the fractions of energy that pertain primarily to the vortical excitations (associated with the incompressible part) and ones associated with sound waves (the compressible part), analogously to earlier works such as~\cite{bradley2012energy,Mithun2021decay}. \begin{figure*}[!hbtp] \includegraphics[width = \textwidth]{densitym.pdf} \caption{ Snapshots of the field amplitude $|A|$ for drive strengths (a-e) $A_0=2.0$, (f-j) $A_0=3.6$, and (k-o) $A_0=4.8$. The other parameters are: repetition time interval $T=10$, and initial perturbation time $T_0=1200$, for CGL model parameters $\alpha=0.7$ and $\beta=-0.7$.} \label{fig:fig2} \end{figure*} \begin{figure}[!htbp] \includegraphics[width = 0.48\textwidth]{entropy.pdf} \caption{Shannon entropy $S_v$ for $T=10$ for different values of drive amplitude $A_0$. } \label{fig:figS} \end{figure} \begin{figure*}[!htbp] \includegraphics[width = 8.cm]{L80_T10} \includegraphics[width = 8.cm]{L80_T15} \caption{Winding number $\Gamma$ for (a) $T=10$ and (b) $T=15$. The axes here are $A_{0}$ (the drive amplitude) and time $t$ (measured from the time of first kick).} \label{fig:pbc_wt} \end{figure*} % \section{Numerical experiments } We begin the numerical experiments by considering different values of the driving amplitude $A_0$ for a fixed drive periodicity $T$. Fig.~\ref{fig:fig2} shows the field amplitude for $A_0=2,~3.6~\text{and}~4.8$ and the relevant ``kicks'' repeated every $T=10$ time units. The figure indicates the existence of three different regimes; (i) the system is nearly unaffected by the periodic driving: this is the regime where coherence is fully preserved and the relevant perturbation is weak. (ii) Nucleation of new vortex pairs and their subsequent annihilations thereof: this is the intermediate regime where effective decoherence first emerges, substantially modifying the ultimate fate of the system, although the latter is still in the state bearing vortices with domain walls separating them. (iii) Finally, a constant density regime emerges: here, the perturbation is strong and consequently it entirely collapses the system to a different state with no memory of the initial condition. In regime (i), despite the fact that no new vortices are generated, the imprint of the ``measurement'' (or more accurately here, perturbation) process arises through the rapid emission of spiral waves which can be clearly observed in Fig.~\ref{fig:fig2}(a-e). Regime (ii) can be seen in panels (f-l), while regime (iii) is shown in panels (m-o). We first characterize these observations using the defined observables. 
Fig.~\ref{fig:figS} shows the Shannon entropy based on the probability distribution of amplitude $|A|$ for different values of $A_0$. The entropy initially increases with increase in $A_0$ due to the high density fluctuations. At larger times, the entropy nearly saturates in regimes (i) and (ii), while it decreases in regime (iii). In regime (i), the saturation is rather natural to expect, as the system rapidly relaxes to the previous glassy state. Similarly in regime (iii), the state evolves towards a constant amplitude (of unity), hence naturally the attraction to this state leads the entropy to a rapid decrease towards $0$. The most dynamic state is the intermediate one of regime (ii), where the relaxation is slow and therefore the entropy presents the oscillatory dynamics observed in Fig.~\ref{fig:figS}. The measurement based on Shannon entropy provides a cumulative diagnostic (rather than a distributed one) that clearly highlights the effect of periodic driving on the amplitude. At the same time, the entropy does not immediately provide information about the vortex configurations (aside perhaps from regime (iii) where the tendency of the entropy to go to $0$ can lead us to infer the absence of vorticity). In that vein, we now characterize the dynamics based on the absolute value of the winding number that provides us with a sense of the vortex creation and annihilation processes occurring in the system as a result of the drive. We consider two periods $T=10$ and $T=15$. The results of both cases shown in Fig.~\ref{fig:pbc_wt} illustrate that as $A_0$ increases, the number of initially generated vortices also increases in accordance with Fig.~\ref{fig:fig2}. The results of the $T=10$ case show that for lower values of $A_0=(2,2.4)$, the system does not respond to the kicks by changing its vortical content; once again, this illustrates the regime (i) of effective coherence preservation. For larger values of kick amplitude, on the other hand, the number of initially generated vortices increases (in the presence of the perturbations), yet at the same time, the rate of vortex-antivortex annihilations also progressively increases with $A_0$. On the other hand, for $T=15$ the rate of annihilation is decreased as compared to the $T=10$ case. This is due to the fact that the system has more time to relax in between the kicks as $T$ increases. For intermediate values of $A_0$, we therefore observe that the time evolution of the system preserves a dynamic effect where significant (additional) vorticity is present, as shown by the colorbar of Fig.~\ref{fig:pbc_wt} and the system has decohered from its original glassy state. For large values of $A_0$, eventually the vortex annihilation events take over and the system reaches the stable, locally attracting equilibrium state of $|A|=1$. \begin{figure*}[!htbp] \includegraphics[width = 8.cm]{L80_T10EIC.pdf} \includegraphics[width = 8.cm]{L80_T15EIC.pdf} \caption{ Incompressible energy as a function of time for (a) $T=10$ and (b) $T=15$. Colors represent different values of the drive strength $A_{0}$. } \label{fig:EIC} \end{figure*} \begin{figure*}[!htbp] \includegraphics[width = 8.cm]{L80_T10EC.pdf} \includegraphics[width = 8.cm]{L80_T15EC.pdf} \caption{Compressible energy as a function of time for (a) $T=10$ and (b) $T=15$. 
Colors represent different values of the drive strength $A_{0}$.} \label{fig:EC} \end{figure*} The measured incompressible and compressible energies for the cases of $T=10$ and $T=15$ are shown in Fig.~\ref{fig:EIC} and Fig.~\ref{fig:EC}, respectively. As expected, for $T=10$, the incompressible energy is nearly a constant for $A_0=(2,2.4)$ [see Fig.~\ref{fig:EIC}(a)], representing the invariance of the vortex configuration. For the range $A_0=2.6-2.8$ the incompressible energy initially decreases and then saturates at later times corresponding to the one vortex pair reduction shown in Fig.~\ref{fig:pbc_wt}(a). The increase in the rate of annihilation is well reflected in Fig.~\ref{fig:EIC}(a) for higher $A_0$, where incompressible energy decreases. Similar results are seen in Fig.~\ref{fig:EIC}(b) for $T=15$, although the larger relaxation time induces the associated phenomenology at larger values of $A_0$. For higher values of $A_0$, it is clear that the incompressible energy is fluctuating in a way that parallels the winding number and reflects the potential emergence (for intermediate values of $A_0$) and disappearance (for higher values of $A_0$) of vortex-antivortex pairs through the relevant creation and annihilation processes. \begin{figure*}[!htbp] \includegraphics[width = 5.8cm]{C_L80_T10_A0_2.pdf} \includegraphics[width = 5.8cm]{C_L80_T10_A0_3_6.pdf} \includegraphics[width = 5.8cm]{C_L80_T10_A0_4_8.pdf} \includegraphics[width = 5.8cm]{IC_L80_T10_A0_2.pdf} \includegraphics[width = 5.8cm]{IC_L80_T10_A0_3_6.pdf} \includegraphics[width = 5.8cm]{IC_L80_T10_A0_4_8.pdf} \caption{(Top panel) Compressible energy spectra for (a) $A_0=2$, (b) $A_0=3.6$ and (a) $A_0=4.8$. (Bottom panel) The corresponding incompressible energy spectra for (a) $A_0=2$, (b) $A_0=3.6$ and (a) $A_0=4.8$. The vertical lines (from left to right) represent $k=1/\xi$ and $k=2 \pi/\xi$, where $\xi$ is the so-called healing length associated with the size of individual vortices. The other parameters are $T=10$ and $\alpha=0.7$ and $\beta=-0.7$.} \label{fig:T10spectra} \end{figure*} We next analyze both the compressible $E^\text{c}(k)$ and incompressible $E^\text{ic}(k)$ energy spectra, of the system collected at different evolution times. This is typically done in order to appreciate the energy exchanges in a system and the possible connection of the associated mechanisms with potential turbulent or coherent-structure (such as vortex) induced dynamics~\cite{bradley2012energy}. The energy distribution from smaller scale (corresponding to the vortex core size) to larger scale (system size) can be inferred from such spectra. The upper and lower panels in Fig.~\ref{fig:T10spectra} show the compressible and incompressible energy spectra for three different values of $A_0~(=2.0,3.6,4.8)$ for fixed $T=10$. The compressible vortex spectra shown in the upper panel of Fig.~\ref{fig:T10spectra} do not exhibit any convincing scaling laws. However, at the same time we note that for $1/\xi < k < 2\pi/\xi$, where $\xi$ represents the numerically measured healing length (vortex core radius), spectra are closer to the reference line drawn for $k^{-7/2}$. This power-law is a characteristic of a superfluid turbulence corresponding to the sound wave equilibrium, as discussed, e.g., in~\cite{Mithun2021decay,PhysRevA.86.053621}. The decrease in compressible energy with increase in time for higher $A_0$ is due to the loss of small amplitude fluctuations as shown in Fig.~\ref{fig:fig2}. 
In this section we demonstrated the effect of a periodic drive on the defect dynamics of the CGL model. Our results suggest that there are cascading processes even in the weaker amplitude cases, where the configuration at the level of its density profile appears to preserve its coherence and glassy state structure. For higher values of $A_0$, it is apparent (see, e.g., the middle and right panels of Fig.~\ref{fig:T10spectra}) that there are stronger energy exchange mechanisms at work, both ones mediated via the substantial additional compressible (sound) energy and ones realized via vortex-antivortex creation (hence an increase of the incompressible energy) or annihilation (hence a conversion of the incompressible energy into sound waves). In this case of larger $A_0$, these processes result in the loss of memory and the effective (nonlinearity-driven) decoherence of the dynamics. \section{Conclusions and Future Challenges} One of the motivations for this work has been to explore a classical analog of quantum effects such as decoherence via suitable nonlinear classical field theoretic examples. The driven complex Ginzburg-Landau (CGL) equation in 2D offers an intriguing playground for the exploration of these effects, as it possesses certain instabilities and coherent topological structures such as vortices and domain walls, and features wide parametric regimes of metastable, long-lived states such as the vortex glasses utilized here~\cite{bohr,PhysRevE.65.016122}. In the periodically driven version of this dissipative nonlinear system that we considered here, we have aimed to establish a phase diagram that offers a glimpse into a classical (nonlinearity-induced) analogue of the notion of decoherence. In particular, the axis of variation of the period of the drive $T$ is an analog of the ``measurement rate'', while the amplitude axis $A_0$ serves as an analog of the ``mixing rate''. As we have shown, we indeed numerically find a demarcation curve between the examined cases, with a parametric region representing decoherence and another preserving the system's statistical memory; the intermediate regime between these two features the most dynamical environment, where vortex creation-annihilation events most dramatically and persistently take place. We believe that this study poses a number of challenges that are worth pursuing in future work.
Clearly, the form, in space and time, of the external field admits many potential variations beyond the spatially uniform, time-periodic one used here, analogous to the various measurement protocols employed in quantum technologies. A systematic comparison of quantum systems with classical nonlinear ones as regards quantum features such as entanglement and decoherence is called for, including both dissipative and non-dissipative (open and closed) cases. The Gross-Pitaevskii model and its many-body variants~\cite{lode1} could represent an ideal framework for exploring the analogies and differences between classical nonlinear and genuinely quantum systems \cite{PhysRevA.102.033301}. Moreover, the transverse-field Ising and similar models provide examples where coherence times can be equated with correlation lengths in equivalent higher-dimensional classical models \cite{Stinchcombe_1973}. Finally, in the present report, we have explored issues pertaining to driving in connection to coherence vs.\ decoherence and memory vs.\ memory loss, but have not examined aspects pertaining to entanglement. The latter may be an especially interesting topic (along with its similarities and differences to wave interactions) for future study. \section{Acknowledgements} This material is based upon work supported by the US National Science Foundation under Grants No. PHY-2110030 and DMS-1809074 (PGK). The work at Los Alamos National Laboratory was carried out under the auspices of the U.S. DOE and NNSA under Contract No. DEAC52-06NA25396. \bibliographystyle{apsrev4} \let\itshape\upshape \normalem
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction\label{S1}} Nonlocal models have shown a great level of capability in the study of phenomena in many branches of science. They have been one of the main alternatives for reformulating different types of problems in Applied Mathematics. The usage of these models has been notable in fields like kinetic equations, phase transitions, diffusion models and other themes of continuum mechanics \cite{Sh, Metzler, Carreras, Metzler2, Gardiner, Bakunin, Neuman, Bucur, Vazquez}. There are several ways to introduce nonlocality when we try to model some classical problems. Among other works we must highlight \cite{Rossi2, Vazquez2, Gunzburger-Du-L-1, Du, kava} in the nonlocal framework and \cite{caputo, Miller, rubin, Debnath, hilfer, Oustaloup, kibas, kulish} from the point of view of fractional analysis. In a general context, the main idea to build a nonlocal model basically relies on considering derivatives of nonlocal type, or fractional derivatives, instead of the classical ones. This new way of measuring variability, somehow, allows one to introduce and modulate long-range interactions. In our specific context of optimal control problems governed by partial differential equations, instead of considering differential equations we shall present a nonlocal model built by means of integral equations. These integrals are, roughly speaking, convolutions of the states with a certain family of kernels. This family is parametrized by a number, called the \textit{horizon}, which is responsible for the degree of the nonlocal interaction. The proposed optimization problem is driven by the nonlocal $p$-laplacian as state equation, and Dirichlet boundary conditions are imposed. The control is the right-hand side forcing function, the source, and the cost to minimize belongs to a fairly general class of integral functionals. The purpose of the present article is the analysis of this type of nonlocal optimal control problem: the existence of solutions and their asymptotic behavior when the nonlocality, the \textit{horizon}, tends to zero. Since in the limit we recover the formulation of certain classical control problems, some meaningful conclusions about the approximation or existence of classical solutions are obtained as well. Consequently, two different problems will be addressed in the article: the nonlocal model and its classical or local counterpart. To go into the details, we first specify the ambient space we work on, and then we formulate these two optimal control problems. \subsection{Hypotheses} Specifically, the framework in which we shall work can be described as follows. The domain $\Omega\subset\mathbb{R}^{N}$ is a bounded open set. We define its extension $\Omega_{\delta}=\Omega\cup\bigcup_{p\in\partial\Omega}B\left( p,\delta\right) $, where $B\left( x,r\right) $ denotes the open ball centered at $x\in\mathbb{R}^{N}$ with radius $r>0$, and $\delta$ is a positive number.
Regarding the right-hand side of the elliptic equations, the function $f$, called the source, we assume $f\in L^{p^{\prime}}\left( \Omega\right) $, where $p^{\prime}=\frac{p}{p-1}$ and $p>1.$ Concerning the kernels $\left( k_{\delta}\right) _{\delta>0},$ we assume that they form a sequence of nonnegative radial functions such that for any $\delta$,
\begin{equation}
\operatorname*{supp}k_{\delta}\subset B\left( 0,\delta\right) \label{1}
\end{equation}
and
\[
\frac{1}{C_{N}}\int_{B\left( 0,\delta\right) }k_{\delta}\left( \left\vert z\right\vert \right) dz=1,
\]
where $C_{N}=\frac{1}{\operatorname*{meas}\left( S^{N-1}\right) }\int_{S^{N-1}}\left\vert \omega\cdot e\right\vert ^{p}d\sigma^{N-1}\left( \omega\right) $, $\sigma^{N-1}$ stands for the $\left( N-1\right) $-dimensional Hausdorff measure on the unit sphere $S^{N-1}$, and $e$ is any unitary vector in $\mathbb{R}^{N}.$ In addition, the kernels satisfy the uniform estimate
\begin{equation}
k_{\delta}\left( \left\vert z\right\vert \right) \geq\frac{c_{0}}{\left\vert z\right\vert ^{N+\left( s-1\right) p}},\label{2}
\end{equation}
where $c_{0}>0$ and $s\in\left( 0,1\right) $ are given constants such that $N>ps.$
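For concreteness, one family fulfilling these requirements (a standard illustration on our part, not a choice made in the references) is the truncated Riesz-type kernel
\[
k_{\delta}\left( \left\vert z\right\vert \right) =\frac{C_{N}\left( 1-s\right) p}{\operatorname*{meas}\left( S^{N-1}\right) \delta^{\left( 1-s\right) p}}\cdot\frac{1}{\left\vert z\right\vert ^{N+\left( s-1\right) p}}\chi_{B\left( 0,\delta\right) }\left( z\right) .
\]
Indeed, a direct computation in polar coordinates gives $\frac{1}{C_{N}}\int_{B\left( 0,\delta\right) }k_{\delta}\left( \left\vert z\right\vert \right) dz=1$, and on its support the lower bound (\ref{2}) holds with $c_{0}=C_{N}\left( 1-s\right) p/\operatorname*{meas}\left( S^{N-1}\right) $ whenever $\delta\leq1.$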
The natural framework in which we shall work is the nonlocal energy space
\begin{equation}
X=\left\{ u\in L^{p}\left( \Omega_{\delta}\right) :B\left( u,u\right) <\infty\right\} ,\label{3}
\end{equation}
where $B$ is the operator defined in $X\times X$ by means of the formula
\begin{equation}
B\left( u,v\right) =\int_{\Omega_{\delta}}\int_{\Omega_{\delta}}k_{\delta}\left( \left\vert x^{\prime}-x\right\vert \right) \frac{\left\vert u\left( x^{\prime}\right) -u\left( x\right) \right\vert ^{p-2}\left( u\left( x^{\prime}\right) -u\left( x\right) \right) }{\left\vert x^{\prime}-x\right\vert ^{p}}\left( v\left( x^{\prime}\right) -v\left( x\right) \right) dx^{\prime}dx.\label{4}
\end{equation}
We also define the constrained energy space as
\[
X_{0}=\left\{ u\in X:u=0\text{ in }\Omega_{\delta}\setminus\Omega\right\} .
\]
It is well-known that for any given $\delta>0$ the space $X=X\left( \delta\right) $ is a Banach space with the norm
\[
\left\Vert u\right\Vert _{X}=\left\Vert u\right\Vert _{L^{p}\left( \Omega_{\delta}\right) }+\left( B\left( u,u\right) \right) ^{1/p}.
\]
The dual of $X$ will be denoted by $X^{\prime}$ and can be endowed with the norm defined by
\[
\left\Vert g\right\Vert _{X^{\prime}}=\sup\left\{ \left\langle g,w\right\rangle _{X^{\prime}\times X}:w\in X,\text{ }\left\Vert w\right\Vert _{X}=1\right\} .
\]
Analogous definitions apply to the space $X_{0}=X_{0}\left( \delta\right) .$ There is another functional space that we will use in the formulation of our problem and that can serve as a set of controls. It is the space of diffusion coefficients, that is,
\[
\mathcal{H}\doteq\left\{ h:\Omega_{\delta}\rightarrow\mathbb{R}\mid h\left( x\right) \in\lbrack h_{\min},h_{\max}]\text{ a.e. }x\in\Omega,\text{ }h=0\text{ in }\Omega_{\delta}\setminus\Omega\right\} ,
\]
where $h_{\min}$ and $h_{\max}$ are positive constants such that $0<h_{\min}<h_{\max}.$ \subsection{Formulation of the problems} \subsubsection{Nonlocal optimal control in the source} The nonlocal optimal control in the source is an optimal control problem denoted by $\left( \mathcal{P}^{\delta}\right) $ whose formulation is as follows: the problem, for each $\delta>0$ fixed, consists in finding $g\in L^{p^{\prime}}\left( \Omega\right) $ that minimizes the functional
\begin{equation}
I_{\delta}\left( g,u\right) =\int_{\Omega}F\left( x,u\left( x\right) ,g\left( x\right) \right) dx,\label{a}
\end{equation}
where $u\in X$ is the solution of the nonlocal boundary problem $\left( P^{\delta}\right) $
\begin{equation}
\left\{
\begin{tabular}
[c]{l}
$\displaystyle B_{h}\left( u,w\right) =\int_{\Omega}g\left( x\right) w\left( x\right) dx,\text{ for any }w\in X_{0},\smallskip$\\
$\displaystyle u=u_{0}\text{ in }\Omega_{\delta}\setminus\Omega,$
\end{tabular}
\right. \label{b}
\end{equation}
where
\begin{equation}
B_{h}\left( u,w\right) =\int_{\Omega_{\delta}}\int_{\Omega_{\delta}}H\left( x^{\prime},x\right) k_{\delta}\left( \left\vert x^{\prime}-x\right\vert \right) \frac{\left\vert u\left( x^{\prime}\right) -u\left( x\right) \right\vert ^{p-2}\left( u\left( x^{\prime}\right) -u\left( x\right) \right) }{\left\vert x^{\prime}-x\right\vert ^{p}}\left( w\left( x^{\prime}\right) -w\left( x\right) \right) dx^{\prime}dx,\label{bb}
\end{equation}
$H\left( x^{\prime},x\right) \doteq\frac{h\left( x^{\prime}\right) +h\left( x\right) }{2}$ and $u_{0}$ is a given function. The above nonlocal boundary condition, $u=u_{0}$ in $\Omega_{\delta}\setminus\Omega,$ must be interpreted in the sense of traces. Indeed, in order to make sense it is necessary that $u_{0}$ belongs to the space $\widetilde{X}_{0}=\left\{ \left. w\right\vert _{\Omega_{\delta}\setminus\Omega}:w\in X\right\} $. This space is well defined independently of the parameter $s\in\left( 0,1\right) $ we choose in (\ref{2}). It is easy to check that a norm for this space is the one defined as
\[
\left\Vert v\right\Vert _{\widetilde{X}_{0}}=\inf\left\{ \left\Vert w\right\Vert _{L^{p}\left( \Omega_{\delta}\right) }+\left( B\left( w,w\right) \right) ^{1/p}:w\in X\text{ such that }\left. w\right\vert _{\Omega_{\delta}\setminus\Omega}=v\right\} .
\]
The integrand $F$ that appears in the cost we want to optimize has the form
\begin{equation}
F\left( x,u\left( x\right) ,g\left( x\right) \right) =G\left( x,u\left( x\right) \right) +\beta\left\vert g\left( x\right) \right\vert ^{p^{\prime}}+\gamma B_{h_{0}}\left( u,u\right) ,\label{integrand_source}
\end{equation}
where $\beta$ and $\gamma$ are given positive constants, $h_{0}\in\mathcal{H}$ is also given, and $G:\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ is assumed to be a measurable positive function such that $G(x,\cdot)$ is uniformly Lipschitz continuous, that is, there exists a positive constant $L$ such that for any $x\in\Omega$ and any $\left( u,v\right) \in\mathbb{R}^{2}$ we have $\left\vert G\left( x,u\right) -G\left( x,v\right) \right\vert \leq L\left\vert u-v\right\vert .$ We formulate the nonlocal optimal control problem as
\begin{equation}
\min_{\left( g,u\right) \in\mathcal{A}^{\delta}}I_{\delta}\left( g,u\right) \label{NLCP}
\end{equation}
where
\[
\mathcal{A}^{\delta}\doteq\left\{ \left( f,v\right) \in L^{p^{\prime}}\left( \Omega\right) \times X:v\text{ solves (\ref{b}) with }g=f\right\} .
\]
Recall that, under the above circumstances, the nonlocal participating states in (\ref{b}) can be viewed as elements of the convex space
\[
u_{0}+X_{0}\doteq\left\{ v\in X:v=u_{0}+w\text{ where }w\in X_{0}\right\} .
\]
\subsubsection{The local optimal control in the source} The corresponding local optimal control is a problem denoted by $\left( \mathcal{P}^{loc}\right) $ whose goal is to find $g\in L^{p^{\prime}}\left( \Omega\right) $ that minimizes the functional
\begin{equation}
I\left( g,u\right) =\int_{\Omega}\left( G\left( x,u\left( x\right) \right) +\beta\left\vert g\left( x\right) \right\vert ^{p^{\prime}}\right) dx+\gamma b_{h_{0}}\left( u,u\right) ,\label{c}
\end{equation}
where $u\in W^{1,p}\left( \Omega\right) $ is the solution of the local boundary problem $\left( P^{loc}\right) $
\begin{equation}
\left\{
\begin{tabular}
[c]{l}
$\displaystyle b_{h}\left( u,w\right) =\int_{\Omega}g\left( x\right) w\left( x\right) dx,\text{ for any }w\in W_{0}^{1,p}\left( \Omega\right) ,\smallskip$\\
$\displaystyle u=u_{0}\text{ on }\partial\Omega,$
\end{tabular}
\right. \label{d}
\end{equation}
where $b_{h}\left( \cdot,\cdot\right) $ is the operator defined in $W^{1,p}\left( \Omega\right) \times W^{1,p}\left( \Omega\right) $ by means of
\[
b_{h}\left( u,v\right) \doteq\int_{\Omega}h\left( x\right) \left\vert \nabla u\left( x\right) \right\vert ^{p-2}\nabla u\left( x\right) \nabla v\left( x\right) dx,
\]
$h_{0}\in\mathcal{H}$ and $u_{0}$ is a given function from the fractional trace Sobolev space $W^{1-1/p,p}\left( \partial\Omega\right) $. Throughout the article, when dealing with the local case, we shall assume $\Omega$ is a bounded smooth domain. The statement of the local optimal control problem is
\begin{equation}
\min_{\left( g,u\right) \in\mathcal{A}^{loc}}I\left( g,u\right) \label{e}
\end{equation}
where
\[
\mathcal{A}^{loc}\doteq\left\{ \left( f,v\right) \in L^{p^{\prime}}\left( \Omega\right) \times W^{1,p}\left( \Omega\right) :v\text{ solves (\ref{d}) with }g=f\right\} .
\]
As usual, if we identify $u_{0}$ with a function $V_{0}\in W^{1,p}\left( \Omega\right) $ whose trace is $u_{0}$ (and denote $V_{0}$ by $u_{0}$ as well), then the competing states are those that form the space
\[
u_{0}+W_{0}^{1,p}\left( \Omega\right) \doteq\left\{ v\in W^{1,p}\left( \Omega\right) :v=u_{0}+w\text{ where }w\in W_{0}^{1,p}\left( \Omega\right) \right\} .
\]
The analysis of this type of problems is a subject that has been extensively studied in previous works \cite{Gunzburger-Du-L-1, Ak, Hinds-Radu, D'Elia-Gunz2, D'Elia-Gunz2b}. As far as the author knows, the first work dealing with nonlocal optimal control problems is \cite{D'Elia-Gunz1}. A series of articles containing different types of controls have appeared in recent years. Some good samples are \cite{D'Elia-Gunz1, D'Elia-Gunz2, FernandezBonder, Warma}. About the analysis of $G$-convergence or $\Gamma$-convergence the reader can consult \cite{Ponce2, Zhou, D'Elia-Gunz1, Mengesha2, bellidob, Mengesha}. We can find some theoretical advances about the explicit computation of the limit problem; in this sense we must underline, among others, \cite{Mengesha2, Bonder, Bellido-Egrafov, waurick, Du}. Much more should be commented about the influence that this type of problems has received from an outstanding list of seminal papers whose main topic has been the analysis and characterization of Sobolev spaces. See for instance \cite{Bourgain-Brezis, Ponce2, Rossi2, Rossi3, Mazon, DiNezza-Palatucci-Valdinoci, Du}.
Concerning the numerical analysis of nonlocal problems, see \cite{DeliaNumer} and the references therein. We must say that, to a great extent, the work \cite{D'Elia-Gunz1} has served as inspiration for the present article. Nonetheless, we must emphasize that the techniques we use here, in some aspects, substantially differ from the ones employed there. One of the features of our development is the usage of a principle of minimum energy in order to characterize the $G$-convergence of the state equation (see \cite[Chapter 5, p. 162]{Jikov} for a detailed study in a concrete linear case). Recall that since we are dealing with the exponent $p>1,$ the linearity of the $p$-laplacian disappears and, consequently, the classical Lax-Milgram Theorem no longer applies. Besides, in \cite{D'Elia-Gunz1} this linearity and the specificity of the type of cost functionals, jointly with the necessary conditions of optimality, are the key points for the achievement of existence of optimal controls. By contrast, in our context, the proof of existence, both for the state equation and the optimal control problem, is obtained by means of the Direct Method and the result of $G$-convergence. Afterwards, we prove convergence of the nonlocal state equation and the nonlocal optimal control problem to the local ones. Even though these achievements could be significant, since the analysis can be applied to a rather general class of cost functionals, the results obtained for the particular case $p=2$ are not less attractive. The reason is that for such a case the nonlocal model can approximate classical problems including the squared gradient within the cost functional. Though we have not examined any numerical method for the approximation of solutions yet, some techniques derived from a maximum principle (see \cite{Cea-Mala} for the local case) could be explored in order to build a descent method for the case $p=2$ (see \cite{Andres}). \subsection{Results and organization} The purpose of this manuscript is twofold: there is a first part of the paper devoted to studying the existence of nonlocal optimal designs. This objective is achieved for a class of cost functionals whose format may include the nonlinear term of the nonlocal operator; see Theorem \ref{Well-posedness} in Section \ref{S3}. The proof of this theorem is basically based on a previous result of $G$-convergence (Theorem \ref{G-convergence} in Section \ref{S3}). The aim of the second part is the convergence of the nonlocal problem toward the local optimal design one. In a first stage we prove convergence of the state equation to the classical $p$-laplacian when the \textit{horizon} $\delta\rightarrow0$ (Theorem \ref{Th2} in Section \ref{S4}). Then, we face the study of convergence for the optimal design problem. The main result is Theorem \ref{Th3} in Section \ref{S5}. A case of particular interest, namely $p=2,$ is analyzed. The type of cost functional for which we study the convergence includes the nonlocal gradient, and consequently the local counterpart optimal problem we approximate contains the square of the gradient (Theorem \ref{Th4} in Section \ref{S5}). In order to facilitate the reading of the article, some specific preliminary results are previously explained in Section \ref{S2}. Some compactness results and basic inequalities are commented on, and the proof of existence of solution for the nonlocal state problem is analyzed (Theorem \ref{Th1}).
\section{Preliminary results and well-posedness of the state equation\label{S2}} \subsection{Preliminaries\label{S2_1}} Here we review some technical tools we are going to use. \begin{enumerate} \item \label{S2_1_1}The embedding
\[
X_{0}\subset L^{p}\left( \Omega\right)
\]
is compact. In order to check that, we first notice $X_{0}\subset W^{s,p}\left( \Omega_{\delta}\right) ,$ and since the elements of $X_{0}$ vanish in $\Omega_{\delta}\setminus\Omega,$ extension by zero outside $\Omega_{\delta}$ gives rise to elements of $W^{s,p}\left( \mathbb{R}^{N}\right) $ (see \cite[Lemma 5.1]{DiNezza-Palatucci-Valdinoci}). Then
\[
X_{0}\subset W_{0}^{s,p}\left( \Omega\right) =\left\{ f\in W^{s,p}\left( \mathbb{R}^{N}\right) :f=0\text{ in }\mathbb{R}^{N}\setminus\Omega\right\} .
\]
Besides, we are in a position to state the existence of a constant $c=c\left( N,s,p\right) $ such that for any $w\in X_{0}$
\begin{equation}
c\left\Vert w\right\Vert _{L^{p}\left( \Omega_{\delta}\right) }^{p}\leq\int_{\Omega_{\delta}}\int_{\Omega_{\delta}}\frac{\left\vert w\left( x^{\prime}\right) -w\left( x\right) \right\vert ^{p}}{\left\vert x^{\prime}-x\right\vert ^{N+sp}}dx^{\prime}dx\label{Pre0}
\end{equation}
(see \cite[Th. 6.5]{DiNezza-Palatucci-Valdinoci}). By paying attention to the hypotheses on the kernel (\ref{2}), and using (\ref{Pre0}), we conclude there is a positive constant $C$ such that the nonlocal Poincar\'{e} inequality
\begin{equation}
C\left\Vert w\right\Vert _{L^{p}\left( \Omega_{\delta}\right) }^{p}\leq B_{h}\left( w,w\right) \label{Prel3}
\end{equation}
holds for any $w\in X_{0}.$\newline We consider now a sequence $\left( w_{j}\right) _{j}\subset X_{0}$ uniformly bounded in $X_{0},$ that is, there is a constant $C$ such that for every $j$
\begin{equation}
B_{h}\left( w_{j},w_{j}\right) \leq C.\label{Prel1}
\end{equation}
By (\ref{Prel3}), $\left( w_{j}\right) _{j}$ is uniformly bounded in $L^{p}\left( \Omega_{\delta}\right) $, which, jointly with (\ref{2}), guarantees $\left( w_{j}\right) _{j}$ is uniformly bounded in $W_{0}^{s,p}\left( \Omega_{\delta}\right) $. We employ now the compact embedding $W_{0}^{s,p}\left( \Omega_{\delta}\right) \subset L^{p}\left( \Omega_{\delta}\right) $ (see \cite[Th. 7.1]{DiNezza-Palatucci-Valdinoci})
to ensure the existence of a subsequence of $\left( w_{j}\right) _{j},$ still denoted by $\left( w_{j}\right) _{j},$ such that $w_{j}\rightarrow w$ strongly in $L^{p}\left( \Omega_{\delta}\right) ,$ for some $w\in X_{0}.$ \item \label{S2_1_2}If we take a sequence $\left( w_{j}\right) _{j}$ from $u_{0}+X_{0}$ such that $B_{h}\left( w_{j},w_{j}\right) \leq C,$ then $w_{j}-u_{0}\in X_{0}.$ But, since
\[
B_{h}\left( w_{j}-u_{0},w_{j}-u_{0}\right) \leq c\left( B_{h}\left( w_{j},w_{j}\right) +B_{h}\left( u_{0},u_{0}\right) \right)
\]
for a certain constant $c,$ then
\[
B_{h}\left( w_{j}-u_{0},w_{j}-u_{0}\right) \leq C.
\]
If we apply now the nonlocal Poincar\'{e} inequality (\ref{Prel3}) we have
\begin{equation}
C\left\Vert w_{j}-u_{0}\right\Vert _{L^{p}}^{p}\leq B_{h}\left( w_{j}-u_{0},w_{j}-u_{0}\right) ,\label{Prel2}
\end{equation}
whereby the sequence $\left( w_{j}\right) _{j}$ is uniformly bounded in $L^{p}\left( \Omega_{\delta}\right) $ and therefore there exists a function $w\in L^{p}\left( \Omega\right) $ such that, for a subsequence of $\left( w_{j}\right) _{j},$ still denoted by $\left( w_{j}\right) _{j},$ $w_{j}\rightarrow w$ strongly in $L^{p}\left( \Omega_{\delta}\right) .$ Moreover, since $u_{0}+X_{0}$ is a closed set, $w\in u_{0}+X_{0}.$ \item \label{S2_1_3}Let $\left( g_{\delta},u_{\delta}\right) _{\delta}$ be a sequence of pairs such that the uniform estimate
\[
B_{h}\left( u_{\delta},u_{\delta}\right) \leq C
\]
is fulfilled (where $C$ is a positive constant). Then, from $\left( u_{\delta}\right) _{\delta}$ we can extract a subsequence, labelled also by $u_{\delta},$ such that $u_{\delta}\rightarrow u$ strongly in $L^{p}\left( \Omega\right) $ and $u\in W^{1,p}\left( \Omega\right) $ (see \cite[Th. 1.2]{Ponce1}). Furthermore, the following inequality is fulfilled:
\begin{equation}
\lim_{\delta\rightarrow0}B_{h}\left( u_{\delta},u_{\delta}\right) \geq\int_{\Omega}h\left( x\right) \left\vert \nabla u\left( x\right) \right\vert ^{p}dx\label{Prel4}
\end{equation}
(see \cite{Ponce1, Andres-Julio3, Munoz}). Besides, it is also well-known that if $u_{\delta}=u\in W^{1,p}\left( \Omega\right) ,$ then the above limit is
\begin{equation}
\lim_{\delta\rightarrow0}B_{h}\left( u,u\right) =\int_{\Omega}h\left( x\right) \left\vert \nabla u\left( x\right) \right\vert ^{p}dx\label{Prel5}
\end{equation}
(see \cite[Corollary 1]{Bourgain-Brezis} and \cite[Th. 8]{Andres-Julio2}). \end{enumerate}
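Before proceeding, it may help to see the localization property (\ref{Prel5}) at work in the simplest setting. The following sketch is our own illustration (not a method from the references): we take $N=1$, for which the normalization constant above reduces to $C_{1}=1$, and the uniform kernel $k_{\delta}=\frac{1}{2\delta}\chi_{\left( -\delta,\delta\right) }$, the function $u\left( x\right) =\sin\left( 2\pi x\right) $, the coefficient $h=\chi_{\Omega}$ and the grids are merely illustrative assumptions. A Riemann sum for $B_{h}\left( u,u\right) $ is then compared with $\int_{\Omega}h\left\vert u^{\prime}\right\vert ^{p}dx$ for decreasing $\delta$.
\begin{verbatim}
import numpy as np

p = 3.0
u  = lambda x: np.sin(2.0 * np.pi * x)
du = lambda x: 2.0 * np.pi * np.cos(2.0 * np.pi * x)

# local energy int_0^1 |u'|^p dx  (h = 1 on Omega = (0,1))
xs = np.linspace(0.0, 1.0, 4000)
local = np.sum(np.abs(du(xs)) ** p) * (xs[1] - xs[0])

for delta, n in [(0.2, 400), (0.1, 600), (0.05, 900), (0.025, 1200)]:
    x = np.linspace(-delta, 1.0 + delta, n)       # grid on Omega_delta
    dx = x[1] - x[0]
    h = ((x >= 0.0) & (x <= 1.0)).astype(float)   # h = 1 on Omega, 0 outside
    X, Xp = np.meshgrid(x, x, indexing="ij")
    H = 0.5 * (h[:, None] + h[None, :])           # H(x',x) = (h(x')+h(x))/2
    r = np.abs(Xp - X)
    k = np.where((r < delta) & (r > 0.0), 1.0 / (2.0 * delta), 0.0)
    quot = np.zeros_like(r)
    mask = r > 0.0
    quot[mask] = np.abs(u(Xp[mask]) - u(X[mask])) ** p / r[mask] ** p
    B = np.sum(H * k * quot) * dx * dx            # Riemann sum for B_h(u,u)
    print(f"delta={delta:5.3f}  B_h(u,u)={B:8.4f}  local={local:8.4f}")
\end{verbatim}
The printed values of $B_{h}\left( u,u\right) $ approach the local energy as $\delta$ decreases, up to a boundary-layer error of order $\delta$, in agreement with (\ref{Prel4})--(\ref{Prel5}).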
\subsection{The state equation} For the well-posedness of the nonlocal control problem $\left( \mathcal{P}^{\delta}\right) $ it is imperative to prove existence and uniqueness for the nonlocal boundary problem $\left( P^{\delta}\right) $. A remarkable fact that will be employed for this goal is the characterization of (\ref{b}) by means of a Dirichlet principle. For the proof, we just need to adapt (because we have to include the nonlocal boundary condition $u_{0}$) the lines given in \cite{Bonder}.\smallskip Throughout this section, $u_{0}\in\widetilde{X}_{0},$ $\delta>0$ and $g\in L^{p^{\prime}}\left( \Omega\right) $ are assumed to be fixed. We seek a solution to the problem (\ref{b}) and, as we have commented, the crucial point in this search is the inherent relation of the nonlocal boundary problem with the following minimization problem:
\begin{equation}
\min_{w\in u_{0}+X_{0}}\mathcal{J}\left( w\right) \label{NL_Dirichlet}
\end{equation}
where
\[
\mathcal{J}\left( w\right) \doteq\frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx.
\]
\begin{lemma} There exists a solution $u\in u_{0}+X_{0}$ to the minimization problem (\ref{NL_Dirichlet}). \end{lemma} \begin{proof} First of all, we check that $\mathcal{J}$ is bounded from below. Let $w$ be any function from $X$ such that $w-u_{0}\in X_{0}$. By using the nonlocal Poincar\'{e} inequality (\ref{Prel2}) there is a constant $c>0$ such that
\[
c\left\Vert w-u_{0}\right\Vert _{L^{p}}^{p}\leq B_{h}\left( w,w\right) +B_{h}\left( u_{0},u_{0}\right) ,
\]
whence we have
\begin{equation}
\left\Vert w\right\Vert _{L^{p}}-\left\Vert u_{0}\right\Vert _{L^{p}}\leq\left\Vert w-u_{0}\right\Vert _{L^{p}}\leq\left( \frac{B_{h}\left( w,w\right) +B_{h}\left( u_{0},u_{0}\right) }{c}\right) ^{1/p}.\label{basic_ineq}
\end{equation}
If we now apply H\"{o}lder's and Young's inequalities we get
\begin{align*}
\mathcal{J}\left( w\right) & \geq\frac{1}{p}B_{h}\left( w,w\right) -\left\Vert g\right\Vert _{L^{p^{\prime}}}\left\Vert w\right\Vert _{L^{p}}\\
& \geq\frac{1}{p}B_{h}\left( w,w\right) -\left\Vert g\right\Vert _{L^{p^{\prime}}}\frac{1}{c^{1/p}}\left( B_{h}\left( w,w\right) +B_{h}\left( u_{0},u_{0}\right) \right) ^{1/p}-\left\Vert g\right\Vert _{L^{p^{\prime}}}\left\Vert u_{0}\right\Vert _{L^{p}}\\
& \geq\frac{1}{p}B_{h}\left( w,w\right) -\frac{1}{p}\left( B_{h}\left( w,w\right) +B_{h}\left( u_{0},u_{0}\right) \right) -\frac{1}{p^{\prime}}\left( \frac{\left\Vert g\right\Vert _{L^{p^{\prime}}}}{c^{1/p}}\right) ^{p^{\prime}}-\left\Vert g\right\Vert _{L^{p^{\prime}}}\left\Vert u_{0}\right\Vert _{L^{p}}\\
& =-\frac{1}{p}B_{h}\left( u_{0},u_{0}\right) -\frac{1}{p^{\prime}}\left( \frac{\left\Vert g\right\Vert _{L^{p^{\prime}}}}{c^{1/p}}\right) ^{p^{\prime}}-\left\Vert g\right\Vert _{L^{p^{\prime}}}\left\Vert u_{0}\right\Vert _{L^{p}}.
\end{align*}
To prove the existence of a solution we take a minimizing sequence $\left( u_{j}\right) \subset u_{0}+X_{0}$ so that
\begin{equation}
m=\lim_{j\rightarrow\infty}\left( \frac{1}{p}B_{h}\left( u_{j},u_{j}\right) -\int_{\Omega}g\left( x\right) u_{j}\left( x\right) dx\right) ,\label{inf_1}
\end{equation}
where $m$ is the infimum $\inf_{w\in u_{0}+X_{0}}\mathcal{J}\left( w\right) .$ From this convergence we ensure that there is a constant $C>0$ such that
\[
\frac{1}{p}B_{h}\left( u_{j},u_{j}\right) -\int_{\Omega}g\left( x\right) u_{j}\left( x\right) dx\leq C
\]
for any $j.$ Thus, we get the estimate
\begin{align*}
0 & \leq B_{h}\left( u_{j},u_{j}\right) \leq C+\left\vert \int_{\Omega}g\left( x\right) u_{j}\left( x\right) dx\right\vert \\
& \leq C\left( 1+\left\Vert g\right\Vert _{L^{p^{\prime}}\left( \Omega\right) }\left\Vert u_{j}\right\Vert _{L^{p}\left( \Omega\right) }\right)
\end{align*}
with $C>0.$ Again, the nonlocal Poincar\'{e} inequality gives
\[
c\left\Vert u_{j}-u_{0}\right\Vert _{L^{p}}^{p}\leq B_{h}\left( u_{j},u_{j}\right) +B_{h}\left( u_{0},u_{0}\right) \leq C\left( 1+\left\Vert g\right\Vert _{L^{p^{\prime}}\left( \Omega\right) }\left\Vert u_{j}\right\Vert _{L^{p}\left( \Omega\right) }\right) +B_{h}\left( u_{0},u_{0}\right)
\]
and therefore
\[
\left\Vert u_{j}\right\Vert _{L^{p}}^{p}\leq C\left( 1+\left\Vert g\right\Vert _{L^{p^{\prime}}\left( \Omega\right) }\left\Vert u_{j}\right\Vert _{L^{p}\left( \Omega\right) }+\left\Vert u_{0}\right\Vert _{L^{p}}^{p}\right) .
\]
From the two above inequalities we deduce that the sequences $B_{h}\left( u_{j},u_{j}\right) $ and $\left\Vert u_{j}\right\Vert _{L^{p}}$ are uniformly bounded. By virtue of the compact embedding $X_{0}\subset L^{p},$ we know there is a subsequence of $\left( u_{j}\right) ,$ which will be denoted also by $\left( u_{j}\right) ,$ strongly convergent in $L^{p}\left( \Omega\right) $ to some $u\in u_{0}+X_{0}.$\newline We retake (\ref{inf_1}) and use the lower semicontinuity in $L^{p}$ of the operator $\mathcal{J}$ to write
\begin{align*}
m & =\lim_{j}\frac{1}{p}B_{h}\left( u_{j},u_{j}\right) -\lim_{j}\int_{\Omega}g\left( x\right) u_{j}\left( x\right) dx\\
& \geq\frac{1}{p}B_{h}\left( u,u\right) -\int_{\Omega}g\left( x\right) u\left( x\right) dx\\
& =\mathcal{J}\left( u\right) .
\end{align*}
From this inequality we conclude that $u$ is a minimizer. \end{proof} \begin{lemma} $u$ is a solution of the minimization principle (\ref{NL_Dirichlet}) if, and only if, $u$ solves the problem (\ref{b}). \end{lemma} The proof is standard. Assume $u$ solves (\ref{b}). We only have to note that if we take any $v\in u_{0}+X_{0}$ then $w\doteq u-v\in X_{0}$ and $B_{h}\left( u,w\right) =\int_{\Omega}g\left( x\right) w\left( x\right) dx,$ that is,
\[
B_{h}\left( u,u\right) =B_{h}\left( u,v\right) +\int_{\Omega}g\left( u-v\right) dx.
\]
By applying Young's inequality to the first term on the right-hand side of the above equality we get
\[
B_{h}\left( u,u\right) \leq\frac{1}{p}B_{h}\left( v,v\right) +\frac{1}{p^{\prime}}B_{h}\left( u,u\right) +\int_{\Omega}g\left( u-v\right) dx,
\]
and thus
\[
\frac{1}{p}B_{h}\left( u,u\right) -\int_{\Omega}gu\,dx\leq\frac{1}{p}B_{h}\left( v,v\right) -\int_{\Omega}gv\,dx,
\]
which is equivalent to writing $\mathcal{J}\left( u\right) \leq\mathcal{J}\left( v\right) .$\newline And reciprocally, if $u$ is a minimizer of $\mathcal{J}$ on $u_{0}+X_{0}$ then we can take the admissible function $w=u+t\xi,$ where $\xi$ is any element from $X_{0}.$ Since the function $j\left( t\right) =\mathcal{J}\left( u+t\xi\right) $ attains a minimum at $t=0,$ then $j^{\prime}\left( 0\right) =0$ and this equality can be easily rewritten as $B_{h}\left( u,\xi\right) =\int_{\Omega}g\left( x\right) \xi\left( x\right) dx.$ \begin{theorem} \label{Th1}There exists a unique solution to the nonlocal boundary problem $\left( P^{\delta}\right) $ given in (\ref{b}). \end{theorem} All that remains is to prove the uniqueness. The proof is automatic due to the strict convexity of $\mathcal{J}:$ indeed, if $u$ and $v$ are minimizers, then
\[
m=\min_{w\in u_{0}+X_{0}}\mathcal{J}\left( w\right) =\mathcal{J}\left( u\right) =\mathcal{J}\left( v\right) .
\]
Besides, for any $\alpha\in\left( 0,1\right) ,$ the function $\alpha u+\left( 1-\alpha\right) v$ is admissible for the minimization principle. Then, thanks to the strict convexity of $\mathcal{J},$ we deduce that
\[
m\leq\mathcal{J}\left( \alpha u+\left( 1-\alpha\right) v\right) <\alpha\mathcal{J}\left( u\right) +\left( 1-\alpha\right) \mathcal{J}\left( v\right) =m,
\]
which is a contradiction. \begin{remark} The result remains valid if we assume $g$ to be in the space $X_{0}^{\prime}$, and the proof follows along the same lines as above. \end{remark} The existence and uniqueness of solution for the local state equation $\left( P^{loc}\right) $ is a basic issue. Even though we have to adapt some details, \cite{Chipot} is a reference we can follow in order to carry out this task. Although the details of the proof are interesting, the uniqueness is an aspect that can be analyzed separately. Indeed, if $u$ and $v$ are two different solutions of the state equation (\ref{d}), then $b_{h}\left( u,w\right) =b_{h}\left( v,w\right) $ for any $w\in W_{0}^{1,p}\left( \Omega\right) .$ This is to say that for any $w\in W_{0}^{1,p}\left( \Omega\right) $
\begin{equation}
\int_{\Omega}h\left( x\right) \left( \left\vert \nabla u\left( x\right) \right\vert ^{p-2}\nabla u\left( x\right) \nabla w\left( x\right) -\left\vert \nabla v\left( x\right) \right\vert ^{p-2}\nabla v\left( x\right) \nabla w\left( x\right) \right) dx=0.\label{uni}
\end{equation}
By taking $w=u-v$ we obtain
\[
\int_{\Omega}h\left( x\right) \left( \left\vert \nabla u\left( x\right) \right\vert ^{p-2}\nabla u\left( x\right) -\left\vert \nabla v\left( x\right) \right\vert ^{p-2}\nabla v\left( x\right) \right) \nabla\left( u-v\right) \left( x\right) dx=0.
\]
At this point we take into account the following elementary inequality: if $1<p<\infty,$ then there exist two positive constants $C=C\left( p\right) $ and $c=c\left( p\right) $ such that for every $a,$ $b\in\mathbb{R}^{N}$
\begin{equation}
c\left\{ \left\vert a\right\vert +\left\vert b\right\vert \right\} ^{p-2}\left\vert a-b\right\vert ^{2}\leq\left( \left\vert a\right\vert ^{p-2}a-\left\vert b\right\vert ^{p-2}b\right) \cdot\left( a-b\right) \leq C\left\{ \left\vert a\right\vert +\left\vert b\right\vert \right\} ^{p-2}\left\vert a-b\right\vert ^{2}.\label{ele_ine}
\end{equation}
Finally, by applying (\ref{ele_ine}) in (\ref{uni}), the uniqueness follows (see \cite[Prop. 17.3 and Th. 17.1]{Chipot}). \section{$G$-Convergence for the state equation and existence of nonlocal optimal controls\label{S3}} Let $\left( g_{j}\right) _{j}$ be a minimizing sequence of controls for the problem $\left( \mathcal{P}^{\delta}\right) $ and let $\left( u_{j}\right) _{j}$ be the corresponding sequence of states. Since, in the end, the sequences we are going to work with are minimizing sequences, we shall assume that there is a constant $C>0$ such that for any $j$
\[
\int_{\Omega}\left\vert g_{j}\left( x\right) \right\vert ^{p^{\prime}}dx<C.
\]
Hence, we can extract a subsequence weakly convergent in $L^{p^{\prime}}\left( \Omega\right) $ to some $g\in L^{p^{\prime}}\left( \Omega\right) $. We also know the following variational equality for any $v\in X_{0}:$
\[
B_{h}\left( u_{j},v\right) =\int_{\Omega}g_{j}\left( x\right) v\left( x\right) dx.
\]
In particular,
\[
B_{h}\left( u_{j},u_{j}-u_{0}\right) =\int_{\Omega}g_{j}\left( x\right) \left( u_{j}\left( x\right) -u_{0}\left( x\right) \right) dx.
\]
H\"{o}lder's inequality and the linearity of $B_{h}\left( w,\cdot\right) ,$ for any $w\in X,$ lead us to the estimate
\[
B_{h}\left( u_{j},u_{j}\right) \leq\left\Vert g_{j}\right\Vert _{L^{p^{\prime}}}\left( \left\Vert u_{j}\right\Vert _{L^{p}}+\left\Vert u_{0}\right\Vert _{L^{p}}\right) +B_{h}\left( u_{j},u_{0}\right) .
\]
If we take into account (\ref{basic_ineq}) and make use of Young's inequality we deduce
\begin{align*}
B_{h}\left( u_{j},u_{j}\right) & \leq\left\Vert g_{j}\right\Vert _{L^{p^{\prime}}}\left( \left( \frac{B_{h}\left( u_{j},u_{j}\right) +B_{h}\left( u_{0},u_{0}\right) }{c}\right) ^{1/p}+2\left\Vert u_{0}\right\Vert _{L^{p}}\right) \\
& +\frac{1}{p^{\prime}}B_{h}\left( u_{j},u_{j}\right) +\frac{1}{p}B_{h}\left( u_{0},u_{0}\right) ,
\end{align*}
and thereby
\[
\left( 1-\frac{1}{p^{\prime}}\right) B_{h}\left( u_{j},u_{j}\right) \leq C+D\left( B_{h}\left( u_{j},u_{j}\right) \right) ^{1/p}
\]
for some positive constants $C$ and $D.$ The above inequality implies that $B_{h}\left( u_{j},u_{j}\right) $ is uniformly bounded, and by (\ref{basic_ineq}) $\left\Vert u_{j}\right\Vert _{L^{p}}$ is too.
If at this point we use point \ref{S2_1_2} from Subsection \ref{S2_1}, we can state the strong convergence in $L^{p}$, at least for a subsequence of $\left( u_{j}\right) _{j},$ to some function $u^{\ast}\in u_{0}+X_{0}.$ Let $u$ be the state associated to $g.$ We ask whether the identity $u=u^{\ast}$ is true or not: \begin{theorem}[$G$-convergence]\label{G-convergence}Under the above circumstances,
\[
\lim_{j\rightarrow\infty}\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g_{j}\left( x\right) w\left( x\right) dx\right\} =\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\}
\]
and $u=u^{\ast}.$ \end{theorem} \begin{proof} Let $m_{j}$ and $m$ denote the minimum values on the left and on the right, respectively. We prove $\lim_{j}m_{j}\leq m$:
\begin{align*}
\lim_{j}m_{j} & =\lim_{j}\left( \frac{1}{p}B_{h}\left( u_{j},u_{j}\right) -\int_{\Omega}g_{j}\left( x\right) u_{j}\left( x\right) dx\right) \\
& \leq\lim_{j}\left( \frac{1}{p}B_{h}\left( u,u\right) -\int_{\Omega}g_{j}\left( x\right) u\left( x\right) dx\right) \\
& =\frac{1}{p}B_{h}\left( u,u\right) -\int_{\Omega}g\left( x\right) u\left( x\right) dx\\
& =\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} .
\end{align*}
We check $\lim_{j}m_{j}\geq m:$ we know $u_{j}\rightarrow u^{\ast}$ strongly in $L^{p},$ $g_{j}\rightharpoonup g$ weakly in $L^{p^{\prime}}$ and therefore
\[
\lim_{j}\int_{\Omega}g_{j}\left( x\right) u_{j}\left( x\right) dx=\int_{\Omega}g\left( x\right) u^{\ast}\left( x\right) dx.
\]
We apply these convergences to analyze the limit of the energy functional:
\begin{align*}
\lim_{j}m_{j} & =\lim_{j}\left( \frac{1}{p}B_{h}\left( u_{j},u_{j}\right) -\int_{\Omega}g_{j}\left( x\right) u_{j}\left( x\right) dx\right) \\
& =\frac{1}{p}\lim_{j}B_{h}\left( u_{j},u_{j}\right) -\int_{\Omega}g\left( x\right) u^{\ast}\left( x\right) dx\\
& \geq\frac{1}{p}B_{h}\left( u^{\ast},u^{\ast}\right) -\int_{\Omega}g\left( x\right) u^{\ast}\left( x\right) dx\\
& \geq\frac{1}{p}B_{h}\left( u,u\right) -\int_{\Omega}g\left( x\right) u\left( x\right) dx\\
& =\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} ,
\end{align*}
where the first inequality is due to the lower semicontinuity of the operator $B_{h}\left( \cdot,\cdot\right) $ with respect to the weak convergence in $L^{p}.$ We have proved $\lim_{j}m_{j}=m.$ Also, from the above chain of inequalities it is clear that both $u$ and $u^{\ast}$ are solutions to the problem (\ref{NL_Dirichlet}); then, according to Theorem \ref{Th1}, $u=u^{\ast}.$ \end{proof} \begin{corollary} \label{remark_conver_norms}The following convergences hold:
\begin{equation}
\lim_{j\rightarrow\infty}B_{h}\left( u_{j},u_{j}\right) =B_{h}\left( u,u\right) ,\label{convergence_norms}
\end{equation}
and
\begin{equation}
\lim_{j\rightarrow\infty}B_{h}\left( u_{j}-u,u_{j}-u\right) =0.\label{convergence_norms_main}
\end{equation}
\end{corollary} \begin{proof} (\ref{convergence_norms}) follows from the proof of the above theorem and can be rewritten as the following convergence of norms:
\begin{align*}
& \lim_{j\rightarrow\infty}\int_{\Omega_{\delta}}\int_{\Omega_{\delta}}H\left( x^{\prime},x\right) k_{\delta}\left( \left\vert x^{\prime}-x\right\vert \right) \frac{\left\vert u_{j}\left( x^{\prime}\right) -u_{j}\left( x\right) \right\vert ^{p}}{\left\vert x^{\prime}-x\right\vert ^{p}}dx^{\prime}dx\\
& =\int_{\Omega_{\delta}}\int_{\Omega_{\delta}}H\left( x^{\prime},x\right) k_{\delta}\left( \left\vert x^{\prime}-x\right\vert \right) \frac{\left\vert u\left( x^{\prime}\right) -u\left( x\right) \right\vert ^{p}}{\left\vert x^{\prime}-x\right\vert ^{p}}dx^{\prime}dx.
\end{align*}
But this convergence is equivalent to saying that the norm of the sequence
\[
\Psi_{j}\left( x^{\prime},x\right) =H^{1/p}\left( x^{\prime},x\right) k_{\delta}^{1/p}\left( \left\vert x^{\prime}-x\right\vert \right) \frac{u_{j}\left( x^{\prime}\right) -u_{j}\left( x\right) }{\left\vert x^{\prime}-x\right\vert }
\]
converges to the norm of the function
\[
\Psi\left( x^{\prime},x\right) =H^{1/p}\left( x^{\prime},x\right) k_{\delta}^{1/p}\left( \left\vert x^{\prime}-x\right\vert \right) \frac{u\left( x^{\prime}\right) -u\left( x\right) }{\left\vert x^{\prime}-x\right\vert }.
\]
Since, additionally, up to a subsequence, $\left( \Psi_{j}\right) _{j}$ converges pointwise a.e. $\left( x^{\prime},x\right) \in\Omega_{\delta}\times\Omega_{\delta}$ to $\Psi$, then $\Psi_{j}$ strongly converges to $\Psi$ in $L^{p}\left( \Omega_{\delta}\times\Omega_{\delta}\right) $ (see \cite[p. 78]{Riesz}), and (\ref{convergence_norms}) has been proved. \end{proof} \begin{remark} \label{convergenceinnorm}The convergence (\ref{convergence_norms}), together with the strong convergence of $\left( u_{j}\right) _{j},$ is precisely equivalent to the strong convergence in $X$. In particular,
\begin{equation}
\lim_{j\rightarrow\infty}B\left( u_{j}-u,u_{j}-u\right) =0\label{convergence_norms_2}
\end{equation}
and
\begin{equation}
\lim_{j\rightarrow\infty}B\left( u_{j},u_{j}\right) =B\left( u,u\right) .\label{convergence_norms_3}
\end{equation}
We also realize that for any $h_{0}\in\mathcal{H}$ we have
\begin{align*}
\lim_{j\rightarrow\infty}B_{h}\left( u_{j}-u,u_{j}-u\right) & =\lim_{j\rightarrow\infty}B_{\frac{h}{h_{0}}h_{0}}\left( u_{j}-u,u_{j}-u\right) \\
& \geq\frac{h_{\min}}{h_{\max}}\lim_{j\rightarrow\infty}B_{h_{0}}\left( u_{j}-u,u_{j}-u\right) .
\end{align*}
Consequently, from (\ref{convergence_norms_main}) we deduce $\lim_{j\rightarrow\infty}B_{h_{0}}\left( u_{j}-u,u_{j}-u\right) =0$ for any $h_{0}\in\mathcal{H}$, and thereby
\begin{equation}
\lim_{j\rightarrow\infty}B_{h_{0}}\left( u_{j},u_{j}\right) =B_{h_{0}}\left( u,u\right) .\label{convergence_norms_3-h0}
\end{equation}
\end{remark} The convergences of the states we have just described above are still valid if we consider a sequence of sources $\left( g_{j}\right) _{j}$ uniformly bounded in the dual space $X_{0}^{\prime}.$ Since $X_{0}$ is reflexive, so is $X_{0}^{\prime}$, and we can ensure the sequence $\left( g_{j}\right) _{j}$ is weakly convergent, up to a subsequence, to an operator $g\in X_{0}^{\prime}.$ Let $\left( u_{j}\right) _{j}$ and $u$ be the underlying states of $\left( g_{j}\right) _{j}$ and $g$, respectively.
Then, thanks to the preceding analysis, we know the sequence $\left( u_{j}\right) _{j},$ the states associated to the controls $\left( g_{j}\right) _{j},$ converges weakly to $u$ in $L^{p}\supset X,$ where $u$ is the state associated to $g.$ Take now any element $L\in X_{0}^{\prime}.$ Under these circumstances there exists a function $u_{L}\in u_{0}+X_{0}$ such that $B_{h}\left( u_{L},w\right) =\left\langle L,w\right\rangle _{X^{\prime}\times X}$ for any $w\in X_{0}.$ Then, we easily deduce
\[
\lim_{j\rightarrow\infty}\left\langle L,u_{j}-u_{0}\right\rangle _{X_{0}^{\prime}\times X_{0}}=\left\langle L,u-u_{0}\right\rangle _{X_{0}^{\prime}\times X_{0}}.
\]
The explanation of that relies on the strong convergence achieved in the above corollary: indeed, H\"{o}lder's inequality and (\ref{convergence_norms_2}) straightforwardly provide
\begin{align*}
& \lim_{j\rightarrow\infty}\left\vert \left\langle L,u_{j}-u_{0}\right\rangle _{X_{0}^{\prime}\times X_{0}}-\left\langle L,u-u_{0}\right\rangle _{X_{0}^{\prime}\times X_{0}}\right\vert \\
& =\lim_{j\rightarrow\infty}\left\vert B_{h}\left( u_{L},u_{j}-u\right) \right\vert \\
& \leq\lim_{j\rightarrow\infty}B_{h}^{1/p^{\prime}}\left( u_{L},u_{L}\right) B_{h}^{1/p}\left( u_{j}-u,u_{j}-u\right) \\
& =0.
\end{align*}
The analysis performed explicitly confirms the fact that the sequence $\left( u_{j}\right) _{j}$ is weakly convergent to $u$ in $X_{0},$ or, in other words, that the sequence of problems
\[
\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g_{j}\left( x\right) w\left( x\right) dx\right\}
\]
$G$-converges to the problem
\[
\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\}
\]
(see the abstract energy criterion established in \cite[Chapter 5, p. 162]{Jikov}). \begin{theorem}[Well-posedness]\label{Well-posedness}There exists a solution $\left( g,u\right) $ to the control problem $\left( \mathcal{P}^{\delta}\right) $ given in (\ref{NLCP}). \end{theorem} \begin{proof} Let $\left( g_{j},u_{j}\right) $ be a minimizing sequence. Then, up to a subsequence, we know that $g_{j}\rightharpoonup g$ weakly in $L^{p^{\prime}}$ and $u_{j}\rightarrow u$ strongly in $L^{p}.$ In addition, by Theorem \ref{G-convergence} the couple $\left( g,u\right) $ is admissible for the control problem, that is, $\left( g,u\right) \in\mathcal{A}^{\delta}$. In fact, this couple is a minimizer of the problem. To check that, we observe that the infimum $i$ of the minimization principle can be computed as
\[
i\doteq\lim_{j}I_{\delta}\left( g_{j},u_{j}\right) =\lim_{j\rightarrow\infty}\int_{\Omega}F\left( x,u_{j}\left( x\right) ,g_{j}\left( x\right) \right) dx.
\]
If we take into account the properties of $G$, it is straightforward to verify that
\begin{equation}
\lim_{j\rightarrow\infty}\int_{\Omega}G\left( x,u_{j}\left( x\right) \right) dx=\int_{\Omega}G\left( x,u\left( x\right) \right) dx.\label{conver_G}
\end{equation}
In addition, by using Fatou's Lemma and the convergence (\ref{convergence_norms_3-h0}), it is automatic to check that
\begin{align*}
i & \geq\underset{j\rightarrow\infty}{\lim\inf}\int_{\Omega}F\left( x,u_{j}\left( x\right) ,g_{j}\left( x\right) \right) dx\\
& \geq\int_{\Omega}F\left( x,u\left( x\right) ,g\left( x\right) \right) dx\\
& =I_{\delta}\left( g,u\right) .
\end{align*}
The above inequality implies that $\left( g,u\right) $ is a minimizer. \end{proof} \begin{remark} \label{unique}If $p=2$ and $G\left( x,\cdot\right) $ is convex, then the solution of (\ref{NLCP}) is unique.
The uniqueness is guaranteed by the strict convexity of the function $t\rightarrow\left\vert t\right\vert ^{p^{\prime}}$ and the linearity of the state equation: if there are two different solutions $\left( g,u\right) $ and $\left( f,v\right) ,$ then the state associated with the source $y_{s}\left( x\right) =sg\left( x\right) +\left( 1-s\right) f\left( x\right) $ ($s\in\left( 0,1\right) $) is $u_{s}\left( x\right) =su\left( x\right) +\left( 1-s\right) v\left( x\right) .$ If we apply the above convexity properties, and that of the operator $B_{h_{0}}$ as well, then we arrive at
\begin{align*}
J_{\delta}\left( y_{s},u_{s}\right) & =\int_{\Omega}F\left( x,u_{s}\left( x\right) ,y_{s}\left( x\right) \right) dx=\int_{\Omega}\left( G\left( x,u_{s}\left( x\right) \right) +\beta\left\vert y_{s}\left( x\right) \right\vert ^{p^{\prime}}\right) dx+\gamma B_{h_{0}}\left( u_{s},u_{s}\right) \\
& <s\int_{\Omega}\left( G\left( x,u\left( x\right) \right) +\beta\left\vert g\left( x\right) \right\vert ^{p^{\prime}}\right) dx+\left( 1-s\right) \int_{\Omega}\left( G\left( x,v\left( x\right) \right) +\beta\left\vert f\left( x\right) \right\vert ^{p^{\prime}}\right) dx\\
& +\gamma sB_{h_{0}}\left( u,u\right) +\gamma\left( 1-s\right) B_{h_{0}}\left( v,v\right) \\
& =sJ_{\delta}\left( g,u\right) +\left( 1-s\right) J_{\delta}\left( f,v\right) ,
\end{align*}
which is contradictory because both $\left( g,u\right) $ and $\left( f,v\right) $ are minimizers of $J_{\delta}.$ \end{remark} \section{Convergence of the state equation as $\delta\rightarrow0$\label{S4}} Assume the source $g$ and $u_{0}\in W^{1-1/p,p}\left( \partial\Omega\right) $ are fixed functions. If, for each $\delta,$ we consider the corresponding sequence of states $\left( u_{\delta}\right) _{\delta}\subset u_{0}+X_{0},$ then
\[
B_{h}\left( u_{\delta},v\right) =\int_{\Omega}g\left( x\right) v\left( x\right) dx
\]
for any $v\in X_{0}.$ Consequently, as in the previous sections, we easily prove that $\left\Vert u_{\delta}\right\Vert _{L^{p}\left( \Omega\right) }$ and $B_{h}\left( u_{\delta},u_{\delta}\right) $ are sequences uniformly bounded in $\delta.$ Then, by using part \ref{S2_1_3} from Subsection \ref{S2_1}, these estimates imply the existence of a function $u^{\ast}\in u_{0}+W_{0}^{1,p}\left( \Omega\right) $ and a subsequence of $u_{\delta}$ (still denoted $u_{\delta}$), such that $u_{\delta}\rightarrow u^{\ast}$ strongly in $L^{p}\left( \Omega\right) $. Now, we look for the state equation that should be satisfied by the pair $\left( g,u^{\ast}\right) $.
The answer to this question is given in the following convergence result: \begin{theorem} \label{Th2}%
\begin{equation}
\begin{tabular}
[c]{l}
$\displaystyle\lim_{\delta\rightarrow0}\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} \smallskip$\\
$\displaystyle=\min_{w\in u_{0}+W_{0}^{1,p}\left( \Omega\right) }\left\{ \frac{1}{p}\int_{\Omega}h\left( x\right) \left\vert \nabla w\left( x\right) \right\vert ^{p}dx-\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} $
\end{tabular}
\label{7}
\end{equation}
and $\left( g,u^{\ast}\right) \in\mathcal{A}^{loc}.$ \end{theorem} \begin{proof} We define the local state $u$ that corresponds to the source $g$ by means of the local problem
\[
\min_{w\in u_{0}+W_{0}^{1,p}\left( \Omega\right) }\left\{ \frac{1}{p}b_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} .
\]
If $u_{\delta}$ is the solution to the nonlocal state equation, then $u_{\delta}$ solves
\[
\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\}
\]
and from $\left( u_{\delta}\right) _{\delta}$ we can extract a subsequence strongly convergent to $u^{\ast}\in u_{0}+W_{0}^{1,p}\left( \Omega\right) $ in $L^{p}.$ By means of
\[
\lim_{\delta\rightarrow0}B_{h}\left( u_{\delta},u_{\delta}\right) \geq\int_{\Omega}h\left( x\right) \left\vert \nabla u^{\ast}\left( x\right) \right\vert ^{p}dx
\]
(see (\ref{Prel4})), we are allowed to write
\begin{align*}
& \lim_{\delta\rightarrow0}\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} \\
& =\lim_{\delta\rightarrow0}\left( \frac{1}{p}B_{h}\left( u_{\delta},u_{\delta}\right) -\int_{\Omega}g\left( x\right) u_{\delta}\left( x\right) dx\right) \\
& \geq\frac{1}{p}\int_{\Omega}h\left( x\right) \left\vert \nabla u^{\ast}\left( x\right) \right\vert ^{p}dx-\int_{\Omega}g\left( x\right) u^{\ast}\left( x\right) dx\\
& \geq\frac{1}{p}\int_{\Omega}h\left( x\right) \left\vert \nabla u\left( x\right) \right\vert ^{p}dx-\int_{\Omega}g\left( x\right) u\left( x\right) dx\\
& =\min_{w\in u_{0}+W_{0}^{1,p}\left( \Omega\right) }\left\{ \frac{1}{p}\int_{\Omega}h\left( x\right) \left\vert \nabla w\left( x\right) \right\vert ^{p}dx-\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} .
\end{align*}
We prove the reverse inequality: it suffices to use the convergence given in (\ref{Prel5}) to realize that
\begin{align*}
& \lim_{\delta\rightarrow0}\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} \\
& \leq\lim_{\delta\rightarrow0}\left( \frac{1}{p}B_{h}\left( u,u\right) -\int_{\Omega}g\left( x\right) u\left( x\right) dx\right) \\
& =\frac{1}{p}b_{h}\left( u,u\right) -\int_{\Omega}g\left( x\right) u\left( x\right) dx\\
& =\min_{w\in u_{0}+W_{0}^{1,p}\left( \Omega\right) }\left\{ \frac{1}{p}\int_{\Omega}h\left( x\right) \left\vert \nabla w\left( x\right) \right\vert ^{p}dx-\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} .
\end{align*}
The two estimates above amount to two consequences: on the one hand, they clearly give the convergence result (\ref{7}).
On the other hand, from the above discussion it can be read that both $u$ and $u^{\ast}$ are solutions to the classical boundary problem (\ref{d}), and thus, by the uniqueness proved for this problem, we deduce $u=u^{\ast}.$ \end{proof} The proof we have just done provides the convergence of energies
\begin{equation}
\lim_{\delta\rightarrow0}B_{h}\left( u_{\delta},u_{\delta}\right) =b_{h}\left( u,u\right) .\label{strong_conv_p2}
\end{equation}
If we use (\ref{Prel5}), the above limit can be rewritten as follows:
\begin{equation}
\lim_{\delta\rightarrow0}\int_{\Omega_{\delta}}\int_{\Omega_{\delta}}H\left( x^{\prime},x\right) k_{\delta}\left( \left\vert x^{\prime}-x\right\vert \right) \left( \frac{\left\vert u_{\delta}\left( x^{\prime}\right) -u_{\delta}\left( x\right) \right\vert ^{p}}{\left\vert x^{\prime}-x\right\vert ^{p}}-\frac{\left\vert u\left( x^{\prime}\right) -u\left( x\right) \right\vert ^{p}}{\left\vert x^{\prime}-x\right\vert ^{p}}\right) dx^{\prime}dx=0.\label{strong_conv_p2_bis}
\end{equation}
Moreover, for the particular case $p=2$ we have strong convergence in $X_{0}$:
\begin{equation}
\lim_{\delta\rightarrow0}B\left( u_{\delta}-u,u_{\delta}-u\right) =0\label{linear-case}
\end{equation}
(the proof is automatic). Furthermore, if $p=2$ and $h_{0}$ is any function from $\mathcal{H},$ then
\[
\lim_{\delta\rightarrow0}B_{h_{0}}\left( u_{\delta}-u,u_{\delta}-u\right) =0
\]
and
\begin{equation}
\lim_{\delta\rightarrow0}B_{h_{0}}\left( u_{\delta},u_{\delta}\right) =B_{h_{0}}\left( u,u\right) .\label{linear-case0}
\end{equation}
\section{Approximation to the optimal control problem\label{S5}} We know that, for each $\delta,$ there exists at least one solution $\left( g_{\delta},u_{\delta}\right) $ to the problem (\ref{NLCP}). Our purpose is the asymptotic analysis of this sequence of solutions. We shall prove that the limit in $\delta$ of the sequence $\left( g_{\delta},u_{\delta}\right) $, the pair $\left( g,u\right) $ derived in the previous section, solves the corresponding local optimal control problem $\left( \mathcal{P}^{loc}\right) $ defined in (\ref{e}). The tools we use are nothing more than those used in Theorem \ref{Th2}. \begin{theorem} \label{Th3}Let $\left( g_{\delta},u_{\delta}\right) $ be the sequence of solutions to the control problem (\ref{NLCP}) with $\gamma=0$. Then there exists a pair $\left( g,u\right) \in L^{p^{\prime}}\left( \Omega\right) \times\left( u_{0}+W_{0}^{1,p}\left( \Omega\right) \right) $ and a subsequence of indexes $\delta$ for which the following conditions hold: \begin{enumerate} \item $g_{\delta}\rightharpoonup g$ weakly in $L^{p^{\prime}}\left( \Omega\right) ,$ $u_{\delta}\rightarrow u$ strongly in $L^{p}\left( \Omega\right) $ as $\delta\rightarrow0.$ \item
\begin{equation}
\begin{tabular}
[c]{l}
$\displaystyle\lim_{\delta\rightarrow0}\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g_{\delta}\left( x\right) w\left( x\right) dx\right\} \medskip$\\
$\displaystyle=\min_{w\in u_{0}+W_{0}^{1,p}\left( \Omega\right) }\left\{ \frac{1}{p}b_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} $
\end{tabular}
\label{G-convergence-energy}
\end{equation}
and $\left( g,u\right) \in\mathcal{A}^{loc}.$ \item $\left( g,u\right) $ is a solution to the local control problem (\ref{e}).
\end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item As in the previous analysis, it is clear that from any minimizing sequence $\left( g_{\delta},u_{\delta}\right) $ we can extract a subsequence of $\left( g_{\delta}\right) _{\delta}$ weakly convergent to $g$ in $L^{p^{\prime}}$. Also, from the associated states $\left( u_{\delta}\right) _{\delta}$ we extract a subsequence that converges, strongly in $L^{p},$ to a function $u^{\ast}\in u_{0}+W_{0}^{1,p}\left( \Omega\right) $. See Subsection \ref{S2_1}, part \ref{S2_1_3}. \item We are going to see that the state function $u^{\ast}$ is the one that corresponds to the control $g:$\newline Let $u$ be the underlying state of $g.$ Then, on the one hand, (\ref{Prel4}) allows us to write
\begin{align*}
& \lim_{\delta\rightarrow0}\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g_{\delta}\left( x\right) w\left( x\right) dx\right\} \\
& =\lim_{\delta\rightarrow0}\left( \frac{1}{p}B_{h}\left( u_{\delta},u_{\delta}\right) -\int_{\Omega}g_{\delta}\left( x\right) u_{\delta}\left( x\right) dx\right) \\
& \geq\frac{1}{p}b_{h}\left( u^{\ast},u^{\ast}\right) -\int_{\Omega}g\left( x\right) u^{\ast}\left( x\right) dx\\
& \geq\min_{w\in u_{0}+W_{0}^{1,p}\left( \Omega\right) }\left\{ \frac{1}{p}b_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} .
\end{align*}
On the other hand, it is clear that (\ref{Prel5}) allows us to write
\begin{align*}
& \lim_{\delta\rightarrow0}\min_{w\in u_{0}+X_{0}}\left\{ \frac{1}{p}B_{h}\left( w,w\right) -\int_{\Omega}g_{\delta}\left( x\right) w\left( x\right) dx\right\} \\
& \leq\lim_{\delta\rightarrow0}\left( \frac{1}{p}B_{h}\left( u,u\right) -\int_{\Omega}g_{\delta}\left( x\right) u\left( x\right) dx\right) \\
& =\frac{1}{p}b_{h}\left( u,u\right) -\int_{\Omega}g\left( x\right) u\left( x\right) dx\\
& =\min_{w\in u_{0}+W_{0}^{1,p}\left( \Omega\right) }\left\{ \frac{1}{p}b_{h}\left( w,w\right) -\int_{\Omega}g\left( x\right) w\left( x\right) dx\right\} .
\end{align*}
From the above lines we infer that both $u$ and $u^{\ast}$ are solutions to the local state problem (\ref{d}). Thereby, if we use the uniqueness, which has been checked at the end of Section \ref{S2}, we deduce that $u=u^{\ast}$ and, consequently, that $\left( g,u\right) \in\mathcal{A}^{loc}$.
Furthermore, another consequence we can derive is the following convergence of energies:
\[
\lim_{\delta\rightarrow0}B_{h}\left( u_{\delta},u_{\delta}\right) =b_{h}\left( u,u\right) .
\]

\item Take any $\left( f,v\right) \in\mathcal{A}^{loc}$ and consider the sequence of solutions $\left( f,v_{\delta}\right) $ of the nonlocal boundary problem $\left( P^{\delta}\right) $ with $g=f.$ Since $\left( f,v_{\delta}\right) \in\mathcal{A}^{\delta},$ then
\begin{equation}
I\left( f,v\right) =\lim_{\delta}I_{\delta}\left( f,v_{\delta}\right) \geq\lim_{\delta}I_{\delta}\left( g_{\delta},u_{\delta}\right) \geq I\left( g,u\right) .\label{8}
\end{equation}
To prove this, we notice that the first equality of (\ref{8}) is true because, according to Theorem \ref{Th2}, $v_{\delta}\rightarrow v\in u_{0}+W_{0}^{1,p}\left( \Omega\right) $ strongly in $L^{p}$ and $\left( f,v\right) \in\mathcal{A}^{loc}.$ If we now use the latter strong convergence and pay attention to the convergence (\ref{conver_G}), then it is straightforward to deduce
\begin{align*}
\lim_{\delta}I_{\delta}\left( f,v_{\delta}\right) & =\lim_{\delta}\int_{\Omega}\left( G\left( x,v_{\delta}\left( x\right) \right) +\beta\left\vert f\left( x\right) \right\vert ^{p^{\prime}}\right) dx\\
& =\int_{\Omega}\left( G\left( x,v\left( x\right) \right) +\beta \left\vert f\left( x\right) \right\vert ^{p^{\prime}}\right) dx\\
& =I\left( f,v\right) .
\end{align*}
The first inequality of (\ref{8}) is due to the fact that $\left( g_{\delta},u_{\delta}\right) $ is a sequence of minimizers for the cost $I_{\delta}$ and therefore
\[
\lim_{\delta}I_{\delta}\left( f,v_{\delta}\right) \geq\lim_{\delta}I_{\delta}\left( g_{\delta},u_{\delta}\right) .
\]
And the second inequality of (\ref{8}) holds because (\ref{conver_G}) and Fatou's Lemma yield
\begin{align*}
\lim_{\delta}I_{\delta}\left( g_{\delta},u_{\delta}\right) & \geq \liminf_{\delta}\int_{\Omega}\left( G\left( x,u_{\delta}\left( x\right) \right) +\beta\left\vert g_{\delta}\left( x\right) \right\vert ^{p^{\prime }}\right) dx\\
& \geq\int_{\Omega}\left( G\left( x,u\left( x\right) \right) +\beta\left\vert g\left( x\right) \right\vert ^{p^{\prime}}\right) dx\\
& =I\left( g,u\right) .
\end{align*}
\end{enumerate}
\end{proof}

\subsection{Case $p=2$}

The thesis of Theorem \ref{Th3} remains true when we put $\gamma>0$ and $p=2.$ To prove this statement, we first define the concrete optimization problems we have to face. The nonlocal control problem $\left( \mathcal{P}^{\delta}\right) $ reads as
\begin{equation}
\min_{\left( g,u\right) \in\mathcal{A}^{\delta}}J_{\delta}\left( g,u\right) \label{NLCP_2}
\end{equation}
where
\[
J_{\delta}\left( g,u\right) =I_{\delta}\left( g,u\right) +\gamma B_{h_{0}}\left( u,u\right) ,
\]
$B_{h_{0}}\left( \cdot,\cdot\right) $ is defined as in (\ref{bb}) with $h=h_{0}$ and $p=2.$ The set of admissible pairs is
\[
\mathcal{A}^{\delta}=\left\{ \left( f,v\right) \in L^{2}\left( \Omega\right) \times X:v\text{ solves (\ref{b}) with }g=f\right\} ,
\]
where (\ref{b}) also has to be considered for the specific case $p=2.$ It must be underlined that for each $\delta$ there is at least one solution $\left( g_{\delta},u_{\delta}\right) \in L^{2}\left( \Omega\right) \times\left( u_{0}+H_{0}^{1}\left( \Omega\right) \right) $ to the problem (\ref{NLCP_2}).
\smallskip The corresponding local control problem $\left( \mathcal{P}^{loc}\right) $ is stated as
\begin{equation}
\min_{\left( g,u\right) \in\mathcal{A}^{loc}}J\left( g,u\right) \label{e_2}
\end{equation}
where
\[
J\left( g,u\right) =I\left( g,u\right) +\gamma\int_{\Omega}h_{0}\left( x\right) \left\vert \nabla u\left( x\right) \right\vert ^{2}dx,
\]
with
\[
\mathcal{A}^{loc}=\left\{ \left( f,v\right) \in L^{2}\left( \Omega\right) \times W^{1,2}\left( \Omega\right) :v\text{ solves (\ref{d}) with }g=f\right\}
\]
and (\ref{d}) is assumed to be constrained to the case $p=2.$

\begin{theorem}
\label{Th4}Let $\left( g_{\delta},u_{\delta}\right) $ be a sequence of solutions to the problem (\ref{NLCP_2}). Then there exists a pair $\left( g,u\right) \in L^{2}\left( \Omega\right) \times\left( u_{0}+W^{1,2}\left( \Omega\right) \right) $ and a subsequence of indexes $\delta$ for which the following conditions hold:

\begin{enumerate}
\item $g_{\delta}\rightharpoonup g$ weakly in $L^{2}\left( \Omega\right) $ and $u_{\delta}\rightarrow u$ strongly in $L^{2}\left( \Omega\right) $ as $\delta\rightarrow0.$

\item The identity (\ref{G-convergence-energy}) holds and $\left( g,u\right) \in\mathcal{A}^{loc}.$

\item $\left( g,u\right) $ is a solution to the local control problem (\ref{e_2}). If in addition $G\left( x,\cdot\right) $ is assumed to be convex, then the solution $\left( g,u\right) $ is unique.
\end{enumerate}
\end{theorem}

\begin{proof}
The procedure carried out for the proof of Theorem \ref{Th3} carries over perfectly to this context. It only remains to verify part 3. More concretely, we only need to show
\begin{equation}
J\left( f,v\right) \geq J\left( g,u\right) \text{ for any }\left( f,v\right) \in\mathcal{A}^{loc}.\label{ineq-fin}
\end{equation}
To check (\ref{ineq-fin}) we take the sequence of solutions $\left( f,v_{\delta}\right) $ of the nonlocal boundary problem $\left( P^{\delta}\right) $ with $g=f.$ By using the identity
\[
\lim_{\delta}B_{h_{0}}\left( v,v\right) =\int_{\Omega}h_{0}\left( x\right) \left\vert \nabla v\left( x\right) \right\vert ^{2}dx
\]
(see (\ref{Prel5})) and taking into account the limit (\ref{linear-case0}), we realize that
\begin{align*}
J\left( f,v\right) & =\int_{\Omega}\left( G\left( x,v\left( x\right) \right) +\beta\left\vert f\left( x\right) \right\vert ^{2}\right) dx+\gamma\int_{\Omega}h_{0}\left( x\right) \left\vert \nabla v\left( x\right) \right\vert ^{2}dx\\
& =\int_{\Omega}\left( G\left( x,v\left( x\right) \right) +\beta \left\vert f\left( x\right) \right\vert ^{2}\right) dx+\gamma\lim_{\delta }B_{h_{0}}\left( v,v\right) \\
& =\lim_{\delta}\int_{\Omega}\left( G\left( x,v_{\delta}\left( x\right) \right) +\beta\left\vert f\left( x\right) \right\vert ^{2}\right) dx+\gamma\lim_{\delta}B_{h_{0}}\left( v_{\delta},v_{\delta}\right) \\
& =\lim_{\delta}J_{\delta}\left( f,v_{\delta}\right) .
\end{align*}
By applying now the optimality of $\left( g_{\delta},u_{\delta}\right) $ for $J_{\delta},$ we clearly infer
\[
J\left( f,v\right) =\lim_{\delta}J_{\delta}\left( f,v_{\delta}\right) \geq\lim_{\delta}J_{\delta}\left( g_{\delta},u_{\delta}\right) .
\]
And finally, by recalling the inequality
\[
\lim_{\delta}B_{h_{0}}\left( u_{\delta},u_{\delta}\right) \geq b_{h_{0}}\left( u,u\right)
\]
(see (\ref{Prel4})), we get
\begin{align*}
\lim_{\delta}J_{\delta}\left( g_{\delta},u_{\delta}\right) & =\lim _{\delta}\int_{\Omega}\left( G\left( x,u_{\delta}\left( x\right) \right) +\beta\left\vert g_{\delta}\left( x\right) \right\vert ^{2}\right) dx+\gamma\lim_{\delta}B_{h_{0}}\left( u_{\delta},u_{\delta}\right) \\
& \geq\int_{\Omega}\left( G\left( x,u\left( x\right) \right) +\beta\left\vert g\left( x\right) \right\vert ^{2}\right) dx+\gamma b_{h_{0}}\left( u,u\right) \\
& =J\left( g,u\right) .
\end{align*}
By linking the above chain of inequalities we have proved (\ref{ineq-fin}). Regarding uniqueness, it is sufficient to repeat the same argument of Remark \ref{unique}.
\end{proof}

\begin{acknowledgement}
This work was supported by the Spanish Project MTM2017-87912-P, Ministerio de Econom\'{\i}a, Industria y Competitividad (Spain), and by the Regional Project SBPLY/17/180501/000452, JJ. CC. de Castilla-La Mancha. There are no conflicts of interest related to this work.
\end{acknowledgement}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Does the top-quark have a large chiral weak-transition moment in $t\rightarrow W^+ b $ decay?}

By W-boson longitudinal-transverse quantum interference, the relatively simple 4-angle beam-referenced stage-two spin-correlation function
\begin{equation}
{{\mathcal{G}}{|}}_{0} + {{\mathcal{G}}{|}}_{sig}
\end{equation}
enables measurement of the relative phase of the 2 dominant amplitudes in $ t\rightarrow W^{+}b $ decay with both gluon and quark production contributions [1-4]. For the $t \rightarrow W^+ b$ decay mode, the relative phase between the dominant $A(0,-1/2)$ and $A(-1,-1/2)$ helicity amplitudes is $0^o$ for the standard model $V-A$ coupling. For the case of an additional chiral-tensorial-coupling in $g_L = g_{f_M + f_E} = 1$ units,
\begin{equation}
\frac{1}{2} \Gamma ^\mu =g_L \left[ \gamma ^\mu P_L + \frac{1}{2\Lambda _{+}}\,\imath \sigma ^{\mu \nu } ({q_W})_\nu P_R \right] = P_R \left[ \gamma ^\mu + \imath \sigma ^{\mu \nu } v_\nu \right]
\end{equation}
where $P_{L,R} = \frac{1}{2} ( 1 \mp \gamma_5 ) $ and $\Lambda_+ = E_W / 2 \sim 53\,\mathrm{GeV}$ in the top rest frame. In the case of such an additional large $t_R \rightarrow b_L$ chiral weak-transition moment, there is instead a $180^o$ relative phase between the $A(0,-1/2)$ and $A(-1,-1/2)$ helicity amplitudes. The associated on-shell partial-decay-width $\Gamma( t \rightarrow W^+ b )$ does differ for these two Lorentz-invariant couplings [$\Gamma_{SM}=1.55\,\mathrm{GeV}$, $\Gamma_+ = 0.66\,\mathrm{GeV}$, versus less than $12.7\,\mathrm{GeV}$ at 95\% C.L.\ in CDF conf.\ note 8953 (2007) in PDG2008].

\section{Helicity decay amplitudes}

In the $t_1$ rest frame, the matrix element for $t_1 \rightarrow {W_1}^{+} b$ is
\begin{equation}
\langle \theta _1^t ,\phi _1 ,\lambda _{W^{+} } ,\lambda _b |\frac 12,\lambda _1\rangle =D_{\lambda _1,\mu }^{(1/2)*}(\phi _1 ,\theta _1^t ,0)A\left( \lambda _{W^{+} } ,\lambda _b \right)
\end{equation}
where $\mu =\lambda _{W^{+} } -\lambda _b $ in terms of the ${W_1}^+$ and $b$-quark helicities. An asterisk denotes complex conjugation. The final ${W_1}^{+}$ momentum is in the $\theta _1^t ,\phi _1$ direction and the $b$-quark momentum is in the opposite direction. We use the Jacob-Wick phase convention. So there are 4 moduli and 3 relative phases to be measured; see refs.\ in [1]. Both in the SM and in the case of an additional large $t_R \rightarrow b_L $ chiral weak-transition moment, the $\lambda _{b}=-1/2$ and $\lambda_{\bar{b}}=1/2$ amplitudes are more than $\sim 30$ times larger than the $\lambda _{b}=1/2$ and $\lambda_{\bar{b}}=-1/2$ amplitudes.

\begin{table}[hbt]\centering
\vskip -2.cm
\includegraphics[width=12.cm]{TABaa.eps}
\caption{Helicity amplitudes for $(V-A)$ coupling and the $(+)$ coupling of Eq.(2). Note $A_{New} = A_{g_L =1} / \sqrt{\Gamma} $.}
\end{table}

\section{ Idea of a W-boson Longitudinal-Transverse interference measurement }

For the charged-lepton-plus-jets reaction $p p $ or $p \bar{p} \rightarrow t \bar{t} \rightarrow (W^+ b) (W^- \bar{b} ) \rightarrow (l^{+} \nu b ) (W^- \bar{b} )$, one selects a ``signal contribution'' so that its intensity-observable is the product of an amplitude in which the $W^+$ is longitudinally-polarized with the complex-conjugate of an amplitude in which the $W^+$ is transversely polarized, summed with the complex-conjugate of this product.
The helicity formalism is a general method for investigating applications of W-boson interference in stage-two spin-correlation functions for describing the charged-lepton plus jets channel, and for the di-lepton plus jets channel.

\section{ Observables and signatures }

The 2 dominant polarized partial widths are
\begin{eqnarray}
\Gamma (0,0) & \equiv &\left| A(0,-1/2)\right| ^{2}, \, \, \Gamma (-1,-1)\equiv \left| A(-1,-1/2)\right| ^{2}
\end{eqnarray}
The 2 W-boson Longitudinal-Transverse interference widths are
\begin{eqnarray}
\Gamma _{\mathit{R}}(0,-1) &=&\Gamma _{\mathit{R}}(-1,0)\equiv \mathrm{Re}\left[ A(0,-1/2)A(-1,-1/2)^{\ast }\right] \nonumber \\
& \equiv & |A(0,-1/2)||A(-1,-1/2)|\cos \beta _{L} \\
\Gamma _{\mathit{I}}(0,-1) &=&-\Gamma _{\mathit{I}}(-1,0) \equiv \mathrm{Im}\left[ A(0,-1/2)A(-1,-1/2)^{\ast }\right] \nonumber \\
& \equiv &-|A(0,-1/2)||A(-1,-1/2)|\sin \beta _{L}
\end{eqnarray}
The relative phase of these 2 dominant amplitudes is $\beta _{L}$. In both models
\begin{equation}
\text{Probability } W_{L}=\frac{\Gamma (0,0)}{\Gamma }=0.70
\end{equation}
\begin{equation}
\text{Probability } W_{T}=\frac{\Gamma (-1,-1)}{\Gamma }=0.30
\end{equation}
But there are the respective signatures
\begin{eqnarray}
\eta _{L} \equiv \frac{ \Gamma_R (0,-1) } { \Gamma } = + \, 0.46 \, (SM) \nonumber
\end{eqnarray}
for the Standard Model, and
\begin{eqnarray}
\eta _{L} = - \, 0.46 \, (+) \nonumber
\end{eqnarray}
for the case of an additional large chiral weak-transition moment. In both models, unless there is a violation of time-reversal invariance,
\begin{equation}
\eta _{L}^{\prime }\equiv \frac{\Gamma _{I}(0,-1)}{\Gamma }=0
\end{equation}

\section{ Definition of angles in spin-correlation function }

For $p p $ or $p \bar{p} \rightarrow t \bar{t} \rightarrow (W^+ b) (W^- \bar{b} ) \rightarrow (l^{+} \nu b ) (W^- \bar{b} )$, the ``Spin-Correlation Function'' depends on four angles. While the cosine of the \textbf{4th angle}, $\cos \Theta _{B}$, can be integrated out, the expressions are clearer if it is not. $\Theta _{B}$ is the ``beam referencing angle'' in the $(t\ \overline{t})_{cm}$ frame [3,4].

\textbf{The 3 angles are:} (i) The spherical angles $\mathbf{\theta }_{a}$ and $\phi _{a}$ which specify the final positive-charged lepton in the $W_{1}^{+}$ rest frame when the boosts are from the $(t\ \overline{t})_{cm}$ frame to the $t_1$ frame and then to the $W_{1}^{+}$ rest frame.

\begin{figure}[hbt]\centering
\includegraphics[width=10.cm]{CN1P.eps}
\caption{The spherical angles $ \theta_a$, $\phi_a $ specify the $ l^+ $ momentum in the ${W_1}^+$ rest frame.}
\end{figure}

\begin{figure}[hbt]\centering
\includegraphics[width=14.cm]{CN3P.eps}
\caption{Summary illustration showing the three angles $\theta _1^t$, $\theta _2^t$ and $\phi $ describing the first stage in the sequential-decays of the $t\bar{t}$ system in which $t_1 \rightarrow {W_1}^{+}b$ and $\bar{t}_2 \rightarrow {W_2}^{-}\bar b$. In (a) the $b$ momentum, not shown, is back to back with the ${W_1}^{+}$. In (b) the $\bar{b}$ momentum, not shown, is back to back with the ${W_2}^{-}$.}
\end{figure}

(ii) The cosine of the polar angle $\theta _{2}^{t}$ to specify the $W_{2}^{-}$ momentum direction in the anti-top rest frame. Usage of $\cos \theta _{2}^{t}$ is equivalent to using the $(t\ \overline{t})_{cm}$ energy of this hadronically decaying $W_{2}^{-}$.

\begin{figure}[hbt]\centering
\includegraphics[width=10.cm]{CAN1aP.eps}
\caption{The $g_1$ gluon-momentum or $q_1$ quark-momentum ``beam'' direction is specified by the spherical angles $\Theta_B, \Phi_B$.
$\cos \Theta _{B}$ can be integrated out.}
\end{figure}

\section{ Spin-correlation function }

By W-boson longitudinal-transverse interference, the relatively simple 4-angle beam-referenced stage-two spin-correlation function
\begin{equation}
{{\mathcal{G}}{|}}_{0} + {{\mathcal{G}}{|}}_{sig}
\end{equation}
enables measurement of the relative phase of the 2 dominant amplitudes in $ t\rightarrow W^{+}b $ decay: The ``background term'' is
\begin{eqnarray}
{ \mathcal{G}}{|} _{0} &=& \,\, B(s,\Theta _{B}){|} _{0} \,\, \left\{ \frac{1}{2}\Gamma (0,0)\sin ^{2}\theta _{a}+\Gamma (-1,-1)\sin ^{4}\frac{\theta _{a}}{2}\right\}
\end{eqnarray}
The ``signal term'' is
\begin{eqnarray}
{\mathcal{G}}{|}_{sig} &=&{-}\frac{\pi}{2\sqrt{2}} \,\,B(s,\Theta _{B}){|} _{sig} \,\, \cos \theta _{2}^{t}\sin \theta _{a}\sin ^{2}\frac{\theta _{a}}{2} \nonumber \\
&&\left\{ \Gamma _{R}(0,-1)\cos \phi _{a}-\Gamma _{I}(0,-1)\sin \phi _{a}\right\} {\mathcal{R}}
\end{eqnarray}
The signal contribution is suppressed by the factor ${\mathcal{R}} = ({\mathtt{prob}} \, W_L) - ({\mathtt{prob}} \, W_T) = 0.40 $. Eqs.\ (11,12) omit a common overall factor $ \frac{16\pi ^{3}g^{4}}{9 s^{2}} [\overline{\Gamma }(0,0)+\overline{\Gamma }(1,1)] $; see [3,4].

\bigskip \textbf{Beam-Referencing Factors: } In Eq.(10), one adds the quark and the gluon production contributions, with, for quark-production,
\begin{eqnarray}
{B}_{0}^{q}(s,\Theta _{B}) &=& \frac{1}{24}\left[ 1+\cos ^{2}\Theta _{B}+\frac{4m^{2}}{s}\sin ^{2}\Theta _{B}\right] \nonumber \\
{B}_{sig}^{q}(s,\Theta _{B}) &=& \frac{1}{24}\left[ 1+\cos ^{2}\Theta _{B}-\frac{4m^{2}}{s}\sin ^{2}\Theta _{B}\right]
\end{eqnarray}
and for gluon-production
\begin{eqnarray}
{B}_{0}^{g}(s,\Theta _{B}) &=& \overline{c}(s,\Theta _{B})[\sin ^{2}\Theta _{B}(1+\cos ^{2}\Theta _{B})+\frac{8m^{2}}{s}(\cos ^{2}\Theta _{B}+\sin ^{4}\Theta _{B})-\frac{16m^{4}}{s^{2}}(1+\sin ^{4}\Theta _{B})] \nonumber \\
{B}_{sig}^{g}(s,\Theta _{B}) &=& \overline{c}(s,\Theta _{B})[\sin ^{2}\Theta _{B}(1+\cos ^{2}\Theta _{B})-\frac{8m^{2}}{s}( 1 +\sin ^{2}\Theta _{B})+\frac{16m^{4}}{s^{2}}(1+\sin ^{4}\Theta _{B})]
\end{eqnarray}
where the overall gluon-pole-factor
\begin{equation}
\overline{c}(s,\Theta _{B})=\frac{3 s^{2} g^4 }{96 (m^{2}-t)^{2}(m^{2}-u)^{2}} \,\, \left[ 7+\frac{36p^{2}}{s}\cos ^{2}\Theta _{B}\right]
\end{equation}
depends on the $(t \bar{t})_{c.m.}$ center-of-momentum energy $\sqrt{s}$ and $\cos{\Theta_B}$, and includes the gluon color factor. In application, for instance to $p p \rightarrow t \bar{t} X$, parton-level top-quark spin-correlation functions need to be smeared with the appropriate parton-distribution functions, with integrations over $\cos\Theta_B$ and the $ ( t \bar{t} )_{c.m.} $ energy $\sqrt{s}$. There is a common final-state interference structure in these BR-S2SC functions for the charged-lepton plus jets reaction $p p $ or $p \bar{p} \rightarrow t \bar{t} \rightarrow \ldots $. From (11,12), the final-state relative phase effects do not depend on whether the final $t_1 \overline{t}_2$ system has been produced by gluon or by quark production. Measurement of the sign of the $\eta_L \equiv \frac{\Gamma _{R}(0,-1)}{\Gamma } = \pm 0.46$ (SM/+) helicity parameter could exclude a large chiral weak-transition moment in favor of the SM prediction.

\section{ The three theoretical numerical puzzles }

These puzzles arose in a general search for empirical ambiguities between the SM's $(V-A)$ coupling and possible single additional Lorentz couplings that could occur in top-quark decay experiments; see refs.\ in [1].
\bigskip \textbf{1st Puzzle's associated phenomenological $m_{top}$ mass formula: } With ICHEP2008 empirical mass values ($m_{t} = 172.4\pm1.2\,\mathrm{GeV}$, $m_{W} = 80.413\pm0.048\,\mathrm{GeV}$),
\begin{equation}
y=\frac{m_{W}}{m_{t}}=0.4664\pm 0.0035
\end{equation}
This can be compared with the amplitude equality in the upper part of ``Table 1'' (see the corresponding two ``220'' entries),
\begin{equation}
A_{+} (0,-1/2) = a \, A_{SM} (-1,-1/2)
\end{equation}
with $a=1+O(v\neq y\sqrt{2},x)$. By expanding in the mass ratio $x^{2}=(m_{b}/m_{t})^{2}=7\cdot 10^{-4}$,
\begin{eqnarray*}
1-\sqrt{2}y-y^{2}-\sqrt{2}y^{3}=x^{2}\left(\frac{2}{1-y^{2}}-\sqrt{2}y\right)-x^{4}\left(\frac{1-3y^2}{(1-y^2)^3}\right)+\ldots \\
=1.89x^{2}-0.748x^{4}+\ldots
\end{eqnarray*}
The only real-valued solution to this cubic equation is $y=0.46006$ ($m_{b}=0$).

\bigskip \textbf{Resolution of 2nd and 3rd Puzzles: } In the lower part of ``Table 1'', the two R-handed helicity b-quark amplitudes $A_{New} = A_{g_L =1} / \sqrt{\Gamma} $ have the same magnitude in the SM and the (+) model. As a consequence of Lorentz invariance, for $t \rightarrow W^+ b$, the 4 intensity-ratios, $\, {\Gamma_{L,T}}|_{\lambda_b = \mp \frac{1}{2}} / {\Gamma( t \rightarrow W^+ b )} \,$, are identical for the standard model $V-A$ coupling and for the case of an additional chiral weak-moment of relative strength $\Lambda_+ = E_W / 2$. This intensity-ratio equivalence does not depend on the numerical values of $y=\frac{m_{W}}{m_{t}}$ or $x= m_{b}/m_{t}$.

\section{ High energy properties of off-shell continuation }

A simple off-shell continuation of Eq.(2) is the $t \rightarrow W^+ b$ vertex
\begin{equation}
\frac{1}{2} \Gamma ^\mu =g_L \left[ \gamma ^\mu P_L + \frac{m_t } { p_t \cdot q_W }\imath \sigma ^{\mu \nu } ({q_W})_\nu P_R \right]
\end{equation}
To lowest order, this additional coupling has good high energy properties, i.e.\ it does not destroy 1-loop unitarity, for the processes $t \overline{t} \rightarrow W^{+}\ W^{-}$ and $b\ \overline{b}\rightarrow W^{-}\ W^{+}$ with both $W$-bosons with longitudinal polarization, nor with either or both $W$-bosons with transverse polarizations. There are of course no effects to lowest order for their analogues with a $Z^{o}Z^{o}$ final state, nor for the processes like $t\bar{b}\rightarrow e^{+}\nu _{e}$. By power-counting, versus the SM's fermion cancellations, there are no new effects for the ABJ gauge anomalies. The processes $t\bar{b}\rightarrow W^{+}\gamma$ and $t\bar{b}\rightarrow W^{+}H$, with $H$ a Higgs boson, are not divergent. However, the additional gauge-vertex diagram for the process $t\bar{b}\rightarrow W^{+}\ Z^{o}$ with longitudinal gauge-boson polarizations is linearly divergent in the large $E_{t}$ limit in the $(t\ \overline{b})_{cm}$ frame. Since this divergence involves a denominator factor of $ p_{t}\cdot (k_{W}+k_{Z})$, instead of additional neutral current couplings to the third-generation fermions, we introduce an additional more-massive $X^{\pm }$ boson to cancel this divergence. The additional $tXb$ vertex is
\begin{equation}
\frac{1}{2} \Gamma_X ^\mu = - g_L \left[ \frac{m_t } { p_t \cdot q_X }\imath \sigma ^{\mu \nu } ({q_ X})_\nu P_R \right]
\end{equation}
and the additional $WXZ$ vertex is of the same structure as the usual $WWZ$ vertex of the SM. This additional $tXb$ vertex does not affect the SM's fermion cancellations of ABJ gauge anomalies.
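Returning to the 1st puzzle above: the quoted root of the $m_b=0$ cubic is easy to reproduce numerically. The following check is ours (not part of the original analysis) and only confirms the arithmetic.
\begin{verbatim}
import numpy as np

# 1 - sqrt(2)*y - y**2 - sqrt(2)*y**3 = 0   (the m_b = 0 case);
# numpy.roots expects coefficients from the highest degree down.
r2 = np.sqrt(2.0)
roots = np.roots([-r2, -1.0, -r2, 1.0])
real_roots = [y.real for y in roots if abs(y.imag) < 1e-12]
print(real_roots)  # -> [0.46006...]; compare y = 0.4664 +/- 0.0035
\end{verbatim}
Since the derivative $-\sqrt{2}-2y-3\sqrt{2}y^{2}$ is negative for all real $y$, the cubic is strictly decreasing and the real root is indeed unique.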
\section{ Manifestation of intensity-ratio equivalence }

The $t \rightarrow W^+ b$ helicity decay amplitudes in the SM and those with the additional (+) coupling of Eqs.(2, 18) are related by a set of transformation matrices. Similar to matrix representations of Lie Groups, these matrices form various algebras and sub-algebras, with associated symmetries which should be useful in investigating deeper dynamics in top quark physics [1,2,5]. This is an analytic generalization of the many numerical patterns in Table 1, i.e.\ it is not merely 2 isolated amplitudes being related per Eq.(17). Instead, this makes manifest the intensity-ratio equivalence of Sec.~7. The $tWb$-transformations are: $A_{+}=v M \, A_{SM}$, $v P \, A_{SM}, \ldots $; $A_{SM}=v^{-1} M \, A_{+}$, $- v^{-1} P \, A_{+}, \ldots $; $v$ is the W-boson velocity in the t-quark rest frame; with $\Lambda_{+} =E_W /2$ but unfixed values of $m_W/m_t$ and $m_b/m_t$. Figs.~4--5 give a compact and complete summary.

\begin{figure}[hbt]\centering
\begin{minipage}[c]{.45\linewidth}\centering
\includegraphics[width=7.95cm]{fig4lefta.eps}
\end{minipage}
\hskip 0.05cm
\begin{minipage}[c]{.45\linewidth}\centering
\includegraphics[width=7.95cm]{fig4righta.eps}
\end{minipage}
\vskip -4.cm
\caption{Left: Self-Transformation factors between the helicity amplitudes of the Standard Model. Right: Self-Transformation factors between the helicity amplitudes of the (+) model; note the sign changes in the outer four factors versus those for the SM.}
\end{figure}

\begin{figure}[hbt]\raggedright
\hskip -0.5cm
\includegraphics[width=14.cm]{fig5c.eps}
\vskip 1.cm
\caption{Diagram displaying the roles of all the matrices in transforming the helicity amplitudes of either the SM or the (+) model. The matrices in the diagram form the 8-element closed transformation algebra. Included are the sign factors needed in order to get the correct transformation result using each of the matrices.}
\end{figure}

With $\alpha = a/v, \beta = b/v$, these matrices are: $M=$ $diag(1,-1,-1,1)$,
\begin{equation}
P(\alpha) \equiv \left[
\begin{array}{cccc}
0 & \alpha & 0 & 0 \\
- 1/\alpha & 0 & 0 & 0 \\
0 & 0 & 0 & -1/2\alpha \\
0 & 0 & 2\alpha & 0
\end{array}
\right], \,
B(\beta)\equiv \left[
\begin{array}{cccc}
0 & 0 & 0 & -\beta \\
0 & 0 & 2\beta & 0 \\
0 & 1/2\beta & 0 & 0 \\
-1/\beta & 0 & 0 & 0
\end{array}
\right],
\end{equation}
\begin{equation}
G(\alpha,\beta) \equiv \left[
\begin{array}{cccc}
0 & 0 & -2\alpha \beta & 0 \\
0 & 0 & 0 & \beta / \alpha \\
1/2 \alpha \beta & 0 & 0 & 0 \\
0 & -\alpha/\beta & 0 & 0
\end{array}
\right], \,
Q(\alpha) \equiv \left[
\begin{array}{cccc}
0 & \alpha & 0 & 0 \\
1/\alpha & 0 & 0 & 0 \\
0 & 0 & 0 & 1/2\alpha \\
0 & 0 & 2\alpha & 0
\end{array}
\right],
\end{equation}
\begin{equation}
C(\beta)\equiv \left[
\begin{array}{cccc}
0 & 0 & 0 & \beta \\
0 & 0 & 2\beta & 0 \\
0 & 1/2\beta & 0 & 0 \\
1/\beta & 0 & 0 & 0
\end{array}
\right], \,
H(\alpha,\beta) \equiv \left[
\begin{array}{cccc}
0 & 0 & 2\alpha \beta & 0 \\
0 & 0 & 0 & \beta / \alpha \\
1/2 \alpha \beta & 0 & 0 & 0 \\
0 & \alpha/\beta & 0 & 0
\end{array}
\right].
\end{equation}
Including the identity matrix, this is the closed 8-element transformation algebra with a commutator/anticommutator structure. It has $7=4_{-}+3_{+}$ three-element closed subalgebras (the $\mp$ subscript denotes non-trivial commutators/anticommutators). There are $2\times2$ matrix compositions of the above matrices with associated commutator/anticommutator algebras.
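A few of the composition rules summarized in Figs.~4--5 can be spot-checked numerically. The identities below ($P(\alpha)^2=-1$, $B(\beta)^2=+1$, $H=PB$, $G=MBP$) are checks we verified directly against the matrices as printed above; the complete set of products and sign factors is the one displayed in Fig.~5.
\begin{verbatim}
import numpy as np

def P(a):
    return np.array([[0, a, 0, 0], [-1/a, 0, 0, 0],
                     [0, 0, 0, -1/(2*a)], [0, 0, 2*a, 0]])

def B(b):
    return np.array([[0, 0, 0, -b], [0, 0, 2*b, 0],
                     [0, 1/(2*b), 0, 0], [-1/b, 0, 0, 0]])

def G(a, b):
    return np.array([[0, 0, -2*a*b, 0], [0, 0, 0, b/a],
                     [1/(2*a*b), 0, 0, 0], [0, -a/b, 0, 0]])

def H(a, b):
    return np.array([[0, 0, 2*a*b, 0], [0, 0, 0, b/a],
                     [1/(2*a*b), 0, 0, 0], [0, a/b, 0, 0]])

M = np.diag([1.0, -1.0, -1.0, 1.0])
I = np.eye(4)
a, b = 0.7, 1.3  # arbitrary nonzero test values

assert np.allclose(P(a) @ P(a), -I)           # P(alpha)^2 = -1
assert np.allclose(B(b) @ B(b), I)            # B(beta)^2  = +1
assert np.allclose(P(a) @ B(b), H(a, b))      # H = P B
assert np.allclose(M @ B(b) @ P(a), G(a, b))  # G = M B P
\end{verbatim}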
\begin{acknowledgments} For their assistance in various stages of this research, we thank E.G. Barbagiovanni, J.J. Berger, A. Borgia, P. Gan, R. Joachim, N.H. Kim, S. Piotrowski, E.K. Pueschel, and J.R. Wickman. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Analysis of Offline Local Search Algorithms for Facility Location}
\label{appendix:local-search}
In this section, we prove theorems related to the local search algorithms for facility location.
\subsection{Local Search for facility location}
\uflapprox*
\begin{proof}
This proof is almost identical to the analysis of the $\alpha_{\mathsf{FL}}$-approximation local search algorithm for facility location, except that we take $\phi$ into consideration in all the inequalities. Eventually we shall have an $\alpha_{\mathsf{FL}}|C|\phi$ term on the right side of the inequality.

Formally, we let $(S^*, \sigma^*)$ be the optimum solution to the facility location instance. Focus on an $i^* \in S^*$. Since there is no $\phi$-efficient operation that opens $i^*$ (recall that we can open $i^*$ even if we have $i^*\in S$), we have
\begin{align*}
\sum_{j \in \sigma^{*-1}(i^*) }d(j,\sigma_j) \le \lambda f_{i^* } \cdot 1_{i^*\notin S} + \sum_{j \in \sigma^{*-1}(i^*)} (d(j,i^*) + \phi) .\end{align*}
This implies
\begin{align}
\sum_{j \in \sigma^{*-1}(i^*) } d(j,\sigma_j) \le \lambda f_{i^*} + \sum_{j \in \sigma^{*-1}(i^*)} d(j,i^*) + |\sigma^{*-1}(i^*)|\phi. \label{inequ:ufl-open}
\end{align}
Summing the inequalities over all $i^* \in S^*$ gives us
\begin{align}
\mathsf{cc}(\sigma) \leq \lambda f(S^*) + \mathsf{cc}(\sigma^*) + |C|\phi. \label{inequ:ufl-C}
\end{align}
For every $i \in S$, let $\psi(i)$ be the nearest facility in $S^*$ to $i$. For every $i^* \in S^*$ with $\psi^{-1}(i^*) \neq \emptyset$, let $\psi^*(i^*)$ be the nearest facility in $\psi^{-1}(i^*)$ to $i^*$. Focus on some $i \in S, i^* = \psi(i)$ such that $\psi^*(i^*) = i$. The operation that swaps in $i^*$, swaps out $i$ and connects $\sigma^{*-1}(i^*) \cup \sigma^{-1}(i)$ to $i^*$ is not $\phi$-efficient. This implies
\begin{align*}
&\quad \lambda f_i + \sum_{j \in \sigma^{*-1}(i^*) \cup \sigma^{-1}(i)} d(j,\sigma_j) \\
& \le \lambda f_{i^*} + \sum_{j \in \sigma^{*-1}(i^*)}d(j, i^*) + \sum_{j \in \sigma^{-1}(i) \setminus \sigma^{*-1}(i^*)}d(j, i^*) + \big|\sigma^{*-1}(i^*) \cup \sigma^{-1}(i)\big|\phi \\
& \le \lambda f_{i^*} + \sum_{j \in \sigma^{*-1}(i^*)} d(j,i^*) + \sum_{j \in \sigma^{-1}(i) \setminus \sigma^{*-1}(i^*)} [d(j,\sigma^*(j)) + 2d(j,i)] + \big|\sigma^{*-1}(i^*) \cup \sigma^{-1}(i)\big|\phi.
\end{align*}
To see the second inequality, notice that $d(j, i^*) \leq d(j, i) + d(i, i^*) \leq d(j, i) + d(i, \sigma^*(j)) \leq 2d(j, i) + d(j, \sigma^*(j))$. Canceling $\sum_{j \in \sigma^{-1}(i) \setminus \sigma^{*-1}(i^*)} d(j, i)$ on both sides and relaxing the right side a bit gives us
\begin{align}
\quad \lambda f_i + \sum_{j \in \sigma^{*-1}(i^*)} d(j,\sigma_j)&\leq \lambda f_{i^*} + \sum_{j \in \sigma^{*-1}(i^*)}d(j, i^*) + \big|\sigma^{*-1}(i^*) \cup \sigma^{-1}(i)\big|\phi \nonumber\\
&+ \sum_{j \in \sigma^{-1}(i)} \left(d(j,i) + d(j, \sigma^*(j))\right). \label{inequ:ufl-swap}
\end{align}
Notice that it could happen that $i = i^*$ in the above setting; the inequality is then implied by the operation that opens $i = i^*$ and connects $\sigma^{*-1}(i^* = i)$ to $i$.

Now, focus on an $i \in S$ with $\psi^*(\psi(i)) \neq i$. Then closing $i$ and connecting each client $j \in \sigma^{-1}(i)$ to $\psi^*(\sigma^*(j)) \neq i$ is not $\phi$-efficient. So, we have
\begin{align*}
\lambda f_i + \sum_{j \in \sigma^{-1}(i)} d(j,i) &\le \lambda f_i+ \sum_{j \in \sigma^{-1}(i)}d(j, \psi^*(\sigma^*(j)) ) + \big|\sigma^{-1}(i)\big|\phi \\
& \le \sum_{j \in \sigma^{-1}(i)} [2d(j,\sigma^*(j)) + d(j,i)] + \big|\sigma^{-1}(i)\big|\phi.
\end{align*}
To see the inequality, we have $d(j, \psi^*(\sigma^*(j))) \leq d(j, \sigma^*(j)) + d(\sigma^*(j), \psi(\sigma^*(j))) \leq d(j, \sigma^*(j)) + d( \sigma^*(j), i) \leq 2d(j, \sigma^*(j)) + d(j, i)$. This implies
\begin{align}
\lambda f_i \leq 2\sum_{j \in \sigma^{-1}(i)} d(j,\sigma^*(j)) + \big|\sigma^{-1}(i)\big|\phi. \label{inequ:ufl-close}
\end{align}
Now, consider the inequality obtained by summing up \eqref{inequ:ufl-swap} for all pairs $(i, i^*)$ with $i^* = \psi(i)$ and $\psi^*(i^*) = i$, \eqref{inequ:ufl-close} for all $i$ with $\psi^*(\psi(i)) \neq i$, and \eqref{inequ:ufl-open} for all $i^*$ with $\psi^{-1}(i^*) = \emptyset$. This inequality will be $ \lambda f(S) + \mathsf{cc}(\sigma) \leq \lambda f(S^*) + 2\mathsf{cc}(\sigma^*) + \mathsf{cc}(\sigma) + 2|C|\phi$, which is
\begin{align}
\lambda f(S) \leq \lambda f(S^*) + 2\mathsf{cc}(\sigma^*) + 2|C|\phi. \label{inequ:ufl-F}
\end{align}
Summing up Inequalities~\eqref{inequ:ufl-C} and $1/\lambda$ times \eqref{inequ:ufl-F} gives $f(S) + \mathsf{cc}(\sigma) \leq (1+\lambda) f(S^*) + (1+2/\lambda)\left(\mathsf{cc}(\sigma^*) + |C|\phi\right) = \alpha_{\mathsf{FL}}\left(\mathsf{opt} + |C|\phi\right)$, since $1 + \lambda = 1+2/\lambda = 1+\sqrt{2}=\alpha_{\mathsf{FL}}$. This finishes the proof of Theorem~\ref{thm:FL-offline-apx-ratio}.
\end{proof}

\ufloperations*

The theorem follows from the proof of Theorem~\ref{thm:FL-offline-apx-ratio}; let $\phi = 0$ in the theorem statement and the proof. \eqref{inequ:ufl-C} and \eqref{inequ:ufl-F} were obtained by adding many of the inequalities of the form \eqref{inequ:ufl-open}, \eqref{inequ:ufl-swap} and \eqref{inequ:ufl-close}. Notice that each inequality corresponds to a local operation. In the setting of Theorem~\ref{thm:FL-offline-operations}, the inequalities do not hold anymore, since we no longer have the condition that $0$-efficient operations do not exist. However, for an inequality corresponding to an operation $\textrm{op}$, we can add $\nabla_{\textrm{op}}$ to the right side so that the inequality becomes satisfied. Then adding all the inequalities that were used to obtain \eqref{inequ:ufl-C}, we obtain
\begin{align*}
\mathsf{cc}(\sigma) \leq \lambda f(S^*) + \mathsf{cc}(\sigma^*) + \sum_{\textrm{op} \in {\mathcal{P}}_{\mathrm{C}}} \nabla_{\textrm{op}}
\end{align*}
where ${\mathcal{P}}_{\mathrm{C}}$ is the set of operations corresponding to the inequalities. Similarly, we can obtain a set ${\mathcal{P}}_{\mathrm{F}}$ of operations such that
\begin{align*}
\lambda f(S) \leq \lambda f(S^*) + 2\mathsf{cc}(\sigma^*) + \sum_{\textrm{op} \in {\mathcal{P}}_{\mathrm{F}}} \nabla_{\textrm{op}}.
\end{align*}
It is easy to check that each of ${\mathcal{P}}_{\mathrm{C}}$ and ${\mathcal{P}}_{\mathrm{F}}$ contains at most one operation that opens or swaps in $i^*$ for every $i^* \in S^* \subseteq F$, and does not contain operations that open or swap in facilities outside $S^*$. ${\mathcal{P}}_{\mathrm{C}} \uplus {\mathcal{P}}_{\mathrm{F}}$ contains at most $|S| \leq |F|$ close operations. Rewriting the two inequalities almost gives us Theorem~\ref{thm:FL-offline-operations}, except for the requirement that each $\textrm{op} \in {\mathcal{P}}_{\mathrm{C}} \cup {\mathcal{P}}_{\mathrm{F}}$ has $\nabla_{\mathrm{op}} > 0$; this can be ensured by removing $\textrm{op}$'s with $\nabla_{\textrm{op}} \leq 0$ from ${\mathcal{P}}_{\mathrm{C}}$ and ${\mathcal{P}}_{\mathrm{F}}$.

\section{Proofs of Useful Lemmas}
\label{appendix:helper-proofs}
\helpersumba*
\begin{proof}
Define $a_{T+1} = +\infty$.
\begin{align*}
\sum_{t = 1}^T \frac{b_t}{a_t} &= \sum_{t = 1}^T \frac{B_t - B_{t-1}}{ a_{t}}=\sum_{t = 1}^{T} B_t \left(\frac{1}{a_t} - \frac{1}{a_{t+1}}\right) = \sum_{t = 1}^{T}\frac{B_t}{a_t} \left(1 - \frac{a_t}{a_{t+1}}\right) \leq \alpha \sum_{t = 1}^{T}\left(1 - \frac{a_{t}}{a_{t+1}}\right)\\
&=\alpha T- \alpha\sum_{t = 1}^{T-1}\frac{a_t}{a_{t+1}} \leq \alpha T- \alpha(T-1)\Big(\frac{a_1}{a_T}\Big)^{1/(T-1)} \\
&= \alpha(T-1)\left(1-e^{-\ln\frac{a_T}{a_1}/(T-1)}\right) + \alpha \leq \alpha(T-1)\ln\frac{a_T}{a_1}/(T-1) + \alpha = \alpha\left(\ln \frac{a_T}{a_1}+1\right).
\end{align*}
The inequality in the second line used the following fact: if the product of $T-1$ positive numbers is $\frac{a_1}{a_T}$, then their sum is minimized when they are all equal. The inequality in the third line used that $1-e^{-x} \leq x$ for every $x$.
\end{proof}

\helperstar*
\begin{proof}
By the conditions in the lemma, opening facility $i$ and reconnecting $\tilde C$ to $i$ is not $\phi$-efficient. This gives that, at the moment, we have
\[
\sum_{\tilde j \in \tilde C}d(\tilde j, S) \leq \sum_{\tilde j \in \tilde C}d(\tilde j, \sigma_{\tilde j}) \leq f_i + \sum_{\tilde j \in \tilde C} d(i , \tilde j) + |\tilde C|\cdot \phi.
\]
By triangle inequalities we have $ d(\tilde j, S) \ge d(i, S) - d(i, \tilde j )$ for every $\tilde j \in \tilde C$. Combining with the previous inequality yields:
\begin{align*}
d(i, S) \le \frac{1}{|\tilde C|}\sum_{\tilde j \in \tilde C}\left(d(\tilde j, S) + d(i, \tilde j)\right) \leq \frac{ f_i + 2\sum_{\tilde j \in \tilde C} d(i, \tilde j) }{|\tilde C|}+ \phi. \hspace*{80pt} \qedhere
\end{align*}
\end{proof}

\section{Moving Clients to Facility Locations}
\label{appendix:moving-clients}
In this section we show that, by moving clients to their nearest facilities, we lose a multiplicative factor of $2$ and an additive factor of $1$ in the approximation. That is, an $\alpha$-approximate solution for the new instance is $(2\alpha+1)$-approximate for the original instance. Throughout this section, we simply use the set of open facilities to define a solution, and all clients are connected to their respective nearest open facilities.

Let a facility location instance be given by $F$, $(f_i)_{i \in F}$, $C$ and $d$. Let $\psi_j$ be the nearest facility in $F$ to $j$ for every $j \in C$. By moving every client $j$ to $\psi_j$, we obtain a new instance. Let $S^*$ be the optimum solution to the original instance. Suppose we have a solution $S$ for the new instance that is $\alpha$-approximate. Thus $f(S) + \sum_{j \in C}d(\psi_j, S) \leq \alpha\left(f(S^*) + \sum_{j \in C}d(\psi_j, S^*)\right)$. We show that $S$ is $(2\alpha+1)$-approximate for the original instance.

Notice that for every $j \in C$, we have $d(j, S) - d(j, \psi_j ) \leq d(\psi_j, S) \leq d(j, S) + d(j, \psi_j)$ by triangle inequalities.
\begin{align*}
f(S) + \sum_{ j \in C} d(j, S) &\leq f(S) + \sum_{ j \in C} \left(d(\psi_j, S) + d(j, \psi_j)\right) \\
&\leq \alpha\left(f(S^*) + \sum_{j \in C}d(\psi_j, S^*)\right) + \sum_{j \in C} d(j, \psi_j)
\end{align*}
For every $j \in C$, since $\psi_j$ is the nearest facility in $F$ to $j$, we have $d(\psi_j, S^*) \leq d(j, \psi_j) + d(j, S^*) \leq 2d(j, S^*)$. Thus, we have
\begin{align*}
f(S) + \sum_{ j \in C} d(j, S) &\leq \alpha f(S^*) + 2\alpha\sum_{j \in C}d(j, S^*) + \sum_{j \in C} d(j, \psi_j)\\
&\leq \alpha f(S^*) + (2\alpha + 1)\sum_{j \in C}d(j, S^*).
\end{align*}
Thus, we have that $S$ is a $(2\alpha+1)$-approximate solution for the original instance.
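The reduction itself is immediate to implement. The sketch below is our own illustration (all identifiers are ours): it computes the snapping map $\psi$, which is all that is needed to form the new instance.
\begin{verbatim}
def snap_clients(C, F, d):
    """Map each client j to its nearest facility location psi[j].
    Solving the instance in which every client j is replaced by psi[j]
    with an alpha-approximate algorithm yields, by the analysis above,
    a (2*alpha + 1)-approximate solution to the original instance."""
    return {j: min(F, key=lambda i: d[j][i]) for j in C}
\end{verbatim}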
\section{Missing Proofs from Section~\ref{sec:fast-UFL}}
\samplelocalsearch*
\begin{proof}
We are going to lower bound the expected value of $\mathsf{cost}_\lambda(S^0, \sigma^0) - \mathsf{cost}_\lambda(S^1, \sigma^1)$. By Theorem~\ref{thm:FL-offline-operations}, there are two sets ${\mathcal{P}}_{\mathrm{C}}$ and ${\mathcal{P}}_{\mathrm{F}}$ of local operations satisfying the stated properties. Below, we let ${\mathcal{Q}}$ be one of the following three sets: ${\mathcal{P}}_{\mathrm{C}}$, ${\mathcal{P}}_{\mathrm{F}}$, or ${\mathcal{P}}_{\mathrm{C}} \biguplus {\mathcal{P}}_{\mathrm{F}}$. For every $i \in F$, let ${\mathcal{Q}}_{i}$ be the set of operations in ${\mathcal{Q}}$ that open or swap in $i$. Let ${\mathcal{Q}}_{\emptyset}$ be the set of $\mathsf{close}$ operations in ${\mathcal{Q}}$. Let $\Phi_i$ be the maximum of $\nabla_{\mathsf{op}}$ over all $\mathsf{op} \in {\mathcal{Q}}_{i}$ (define $\Phi_i = 0$ if ${\mathcal{Q}}_{i} = \emptyset$); define $\Phi_\emptyset$ similarly. Notice that if $i \in S$, then opening $i$ will not decrease the cost, since we maintain that all the clients are connected to their nearest open facilities. Thus, ${\mathcal{Q}}_{i} = \emptyset$ for $i \in S$. Conditioned on the event that we consider $\mathsf{close}$ operations in $\mathsf{sampled\mhyphen local\mhyphen search}$, the cost decrement of the iteration is at least $\Phi_\emptyset$. Conditioned on the event that we consider opening or swapping in $i$ in the iteration, the decrement is at least $\Phi_i$. Thus, $\mathsf{cost}_\lambda(S^0, \sigma^0) - \E[\mathsf{cost}_\lambda(S^1, \sigma^1)] \geq \frac{\Phi_\emptyset}{3} + \sum_{i \in F\setminus S}\frac{2\Phi_i}{3|F \setminus S|}$. Therefore,
\begin{align*}
\sum_{\mathsf{op} \in {\mathcal{Q}}}\nabla_{\mathsf{op}} &\leq |{\mathcal{Q}}_\emptyset|\Phi_\emptyset + \sum_{i \in F \setminus S}|{\mathcal{Q}}_i|\Phi_i \leq |F|\Phi_\emptyset + 2\sum_{i \in F \setminus S}\Phi_i \\
&\leq 3 |F|(\mathsf{cost}_\lambda(S^0, \sigma^0) - \E[\mathsf{cost}_\lambda(S^1, \sigma^1)]),
\end{align*}
since the third and fourth properties in the theorem imply $|{\mathcal{Q}}_\emptyset| \leq |F|$ and $|{\mathcal{Q}}_i| \leq 2$ for every $i \in F \setminus S$. Replacing ${\mathcal{Q}}$ with each of ${\mathcal{P}}_{\mathrm{C}}$, ${\mathcal{P}}_{\mathrm{F}}$ and ${\mathcal{P}}_{\mathrm{C}} \biguplus {\mathcal{P}}_{\mathrm{F}}$, we obtain
\begin{align*}
\mathsf{cost}_\lambda(S^0, \sigma^0) - \E[\mathsf{cost}_\lambda(S^1, \sigma^1)] \geq \frac1{3 |F|}\max\left\{
\begin{array}{c}
\mathsf{cc}(\sigma^0) - (\lambda f(S^*) + \mathsf{cc}(\sigma^*))\\
\lambda f(S) - (\lambda f(S^*) + 2\mathsf{cc}(\sigma^*))\\
\mathsf{cost}_\lambda(S^0, \sigma^0) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*))
\end{array}
\right\}.
\end{align*}
This finishes the proof of the lemma.
\end{proof}

\fliterate*
\begin{proof}
We break the procedure into two stages. The first stage contains $M_1 = O\left( |F|\log\frac{\Gamma}{\epsilon'}\right)$ iterations of the for-loop in $\mathsf{FL\mhyphen iterate}(M)$, where $M_1$ is sufficiently large.
Applying Lemma~\ref{lemma:sample-local-search} and using the third term in the $\max$ operator, for any execution of $\mathsf{sampled\mhyphen local\mhyphen search}$, we have
\begin{align*}
&\quad \E\big[\big(\mathsf{cost}_\lambda(S^1, \sigma^1) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*))\big)_+\big] \\
&\leq \left(1- \frac{1}{3 |F|}\right)\big(\mathsf{cost}_\lambda(S^0, \sigma^0) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*))\big)_+,
\end{align*}
where $(S^0, \sigma^0)$ and $(S^1, \sigma^1)$ are as defined w.r.t.\ the execution, and $x_+$ is defined as $\max\{x ,0\}$ for every real number $x$. Notice that when $\mathsf{cost}_\lambda(S^0, \sigma^0) \leq 2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*)$, the inequality holds trivially. Truncating at $0$ is needed later when we apply the Markov inequality. So, after $M_1$ iterations, we have
\begin{align*}
&\quad \E\big[\big(\mathsf{cost}_\lambda(S, \sigma) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*))\big)_+\big] \\
&\leq \left(1- \frac{1}{3 |F|}\right)^{M_1}\big(\mathsf{cost}_\lambda(S^\circ, \sigma^\circ) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*))\big)_+\leq \frac{\epsilon'}{2\Gamma}\mathsf{opt}.
\end{align*}
The second inequality holds since $\mathsf{cost}_\lambda(S^\circ, \sigma^\circ) \leq \lambda\mathsf{cost}(S^\circ, \sigma^\circ) \leq O(1)\mathsf{opt}$ and $M_1 = O\left(|F|\log\frac{\Gamma}{\epsilon'}\right)$ is sufficiently large. Using Markov's inequality, with probability at least $1-\frac{1}{2\Gamma}$, at the end of the first stage we have
$$(\mathsf{cost}_\lambda(S, \sigma) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*)))_+ \leq \epsilon'\cdot\mathsf{opt}.$$
If this event happens, we say the first stage is successful.

We assume the first stage is successful and analyze the second stage. The second stage contains $\log_2(2\Gamma)$ phases, and each phase contains $\frac{48 |F|}{\epsilon'}$ iterations. We focus on one phase in the stage. Assume that at the beginning of an iteration in the phase, we have
\begin{align*}
\mathsf{cc}(\sigma) \leq \big(\lambda + \frac{\epsilon'}2\big) f(S^*) + \big(1+\frac{\epsilon'}2\big)\mathsf{cc}(\sigma^*) \text{ and } \lambda f(S) \leq \big(\lambda + \frac{\lambda\epsilon'}2\big) f(S^*) + \big(2+\frac{\lambda\epsilon'}2\big)\mathsf{cc}(\sigma^*).
\end{align*}
Then at the moment, we have $\mathsf{cost}(S, \sigma) \leq (1 + \lambda + \epsilon')f(S^*) + (1+2/\lambda + \epsilon')\mathsf{cc}(\sigma^*) = (\alpha_{\mathsf{FL}} + \epsilon')\mathsf{opt}$ (obtained by adding the first inequality and $1/\lambda$ times the second inequality). Then we must have $\mathsf{cost}(S^{\mathsf{best}}, \sigma^{\mathsf{best}}) \leq (\alpha_{\mathsf{FL}} + \epsilon')\mathsf{opt}$ at the end of this execution of $\mathsf{FL\mhyphen iterate}$, since $(S^{\mathsf{best}}, \sigma^{\mathsf{best}})$ is the best solution according to the original (i.e., non-scaled) cost. Thus, we say a phase in the second stage is successful if both inequalities hold at the end of some iteration in the phase; then we can pretend that the phase ends at the moment it is successful. If one of the two inequalities does not hold at the end of an iteration, then by Lemma~\ref{lemma:sample-local-search}, for the execution of $\mathsf{sampled\mhyphen local\mhyphen search}$ in the next iteration, we have $\mathsf{cost}_\lambda(S^0, \sigma^0) - \E[\mathsf{cost}_{\lambda}(S^1, \sigma^1)] \geq \frac{\epsilon'}{6 |F|}(f(S^*) + \mathsf{cc}(\sigma^*)) = \frac{\epsilon'}{6 |F|}\mathsf{opt}$.
Then, by a stopping-time argument for martingales, in expectation the phase stops in at most $\frac{24 |F|}{\epsilon'}$ iterations, since at the beginning of the phase we have $\mathsf{cost}_\lambda(S, \sigma) \leq \max\{3+\epsilon', 2\lambda+\epsilon'\}(f(S^*) + \mathsf{cc}(\sigma^*)) \leq 4\cdot\mathsf{opt}$ and $\mathsf{cost}_\lambda(S, \sigma)$ is always positive. By Markov's inequality, the probability that the phase does not stop early (i.e., is not successful) is at most $1/2$. The probability that the second stage succeeds, i.e., that at least one of its phases succeeds, is at least $1-1/(2\Gamma)$. Thus, with probability at least $1-1/\Gamma$, both stages succeed and we have $\mathsf{cost}(S^{\mathrm{best}}, \sigma^{\mathrm{best}}) \leq (\alpha_{\mathsf{FL}} + \epsilon')\mathsf{opt}$. The number of iterations we need in the two stages is $O\left(\frac{ |F|}{\epsilon'}\log \Gamma\right)$.
\end{proof}

\section{Open Problems and Discussions}
\label{sec:discussions}
We initiated the study of the facility location problem in general metric spaces in recourse and dynamic models. Several interesting problems remain open. The most obvious one is whether we can get $O(1)$-competitive online/dynamic algorithms with polylog amortized recourse or fast update times in the fully dynamic setting. Another interesting direction is whether we can extend our results to capacitated facility location and capacitated $k$-median, where there is an upper bound on the number of clients that can be assigned to a single open facility. From a technical point of view, it would be interesting to find more applications of local search and probabilistic tree embedding techniques in the dynamic algorithms model. Finally, as alluded to in the introduction, an exciting research direction is to understand the power of recourse in the online model.

\section{Introduction}
\label{sec:intro}
In the {\em (uncapacitated) facility location problem}, we are given a metric space $(F \cup C,d)$, where $F$ is the set of facility locations, $C$ is the set of clients, and $d: (F \cup C) \times (F \cup C) \rightarrow {\mathbb{R}}_{\geq 0}$ is a distance function, which is non-negative, symmetric and satisfies triangle inequalities. For each location $i \in F$, there is a facility opening cost $f_i \geq 0$. The goal is to open a subset $S \subseteq F$ of facilities so as to minimize the cost of opening the facilities plus the connection cost. The cost of connecting a client $j$ to an open facility $i$ is equal to $d(j,i)$. Hence, the objective function can be expressed concisely as $\min_{S\subseteq F} \left(f(S) + \sum_{j \in C}d(j, S)\right)$, where for a set $S \subseteq F$, $f(S) := \sum_{i \in S}f_i$ is the total facility cost of $S$ and $d(j, S):=\min_{i \in S}d(j, i)$ denotes the distance of $j$ to the nearest location in $S$.

The facility location problem arises in countless applications: in the placement of servers in data centers, network design, wireless networking, data clustering, location analysis for placement of fire stations, medical centers, and so on. Hence, the problem has been studied extensively in many different communities: approximation algorithms, operations research, and computational geometry. In the approximation algorithms literature in particular, the problem occupies a prominent position, as the development of every major technique in the field is tied to its application to the facility location problem. See the textbook by Williamson and Shmoys \cite{Williamson} for more details.
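For concreteness, the objective above is easy to evaluate in a few lines of code. The sketch below is only an illustration of the cost function just defined; all identifiers are ours.
\begin{verbatim}
def fl_cost(S, f, d, C):
    """f(S) + sum_j d(j, S): the opening cost of the set S of open
    facilities plus the cost of connecting every client to its
    nearest open facility."""
    assert S, "at least one facility must be open"
    return sum(f[i] for i in S) + sum(min(d[j][i] for i in S) for j in C)

# Tiny example: facility locations "a", "b" and clients 1, 2, 3.
f = {"a": 3.0, "b": 5.0}
d = {1: {"a": 1.0, "b": 4.0},
     2: {"a": 2.0, "b": 1.0},
     3: {"a": 5.0, "b": 1.0}}
print(fl_cost({"a"}, f, d, [1, 2, 3]))       # 3 + (1 + 2 + 5) = 11.0
print(fl_cost({"a", "b"}, f, d, [1, 2, 3]))  # 8 + (1 + 1 + 1) = 11.0
\end{verbatim}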
The problem is hard to approximate to a factor better than 1.463 \cite{Guha1998}. The current best-known polynomial-time algorithm is given by the third author, and achieves a 1.488-approximation \cite{Li13}. In many real-world applications the set of clients arrives online, the metric space can change over time, and there can be memory constraints. This has motivated the problem to be studied in various models: online \cite{Meyerson2001,Fotakis08algorithmica,Anagnostopoulos2004,Fotakis2007}, dynamic \cite{Cohen-Addad19, Goranci19,CyganCMS18,Wesolowsky:1973,Farahani2009, Eisenstat,AnNS15}, incremental \cite{Fotakis06,Charikar1997,Fotakis2011}, streaming \cite{Indyk2004, Fotakis11, Lammersen, Czumaj2013, Charikar1997}, game theoretic \cite{Vetta2002,FotakisT13,FotakisT13a}, to name a few. This paper is concerned with {\em online and dynamic models}; thus, to keep the flow of presentation linear, we restrict ourselves to the results in these two models here.

\medskip Motivated by its applications in network design and data clustering, Meyerson \cite{Meyerson2001} initiated the study of the facility location problem in the online setting. Here, clients arrive online one-by-one, and the algorithm has to assign the newly arriving client to an already opened facility or needs to open a new facility to serve the request. The decisions made by the algorithm are {\em irrevocable}, in the sense that a facility that is opened cannot be closed and the clients cannot be reassigned. In the online setting, Meyerson \cite{Meyerson2001} designed a very elegant randomized algorithm that achieves an $O(\log n)$ competitive ratio, and also showed that no online algorithm can obtain an $O(1)$ competitive ratio. This result was later extended by Fotakis \cite{Fotakis08algorithmica} to obtain an {\em asymptotically optimal} $O(\log n/\log \log n)$-competitive algorithm. Both the algorithms and analysis techniques in \cite{Fotakis08algorithmica, Meyerson2001} were influential, and found many applications in other models such as streaming \cite{Fotakis2011}. The lower bound in Fotakis \cite{Fotakis08algorithmica} holds even in very special metric spaces such as HSTs or the real line. Since then, several online algorithms have been designed achieving the same competitive ratio with more desirable properties such as being deterministic \cite{Anagnostopoulos2004}, primal-dual \cite{Fotakis2007}, or having a small memory footprint \cite{Fotakis11}. We refer to a beautifully written survey by Fotakis \cite{Fotakis2011} for more details.

The main reason to assume that decisions made by an algorithm are irrevocable is that the cost of changing the solution is high in some applications. However, if one examines the above applications closely, say for example connecting clients to servers in data centers, it is more natural to assume that decisions need not be irrevocable but the algorithm {\em should not change the solution too much}. This is even more true in modern data centers where topologies can be reconfigured; see \cite{GhobadiMPDKRBRG16} for more details. A standard way of quantifying the restriction that an online algorithm does not make too many changes is using the notion of {\em recourse}. The recourse per step of an online algorithm is the {\em number of changes} it makes to the solution.
Recourse captures the {\em minimal} amount of changes an online algorithm {\em has to make} to maintain a desired competitive ratio due to the {\em information theoretic} limits. For the facility location problem, depending on the application, the recourse can correspond to: 1) the number of changes made to the opened facilities (called \emph{facility recourse}); 2) the number of reconnections made to the clients (called \emph{client recourse}). Notice that we can assume that for every facility we open/close, we have to connect/disconnect at least one client; thus the client recourse is at least the facility recourse. In the clustering applications arising in massive data sets, the opened facilities represent cluster centers, which serve as summaries of the data. Here one is interested in making sure that summaries do not change too frequently as more documents are added online. Therefore, facility recourse is a good approximation to the actual cost of changing the solution \cite{Charikar1997,Fotakis06}. On the other hand, in network design problems, client recourse is the true indicator of the cost to implement the changes in the solution. As a concrete example, consider the problem of connecting clients to servers in datacenters, which was one of the main motivations for Meyerson \cite{Meyerson2001} to initiate the study of the online facility location problem. Here, it is important that one does not reconnect clients to servers too many times, as such changes can incur significant costs both in terms of disruption of service and the labor cost. Consider another scenario where a retailing company tries to maintain stores to serve a dynamically changing set of clients. As the clients change so frequently, it would be infeasible to build or shut down even one store for every new client. In this application, small client recourse per step is desirable, as that will automatically forbid frequent changes of store locations. In this light, a natural question that arises is:

\vspace{2mm} {\em Is it possible to maintain a constant approximation for the facility location problem if we require that the facility and client recourse is small?} \vspace{2mm}

Our first main result shows that indeed this is possible. In the following theorems, we use $n$ to denote the total number of facility locations and all clients that ever arrived, and $D$ to denote the diameter of the metric $d$ (assuming all distances are integers).

\begin{theorem} \label{UFL-recourse} There is a deterministic online algorithm for the facility location problem that achieves a competitive ratio of $(1+\sqrt{2} + \epsilon)$ with $O\left(\frac{\log n}{\epsilon}\log\frac1\epsilon\right)$ amortized facility and client recourse against an adaptive adversary. \end{theorem}

Our algorithm to show the above theorem differs from the previous approaches used in the context of online variants of the facility location problem, and is based on {\em local search}. The local search algorithm is one of the most widely used algorithms for the facility location problem in practice, and is known to achieve an approximation factor of $(1+\sqrt 2)$ in the offline setting.
See the influential paper by Arya {\em et al} \cite{AryaGKMP01} and a survey by Munagala \cite{Munagala16}. Thus our result matches the best known approximation ratio for offline facility location using local search. Further, our result shows that the local search algorithm, augmented with some small modifications, is inherently {\em stable}, as it does not make too many changes to the solutions even if clients are added in an online fashion. This gives further justification for its popularity among practitioners. Prior to Theorem \ref{UFL-recourse}, the known results \cite{Fotakis06, Diveki2011,Fotakis11} needed one or more of these assumptions: 1) the facility costs are {\em the same}; 2) we are interested in knowing only the cost of the solution; 3) we are interested only in bounding the {\em facility recourse}. In particular, there was no known algorithm that bounds the client recourse, which is an important consideration in many applications mentioned above. Moreover, our algorithm also achieves a better approximation factor; the previously best known algorithm for the facility location problem achieved a competitive ratio of 48 \cite{Fotakis2011}. Our result in the recourse setting for the facility location problem should be contrasted with the similar results shown recently for online Steiner tree \cite{Gupta015}, set cover \cite{GuptaK0P17}, scheduling \cite{GuptaKS14}, and matchings and flows \cite{BernsteinHR19,GuptaKS14}. Moreover, these results also raise intriguing questions: {\em is a polylog amount of recourse enough to beat information-theoretic lower bounds in online algorithms? Is recourse as or more powerful than randomization?}

\medskip While having a small client recourse is enough in data center applications, it is not enough in some others. Take wireless networks as a concrete example. Here, the set of clients (mobile devices) keeps changing over time, and it is necessary to {\em update} the assignment of clients to facilities as {\em quickly} as possible so as to minimize the service disruption. These applications motivated Cygan {\em et~al}~\cite{CyganCMS18}, Goranci {\em et~al}~\cite{Goranci19} and Cohen-Addad {\em et~al}~\cite{Cohen-Addad19} to study the facility location problem in the framework of {\em dynamic algorithms}. The dynamic model of \cite{CyganCMS18} and \cite{Cohen-Addad19} is different from what we study here, so we discuss it at the end of this section. The dynamic facility location problem is similar to the one in the online setting, except that at each time step either {\em a new client arrives or an existing client departs}. The goal is to always maintain a solution that is a constant factor approximation to the optimal solution, while minimizing {\em the total time spent in updating the solution}. We emphasize that we require our dynamic algorithms to maintain {\em an actual assignment of clients to facilities}, not just the set of open facilities and an estimate of the connection cost. This is important for the applications mentioned above. This setting was considered in \cite{Goranci19}, who showed that for metric spaces with {\em doubling dimension $\kappa$}, there is a deterministic fully dynamic algorithm with $\tilde O(2^{\kappa^2})$ update time, which maintains a constant approximation. However, for more general metric spaces no results were known in the dynamic setting, and we give the first results. First we consider the incremental setting, where clients only arrive and never depart.
\begin{theorem} \label{UFL-dynamicIncremental} In the incremental setting against an adaptive adversary, there is a randomized dynamic algorithm for the facility location problem that, with probability at least $1-1/n^2$, maintains an approximation factor of $(1+\sqrt{2} + \epsilon)$ and has \emph{total} update time of $O(\frac{n^2}{\epsilon^2}\log^3n\log\frac1\epsilon)$. \end{theorem}

Note that it takes $\Theta(n|F|)$ space to specify the input in our model (see Section~\ref{subsec:specify-input}). Hence the running time of our algorithms is almost optimal up to polylog factors when $|F| = \Omega(n)$. The proof of the above theorem uses randomized local search and builds on our result in the recourse setting. We use randomization to convert the recourse bound into an update time bound. Further, our analysis of the above theorem also implies that one can obtain $O(\frac{n|F|}{\epsilon^2}\log^3n\log\frac1{\epsilon})$ running time by losing $O(1)$ factors in the approximation ratio; see the remark at the end of Section \ref{sec:dfl}.

\medskip Next we study the fully dynamic setting. Here, we first consider an important class of metric spaces called hierarchically well separated tree (HST) metrics \cite{Bartal96}; see Definition~\ref{def:HST} for the formal definition, and Section~\ref{subsec:specify-input} for more details about how the input sequence is given. For HST metric spaces, we show the following result.

\begin{theorem} \label{UFL-HST} In the fully dynamic setting against adaptive adversaries, there is a deterministic algorithm for the facility location problem that achieves an $O(1)$ approximation factor with $O(|F|)$ preprocessing time and $O(n\log^3 D)$ total update time for the HST metric spaces. \end{theorem}

A seminal result by Bartal \cite{Bartal96}, which was later tightened by Fakcharoenphol, Rao and Talwar \cite{Fakcharoenphol2003}, shows that any arbitrary $N$-point metric space can be embedded into a distribution over HSTs such that the expected distortion is at most $O(\log N)$, which is also tight. Moreover, such a probabilistic embedding can also be computed in $O(N^2\log N)$ time; see recent results by Blelloch, Gu and Sun for details \cite{Blelloch0S17}. These results immediately imply the following theorem, provided the input is specified as in Section~\ref{subsec:specify-input}.

\begin{theorem} \label{UFL-fullydynamic} In the fully dynamic setting against an oblivious adversary, there is a randomized algorithm for the facility location problem that maintains an approximation factor of $O(\log |F|)$ with preprocessing time of $O(|F|^2\log |F|)$ and $O(n\log^3 D)$ total update time. The approximation guarantee holds only in expectation for every time step of the algorithm.
\end{theorem} Observe that unlike in the incremental setting, the above theorem holds only in the oblivious adversary model, as probabilistic embedding techniques preserve distances only in expectation, as can be seen by taking a cycle on $n$ points. Our result also shows that probabilistic tree embeddings using HSTs can be a very useful technique in the design of dynamic algorithms, similar to their role in online algorithms \cite{Bartal96, BartalBBT97, Umboh15, BubeckCLLM18}.

\medskip Our algorithms in Theorems \ref{UFL-HST} and \ref{UFL-fullydynamic} in the fully dynamic setting also have the nice property that the amortized client and facility {\em recourse} is $O(\log^3D)$ (in fact, we achieve a slightly better bound of $O(\log^2 D)$, as can be seen from the analysis). This holds because our dynamic algorithms maintain the entire assignment of clients to facilities {\em explicitly} in memory at every time step; thus, the amortized number of client reconnections is at most the amortized update time. This is useful when one considers an online setting where clients arrive and depart and one is interested in a small client recourse. A fully dynamic online model of the facility location problem, where clients arrive and \emph{depart}, was recently studied by Cygan {\em et~al}~\cite{CyganCMS18} and Cohen-Addad {\em et~al}~\cite{Cohen-Addad19}, but with a different assumption on recourse. In their model, when a client arrives, the algorithm has to assign it to an open facility immediately, while upon the departure of a client, if a facility was opened at the same location, then the clients that were assigned to that location have to be reassigned immediately and irrevocably. Cygan {\em et~al}~\cite{CyganCMS18} studied the case when recourse is \emph{not} allowed: they showed that a delicate extension of Meyerson's \cite{Meyerson2001} algorithm obtains an asymptotically tight competitive ratio of $O(\log n /\log \log n)$. Cohen-Addad {\em et~al}~\cite{Cohen-Addad19} later showed that this can be improved to $O(1)$ if recourse is allowed. However, both results hold only for {\em uniform facility costs}, and Cygan {\em et~al}~\cite{CyganCMS18} even showed that the competitive ratio is {\em unbounded} in the non-uniform facility cost case in their model. Moreover, in their model reconnections of clients are assumed to be ``automatic'' and do not count towards the client recourse; it is not clear how many client reconnections their algorithms make.

\subsection{Our Techniques} Our main algorithmic technique for proving Theorems~\ref{UFL-recourse} and \ref{UFL-dynamicIncremental} is local search, one of the most powerful algorithm design paradigms. Indeed, for both results, the competitive (approximation) ratio we achieve is $1+\sqrt{2}+\epsilon$, which matches the best approximation ratio for offline facility location obtained using local search \cite{AryaGKMP01}. Both of our results are based on the following key lemma. Suppose we maintain a local optimum solution at every time step of our algorithm. When a new client $j_t$ arrives at time $t$, we add it to our solution using a simple operation, and let $\Delta_t$ be the increase of our cost due to the arrival of $j_t$. The key lemma states that the sum of the $\Delta_t$ values over the first $T'$ time steps can be bounded in terms of the optimum cost at time $T'$. With a simple modification to the local search algorithm, in which we require each local operation to decrease the cost by a sufficient amount for every client it reconnects, one can bound the total client recourse.
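To make the modification concrete, here is a minimal Python sketch of the acceptance rule, assuming a candidate operation is presented as a full candidate solution; all identifiers (\texttt{cost}, \texttt{maybe\_apply}, the dictionary-based distances) are our own illustration, not the paper's implementation.

\begin{verbatim}
# Accept a candidate local operation only if it decreases the cost by
# more than phi per reconnected client.
def cost(open_facs, assign, f, d):
    """Facility cost plus connection cost of the assignment."""
    return sum(f[i] for i in open_facs) + sum(d[j][assign[j]] for j in assign)

def maybe_apply(open_facs, assign, new_open, new_assign, f, d, phi):
    """Return the new solution if it is efficient enough, else the old one."""
    reconnections = sum(1 for j in assign if assign[j] != new_assign[j])
    decrease = cost(open_facs, assign, f, d) - cost(new_open, new_assign, f, d)
    if decrease > phi * reconnections:  # the saved cost pays for the recourse
        return new_open, new_assign
    return open_facs, assign            # otherwise keep the current solution
\end{verbatim}

Charging each accepted operation's reconnections against the cost it saves is exactly how the total client recourse is bounded in the analysis.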
The straightforward way to implement the local search algorithm takes time $\Omega(n^3)$. To derive a better running time, we leverage the randomized local search idea of Charikar and Guha \cite{CharikarGhua2005}. At every iteration, we randomly choose a facility $i$ or a closing operation, and then perform the best operation that opens or swaps in $i$, or the best closing operation, depending on what we chose. By restricting attention to the facility $i$, and with the help of a heap data structure, an iteration of the algorithm can be implemented in time $O(|C|\log |F|)$. As in \cite{CharikarGhua2005}, we can also show that each iteration makes reasonable progress in expectation, leading to a bound of $\tilde O(|F|)$ on the number of iterations needed for the algorithm to succeed with high probability. We remark that the algorithm in \cite{CharikarGhua2005} used a different local search framework; therefore, our result shows that the classic algorithm of \cite{AryaGKMP01} can also be made fast.

However, directly replacing the deterministic local search procedure with the randomized one does not work: the solution at the end of each time step might not be a local optimum, since we do not enumerate all possible local operations, and thus the key lemma no longer holds. Nevertheless, we show that applying a few local operations around $j_t$ upon its arrival addresses the issue. With the key lemma, one can then bound the number of times we perform the iterative randomized local search procedure, and thus the overall running time.

Our proof of Theorem~\ref{UFL-HST} is based on a generalization of the greedy algorithm for facility location on HST metrics that was developed in \cite{EsencayiGLW19} in the context of {\em differential privacy}, but only for the case of {\em uniform} facility costs. The intuition behind the algorithm is as follows. If, for some vertex $v$ of the HST $T$, the number of clients in the tree $T_v$ (the sub-tree of $T$ rooted at $v$) times the length of the parent edge of $v$ is big compared to the cost of the cheapest facility in $T_v$, then we should open that facility; otherwise, we should not open it, and should let the clients in $T_v$ be connected to the outside of $T_v$ through the parent edge. This intuition can be made formal: we mark $v$ in the former case; then simply opening the cheapest facility in $T_v$ for every \emph{lowest marked} vertex $v$ leads to a constant approximation for facility location.

The above offline algorithm leads to a \emph{dynamic data structure} that maintains $O(1)$-approximate solutions, supports insertions and deletions of clients, and reports the connecting facility of a client in $O(\log D)$ time. This is the case since each time a client arrives or departs, only its ancestors are affected. However, in the dynamic algorithm setting, we need to maintain the assignment vector in memory, so that when the connecting facility of a client changes, the client can be notified. This requires the number of reconnections made by our algorithm to be small. To achieve this goal, we introduce two constants for each vertex $v$ that are used when deciding whether $v$ should be marked and whether the cheapest facility in $T_v$ should be open. When a vertex $v$ changes its marking/opening status, we update these constants in such a way that it becomes hard for the status to be changed back.

\section{Preliminaries} \label{sec:prelim} Throughout the paper, we use $F$ to denote the set of potential facilities for all the problems and models; we assume $F$ is given upfront.
$C$ is the dynamic set of clients that our algorithm needs to connect. This is not necessarily the set of clients that are present: in the algorithms for online facility location with recourse and for dynamic facility location in the incremental setting, we fix the connections of some clients as the algorithms proceed. These clients are said to be ``frozen'' and are excluded from $C$. We shall always use $d$ to denote the hosting metric containing $F$ and all potential clients. For any point $j$ and subset $V$ of points in the metric, we define $d(j, V) = \min_{v \in V}d(j, v)$ to be the minimum distance from $j$ to a point in $V$. We assume all distances are integers and the minimum non-zero distance between two points is 1. We define $D$, the diameter or aspect ratio of a metric space, as the largest distance between two points in it. Let $n$ be $|F|$ plus the total number of clients that arrive during the whole process. The algorithms do not need to know the exact value of $n$ in advance, except that in the dynamic algorithm for facility location in the incremental setting (the problem in Theorem~\ref{UFL-dynamicIncremental}), to achieve the $1- 1/n^2$ success probability, a sufficiently large $\Gamma = \mathrm{poly}(n, \log D, \frac1\epsilon)$ needs to be given.\footnote{For an algorithm that might fail, we need to have some information about $n$ to obtain a failure probability that depends on $n$.}

In all the algorithms, we maintain a set $S$ of open facilities and a connection $\sigma \in S^C$ of clients in $C$ to facilities in $S$. We do not require that $\sigma$ connect clients to their respective nearest open facilities. For any solution $(S' \subseteq F, \sigma' \in S'^C)$, we use $\mathsf{cc}(\sigma') = \sum_{j \in C}d(j, \sigma'_j)$ to denote the connection cost of the solution. For facility location, we use $\mathsf{cost}(S', \sigma') = f(S') + \mathsf{cc}(\sigma')$ to denote the total cost of the solution $(S', \sigma')$, where $f(S') := \sum_{i \in S'} f_i$. Notice that $\sigma$ and the definitions of the $\mathsf{cc}$ and $\mathsf{cost}$ functions depend on the dynamic set $C$.

Throughout the paper, we distinguish between a ``moment'', a ``time'' and a ``step''. A moment refers to a specific time point during the execution of our algorithm. A time corresponds to an arrival or a departure event: at each time, exactly one client arrives or departs, and time $t$ refers to the period from the moment the $t$-th event happens until the moment the $(t+1)$-th event happens (or the end of the algorithm). A step refers to one statement in our pseudo-codes indexed by a number.

\subsection{Hierarchically Well Separated Trees}
\begin{definition} \label{def:HST} A hierarchically well separated tree (or HST for short) is an edge-weighted rooted tree with the following properties: \begin{itemize}[topsep=3pt,itemsep=0pt] \item all the root-to-leaf paths have the same number of edges, \item if we define the level of a vertex $v$, ${\mathsf{level}}(v)$, to be the number of edges in a path from $v$ to any of its leaf descendants, then for a non-root vertex $v$, the weight of the edge between $v$ and its parent is exactly $2^{{\mathsf{level}}(v)}$. \end{itemize} Given an HST $T$ with the set of leaves being $X$, we use $d_T$ to denote the shortest path metric of the tree $T$ (with respect to the edge weights) restricted to $X$.
\end{definition}
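To make the tree metric concrete, the following Python sketch (ours, purely illustrative; \texttt{parent} and \texttt{level} are dictionaries describing the HST) computes $d_T$ between any two vertices directly from Definition~\ref{def:HST}.

\begin{verbatim}
# Tree distance in an HST: the edge from a vertex v to its parent
# has weight 2**level(v).  Walk the two vertices up to their lowest
# common ancestor, summing edge weights along the way.
def d_T(u, v, parent, level):
    dist = 0
    while u != v:
        if level[u] <= level[v]:   # climb from the deeper vertex first
            dist += 2 ** level[u]
            u = parent[u]
        else:
            dist += 2 ** level[v]
            v = parent[v]
    return dist
\end{verbatim}

In particular, two leaves whose lowest common ancestor is at level $L$ are at distance $2(2^L - 1) < 2\cdot 2^L$, which is the kind of bound used repeatedly in the analysis of Section~\ref{sec:dfl-fully}.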
The classic results by Bartal \cite{Bartal96} and Fakcharoenphol, Rao and Talwar \cite{Fakcharoenphol2003} state that we can embed any $N$-point metric $(X, d)$ (with minimum non-zero distance $1$) into a distribution $\pi$ over \emph{expanding}\footnote{A metric $(X, d_T)$ is expanding w.r.t $(X, d)$ if for every $u, v \in X$, we have $d_T(u, v) \geq d(u, v)$.} HST metrics $(X, d_T)$ with distortion $O(\log N)$: for every $u, v \in X$, we have $d_T(u, v) \geq d(u, v)$ and $\E_{T \sim \pi}[d_T(u, v)] \leq O(\log N)\cdot d(u, v)$. Moreover, there is an efficient randomized algorithm \cite{Blelloch0S17} that outputs a sample of the tree $T$ from $\pi$. Thus, applying standard arguments, Theorem~\ref{UFL-HST} implies Theorem~\ref{UFL-fullydynamic}.

\subsection{Specifying the Input Sequence} \label{subsec:specify-input} In this section we specify how the input sequence is given. For the online and dynamic facility location problems, we assume the facility locations $F$, their costs $(f_i)_{i \in F}$, and the metric $d$ restricted to $F$ are given upfront; they take $O(|F|^2)$ space. Whenever a client $j \in C$ arrives, it specifies its distance to every facility $i \in F$ (notice that the connection cost of an assignment $\sigma \in S^C$ does not depend on the distances between two clients, so these do not need to be given). Thus the whole input contains $O(n|F|)$ words. For Theorems~\ref{UFL-HST} and \ref{UFL-fullydynamic}, as we do not try to optimize the constants, we {\em do not} need a client to specify its distance to every facility. By losing a multiplicative factor of $2$ and an additive factor of $1$ in the approximation ratio, we can assume that every client $j$ is collocated with its nearest facility in $F$ (see Appendix~\ref{appendix:moving-clients}). Thus, we only require that when a client $j$ arrives, it reports the position of its nearest facility. For Theorem~\ref{UFL-HST}, the HST $T$ over $F$ is given at the beginning using $O(|F|)$ words. For Theorem~\ref{UFL-fullydynamic}, the metric $d$ over $F$ is given at the beginning using $O(|F|^2)$ words; then we use the efficient algorithm of \cite{Blelloch0S17} to sample an HST $T$.

\subsection{Local Search for Facility Location} The local-search technique has been used to obtain the classic $(1+\sqrt 2)$-approximation offline algorithm for facility location \cite{AryaGKMP01}. We now give an overview of this algorithm, which will be the baseline of our online and dynamic algorithms for facility location. Without scaling the facility costs, one obtains a (tight) $3$-approximation; scaling the facility costs by a factor of $\lambda := \sqrt{2}$ when deciding whether an operation decreases the cost yields a better approximation ratio of $\alpha_{\mathsf{FL}}:= 1+\sqrt{2}$. Throughout, we fix the constants $\lambda = \sqrt{2}$ and $\alpha_{\mathsf{FL}} = 1+\sqrt{2}$. For a solution $(S', \sigma')$ to a facility location instance, we use $\mathsf{cost}_\lambda(S', \sigma') = \lambda f(S') + \mathsf{cc}(\sigma')$ to denote the cost of the solution $(S', \sigma')$ with facility costs scaled by $\lambda = \sqrt{2}$; we call $\mathsf{cost}_\lambda(S', \sigma')$ the \emph{scaled cost} of $(S', \sigma')$.

Given the current solution $(S, \sigma)$ for a facility location instance defined by $F, C, d$ and $(f_i)_{i \in F}$, we can apply a \emph{local operation} that changes the solution $(S, \sigma)$. A valid local operation is one of the following.
\begin{itemize} \item An $\mathsf{open}$ operation, in which we open some facility $i \in F$ and reconnect a subset $C' \subseteq C$ of clients to $i$. We allow $i$ to be already in $S$, in which case we simply reconnect $C'$ to $i$; this needs to be allowed since $\sigma$ does not necessarily connect clients to their nearest open facilities. \item A $\mathsf{close}$ operation, in which we close some facility $i' \in S$ and reconnect the clients in $\sigma^{-1}(i')$ to facilities in $S \setminus \{i'\}$. \item A $\mathsf{swap}$ operation, in which we open some facility $i \notin S$ and close some facility $i' \in S$, reconnect the clients in $\sigma^{-1}(i')$ to facilities in $(S \setminus \{i'\}) \cup \{i\}$, and possibly reconnect some other clients to $i$. We say $i$ is \emph{swapped in} and $i'$ is \emph{swapped out} by the operation. \end{itemize} Thus, in any valid operation, we open and/or close at most one facility. A client can be reconnected only if it is currently connected to the facility that will be closed, or if it will be connected to the newly opened facility. After we apply a local operation, $S$ and $\sigma$ are updated accordingly, so that $(S, \sigma)$ is always the current solution. In the online algorithm with recourse model, since we need to bound the number of reconnections, we apply a local operation only if the \emph{scaled} cost it saves is large compared to the number of reconnections it makes. This motivates the following definition:

\begin{definition}[Efficient operations for facility location] \label{def:phieff} Given $\phi \geq 0$, we say a local operation on a solution $(S, \sigma)$ for a facility location instance is $\phi$-efficient if it decreases $\mathsf{cost}_\lambda(S, \sigma)$ by more than $\phi$ times the number of clients it reconnects. \end{definition}
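As an illustration of Definition~\ref{def:phieff}, the following Python sketch (ours; all identifiers are illustrative) finds the best $\phi$-efficient $\mathsf{open}$ operation for a fixed facility $i$. Since every reconnected client $j$ saves $d(j, \sigma_j) - d(j, i)$ in connection cost but adds $\phi$ to the efficiency requirement, it is optimal to reconnect exactly the clients whose saving exceeds $\phi$.

\begin{verbatim}
import math

LAMBDA = math.sqrt(2)  # facility costs are scaled by lambda = sqrt(2)

def best_open(i, S, sigma, f, d, phi=0.0):
    """Best phi-efficient open(i) operation, or None if none exists."""
    open_cost = LAMBDA * f[i] if i not in S else 0.0
    # reconnect exactly the clients that save more than phi each: this
    # maximizes (decrease in scaled cost) - phi * (#reconnections)
    clients = [j for j in sigma if d[j][sigma[j]] - d[j][i] > phi]
    decrease = sum(d[j][sigma[j]] - d[j][i] for j in clients) - open_cost
    if decrease > phi * len(clients):
        return decrease, clients
    return None
\end{verbatim}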
The following two theorems can be derived from the analysis of the local search algorithms for facility location. We include their proofs in Appendix~\ref{appendix:local-search} for completeness.

\begin{restatable}{theorem}{uflapprox}\label{thm:FL-offline-apx-ratio} Consider a facility location instance with the cost of the optimum solution being $\mathsf{opt}$ (using the original cost function). Let $(S, \sigma)$ be the current solution in our algorithm and $\phi \geq 0$ be a real number. If there are no $\phi$-efficient local operations on $(S, \sigma)$, then we have \begin{align*} \mathsf{cost}(S, \sigma) \leq \alpha_{\mathsf{FL}}\big(\mathsf{opt} + |C|\phi\big). \end{align*} \end{restatable}

In particular, applying the theorem with $\phi = 0$ shows that $(S, \sigma)$ is an $(\alpha_{\mathsf{FL}} = 1+\sqrt{2})$-approximation for the instance. The following theorem will be used to analyze our randomized local search procedure.

\begin{restatable}{theorem}{ufloperations}\label{thm:FL-offline-operations} Let $(S, \sigma)$ be a solution to a facility location instance, and let $(S^*, \sigma^*)$ be an optimum solution, of cost $\mathsf{opt}$. Then there are two sets ${\mathcal{P}}_{\mathrm{C}}$ and ${\mathcal{P}}_{\mathrm{F}}$ of valid local operations on $(S, \sigma)$, where each operation $\mathrm{op}$ decreases the scaled cost $\mathsf{cost}_\lambda(S, \sigma)$ by $\nabla_{\mathrm{op}} > 0$, such that the following holds: \begin{itemize} \item $\sum_{\mathrm{op} \in {\mathcal{P}}_{\mathrm{C}}} \nabla_{\mathrm{op}} \geq \mathsf{cc}(\sigma)- (\lambda f(S^*) + \mathsf{cc}(\sigma^*)) $. \item $\sum_{\mathrm{op} \in {\mathcal{P}}_{\mathrm{F}}} \nabla_{\mathrm{op}} \geq \lambda f(S) - (\lambda f(S^*) + 2\mathsf{cc}(\sigma^*)) $. \item There are at most $|F|$ $\mathsf{close}$ operations in ${\mathcal{P}}_{\mathrm{C}} \biguplus {\mathcal{P}}_{\mathrm{F}}$. \item For every $i \in F$, there is at most one operation in each of ${\mathcal{P}}_{\mathrm{C}}$ and ${\mathcal{P}}_{\mathrm{F}}$ that opens or swaps in $i$. \end{itemize} \end{restatable}

\subsection{Useful Lemmas} The following lemmas will be used repeatedly in our analysis, and thus we prove them separately in Appendix~\ref{appendix:helper-proofs}.

\begin{restatable}{lemma}{helpersumba} \label{lemma:helper-sum-b/a} Let $b \in {\mathbb{R}}_{\geq 0}^T$ for some integer $T \geq 1$. Let $B_{T'} = \sum_{t=1}^{T'} b_t$ for every $T' = 0, 1, \cdots, T$. Let $0 < a_1 \leq a_2 \leq \cdots \leq a_T$ be a sequence of real numbers and $\alpha > 0$ be such that $B_t \leq \alpha a_t$ for every $t \in [T]$. Then we have \begin{align*} \sum_{t = 1}^T \frac{b_t}{a_t} \leq \alpha \left(\ln \frac{a_T}{a_1} + 1\right). \end{align*} \end{restatable}

\begin{restatable}{lemma}{helperstar} \label{lemma:helper-star} Assume that at some moment of an algorithm for facility location, $C$ is the set of clients and $(S, \sigma)$ is the solution for $C$. Let $i \in F$ and let $\tilde C \subseteq C$ be any non-empty set of clients. Assume also that at this moment there is no $\phi$-efficient operation that opens $i$, for some $\phi \geq 0$. Then we have \begin{align*} d(i, S) \leq \frac{f_i + 2\sum_{\tilde j \in \tilde C} d(i, \tilde j)}{|\tilde C|} + \phi. \end{align*} \end{restatable}
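As a quick numerical sanity check of Lemma~\ref{lemma:helper-sum-b/a}, the following Python snippet (ours; the random test data are arbitrary) draws sequences satisfying the hypotheses and verifies the claimed bound.

\begin{verbatim}
import math, random

def check(trials=1000):
    for _ in range(trials):
        T = random.randint(1, 50)
        a = sorted(random.uniform(0.1, 10.0) for _ in range(T))
        alpha = random.uniform(0.5, 5.0)
        b, B = [], 0.0
        for t in range(T):          # keep the prefix sums B_t <= alpha * a_t
            b.append(random.uniform(0.0, alpha * a[t] - B))
            B += b[-1]
        lhs = sum(b[t] / a[t] for t in range(T))
        rhs = alpha * (math.log(a[-1] / a[0]) + 1.0)
        assert lhs <= rhs + 1e-9, (lhs, rhs)
    print("all trials passed")

check()
\end{verbatim}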
\paragraph{Organization} The rest of the paper is organized as follows. In Section~\ref{sec:ofl}, we prove Theorem~\ref{UFL-recourse} by giving our online algorithm for facility location with recourse. Section~\ref{sec:fast-UFL} gives the randomized local search procedure, which is used in the proof of Theorem~\ref{UFL-dynamicIncremental} in Section~\ref{sec:dfl}. Section~\ref{sec:dfl-fully} is dedicated to the proof of Theorem~\ref{UFL-fullydynamic}, giving the fully dynamic algorithm for facility location in HST metrics. We give some open problems and future directions in Section~\ref{sec:discussions}. Some proofs are deferred to the appendix for a better flow of the paper.

\section{Fully Dynamic Algorithm for Facility Location} In this section, we give our algorithms for facility location in the online-with-recourse and fully dynamic settings. We give two results. The first and simpler result concerns the case where we have uniform facility costs and are interested in the \emph{facility recourse}: every time we close a facility, we incur a recourse of 1, regardless of the number of reconnected clients. In this case we show a simple $(3+\epsilon)$-approximation with $O(1)$ amortized facility recourse. In the second result we consider non-uniform facility costs and client-reconnection recourse. We obtain $O(\log^2D)$ amortized recourse, but with an $O(\log |F|)$ competitive ratio. This is done by embedding the metric on $F$ into a distribution over metrics induced by hierarchically well separated trees using the classic FRT technique.

\subsection{Uniform Facility Costs and Facility Recourse} We first consider the case where all facility costs are the same, i.e., $f_i = f$ for every $i \in F$, and the total recourse is defined as the number of facilities we close during the course of the algorithm. Our algorithm is simple. When a client $j$ arrives, we open the facility nearest to $j$ (if it is not already open) and connect $j$ to it; when a client $j$ departs, we simply remove it from the solution. Then we do the following: while there exists a local operation that decreases $\mathsf{cost}(S, \sigma)$ by at least $\epsilon' f$ (where $\epsilon' = \Theta(\epsilon)$ and $\epsilon$ is the additive loss in our competitive ratio), we perform the operation.
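In pseudocode form, this procedure looks as follows; this is a minimal Python sketch under our own naming, where \texttt{find\_improving\_op} (an assumed black box) searches for a local operation decreasing the cost by at least the given threshold.

\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class State:
    facilities: set                            # F, given upfront
    open: set = field(default_factory=set)     # S, the open facilities
    sigma: dict = field(default_factory=dict)  # client -> facility

def improve(state, threshold, find_improving_op):
    while True:                      # local search until improvements vanish
        op = find_improving_op(state, threshold)
        if op is None:
            break
        op.apply(state)              # each operation closes <= 1 facility

def on_arrival(j, state, f, d, eps, find_improving_op):
    i = min(state.facilities, key=lambda i: d[j][i])  # nearest facility to j
    state.open.add(i)                # open it (no-op if already open)
    state.sigma[j] = i               # connect j to it
    improve(state, eps * f, find_improving_op)

def on_departure(j, state, f, eps, find_improving_op):
    state.sigma.pop(j, None)         # simply drop j from the solution
    improve(state, eps * f, find_improving_op)
\end{verbatim}

The improvement threshold $\epsilon' f$ is what turns the smooth change of the adjusted optimum cost, established below, into an $O(1/\epsilon)$ amortized bound on the number of operations.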
The following theorem can be derived from the analysis of the local search algorithm for facility location; the proof can be found in Appendix xx for completeness.

\begin{restatable}{theorem}{uflapproxfacilityrecourse} Let $(S, \sigma)$ be a solution to a facility location instance with optimum set of open facilities $S^*$ and optimum cost $\mathsf{opt}$. Let $\phi \geq 0$. If no local operation on $(S, \sigma)$ can decrease $\mathsf{cost}(S, \sigma)$ by more than $\phi$, then we have \begin{align*} \mathsf{cost}(S, \sigma) \leq 3 \mathsf{opt} + (|S| + 2|S^*|)\phi. \end{align*} \end{restatable}

Thus at the end of every time step, we have $\mathsf{cost}(S, \sigma) \leq 3\mathsf{opt} + (|S| + 2|S^*|)\epsilon'f$, where $\mathsf{opt}$ is the cost of the optimum solution at that time and $S^*$ is the set of facilities it opens. Since $|S^*|f \leq \mathsf{opt}$, this implies $|S|(1-\epsilon')f + \mathsf{cc}(\sigma) \leq (3 + 2\epsilon')\mathsf{opt}$, and therefore $\mathsf{cost}(S, \sigma) \leq \frac{3+2\epsilon'}{1-\epsilon'}\mathsf{opt}$, a $(3+O(\epsilon'))$-approximation.

It remains to bound the number of facility closing operations. We are interested in the following adjusted cost of a solution $(S', \sigma')$: $\mathsf{cost}(S', \sigma') - \sum_{j \in C}d(j, F)$; notice that $\sum_{j \in C}d(j, F)$ depends only on the instance. We shall show that the adjusted cost changes smoothly as clients arrive and depart.

\begin{obs} Let $C'$ and $C''$ be two sets of clients such that $C' \subseteq C''$ and $|C'' \setminus C'| = 1$. Then the adjusted costs of the optimum solutions for $C''$ and for $C'$ differ by at most $f$. \end{obs}

\begin{proof} Let $j$ be the unique client in $C'' \setminus C'$. Let $\mathsf{opt}'_{C'}$ and $\mathsf{opt}'_{C''}$ be the adjusted costs of the optimum solutions for $C'$ and $C''$ respectively. Then we have $\mathsf{opt}'_{C'} \leq \mathsf{opt}'_{C''}$: given the optimum solution for $C''$, removing $j$ decreases the cost by at least $d(j, F)$ and thus does not increase the adjusted cost. We now prove $\mathsf{opt}'_{C''} \leq \mathsf{opt}'_{C'} + f$: start from the optimum solution for $C'$, add $j$, open the facility nearest to $j$ (if it is not already open) and connect $j$ to it. Notice that this increases the adjusted cost by at most $f$. \end{proof}

It is then easy to see that our algorithm has an amortized facility recourse of $1/\epsilon'$. When a client $j$ arrives, opening its nearest facility in $F$ and connecting $j$ to it increases the adjusted cost by at most $f$. When a client $j$ departs, removing it can only decrease the adjusted cost. Since each local operation decreases the adjusted cost by at least $\epsilon' f$, the total number of local operations we apply in the algorithm is at most $1/\epsilon'$ times the number of arrived clients. Since every local operation closes at most one facility, the algorithm has an amortized facility recourse of $1/\epsilon' = O(1/\epsilon)$.

\section{Offline Algorithms for $k$-Median}

\begin{algorithm} \caption{$\mathsf{offline\mhyphen k\mhyphen median\mhyphen iterate}(M)$} \label{alg:offline-k-median} \begin{algorithmic}[1] \State repeat the following $M$ times \State\hspace*{\algorithmicindent} $i \gets $ random facility in $F \setminus S$ \State\hspace*{\algorithmicindent} $(\Delta, i') \gets \mathsf{\Delta\mhyphen swap\mhyphen in}(i)$ \State\hspace*{\algorithmicindent} \textbf{if} $\Delta < 0$ \textbf{then} open $i$ and close $i'$, updating $S, \sigma$ and the heaps accordingly \end{algorithmic} \end{algorithm}

The fast offline algorithm for $k$-median can be analyzed similarly. We can use the $2$-approximation algorithm for $k$-center to produce an initial solution for $k$-median; it is easy to see that this gives a $2|C|$-approximation. Then we run $\mathsf{offline\mhyphen k\mhyphen median\mhyphen iterate}(M)$ for some $M$ to be decided later. The procedure $\mathsf{offline\mhyphen k\mhyphen median\mhyphen iterate}(M)$ is similar to $\mathsf{offline\mhyphen UFL\mhyphen iterate}$ but simpler: we only try to perform $\mathsf{swap}$ operations.
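A direct, unoptimized rendering of the iterate in Python (ours; the naive cost recomputation below stands in for the heap-based $\mathsf{\Delta\mhyphen swap\mhyphen in}$ and is only meant to show the control flow):

\begin{verbatim}
import random

def cost(S, clients, d):
    """Connection cost of serving all clients from the open set S."""
    return sum(min(d[j][i] for i in S) for j in clients)

def offline_k_median_iterate(M, S, F, clients, d):
    S = set(S)
    for _ in range(M):
        i = random.choice([x for x in F if x not in S])  # random swap-in
        base = cost(S, clients, d)
        # best facility i2 to swap out in favour of i
        delta, i2 = min(((cost((S - {ip}) | {i}, clients, d) - base, ip)
                         for ip in S), key=lambda x: x[0])
        if delta < 0:                                    # improving swap
            S = (S - {i2}) | {i}
    return S
\end{verbatim}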
Applying Theorem xx to a solution $(S, \sigma)$ with $|S| = k$ and $f$ sufficiently large, we can find a set ${\mathcal{P}}$ of swap operations (if $f$ is large enough, an $\mathsf{open}$ operation cannot decrease the cost) satisfying the properties stated in the theorem. By a similar argument, we can show the following lemma:

\begin{lemma} Let $(S^\circ, \sigma^\circ)$ and $(S^\bullet, \sigma^\bullet)$ be respectively the solutions before and after we run $\mathsf{offline\mhyphen k\mhyphen median\mhyphen iterate}(M)$. Then we have \begin{align*} \E\big[(\mathsf{cost} (S^\bullet, \sigma^\bullet) - 5\mathsf{opt})_+ | (S^\circ, \sigma^\circ)\big] \leq \left(1-\frac1{|F|}\right)^M \left(\mathsf{cost} (S^\circ, \sigma^\circ) - 5\mathsf{opt}\right)_+. \end{align*} \end{lemma}

Again, by choosing $M$ of order $O(n^2\log^2n)$, we can guarantee that with probability at least $1-1/n^2$, the output $k$-median solution has cost at most $(5+1/n^2)\mathsf{opt}$.
\section{Fully Dynamic Algorithm for Facility Location on Hierarchically Well Separated Tree Metrics} \label{sec:dfl-fully} In this section, we give our fully dynamic algorithm for facility location on hierarchically well separated tree (HST) metrics. Our algorithm achieves an $O(1)$-approximation with $O(\log^2 D)$ amortized recourse and $O(\log^3 D)$ amortized update time. As we mentioned earlier, we assume each client is collocated with a facility. From now on, we fix the HST $T$ and assume the set of leaves of $T$ is $X = F$; let $V$ be the set of all vertices in $T$. Let $d_T$ be the metric induced by $T$ over the set $V$ of vertices.

\noindent{\bf Notations.} Recall that ${\mathsf{level}}(v)$ is the level of $v$ in $T$. For every vertex $v \in V$, define $\Lambda_v$ to be the set of children of $v$, $X_v$ to be the set of leaf descendants of $v$, and $T_v$ to be the maximal sub-tree of $T$ rooted at $v$. We extend the facility costs from $X$ to all vertices in $V$: for every $v \in V \setminus X$, we define $f_v = \min_{i \in X_v}f_i$. We can assume that each internal vertex $v$ is a facility; by opening $v$ we mean opening a copy of the $i \in X_v$ with $f_i = f_v$. This assumption only loses a factor of $2$ in the competitive ratio: on one hand, having more facilities can only make our problem easier; on the other hand, the cost of connecting a client to any $i \in X_v$ is at most twice that of connecting it to $v$. By this definition, the facility costs along any root-to-leaf path are non-decreasing.

\subsection{Offline Algorithm for Facility Location on HST Metrics} In this section, we first give an offline $O(1)$-approximation algorithm for facility location on the HST metric $d_T$ as a baseline. Notice that facility location on trees can be solved exactly using dynamic programming; however, that algorithm is hard to analyze in the dynamic algorithm model, since its solution is sensitive to client arrivals and departures. Our algorithm generalizes the algorithm of \cite{EsencayiGLW19} for facility location with uniform facility costs, which was used there to achieve a differential privacy requirement. For every vertex $v \in V$, we let $N_v$ be the number of clients at locations in $X_v$. Although by definition the $N_v$'s are integers, in most of the analysis we allow them to be non-negative \emph{real numbers}; this will be useful when we design the dynamic algorithm. Let $\alpha \in \{1, 2\}^V$ and $\beta \in \{1, 2\}^{V \setminus X}$ be vectors given to our algorithm. They are introduced solely for the purpose of extending the algorithm to the dynamic setting; for the offline algorithm we can set $\alpha$ and $\beta$ to be all-$1$ vectors.

\paragraph{Marked and Open Facilities} For every vertex $v \in V$, we say $v$ is \emph{marked} w.r.t the vectors $N$ and $\alpha$ if $$N_v \cdot 2^{{\mathsf{level}}(v)} > f_v/\alpha_v$$ and \emph{unmarked} otherwise.
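In code, the marking rule and the induced walk to the highest unmarked ancestor read as follows (an illustrative Python sketch of ours; \texttt{N}, \texttt{f}, \texttt{alpha}, \texttt{level} and \texttt{parent} are dictionaries indexed by vertices):

\begin{verbatim}
def is_marked(v, N, f, alpha, level):
    # v is marked iff N_v * 2^level(v) > f_v / alpha_v
    return N[v] * 2 ** level[v] > f[v] / alpha[v]

def highest_unmarked_ancestor(v, parent, N, f, alpha, level):
    """Highest unmarked ancestor of an unmarked vertex v (None if v is
    marked); relies on the monotonicity of marking shown next."""
    if is_marked(v, N, f, alpha, level):
        return None
    while True:
        u = parent.get(v)
        if u is None or is_marked(u, N, f, alpha, level):
            return v
        v = u
\end{verbatim}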
The following observation can be made:

\begin{obs} Let $u$ be the parent of $v$. If $v$ is marked w.r.t $N$ and $\alpha$, then so is $u$. \end{obs}

\begin{proof} That $v$ is marked w.r.t $N$ and $\alpha$ means $N_v 2^{{\mathsf{level}}(v)} > f_v/\alpha_v$. Notice that $N_u \geq N_v$, ${\mathsf{level}}(u) = {\mathsf{level}}(v) + 1$, $\alpha_v \leq 2\alpha_u$ and $f_u \leq f_v$. So, $N_u 2^{{\mathsf{level}}(u)} \geq 2N_v2^{{\mathsf{level}}(v)} > 2 f_v/\alpha_v \geq f_u/\alpha_u$. \end{proof}

Thus the marking status of vertices in $T$ has a monotonicity property. We say a vertex $v$ is highest unmarked (w.r.t $N$ and $\alpha$) if it is unmarked and its parent is marked; we say a vertex $v$ is lowest marked if it is marked but all its children are unmarked. However, sometimes we say a vertex $u$ is the lowest marked ancestor of a leaf $v \in X$ if either $u=v$ is marked, or $u\neq v$ is marked and the child of $u$ on the $u$-$v$ path is unmarked; notice that in this case, $u$ might not be a lowest marked vertex, since it may have some other marked children. When we need to distinguish between the two cases, we say that $u$ is lowest marked \emph{globally} to mean that $u$ is a lowest marked vertex.

If a leaf vertex $v \in X$ is marked, then we open $v$. For every marked vertex $v \in V\setminus X$, we open $v$ if and only if $$\left(\sum_{u \in \Lambda_v: u \text{ unmarked}} N_u \right)2^{{\mathsf{level}}(v)}> f_v/(\alpha_v\beta_v).$$ All unmarked vertices are closed.

\begin{obs} \label{obs:departure-lowest-open} If $v$ is lowest marked, then $v$ is open. \end{obs}

\begin{proof} We can assume $v \notin X$, since otherwise $v$ is open. So, $N_v 2^{{\mathsf{level}}(v)} > f_v/\alpha_v$ and all children of $v$ are unmarked. Thus, $\sum_{u \in \Lambda_v: {u\text{ unmarked}}}N_u = \sum_{u \in \Lambda_v}N_u = N_v$. Therefore, $\left(\sum_{u \in \Lambda_v: {u\text{ unmarked}}}N_u\right) 2^{{\mathsf{level}}(v)} = N_v 2^{{\mathsf{level}}(v)} > f_v/\alpha_v \geq f_v/(\alpha_v\beta_v)$, so $v$ is open. \end{proof}

With the set of open facilities defined, every client is connected to its nearest open facility according to $d_T$, using a consistent tie-breaking rule (e.g., the nearest open facility with the smallest index). We assume the root $r$ of $T$ satisfies $\frac{f_r}{2^{{\mathsf{level}}(r)}} \leq 1$, by increasing the number of levels if necessary; so $r$ is marked whenever $N_r \geq 1$. This finishes the description of the offline algorithm.

\paragraph{Analysis of the $O(1)$-Approximation Ratio} We show the algorithm achieves an $O(1)$-approximation. First we give a lower bound on the optimum cost. For every $v \in V$, let $${\mathsf{LB}}(v) = \min\set{N_v2^{{\mathsf{level}}(v)}, f_v}.$$ Then we have

\begin{lemma} \label{lemma:departure-LB} Let $U$ be a set of vertices in $T$ without an ancestor-descendant pair; i.e., for every two distinct vertices $u$ and $v$ in $U$, $u$ is not an ancestor of $v$. Then the cost of the optimum solution is at least $\sum_{v \in U}{\mathsf{LB}}(v)$. \end{lemma}

\begin{proof} Fix an optimum solution and consider any $v \in U$. We consider the cost incurred inside $T_v$ in the optimum solution: the connection cost of the clients in $T_v$, plus the cost of the open facilities in $T_v$. This cost is at least ${\mathsf{LB}}(v)= \min\set{N_v2^{{\mathsf{level}}(v)}, f_v}$: if the optimum solution opens a facility in $T_v$, then the facility cost is at least $f_v$; otherwise, all $N_v$ clients in $T_v$ have to be connected to the outside of $T_v$, incurring a cost of at least $N_v2^{{\mathsf{level}}(v)}$.
The lemma follows since the trees $T_v$ over all $v \in U$ are disjoint, and thus we are not over-counting the costs in the optimum solution. \end{proof}

Now let $U$ be the set of highest unmarked vertices and marked leaves; clearly $U$ does not contain an ancestor-descendant pair. By Lemma~\ref{lemma:departure-LB}, the optimum cost is at least $\sum_{v \in U}{\mathsf{LB}}(v)$. We prove the following lemma.

\begin{lemma} \label{lemma:departure-UB} The solution produced by our algorithm has cost at most $O(1)\sum_{u \in U}{\mathsf{LB}}(u)$. \end{lemma}

\begin{proof} First consider the facility cost of our solution. If a leaf $v$ is marked and open, we have $N_v > f_v/\alpha_v$ (as ${\mathsf{level}}(v) = 0$), and thus ${\mathsf{LB}}(v) = \min\set{N_v,f_v} \geq f_v/\alpha_v$; hence $f_v$ can be bounded by $\alpha_v{\mathsf{LB}}(v) \leq 2{\mathsf{LB}}(v)$. If $v \in V \setminus X$ is marked and open, then by our algorithm we have $\left(\sum_{u \in \Lambda_v: u \text{ unmarked}}N_u\right) 2^{{\mathsf{level}}(v)} > f_v/(\alpha_v\beta_v)$. Since each $u$ in the summation is unmarked, we have ${\mathsf{LB}}(u) = N_u 2^{{\mathsf{level}}(u)}$. Thus, $\sum_{u \in \Lambda_v: u\text{ unmarked}}{\mathsf{LB}}(u) = \frac12\sum_{u \in \Lambda_v: u\text{ unmarked}}N_u 2^{{\mathsf{level}}(v)} \geq \frac12 f_v/(\alpha_v\beta_v) \geq \frac18 f_v$; that is, $f_v$ can be bounded by $8\sum_{u \in \Lambda_v:u \text{ unmarked}}{\mathsf{LB}}(u)$. Notice that each $u$ in the summation is in $U$, since it is highest unmarked. Summing these bounds over all open facilities $v$ shows that the facility cost of our solution is at most $8\sum_{u \in U}{\mathsf{LB}}(u)$.

Now consider the connection cost. For every $v \in X$, let $u$ be the highest unmarked ancestor of $v$ (if $v$ itself is open, then its connection cost is $0$ and we do not need to consider this case). Let $w$ be the parent of $u$, so $w$ is marked. Then there must be an open facility in the sub-tree rooted at $w$: any lowest marked vertex in the sub-tree rooted at $w$ is open by Observation~\ref{obs:departure-lowest-open}. Thus, any client at $v$ has connection cost at most $2 \times 2^{{\mathsf{level}}(w)} = 4 \times 2^{{\mathsf{level}}(u)}$. Therefore, the total connection cost of our solution is at most $4\sum_{u \in U \setminus X}N_u2^{{\mathsf{level}}(u)} = 4\sum_{u \in U \setminus X}{\mathsf{LB}}(u)$. This finishes the proof of the lemma. \end{proof}

Combining Lemmas~\ref{lemma:departure-LB} and \ref{lemma:departure-UB} gives that our algorithm is an $O(1)$-approximation. One more lemma will be useful in the analysis of the dynamic algorithm:

\begin{lemma} \label{lemma:departure-ub-outside} For any open facility $v$ in our solution, the number of clients connected to $v$ from outside $T_v$ is at most $O(\log D)\frac{f_v}{2^{{\mathsf{level}}(v)}}$. \end{lemma}

\begin{proof} We consider each ancestor $u$ of $v$ and count the number of clients connected to $v$ whose lowest common ancestor with $v$ is $u$. Focus on a child $w$ of $u$ that is not $v$ or an ancestor of $v$. If $w$ is marked, then no clients in $T_w$ are connected to $v$, since some facility in $T_w$ is open. Thus, let $U'$ be the set of unmarked children of $u$ that are not $v$ or an ancestor of $v$. If $\left(\sum_{w \in U'}N_w\right)2^{{\mathsf{level}}(u)} \geq f_u/(\alpha_u\beta_u)$, then $u$ is marked and open, and the clients in $T_w$, $w \in U'$, are not connected to $v$.
Otherwise, we have $\sum_{w \in U'}N_w < f_u/(\alpha_u\beta_u\cdot 2^{{\mathsf{level}}(u)}) \leq f_u/2^{{\mathsf{level}}(u)} \leq f_v/2^{{\mathsf{level}}(v)}$, as $f_u \leq f_v$ and ${\mathsf{level}}(u) \geq {\mathsf{level}}(v)$. The lemma follows since $v$ has at most $O(\log D)$ ancestors. \end{proof}

\paragraph{Remark} The algorithm so far gives a \emph{data structure} that supports the following operations in $O(\log D)$ time: i) updating $N_v$ for some $v \in X$, and ii) returning the nearest open facility of a leaf $v \in X$. Indeed, the algorithm can be made even simpler: we set $\alpha$ to be the all-$1$ vector and open the set of lowest marked facilities (so neither $\alpha$ nor $\beta$ is needed). For every vertex $u \in V$, we maintain the nearest open facility $\psi_u$ to $u$ in $T_u$. Whenever a client at $v$ arrives or departs, we only need to change $N_u$, $\psi_u$, and the marking and opening status of $u$, for the ancestors $u$ of $v$. To return the open facility closest to a leaf $v \in X$, we travel up the tree from $v$ until we find an ancestor $u$ with $\psi_u$ defined, and return $\psi_u$. Both operations take $O(\log D)$ time. However, our goal is to maintain the solution $(S, \sigma)$ \emph{explicitly} in memory. Thus we also have to bound the number of reconnections made during the algorithm, since this is a lower bound on the total running time.

\subsection{Dynamic Algorithm for Facility Location on HST Metrics} In this section, we extend the offline algorithm to a dynamic algorithm with $O(\log^3 D)$ amortized update time; recall that $D$ is the aspect ratio of the metric. We maintain the $\alpha$, $\beta$ and $N$ vectors, and at any moment of the algorithm, the marking and opening status of the vertices are exactly those obtained by running the offline algorithm with $\alpha, \beta$ and $N$. Initially, $\alpha$ and $\beta$ are all-$1$ vectors and $N$ is the all-$0$ vector, so all vertices are unmarked. Whenever a client at some $v \in X$ arrives or departs, the $\alpha, \beta$ values and the marking and opening status of the ancestors of $v$ may change, and we show how to handle these changes; vertices that are not ancestors of $v$ are not affected.

When a client at $v$ arrives or departs, we increase or decrease the $N_u$ values of all ancestors $u$ of $v$ by 1, \emph{continuously} and at the same rate (we can think of the number of clients at $v$ as increasing or decreasing by 1 continuously). During this process, the marking and opening status of these vertices may change. When such an event happens, we change the $\alpha$ and/or $\beta$ values of the vertex so that it becomes harder for the status to change back in the future. Specifically, we use the following rules; a code sketch follows the list. \begin{itemize} \item If a vertex $u$ changes to marked (from being unmarked), then we change $\alpha_u$ to $2$ (notice that $u$ remains marked w.r.t the new $\alpha$), and $\beta_u$ to $1$. In this case, we do not consider the opening status change of $u$ as an event. \item If a vertex $u$ changes to unmarked (from being marked), we change $\alpha_u$ to $1$ (notice that $u$ remains unmarked w.r.t the new $\alpha$). The $\beta_u$ value becomes irrelevant. In this case, we also do not consider the opening status change of $u$ as an event. \item If a marked vertex $u$ becomes open (from being closed), then we change $\beta_u$ to $2$ (notice that $u$ remains open w.r.t the new $\beta$). \item If a marked vertex $u$ becomes closed (from being open), then we change $\beta_u$ to $1$ (notice that $u$ remains closed w.r.t the new $\beta$). \end{itemize}
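The four rules translate directly into the following event handlers (a Python sketch of ours; the surrounding loop that varies $N_u$ continuously and detects the status flips is omitted):

\begin{verbatim}
# Hysteresis updates: after each status flip, alpha/beta are reset so
# that flipping back requires N_u to move by a constant factor.
def on_marked(u, alpha, beta):    # unmarked -> marked
    alpha[u] = 2                  # u stays marked under the new alpha
    beta[u] = 1                   # opening re-decided, not an event

def on_unmarked(u, alpha, beta):  # marked -> unmarked
    alpha[u] = 1                  # u stays unmarked under the new alpha

def on_opened(u, beta):           # closed -> open (u is marked)
    beta[u] = 2                   # u stays open under the new beta

def on_closed(u, beta):           # open -> closed (u is marked)
    beta[u] = 1                   # u stays closed under the new beta
\end{verbatim}

This hysteresis is what the token argument below charges against: between two status changes of the same kind at $u$, the value $N_u$ must change by $\Omega(f_u/2^{{\mathsf{level}}(u)})$.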
We call the four types of events above marking, unmarking, opening and closing events. Now we discuss the order in which the events happen. When we increase the $N_u$ values of the ancestors of $v$ continuously, one of the following two events may happen: \begin{itemize} \item The highest unmarked ancestor $u$ of $v$ may become globally lowest marked, and this may \emph{induce} a closing event for the parent $w$ of $u$. \item The lowest marked ancestor $u$ of $v$ may become open. \end{itemize} Similarly, when we decrease the $N_u$ values of the ancestors of $v$ continuously, one of the following two events may happen: \begin{itemize} \item The lowest marked ancestor $u$ of $v$ may become unmarked (it must be that $u$ was globally lowest marked), and this may \emph{induce} an opening event for the parent $w$ of $u$. \item The lowest marked ancestor $u$ of $v$ may become closed. \end{itemize} If two events happen at the same time, we handle one of them arbitrarily. Notice that after we handle that event, the condition for the other event might no longer hold, in which case we do not handle it. Once we have finished the process of increasing or decreasing the $N_u$ values by 1, the clients are connected to their respective nearest open facilities, breaking ties using the consistent rule. A reconnection happens whenever a client gets connected to a different facility.

\paragraph{Bounding the Number of Reconnections} Now we analyze the reconnections made by the algorithm. When a client at $v \in X$ arrives or departs, at most $O(\log D)$ vertices $u$ have their $N_u$ values changed by $1$. We distribute 4 tokens to each ancestor $u$ of $v$: one each of type-A, type-B, type-C and type-D.\footnote{The types are only defined for convenience.} We are going to use these tokens to pay for the events that happen.

First focus on the sequence of marking/unmarking events at a vertex $u$. Right before $u$ becomes unmarked we have $N_u \leq f_u/(2 \times 2^{{\mathsf{level}}(u)})$, since at that moment $\alpha_u = 2$; immediately afterwards, $\alpha_u$ is changed to $1$. For $u$ to become marked again, we need $N_u > f_u/2^{{\mathsf{level}}(u)}$, so in between, $N_u$ must have increased by at least $f_u/(2 \times 2^{{\mathsf{level}}(u)})$. Similarly, right before $u$ becomes marked we have $N_u \geq f_u/2^{{\mathsf{level}}(u)}$, since at that moment $\alpha_u = 1$; then we change $\alpha_u$ to $2$ immediately. For $u$ to become unmarked again, $N_u$ must decrease by at least $f_u/(2\times2^{{\mathsf{level}}(u)})$. So, when a marking/unmarking event happens at $u$, we can spend $\Omega(f_u/2^{{\mathsf{level}}(u)})$ type-A tokens owned by $u$.

Next we focus on the sequence $\mathcal{S}$ of opening/closing events at $u$ between two adjacent marking/unmarking events at $u$. At these moments, $u$ is marked and $\alpha_u = 2$. For the first event in $\mathcal{S}$, we can spend $\Omega(f_u/2^{{\mathsf{level}}(u)})$ type-B tokens owned by $u$. If some opening/closing event $e$ in $\mathcal{S}$ is induced by an unmarking/marking event of some child $u'$ of $u$, then we can spend $\Omega(f_{u'}/2^{{\mathsf{level}}(u')}) \geq \Omega(f_u/2^{{\mathsf{level}}(u)})$ type-C tokens owned by $u'$ for $e$, and for the event $e'$ after $e$ in $\mathcal{S}$, if it exists.
Notice that we have already argued that $u'$ has collected a sufficient number of type-C tokens. It remains to consider an event $e'$ in $\mathcal{S}$ such that neither $e'$ nor the event $e$ preceding $e'$ in $\mathcal{S}$ is induced. First, assume $e$ is an opening event and $e'$ is a closing event. Then after $e$ we have $\sum_{u' \in \Lambda_u: u' \text{ unmarked}} N_{u'} = f_u/(2 \times 2^{{\mathsf{level}}(u)})$, and before $e'$ we have $\sum_{u' \in \Lambda_u: u' \text{ unmarked}} N_{u'} = f_u/(4 \times 2^{{\mathsf{level}}(u)})$. The set of unmarked children of $u$ may change in between; let $U'$ and $U''$ be the sets of unmarked children of $u$ at the moments after $e$ and before $e'$, respectively. Again, if there is some $u' \in (U' \setminus U'') \cup (U'' \setminus U')$, we spend $\Omega(\frac{f_{u'}}{2^{{\mathsf{level}}(u')}}) \geq \Omega(\frac{f_u}{2^{{\mathsf{level}}(u)}})$ type-C tokens owned by $u'$. Otherwise, $U' = U''$, and $f_u/(4\times 2^{{\mathsf{level}}(u)})$ clients in $T_u$ must have departed between $e$ and $e'$, so we can spend $\Omega(f_u/2^{{\mathsf{level}}(u)})$ type-D tokens for $e'$. The case when $e$ is a closing event and $e'$ is an opening event can be argued in the same way.

Thus, whenever an event happens at $u$, we can spend $\Omega(f_u/2^{{\mathsf{level}}(u)})$ tokens; moreover, if an opening/closing event at $u$ was induced by an unmarking/marking event at some child $u'$ of $u$, then we can spend $\Omega(f_{u'}/2^{{\mathsf{level}}(u')})$ tokens for the event at $u$. A facility $u$ changes its opening status when an event happens at $u$. Notice that we reconnect a client only if it was connected to a facility that is about to close, or if it needs to be connected to a newly opened facility. By Lemma~\ref{lemma:departure-ub-outside}, at any moment the number of clients connected to $u$ from outside $T_u$ is at most $O(\log D)\cdot \frac{f_u}{2^{{\mathsf{level}}(u)}}$. If $u$ changes its opening status because of a non-induced event, then before and after the event the number of clients connected to $u$ from $T_u$ is of order $O\left(\frac{f_u}{2^{{\mathsf{level}}(u)}}\right)$. If $u$ changes its opening status due to a marking/unmarking event at some child $u'$ of $u$, then before and after the event the number of clients connected to $u$ from $T_u$ is of order $\Theta\left(\frac{f_{u'}}{2^{{\mathsf{level}}(u')}}\right)$. Thus, on average, for each token spent we reconnect at most $O(\log D)$ clients. Since each client arrival or departure distributes at most $O(\log D)$ tokens, the amortized number of reconnections (per client arrival/departure) is at most $O(\log^2D)$.

\paragraph{Analyzing the Update Time} With the bound on the number of reconnections (recourse), we can bound the update time easily. Indeed, we maintain a value $\psi_u$ for every $u \in V$, which indicates the nearest open facility to $u$ in $T_u \setminus u$ ($\psi_u$ may be undefined). We also maintain a value $N'_u$ for every marked vertex $u$, where $N'_u = \sum_{v \in \Lambda_u: v\text{ unmarked}} N_v$. Whenever a client at $v$ arrives or departs, we need to change $\alpha_u, \beta_u, N_u, N'_u, \psi_u$ and the marking and opening status of $u$ only for the ancestors $u$ of $v$. Using the information stored at the vertices, these updates can be made in $O(\log D)$ time per client arrival or departure. The bottleneck of the algorithm comes from reconnecting clients.
We already argued that the amortized number of reconnections per client arrival/departure is $O(\log^2D)$, so it suffices to give an algorithm that finds the clients to be reconnected efficiently. For every vertex $u$, we maintain a doubly linked list of the unmarked children $u'$ of $u$ with $N_{u'} \geq 1$. With this structure, every client that needs to be reconnected can be located in $O(\log D)$ time. If $u$ becomes open, we need to consider each unmarked child $u'$ of $u$ and reconnect the clients in $T_{u'}$ to $u$; the time needed to locate these clients can be made $O(\log D)$ times the number of clients. For every strict ancestor $w$ of $u$ with no open facilities in between, we can use the $\psi_w$ information to see whether we need to reconnect clients in $T_w$; if so, then for every unmarked child $w'$ of $w$ with $N_{w'} \geq 1$ that is not an ancestor of $u$, we need to connect the clients in $T_{w'}$ to $u$. Again, enumerating these clients takes $O(\log D)$ time per client. Similarly, if $u$ becomes closed, we need to connect all clients connected to $u$ to the nearest open facility of $u$, which can be computed using the $\psi$ values of $u$ and its ancestors; enumerating the clients takes $O(\log D)$ time per client. Overall, the amortized running time per client arrival/departure is $O(\log ^3D)$.

\section{$(1+\sqrt{2}+\epsilon)$-Approximate Dynamic Algorithm for Facility Location in the Incremental Setting} \label{sec:dfl} In this section, we prove Theorem~\ref{UFL-dynamicIncremental} by combining the ideas from Sections \ref{sec:ofl} and \ref{sec:fast-UFL} to derive a dynamic algorithm for facility location in the incremental setting. As in the online algorithm of Section~\ref{sec:ofl}, we divide our algorithm into stages. Whenever a client arrives, we use a simple rule to accommodate it. We can no longer afford to consider all possible local operations as in Section~\ref{sec:ofl}; instead, we use the randomized local search idea of the algorithm in Section~\ref{sec:fast-UFL} by calling the procedure $\mathsf{FL\mhyphen iterate}$. We call the procedure only when the cost of our solution has increased by a factor of $1+\epsilon'$ (where $\epsilon' = \Theta(\epsilon)$ is small enough). In our analysis, we show a lemma similar to Lemma \ref{lmm:ofl-delta-cost-bound}: the total increase of cost due to the arrival of clients is small compared to the optimum cost for these clients. This allows us to bound the number of times we call $\mathsf{FL\mhyphen iterate}$.

Recall that we are given an integer $\Gamma = \mathrm{poly}\big(n, \log D, \frac1\epsilon\big)$ that is big enough: we aim at a success probability of $1-1/\Gamma$ for each call of $\mathsf{FL\mhyphen iterate}$. Our final running time will only depend on $O(\log \Gamma)$. The main algorithm is the same as Algorithm~\ref{alg:ofl}, except that we use Algorithm~\ref{alg:fast-UFL-one-stage} as the algorithm for one stage. As before, we only need to design one stage of the algorithm. Recall that in a stage we are given an initial set $C$ of clients and an $O(1)$-approximate solution $(S, \sigma)$ for $C$. Clients arrive one by one, and our goal is to maintain an $(\alpha_{\mathsf{FL}} + O(\epsilon'))$-approximate solution at all times. The stage terminates if no more clients arrive or if our solution has cost more than $1/\epsilon'$ times the cost of the initial solution.
\begin{algorithm} \caption{One Stage of the Dynamic Algorithm for Facility Location} \label{alg:fast-UFL-one-stage} \begin{algorithmic}[1] \Require{ \begin{itemize} \item $C$: the initial set of clients \item $(S, \sigma)$: initial solution for $C$, which is $O(1)$-approximate \end{itemize} } \State let $M = O\left(\frac{ |F|}{\epsilon'}\log\Gamma\right)$ be large enough \label{step:fast-UFL-M} \State $(S, \sigma) \gets \mathsf{FL\mhyphen iterate}\left(M\right)$, $\textsf{init}\gets \mathsf{cost}(S, \sigma), {\mathsf{last}} \gets \textsf{init}$ \label{step:fast-UFL-init} \For{$t \gets 1, 2, 3, \cdots$, terminating if no more clients arrive} \For{$q = \ceil{\log\frac{{\mathsf{last}}}{|F|}}$ to $\ceil{\log\frac{\mathsf{last}}{\epsilon'}}$} \label{step:fast-UFL-enumerate-q} \State \textbf{if} $i \gets \arg\min_{i \in F\setminus S, f_i \leq 2^q}d(j_t, i)$ exists, \textbf{then} call $\mathsf{try\mhyphen open'}(i)$ \label{step:fast-UFL-try-open} \Comment{$\mathsf{try\mhyphen open'}$ is the same as $\mathsf{try\mhyphen open}$, except that we consider the original cost instead of the scaled cost.} \EndFor \State $C \gets C \cup \{j_t\}$ and call $\mathsf{try\mhyphen open'}\big(\arg\min_{i \in F \setminus S}(d(j_t, i) + f_i)\big)$ \label{step:fast-UFL-handle-j} \If{$\mathsf{cost}(S, \sigma) > (1+\epsilon')\cdot {\mathsf{last}}$} \State $(S, \sigma) \gets \mathsf{FL\mhyphen iterate}\left(M\right)$ \label{step:fast-UFL-call-iterate} \If {$\mathsf{cost}(S, \sigma) > {\mathsf{last}}$} ${\mathsf{last}} \gets \mathsf{cost}(S, \sigma)$ \EndIf \label{step:fast-UFL-update-last} \If{${\mathsf{last}} > \textsf{init}/\epsilon'$} terminate the stage \EndIf \label{step:fast-UFL-terminate} \EndIf \EndFor \end{algorithmic} \end{algorithm}

Notice that within a stage we consider the original costs of solutions (instead of the scaled costs used inside $\mathsf{FL\mhyphen iterate}$). During a stage we maintain a value ${\mathsf{last}}$ which gives an estimate of the cost of the current solution $(S, \sigma)$. Whenever a client $j_t$ arrives, we apply some rules to open facilities and connect $j_t$ (Steps~\ref{step:fast-UFL-enumerate-q} to \ref{step:fast-UFL-handle-j}). These operations are needed to make the cost increase due to the arrival of $j_t$ (defined as $\Delta_t$ later) small. In the algorithm, $\mathsf{try\mhyphen open'}$ is the same as $\mathsf{try\mhyphen open}$, except that we use the original cost instead of the scaled cost (this is not important, only convenient). If $\mathsf{cost}(S, \sigma)$ becomes too large, i.e., $\mathsf{cost}(S, \sigma) > (1+\epsilon'){\mathsf{last}}$, then we call $(S, \sigma) \gets \mathsf{FL\mhyphen iterate}(M)$ for the $M$ defined in Step~\ref{step:fast-UFL-M} (Step~\ref{step:fast-UFL-call-iterate}), and update ${\mathsf{last}}$ to $\mathsf{cost}(S, \sigma)$ if $\mathsf{cost}(S, \sigma) > {\mathsf{last}}$ (Step~\ref{step:fast-UFL-update-last}). We terminate the stage when ${\mathsf{last}} > \textsf{init}/\epsilon'$, where $\textsf{init}$ is the value of $\mathsf{cost}(S, \sigma)$ at the beginning of the stage (Step~\ref{step:fast-UFL-terminate}).

We say an execution of $\mathsf{FL\mhyphen iterate}(M)$ is successful if the event in Lemma~\ref{lemma:ufl-iterate} happens. Then we have

\begin{lemma} \label{lemma:ufl-dynamic-ratio} If all executions of $\mathsf{FL\mhyphen iterate}$ are successful, then the solution $(S, \sigma)$ at the end of each time step is $(1+\epsilon')(\alpha_{\mathsf{FL}}+\epsilon')$-approximate.
\end{lemma} \begin{proof} This holds since we always have $\mathsf{cost}(S, \sigma) \leq (1+\epsilon'){\mathsf{last}}$ at the end of each time, where ${\mathsf{last}}$ is the cost of some $(\alpha_{\mathsf{FL}} + \epsilon')$-approximate solution at some moment before. As we only add clients to $C$, the cost of the optimum solution can only increase, and thus the claim holds. \end{proof}

Now we argue that each execution of $\mathsf{FL\mhyphen iterate}(M)$ is successful with probability at least $1-1/\Gamma$. By Lemma~\ref{lemma:ufl-iterate}, it suffices to make sure that the $(S, \sigma)$ before the execution is $O(1)$-approximate. This is easy to see: Before Step~\ref{step:fast-UFL-handle-j} in time $t$, we have $\mathsf{cost}(S, \sigma) \leq O(1)\mathsf{opt}$; the increase of $\mathsf{cost}(S, \sigma)$ in the step is at most the value of $\mathsf{opt}$ after the step (i.e., we consider the client $j_t$ when defining $\mathsf{opt}$). Thus, we have $\mathsf{cost}(S, \sigma) \leq O(1)\mathsf{opt}$ after the step.

\subsection{Bounding Number of Times of Calling $\mathsf{FL\mhyphen iterate}$}

It remains to bound the number of times we call $\mathsf{FL\mhyphen iterate}$. Again, we use $T$ to denote the last time step of Algorithm~\ref{alg:fast-UFL-one-stage} (i.e., one stage of the dynamic algorithm) and $\Delta_t$ to denote the cost increase due to the arrival of $j_t$: it is the value of $\mathsf{cost}(S, \sigma)$ after Step~\ref{step:fast-UFL-handle-j} in time $t$ minus that before Step~\ref{step:fast-UFL-handle-j}. For every time $t \in [T]$, let $C_t$ be the set $C$ at the end of time $t$, and let $\mathsf{opt}_t$ be the cost of the optimum solution for $C_t$. Let ${\mathsf{last}}_t$ be the value of ${\mathsf{last}}$ at the \emph{beginning} of time $t$. Due to Step~\ref{step:fast-UFL-handle-j}, we have the following observation: \begin{obs} \label{obs:dfl-delta-t} For every $t \in [T]$, we have $\Delta_t \leq \min_{i \in F}(f_i + d(i, j_t))$. \end{obs} \begin{proof} Let $i = \arg\min_{i \in F}(f_i + d(i, j_t))$ and consider Step~\ref{step:fast-UFL-handle-j} at time $t$. If $d(j_t, S) \leq f_i + d(i, j_t)$ before the step, then we have $\Delta_t \leq d(j_t, S) \leq f_i + d(i, j_t)$. Otherwise, $i \notin S$ and $d(j_t, S) > f_i + d(i, j_t)$. Then $\mathsf{try\mhyphen open'}(i)$ in the step will open $i$ and we have $\Delta_t \leq f_i + d(i, j_t)$. \end{proof} We can also prove the following lemma that bounds $\Delta_t$: \begin{lemma} \label{lemma:dfl-delta-t} Let $t \in [T]$ and $i^* \in F$ be such that $f_{i^*} \leq {\mathsf{last}}_t/\epsilon'$, and let $C' \subseteq C_{t-1}$ be any non-empty subset. Then we have \begin{align*} \Delta_t \leq \frac{2}{|C'|}\left(\max\set{f_{i^*}, {\mathsf{last}}_t/|F|} + \sum_{j \in C'}d(i^*, j)\right) + 5d(i^*, j_t). \end{align*} \end{lemma} \begin{proof} In this proof, we focus on time $t$ of the algorithm. If $i^* \in S$ before Step~\ref{step:fast-UFL-handle-j}, then we have $\Delta_t \leq d(i^*, j_t)$ and thus we can assume $i^* \notin S$ before Step~\ref{step:fast-UFL-handle-j}. Since Loop~\ref{step:fast-UFL-enumerate-q} only adds facilities to $S$, we have that $i^* \notin S$ at any moment in Loop~\ref{step:fast-UFL-enumerate-q}. Let $q = \ceil{\log \max\set{f_{i^*}, {\mathsf{last}}_t/|F|}}$; notice that this $q$ is considered in Loop~\ref{step:fast-UFL-enumerate-q}.
Let $i \in F\setminus S$ be the facility with $f_i \leq 2^q$ nearest to $j_t$ at the beginning of the iteration for $q$; this is the facility we try to open in Step~\ref{step:fast-UFL-try-open} in the iteration for $q$. Notice that $d(j_t, i) \leq d(j_t, i^*)$ since $i^*$ is a candidate facility. Since we called $\mathsf{try\mhyphen open'}(i)$ in Step~\ref{step:fast-UFL-try-open}, there is no $0$-efficient opening operation that opens $i$ after the step. Then, we can apply Lemma~\ref{lemma:helper-star} on this facility $i$, the set $C'$ and $\phi = 0$. So, after Step~\ref{step:fast-UFL-try-open} of the iteration for $q$, we have \begin{align*} d(j_t, S) \leq \frac{1}{|C'|}\left(f_i + 2\sum_{j \in C'}d(i, j)\right) + d(i, j_t). \end{align*} Notice that $d(i, i^*) \leq d(i, j_t) + d(j_t, i^*) \leq 2d(j_t, i^*)$, $f_i \leq 2\max\set{f_{i^*}, {\mathsf{last}}_t/|F|}$, and $S$ can only grow before the end of Step~\ref{step:fast-UFL-handle-j}. We have \begin{align*} \Delta_t &\leq \frac{1}{|C'|}\left(2\max\set{f_{i^*},{\mathsf{last}}_t/|F|} + 2\sum_{j \in C'}(d(i^*, j) + d(i^*, i))\right) + d(i^*, j_t) \\ &\leq \frac{2}{|C'|}\left(\max\set{f_{i^*},{\mathsf{last}}_t/|F|} + \sum_{j \in C'}d(i^*, j)\right) + 5d(i^*, j_t). \qedhere \end{align*} \end{proof}

With this lemma in hand, we can prove the following: \begin{lemma} \label{lemma:dfl-Delta} For every $T' \in [T-1]$, we have \begin{align*} \sum_{t = 1}^{T'} \Delta_t \leq O(\log T') \cdot \mathsf{opt}_{T'} \end{align*} \end{lemma} \begin{proof} The proof is similar to that of Lemma~\ref{lmm:ofl-delta-cost-bound}. Let $(S^*, \sigma^*)$ be the optimum solution for clients $C_{T'}$. Focus on some $i^* \in S^*$ and assume $(C_{T'} \setminus C_0) \cap \sigma^{*-1}(i^*) = \{j_{t_1}, j_{t_2}, \cdots, j_{t_s}\}$ with $1 \leq t_1 < t_2 < \cdots < t_s \leq T'$. We have $\Delta_{t_1} \leq f_{i^*} + d(i^*, j_{t_1})$ by Observation~\ref{obs:dfl-delta-t}. Then focus on any $k \in [2, s]$. If $f_{i^*} > {\mathsf{last}}_{t_k}/\epsilon'$, then we must have $\mathsf{opt}_{t_k} \geq {\mathsf{last}}_{t_k}/\epsilon'$ and the stage would terminate at time ${t_k}$. Thus ${t_k} = T$, contradicting the assumption that ${t_k} \leq T' \leq T-1$. So we may assume $f_{i^*} \leq {\mathsf{last}}_{t_k}/\epsilon'$. We can apply Lemma~\ref{lemma:dfl-delta-t} with $i^*$ and $C' = \{j_{t_1}, j_{t_2}, \cdots, j_{t_{k-1}} \}$ to obtain that $\Delta_{t_k} \leq \frac{2}{k-1}\left(\max\set{f_{i^*},{\mathsf{last}}_{t_k}/|F|} + \sum_{k'=1}^{k-1}d(i^*, j_{t_{k'}})\right) + 5d(i^*, j_{t_k})$. We can replace ${\mathsf{last}}_{t_k}$ with ${\mathsf{last}}_{T'}$ since ${\mathsf{last}}_{t_k} \leq {\mathsf{last}}_{T'}$. The sum of the upper bounds over all $k \in [s]$ is a linear combination of $\max\set{f_{i^*},{\mathsf{last}}_{T'}/|F|}$ and the $d(i^*, j_{t_{k'}})$'s. In the linear combination, the coefficient of $\max\set{f_{i^*},{\mathsf{last}}_{T'}/|F|}$ is at most $1 + \frac21 + \frac22 + \frac23 + \cdots + \frac2{s-1} = O(\log s) = O(\log T')$. The coefficient of each $d(i^*, j_{t_{k'}})$ is at most $5 + \frac2{k'} + \frac2{k'+1} + \cdots + \frac2{s-1} = O(\log s) = O(\log T')$. Thus, overall, we have $\sum_{k = 1}^{s}\Delta_{t_k} \leq O(\log T') \big(\max\set{f_{i^*},{\mathsf{last}}_{T'}/|F|} + \sum_{k'=1}^s d(i^*, j_{t_{k'}})\big)$. Therefore $\sum_{t = 1}^{T'} \Delta_t \leq O(\log T') \left( \mathsf{cost}(S^*, \sigma^*) + |S^*|{\mathsf{last}}_{T'}/|F|\right)$, by taking the sum of the above inequality over all $i^* \in S^*$.
The bound is at most $O(\log T')(\mathsf{opt}_{T'} + {\mathsf{last}}_{T'}) = O(\log T') \cdot \mathsf{opt}_{T'}$, since $|S^*| \leq |F|$ and ${\mathsf{last}}_{T'} \leq O(1)\mathsf{opt}_{T'-1} \leq O(1) \mathsf{opt}_{T'}$. \end{proof}

Between two consecutive calls of $\mathsf{FL\mhyphen iterate}$ in Step~\ref{step:fast-UFL-call-iterate}, at times $t_1$ and $t_2 > t_1$, $\mathsf{cost}(S, \sigma)$ must have increased by at least $\epsilon'{\mathsf{last}}_{t_2}$: At the end of time $t_1$, we have $\mathsf{cost}(S, \sigma) \leq {\mathsf{last}}_{t_1+1} = {\mathsf{last}}_{t_2}$, since otherwise ${\mathsf{last}}$ would have been updated in time $t_1$. We need to have $\mathsf{cost}(S, \sigma) > (1+\epsilon'){\mathsf{last}}_{t_2}$ after Step~\ref{step:fast-UFL-handle-j} at time $t_2$ in order to call $\mathsf{FL\mhyphen iterate}$. Thus, the increase of the cost during this period is at least $\epsilon' {\mathsf{last}}_{t_2}$. Thus, we have $\sum_{t=t_1+1}^{t_2}\frac{\Delta_t}{\epsilon'\cdot{\mathsf{last}}_t} \geq 1$ since ${\mathsf{last}}_t = {\mathsf{last}}_{t_2}$ for every $t \in (t_1, t_2]$. The argument also holds when $t_1 = 0$ and $t_2 > t_1$ is the first time at which we call $\mathsf{FL\mhyphen iterate}$. Counting the call of $\mathsf{FL\mhyphen iterate}$ in Step~\ref{step:fast-UFL-init}, we can bound the total number of times we call the procedure by $1 + \frac{1}{\epsilon'}\sum_{t=1}^T\frac{\Delta_t}{{\mathsf{last}}_t}$. Again, let $\Phi_{T'}= \sum_{t = 1}^{T'} \Delta_t$ for every $T' \in [0, T]$. Lemma~\ref{lemma:dfl-Delta} says $\Phi_{t} \leq O(\log t) \mathsf{opt}_{t}$ for every $t \in [0, T-1]$. Since $\Delta_t \leq \mathsf{opt}_t$ for every $t \in [T]$, we have $\Phi_t = \Phi_{t-1} + \Delta_t \leq O(\log t) \mathsf{opt}_{t-1} \leq O(\log T) {\mathsf{last}}_t$, since ${\mathsf{last}}_t$ will be at least the cost of some solution for $C_{t-1}$. Applying Lemma~\ref{lemma:helper-sum-b/a} with $a_t = {\mathsf{last}}_t, b_t = \Delta_t$ and $B_t = \Phi_t$ for every $t$, the number of times we call $\mathsf{FL\mhyphen iterate}$ can be bounded by \begin{align*} 1+\frac{1}{\epsilon'}\sum_{t=1}^T\frac{\Delta_t}{{\mathsf{last}}_t} \leq \frac{1}{\epsilon'} O(\log T) \left(\ln\frac{{\mathsf{last}}_T}{{\mathsf{last}}_1} + 1\right) = O\left(\frac{\log T}{\epsilon}\log\frac{1}{\epsilon}\right). \end{align*}

We can then analyze the running time and the success probability of our algorithm. Focus on each stage of the algorithm. By Observation~\ref{obs:time-iterate}, each call to $\mathsf{FL\mhyphen iterate}(M)$ takes time $O(M|C|\log |F|) = O\left(\frac{ |F|}{\epsilon'}(\log \Gamma) |C|\log n \right) = O\left(\frac{ n\cdot|C_T|}{\epsilon}\log^2 n\right)$, where $C$ is the set of clients in the algorithm at the time we call the procedure, $C_T \supseteq C$ is the set of clients at the end of time $T$, and $M = O\left(\frac{|F|}{\epsilon'}\log \Gamma\right)$ is as defined in Step~\ref{step:fast-UFL-M}. The total number of times we call the procedure is at most $O\left(\frac{\log T}{\epsilon}\log\frac1\epsilon\right) \leq O\left(\frac{\log n}{\epsilon}\log\frac1\epsilon\right)$. Thus, the running time we spend on $\mathsf{FL\mhyphen iterate}$ is $O\left(\frac{ n\cdot|C_T|}{\epsilon^2}\log^3 n\log\frac{1}{\epsilon}\right)$. The running time for Steps~\ref{step:fast-UFL-enumerate-q} to \ref{step:fast-UFL-handle-j} is at most $T \cdot O\big(\log \frac{|F|}{\epsilon'}\big) \cdot O\big(|C_T|\log |F|\big) = O(|C_T|T\log^2 \frac{|F|}{\epsilon}) \leq O(n|C_T|\log^2\frac{n}{\epsilon})$.
Thus, the total running time of a stage is at most $O\left(\frac{ n\cdot|C_T|}{\epsilon^2}\log^3 n\log\frac{1}{\epsilon}\right)$. Now consider all the stages together. The sum of the $|C_T|$ values over all stages is at most $2n$, since every client appears in at most 2 stages. So, the total running time of our algorithm is $O\left(\frac{n^2}{\epsilon^2}\log^3 n\log\frac1\epsilon\right)$. For the success probability, the total number of times we call $\mathsf{FL\mhyphen iterate}(M)$ is at most $O\left(\log_{1/\epsilon} (nD)\frac{\log n}{\epsilon}\log \frac1\epsilon\right) = \mathrm{poly}(\log n, \log D, \frac1\epsilon)$. If $\Gamma$ is at least $n^2$ times this number, which is still $\mathrm{poly}(n, \log D, \frac{1}{\epsilon})$, then by a union bound over all calls the success probability of our algorithm is at least $1-1/n^2$. Finally, we remark that the success of the algorithm only depends on the success of all executions of $\mathsf{FL\mhyphen iterate}$. Each execution has success probability $1-1/\Gamma$ even if the adversary is adaptive. This finishes the proof of Theorem~\ref{UFL-dynamicIncremental}.

\paragraph{Remark} We can indeed obtain an algorithm that has both $O(\log T)$ amortized client recourse and $\tilde O(n^2)$ total running time, by defining $\phi = \frac{\mathsf{cost}(S, \sigma)}{\alpha_{\mathsf{FL}}\epsilon'}$ and only performing $\phi$-efficient local operations. However, this would require us to carry $\phi$ through the entire analysis and would compromise its cleanness. Thus, we choose to separate the two features into two algorithms: one with small recourse and one with $\tilde O(n^2)$ total running time. We also remark that the total running time for all calls of $\mathsf{FL\mhyphen iterate}$ is only $\tilde O(n|F|)$, and the $\tilde O(n^2)$ time comes from Steps~\ref{step:fast-UFL-enumerate-q} to \ref{step:fast-UFL-handle-j}. By losing a multiplicative factor of $2$ and an additive factor of $1$ in the approximation ratio, we can assume every client is collocated with its nearest facility (see Appendix~\ref{appendix:moving-clients}). Then at any time we only have $O(|F|)$ different positions for clients, and the running time of the algorithm can be improved to $O(\frac{n|F|}{\epsilon^2}\log^3n\log\frac1{\epsilon})$.

\section{Fast Local Search via Randomized Sampling} \label{sec:fast-UFL}

From now on, we will be concerned with dynamic algorithms. Towards proving Theorem \ref{UFL-dynamicIncremental} for the incremental setting, we first develop a randomized procedure that allows us to perform local search operations fast. In Section~\ref{sec:dfl}, we use this procedure and ideas from Section~\ref{sec:ofl} to develop the dynamic algorithm with the fast update time. The high level idea is as follows: We partition the set of local operations into many ``categories'' depending on which facility an operation tries to open or swap in. In each iteration of the procedure, we sample a category according to some distribution and find the best local operation in this category. By focusing on only one category, one iteration of the procedure can run in time $O(|C|\log |F|)$. On the other hand, the categories and the distribution over them are designed in such a way that in each iteration, the cost of our solution will be decreased by a multiplicative factor of $1 - \Omega\big(\frac1{ |F|}\big)$.
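As a rough indication of where the choice of $M$ in Section~\ref{sec:dfl} comes from (the following back-of-the-envelope calculation is ours and not part of the formal analysis), suppose each iteration shrinks the expected gap between the current scaled cost and its target by a factor of $1 - \frac{c}{|F|}$ for some constant $c > 0$. After $M$ iterations the gap has shrunk by a factor of at most
\begin{align*}
\Big(1 - \frac{c}{|F|}\Big)^M \leq e^{-cM/|F|},
\end{align*}
so already $M = \Theta\big(\frac{|F|}{c}\log\frac{\Gamma}{\epsilon'}\big)$ iterations drive the expected gap below $\frac{\epsilon'}{\Gamma}$ times its initial value, at which point Markov's inequality bounds the failure probability of a single call by roughly $1/\Gamma$; the value $M = O\big(\frac{|F|}{\epsilon'}\log\Gamma\big)$ used in Step~\ref{step:fast-UFL-M} is comfortably larger.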
This idea has been used in \cite{CharikarGhua2005} to obtain their $\tilde O(n^2)$-time algorithm for approximating facility location. However, their algorithm was based on a different local search algorithm and analysis; for consistency and convenience of description, we stick to the original local search algorithm of \cite{AryaGKMP01}, which leads to a $(1+\sqrt{2})$-approximation for the problem. Our algorithm needs to use the heap data structure.

\subsection{Maintaining Heaps for Clients}

Unlike the online algorithm for facility location in Section~\ref{sec:ofl}, in the dynamic algorithm we guarantee that the clients are connected to their nearest open facilities. That is, we always have $\sigma_j = \arg\min_{i\in S} d(j, i)$; we still keep $\sigma$ for convenience of description. We maintain $|C|$ min-heaps, one for each client $j \in C$: The min-heap for $j$ contains the facilities in $S \setminus \{\sigma_j\}$, with the priority value of $i$ being $d(j, i)$. This allows us to efficiently retrieve the second nearest open facility to each $j$: This is the facility at the top of the heap for $j$, and we use the procedure $\mathsf{heap\mhyphen top}(j)$ to return it. \begin{figure*} \begin{algorithm}[H] \caption{$\mathsf{\Delta\mhyphen open}(i)$: \Return $\lambda f_i - \sum_{j \in C} \max\{0, d(j, \sigma_{j}) - d(j, i)\}$} \label{alg:Delta-open} \end{algorithm}\vspace*{-25pt} \begin{algorithm}[H] \caption{$\mathsf{try\mhyphen open}(i)$} \label{alg:try-open} \begin{algorithmic}[1] \State \textbf{if} $\mathsf{\Delta\mhyphen open}(i) < 0$ \textbf{then} open $i$ by updating $S, \sigma$ and heaps accordingly \end{algorithmic} \end{algorithm}\vspace*{-25pt} \begin{algorithm}[H] \caption{$\mathsf{\Delta\mhyphen swap\mhyphen in}(i)$} \label{alg:Delta-swap-in} \begin{algorithmic}[1] \State $C' \gets \{j \in C: d(j, i) < d(j, \sigma_j)\}$ and $\Psi \gets \lambda f_i - \sum_{j \in C'} \big(d(j, \sigma_j) - d(j, i)\big)$ \label{step:Delta-swap-in-C'-Psi} \State $\Delta \gets \min_{i' \in S}\left\{\sum_{j \in \sigma^{-1}(i') \setminus C'}\big[\min\{d(j, i), d(j, \mathsf{heap\mhyphen top}(j))\} - d(j, i')\big] - \lambda f_{i'}\right\} + \Psi$ \label{step:Delta-swap-in-Delta} \State \Return $(\Delta, \text{the $i'$ above achieving the value of $\Delta$})$ \label{step:Delta-swap-in-i'} \end{algorithmic} \end{algorithm}\vspace*{-25pt} \begin{algorithm}[H] \caption{$\mathsf{\Delta\mhyphen close}$} \label{alg:Delta-close} \begin{algorithmic}[1] \State $\Delta \gets \min_{i' \in S}\left\{\sum_{j \in \sigma^{-1}(i')}\big[d(j, \mathsf{heap\mhyphen top}(j)) - d(j, i')\big] - \lambda f_{i'}\right\}$ \State \Return $(\Delta, \text{the $i'$ above achieving the value of $\Delta$})$ \end{algorithmic} \end{algorithm} \end{figure*}

We define four simple procedures $\mathsf{\Delta\mhyphen open}, \mathsf{try\mhyphen open}, \mathsf{\Delta\mhyphen swap\mhyphen in}$ and $\mathsf{\Delta\mhyphen close}$ that are described in Algorithms \ref{alg:Delta-open}, \ref{alg:try-open}, \ref{alg:Delta-swap-in} and \ref{alg:Delta-close} respectively. Recall that we use the \emph{scaled cost} for the local search algorithm; so we are working with the scaled cost function in all these procedures. $\mathsf{\Delta\mhyphen open}(i)$ for any $i \notin S$ returns $\Delta$, the increment of the scaled cost that will be incurred by opening $i$. (For it to be useful, $\Delta$ should be negative, in which case $|\Delta|$ indicates the cost decrement of opening $i$.)
This is the one-line procedure given in Algorithm~\ref{alg:Delta-open}. $\mathsf{try\mhyphen open}$ will open $i$ if doing so reduces the scaled cost. $\mathsf{\Delta\mhyphen swap\mhyphen in}(i)$ for some $i \notin S$ returns a pair $(\Delta, i')$, where $\Delta$ is the smallest scaled cost increment we can achieve by opening $i$ and closing some facility $i' \in S$, and $i'$ is the facility achieving this smallest value. (Again, for $\Delta$ to be useful, it should be negative, in which case $i'$ is the facility that gives the maximum scaled cost decrement $|\Delta|$.) Similarly, $\mathsf{\Delta\mhyphen close}$ returns a pair $(\Delta, i')$, which tells us the maximum scaled cost decrement we can achieve by closing one facility and which facility achieves this decrement. Notice that in all the procedures, the facility we shall open or swap in is given as a parameter, while the facility we shall close is chosen and returned by the procedures. With the heaps, the procedures $\mathsf{\Delta\mhyphen open}, \mathsf{\Delta\mhyphen swap\mhyphen in}$ and $\mathsf{\Delta\mhyphen close}$ can run in $O(|C|)$ time. We only analyze $\mathsf{\Delta\mhyphen swap\mhyphen in}(i)$, as the other two are easier. First, we define $C'$ to be the set of clients $j$ with $d(j, i) < d(j, \sigma_j)$; these are the clients that will surely be reconnected to $i$ once $i$ is swapped in. Let $\Psi = \lambda f_i - \sum_{j \in C'} (d(j, \sigma_j) - d(j, i))$ be the net scaled cost increase of opening $i$ and connecting $C'$ to $i$. The computation of $C'$ and $\Psi$ in Step~\ref{step:Delta-swap-in-C'-Psi} takes $O(|C|)$ time. If additionally we close some $i' \in S$, we need to reconnect each client $j \in \sigma^{-1}(i') \setminus C'$ to either $i$ or the top element in the heap for $j$, whichever is closer to $j$. Steps \ref{step:Delta-swap-in-Delta} and \ref{step:Delta-swap-in-i'} compute and return the best scaled cost increment and the best $i'$. Since $\sum_{i' \in S}|\sigma^{-1}(i')| = |C|$, the running time of these steps can be bounded by $O(|C|)$. The running time for $\mathsf{try\mhyphen open}$, swapping two facilities and closing a facility (which are not defined explicitly as procedures, but used in Algorithm~\ref{alg:sample}) can be bounded by $O(|C|\log |F|)$. The running times come from updating the heap structures: For each of the $|C|$ heaps, we need to delete and/or add at most $2$ elements; each operation takes time $O(\log |F|)$.
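To make the bookkeeping concrete, the following Python fragment sketches $\mathsf{heap\mhyphen top}$ and $\mathsf{\Delta\mhyphen open}$. It is a minimal illustration only; all names are ours, and since Python's \texttt{heapq} does not support deletion by key, lazy deletion of stale entries substitutes for the deletable heaps assumed in the $O(\log|F|)$ accounting above.
\begin{verbatim}
import heapq

# dist(j, i) returns d(j, i); sigma[j] is j's current nearest open facility.
# heaps[j] is a min-heap of (d(j, i), i) over facilities i in S \ {sigma_j}.

def heap_top(j, heaps, S, sigma, dist):
    """Return the second-nearest open facility to client j, or None."""
    h = heaps[j]
    while h and (h[0][1] not in S or h[0][1] == sigma[j]):
        heapq.heappop(h)                   # lazily drop stale entries
    return h[0][1] if h else None

def delta_open(i, lam, f, clients, sigma, dist):
    """Scaled-cost change of opening i: lam*f_i minus the total savings."""
    return lam * f[i] - sum(
        max(0.0, dist(j, sigma[j]) - dist(j, i)) for j in clients)

def try_open(i, S, lam, f, clients, sigma, dist):
    """Open i (and reconnect the clients that gain) if it reduces the cost."""
    if delta_open(i, lam, f, clients, sigma, dist) < 0:
        S.add(i)
        for j in clients:
            if dist(j, i) < dist(j, sigma[j]):
                sigma[j] = i   # heap updates for the old sigma_j omitted here
\end{verbatim}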
\subsection{Random Sampling of Local Operations}

\begin{figure*} \begin{algorithm}[H] \caption{$\mathsf{sampled\mhyphen local\mhyphen search}$} \label{alg:sample} \begin{algorithmic}[1] \If{$\mathsf{rand}(0, 1) < 1/3$} \Comment{$\mathsf{rand}(0, 1)$ returns a uniformly random number in $[0, 1]$} \State$(\Delta, i') \gets \mathsf{\Delta\mhyphen close}$ \State\textbf{if} $\Delta < 0$ \textbf{then} close $i'$ by updating $S, \sigma$ and heaps accordingly \Else \State $i \gets $ random facility in $F \setminus S$ \State $\Delta \gets \mathsf{\Delta\mhyphen open}(i), (\Delta', i') \gets \mathsf{\Delta\mhyphen swap\mhyphen in}(i)$ \State \textbf{if} $\Delta \leq \Delta'$ and $\Delta < 0$ \textbf{then} open $i$ by updating $S, \sigma$ and heaps accordingly \State \textbf{else if} $\Delta' < 0$ \textbf{then} open $i$ and close $i'$ by updating $S, \sigma$ and heaps accordingly \EndIf \end{algorithmic} \end{algorithm} \vspace*{-15pt} \begin{algorithm}[H] \caption{$\mathsf{FL\mhyphen iterate}(M)$} \label{alg:FL-iterate} \begin{algorithmic}[1] \State $(S^{\mathrm{best}}, \sigma^{\mathrm{best}}) \gets (S, \sigma)$ \For{$\ell \gets 1$ to $M$} \State call $\mathsf{sampled\mhyphen local\mhyphen search}$ \If{$\mathsf{cost}(S, \sigma) < \mathsf{cost}(S^{\mathrm{best}}, \sigma^{\mathrm{best}})$} $(S^{\mathrm{best}}, \sigma^{\mathrm{best}}) \gets (S, \sigma)$ \EndIf \EndFor \State \Return $(S^{\mathrm{best}}, \sigma^{\mathrm{best}})$ \end{algorithmic} \end{algorithm} \end{figure*}

With the support of the heaps, we can design a fast algorithm implementing randomized local search. $\mathsf{sampled\mhyphen local\mhyphen search}$ in Algorithm~\ref{alg:sample} gives one iteration of the local search. We first randomly decide which operation to perform. With probability $1/3$, we perform the $\mathsf{close}$ operation that reduces the scaled cost the most (if it exists). With the remaining probability $2/3$, we perform either an $\mathsf{open}$ or a $\mathsf{swap}$ operation. To reduce the running time, we randomly choose a facility $i \in F \setminus S$, find the best operation that opens or swaps in $i$, and perform the operation if it reduces the cost. One iteration of $\mathsf{sampled\mhyphen local\mhyphen search}$ calls the procedures in Algorithms~\ref{alg:Delta-open} to \ref{alg:Delta-close} at most once and performs at most one operation, and thus has running time $O(|C|\log |F|)$. In the procedure $\mathsf{FL\mhyphen iterate}(M)$ described in Algorithm~\ref{alg:FL-iterate}, we run $\mathsf{sampled\mhyphen local\mhyphen search}$ $M$ times. It returns the best solution obtained in these iterations, according to the \emph{original (non-scaled) cost}, which is not necessarily the solution given by the last iteration. So we have \begin{obs} \label{obs:time-iterate} The running time of $\mathsf{FL\mhyphen iterate}(M)$ is $O(M|C|\log |F|)$, where $C$ is the set of clients when we run the procedure. \end{obs} Throughout this section, we fix a facility location instance. Let $(S^*, \sigma^*)$ be the optimum solution (w.r.t.\ the original cost) and $\mathsf{opt} = \mathsf{cost}(S^*, \sigma^*)$ be the optimum cost. Fixing one execution of $\mathsf{sampled\mhyphen local\mhyphen search}$, we use $(S^0, \sigma^0)$ and $(S^1, \sigma^1)$ to denote the solutions before and after the execution respectively. Then, we have \begin{restatable}{lemma}{samplelocalsearch} \label{lemma:sample-local-search} Consider an execution of $\mathsf{sampled\mhyphen local\mhyphen search}$ and fix $(S^0, \sigma^0)$.
We have \begin{align*} \mathsf{cost}_\lambda(S^0, \sigma^0) - \E[\mathsf{cost}_\lambda(S^1, \sigma^1)] \geq \frac1{3 |F|}\max\left\{ \begin{array}{c} \mathsf{cc}(\sigma^0) - (\lambda f(S^*) + \mathsf{cc}(\sigma^*))\\ \lambda f(S^0) - (\lambda f(S^*) + 2\mathsf{cc}(\sigma^*))\\ \mathsf{cost}_\lambda(S^0, \sigma^0) - (2\lambda f(S^*) + 3\mathsf{cc}(\sigma^*)) \end{array} \right\}. \end{align*} \end{restatable}

\begin{restatable}{lemma}{fliterate} \label{lemma:ufl-iterate} Let $(S^\circ, \sigma^\circ)$ be the $(S, \sigma)$ at the beginning of an execution of $\mathsf{FL\mhyphen iterate}(M)$, and assume it is an $O(1)$-approximation to the instance. Let $\Gamma \geq 2$ and let $M = O\left(\frac{ |F|}{\epsilon'}\log\Gamma\right)$ be big enough. Then with probability at least $1-\frac1\Gamma$, the solution returned by the procedure is $(\alpha_{\mathsf{FL}} + \epsilon')$-approximate. \end{restatable}

\section{$(1+\sqrt{2}+\epsilon)$-Competitive Online Algorithm with Recourse} \label{sec:ofl}

In this section, we prove Theorem~\ref{UFL-recourse} by giving the algorithm for online facility location with recourse.

\subsection{The Algorithm}

For any $\epsilon >0$, let $\epsilon' = \Theta(\epsilon)$ be a parameter that is sufficiently small so that the approximation ratio $\alpha_{\mathsf{FL}} + O(\epsilon')= 1+\sqrt{2} + O(\epsilon')$ achieved by our algorithm is at most $\alpha_{\mathsf{FL}} + \epsilon$. Our algorithm for online facility location is easy to describe. Whenever the client $j_t$ comes at time $t$, we use a simple rule to connect $j_t$, as defined in the procedure $\mathsf{initial\mhyphen connect}$ in Algorithm~\ref{alg:initial-connect}: either connecting $j_t$ to the nearest facility in $S$, or opening and connecting $j_t$ to its nearest facility in $F \setminus S$, whichever incurs the smaller cost. Then we repeatedly perform $\phi$-efficient operations (Definition \ref{def:phieff}), until no such operation can be found, for $\phi=\frac{\epsilon'\cdot \mathsf{cost}(S, \sigma)}{\alpha_{\mathsf{FL}}|C|}$.\footnote{There is an exponential number of possible operations, but we can check efficiently whether a $\phi$-efficient one exists. $\mathsf{close}$ operations can be handled easily. To check if we can open a facility $i$, it suffices to check if $\sum_{j \in C: d(j, i) + \phi < d(j,\sigma_j)} (d(j, \sigma_j) - d(j, i)- \phi ) > \lambda f_i \cdot 1_{i \notin S}$. $\mathsf{swap}$ operations are more complicated but can be handled similarly.} \begin{algorithm}[htb] \caption{$\mathsf{initial\mhyphen connect}(j)$} \label{alg:initial-connect} \begin{algorithmic}[1] \If{$\min_{i \in F\setminus S}(f_i + d(i, j)) < d(j, S)$} \State let $i^* = \arg\min_{i \in F\setminus S}(f_i + d(i, j))$, $S \gets S \cup \{i^*\}, \sigma_j \gets i^*$ \Else \ $\sigma_j \gets \arg\min_{i \in S} d(j, i)$ \EndIf \end{algorithmic} \end{algorithm} We can show that the algorithm gives an $(\alpha_{\mathsf{FL}} + \epsilon)$-approximation with amortized recourse $O(\log D\log n)$; recall that $D$ is the aspect ratio of the metric. To remove the dependence on $D$, we divide the algorithm into stages and \emph{freeze} the connections of clients that arrived in early stages. The final algorithm is described in Algorithm~\ref{alg:ofl}, and Algorithm~\ref{alg:ofl-one-stage} gives one stage of the algorithm.
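As a small illustration of the arrival rule, here is a direct Python transcription of $\mathsf{initial\mhyphen connect}$ (a sketch with names of our choosing: $F$ and $S$ are sets of facility identifiers, \texttt{f} a dictionary of opening costs, \texttt{d} the metric; empty sets are handled by sentinel values).
\begin{verbatim}
def initial_connect(j, S, sigma, F, f, d):
    # cheapest way to open a new facility for j
    open_cost, i_star = min(
        ((f[i] + d(i, j), i) for i in F - S),
        default=(float("inf"), None))
    # cost of connecting j to the nearest already-open facility
    near_cost = min((d(i, j) for i in S), default=float("inf"))
    if open_cost < near_cost:
        S.add(i_star)
        sigma[j] = i_star
    else:
        sigma[j] = min(S, key=lambda i: d(i, j))
\end{verbatim}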
\begin{algorithm} \caption{One Stage of Online Algorithm for Facility Location} \label{alg:ofl-one-stage} \begin{algorithmic}[1] \Require{ \begin{itemize} \item $C$: initial set of clients \item $(S, \sigma)$: a solution for $C$ which is $O(1)$-approximate \item Clients $j_1, j_2, \cdots $ arrive from time to time \end{itemize} } \Ensure{ Guaranteeing that $(S, \sigma)$ at the end of each time $t$ is $\frac{\alpha_{\mathsf{FL}}}{1 - \epsilon'}$-approximate } \State $\mathsf{init} \gets \mathsf{cost}(S, \sigma)$ \For{$t \gets 1, 2, \cdots$, terminating if no more clients will arrive} \State $C\gets C\cup\{j_t\}$, and call $\mathsf{initial\mhyphen connect}(j_t)$ \label{step:ofl-settle-down} \While{there exists an $\frac{\epsilon'\cdot \mathsf{cost}(S, \sigma)}{\alpha_{\mathsf{FL}}|C|}$-efficient local operation} \label{step:ofl-while} perform the operation \EndWhile \If {$\mathsf{cost}(S, \sigma) > \mathsf{init}/\epsilon'$} terminate the stage \EndIf \EndFor \end{algorithmic} \end{algorithm}

\begin{algorithm}[htb] \caption{Online Algorithm for Facility Location} \label{alg:ofl} \begin{algorithmic}[1] \State $C \gets \emptyset, S \gets \emptyset, \sigma = ()$ \Repeat \State $C^\circ \gets C, (S^\circ, \sigma^\circ) \gets (S, \sigma)$ \State redefine the next time to be time 1 and run one stage as defined in Algorithm \ref{alg:ofl-one-stage} \State permanently open one copy of each facility in $S^\circ$, and permanently connect clients in $C^\circ$ according to $\sigma^\circ$ (we call the operation \emph{freezing} $S^\circ$ and $C^\circ$) \State $C \gets C \setminus C^\circ$, restrict the domain of $\sigma$ to be the new $C$ \Until no clients come \end{algorithmic} \end{algorithm}

In Algorithm~\ref{alg:ofl-one-stage}, we proceed as described above, with two modifications. First, we are given an initial set $C$ of clients and a solution $(S, \sigma)$ for $C$ which is $O(1)$-approximate. Second, the stage terminates if the cost of our solution increases by a factor of more than $1/\epsilon'$. The main algorithm (Algorithm~\ref{alg:ofl}) is broken into many stages. Since we shall focus on one stage of the algorithm for most of our analysis, we simply redefine the time so that every stage starts at time 1. The improved recourse comes from the \emph{freezing} operation: at the end of each stage, we permanently open one copy of each facility in $S^\circ$, and permanently connect the clients in $C^\circ$ to the copies of $S^\circ$ according to $\sigma^\circ$, where $C^\circ$ and $(S^\circ, \sigma^\circ)$ are the client set and the solution at the beginning of the stage. Notice that we assume the original facilities in $S^\circ$ will still participate in the algorithm in the future; that is, they are subject to opening and closing. Thus each facility may be opened multiple times during the algorithm, and we take the facility costs of all copies into consideration. This assumption is only for the sake of analysis; the actual algorithm only needs to open one copy, and its cost can only be smaller than that of the described algorithm. From now on, we focus on one stage of the algorithm and assume that the solution given at the beginning of each stage is $O(1)$-approximate. In the end we shall account for the loss due to the freezing of clients and facilities.
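The control flow of one stage can be summarized by a short Python skeleton (ours; the cost function and the search for a $\phi$-efficient operation are abstracted behind the assumed helpers \texttt{cost} and \texttt{find\_phi\_efficient\_op}), which highlights how the threshold $\phi$ is recomputed as the solution cost evolves.
\begin{verbatim}
def run_stage(C, S, sigma, arrivals, F, f, d, eps1, alpha_fl,
              cost, find_phi_efficient_op):
    init = cost(S, sigma)
    for j in arrivals:
        C.add(j)
        initial_connect(j, S, sigma, F, f, d)  # arrival rule sketched above
        while True:
            # the threshold shrinks as local search reduces cost(S, sigma)
            phi = eps1 * cost(S, sigma) / (alpha_fl * len(C))
            op = find_phi_efficient_op(S, sigma, phi)  # callable or None
            if op is None:
                break
            op()   # apply it; each reconnection pays for a cost drop > phi
        if cost(S, sigma) > init / eps1:
            return  # stage ends; the caller freezes (S, sigma) and C
\end{verbatim}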
Within a stage, the approximation ratio follows directly from Theorem~\ref{thm:FL-offline-apx-ratio}: Focus on the moment after the while loop at time step $t$ in Algorithm~\ref{alg:ofl-one-stage}. Since there are no $\frac{\epsilon'\cdot \mathsf{cost}(S,\sigma)}{\alpha_{\mathsf{FL}}|C|}$-efficient local operations on $(S, \sigma)$, we have by the theorem that $\mathsf{cost}(S, \sigma) \leq \alpha_{\mathsf{FL}}\left(\mathsf{opt} + |C|\cdot \frac{\epsilon'\cdot \mathsf{cost}(S, \sigma)}{\alpha_{\mathsf{FL}}|C|}\right) = \alpha_{\mathsf{FL}}\mathsf{opt} + \epsilon'\cdot\mathsf{cost}(S, \sigma)$, where $\mathsf{opt}$ is the cost of the optimum solution for $C$. Thus, at the end of each time, we have $\mathsf{cost}(S, \sigma) \leq \frac{\alpha_{\mathsf{FL}}}{1-\epsilon'}\cdot\mathsf{opt}$.

\subsection{Bounding Amortized Recourse in One Stage}

We now bound the amortized recourse in a stage. We assume that $\mathsf{cost}(S, \sigma) > 0$ at the beginning of the stage, since otherwise there is no recourse involved in the stage (if the initial cost is $0$, the stage terminates as soon as the cost becomes positive). We use $T$ to denote the last time of the stage. For every time $t$, let $C_t$ be the set $C$ at the end of time $t$, and let $\mathsf{opt}_t$ be the cost of the optimum solution for the set $C_t$. For every $t \in [T]$, we define $\Delta_t$ to be the value of $\mathsf{cost}(S, \sigma)$ after Step~\ref{step:ofl-settle-down} at time step $t$ in Algorithm~\ref{alg:ofl-one-stage}, minus that before Step~\ref{step:ofl-settle-down}. We can think of this as the cost increase due to the arrival of $j_t$. The key lemma we prove is the following: \begin{lemma}\label{lmm:ofl-delta-cost-bound} For every $T' \in [T]$, we have $$\sum_{t = 1}^{T'} \Delta_t \leq O(\log T')\mathsf{opt}_{T'}.$$ \end{lemma} \begin{proof} Consider the optimum solution for $C_{T'}$ and focus on any star $(i, C')$ in the solution; that is, $i$ is an open facility and $C'$ is the set of clients connected to $i$. Assume $C' \setminus C_0 = \{j_{t_1}, j_{t_2}, \cdots, j_{t_s}\}$, where $1 \leq t_1 < t_2 < \cdots < t_s \leq T'$; recall that $C_0$ is the initial set of clients given at the beginning of the stage. We shall bound $\sum_{s' = 1}^s\Delta_{t_{s'}}$ in terms of the cost of the star $(i, C' \setminus C_0)$. By the rule specified in $\mathsf{initial\mhyphen connect}$, we have $ \Delta_{t_1 }\le f_i + d(i, j_{t_1})$. Now focus on any integer $k \in [2, s]$. Before Step~\ref{step:ofl-settle-down} at time $t_k$, no $\Big(\phi:= \frac{ \epsilon'\cdot \mathsf{cost}(S, \sigma) }{\alpha_{\mathsf{FL}}|C_{t_k-1}|} \leq \frac{O(\epsilon')\cdot\mathsf{opt}_{t_k-1}}{t_k-1} \leq \frac{O(\epsilon')\cdot\mathsf{opt}_{T'}}{t_k-1} \Big)$-efficient operation that opens $i$ is available. Thus, we can apply Lemma~\ref{lemma:helper-star} on $i$, $\tilde C = \{j_{t_1}, j_{t_2}, \cdots, j_{t_{k-1}}\}$ and $\phi$ to conclude that before Step~\ref{step:ofl-settle-down}, we have \begin{align*} d(i,S)\leq \frac{f_i + 2\cdot \sum_{k'=1}^{k-1 } d(i, j_{t_{k'} }) }{k-1}+ \frac{ O(\epsilon') \cdot \mathsf{opt}_{T'}}{t_k-1}. \end{align*}
In $\mathsf{initial\mhyphen connect}(j_{t_k})$, we have the option of connecting $j_{t_k}$ to its nearest open facility. Thus, we have \begin{align*} \Delta_{t_{k} } \le d(i, S) + d(i, j_{t_{k}} ) &\le \frac{f_i + 2\cdot \sum_{k'=1}^{k-1 } d(i, j_{t_{k'} }) }{k-1}+ \frac{ O(\epsilon') \cdot \mathsf{opt}_{T'}}{t_k-1 } + d(i, j_{t_{k}} ). \end{align*} We now sum up the above inequality over all $k \in [2, s]$, together with the bound $\Delta_{t_1}\leq f_i + d(i, j_{t_1})$. We get \begin{align} \sum_{k=1}^s \Delta_{t_{k}} \leq O(\log s)\left(f_i + \sum_{k'=1}^sd(i, j_{t_{k'}})\right) + O(\epsilon')\sum_{k=2}^s\frac{\mathsf{opt}_{T'}}{t_k-1}. \label{inequ:each-star} \end{align} To see the above inequality, it suffices to consider the coefficients of $f_i$ and of the $d(i, j_{t_{k'}})$'s on the right-hand side. The coefficient of $f_i$ is at most $1 + \frac11 + \frac12 + \cdots + \frac1{s-1} = O(\log s)$; the coefficient of each $d(i, j_{t_{k'}})$ is at most $ 1 + \frac{2}{k'} + \frac{2}{k'+1} + \cdots +\frac{2}{s-1} = O(\log s)$. We now take the sum of \eqref{inequ:each-star} over all stars $(i, C')$ in the optimum solution for $C_{T'}$. The sum of the first terms on the right side of \eqref{inequ:each-star} is $O(\log T')\mathsf{opt}_{T'}$, since $f_i + \sum_{k'=1}^sd(i, j_{t_{k'}})$ is exactly the cost of the star $(i, C' \setminus C_0)$, which is at most the cost of the star $(i, C')$. The sum of the second terms is $O(\epsilon'\log T')\cdot \mathsf{opt}_{T'}$, since the integers $t_k-1$, over all stars $(i, C')$ and all $k \geq 2$, are positive and distinct. Thus overall, we have $\sum_{t = 1}^{T'} \Delta_t \leq O(\log T')\mathsf{opt}_{T'}$. \end{proof}

With Lemma~\ref{lmm:ofl-delta-cost-bound}, we can now bound the amortized recourse of one stage. In time $t$, $\mathsf{cost}(S, \sigma)$ first increases by $\Delta_t$ in Step~\ref{step:ofl-settle-down}. After that, it decreases by at least $\frac{\epsilon'\mathsf{cost}(S, \sigma)}{\alpha_{\mathsf{FL}}|C|} \geq \frac{\epsilon'\mathsf{opt}_t}{\alpha_{\mathsf{FL}}|C|} \geq \frac{\epsilon'\mathsf{opt}_t}{\alpha_{\mathsf{FL}}|C_T|}$ for every reconnection we make. Let $\Phi_{T'} = \sum_{t = 1}^{T'}\Delta_t$; Lemma~\ref{lmm:ofl-delta-cost-bound} says $\Phi_t \leq \alpha \mathsf{opt}_{t}$ for some $\alpha = O(\log T)$ and every $t \in [T]$. Noticing that $(\mathsf{opt}_t)_{t \in [T]}$ is a non-decreasing sequence, the total number of reconnections is at most \begin{align*} &\frac{\textsf{init}}{\epsilon'\cdot\mathsf{opt}_1/(\alpha_{\mathsf{FL}}|C_T|)} + \sum_{t=1}^T\frac{\Delta_t}{\epsilon' \cdot \mathsf{opt}_{t}/(\alpha_{\mathsf{FL}}|C_T|)} = \frac{\alpha_{\mathsf{FL}}|C_T|}{\epsilon'} \left( \frac{\textsf{init}}{\mathsf{opt}_1} + \sum_{t = 1}^{T-1} \frac{\Delta_t}{\mathsf{opt}_{t}} + \frac{\Delta_T}{\mathsf{opt}_T}\right). \end{align*} Notice that $\mathsf{init} \leq O(1)\mathsf{opt}_0 \leq O(1)\mathsf{opt}_1$.
Applying Lemma~\ref{lemma:helper-sum-b/a} with $T$ replaced by $T-1$, $b_t = \Delta_t, B_t = \Phi_t$ and $a_t = \mathsf{opt}_{t}$ for every $t$, we have that $\sum_{t=1}^{T-1}\frac{\Delta_t}{\mathsf{opt}_{t}} \leq \alpha \left(\ln\frac{\mathsf{opt}_{T-1}}{\mathsf{opt}_1} + 1\right) = O\left(\log T\log\frac1{\epsilon'}\right)$, since we have $\mathsf{opt}_{T-1} \leq O(1/\epsilon')\cdot\mathsf{opt}_1$. Notice that $\Delta_T \leq \mathsf{opt}_T$ since $\mathsf{opt}_T \geq \min_{i \in F}(f_i + d(i, j_T)) \geq \Delta_T$. So, the total number of reconnections is at most $O\left(\frac{\log T}{\epsilon'}\log\frac1{\epsilon'}\right)\cdot|C_T|$. The amortized recourse per client is $O\left(\frac{\log T}{\epsilon'}\log\frac1{\epsilon'}\right) \leq O\left(\frac{\log n}{\epsilon'}\log\frac1{\epsilon'}\right)$, where in the amortization we only count the clients involved in the stage. Recall that $n$ is the total number of clients that have arrived. As each client appears in at most 2 stages, the overall amortized recourse is $O\left(\frac{\log n}{\epsilon'}\log\frac1{\epsilon'}\right)$.

Finally, we consider the loss in the approximation ratio due to the freezing of clients. Suppose we are in the $p$-th stage. Then the clients that arrived at or before the $(p-2)$-th stage have been frozen and removed. Let $\overline{\mathsf{opt}}$ be the cost of the optimum solution for all clients that arrived at or before the $(p-1)$-th stage. Then the frozen facilities and clients have cost at most $\overline\mathsf{opt} \cdot O\left(\epsilon' + \epsilon'^2 + \epsilon'^3 + \cdots \right) = O(\epsilon')\overline{\mathsf{opt}}$. At any time in the $p$-th stage, the optimum solution taking all arrived clients into consideration has cost $\overline\mathsf{opt}' \geq \overline\mathsf{opt}$, and our solution has cost at most $(\alpha_{\mathsf{FL}} + O(\epsilon'))\overline\mathsf{opt}'$ without considering the frozen clients and facilities. Thus, our solution still has approximation ratio $\frac{(\alpha_{\mathsf{FL}} + O(\epsilon'))\overline\mathsf{opt}' + O(\epsilon')\overline\mathsf{opt}}{\overline\mathsf{opt}'} = \alpha_{\mathsf{FL}} + O(\epsilon')$ when taking the frozen clients into consideration.
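For intuition, the amortization inequality supplied by Lemma~\ref{lemma:helper-sum-b/a}, in the form it is applied above and in Section~\ref{sec:dfl} (prefix sums $B_t \leq \alpha a_t$ for a non-decreasing positive sequence $a_t$ imply $\sum_t b_t/a_t \leq \alpha(\ln\frac{a_T}{a_1} + 1)$), can be checked numerically. The following self-contained Python snippet is our own illustration and not part of the proof.
\begin{verbatim}
import math, random

random.seed(0)
for _ in range(1000):
    T = random.randint(2, 50)
    a = sorted(random.uniform(1, 100) for _ in range(T))  # non-decreasing
    alpha = random.uniform(0.5, 5)
    # draw nonnegative b_t, then rescale so every prefix sum fits under
    # alpha * a_t
    b = [random.uniform(0, 1) for _ in range(T)]
    scale = min(alpha * a[t] / max(sum(b[:t + 1]), 1e-12) for t in range(T))
    b = [scale * x for x in b]
    lhs = sum(b[t] / a[t] for t in range(T))
    rhs = alpha * (math.log(a[-1] / a[0]) + 1)
    assert lhs <= rhs + 1e-9, (lhs, rhs)
print("inequality held on all random trials")
\end{verbatim}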
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}

Electric charge conservation and the masslessness of the photon are two fundamental ingredients of the Standard Model. They have been tested experimentally many times with high precision, and at present we have no evidence whatsoever that would question their validity. Yet, over the past two decades there has been considerable interest in constructing and analysing models in which one or both of the above postulates do not hold true \cite{GN,we1,okun,we2,s,ts,Qnoncons,mtt,mn}. The purpose of these works was to understand better the reasons why the electric charge is conserved and the photon is massless. Also, without these works it would be very hard to think of new experimental ways of testing these laws. One of the important discoveries made in those works was the close relation between the two ideas: electric charge (non)conservation and electric charge (de)quantization \cite{we1,we2,Mel}. All previous works on the subject have shared one common property: it was impossible to violate the electric charge conservation without giving the photon a (tiny) mass at the same time. This is not surprising at all if we consider the Maxwell equations of classical electrodynamics. Since the observational limit on the photon mass is very tight (it has to be less than $ 10^{-24} $ GeV or even $ 10^{-36} $ GeV \cite{partdata}), model-building is severely restricted. In this paper, our question is: can we make a model without that undesirable property? In other words, can one modify the Standard Model in such a way that the electric charge is not conserved but the photon is exactly massless? Based on all our previous experience with models of that kind, the answer would seem to be an almost definite no. However, in the present work we show how to construct a model in which the electric charge is not conserved but the photon {\em is massless at the tree level\/}.\footnote{The important further questions of whether this property survives at higher orders of perturbation theory and whether the model is renormalizable are left open.} Briefly speaking, the main idea is as follows. We point out that the photon mass in the most general case gets two contributions: the first from the mass term of the U(1) field (usually denoted by B) and the second from the spontaneous breaking of the electromagnetic symmetry. By an appropriate choice of parameters we can ensure that these two contributions cancel each other, so that the photon remains massless (at least, at the tree level) despite the fact that the electric charge is not conserved. The plan of the work is this: in Section 2 we reproduce some familiar formulas of the Standard Model in order to establish our notation and to make the comparison with the further material easier. Section 3 deals with the case when the Standard Model is extended by adding a hard mass term for the U(1) gauge field. In Section 4 we add to the model of Section 3 an electrically charged scalar singlet with a non-zero vacuum expectation, thus violating spontaneously the conservation of electric charge. In Section 5 we substitute the scalar singlet by a scalar doublet (with an electric charge violating vacuum average), leaving the rest of the model the same as in Section 4. Our discussion and conclusions are contained in Section 6.

\section{Standard Model}

We shall restrict ourselves to only one generation of quarks and leptons (the addition of other generations would only clutter the notation without giving any new insights).
The part of the total Lagrangian which is relevant for our purposes is this \begin{eqnarray} {\cal L}_0 &=& {\cal L}_l + {\cal L}_q + {\cal L}_s\\ {\cal L}_l &=& \bar{L} (i \partial + g {\tau^a \over 2} A^a -{g' \over 2} B) L + \bar{e}_R (i \partial -g' B) e_R,\\ {\cal L}_q &=& \bar{Q} (i \partial + g {\tau^a \over 2} A^a +{g' \over 6} B) Q + \bar{u}_R (i \partial +{2g' \over 3} B) u_R + \bar{d}_R (i \partial -{g' \over 3} B) d_R,\\ {\cal L}_s &=& |(\partial_{\mu} -ig {\tau^a \over 2} A^a_{\mu} -i {g' \over 2} B_{\mu}) \phi|^2 - h(\phi^{\dagger} \phi - { 1 \over 2} v^2)^2. \end{eqnarray} where $A^a= A_{\mu}^a \gamma_{\mu}$ and $B= B_{\mu}\gamma_{\mu}$ are the $SU(2)$ and $U(1)$ gauge fields, \begin{equation} L =\left( \begin{array}{c} \nu_{eL} \\ e_L \end{array} \right) ,\;\; Q =\left( \begin{array}{c} u_{L} \\ d_L \end{array} \right) ,\;\; \phi =\left( \begin{array}{c} \phi^+ \\ \phi^0 \end{array} \right) \end{equation} After spontaneous symmetry breaking, \begin{equation} \langle \phi \rangle = {1 \over \sqrt{2}} \left( \begin{array}{c} 0 \\ v \end{array} \right) , \end{equation} we are interested in the part of the Lagrangian ${\cal L}_0$ which is quadratic in the gauge fields $A_i$ and $B$: \begin{equation} {\cal L}^{quad}_0 = {1 \over 8} v^2 [g^2(A^2_{1\mu}+A^2_{2\mu}+A^2_{3\mu}) + g'^2 B^2_{\mu} - 2 gg' A_{3\mu} B^{\mu}]. \end{equation} This gives us the following mass matrix for the pair of fields $A_3$ and $B$: \begin{equation} M^0 = \left( \begin{array}{cc} M^0_{33} & M^0_{34} \\ M^0_{43} & M^0_{44} \end{array} \right) = \left( \begin{array}{cc} {1 \over 4} g^2 v^2 & -{1 \over 4}gg' v^2 \\ -{1 \over 4}gg'v^2 & {1 \over 4} g'^2 v^2 \end{array} \right). \end{equation} Diagonalizing this mass matrix, we arrive at a pair of physical fields, $A$ and $Z$, which are identified with the photon and the Z-boson: \begin{eqnarray} A^3_{\mu} &=& Z_{\mu} \cos \theta + A_{\mu} \sin \theta \label{9}\\ B_{\mu} &=& A_{\mu} \cos \theta - Z_{\mu} \sin \theta. \label{10} \end{eqnarray} Here, the Weinberg angle is given by the standard expression: \begin{equation} \sin ^2 \theta = {g'^2 \over g^2 + g'^2}.
\end{equation} Now, changing the fields $A_3, B$ into $A, Z$ in our initial Lagrangian, we finally obtain the electromagnetic interactions of quarks and leptons: \begin{eqnarray} {\cal L}^{em} &=& {\cal L}_l^{em} + {\cal L}_q^{em} \label{12}\\ {\cal L}_l^{em} &=& A_{\mu}[{1 \over 2}(g\sin\theta - g'\cos\theta) \bar{\nu}_{L}\gamma^{\mu}{\nu}_L -{1 \over 2}(g\sin\theta + g'\cos\theta) \bar{e}_{L}\gamma^{\mu}{e}_L \nonumber\\ &&- g'\cos\theta \bar{e}_{R}\gamma^{\mu}{e}_R] \label{13} \\ {\cal L}_q^{em} &=& A_{\mu}[{1 \over 2} g\sin\theta(\bar{u}_{L}\gamma^{\mu}{u}_L - \bar{d}_{L}\gamma^{\mu}{d}_L) + g'\cos\theta({1 \over 6}\bar{u}_{L}\gamma^{\mu}{u}_L + \nonumber\\ &&{1 \over 6}\bar{d}_{L}\gamma^{\mu}{d}_L +{2 \over 3}\bar{u}_{R}\gamma^{\mu}{u}_R - {1 \over 3}\bar{d}_{R}\gamma^{\mu}{d}_R)] \label{14} \end{eqnarray} From this formula we can read off the values of the electric charges of quarks and leptons: \begin{eqnarray} Q_{\nu} &=& {1 \over 4}(g\sin\theta - g'\cos\theta)\\ Q_e &=& -{1 \over 4}g\sin\theta -{3 \over 4}g'\cos\theta\\ Q_e^5 &=& -{1 \over 4}(g\sin\theta - g'\cos\theta)\\ Q_u &=& {1 \over 4}g\sin\theta +{5 \over 12}g'\cos\theta\\ Q_u^5 &=& {1 \over 4}g\sin\theta - {1 \over 4}g'\cos\theta\\ Q_d &=& -{1 \over 4}g\sin\theta - {1 \over 12}g'\cos\theta\\ Q_d^5 &=& -{1 \over 4}g\sin\theta + {1 \over 4}g'\cos\theta \end{eqnarray} Consequently, the electric charges of the neutron and the proton are: \begin{eqnarray} Q_n &=& Q_u+2Q_d= - {1 \over 4}(g\sin\theta - g'\cos\theta)\\ Q_p &=& 2Q_u+Q_d={1 \over 4}g\sin\theta + {3 \over 4}g'\cos\theta. \label{23} \end{eqnarray} At this point one may wonder why Eqs.~(\ref{13}) to (\ref{23}) do not look very familiar. The reason is this: in writing down Eqs.~(\ref{13}) to (\ref{23}) we have not taken into account the formula \begin{equation} \label{24} g \sin \theta = g' \cos \theta, \end{equation} which holds true in the Standard Model. We did not use this equation when deriving Eqs.~(\ref{13}) to (\ref{23}) because there exists a crucial distinction between them: Eq.~(\ref{24}) will {\em not apply} in the extended models to be considered below (Sections 3, 4, 5), whereas Eqs.~(\ref{13}) to (\ref{23}) {\em will still be true} in all those models (provided one puts in the modified value for $\sin \theta$, see below). Of course, if one wants to stay within the Standard Model, then one has to put $g \sin \theta = g' \cos \theta$ in Eqs.~(\ref{13}) to (\ref{23}) to recover the standard form of the electromagnetic Lagrangian: \begin{equation} {\cal L}_{em}= A_{\mu} \sum_f Q_f \bar{f} \gamma^{\mu}f \end{equation} with the correct values of the fermion charges $Q_f$: \begin{equation} Q_{\nu}=0, \;\; Q_e=-e, \;\; Q_u={2 \over 3}e, \;\; Q_d=-{1 \over 3}e. \end{equation} Needless to say, all axial charges $Q^{5}_{i}$ vanish identically in the Standard Model.

\section{Minimal hybrid $SU(2) \times U(1)$ model}

Let us consider a model which differs from the Standard Model in only one point: its Lagrangian contains a mass term for the $U(1)$ gauge field $B$ (before spontaneous symmetry breaking): \begin{equation} {\cal L}'={\cal L}_0 +{1 \over 2}m^2 B^2_{\mu}. \end{equation} After symmetry breaking, we obtain from this Lagrangian the following mass matrix for the gauge fields: \begin{equation} M' = M^0 + \Delta M' = \left( \begin{array}{cc} M^0_{33} & M^0_{34} \\ M^0_{43} & M^0_{44} + m^2 \end{array} \right) = \left( \begin{array}{cc} {1 \over 4} g^2 v^2 & -{1 \over 4}gg' v^2 \\ -{1 \over 4}gg'v^2 & {1 \over 4} g'^2 v^2 + m^2 \end{array} \right).
\end{equation} Following the same path as in the standard case (Section 2), we diagonalize the mass matrix to obtain the physical fields. Although these fields are different from the standard fields (\ref{9}), (\ref{10}), we keep the same notation for them, $A$ and $Z$, since we have to identify them with the observable particles, the photon and the Z-boson: \begin{eqnarray} A^3_{\mu} &=& Z_{\mu} \cos \theta ' + A_{\mu} \sin \theta ' \\ B_{\mu} &=& A_{\mu} \cos \theta ' - Z_{\mu} \sin \theta '. \end{eqnarray} The gauge boson masses acquire small corrections (assuming that $m$ is small); in particular, the photon gets a non-zero mass: \begin{eqnarray} M_Z &=& {1 \over 2} \sqrt{g^2+g'^2} v + O({m^2 \over v})\\ M_{\gamma} &=& gm + O({m^2 \over v}). \label{32} \end{eqnarray} The Weinberg angle gets a small correction, too (to avoid misunderstanding, we note that we are working at the tree level throughout the paper, so the word "correction" has nothing to do with perturbation theory): \begin{equation} \sin^2\theta ' =\sin^2 \theta + {m^2 \over M_Z^2} (1 - {e^2 \over \sin^2 \theta} + e^2 - \sin^4 \theta) \approx \sin^2\theta + 0.64 {m^2 \over M_Z^2}. \end{equation} This fact leads to a drastic consequence: the electric charge non-conservation. To show that, let us find the electromagnetic part of the Lagrangian. If we compare the ways of reasoning in Sections 2 and 3, we shall see that the same formulas, Eqs.~(\ref{13}) and (\ref{14}), apply also in the present case; the only change that should be made is to replace $\sin \theta$ by $\sin \theta^{'}$, the rest of the formulas being unchanged: \begin{eqnarray} {\cal L}^{'em} &=& {\cal L}_l^{'em} + {\cal L}_q^{'em} \\ {\cal L}_l^{'em} &=& A_{\mu}[{1 \over 2}(g\sin\theta' - g'\cos\theta' )\bar{\nu}_{L}\gamma^{\mu}{\nu}_L -{1 \over 2}(g\sin\theta' + g'\cos\theta' ) \bar{e}_{L}\gamma^{\mu}{e}_L \nonumber\\ &&- g'\cos\theta' \bar{e}_{R}\gamma^{\mu}{e}_R] \label{35} \\ {\cal L}_q^{'em} &=& A_{\mu}[{1 \over 2} g\sin\theta'(\bar{u}_{L}\gamma^{\mu}{u}_L - \bar{d}_{L}\gamma^{\mu}{d}_L) + g'\cos\theta' ({1 \over 6}\bar{u}_{L}\gamma^{\mu}{u}_L +{1 \over 6}\bar{d}_{L}\gamma^{\mu}{d}_L + \nonumber\\ && {2 \over 3}\bar{u}_{R}\gamma^{\mu}{u}_R - {1 \over 3}\bar{d}_{R}\gamma^{\mu}{d}_R)] \label{36} \end{eqnarray} Based on this formula, we can arrive at an important conclusion: as soon as the equality $g \sin \theta = g' \cos \theta$ is broken, the electromagnetic current conservation is violated immediately. To avoid confusion, one essential point needs to be emphasized here. We have defined the electromagnetic current (and thereby the electric charge) as the current interacting with (i.e.\ standing in front of) the electromagnetic field $A_{\mu}$. Naturally, one can ask about the standard fermion electromagnetic current of the form \begin{equation} j_{\mu}= e(-\bar{e} \gamma_{\mu}e + {2 \over 3} \bar{u} \gamma_{\mu}u - {1 \over 3} \bar{d} \gamma_{\mu}d). \label{37} \end{equation} Although this current is still conserved in the present model, it unfortunately becomes devoid of physical meaning, because all physical processes and experiments are based on the interaction between the charges and electromagnetic fields; therefore, in the framework of the present model, we have to attach physical meaning and reserve the name "electromagnetic current" for the current of Eqs.~(\ref{35}) and (\ref{36}), rather than that of Eq.
(\ref{37}). To summarise, this theory features three fundamental deviations from the Standard Model: the massiveness of the photon, the electric charge dequantization, and the electric charge non-conservation. Now, let us discuss the experimental limits on the parameter $m$ which result from the above three features. In our case, the experimental upper bound on the photon mass gives, by far, the strongest constraint on the value of $m$. It has been established that the photon mass should be less than $ 10^{-24} $ GeV or even $ 10^{-36} $ GeV \cite{partdata}. Therefore, from Eq.~(\ref{32}) we find that the parameter $m$ cannot exceed $2 \times 10^{-24}$ GeV or $2 \times 10^{-36}$ GeV. With such small values of the parameter $m$, the charge dequantization and charge non-conservation effects are expected to be too small to be observed. For instance, the best experimental limits on electric charge dequantization are given by the following figures: neutron charge: $Q_n < 10^{-21}$ \cite{n}; charge of an atom: $Q_a < 10^{-18}$ \cite{ep}; neutrino charge: $Q_{\nu} < 10^{-13}$ \cite{brf} or $10^{-17}$ \cite{bc} (for a detailed discussion of these and other constraints, see \cite{bv}). Thus we can conclude that the upper bound on the parameter $m$ imposed by the masslessness of the photon makes all other predictions very hard to observe, which limits our interest in this model. Note that this model (with no fermions) was first suggested in Ref.~\cite{clt} under the name of "hybrid model". The authors of Ref.~\cite{clt} were motivated by the systematic search for renormalizable gauge models beyond the standard $SU(2) \times U(1)$ model. As concerns the renormalizability of the model, which is certainly a very important issue, it has been proved in Ref.~\cite{clt} that the theory possesses the property called tree unitarity, which is weaker than renormalizability. We are not aware of any work which would further address the problem of renormalizability of this type of models. Although it may appear to be of academic rather than phenomenological character, such work would certainly be very desirable because it would include or exclude a whole new class of gauge models from the set of renormalizable gauge theories. (Note that we do not share the belief that non-renormalizability of a theory automatically makes it physically uninteresting.)

\section{Hybrid model with a scalar singlet}

Let us now add to the Lagrangian of the previous section a piece containing the scalar singlet field $\phi_1$ with the electric charge $\epsilon_1$ (which coincides with the hypercharge in this case): \begin{equation} {\cal L}_1 = {\cal L}_0 + {1 \over 2}m^2B^2 + |(\partial_{\mu} -i {g' \over 2} \epsilon_1 B_{\mu}) \phi_1|^2 + P(\phi_1 , \phi). \end{equation} Now, assume that the field $\phi_1$ has a non-zero vacuum expectation value $v_1$: $\langle \phi_1 \rangle = v_1$. Then, after spontaneous symmetry breaking, the mass matrix of the system $A_3, B$ is: \begin{equation} M^1 = M^0 + \Delta M^1 = \left( \begin{array}{cc} M^0_{33} & M^0_{34} \\ M^0_{43} & M^0_{44} + m^2 +{1 \over 2} g'^2 v_1^2 \epsilon_1^2 \end{array} \right) = \left( \begin{array}{cc} {1 \over 4} g^2 v^2 & -{1 \over 4}gg' v^2 \\ -{1 \over 4}gg'v^2 & {1 \over 4} g'^2 v^2 + m^2 +{1 \over 2} g'^2 v_1^2 \epsilon_1^2 \end{array} \right).
\end{equation} Performing the diagonalization as before, we obtain the mass of the physical photon: \begin{equation} M^2_{\gamma} = g^2 (m^2 + {1 \over 2} g'^2 v_1^2 \epsilon_1^2 ) \end{equation} The formula for the photon mass (squared) consists of two contributions: the first is proportional to $m^2$ (``hard mass'') and the second is proportional to $v_1^2$ (``soft mass''). Nothing seems to prevent us from considering {\em negative} values either for $m^2$ or for $v_1^2$. Thus we are led to a very interesting possibility: to choose these two parameters in such a way that they exactly cancel each other, so that the photon remains massless\footnote{Here, we disregard a possible appearance of a Nambu-Goldstone boson. One may expect that its manifestations would be sufficiently suppressed, but even if they were not, the model could be modified in analogy with Ref.~\cite{mtt}.} (at least, at the tree level): \begin{equation} m^2 + {1 \over 2} g^{'2} v_1^2 \epsilon_1^2 = 0. \label{41} \end{equation} Note that if this condition is satisfied, the Z-boson mass becomes exactly equal to that of the standard Z-boson: \begin{equation} M_Z = {1 \over 2} \sqrt{g^2+g'^2} v . \end{equation} Now, do we obtain the electric charge non-conservation or dequantization in the fermion sector, in analogy with the result of Section 3? Unfortunately, the answer is no. The reason is this: the calculation of the Weinberg angle in this model (denoted by $\theta_1$) shows that this angle is {\em exactly equal} to the Weinberg angle of the Standard Model: \begin{equation} \sin^2 \theta_1 = \sin^2 \theta. \end{equation} Note that this exact equality has been obtained without assuming $m^2$ or $v_1^2$ to be small (but, of course, assuming that the condition of photon masslessness, Eq.~(\ref{41}), holds). From this equality it follows that the fermion electromagnetic current in this model remains exactly the same as in the Standard Model: \begin{equation} j_{\mu}= e(-\bar{e} \gamma_{\mu}e + {2 \over 3} \bar{u} \gamma_{\mu}u - {1 \over 3} \bar{d} \gamma_{\mu}d). \end{equation} In other words, any effects of the electric charge non-conservation or dequantization are absent {\em in the fermion sector}. Here we would like to stress an essential point: the absence of these effects in the fermion sector {\em does not mean} that they are absent altogether. One should not forget that giving a vacuum expectation to the charged scalar field $\phi_1$ leads to the electric charge non-conservation {\em in the scalar sector}. However, from the phenomenological point of view, these effects are much harder to observe. Such effects would be similar to those arising in a model with a charged scalar field but without the $m^2$ term. Models of this type have been considered in the literature before, and we do not intend to go into details here. To conclude this Section, we see that in the context of the present model, vanishing of the photon mass leads to vanishing effects of charge non-conservation and charge dequantization {\em in the fermion sector} (but not {\em in the scalar sector}).

\section{Hybrid model with a scalar doublet}

In the previous section we considered the model with a hard mass for the field B and a scalar {\em singlet} violating the electromagnetic U(1) symmetry. In this Section, let us change the singlet into a scalar {\em doublet}, again violating the U(1) symmetry; the rest of the model will be the same.
Thus, the Lagrangian of our new model reads: \begin{equation} {\cal L}_2 = {\cal L}_0 + {1 \over 2}m^2B^2 + |(\partial_{\mu} -ig {\tau^a \over 2} A^a_{\mu} -i {g' \over 2} (1 + \epsilon_2)B_{\mu}) \phi_2|^2 + P(\phi_2 , \phi), \end{equation} where the electric charges of the scalar doublet are: \begin{equation} Q(\phi_2) =\left( \begin{array}{c} 1+ {\epsilon_2 \over 2} \\ {\epsilon_2 \over 2} \end{array} \right). \end{equation} We break the electromagnetic symmetry by assuming \begin{equation} \langle \phi_2 \rangle = {1 \over \sqrt{2}} \left( \begin{array}{c} 0 \\ v_2 \end{array} \right). \end{equation} After the spontaneous breakdown of symmetry the mass matrix of neutral gauge fields takes the form: \begin{equation} M^{(2)} = M^0 + \Delta M^{(2)} = \left( \begin{array}{cc} M^0_{33} + {1 \over 4} g^2 v^2_2 & M^0_{34} - {1 \over 4}gg'(1 + \epsilon_2)v^2_2 \\ M^0_{43} - {1 \over 4}gg'(1 + \epsilon_2)v^2_2 & M^0_{44} + m^2 +{1 \over 4} g'^2 (1 + \epsilon_2)^2 v_2^2 \end{array} \right). \end{equation} The condition for the photon to be massless is: \begin{equation} m^2 + {1 \over 4} \epsilon_2^2 g'^2 {v^2 v^2_2 \over v^2 + v^2_2} = 0. \end{equation} From now on, we will assume that this condition is satisfied. Then, the mass of the Z-boson is given by: \begin{equation} M_Z^2 = {1 \over 4}(g^2 + g'^2)v^2 + {1 \over 4}g^2 v_2^2 + {1 \over 4}g'^2(1 + \epsilon_2)^2 v_2^2 + m^2. \end{equation} For the Weinberg angle we obtain, neglecting the terms of the order of $\epsilon_2^2$: \begin{equation} \sin^2 \theta_2 = {g'^2(v^2 + (1+ 2\epsilon_2)v_2^2) \over (g^2 + g'^2)v^2 + (g^2 + g'^2(1 + 2\epsilon_2 ))v_2^2}. \end{equation} Assuming that the vacuum expectation value of the second doublet is much smaller than that of the Higgs doublet, we can write down a simpler expression: \begin{equation} \sin^2\theta_2=\sin^2\theta(1+2\epsilon_2\cos^2\theta{v_2^2 \over v^2}), \end{equation} where $\theta$ is the Weinberg angle {\em of the Standard Model}. As before, the electromagnetic interaction is given by Eqs. (\ref{12})--(\ref{14}), in which $\sin \theta$ has to be substituted by $\sin \theta_2$: \begin{eqnarray} {\cal L}^{em}_2 &=& {\cal L}_{2l}^{em} + {\cal L}_{2q}^{em} \\ {\cal L}_{2l}^{em} &=& A_{\mu}[{1 \over 2}(g\sin\theta_2 - g'\cos\theta_2) \bar{\nu}_{L}\gamma^{\mu}{\nu}_L -{1 \over 2}(g\sin\theta_2 + g'\cos\theta_2) \bar{e}_{L}\gamma^{\mu}{e}_L \nonumber\\ && - g'\cos\theta_2 \bar{e}_{R}\gamma^{\mu}{e}_R] \\ {\cal L}_{2q}^{em} &=& A_{\mu}[{1 \over 2} g\sin\theta_2(\bar{u}_{L}\gamma^{\mu}{u}_L - \bar{d}_{L}\gamma^{\mu}{d}_L) + g'\cos\theta_2({1 \over 6}\bar{u}_{L}\gamma^{\mu}{u}_L +{1 \over 6}\bar{d}_{L}\gamma^{\mu}{d}_L + \nonumber\\ && {2 \over 3}\bar{u}_{R}\gamma^{\mu}{u}_R - {1 \over 3}\bar{d}_{R}\gamma^{\mu}{d}_R)] \end{eqnarray} We see that the charge dequantization and charge non-conservation effects are controlled by the parameter \begin{equation} \delta = g \sin \theta_2 -g'\cos \theta_2. \end{equation} This parameter measures the deviation of our theory from the Standard Model (in the latter, $g \sin \theta -g'\cos \theta =0$). Up to terms of order ${v_2^2 \over v^2}$ we have: \begin{equation} \delta=e\epsilon_2{v^2_2 \over v^2}. \end{equation} In terms of $\delta$ we can conveniently express the dequantized lepton and quark charges. The neutrino charge is: \begin{equation} Q_{\nu}= {1 \over 4} \delta. \end{equation} The axial electron charge is equal to: \begin{equation} Q^5_e= - {1 \over 4} \delta.
\end{equation} Our normalization is such that the vector electron charge coincides exactly with $-e$, without any corrections: \begin{equation} Q_e= -e. \end{equation} The vector ($Q_u$) and the axial ($Q_u^5$) charges of the u-quark are given by: \begin{eqnarray} Q_u &=& {2 \over 3}e + {1 \over 12}\delta \\ Q_u^5 &=& {1 \over 4} \delta. \end{eqnarray} The charges of the d-quark are equal to: \begin{eqnarray} Q_d &=& -{1 \over 3}e - {1 \over 6} \delta \\ Q_d^5 &=& -{1 \over 4} \delta. \end{eqnarray} Consequently, the vector charge of the neutron is: \begin{equation} Q_n = Q_u + 2Q_d = -{1 \over 4} \delta. \end{equation} The vector charge of the proton equals \begin{equation} Q_p = 2Q_u + Q_d = e. \end{equation} Therefore, although the electric charge is dequantized in this model, the following relations between the fermion charges still hold: \begin{equation} Q_n + Q_{\nu} =0 ; \;\;\; Q_p + Q_e =0. \end{equation} From various experiments testing the validity of electric charge quantization we can infer the following upper bounds on the parameter $\delta$. From the upper bound (\cite{brf,bc}) on the (electron) neutrino charge: \begin{equation} \delta < 4 \times 10^{-13} \;\;{\rm or}\;\; 4 \times 10^{-17}. \end{equation} From the constraint (\cite{n}) on the neutron electric charge: \begin{equation} \delta < 4 \times 10^{-21} . \end{equation} From the tests (\cite{ep}) of the neutrality of atoms: \begin{equation} \delta < 4 \times 10^{-18} . \end{equation} \section{Conclusion and outlook} To conclude, we have studied a series of hybrid $SU(2) \times U(1)$ models in which the symmetry is broken not only spontaneously but also explicitly, by adding a hard mass term for the $U(1)$ field to the Lagrangian. We have studied the issue of electric charge non-conservation and dequantization in these models. For this purpose we have constructed and analyzed a series of hybrid models with different scalar contents. In the minimal hybrid model the electric charge is not conserved (even though there are no charge-violating vacuum expectation values). The reason is that the mixing angle between the photon and the Z-boson gets changed as compared with the Standard Model, so that the electromagnetic current receives an additional non-conserved contribution. The same reason accounts for the fact that the electric charges become slightly different from their standard values (i.e., charge dequantization occurs). In this minimal model the photon acquires a non-zero mass, which puts a tight limit on the allowed magnitude of the hard mass term for the $U(1)$ field. Next, we added to the minimal hybrid model a scalar singlet with a non-zero electric charge. We then assumed that this singlet has a non-vanishing vacuum expectation value, thus spontaneously violating the electromagnetic symmetry. We showed that the photon mass (at the tree level) receives two contributions: the first from the hard mass term of the $U(1)$ field, and the second from the scalar vacuum expectation value. The parameters can be chosen such that these two terms cancel against each other, so that the photon remains massless (at the tree level). If this is done, the Weinberg angle is not changed, therefore the electric charges of the fermions remain the same and there is no electric charge non-conservation in the fermion sector (but in the scalar sector the electric charge is not conserved). Finally, we presented a hybrid model with an extra scalar doublet (in addition to the Higgs doublet of the Standard Model).
Again, we assumed that the doublet spontaneously violates the electromagnetic symmetry. In analogy with the previous model, the photon mass again consists of two terms: one due to the $U(1)$ hard mass term and the other due to the vacuum expectation value of the new scalar doublet. Again, we can arrange for these two terms to cancel. Here, however, the difference from the scalar-singlet model appears, and an interesting consequence arises. Although the photon mass is zero (at the tree level), the Weinberg angle {\em does\/} get modified, so that the electric charges of the fermions become {\em dequantized\/} and, moreover, the electromagnetic current (defined as the current interacting with the electromagnetic field) is {\em no longer conserved\/}. From the results of the experiments measuring the electric charges of the neutron and atoms, and from the astrophysical limits on the neutrino electric charge, we have derived upper bounds on the parameter that governs the effects of charge dequantization and non-conservation in our model. The next important step would be to consider the hybrid models beyond the tree level and address the problem of renormalizability of such models. The authors are grateful to R.~Foot and R.~Volkas for stimulating discussions. This work was supported in part by the Australian Research Council.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \thispagestyle{empty} In 1924 Banach and Tarski~\cite{Banach-Tarski-original} decompose a solid ball into five pieces and reassemble them into two balls using rotations. This is now known as the Banach-Tarski paradox. Von Neumann~\cite{Neumann1929} observes that the reason for this phenomenon is that the group of rotations of $\R^3$ admits a free subgroup. He introduces the concept of amenable groups. Tarski~\cite{Tarski1938} later proves amenability to be the only obstruction to the existence of ``paradoxical'' decompositions (like the one in Banach-Tarski's article~\cite{Banach-Tarski-original}) of the action of the group on itself by multiplication, as well as of any free action of the group. One way to prove the result of Banach-Tarski is to see it as an almost everywhere free action of $SO_3(\R)$ and correct for the countable set where it is not (see e.g. Wagon~\cite[Cor.~3.10]{Banach-Tarski}). The original definition of amenability of a group $G$ is the existence of an invariant mean. A mean is a normalised positive linear functional on $l^\infty(G)$. It is called invariant if it is preserved by translations of the argument. Groups that contain free subgroups are non-amenable. It is proven by Ol'shanskii in 1980~\cite{Olshanskii1980} that it is also possible for a non-amenable group not to have a free subgroup. Adyan~\cite{MR682486} shows in 1982 that all Burnside groups of a large enough odd exponent (which are known to be infinite by a result of Novikov and Adyan from 1968~\cite{adyannovakov}) are non-amenable. Clearly they do not contain free subgroups. For more information and properties of amenability, see~\cite{bartholdi, article, greenleaf, Banach-Tarski}. It is worth noting that despite the existence of a large number of equivalent definitions of amenability, to our knowledge until recently all examples of non-amenable groups without free subgroups were proven (Ol'shanskii~\cite{Olshanskii1980}, Adyan~\cite{MR682486}, Ol'shanskii~\cite{0036-0279-35-4-L13}, Ol'shanskii-Sapir~\cite{finpresnonam}) to be such using the co-growth criterion. See Grigorchuk~\cite{Gri77} for the announcement of the criterion, or~\cite{grigcogreng} for a full proof. For other proofs, see Cohen~\cite{cogr1} and Szwarc~\cite{cogr3}. The criterion is closely related to Kesten's criterion in terms of the probability of return to the origin~\cite{kesten}. Monod constructs in \cite{h-main} a class of groups of piecewise projective homeomorphisms $H(A)$ (where $A$ is a subring of $\R$). By comparing the action of $H(A)$ on the projective line $\P(\R)$ with that of $PSL_2(A)$, he proves that it is non-amenable for $A\neq\Z$ and without free subgroups for all $A$. This can be used to obtain non-amenable subgroups with additional properties. In particular, Lodha~\cite{Lodha2014} proves that a certain subgroup of $H(\Z[\frac{\sqrt{2}}{2}])$ is of type $F_\infty$ (in other words, such that there is a connected CW complex $X$ which is aspherical and has finitely many cells in each dimension such that $\pi_1(X)$ is isomorphic to the group). That subgroup was constructed earlier by Moore and Lodha~\cite{lhodafinpres} as an example of a group that is non-amenable, without free subgroups and finitely presented. It has three generators and only $9$ defining relations (compare to the previous example by Ol'shanskii-Sapir~\cite{finpresnonam} with $10^{200}$ relations). This subgroup is the first example of a group of type $F_\infty$ that is non-amenable and without a free subgroup.
Later, Lodha~\cite{lhoda-tarski} also proves that the Tarski numbers (the minimal number of pieces needed for a paradoxical decomposition) of all the groups of piecewise projective homeomorphisms are bounded by $25$. It is not known whether the group $H(\Z)$ of piecewise projective homeomorphisms in the case $A=\Z$ defined by Monod is amenable. One of the equivalent conditions for amenability is the existence of a non-degenerate measure with trivial Poisson boundary (see Kaimanovich-Vershik~\cite{kaimpoisson}, Rosenblatt~\cite{rosenblatt}). This measure can be chosen to be symmetric. It is also known that amenable groups can have measures with non-trivial boundary. In a recent result, Frisch-Hartman-Tamuz-Vahidi-Ferdowski~\cite{choquet-deny} describe an algebraic necessary and sufficient condition for a group to admit a measure with non-trivial boundary. In the present paper we give sufficient conditions for non-triviality of the Poisson boundary on $H(\Z)$. There are several equivalent ways to define the Poisson boundary (see Kaimanovich-Vershik~\cite{kaimpoisson}). Consider a measure $\mu$ on a group $G$ and the random walk it induces by multiplication on the left. It determines an associated Markov measure $P$ on the trajectory space $G^\N$. \begin{defi}\label{poisson} Consider the following equivalence relation on $G^\N$: two trajectories $(x_0,x_1,\dots)$ and $(y_0,y_1,\dots)$ are equivalent if and only if there exist $i_0\in\N$ and $k\in\Z$ such that for every $i>i_0$ we have $x_i=y_{i+k}$. In other words, the trajectories are equivalent if they coincide after a certain time instant, up to a time shift. The \textit{Poisson boundary} (also called \textit{Poisson-Furstenberg boundary}) of $\mu$ on $G$ is the quotient of $(G^\N,P)$ by the measurable hull of this equivalence relation. \end{defi} Note that if the support of the measure does not generate $G$, in which case we say that the measure is \textit{degenerate}, this defines the boundary on the subgroup generated by the support of the measure rather than on $G$. For a more recent survey of results concerning the Poisson boundary, see~\cite{Erschler2010}. Kim, Koberda and Lodha have shown in~\cite{chainhomeo} that $H(\Z)$ contains Thompson's group $F$ as a subgroup. This group is the group of orientation-preserving piecewise linear homeomorphisms of the closed unit interval with dyadic slopes and a finite number of break points, all break points being dyadic numbers (see Cannon-Floyd-Parry~\cite{thomsoncfp} or Meier's book~\cite[Ch.~10]{meier} for details and properties). It is not known whether it is amenable, which is a celebrated open question. Kaimanovich~\cite{kaimanovichthompson} and Mishchenko~\cite{mischenko2015} prove that the Poisson boundary on $F$ is non-trivial for finitely supported non-degenerate measures. They study the induced walk on the dyadic numbers in their proofs. However, there exist non-degenerate symmetric measures on $F$ for which the induced walk has trivial boundary, as proven by Juschenko and Zheng~\cite{juszheng}. The results of the current article are inspired by the paper of Kaimanovich. It is not hard to prove that $H(\Z)$ is not finitely generated (see Remark~\ref{fingen}), so we will consider measures whose support is not necessarily finite. Our main result is as follows. Consider the group $H(\Z)$ of piecewise projective homeomorphisms, as defined by Monod~\cite{h-main}, in the case $A=\Z$.
For $g\in H(\Z)$ denote by $Br(g)$ the number of \textit{break points} of $g$, that is, the endpoints of the pieces in its piecewise definition. We will say that a measure $\mu$ on a subgroup of $H(\Z)$ has \textit{finite first break moment} if the expected number of break points $\mathbb{E}[Br]$ is finite. A group $H$ is called \textit{locally solvable} if all of its finitely generated subgroups are solvable. Then: \begin{thm}\label{main} For any subgroup $H$ of $H(\Z)$ which is not locally solvable and any measure $\mu$ on $H$ with finite first break moment $\mathbb{E}[Br]$ and such that the support of $\mu$ generates $H$ as a semigroup, the Poisson boundary of $(H,\mu)$ is non-trivial. \end{thm} For a measure $\mu$ on a finitely generated group, we say that $\mu$ has \textit{finite first moment} if the word length over any finite generating set has finite first moment with respect to $\mu$. This is well defined, as word lengths over different finite generating sets are bilipschitz equivalent; in particular, the finiteness of the first moment does not depend on the choice of generating set. We remark (see Remark~\ref{brfin}) that any measure $\mu$ on a finitely generated subgroup $H$ of $H(\Z)$ that has finite first moment also has a finite expected number of break points. Therefore, by Theorem~\ref{main}, if $\mu$ is a measure on a non-solvable finitely generated subgroup $H$ such that the support of $\mu$ generates $H$ as a semigroup and $\mu$ has finite first moment, the Poisson boundary of $(H,\mu)$ is non-trivial. Furthermore, in the other case we will show (Lemma~\ref{mineps}) that so long as $H$ is not abelian, we can construct a symmetric non-degenerate measure with finite $(1-\varepsilon)$-moment and non-trivial Poisson boundary. The structure of the paper is as follows. In Section~\ref{prelim}, given a fixed $s\in\R$, to every element $g\in H(\Z)$ we associate (see Definition~\ref{confdef}) a configuration $C_g$. Each configuration is a function from the orbit of $s$ into $\Z$. The value of the configuration $C_g$ at a given point of the orbit of $s$ represents the change of slope at that point of the element $g$ to which it is associated. There is a natural quotient map from the boundary on the group to the boundary on the configuration space. The central idea of the paper is to show that, under certain conditions, the value of the configuration at a given point of the orbit of $s$ almost always stabilises. If that value is not fixed, this implies non-triviality of the boundary on the configuration space, and thus non-triviality of the Poisson boundary on the group. These arguments bear resemblance to those of Kaimanovich's article on Thompson's group~\cite{kaimanovichthompson}, but we would like to point out that the action on $\R$ considered in the present article is different. In Section~\ref{sectfour} we obtain the first result on non-triviality of the Poisson boundary (see Lemma~\ref{constr}). Measures satisfying the assumptions of that lemma do not necessarily have finite first break moment. In Section~\ref{thompsect} we study copies of Thompson's group $F$ in $H(\Z)$. Building on those results, in Section~\ref{schreier} we obtain transience results (see Lemma~\ref{algtho}) which we will need to prove Theorem~\ref{main}. In Section~\ref{anothersuff} we prove Lemma~\ref{ltwo}, which is the main tool for proving non-triviality of the Poisson boundary.
In the particular case of Thompson's group, the lemma already allows us to answer a question by Kaimanovich~\cite[7.A]{kaimanovichthompson}: \begin{cor}\label{finfirstthomp} Any measure on Thompson's group $F$ that has finite first moment and the support of which generates $F$ as a semigroup has non-trivial Poisson boundary. \end{cor} We mention that the arguments of Lemma~\ref{ltwo} could also be applied to the action and configurations considered in Kaimanovich's article, giving an alternative proof of the corollary. Combining the lemma with the transience results from Section~\ref{schreier}, we obtain non-triviality of the Poisson boundary under certain conditions (see Lemma~\ref{algone}), which we will use to prove the main result. As the negation of those conditions passes to subgroups, it suffices to show that if $H$ is finitely generated and does not satisfy them, it is then solvable, which we do in Section~\ref{algsec}. Remark that the theorem generalises the result of Corollary~\ref{finfirstthomp}. In Section~\ref{last} we give an additional remark on the case of finite $(1-\varepsilon)$-moment. \section*{Acknowledgements} I would like to offer my special thanks to my thesis director, Anna Erschler. Discussions with Laurent Bartholdi have been extremely helpful, and they inspired me to consider the case of finite first moment. I would also like to thank Vadim Kaimanovich for his remarks on the preliminary version of the paper. I am also grateful to Dmytro Savchuk for the question that led to Remark~\ref{grapham}. \section{Preliminaries}\label{prelim1} \subsection{$\Pl$ and $H(\Z)$} The projective linear group $PSL_2(\R)$ is defined as $SL_2(\R)/\{Id,-Id\}$, which is the natural quotient that describes the linear actions on the projective space $\P(\R)$. As the latter can be defined as $\mathbb{S}/(x\sim-x)$, we can think of it as a circle when studying the dynamics of the action of the projective group. Remark that it is commonly understood as the boundary of the hyperbolic plane. In this paper we will not be interested in the interior of the hyperbolic plane, as the groups $H(A)$ are defined piecewise on $\P(\R)$. An element $h\in PSL_2(\R)$ is called: \begin{enumerate} \item \textbf{Hyperbolic} if $|tr(h)|>2$ (or equivalently, $tr(h)^2-4>0$). In this case a calculation shows that $h$ has two fixed points in $\P(\R)$. One of the points is attractive and the other repulsive for the dynamics of $h$, meaning that starting from any point and applying $h$ (respectively $h^{-1}$) we approach the attractive (resp. the repulsive) fixed point. \item \textbf{Parabolic} if $|tr(h)|=2$. In this case $h$ has exactly one ``double'' fixed point. We can identify $\P(\R)$ with $\R\cup\{\infty\}$ in such a way that the fixed point is $\infty$, in which case $h$ becomes a translation on $\R$. We will go into detail about the identification below. \item \textbf{Elliptic} if $|tr(h)|<2$. Then $h$ has no fixed points in $\P(\R)$ and is conjugate to a rotation. If we consider it as an element of $PSL_2(\C)$, we can see that it has two fixed points in $\P(\C)$ that are outside $\P(\R)$. \end{enumerate} Consider an element $\begin{pmatrix}x\\y\end{pmatrix}\in\R^2\setminus\{0\}$. If $y\neq 0$, identify it with $\frac{x}{y}$, otherwise with $\infty$. This identification clearly passes to $\P(\R)$, and the action of $PSL_2(\R)$ becomes $\begin{pmatrix}a&b\\c&d\end{pmatrix}. x=\frac{ax+b}{cx+d}$.
The conventions for infinity are $\begin{pmatrix}a&b\\c&d\end{pmatrix}(\infty)=\frac{a}{c}$ if $c\neq0$ and $\infty$ otherwise, and if $c\neq 0$, $\begin{pmatrix}a&b\\c&d\end{pmatrix}.(-\frac{d}{c})=\infty$. Note that by conjugation we can choose any point to be infinity. Let us now look into the groups defined by Monod~\cite{h-main}. We define $\Gamma$ as the group of all homeomorphisms of $\R\cup\{\infty\}$ that are piecewise in $PSL_2(\R)$ with a finite number of pieces. Take a subring $A$ of $\R$. We define $\Gamma(A)$ to be the subgroup of $\Gamma$ whose elements are piecewise in $PSL_2(A)$, with the extremities of the pieces in $P_A$, the set of fixed points of hyperbolic elements of $PSL_2(A)$. \begin{defi} The group of piecewise projective homeomorphisms $H(A)$ is the subgroup of $\Gamma(A)$ formed by the elements that fix infinity. \end{defi} It can be thought of as a group of homeomorphisms of the real line, and we will use the same notation in both cases. We will write $G=H(\Z)$ for brevity. Note in particular that $\infty\notin P_\Z$. This means that the germs around $+\infty$ and $-\infty$ are the same for every element of $G$. The only elements in $\Pl$ that fix infinity are \begin{equation}\label{agrp} \left\{\left(\alpha_n=\begin{pmatrix}1&n\\0&1\end{pmatrix}\right)_{n\in\Z}\right\}= G\cap \Pl. \end{equation} Fix $g\in G$ and let its germ at infinity (on either side) be $\alpha_n$. Then $g\alpha_{-n}$ has finite support. The set $\bar{G}\subset G$ of elements that have finite support is clearly a subgroup, and therefore, denoting $\A=\{\alpha_n,n\in\Z\}$, we have $$G=\bar{G}\A.$$ For the purposes of this article, we also need to define: \begin{defi}\label{tildeg} Consider the elements of $\Gamma$ that fix infinity and are piecewise in $\Pl$. We call the group formed by those elements the \textit{piecewise $\Pl$ group}, and denote it by $\G$. \end{defi} Remark that at an extremity $\gamma$ of the piecewise definition of an element $g\in\G$, the left and right germs $g(\gamma-0)$ and $g(\gamma+0)$ have a common fixed point. Then $g(\gamma+0)^{-1}g(\gamma-0)\in \Pl$ fixes $\gamma$. Therefore the extremities are in $P_\Z\cup\Q\cup\{\infty\}$, that is, in the set of fixed points of arbitrary (not necessarily hyperbolic) elements of $\Pl$. In other words, the only difference between $\G$ and $G=H(\Z)$ is that $\G$ is allowed to have break points in $\Q\cup\{\infty\}$, that is, at fixed points of parabolic elements. Clearly, $G\leq\G$. This allows us to restrict elements, which we will need in Section~\ref{algsec}: \begin{defi}\label{restr} Let $f\in\G$, and $a,b\in\R$ such that $f(a)=a$ and $f(b)=b$. The function $f\restriction_{(a,b)}$ defined by $f\restriction_{(a,b)}(x)=f(x)$ for $x\in(a,b)$ and $f\restriction_{(a,b)}(x)=x$ otherwise is called a restriction. \end{defi} Remark that $f\restriction_{(a,b)}\in\G$. The idea of this definition is that we extend the restricted function by the identity to obtain an element of $\G$. The subject of this paper is $G$; however, in order to be able to apply results from previous sections in Section~\ref{algsec}, we will prove several lemmas for $\G$. The corresponding results for $G$ then follow immediately from the fact that it is a subgroup. \subsection{Random walks} Throughout this article, for a measure $\mu$ on a group $H$ we will consider the random walk by multiplication on the left. That is the walk $(x_n)_{n\in\N}$ where $x_{n+1}=y_nx_n$ and the increments $y_n$ are sampled by $\mu$.
In other words, it is the random walk defined by the kernel $p(x,y)=\mu(yx^{-1})$. Remark that for walks on groups it is standard to consider the walk by multiplication on the right. In this article the group elements are homeomorphisms of $\R$, and as such they have a natural action on the left on elements of $\R$, which is $(f,x)\mapsto f(x)$. We will use Definition~\ref{poisson} as the definition of the Poisson boundary. For completeness' sake we also mention its description in terms of harmonic functions. For a group $H$ and a probability measure $\mu$ on $H$ we say that a function $f$ on $H$ is \textit{harmonic} if for every $g\in H$, $f(g)=\sum_{h\in H}f(hg)\mu(h)$. For a non-degenerate measure, the $L^\infty$ space on the Poisson boundary is isomorphic to the space of bounded harmonic functions on $H$, and the exact form of that isomorphism is given by a classical result called the \textit{Poisson formula}. In particular, non-triviality of the Poisson boundary is equivalent to the existence of non-constant bounded harmonic functions. We recall the entropy criterion for triviality of the Poisson boundary. \begin{defi} Consider two measures $\mu$ and $\lambda$ on a discrete group $H$. We denote by $\mu*\lambda$ their \textit{convolution}, defined as the image of their product by the multiplication function. Specifically: $$\mu*\lambda(A)=\int\mu(Ah^{-1})d\lambda(h).$$ \end{defi} Remark that $\mu^{*n}$ gives the probability distribution for $n$ steps of the walk, starting at the neutral element. For a probability measure $\mu$ on a countable group $H$ we denote by $H(\mu)$ its \textit{entropy}, defined by $$H(\mu)=\sum_{g\in H}-\mu(g)\log{\mu(g)}.$$ One of the main properties of entropy is that the entropy of a product of measures is not greater than the sum of their entropies. Combining that with the fact that taking the image of a measure by a function does not increase its entropy, we obtain $H(\mu*\lambda)\leq H(\mu) +H(\lambda)$. Avez~\cite{avez72} introduces the following definition: \begin{defi} The \textit{entropy of random walk} (also called \textit{asymptotic entropy}) of a measure $\mu$ on a group $H$ is defined as $\lim_{n\rightarrow\infty}\frac{H(\mu^{*n})}{n}$. \end{defi} \begin{thm}[Entropy Criterion (Kaimanovich-Vershik~\cite{kaimpoisson}, Derriennic~\cite{derast})]\label{entropy} Let $H$ be a countable group and $\mu$ a non-degenerate probability measure on $H$ with finite entropy. Then the Poisson boundary of $(H,\mu)$ is trivial if and only if the asymptotic entropy of $\mu$ is equal to zero. \end{thm} \section{Some properties of groups of piecewise projective homeomorphisms}\label{prelim} In Subsection~\ref{slopechange} we study $P_\Z$ and the group action locally around its points. In Subsection~\ref{confsect}, using the results from the first subsection, to each element $g\in\G$ we associate a configuration $C_g$. We then also describe how to construct an element with a specific associated configuration. \subsection{Slope change points in $G=H(\Z)$}\label{slopechange} Let $g$ be a hyperbolic element of $\Pl$. Let it be represented by $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ and denote by $tr(g)=a+d$ its trace. Then its fixed points are $\frac{a-d\pm\sqrt{tr(g)^2-4}}{2c}$. As the trace is an integer and greater than $2$ in absolute value, this number is never rational. Furthermore, it is worth noting that $\Q(\sqrt{tr(g)^2-4})$ is stable under $\Pl$ and therefore under $\G$ (and $G$).
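For a concrete illustration (this example is ours and is used only as an illustration), consider $$g=\begin{pmatrix}3&8\\1&3\end{pmatrix}\in\Pl,\qquad tr(g)=6,\qquad tr(g)^2-4=32.$$ The element $g$ is hyperbolic, and the formula above gives the fixed points $\frac{\pm\sqrt{32}}{2}=\pm2\sqrt{2}$, which indeed lie in $\Q(\sqrt{32})=\Q(\sqrt{2})$ and are irrational.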
If we enumerate all prime numbers as $(p_i)_{i\in\N}$, we have, for $I\neq J\subset\N$ finite, $\Q(\sqrt{\prod_{i\in I}p_i})\cap\Q(\sqrt{\prod_{i\in J}p_i})=\Q$. We just mentioned that $P_\Z\cap\Q=\emptyset$, so we have $$P_\Z=\bigsqcup_{I\subset\N\mbox{ finite}}\left(P_\Z\bigcap\Q\left(\sqrt{\prod_{i\in I}p_i}\right)\right)$$ where each set in the decomposition is stable under $\G$. Note also that the fixed points of parabolic elements of $\Pl$ are rational. This actually completely characterizes the set $P_\Z$, as we will now show that $P_\Z\bigcap\Q\left(\sqrt{\prod_{i\in I}p_i}\right)=\Q\left(\sqrt{\prod_{i\in I}p_i}\right)\setminus\Q$: \begin{lemma}\label{all} Take any $s\in\Q(\sqrt{k})\setminus\Q$ for some $k\in\N$. Then $s\in P_\Z$. \end{lemma} Remark that $k$ is not an exact square, as $\Q(\sqrt{k})\setminus\Q$ has to be non-empty. \begin{proof} Note first that to have $\sqrt{tr^2-4}\in\Q(\sqrt{k})$ for some matrix it suffices to find integers $x\geq 2$ and $y$ such that $x^2-ky^2=1$. Indeed, any matrix with trace $2x$ will then satisfy this, for example $\begin{pmatrix}x&x^2-1\\1&x\end{pmatrix}$. This is known as Pell's equation, and it has infinitely many solutions for any $k$ that is not a square (see Mordell's book~\cite[Ch.~8]{mordell}). Write $s=\frac{p}{q}+\frac{p'}{q'}\sqrt{k}$ for some integers $p,q,p',q'$. Applying Pell's equation to $(p'q'q^2)^2k$, we obtain integers $x$ and $a$ such that $x^2-a^2(p'q'q^2)^2k=1$. In other words, $x^2-y^2k=1$ for $y=p'q'q^2a$. We construct $\begin{pmatrix}x+q'^2pqa&b\\q'^2q^2a&x-q'^2pqa\end{pmatrix}$ where $b=\frac{x^2-q'^4p^2q^2a^2-1}{q'^2q^2a}=p'^2q^2ak-q'^2p^2a\in a\Z$. The matrix has $s$ as a fixed point, and $s$ is not rational, therefore the matrix is a hyperbolic element of $\Pl$. \end{proof} \begin{remark}\label{fingen} The break points of a finite number of elements of $H(\Z)$ are all contained in the sets $\Q(\sqrt{k})$ for a finite number of $k$, so Lemma~\ref{all} implies that $H(\Z)$ is not finitely generated. \end{remark} In order to define configurations, we wish to study the slope changes at elements of $P_\Z$. Consider $g\in\G$ and $s\in P_\Z$ such that $g(s+0)\neq g(s-0)$. Then it is easy to see that $f=g(s-0)^{-1}g(s+0)\in \Pl$ fixes $s$. Therefore, in order to study the slope changes we need to understand the stabiliser of $s$ in $\Pl$. We prove: \begin{lemma}\label{cyclic} Fix $s\in\P(\R)$. The stabiliser $St_s$ of $s$ in $\Pl$ is either isomorphic to $\Z$ or trivial. \end{lemma} \begin{proof} Assume that $St_s$ is not trivial, and let $f\in St_s$ be different from the identity. Clearly, $f$ is not elliptic. If $f$ is hyperbolic, $s\in P_\Z$, and if $f$ is parabolic, $s\in\Q\cup\{\infty\}$. We distinguish three cases: $s\in P_\Z$, $s=\infty$ and $s\in\Q$. We first assume $s\in P_\Z$. Let $s=r+r'\sqrt{k}$ with $r,r'\in\Q$ and $k\in\Z$. Note that the calculations in the beginning of the section yield that every element $f$ of $St_s$ that is not the identity is hyperbolic, and that the other fixed point of $f$ is $\bar{s}=r-r'\sqrt{k}$. Let $i=\begin{pmatrix}\frac{1}{2}&-\frac{r+r'\sqrt{k}}{2}\\\frac{1}{r'\sqrt{k}}&1-\frac{r}{r'\sqrt{k}}\end{pmatrix}\in PSL_2(\R)$ and consider the conjugation of $St_s$ by $i$. By the choice of $i$ we have $i(s)=0$ and $i(\bar{s})=\infty$. Therefore the image of $St_s$ is a subgroup of the elements of $PSL_2(\R)$ that have zeros on the secondary diagonal.
Furthermore, calculating the image of an example matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$, for $tr=a+d$ the trace of the matrix, we get \begin{equation}\label{cyc} i\begin{pmatrix}a&b\\c&d\end{pmatrix}i^{-1}=\begin{pmatrix}\frac{tr+\sqrt{tr^2-4}}{2}&0\\0&\frac{tr-\sqrt{tr^2-4}}{2}\end{pmatrix}. \end{equation} Thus to understand the image of $St_s$ we just need to study the elements of the form $\frac{x+y\sqrt{k}}{2}$ with $x^2-ky^2=4$. This appears in a generalized form of Pell's equation, and those elements are known~\cite[Ch.~8]{mordell} to be powers of a fundamental solution (which is also true for the classic Pell equation if one identifies a solution of $x^2-y^2k=1$ with the unit element $x+y\sqrt{k}$ in $\Z[\sqrt{k}]$). This proves that the image of $St_s$ under this conjugation, which is isomorphic to $St_s$, is a subgroup of a group isomorphic to $\Z$. $St_s$ is then also isomorphic to $\Z$. The matrix with the fundamental solution in the upper left corner defines a canonical generator for the group of elements of the form seen in (\ref{cyc}), and its smallest positive power in the image of $St_s$ defines a canonical generator for $St_s$. Assume now $s=\infty$. As we described in (\ref{agrp}), the stabiliser of $\infty$ is $(\alpha_n)_{n\in\Z}$, which is trivially isomorphic to $\Z$. Lastly, assume that $s=\frac{p}{q}\in\Q$ with $p$ and $q$ co-prime. There exist $m$ and $n$ such that $pm+qn=1$. Then $i=\begin{pmatrix}m&n\\-q&p\end{pmatrix}\in \Pl$ verifies $i(s)=\infty$. Thus the conjugation by $i$ defines an injection from the subgroup that fixes $s$ into $St_\infty=\A$. We observe that non-trivial subgroups of $\Z$ are isomorphic to $\Z$, which concludes the proof.\end{proof} Having an explicit isomorphism between $St_s$ (for $s\in P_\Z$) and $\Z$ will be useful to us, so we wish to know its exact form. We prove: \begin{lemma}\label{log} Let $s\in P_\Z$. There exists $\phi_s\in\R^+$ that remains constant on the orbit $Gs$ of $s$ such that $f\mapsto\log_{\phi_s}(f'(s))$ defines an isomorphism between $St_s$ and $\Z$. \end{lemma} \begin{proof} The derivative at the fixed point is multiplicative. Therefore, for a fixed $s$, this follows from Lemma~\ref{cyclic} and the fact that subgroups of $\Z$ are isomorphic to $\Z$ (or trivial, which is impossible here). What we need to prove is that $\phi$ remains constant on $Gs$. Fix $s$ and consider $s'\in Gs$. Let $j\in \Pl$ be such that $j(s)=s'$. Then the conjugation by $j$ defines a bijection between $St_s$ and $St_{s'}$. Calculating the derivative of an element $f\in St_s$ we get $(jfj^{-1})'(s')=j'(s)(j^{-1})'(j(s))f'(s)=f'(s)$, which proves the result. \end{proof} We further denote by $\psi:\A\rightarrow\Z$ (see (\ref{agrp})) the map that associates $n$ to $\alpha_n$, and by $\psi_r$ the conjugate map for any $r\in\Q$. Remark that this is well defined by Lemma~\ref{cyclic} and the fact that conjugation in $\Z$ is trivial. \subsection{Configurations}\label{confsect} Fix $s\in P_\Z$ and let $\phi=\phi_s$ be given by Lemma~\ref{log}. By the isomorphism it defines, there exists an element $g_s$ that fixes $s$, such that $g_s'(s)=\phi_s$. As $s\notin\Q$, $g_s$ is hyperbolic.
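Before proceeding, let us illustrate Lemma~\ref{log} on a concrete point (an example of ours, only meant as an illustration). For $s=2\sqrt{2}$, the stabiliser $St_s$ consists of the matrices $$\begin{pmatrix}x&8y\\y&x\end{pmatrix},\qquad x^2-8y^2=1,$$ and the fundamental solution $(x,y)=(3,1)$ gives a generator $g$. As the derivative of $\frac{ax+b}{cx+d}$ at a point $x$ is $(cx+d)^{-2}$ (for a matrix of determinant $1$), the generator with derivative greater than $1$ at $s$ is $g^{-1}$, with $(g^{-1})'(2\sqrt{2})=(3-2\sqrt{2})^{-2}=(3+2\sqrt{2})^2$; so one can take $\phi_s=(3+2\sqrt{2})^2=17+12\sqrt{2}$ and $g_s=g^{-1}$.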
We associate to each element of the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}) a configuration representing the changes of slope at each point of the orbit $\G s=Gs$ of $s$; precisely: \begin{defi}\label{confdef} To $g\in\G$ we assign $C_g:Gs\rightarrow\Z$ by $$C_g(\gamma)=\log_\phi(g'(\gamma+0)g'(\gamma-0)^{-1}).$$ \end{defi} Note that by the choice of $\phi$ this value is well defined: indeed, $g(\gamma+0)g(\gamma-0)^{-1}\in \Pl$ fixes $\gamma$, and is therefore in $St_\gamma$. Remark that by definition of $\G$ each configuration in the image of this association has finite support. Remark also that the configuration ignores information about the changes of slope outside the orbit of $s$. For $s\in\Q$ we further denote $C_g(\gamma)=\psi_\gamma(g(\gamma+0)g(\gamma-0)^{-1})$, which will have similar properties. In the rest of the paper we will consider $s\in P_\Z$ unless otherwise specified. For completeness' sake, remark also that $G=H(\Z)\leq\G$ and the orbits of $G$ and $\G$ on $s$ are the same (as they both coincide with the orbit of $\Pl$), and therefore Definition~\ref{confdef} could be made directly for $G$; what we would obtain is the same as restricting the current definition. \begin{lemma}\label{unit}\label{hs} For every $s\in P_\Z$, there exists an element $h_s\in G$ such that $h_s(s-0)^{-1}h_s(s+0)=g_s$ and all other slope changes of $h_s$ are outside $Gs$. In particular, $C_{h_s}=\delta_s$. \end{lemma} \begin{proof} Fix $s\in P_\Z$ and let $k=k_s$ be the unique square-free integer such that $s\in\Q(\sqrt{k})$. We will construct $h_s$ such that $h_s(s)=s$. Note that in that case we have $C_{h_s^{-1}}=-\delta_s$. This implies that if we construct an element $\tilde{h}_s$ that verifies $\tilde{h}_s(s-0)^{-1}\tilde{h}_s(s+0)=g_s^{\pm1}$ and all other slope changes are outside $Gs$, choosing $h_s=\tilde{h}_s^{\pm1}$ gives the result. In other words, we can replace $g_s$ with $g_s^{-1}$. Seen as a function on $\R$, $g_s$ is defined at all points but $-\frac{d}{c}$. It is then continuous in an interval around $s$. Moreover, if the interval is small enough, $s$ is the only fixed point in it. Therefore for some $\varepsilon$, either $g_s(x)>x$ for every $x\in(s,s+\varepsilon)$, or $g_s(x)<x$ in that interval. As we have the right to replace it with its inverse, without loss of generality we assume that $g_s$ is greater than the identity in a right neighbourhood of $s$. Write $s=r+r'\sqrt{k}$ with $r,r'\in\Q$. Then the other fixed point of $g_s$ is its conjugate $\bar{s}=r-r'\sqrt{k}$. Remark that it is impossible for $-\frac{d}{c}$ to be between $s$ and $\bar{s}$, as the function $g_s$ is increasing where it is continuous and has the same limits at $+\infty$ and $-\infty$ (see Figure~\ref{trivplot}). If $r'<0$, $g_s$ is greater than the identity on $(s,\bar{s})$, as it is continuous there. In that case, it is smaller than the identity to the left of the fixed points, but as it is increasing and has a finite limit at $-\infty$, this implies (see Figure~\ref{trivplot}) that $-\frac{d}{c}<s$. Similarly, if $s>\bar{s}$, $g_s$ is increasing and greater than the identity to the right of $s$, but has a finite limit at $+\infty$, so $-\frac{d}{c}>s$.
\begin{figure} \centering \begin{minipage}{8cm}\centering\caption{Graphs of $g_s$ and the identity}\label{trivplot} \begin{tikzpicture} \begin{axis}[xmin=-4,xmax=4,ymin=-4,ymax=4,axis lines = middle, legend pos = south west,xtick={-10},ytick={17}] \addplot[domain=-3.8:3.8,color=black]{x}; \addlegendentry{$Id$} \addplot[color=blue,samples=100,domain=-4:-2,restrict y to domain=-4:4,dashed,thick]{(2*x+3)/(x+2)}; \addlegendentry{$g_s$} \addplot[color=red,samples=100,domain=-4:2,restrict y to domain=-4:4]{(2*x-3)/(2-x)}; \addplot[color=red,samples=100,domain=2:4,restrict y to domain=-4:4]{(2*x-3)/(2-x)}; \addlegendentry{$g_s^{-1}$} \addplot[color=blue,samples=100,domain=-2:4,restrict y to domain=-4:4,dashed,thick]{(2*x+3)/(x+2)}; \node[label={-30:{$s$}},circle,fill,inner sep=1pt] at (axis cs:-1.732,-1.732) {}; \end{axis} \end{tikzpicture} \end{minipage} \begin{minipage}{8cm}\centering\caption{Graphs of $g_s$ and $j_s$}\label{plot} \begin{tikzpicture} \begin{axis}[xmin=-4,xmax=4,ymin=-4,ymax=4,axis lines = middle, legend pos = south west,xtick={-10},ytick={17}] \addplot[domain=-3.8:3.8,color=black]{x}; \addlegendentry{$Id$} \addplot[color=blue,samples=100,domain=-4:-2,restrict y to domain=-4:4,dashed,thick]{(2*x+3)/(x+2)}; \addlegendentry{$g_s$} \addplot[color=red,samples=100,domain=-4:0,restrict y to domain=-4:4]{(2*x-3)/(2-x)}; \addlegendentry{$g_s^{-1}$} \addplot[samples=100,domain=0:4,restrict y to domain=-4:4,densely dotted,thick]{(4*x-1)/x}; \addlegendentry{$j_s$} \addplot[color=blue,samples=100,domain=-2:4,restrict y to domain=-4:4,dashed,thick]{(2*x+3)/(x+2)}; \addplot[color=red,samples=150,domain=0:2,restrict y to domain=-4:4]{(2*x-3)/(2-x)}; \addplot[color=red,samples=100,domain=2:4,restrict y to domain=-4:4]{(2*x-3)/(2-x)}; \addplot[samples=100,domain=-4:0,restrict y to domain=-4:4,densely dotted,thick]{(4*x-1)/x}; \node[label={-1:{$\bar{t}$}},circle,fill,inner sep=1pt] at (axis cs:0.268,0.268) {}; \node[label={110:{$\tilde{s}$}},circle,fill,inner sep=1pt] at (axis cs:0.414,1.5858) {}; \end{axis} \end{tikzpicture} \end{minipage} \end{figure} We will find a hyperbolic element $j_s$ satisfying the following: the larger fixed point $t$ of $j_s$ is not in $Gs$ and $t>-\frac{d}{c}$, while the smaller fixed point $\bar{t}$ is between $s$ and $\bar{s}$, and $j_s$ is greater than the identity between $\bar{t}$ and $t$. If $r'<0$, consider the interval $(\bar{t},\bar{s})$. At its infimum, $j_s$ has a fixed point while $g_s$ is greater than the identity, and at its supremum the opposite is true. By the intermediate value theorem, there exists $\tilde{s}$ in that interval such that $j_s(\tilde{s})=g_s(\tilde{s})$ (see Figure~\ref{plot}). If $r'>0$, consider the interval $(s,-\frac{d}{c})$. At its infimum, $g_s$ is fixed and therefore smaller than $j_s$, and at its supremum $g_s$ diverges towards $+\infty$ while $j_s$ has a finite limit. Again by the intermediate value theorem, there exists $\tilde{s}$ in that interval where $g_s$ and $j_s$ agree. As $-\frac{d}{c}<t$ by hypothesis, in both cases we have $s<\tilde{s}<t$. We then define \begin{equation*}h_s(x)=\begin{cases} x & x\leq s \\ g_s(x) & s\leq x\leq\tilde{s}\\ j_s(x) & \tilde{s}\leq x\leq t \\ x & t\leq x \\ \end{cases} \end{equation*} Thus it suffices to prove that we can construct $j_s$ satisfying those properties and such that $\tilde{s}\notin Gs$. Note that $\tilde{s}$ is a fixed point of $g_s^{-1}j_s$, so to prove that it is not in $Gs$ it will suffice to study the trace of the latter.
Remark that in this definition $h_s$ is strictly greater than the identity in an open interval, and equal to it outside (this is under the assumption on $g_s$; in the general case $h_s$ has its support in an open interval, and is either strictly greater than the identity on the whole interval, or strictly smaller). Write $r=\frac{p}{q}$. By Bezout's identity, there are integers $\tilde{m}$ and $\tilde{n}$ such that $q\tilde{n}-p\tilde{m}=1$. Then the matrix $i=\begin{pmatrix}\tilde{n}&p\\\tilde{m}&q\end{pmatrix}\in \Pl$ verifies $i.0=\frac{p}{q}$. Taking $\tilde{j}_s=i^{-1}j_si$, it suffices to find $\tilde{j}_s$ with fixed points outside $Gs$, the smaller one being close enough to $0$, and the greater one large enough. Remark that the only information we have on $g_s$ is its trace, so this does not complicate the computations for $\tilde{s}$. We will define $\tilde{j}_s$ in the form $\begin{pmatrix}x'+ma'&n^2l_sa'-m^2a'\\a'&x'-ma'\end{pmatrix}$ where $x'^2-n^2a'^2l_s=1$. Its fixed points are $m\pm n\sqrt{l_s}$. By choosing $m$ arbitrarily large, the second condition will be satisfied. Denote $ig_s^{-1}i^{-1}=\begin{pmatrix}\tilde{a}&\tilde{b}\\\tilde{c}&\tilde{d}\end{pmatrix}$ and $tr(g_s)^2-4=o^2k$. Calculating the trace of $g_s^{-1}j_s$ we get $tr(g_s)x'+a'\tilde{b}+mz_1+nz_2$ with $z_1,z_2\in\Z$. Then, using the fact that $n$ divides $x'-1$ (which will be seen in the construction of $x'$), we obtain for some $z_i\in \Z$, $i\in\N$: \begin{equation}\label{moche} \begin{split} tr(g_s^{-1}j_s)^2-4&=mz_3+nz_4+a'^2\tilde{b}^2+2a'\tilde{b}x'tr(g_s)+x'^2tr(g_s)^2-tr(g_s)^2+tr(g_s)^2-4\\ &=mz_3+nz_5+a'^2\tilde{b}^2+2a'\tilde{b}tr(g_s)+n^2a'^2l_str(g_s)^2+o^2k\\ &=mz_3+nz_6+a'^2\tilde{b}^2+2a'\tilde{b}tr(g_s)+o^2k. \end{split} \end{equation} Take a prime $p_s$ that is larger than $k$ and $\tilde{b}(tr(g_s)+2)$. There is an integer $a''<p_s$ such that $\tilde{b}(tr(g_s)+2)a''\equiv-1\mod{p_s}$. Take $a'=o^2ka''$. Then $$a'^2\tilde{b}^2+2a'\tilde{b}tr(g_s)+o^2k=o^2k(\tilde{b}(tr(g_s)+2)a''+1)(\tilde{b}(tr(g_s)-2)a''+1).$$ As $\Z/p_s\Z$ is a field, clearly $\tilde{b}(tr(g_s)-2)a''\not\equiv-1\mod{p_s}$. As $\tilde{b}(tr(g_s)+2)a''<p_s^2$, the product is divisible by $p_s$ but not by $p_s^2$. We will choose $m$ and $n$ divisible by $p_s^2$, which will then ensure that the value in (\ref{moche}) is divisible by $p_s$ but not by $p_s^2$, proving that $\tilde{s}\notin Gs$. All that is left is choosing $n$ and $m$. As we just noted, we need them to be multiples of $p_s^2$. Aside from that, $n$ needs to satisfy $x'^2-n^2a'^2l_s=1$, $l_s$ must not be a square times $k$, and we need to be able to make $m-n\sqrt{l_s}$ arbitrarily small. Write $m=p_s^2m'$ and $n=p_s^2n'$. Then $m'$ can be anything so long as $m-n\sqrt{l_s}$ becomes arbitrarily small. In other words, we are only interested in the fractional part of $n'\sqrt{l_s}$. We choose $x'=n'^2a'^2p_s^5-1$ and will prove that the conditions are satisfied for $n'$ large enough. Then $x'^2-n^2a'^2l_s=1$ is satisfied for $l_s=p_s(n'^2a'^2p_s^5-2)$. In particular, $p_s$ divides $l_s$ but its square does not, so $l_s$ is not equal to a square times $k$. Moreover, $\sqrt{l_s}=\sqrt{(n'a'p_s^3)^2-2p_s}$, and as the derivative of the square root is strictly decreasing, $\sqrt{(n'a'p_s^3)^2-2p_s}-n'a'p_s^3\rightarrow0$ for $n'\rightarrow\infty$.
Its fractional part then clearly converges towards $1$, which concludes the proof.\end{proof} For a product inside the group $\G$, by the chain rule we have $$(g_2g_1)'(\gamma)=g_2'(g_1(\gamma))g_1'(\gamma)$$ and thus \begin{equation}\label{der}C_{g_2g_1}(\gamma)=C_{g_1}(\gamma)+C_{g_2}(g_1(\gamma)).\end{equation} This gives us a natural action of $\G$ on $\Z^{Gs}$ by the formula $(g,C)\rightarrow C_g+S^gC$ where $S^gC(\gamma)=C(g(\gamma))$. It is easy to check that this also remains true for $s\in\Q$. \begin{lemma}\label{nostable} There is no configuration $C:Gs\rightarrow\Z$ such that $C=C_{h_s}+S^{h_s}C$. \end{lemma} Indeed, applying (\ref{der}) and taking the value at $s$ we get $C(s)=C_{h_s}(s)+C(h_s(s))=1+C(s)$ (as $h_s(s)=s$ and $C_{h_s}(s)=1$), a contradiction. Consider $g$ and $h$ such that $C_g=C_h$. We have $C_{Id}=C_{g^{-1}}+S^{g^{-1}}C_g$ and thus $C_{hg^{-1}}=C_{g^{-1}}+S^{g^{-1}}C_h=C_{Id}=0$. We denote $$H_s=\{g\in G:C_g=0\}.$$ Then: \begin{lemma}\label{generate} The element $h_s$ and the subgroup $H_s$ generate $G$ for every $s\in P_\Z$. \end{lemma} \begin{proof} We show by induction on $\|C_g\|_1=\sum_{x\in Gs}|C_g(x)|$ that every $g\in G$ is in the group generated by $\{h_s\}\cup H_s$. The base case is $\|C_g\|_1=0$, in which case we have $C_g\equiv 0$, so that $g\in H_s$ by definition. We take $g\in G$ and assume that every element with a smaller $l^1$ norm of its configuration is in the group generated by $\{h_s\}\cup H_s$. We take any $\alpha\in\supp(C_g)$. Without loss of generality, we can assume that $C_g(\alpha)>0$. As $g(\alpha)\in Gs$, by Lemma~\ref{trace} there exists $h\in H_s$ such that $h(s)=g(\alpha)$ and $C_h=0$. Let $\tilde{g}=hh_sh^{-1}$. As $h_s\in\{h_s\}\cup H_s$, we have $\tilde{g}\in\langle \{h_s\}\cup H_s\rangle$. Applying the composition formula~(\ref{der}) we obtain $C_{\tilde{g}}(x)=0$ for $x\neq g(\alpha)$ and $C_{\tilde{g}}(g(\alpha))=1$. We consider $\bar{g}=\tilde{g}^{-1}g$. If $x\neq \alpha$, by the composition formula (\ref{der}) we get $C_{\bar{g}}(x)=C_g(x)$, and at $\alpha$ we have $C_{\bar{g}}(\alpha)=C_g(\alpha)-1$. By the induction hypothesis we then have $\bar{g}\in\langle \{h_s\}\cup H_s\rangle$, and as $\tilde{g}$ is also included in this set, so is $g$.\end{proof} \begin{lemma}\label{trace} For any $g\in \Pl$ and $\gamma\in\R$ there exists $h\in H_s$ such that $g(\gamma)=h(\gamma)$. \end{lemma} \begin{proof} By Monod's construction in~\cite[Proposition~9]{h-main}, we know that we can find $h\in G$ that agrees with $g$ at $\gamma$, of the form $h=q^{-1}g$, where $q$ is equal to $\begin{pmatrix}a&b+ra\\c&d+rc\end{pmatrix}$ on the interval between its fixed points that contains infinity, and to the identity otherwise. For this result to hold, what is required is that either $r$ or $-r$ (depending on the situation) be large enough. Clearly, $C_h\equiv0$ would follow from the slope change points of $q$ being outside $Gs$ (as neither of them is infinity). In particular, it is enough to prove that for arbitrarily large $r$, the fixed points of $\begin{pmatrix}a&b+ra\\c&d+rc\end{pmatrix}$ are outside $\Q(\sqrt{k})$. The trace of that matrix is $(a+d)+rc$. Let $p$ be a large prime number that does not divide $2$, $k$ or $c$. As $c$ and $p$ are co-prime, there exists $r_0$ such that $a+d+r_0c\equiv p+2\pmod{p^2}$. Then for every $i\in\Z$, we have $(a+d+(r_0+p^2i)c)^2-4\equiv 4p\pmod{p^2}$.
As $p$ and $4$ are co-prime, this implies that for each $r=r_0+p^2i$ the fixed points of that matrix are not in $\Q(\sqrt{k})$, as $p$ does not divide $k$.\end{proof} \section{Convergence condition}\label{sectfour} Fix $s\in P_\Z$ and let us use the notations of Subsection~\ref{confsect}. For a measure $\mu$ on $\G$ we denote by $C_\mu=\bigcup_{g\in\supp(\mu)}\supp(C_g)$ its ``support'' on $Gs$. That is, $C_\mu\subset Gs$ is the set of points at which at least one element of the support of $\mu$ (in the classical sense) changes slope. We thus obtain the first result: \begin{lemma}\label{base} Consider the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}). Let $\mu$ be a measure on a subgroup of $\G$ such that $C_\mu$ is transient with respect to $\mu$ for the natural action of $\G$ on $\R$, and such that $h_s$ is in the semigroup generated by $\supp(\mu)$. Then the Poisson boundary of $\mu$ on the subgroup is not trivial. \end{lemma} \begin{proof} Consider a random walk $g_n$ with $g_{n+1}=h_ng_n$. For a fixed $\gamma\in Gs$ we have $$C_{g_{n+1}}(\gamma)=C_{g_n}(\gamma)+C_{h_n}(g_n(\gamma)).$$ By the transience hypothesis this implies that $C_{g_n}(\gamma)$ stabilises. In other words, $C_{g_n}$ converges pointwise towards a limit $C_\infty$. This defines a hitting measure on $\Z^{Gs}$ that is a quotient of the Poisson boundary of $\mu$. Moreover, it is $\mu$-stationary for the natural action on $\Z^{Gs}$. It remains to see that it is not trivial. Assume the contrary, that is, that there exists a configuration $C$ such that for almost all walks the associated configuration $C_{g_n}$ converges pointwise to $C$. By hypothesis there are elements $h_1,\dots,h_m$, each of positive probability, such that $h_mh_{m-1}\dots h_1=h_s$. There is a strictly positive probability for a random walk to start with $h_mh_{m-1}\dots h_1$. Applying~(\ref{der}) we get $C=C_{h_s}+S^{h_s}C$, which contradicts Lemma~\ref{nostable}.\end{proof} This lemma, along with Lemma~\ref{generate}, implies: \begin{lemma}\label{constr} Fix $s\in P_\Z$. Let $\mu$ be a measure on $G=H(\Z)$ that satisfies the following conditions: (i) The element $h_s$ belongs to the support of $\mu$, (ii) The intersection of the support of $\mu$ with the complement of $H_s$ is finite, (iii) The random walk induced by $\mu$ on the orbit of $s$ is transient. Then the Poisson boundary of $\mu$ is non-trivial. \end{lemma} We will now show how measures satisfying those assumptions can be constructed. Remark that the question of the existence of a measure with non-trivial boundary has already been solved by Frisch-Hartman-Tamuz-Vahidi-Ferdowski~\cite{choquet-deny}. In our case, notice that $\A\subset H_s$ (see (\ref{agrp})), and it is isomorphic to $\Z$. We can then use a measure on $\A$ to ensure transience of the induced walk on the orbit. To prove that, we use a lemma from Baldi-Lohoué-Peyrière~\cite{var} (see also Woess~\cite[Section~2.C,3.A]{woess2000random}). Here we formulate a stronger version of the lemma, as proven by Varopoulos~\cite{Varopoulis1983}: \begin{lemma}[Comparison lemma]\label{var} Let $P_1(x,y)$ and $P_2(x,y)$ be doubly stochastic kernels on a countable set $X$ and assume that $P_2$ is symmetric. Assume that there exists $\varepsilon> 0$ such that $$P_1(x,y)\geq\varepsilon P_2(x,y)$$ for any $x,y$.
Then \begin{enumerate} \item For any $0\leq f\in l^2(X)$ $$\sum_{n\in\N}\langle P_1^nf,f\rangle\leq \frac{1}{\varepsilon}\sum_{n\in\N}\langle P_2^nf,f\rangle.$$ \item If $P_2$ is transient then so is $P_1$ (for any point $x\in X$, this follows from (1) applied to $f=\delta_x$). \end{enumerate} \end{lemma} Here, doubly stochastic means that the transposed kernel $(x,y)\mapsto P(y,x)$ is also Markov. This is in particular the case for $P(x,y)=\mu(yx^{-1})$ for a measure $\mu$ on a group (the transposed kernel being $(x,y)\mapsto\mu(xy^{-1})$). \begin{remark}\label{gen} If $\lambda$ is a transient measure on $\A$ and $\mu$ satisfies conditions (i) and (ii) of Lemma~\ref{constr}, then the comparison lemma of Baldi-Lohoué-Peyrière (Lemma~\ref{var}) implies that $\varepsilon\lambda+(1-\varepsilon)\mu$ satisfies all the conditions of the lemma for any $0<\varepsilon<1$. In other words, this gives a way to construct non-degenerate symmetric measures on $G$ with non-trivial Poisson boundary. \end{remark} For completeness' sake, we show that there exist measures positive on all of $G$ that have non-trivial boundary. \begin{lemma} Let $\mu$ be a measure on a group $H$ with finite entropy and non-zero asymptotic entropy, which generates $H$ as a semigroup. Then there exists a measure $\tilde{\mu}$ with support equal to $H$ that also has finite entropy and non-zero asymptotic entropy. Furthermore, if $\mu$ is symmetric, so is $\tilde{\mu}$. \end{lemma} \begin{proof} Define $\tilde{\mu}=\frac{1}{e}\sum_{i\in\N}\frac{\mu^{*i}}{i!}$. By a result of Kaimanovich~\cite[Corollary~to~Theorem~4]{entrlemma} we get $$h(H,\tilde{\mu})=h(H,\mu)\sum_{i\in\N}\frac{i}{ei!}=h(H,\mu).$$ Moreover, as the entropy of $\tilde{\mu}^{*n}$ is not smaller than the entropy of $\tilde{\mu}$, finite asymptotic entropy implies finite entropy. \end{proof} From this lemma and the entropy criterion (Theorem~\ref{entropy}) it follows that to obtain a measure positive on all of $G$ with non-trivial boundary it suffices to construct a measure verifying the conditions of Lemma~\ref{constr} with finite asymptotic entropy, which we can achieve with the construction presented in Remark~\ref{gen}. \section{Thompson's group as a subgroup of $G=H(\Z)$}\label{thompsect} In~\cite{chainhomeo} Kim, Koberda and Lodha show that any two increasing homeomorphisms of $\R$ whose supports form a 2-chain (as they call it) generate, up to taking a power of each, a group isomorphic to Thompson's group $F$. Let us give the exact definition of this term. For a homeomorphism $f$ of $\R$ we call the set of points $x$ where $f(x)\neq x$ its support, denoted $\supp(f)$. Remark that we do not define the support as the closure of that set, as is sometimes done. Consider four real numbers $a,b,c,d$ with $a<b<c<d$. Take two homeomorphisms $f$ and $g$ such that $\supp(f)=(a,c)$ and $\supp(g)=(b,d)$. In that case we say that their supports form a 2-chain, and that the homeomorphisms generate a 2-prechain group. In other words, two homeomorphisms generate a 2-prechain group if their supports are open intervals that intersect each other but neither is contained in the other. Clearly, there exist many such pairs in $G$. We will give a simple example. Fix $s$ and find positive rational numbers $\tilde{r}$ and $\tilde{r}'$ such that $\tilde{r}<s<\tilde{r}+\tilde{r}'\sqrt{p_s}<t$ (where $(s,t)$ is the support of $h_s$ constructed in Lemma~\ref{hs}). Recall that $p_s$ is a prime larger than $k$.
Then choose a hyperbolic element $\tilde{g}_s$ that fixes $\tilde{r}+\tilde{r}'\sqrt{p_s}$ and define \begin{equation*}\tilde{h}_s(x)=\begin{cases} \tilde{g}_s(x) & \tilde{r}-\tilde{r}'\sqrt{p_s}\leq x\leq\tilde{r}+\tilde{r}'\sqrt{p_s} \\ x & \mbox{otherwise.} \\ \end{cases} \end{equation*} By the definition of $\tilde{r}$ and $\tilde{r}'$, the supports of $\tilde{h}_s$ and $h_s$ clearly form a 2-chain, and thus up to a power they generate a copy of Thompson's group (see~\cite[Theorem~3.1]{chainhomeo}). We will denote by $\mathfrak{a}_s$ the action $F\curvearrowright\R$ this defines. To obtain the convergence results, we need to prove that the induced random walks on the Schreier graphs of certain points are transient. By the comparison lemma of Baldi-Lohoué-Peyrière (Lemma~\ref{var}) it suffices to prove this for the simple random walk on the graph, which is why we will study its geometry. In the dyadic representation of Thompson's group, the geometry of the Schreier graph on the dyadic numbers has been described by Savchuk~\cite[Proposition~1]{slav10}. It is a tree quasi-isometric to a binary tree, with rays attached at each point (see Figure~\ref{sav}), which implies transience of the simple random walk. For a different proof of transience see Kaimanovich~\cite[Theorem~14]{kaimanovichthompson}. We will see that the Schreier graph has a similar geometry in the case of $\mathfrak{a}_s$ (see Figure~\ref{treefig}). \begin{lemma}\label{tree-old} Consider two homeomorphisms $f$ and $g$ of $\R$ whose supports are $\supp(f)=(a,c)$ and $\supp(g)=(b,d)$ with $a<b<c<d$. Denote by $H$ the group generated by $f$ and $g$. Then the simple random walk on the Schreier graph of $H$ on the orbit of $b$ is transient. \end{lemma} \begin{proof} Up to replacing $f$ or $g$ with its inverse, we can assume without loss of generality that $f(x)>x$ for $x\in\supp(f)$ and $g(x)>x$ for $x\in\supp(g)$. Denote by $\Gamma$ the Schreier graph of $H$ on the orbit of $b$. The vertices of this graph are the points of the orbit $Hb$ of $b$ by $H$, and two points are connected by an edge if and only if $f$, $g$, $f^{-1}$ or $g^{-1}$ sends one point to the other. Denote by $\tilde{\Gamma}$ the subgraph defined by the vertices that belong to the closed interval $[b,c]$. At every point $x$ of $\Gamma$ such that $x\notin[b,c]$, in a neighbourhood $(x-\varepsilon,x+\varepsilon)$ of $x$, one of the two elements $f$ and $g$ acts trivially, and the other one is strictly greater than the identity map. Without loss of generality, let $f$ act trivially. Let $i_0$ be the largest integer such that $g^{i_0}(x)\in[b,c]$. Then the set of points $(g^i(x))_{i\geq i_0}$ is a ray that starts at an element of $\tilde{\Gamma}$. As the simple random walk on $\Z$ is recurrent (see~\cite[Chapter~3,~Theorem~2.3]{durrett2005probability}), the walk always returns from such a ray to $\tilde{\Gamma}$ in finite time, and that part of the graph, $\tilde{\Gamma}$, is what we need to study. Replacing, if necessary, $f$ or $g$ by a power of itself, we can assume that $g^{-1}(c)<f(b)$. Denote $A=[b,g^{-1}(c)]$, $B=[f(b),c]=f([b,c])$ and $C=(g^{-1}(c),f(b))=[b,c]\setminus(A\cup B)$. Consider $x\in\tilde{\Gamma}$ with $x\neq b$ and $x\notin C$. Consider a reduced word $c_nc_{n-1}\dots c_1$ with $c_i\in\{f^{\pm1},g^{\pm1}\}$ that describes a path in $\tilde{\Gamma}$ from $b$ to $x$. In other words, $c_nc_{n-1}\dots c_1(b)=x$ and the suffixes of that word satisfy $c_ic_{i-1}\dots c_1(b)\in\tilde{\Gamma}$ for every $i\leq n$.
The fact that the word is reduced means that $c_{i+1}\neq c_i^{-1}$ for every $i$. We claim that if $x\in A$, this word ends with $c_n=g^{-1}$, and if $x\in B$, with $c_n=f$. We prove the claim by induction on the length $n$ of the word. If the word has length one, it is $g$, since $f$ fixes $b$ and since $g^{-1}(b)\notin [b,c]$. As $g(b)\in B$, this gives the base for the induction. Assume that the result is true for any reduced word of length strictly less than $n$ whose suffixes, when applied to $b$, stay in $[b,c]$. We will now prove it for $x=c_nc_{n-1}\dots c_1(b)$. We denote by $y=c_{n-1}c_{n-2}\dots c_1(b)$ the point just before $x$ on that path. We first consider the case $x\in B$ (as we will see from the proof, the other case is symmetric). We distinguish three cases: $y\in A$, $y\in B$ and $y\in C$. If $y\in A$, by the induction hypothesis we have $c_{n-1}=g^{-1}$. As the word is reduced, we thus have $c_n\neq g$. However, from $y\in A$ and $x\in B$ we have $y<x$. Therefore, $c_n\notin\{f^{-1},g^{-1}\}$, and the only possibility left is $c_n=f$. If $y\in B$, by the induction hypothesis we have $c_{n-1}=f$. Therefore, as the word is reduced, $c_n\neq f^{-1}$. From $g^{-1}(c)<f(b)$ it follows that $g(B)\cap[b,c]=\emptyset$. As $x\in B$, this implies that $c_n\neq g$. Similarly, $g^{-1}(B)\subset A$, therefore $c_n\neq g^{-1}$. The only possibility left is $c_n=f$. If $y\in C$, consider the point $y'=c_{n-2}\dots c_1(b)$. If $y'\in A$, by the induction hypothesis $c_{n-2}=g^{-1}$. Then $c_{n-1}\neq g$. As $y>y'$, this implies that $c_{n-1}=f$. However, $f(A)\subset B$, which is a contradiction. In a similar way, we obtain a contradiction for $y'\in B$. Finally, if $y'\in C$, then $y=c_{n-1}(y')$ cannot lie in $C$: both $f^{-1}(C)$ and $g(C)$ are outside $[b,c]$, while $f(C)\subset B$ and $g^{-1}(C)\subset A$. Therefore the case $y\in C$ is impossible. This completes the induction. Remark that we also obtained $\tilde{\Gamma}\cap C=\emptyset$, so the result holds for all points of $\tilde{\Gamma}$. In particular, if two paths in $\tilde{\Gamma}$ described by reduced words arrive at the same point, the last letter in those words is the same, which implies that $\tilde{\Gamma}$ is a tree. Remark also that the result implies that $c\notin\tilde{\Gamma}$, as $c\in B$ and $f^{-1}(c)=c$. Moreover, for a vertex $x\in A$, we have that $f(x)$, $g(x)$ and $g^{-1}(x)$ also belong to $\tilde{\Gamma}$. Similarly, for $x\in B$, $g^{-1}(x)$, $f(x)$ and $f^{-1}(x)$ are in $\tilde{\Gamma}$. Therefore every vertex aside from $b$ has three different neighbours, and the simple random walk on $\tilde{\Gamma}$ is thus transient. \end{proof} By the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}), this implies transience on the Schreier graph of $s$ for any measure on $G$ such that $h_s$ and $\tilde{h}_s$ are in the semigroup generated by the support of the measure. If the support of a given measure generates $G$ as a semigroup, conditions $(i)$ and $(iii)$ in Lemma~\ref{base} are then automatically satisfied. In particular, any measure $\mu$ on $G$ that generates it as a semigroup and such that there exists $s$ for which $\supp(\mu)\cap(G\setminus H_s)$ is finite has a non-trivial Poisson boundary.
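Although it plays no role in the proofs, the transience/recurrence dichotomy underlying Lemma~\ref{tree-old} can be illustrated numerically. The following sketch (ours, purely illustrative) uses the standard fact that the distance to the root of the simple random walk on a tree in which every vertex has three neighbours is a walk on $\N$ stepping away from the root with probability $2/3$, while the walk along a ray behaves like the simple random walk on $\Z$:

\begin{verbatim}
import random

def return_frequency(p_away, n_walks=2000, n_steps=10000):
    # Estimate the probability that the distance-to-root chain,
    # started one step from the root, returns to the root within
    # n_steps.  p_away = 1/2 mimics the walk on a ray (recurrent);
    # p_away = 2/3 mimics the walk on a tree where every vertex
    # has three neighbours (transient).
    returns = 0
    for _ in range(n_walks):
        d = 1
        for _ in range(n_steps):
            d += 1 if random.random() < p_away else -1
            if d == 0:
                returns += 1
                break
    return returns / n_walks

print("ray  (p_away=1/2):", return_frequency(0.5))   # close to 1
print("tree (p_away=2/3):", return_frequency(2/3))   # close to 1/2
\end{verbatim}

In the biased case the exact probability of ever returning is $(1/3)/(2/3)=1/2$, in agreement with transience.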
In the proof of Lemma~\ref{tree-old} we obtained a description of the graph of $\mathfrak{a}_s$, which is similar to the one by Savchuk~\cite{slav10} in the case of the dyadic action: \begin{remark}\label{tree} Consider two homeomorphisms $f$ and $g$ of $\R$ the supports of which are $\supp(f)=(a,c)$ and $\supp(g)=(b,d)$ with $a<b<c<d$. Denote $H$ the group generated by $f$ and $g$. Then the Schreier graph of $H$ on the orbit of $b$ is described in Figure~\ref{treefig} (solid lines are labelled by $f$ and dashed lines by $g$). \begin{figure}[!h]\caption{Schreier graph of $\mathfrak{a}_s$}\label{treefig}\centering\begin{tikzpicture}[-stealth] \tikzset{node/.style={circle,draw,inner sep=0.7,fill=black}} \tikzset{every loop/.style={min distance=8mm,in=55,out=125,looseness=10}} \tikzstyle{level 1}=[level distance=2.4cm,sibling distance=3cm] \tikzstyle{level 2}=[level distance=2.4cm,sibling distance=12mm] \tikzstyle{level 3}=[level distance=1.5cm,sibling distance=5mm] \tikzstyle{level 4}=[level distance=1cm,sibling distance=5mm] \node[node,label=below:{$b$}](0){} child[grow=left,<-]{node[node](-1){}} child[grow=right,->]{node[node]{} child[grow=south west,<-,dashed]{node[node]{} child[grow=south west,<-,dashed]{node[node]{} child[grow=south west,<-,dashed]{} child[grow=south east,->,solid]{} child[grow=left,<-,solid,level distance=1cm]{node[node](1){} child[grow=left,level distance=1cm]{node[node](8){}}}} child[grow=south east,->,solid]{node[node]{} child[grow=south west,<-,dashed]{} child[grow=south east,->,solid]{} child[grow=right,->,dashed,level distance=0.2cm]{node[node](9){} child[grow=right,level distance=0.2cm]{node[node](10){}}}} child[grow=left,<-,solid,level distance=1.5cm]{node[node](2){} child[grow=left,level distance=1.5cm]{node[node](11){}}}} child[grow=south east,->,solid]{node[node]{} child[grow=south west,<-,dashed]{node[node]{} child[grow=south west,<-,dashed]{} child[grow=south east,->,solid]{} child[grow=left,<-,solid,level distance=0.2cm]{node[node](3){} child[grow=left,level distance=0.2cm]{node[node](3b){}}}} child[grow=south east,->,solid]{node[node]{} child[grow=south west,<-,dashed]{} child[grow=south east,->,solid]{} child[grow=right,->,dashed,level distance=1cm]{node[node](4){} child[grow=right,level distance=1cm]{node[node](4b){}}}} child[grow=right,->,dashed,level distance=1.5cm]{node[node](6){} child[grow=right,level distance=1.5cm]{node[node](6b){}}}} child[grow=right,->,dashed,level distance=2.4cm]{node[node](5){} child[grow=right, level distance=2.4cm]{node[node](7){}}} }; \draw (0) edge[loop above,dashed] (0); \draw (-1) edge[loop above,dashed] (-1); \draw (1) edge[loop above,dashed] (1); \draw (2) edge[loop above,dashed] (2); \draw (3) edge[loop above,dashed] (3); \draw (3b) edge[loop above,dashed] (3b); \draw (4) edge[loop above] (4); \draw (4b) edge[loop above] (4b); \draw (5) edge[loop above] (5); \draw (6) edge[loop above] (6); \draw (6b) edge[loop above] (6b); \draw (7) edge[loop above] (7); \draw (8) edge[loop above,dashed] (8); \draw (9) edge[loop above] (9); \draw (10) edge[loop above] (10); \draw (11) edge[loop above,dashed] (11); \end{tikzpicture}\end{figure} \end{remark} \begin{proof} In the proof of Lemma~\ref{tree-old} we have shown that for every vertex $x\in\tilde{\Gamma}$ that is not $b$, $x$ has exactly three different neighbours in $\tilde{\Gamma}$. We also proved that $\tilde{\Gamma}$ is a tree. It is therefore a binary tree. 
Furthermore, if $x\in A$, it is equal to $g^{-1}(y)$ where $y$ is closer to $b$ than $x$ (in the graph), and if $x\in B$, $x=f(y)$ where $y$ is again closer to $b$. We think of $y$ as the parent of $x$. Then every vertex $x$ has two children: left child $g^{-1}(x)$ and right child $f(x)$. Furthermore, if $x$ is a left child, $x\in A$ and $f^{-1}(x)\notin\tilde{\Gamma}$. Equivalently, if $x$ is a right child, $g(x)\notin\tilde{\Gamma}$.\end{proof} Compare to the Schreier graph of the dyadic action as described by Savchuk~\cite[Proposition~1]{slav10}(see Figure~\ref{sav}). \begin{figure}[!h]\centering\caption{Schreier graph of the dyadic action of $F$ for the standard generators}\label{sav}\begin{tikzpicture}[-stealth] \tikzset{no edge/.style={edge from parent/.append style={draw=none}}} \tikzset{node/.style={circle,draw,inner sep=0.7,fill=black}} \tikzset{every loop/.style={min distance=8mm,in=55,out=125,looseness=10}} \tikzstyle{level 1}=[level distance=2.4cm,sibling distance=3cm] \tikzstyle{level 2}=[level distance=2.4cm,sibling distance=12mm] \tikzstyle{level 3}=[level distance=1.5cm,sibling distance=5mm] \tikzstyle{level 4}=[level distance=1cm,sibling distance=5mm] \node[node,label=south west:{$\frac{3}{4}$}](34){} child[grow=left,<-,level distance=2.4cm]{[no edge] node[node,label=below:{$\frac{7}{8}$}](78){}child[grow=left,<-,level distance=2.4cm]{[no edge] node[node,label=below:{$\frac{15}{16}$}](1516){}}} child[grow=right,level distance=2.4cm,dashed]{node[node,label=below:{$\frac{1}{2}$}](12){}child[grow=right,level distance=2.4cm,dashed]{node[node,label=below:{$\frac{1}{4}$}](14){}}} child[grow=down,->,level distance=1.5cm]{node[node,label=north west:{$\frac{5}{8}$}]{} child[grow=south west,<-,dashed,level distance=1.2cm]{node[node,label=right:{$\frac{13}{16}$}](1316){} child[grow=left,<-,level distance=2cm]{[no edge] node[node](1316a){}child[grow=left,<-,level distance=2cm]{[no edge] node[node](1316b){}}} child[grow=south west,->,solid,level distance=1.2cm]{node[node,label=above:{$\frac{11}{16}$}](1116){} child[grow=south west,<-,dashed,level distance=7.5mm]{node[node,label=north west:{$\frac{27}{32}$}](2732){} child[grow=left,<-,level distance=1.2cm]{[no edge] node[node](2732a){}child[grow=left,<-,level distance=1.2cm]{[no edge] node[node](2732b){}}} child[grow=south west,->,solid,level distance=7.5mm]{node[node,label=left:{$\frac{23}{32}$}](2332){} child[grow=south west,<-,dashed,level distance=1cm]{} child[grow=south east,->,solid,level distance=1cm]{} child[grow=right,->,dashed,level distance=6mm]{node[node](1){} child[grow=right,level distance=6mm]{node[node](8){}}}}} child[grow=south east,->,solid,level distance=1.5cm]{node[node,label=left:{$\frac{19}{32}$}]{} child[grow=south west,<-,dashed,level distance=1cm]{} child[grow=south east,->,solid,level distance=1cm]{} child[grow=right,->,dashed,level distance=0.2cm]{node[node](9){} child[grow=right,level distance=0.2cm]{node[node](10){}}}} child[grow=right,->,dashed,level distance=1cm]{node[node](2){} child[grow=right,level distance=1cm]{node[node](11){}}}}} child[grow=south east,->,solid,level distance=2.4cm]{node[node,label=north east:{$\frac{9}{16}$}]{} child[grow=south west,<-,dashed,level distance=7.5mm]{node[node,label=north west:{$\frac{25}{32}$}](2532){} child[grow=left,<-,level distance=0.6cm]{[no edge] node[node](2532a){}child[grow=left,<-,level distance=0.6cm]{[no edge] node[node](2532b){}}} child[grow=south west,->,solid,level distance=7.5mm]{node[node,label=left:{$\frac{21}{32}$}](2332){} child[grow=south 
west,<-,dashed,level distance=1cm]{} child[grow=south east,->,solid,level distance=1cm]{} child[grow=right,->,dashed,level distance=0.6cm]{node[node](3){} child[grow=right,level distance=0.6cm]{node[node](3b){}}}}} child[grow=south east,->,solid,level distance=1.5cm]{node[node,label=north east:{$\frac{17}{32}$}]{} child[grow=south west,<-,dashed,level distance=1cm]{} child[grow=south east,->,solid,level distance=1cm]{} child[grow=right,->,dashed,level distance=1cm]{node[node](4){} child[grow=right,level distance=1cm]{node[node](4b){}}}} child[grow=right,->,dashed,level distance=1.5cm]{node[node](6){} child[grow=right,level distance=1.5cm]{node[node](6b){}}}} child[grow=right,->,dashed,level distance=2.4cm]{node[node](5){} child[grow=right, level distance=2.4cm]{node[node](7){}}} }; \draw (1516) edge[bend right=10] (78); \draw (78) edge[bend right=10] (34); \draw (1516) edge[bend left=10,dashed] (78); \draw (78) edge[bend left=10,dashed] (34); \draw (1316b) edge[bend right=10] (1316a); \draw (1316a) edge[bend right=10] (1316); \draw (1316b) edge[bend left=10,dashed] (1316a); \draw (1316a) edge[bend left=10,dashed] (1316); \draw (2732b) edge[bend right=10] (2732a); \draw (2732a) edge[bend right=10] (2732); \draw (2732b) edge[bend left=10,dashed] (2732a); \draw (2732a) edge[bend left=10,dashed] (2732); \draw (2532b) edge[bend right=10] (2532a); \draw (2532a) edge[bend right=10] (2532); \draw (2532b) edge[bend left=10,dashed] (2532a); \draw (2532a) edge[bend left=10,dashed] (2532); \draw (12) edge[loop above] (12); \draw (14) edge[loop above] (14); \draw (1) edge[loop above,min distance=6mm,in=55,out=125,looseness=10] (1); \draw (2) edge[loop above] (2); \draw (3) edge[loop above,min distance=6mm,in=55,out=125,looseness=10] (3); \draw (3b) edge[loop above,min distance=6mm,in=55,out=125,looseness=10] (3b); \draw (4) edge[loop above] (4); \draw (4b) edge[loop above] (4b); \draw (5) edge[loop above] (5); \draw (6) edge[loop above] (6); \draw (6b) edge[loop above] (6b); \draw (7) edge[loop above] (7); \draw (8) edge[loop above,min distance=6mm,in=55,out=125,looseness=10] (8); \draw (9) edge[loop above] (9); \draw (10) edge[loop above] (10); \draw (11) edge[loop above] (11); \end{tikzpicture}\end{figure} \section{Schreier graphs of finitely generated subgroups of $H(\Z)$ and $\G$}\label{schreier} We will build on the result from Remark~\ref{tree}. In a more general case, the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}) implies that the existence of a regular subtree (like $\tilde{\Gamma}$) is enough to ensure transience on the Schreier graph. To obtain such a tree, we only need the assumptions of the remark inside the closed interval $[b,c]$. We will now prove a lemma that ensures transience while allowing the graph to be more complicated outside $[b,c]$. This will help us understand subgroups of $G$ for which the supports of their generators are not necessarily single intervals. \begin{lemma}\label{algtho} Let $f,g$ be homeomorphisms on $\R$ and assume that there exist $b<c$ such that $g(b)=b$, $f(c)=c$, $(b,c]\subset\supp(g)$ and $[b,c)\subset\supp(f)$. Assume also that there exists $s\in\R$ with $s\leq b$ such that for some $n\in\Z$, $f^n(s)\in[b,c]$. Let $H$ be the subgroup of the group of homeomorphisms on $\R$ generated by $f$ and $g$. Then the simple walk of $H$ on the Schreier graph $\Gamma$ of $H$ on the orbit $s$ is transient. \end{lemma} \begin{proof} Without loss of generality, $f(x)>x$ and $g(x)>x$ for $x\in(b,c)$ (and the end point that they do not fix). 
In that case clearly $n\geq0$. We will apply the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}) with $P_1$ defined on $\Gamma$ as the kernel of the simple random walk of $H$ on $\Gamma$. In other words, $P_1(x,f(x))=P_1(x,f^{-1}(x))=P_1(x,g(x))=P_1(x,g^{-1}(x))=\frac{1}{4}$ for every $x\in\Gamma$. Let us now define $P_2$. Let $a$ be the largest fixed point of $f$ that is smaller than $b$, and $d$ the smallest fixed point of $g$ that is larger than $c$. For $x\in(a,b)$ we define $n(x)=\min\{n\in\N : f^n(x)\in[b,c]\}$. Similarly, for $x\in(c,d)$ we define $m(x)=\min\{m\in\N : g^{-m}(x)\in[b,c]\}$. We define \begin{equation*}\begin{minipage}{8.2cm} $P_2(x,f(x))=\begin{cases} \frac{1}{4} & x\in[b,c] \\ \frac{1}{4} & x\in(a,b)\mbox{ and }n(x)\mbox{ is odd}\\ \frac{3}{4} & x\in(a,b)\mbox{ and }n(x)\mbox{ is even}\\ 0 & \mbox{otherwise.}\\ \end{cases}$\end{minipage}\begin{minipage}{8.3cm} $P_2(x,f^{-1}(x))=\begin{cases} \frac{1}{4} & x\in[b,c] \\ \frac{3}{4} & x\in(a,b)\mbox{ and }n(x)\mbox{ is odd}\\ \frac{1}{4} & x\in(a,b)\mbox{ and }n(x)\mbox{ is even}\\ 0 & \mbox{otherwise.}\\ \end{cases}$\end{minipage} \end{equation*} \begin{equation*}\begin{minipage}{8.2cm} $P_2(x,g(x))=\begin{cases} \frac{1}{4} & x\in[b,c] \\ \frac{3}{4} & x\in(c,d)\mbox{ and }m(x)\mbox{ is odd}\\ \frac{1}{4} & x\in(c,d)\mbox{ and }m(x)\mbox{ is even}\\ 0 & \mbox{otherwise.}\\ \end{cases}$\end{minipage}\begin{minipage}{8.3cm} $P_2(x,g^{-1}(x))=\begin{cases} \frac{1}{4} & x\in[b,c] \\ \frac{1}{4} & x\in(c,d)\mbox{ and }m(x)\mbox{ is odd}\\ \frac{3}{4} & x\in(c,d)\mbox{ and }m(x)\mbox{ is even}\\ 0 & \mbox{otherwise.}\\ \end{cases}$\end{minipage} \end{equation*} Of course, we have $P_2(x,y)=0$ otherwise. This clearly defines a stochastic kernel (as the sum of probabilities at each $x$ is $1$), and it follows directly from the definition that it is symmetric. It is therefore doubly stochastic and symmetric. We now check that it is transient, similarly to the proof of Lemma~\ref{tree-old}. Indeed, take a point $x\in[f(b),c]$ (respectively $x\in[b,g^{-1}(c)]$). Consider the subgraph $\tilde{\Gamma}(x)$ of the vertices of the form $c_nc_{n-1}\dots c_1(x)$ with $c_ic_{i-1}\dots c_1(x)\in [b,c]$ for every $i$ and $c_1\in\{f^{-1},g^{-1}\}$ (respectively $c_1\in\{g,f\}$). As in Lemma~\ref{tree-old}, $\tilde{\Gamma}(x)$ is a binary tree. Moreover, the graph $\bar{\Gamma}(x)$ defined by the vertices of the form $\tilde{c}^n(y)\in\Gamma$ with $\tilde{c}\in\{g,f^{-1}\}$, $n\in\N$ and $y\in\tilde{\Gamma}(x)$ has the same structure as the graph in Lemma~\ref{tree-old}. In particular, the simple random walk on it is transient. Take any $y\in\Gamma\cap(a,d)$. Then either $f^n(y)\in[f(b),c]$ for some $n$, or $g^{-n}(y)\in[b,g^{-1}(c)]$ for some $n$. In either case, there is $x$ such that $y$ belongs to $\bar{\Gamma}(x)$. By the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}), we have $\sum_{n\in\N}\langle P_2^n\delta_y,\delta_y\rangle<\infty$. Therefore $P_2$ is transient. We apply Lemma~\ref{var} again, using that $P_1\geq\frac{1}{3}P_2$, which concludes the proof. \end{proof} Remark that with this result we can apply the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}) to obtain transience for a random walk induced by a measure on a subgroup of the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}), the support of which contains two such elements and generates that subgroup as a semigroup. For the sake of completeness, we will also consider amenability of Schreier graphs of subgroups of $\G$.
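Before turning to amenability, let us record a quick sanity check (ours, not part of the original argument) of the properties of $P_2$ used above. At a ray point $x\in(a,b)$ with $n(x)$ even, the four allowed moves have probabilities $$P_2(x,f(x))+P_2(x,f^{-1}(x))+P_2(x,g(x))+P_2(x,g^{-1}(x))=\tfrac{3}{4}+\tfrac{1}{4}+0+0=1,$$ so the kernel is indeed stochastic, and since $n(f(x))=n(x)-1$ is odd, $$P_2(f(x),x)=P_2(f(x),f^{-1}(f(x)))=\tfrac{3}{4}=P_2(x,f(x)),$$ confirming symmetry along the ray. Finally, every non-zero entry of $P_2$ is at most $\tfrac{3}{4}=3\cdot\tfrac{1}{4}$ and $P_2$ only charges moves allowed by $P_1$, whence the entrywise bound $P_1\geq\frac{1}{3}P_2$ invoked in the last step.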
A locally finite graph is called amenable if for every $\varepsilon>0$ there exists a finite set of vertices $S$ such that $|\partial S|/|S|<\varepsilon$, where $\partial S$ is the set of vertices adjacent to $S$. This closely mirrors F{\o}lner's criterion for amenability of groups. In particular, a finitely generated group is amenable if and only if its Cayley graph is. In his article, Savchuk~\cite{slav10} shows that the Schreier graph of the dyadic action of Thompson's group $F$ is amenable. He also mentions that this was already noted in private communication between Monod and Glasner. The amenability of the graph comes from the fact that sets with small boundary can be found in the rays (see Figure~\ref{sav}). We will prove that for finitely generated subgroups of $\G$ we can find sets quasi-isometric to rays. \begin{remark}Consider a point $s\in\R$ and a finitely generated subgroup $H$ of the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}). Let $a=\sup(Hs)$. Let $S$ be a finite generating set and consider the Schreier graph $\Gamma$ defined by the action of $H$ on $Hs$. Then there is $b<a$ such that the restriction of $\Gamma$ to $(b,a)$ is a union of subgraphs quasi-isometric to rays. \end{remark} \begin{proof}As all elements of $H$ are continuous (when seen as functions on $\R$), they all fix $a$. Therefore they admit left germs at $a$. By definition, the germs belong to the stabiliser $St_a$ of $a$ in $\Pl$. By Lemma~\ref{cyclic}, $St_a$ is cyclic. Let $h\in\Pl$ be a generator of $St_a$. Then the left germ at $a$ of any element $s_i\in S$ is equal to $h^{n_i}$ for some $n_i\in\Z$. Up to replacing $h$ with $h^{\gcd(n_i\,:\,s_i\in S)}$, we can assume that there exists $g\in H$ such that the left germ at $a$ of $g$ is $h$. Let $(b,a)$ be a small enough left neighbourhood such that the restrictions of all elements of $S\cup\{g\}$ to $(b,a)$ are equal to their left germs at $a$. For example, one can choose $b$ to be the largest break point of an element of $S\cup\{g\}$ that is smaller than $a$. Consider the following equivalence relation on $Hs\cap(b,a)$: $x\sim y$ if and only if there exists $n\in\Z$ such that $h^n(x)=y$. As the restriction of $h$ to $(b,a)$ is an increasing function, an equivalence class is of the form $(h^n(x))_{n\in\N}$ for some $x\in(b,a)$. We will prove that this set is quasi-isometric to a ray (when seen as a subgraph of $\Gamma$). By the definition of $b$, it is preserved by the elements of $S$. Furthermore, the graph distance $d$ on it is bilipschitz to the standard distance $d'$ on $\N$. Indeed, on the one hand, we have $d\geq\frac{1}{\max(|n_i|:s_i\in S)}d'$. On the other hand, $d\leq|g|d'$, where $|g|$ is the word length of $g$. This proves the result. \end{proof} This implies: \begin{remark}\label{grapham}Consider a point $s\in\R$ and a finitely generated subgroup $H<\G$. The Schreier graph defined by the action of $H$ on $Hs$ is amenable. \end{remark} \section{Convergence conditions based on expected number of break points}\label{anothersuff} The aim of this section is to describe sufficient conditions for convergence similar to Theorem~\ref{base} that do not assume leaving $C_\mu$ (which is potentially infinite). The ideas presented are similar to the arguments used in studies of measures with finite first moment on wreath products (see Kaimanovich~\cite[Theorem~3.3]{Kaimanovich1991}, Erschler~\cite[Lemma~1.1]{erschler2011}). Consider the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}) and a measure $\mu$ on it.
We think of the measure as one that could be positive on every element of $\G$. Fix $s\in P_\Z\cup\Q$ and denote, for $g\in\G$, $A_g=\supp(C_g)$ (for $s\in\Q$, see the discussion after Definition~\ref{confdef} and after the proof of Lemma~\ref{log}). Take $x\in Gs$ and consider a random walk $(g_n)_{n\in\N}$ with increments $h_n$, that is $g_{n+1}=h_ng_n$. Then by (\ref{der}), $$C_{g_n}(x)\neq C_{g_{n+1}}(x)\iff g_n(x)\in A_{h_n}.$$ In other words, $C_{g_n}(x)$ converges if and only if $g_n(x)\in A_{h_n}$ for only finitely many values of $n$. For a fixed $n$, the probability that $g_n(x)$ belongs to $A_{h_n}$ is $$\langle p^{*n}\delta_x,\sum_{h\in\G}\mu(h)\chi_{A_h}\rangle$$ where $p$ is the induced kernel on $Gs$. Taking the sum over $n$ we get: \begin{lemma}\label{conv} Fix $\mathfrak{o}\in Gs$. For a random walk $g_n$ on $\G$ with law $\mu$, the value $C_{g_n}(\mathfrak{o})$ converges with probability $1$ if and only if $$\sum_{n\in\N}\langle p^{*n}\delta_\mathfrak{o},\sum_{h\in\G}\mu(h)\chi_{A_h}\rangle<\infty$$ where $p$ is the induced kernel on $Gs$. \end{lemma} We define $f_\mu$ as \begin{equation}\label{fmu} f_\mu=\sum_{h\in\G}\mu(h)\chi_{\supp(C_h)} \end{equation} and show that it suffices that $f_\mu$ be in $l^1$ and that the induced walk be transient: \begin{lemma}\label{ltwo} Let $s\in P_\Z\cup\Q$ be fixed. Take a measure $\mu$ on $\G$ such that the induced random walk on the Schreier graph on $Gs$ is transient and $f_\mu\in l^1(Gs)$ (as defined in (\ref{fmu})). Then for a random walk $g_n$ on $\G$ with law $\mu$, the associated configuration $C_{g_n}$ converges pointwise with probability $1$. \end{lemma} Remark in particular that $\mathbb{E}[Br]<\infty$ implies $f_\mu\in l^1(Gs)$, where $Br(g)$ is the number of break points of $g$. Indeed, for any fixed $s$, $\|f_\mu\|_1$ is the expected number of break points inside the orbit $Gs$, which is smaller than the total expected number of break points. This is, of course, also true for measures on $H(\Z)$, as $H(\Z)\leq\G$. \begin{proof} Fix a point $\mathfrak{o}$ in the Schreier graph on $Gs$. We denote by $p$ the induced kernel on $Gs$ and write $f=f_\mu$. We have \begin{equation}\label{ltwosum} \sum_{n\in\N}\langle p^{*n}\delta_\mathfrak{o},f\rangle=\sum_{n\in\N}\sum_{x\in Gs}p^{*n}(\mathfrak{o},x)f(x)=\sum_{x\in Gs}f(x)\sum_{n\in\N}p^{*n}(\mathfrak{o},x), \end{equation} where the interchange in the order of summation is justified since all terms are non-negative. We write $p^{*n}(\mathfrak{o},x)=\check{p}^{*n}(x,\mathfrak{o})$ where $\check{p}$ is the inverse kernel of $p$. Let $\check{P}(x,y)$ be the probability that a random walk (with law $\check{p}$) starting at $x$ visits $y$ at least once. Then $\sum_{n\in\N}\check{p}^{*n}(x,y)=\check{P}(x,y)\sum_{n\in\N}\check{p}^{*n}(y,y)$. Indeed, $\sum_{n\in\N}\check{p}^{*n}(x,y)$ is the expected number of visits to $y$ of a walk starting at $x$, and a random walk that starts from $x$ and visits $y$ exactly $k$ times is the concatenation of a walk that goes from $x$ to $y$ and a walk that starts from $y$ and visits it $k$ times (counting the starting point). Thus \begin{equation}\label{ltwoinv} \sum_{n\in\N}p^{*n}(\mathfrak{o},x)=\sum_{n\in\N}\check{p}^{*n}(x,\mathfrak{o})=\check{P}(x,\mathfrak{o})\sum_{n\in\N}\check{p}^{*n}(\mathfrak{o},\mathfrak{o})\leq\sum_{n\in\N}\check{p}^{*n}(\mathfrak{o},\mathfrak{o}).
\end{equation} Then if we denote $c(p,\mathfrak{o})=\sum_{n\in\N}p^{*n}(\mathfrak{o},\mathfrak{o})$, \begin{equation}\label{ltwofin} \sum_{x\in Gs}f(x)\sum_{n\in\N}p^{*n}(\mathfrak{o},x)\leq c(p,\mathfrak{o})\|f\|_1<\infty. \end{equation} Applying Lemma~\ref{conv} we obtain the result. \end{proof} Combining this result with the result of Lemma~\ref{algtho}, which gives transience of the induced random walk on $Gs$ under certain conditions, we obtain: \begin{lemma}\label{algone} Consider the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}). Let $H$ be a subgroup of $\G$. Assume that there exist $b<c$ such that $g(b)=b$, $f(c)=c$, $(b,c]\subset\supp(g)$ and $[b,c)\subset\supp(f)$ for some $f,g\in H$ (see Figure~\ref{alto} on page~\pageref{alto}). Assume also that there exist $s\in P_\Z\cup\Q$ and $\varepsilon_s>0$ with $s\leq b$ such that for some $n\in\Z$, $f^n(s)\in[b,c]$, and also $g(s-\varepsilon)=s-\varepsilon$ and $g(s+\varepsilon)\neq s+\varepsilon$ for every $0<\varepsilon\leq\varepsilon_s$. Then for any $\mu$ on $H$ with finite first break moment ($\mathbb{E}[Br]<\infty$) such that $\supp(\mu)$ generates $H$ as a semigroup, the Poisson boundary of $\mu$ on $H$ is non-trivial. \end{lemma} \begin{proof} By Lemma~\ref{algtho}, the simple random walk on the Schreier graph of $s$ under $\langle f,g\rangle$ is transient. By the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}), as the support of $\mu$ generates $H$ as a semigroup, the random walk induced by $\mu$ on the Schreier graph of $s$ is then transient. Applying Lemma~\ref{ltwo}, the associated configurations converge, as $\mu$ has finite first break moment. However, by the hypothesis on $s$, $g(s)=s$ and $C_g(s)\neq 0$. Therefore, as $g\in H$, the limit configuration cannot be almost surely the same. Thus the Poisson boundary of $\mu$ on $H$ is non-trivial. \end{proof} For finitely generated subgroups of $\G$, from Lemma~\ref{ltwo} we have: \begin{remark}\label{brfin} The number of break points is subadditive with respect to multiplication: $Br(gh)\leq Br(g)+Br(h)$. In particular, if a measure $\mu$ has finite first moment, then it has finite first break moment. \end{remark} \begin{cor}\label{firstfin} Consider a measure $\mu$ on $\G$, the support of which generates a finitely generated subgroup, and such that $\mu$ has a finite first moment on that subgroup. Assume that there exists $s\in P_\Z$ such that the random walk on the Schreier graph on $Gs$ of this subgroup is transient. Then, for almost all random walks on $\G$ with law $\mu$, the associated configuration converges pointwise. \end{cor} \begin{proof} Follows from Remark~\ref{brfin} and Lemma~\ref{ltwo}.\end{proof} In such cases it is enough to prove that the associated limit configuration is not always the same, which can require case-specific arguments. We already have this in the case of Thompson's group: \begin{proof}[Proof of Corollary~\ref{finfirstthomp}]Fix $s\in P_\Z$ and consider the action $\mathfrak{a}_s$ of Thompson's group $F$ on $\R$ as defined in Section~\ref{thompsect}. Take a measure $\mu$ on $F$ with finite first moment, the support of which generates $F$ as a semigroup. From Lemma~\ref{tree-old} and the comparison lemma by Baldi-Lohoué-Peyrière (Lemma~\ref{var}), the walk $\mu$ induces on the orbit of $s$ is transient. Applying Corollary~\ref{firstfin}, this implies that the associated configuration stabilises, and by Lemma~\ref{nostable}, it cannot always converge towards the same point.
Therefore the Poisson boundary of $\mu$ is not trivial.\end{proof} We remark that arguments similar to the ones in this section can also be made for the action of Thompson's group considered in Kaimanovich's article~\cite{kaimanovichthompson}. In a more general case, we can use the stronger, Varopoulos form of the comparison lemma (Lemma~\ref{var}) in order to prove that if the transient walk escapes quickly enough, we can also obtain the result for $f_\mu\in l^2(Gs)$ (and not necessarily in $l^1$): \begin{lemma}\label{ltwoforreal} Fix $s\in P_\Z$. Consider a measure $\mu_0$ such that $\tilde{f}=f_{\mu_0}\in l^2(Gs)$. Consider $\lambda$ on $H_s$ such that $\sum_{n\in\N}\langle\lambda^{*n}\tilde{f},\tilde{f}\rangle<\infty$. Let $\mu=\varepsilon\lambda+(1-\varepsilon)\mu_0$ with $0<\varepsilon<1$. Then for almost all random walks on $G$ with law $\mu$, the associated configuration converges pointwise. \end{lemma} \begin{proof} Clearly, $f_\mu=(1-\varepsilon)\tilde{f}$. Then by the comparison Lemma~\ref{var} we get: $$\sum_{n\in\N}\langle\mu^{*n}f_\mu,f_\mu\rangle<\frac{1}{\varepsilon(1-\varepsilon)^2}\sum_{n\in\N}\langle\lambda^{*n}\tilde{f},\tilde{f}\rangle<\infty.$$ Denote $f=f_\mu$. Consider $x\in P_\Z$ such that it is possible for the value of the associated configuration at $x$ to change. In other words, there are $n_0\in\N$ and $y\in P_\Z$ such that $x\in\supp(\mu^{*n_0})y$ and $f(y)>0$. Denote by $p$ the probability that the walk reaches $x$ from $y$ in $n_0$ steps. Then $\sum_{n\in\N}\langle\mu^{*n}\delta_y,f\rangle>p\sum_{n\in\N}\langle\mu^{*n+n_0}\delta_x,f\rangle$. In particular, if the first sum is finite, so is the second. However, we clearly have $\sum_{n\in\N}\langle\mu^{*n}\delta_y,f\rangle<\frac{1}{f(y)}\sum_{n\in\N}\langle\mu^{*n}f,f\rangle$, which concludes the proof. \end{proof} In particular, if for some $s$ the associated limit configurations cannot all be stable under the elements of $\langle\supp(\mu)\rangle$, we obtain a non-trivial boundary. \begin{cor} Fix $s\in P_\Z$. Consider a measure $\mu_0$ such that $h_s\in\supp(\mu_0^{*n_0})$ for some $n_0$ and $\tilde{f}=f_{\mu_0}\in l^2(Gs)$. Consider $\lambda$ on $H_s$ such that $\sum_{n\in\N}\langle\lambda^{*n}\tilde{f},\tilde{f}\rangle<\infty$. Let $\mu=\varepsilon\lambda+(1-\varepsilon)\mu_0$ with $0<\varepsilon<1$. Then the Poisson boundary of $\mu$ on the subgroup generated by its support is non-trivial. \end{cor} \begin{proof} Follows from Lemma~\ref{ltwoforreal} and Lemma~\ref{nostable}. \end{proof} Remark that there always exists a symmetric measure $\lambda$ satisfying those assumptions, as $\A\subset H_s$ ($\A$ was defined in (\ref{agrp})).
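The mechanism behind Lemma~\ref{conv} and Lemma~\ref{ltwo}, namely that a transient walk meets a summable family of break sets only finitely often, can be illustrated with a toy Monte-Carlo computation (ours, purely illustrative): for a transient kernel on $\Z$ and a summable $f$, the quantity $\sum_{n}\langle p^{*n}\delta_0,f\rangle$, i.e.\ the expected number of potential configuration changes, is finite.

\begin{verbatim}
import random

def expected_changes(p_right=0.7, n_walks=4000, n_steps=5000):
    # Estimate sum_n <p^n delta_0, f> for the biased walk on Z and
    # f = indicator of {0}: the expected number of times the walk
    # sits at 0, i.e. of potential configuration changes there.
    total = 0.0
    for _ in range(n_walks):
        x = 0
        for _ in range(n_steps):
            if x == 0:
                total += 1.0
            x += 1 if random.random() < p_right else -1
    return total / n_walks

# Exact value is 1/(1 - 0.6) = 2.5 for p_right = 0.7.
print(expected_changes())
\end{verbatim}

Here the return probability of the biased walk is $1-|p-q|=0.6$, so the expected number of visits to the origin is $1/(1-0.6)=2.5$; finiteness of such sums is exactly what forces the configurations $C_{g_n}$ to stabilise.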
\begin{figure} \centering \begin{minipage}{8cm}\centering\caption{Graphs of $f$ and $g$ and positions of $b$ and $c$}\label{alto} \begin{tikzpicture} \begin{axis}[xmin=-4,xmax=4,ymin=-4,ymax=4,axis lines = middle, legend pos = south west,xtick={-10},ytick={17}] \addplot[domain=-3.8:3.8,color=black]{x}; \addlegendentry{$Id$} \addplot[color=blue,samples=100,domain=0:2.5,restrict y to domain=-4:4,dashed,thick]{(2*x+3)/(x+2)}; \addlegendentry{$f$} \addplot[samples=100,domain=0:2.5,restrict y to domain=-4:4,densely dotted,thick]{(4*x-1)/x}; \addlegendentry{$g$} \node[label={-1:{$b$}},circle,fill,inner sep=1pt] at (axis cs:0.268,0.268) {}; \node[label={110:{$c$}},circle,fill,inner sep=1pt] at (axis cs:1.732,1.732) {}; \end{axis} \end{tikzpicture} \end{minipage} \begin{minipage}{8cm}\centering\caption{Graphs of $f$ and $g$ in $(a,b')$}\label{alte} \begin{tikzpicture} \begin{axis}[xmin=-4,xmax=4,ymin=-4,ymax=4,axis lines = middle, legend pos = south west,xtick={-10},ytick={17}] \addplot[domain=-3.8:3.8,color=black]{x}; \addlegendentry{$Id$} \addplot[samples=100,domain=0.4365:1.732,restrict y to domain=-4:4,densely dotted,thick]{(2*x+3)/(x+2)}; \addlegendentry{$f$} \addplot[color=blue,samples=100,domain=0.382:2.618,restrict y to domain=-4:4,dashed,thick]{(3*x-1)/x}; \addlegendentry{$g$} \addplot[samples=100,domain=0.382:0.4365,restrict y to domain=-4:4,densely dotted,thick]{(8*x-3)/(3*x-1)}; \node[label={-2:{$a$}},circle,fill,inner sep=1pt] at (axis cs:0.382,0.382) {}; \node[label={-30:{$b$}},circle,fill,inner sep=1pt] at (axis cs:1.732,1.732) {}; \node[label={-30:{$b'$}},circle,fill,inner sep=1pt] at (axis cs:2.618,2.618) {}; \end{axis} \end{tikzpicture} \end{minipage} \end{figure} \section{An algebraic lemma and proof of the main result}\label{algsec} Consider the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}). Take a subgroup $H$ of $\G$. In Lemma~\ref{algone} we proved that if there are $f,g\in H$ and $b,c,s\in\R$ that satisfy certain assumptions, for every measure $\mu$ on $H$ the support of which generates $H$ as a semigroup and that has finite first break moment $\mathbb{E}[Br]$, $(H,\mu)$ has non-trivial Poisson boundary. To prove the main result (Theorem~\ref{main}) we will study subgroups that do not contain elements satisfying those assumptions. \begin{lemma}\label{algthree} Let $H=\langle h_1,\dots,h_k\rangle$ be a finitely generated subgroup of $\G$. Then either $H$ is solvable, or the assumptions of Lemma~\ref{algone} are satisfied for some $f,g\in H$, $b,c,s\in\R$. \end{lemma} We recall that for $f\in\G$, and $a,b\in\R$ such that $f(a)=a$ and $f(b)=b$, we defined (see Definition \ref{restr}) $f\restriction_{(a,b)}\in\G$ by $f\restriction_{(a,b)}(x)=f(x)$ for $x\in(a,b)$ and $x$ otherwise. \begin{proof} We first check that with the appropriate assumptions on $(f,g,b,c)$, $s$ always exists: \begin{lemma}\label{algtwo} Let $H$ be a subgroup of $\G$. Assume that there exist $b<c$ such that $g(b)=b$, $f(c)=c$, $(b,c]\subset\supp(g)$ and $[b,c)\subset\supp(f)$ for some $f,g\in H$. Then there exist $f',g',b',c'$ and $s$ that satisfy the assumptions of Lemma~\ref{algone}. \end{lemma} The assumptions of the lemma are illustrated in Figure~\ref{alto}. Recall that we defined $\supp(f)=\{x\in\R:f(x)\neq x\}$. \begin{proof} Without loss of generality assume that $b$ is minimal among all $b$ for which there exists $c$ such that either $(f,g,b,c)$ or $(g,f,b,c)$ satisfy the assumptions of this lemma. 
We can assume without loss of generality that $f(x)>x$ and $g(x)>x$ for $x\in(b,c)$ (otherwise, we can replace either or both with their inverses). Let $a$ be the largest fixed point of $f$ that is smaller than $b$. By minimality of $b$ we clearly have that $g(a)=a$. The stabiliser $St_a$ of $a$ in $\Pl$ is cyclic by Lemma~\ref{cyclic}. Therefore there exist $k$ and $l$ such that $f^k(x)=g^l(x)$ for $x\in(a,a+\varepsilon)$ for some $\varepsilon>0$. Take $(f',g')=(f,f^{-k}g^l)$. By our assumption, $f^k$ and $g^l$ are strictly greater than the identity function in $(b,c)$. As they are continuous and each fixes one end of the interval, by the intermediate value theorem there exists $b'\in(b,c)$ such that $f^k(b')=g^l(b')$. Then $(f',g')$ and $(b',c)$ satisfy the assumptions of this lemma. Furthermore, $f^{-k}g^l$ is the identity in a small enough right neighbourhood of $a$, which implies that there exists a point $s$ that satisfies the assumptions of Lemma~\ref{algone}. \end{proof} We now assume that the assumptions of Lemma~\ref{algone}, and therefore also the assumptions of Lemma~\ref{algtwo}, are not satisfied by any couple of elements in $H$. We will prove that $H$ is solvable. For any element $g\in\G$, its support $\supp(g)$ is a finite union of (not necessarily finite) open intervals. The intervals in the support of $h_i$ we denote $I^i_j=(a_i^j,b_i^j)$ for $j<r_i$ where $r_i$ is the number of intervals in the support of $h_i$. In terms of those intervals, the negation of Lemma~\ref{algtwo} means that for every $(i,j)$ and $(i',j')$, either $I^i_j\cap I^{i'}_{j'}=\emptyset$, or $I^i_j\subset I^{i'}_{j'}$, or $I^{i'}_{j'}\subset I^i_j$. We further check that if the inclusion is strict, it must be strict at both extremities. Specifically: \begin{lemma}\label{algbonus} Let $H$ be a subgroup of $\G$. Assume that there exist $a<b<b'\in\R\cup\{-\infty\}$ such that $f(a)=g(a)=a$, $f(b)=b$, $g(b')=b'$, $(a,b)\subset\supp(f)$ and $(a,b')\subset\supp(g)$ for some $f,g\in H$ (see Figure~\ref{alte}). Then the assumptions of Lemma~\ref{algtwo} are satisfied by some elements of the group. \end{lemma} \begin{proof} In a small enough right neighbourhood of $a$ there are no break points of $f$ and $g$. Let $c$ be a point in that neighbourhood. Clearly, $a<c<b$. Without loss of generality, we can assume that $f(x)>x$ for $x\in(a,b)$, and the same for $g$ (otherwise, we can replace them with their inverses). For some $k\in\N$, $g^{-k}(b)<c$. Denote $f'=g^{-k}fg^k$. Consider the elements $f'$ and $f^{-1}f'$. As the stabiliser of $a$ in $\Pl$ is cyclic (by Lemma~\ref{cyclic}), $f^{-1}f'(x)=x$ for $x\in(a,g^{-k}(c))$. However, $f^{-1}f'(x)=f^{-1}(x)$ for $x\in(g^{-k}(b),b)$, and in particular $f^{-1}f'(x)\neq x$ in that interval. Let $c'$ be the largest fixed point of $f^{-1}f'$ that is smaller than $g^{-k}(b)$. Consider now $f'$. It is a conjugate of $f$, therefore it is different from the identity in $(a,g^{-k}(b))$ and fixes $g^{-k}(b)<c$. Clearly, $c'<g^{-k}(b)$. Then $f',f^{-1}f'$ and $c',g^{-k}(b)$ satisfy the assumptions of Lemma~\ref{algtwo}. Observe that the same arguments can be used for two elements with supports $(a,b)$ and $(a',b)$ with $a\neq a'$. \end{proof} Consider the natural extension of the action of $\G$ to $\R\cup\{+\infty,-\infty\}$, which is that every element of $\G$ fixes both $-\infty$ and $+\infty$. We make the convention that $+\infty$ is considered to be a break point of $f\in\G$ if and only if for every $M\in\R$ there is $x>M$ such that $f(x)\neq x$ (and similarly for $-\infty$).
In other words, if the support of an element is equal to an interval $(a,b)$, then $a$ and $b$ are break points even if one or both are infinite. We now prove that $H$ is solvable by induction on the number of different orbits of $H$ on $\R\cup\{\pm\infty\}$ that contain non-trivial break points of elements of $H$. Remark that the number of orbits of $H$ that contain non-trivial break points of elements of $H$ is the same as the number of orbits that contain non-trivial break points of $h_1,\dots,h_k$. In particular, it is finite. Consider all maximal (for inclusion) intervals $I^i_j$ over all couples $(i,j)$. We denote them $I_1,I_2,\dots,I_n$. By our hypothesis, they do not intersect each other. We denote $h_i^j=h_i\restriction_{I_j}$ and $H_j=\langle h_1^j,h_2^j,\dots,h_k^j\rangle$ for every $j\leq n$. As the intervals $I_j$ do not intersect each other, $H$ is a subgroup of the Cartesian product of the $H_j$: \begin{equation}\label{maxint}H\leq\prod_{j=1}^n H_j.\end{equation} Moreover, for every $j$, the number of orbits with non-trivial break points of $H_j$ is not greater than that of $H$. Indeed, the orbits with break points of $H_j$ inside $I_j$ coincide with those of $H$, and it has only two other orbits containing break points, which are the singletons containing the end points of $I_j$. We just need to prove that $H$ has at least two other orbits containing non-trivial break points. If $I_j=I^{i'}_{j'}$, then the supremum and infimum of the support of $h_{i'}$ are break points, and by definition of $I_j$ their orbits by $H$ do not intersect the interior of $I_j$. The convention we chose ensures that our arguments are also correct if one or both of the end points are infinite. It is thus sufficient to prove the induction step for $H_j$ for every $j$. Therefore without loss of generality we can assume $n=1$. Remark that in this case the end points of $I_1$ are both non-trivial break points, and both clearly have trivial orbits. We denote $(a,b)=I=I_1$. Consider the germs $g_i\in St_a$ of $h_i$ in a right neighbourhood of $a$. As $St_a$ is cyclic, there exist $m_i\in\Z$ such that $\prod_i g_i^{m_i}$ generates a subgroup of $St_a$ that contains $g_i$ for all $i$. Specifically, the image in $\Z$ of this product is the greatest common divisor of the images in $\Z$ of the $g_i$. We denote $h=\prod_i h_i^{m_i}$ and let, for every $i$, $n_i$ satisfy $(\prod_i g_i^{m_i})^{n_i}=g_i$. For every $i\leq k$, we consider $h'_i=h_ih^{-n_i}$. Clearly, $H=\langle h,h'_1,h'_2,\dots,h'_k\rangle$, and there exists $\varepsilon$ such that for every $i$, $\supp(h'_i)\subset(a+\varepsilon,b-\varepsilon)$ (as the assumptions of Lemma~\ref{algbonus} are not satisfied by $h,h'_i$). Consider the set of elements $h^{-l}h'_ih^l$ for $i\leq k$, $l\in\Z$, and their supports. They are all elements of $H$. Furthermore, there is a power $n$ such that $h^n(a+\varepsilon)>b-\varepsilon$. Therefore, for every point $x\in(a,b)$, the number of elements of that set that contain $x$ in their support is finite. Considering the intervals that define those supports, we can therefore choose a maximal one (for the inclusion). Let $x_0$ be the lower bound of a maximal interval. By our assumption, $x_0$ is then not contained in the support of any of those elements, and neither is $x_l=h^l(x_0)$ for $l\in\Z$. We denote ${h'}_i^j=(h^jh'_ih^{-j})\restriction_{(x_0,x_1)}$. For $i\leq k$, let $J_i$ be the set of $j\in\Z$ such that ${h'}_i^j\neq Id$.
Then $H$ is a subgroup of \begin{equation}\label{wreath} \left\langle h,\bigcup_{i\leq k}\bigcup_{j\in J_i}{h'}_i^j\right\rangle\cong\langle h\rangle\wr\left\langle\bigcup_{i\leq k}\bigcup_{j\in J_i}{h'}_i^j\right\rangle. \end{equation} For a group $F$, $\Z\wr F$ denotes the wreath product of $\Z$ with $F$. It is a group whose elements are pairs $(n,f)$ with $n\in\Z$ and $f\in\prod_{k\in\Z}F$ with finite support. The group multiplication is defined as $(n,f)(n',f')=(n+n',(T^{n'}f)f')$, where $T^{n'}f(k)=f(k-n')$ and the product of configurations is taken pointwise. It is a well known property of wreath products that if $F$ is solvable, so is $\Z\wr F$. Denote $H'=\langle\bigcup_{i\leq k}\bigcup_{j\in J_i}{h'}_i^j\rangle$. The non-trivial break points and supports of the ${h'}_i^j$ are contained in $(x_0,x_1)$, and they fix that interval. Therefore the orbits that contain those break points are the same in relation to $\langle h,H'\rangle$ and to $H'$. On the other hand, $\langle h,H'\rangle$ and $H$ act the same way locally, which means that they have the same orbits. Those two facts imply that $H'$ has at least two fewer orbits containing non-trivial break points than $H$ (as it does not have non-trivial break points in the orbits of the end points of $I$). That group also does not contain elements that satisfy the assumptions of Lemma~\ref{algtwo}. Indeed, assume that there are two words in $\bigcup_{i\leq k}\bigcup_{j\in J_i}{h'}_i^j$ and $a,b\in\R$ that satisfy those assumptions. Their supports are also contained in $(x_0,x_1)$, therefore so are $a$ and $b$. Then the same words, with each ${h'}_i^j$ replaced by $h^jh'_ih^{-j}$, act identically inside $(a,b)$, and they satisfy the conditions of Lemma~\ref{algtwo}. However, these are elements of $H$, which contradicts our assumptions. This provides the induction step. The induction basis is the trivial group, which is solvable. Therefore $H$ is solvable. \end{proof} We can now prove the main result, that is, that for any subgroup $H$ of $H(\Z)$ which is not locally solvable and any measure $\mu$ on $H$ with finite first break moment $\mathbb{E}[Br]<\infty$ whose support generates $H$ as a semigroup, the Poisson boundary of $(H,\mu)$ is non-trivial. \begin{proof}[Proof of Theorem~\ref{main}]Fix $H$ and take $\mu$ on $H$ with finite first break moment and whose support generates $H$ as a semigroup. We distinguish two cases. Assume first that there exist $f,g\in H$ and $b,c,s\in\R$ that satisfy the assumptions of Lemma~\ref{algone}. By the result of the lemma, the Poisson boundary of $(H,\mu)$ is non-trivial. We now assume that no such $f,g,b,c,s$ exist and will prove that $H$ is locally solvable. Any finitely generated subgroup $\widetilde{H}$ of $H$ clearly also does not contain such $f$ and $g$ for any $b,c,s\in\R$. Furthermore, $H(\Z)$ is a subgroup of the piecewise $\Pl$ group $\G$ (see Definition~\ref{tildeg}), and thus $\widetilde{H}$ is a subgroup of $\G$. Therefore by Lemma~\ref{algthree} we obtain that $\widetilde{H}$ is solvable, which proves that $H$ is locally solvable.\end{proof} \section{A remark on the case of finite $1-\varepsilon$ moment}\label{last} Remark that in the proof of Lemma~\ref{algthree}, for a finitely generated subgroup that does not satisfy the assumptions of Lemma~\ref{algone} we obtained more than it being solvable. If the subgroup is also non-abelian, we have proven that it contains a wreath product of $\Z$ with another subgroup (see (\ref{wreath})).
In particular, it is not virtually nilpotent, which implies (as it is finitely generated) that there exists a measure on it with non-trivial boundary by a recent result of Frisch-Hartman-Tamuz-Vahidi Ferdowsi~\cite{choquet-deny}. Furthermore, it is known that on the wreath product $\Z\wr\Z$ it is possible to obtain a measure with finite $1-\varepsilon$ moment and non-trivial Poisson boundary for every $\varepsilon>0$ (see Lemma~\ref{wreathnontriv} and the discussion before and after it). The same arguments can be used in $\G$: \begin{lemma}\label{mineps} For every finitely generated subgroup $H=\langle h_1,\dots,h_k\rangle$ of $\G$ that is not abelian and every $\varepsilon>0$ there exists a symmetric non-degenerate measure $\mu$ on $H$ with non-trivial Poisson boundary such that $\int_H |g|^{1-\varepsilon}d\mu(g)<\infty$, where $|g|$ is the word length of $g$. \end{lemma} We recall that every measure on an abelian group has trivial Poisson boundary (see Blackwell~\cite{blackwell1955}, Choquet-Deny~\cite{ChoquetDeny}). \begin{proof} As there is always a non-degenerate symmetric measure with finite first moment, we can assume that the assumptions of Lemma~\ref{algone} are not satisfied in $H$. We will use the results on the structure of $H$ seen in the proof of Lemma~\ref{algthree}. It is shown there (see (\ref{maxint})) that $H$ is a subgroup of a Cartesian product $\prod_{j=1}^n H_j$. Specifically, there exist disjoint intervals $I_1,I_2,\dots,I_n$ such that the supports of elements of $H$ are included in the union of those intervals. Taking $h_i^j=h_i\restriction_{I_j}$ to be the restriction to one of those intervals (as defined in Definition~\ref{restr}), the group $H_j$ is then equal to $\langle h_1^j,h_2^j,\dots,h_k^j\rangle$. For any $j$, consider the composition of the inclusion of $H$ in $\prod_{j=1}^n H_j$ with the projection of $\prod_{j=1}^n H_j$ onto $H_j$. Then $H_j$ is the quotient of $H$ by the kernel of this composition, which is equal to $\{h\in H:h\restriction_{I_j}=\mathrm{Id}\}$. We can therefore separately define measures on $H_j$ and on the kernel, and the Poisson boundary of their sum would have the Poisson boundary of the measure on $H_j$ as a quotient. In particular, it suffices to show that for some $j$ we can construct a measure on $H_j$ with non-trivial boundary satisfying the conditions of the lemma. As $H$ is non-abelian, so is at least one $H_j$. Without loss of generality, let that be $H_1$. In the proof of Lemma~\ref{algthree} we have shown (see (\ref{wreath})) that in $H_1$ there are elements $h^1$ and ${h^1}'_j$ for $j=1,2,\dots,k$ such that $H_1=\langle{h^1},{h^1}'_1,{h^1}'_2,\dots,{h^1}'_k\rangle$, and that $H_1$ is isomorphic to a subgroup of the wreath product of $\langle h^1\rangle$ with a group $H'$ defined by the rest of the elements. Remark that $H_1$ not being abelian implies that $H'$ is not trivial. Furthermore, by considering the homomorphism of $H_1$ into $\Z\wr H'$, we see that the image of $h^1$ is the generator $(1,0)$ of the active group, while for every $j$, the image of ${h^1}'_j$ is of the form $(0,f_j)$ where $f_j$ has finite support.
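To make the embedding concrete, here is a small worked computation (ours) with the multiplication law recalled above. Writing $\mathbf{0}$ for the trivial configuration, the image of $h^1$ is $(1,\mathbf{0})$, and conjugation by it translates lamp configurations: $$(1,\mathbf{0})^{-1}(0,f_j)(1,\mathbf{0})=(-1,f_j)(1,\mathbf{0})=(0,T^{1}f_j),\qquad T^{1}f_j(k)=f_j(k-1),$$ which mirrors the elements $h^{-l}h'_ih^{l}$ with translated supports used in the proof of Lemma~\ref{algthree}.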
The following result is essentially due to Kaimanovich and Vershik~\cite[Proposition~6.1]{kaimpoisson}, \cite[Theorem~1.3]{Kai83}, and has been studied in a more general context by Bartholdi and Erschler~\cite{Bartholdi2017}: \begin{lemma}\label{wreathnontriv} Consider the wreath product $\Z\wr H'$ where $H'$ is not trivial, and let $\mu$ be a measure on it such that the projection of $\mu$ on $\Z$ gives a transient walk and the projection of $\mu$ on ${H'}^\Z$ is finitary and non-trivial. Then the Poisson boundary of $\mu$ is not trivial. \end{lemma} In the article of Kaimanovich and Vershik, it is assumed that the measure is finitary and that the acting group is $\Z^k$ for $k\geq3$, which ensures transience. The proof remains unchanged with our assumptions. Remark that those results have also been generalised to the case of a measure with finite first moment that is transient on the active group, see Kaimanovich~\cite[Theorem~3.3]{Kaimanovich1991},\cite[Theorem~3.6.6]{Kaimanovich2007PoissonBO}, Erschler~\cite[Lemma~1.1]{erschler2011}. \begin{proof} Take a random walk $(g_n)_{n\in\N}$ on $\Z\wr H'$ with law $\mu$. Let $p$ be the projection of the wreath product onto the factor of ${H'}^\Z$ at coordinate $0$. As the projected walk on the active group $\Z$ is transient, it visits $0$ only finitely often, so $p(g_n)$ stabilises almost surely; by the assumptions of the lemma, its limit is not almost surely the same. This provides a non-trivial quotient of the Poisson boundary of $\mu$. \end{proof} All that is left is constructing a measure that verifies the assumptions of Lemma~\ref{wreathnontriv}. Consider a symmetric measure $\mu_1$ on $\langle h^1\rangle$ that has finite $1-\varepsilon$ moment and is transient. Let $\mu_2$ be defined by being symmetric and by $\mu_2({h^1}'_j)=\frac{1}{2k}$ for every $j$. Then $\mu=\frac{1}{2}(\mu_1+\mu_2)$ is a symmetric non-degenerate measure on $H_1$ with finite $1-\varepsilon$ moment and non-trivial Poisson boundary. \end{proof} \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The brightest cluster galaxy (BCG) is typically a giant, red elliptical or cD galaxy, located near the centre of the gravitational potential. It is likely that a rich history of galaxy-galaxy interactions and mergers is responsible for the unique morphology of such galaxies. This is supported indirectly by several pieces of evidence, including: the luminosity of the BCG is correlated with the cluster mass \citep{sco57,lin04} and X-ray luminosity \citep{Hud97}; BCGs in the most X-ray luminous clusters are larger and have surface brightness profiles which are less steep than their low X-ray luminosity counterparts \citep{Bro05}; and the velocity dispersion of BCGs rises less steeply with luminosity than for other bright galaxies \citep{von06}. All this indirect evidence for a rich merger history is supported by high resolution imaging of BCGs obtained with the {\it Hubble Space Telescope}, which has revealed that the cores of these galaxies can be complex, often showing multiple nuclei and prominent dust signatures \citep{lai03}. However, the evolutionary history of these galaxies is still not completely understood, and work on cooling flow (CF) clusters has hinted at another possible mechanism for adding to the stellar mass of the BCG. The original cooling flow hypothesis is that hot cluster X-ray gas cools and condenses out of the intracluster medium (ICM) into the cluster's potential well, forming molecular clouds and stars \citep{fab94}. This drop-out occurs at the centre of the cooling flow, within the cooling radius, i.e. onto the BCG. Cooling flow clusters are common in the local universe \citep[making up about 50\% of the population in an X-ray flux-limited sample;][]{per98,che07}, and cD galaxies are often found at the centre of these systems. Because of this, a link between the cooling X-ray gas and recent star formation in the BCG has been discussed for many years \citep{fab94}. Convincing observations which support this idea have been presented: a blue and UV-colour excess \citep{mcn96,mcn04,hic05}, molecular gas \citep{edg02,jaf01,sal03} and H$\alpha$ emission \citep{cra99,don00,cra05} have all been seen in cooling flow cluster BCGs. However, although the morphology of the H$\alpha$ emission is diffuse and filamentary, indicating star formation in some CF BCGs, in others it is very compact and more characteristic of AGN dominated emission \citep{don00,edw07,hat07}. As well, \citet{von06} recently showed that optical emission lines in BCGs predominantly arise from LINER emission, rather than normal star formation. More recent X-ray satellite measurements from {\it Chandra} and {\it XMM} have shown that the gas does not cool directly from the hot X-ray phase through to the cool molecular gas phase \citep{boh02}, but rather that a large amount of the cooling gas is being reheated before condensing out of the ICM. The current paradigm is that AGN activity in the BCG is reheating the cooling X-ray gas, which implies a more complicated feedback process between the cooling gas and the central galaxy \citep[e.g.][]{piz05}. This leads to revised predicted mass deposition rates that are now in reasonable agreement with the observed values \citep{boh02,piz05}. The observed molecular and ionic gas may be attributed to a small amount that has cooled from the cooling flow; alternatively, the H$\alpha$ may be excited by the AGN itself.
Detailed studies of star formation indicators in galaxy groups and clusters in cooling flows and non-cooling flows, discriminating between those with and without AGN activity, are required in order to analyze the relative importance of the different processes. A correlation between optical line emission in the BCG and cluster properties has been explored by several authors, most notably \citet{cra99}, who found that 27\% of BCGs have optical line emission, and that the projected distance from the BCG to the X-ray centre is less for line emitting galaxies than for non-emitting galaxies. Recently, \citet{von06} and \citet{bes06} have explored the properties of BCGs in the SDSS, using the C4 cluster catalogue \citep{mil05}. These authors find that radio-loud AGN activity is more frequent in BCGs than in other galaxies of the same mass, but that this frequency does not depend strongly on cluster velocity dispersion. On the other hand, in their study of radio-loud properties of an X-ray selected cluster sample, \citet{lin06} find the overall radio-loud fraction to be 30\% in BCGs, and that the fraction is higher in more massive clusters. Importantly, \citet{von06} find that many of these radio-loud galaxies would not necessarily be identified as AGN from their optical emission lines and, in fact, that optical AGN activity appears to be {\it less} frequent among BCGs than other cluster galaxies of similar mass. The interpretation is complicated by the fact that the radio-selected galaxy sample, though restricted to red galaxies, could be contaminated by galaxies in which the low luminosity radio emission arises from star formation, rather than AGN activity. In this paper, we explore optical line emission in BCGs with respect to properties of the galaxy and the host cluster, using two large, homogeneous datasets. One sample is taken from the X-ray selected, National Optical Astronomy Observatory Fundamental Plane Survey (NFPS), for which the X-ray properties are known for all clusters. For many of these clusters, we are able to identify those with short cooling times (CF clusters) based on {\it ROSAT}, {\it Chandra}, or {\it XMM-Newton} observations. We complement this sample with optically-selected clusters drawn from the Sloan Digital Sky Survey Data Release 3 (SDSS DR3), which is not biased toward X-ray luminous clusters, and is therefore more representative of the cluster population. In addition, the greater spectral coverage of the SDSS allows us to use emission line ratios to identify whether the emission arises predominantly from composite HII region and LINER activity, or from LINER activity alone (pure HII-region, and Seyfert-like emission are both rare). The paper is organized as follows. In section \ref{data}, we introduce our galaxy samples and selection criteria. In section \ref{results}, we report our results. For the X-ray selected NFPS we compare the frequency of H$\beta$ emission in BCGs as a function of BCG magnitude and distance to the cluster centre. Similar results are found for the SDSS sample, based on the H$\alpha$ emission line. However, we find that there are differences between the BCGs in the two samples, which we can attribute to the nature of the X-ray emitting gas. In section \ref{discussion}, we consider the impact of our results on various galaxy and cluster formation hypotheses, and summarize our results. We conclude in section \ref{conclusion}. 
Unless otherwise stated, our analysis assumes $\Omega_{\mathrm m}=0.3$ for the matter density parameter, $\Omega_{\Lambda}=0.7$ for the cosmological constant, and $H_0=100$~km/s/Mpc for the Hubble parameter. $L_{X}$ refers to the bolometric X-ray luminosity throughout. \section{Data and Sample Selection}\label{data} For both NFPS and SDSS, our goal is to identify the BCG and a similar sample of luminous ``control'' galaxies located in the inner regions of the cluster. We first discuss the sample selection of the NFPS. \subsection{NOAO Fundamental Plane Survey}\label{nfp} The NFPS is an all-sky study of 93 X-ray selected rich clusters with redshifts between 0.010 and 0.067 \citep{smi04}. The goals of the project are to measure cosmic flows on scales of 100 $h^{-1}$ Mpc and to build a large homogeneous sample with which to investigate physical effects and environmental influences on early-type galaxy evolution and formation \citep{smi04}. The spectroscopic observations are made through a fiber of 2" diameter and are limited to red sequence galaxies; galaxies more than 0.2 magnitudes bluer than the red sequence were not generally observed spectroscopically. There are spectra for 5388 galaxies with a wavelength coverage between 4000 and 6100~\AA~and a resolution of $\sim$3~\AA~\citep{smi04}. \subsubsection{Cluster, BCG, and Control Sample Definitions}\label{nfpss} In order to respect the completeness of the NFPS cluster sample, we exclude the 13 clusters that were observed serendipitously and that did not meet the original $L_{X}$ limit of $10^{42.6}$ erg/s. For each cluster, the redshifts were used to calculate the cluster velocity dispersion and the radius at which the mean density interior to the cluster is 200 times the critical density ($\sigma_{cl}$ and $r_{200}$, respectively). We used the prescription $r_{200}=\sqrt{3}\,\sigma_{cl}/(1000~{\rm km/s})$~Mpc, as derived in \citet{car97}. Note that $r_{200}$ is typically of order 1.5 $h^{-1}$ Mpc, much larger than the cooling radius of a typical cooling-flow cluster (about 200 kpc). The centre of the cluster is taken to be at the peak of the X-ray emission \citep{ebe96,ebe00}. The galaxies are then assigned to the clusters based on a radial and a velocity weighting as in \citet{smi04}. For this analysis, we are interested in only the bright galaxies in the central regions of the cluster and so include a magnitude limit of $M_{K}<-24$, based on K-band total magnitudes obtained from 2MASS catalogues \citep{skr06}. This is about half a magnitude brighter than the characteristic K-band magnitude of $M_{K*} = -23.55$ \citep[][for $H_0 = 100$ km/s/Mpc]{lin04b}. Since we are interested only in galaxies occupying a similar environment to the BCG, i.e. in the central regions of the cluster, we consider only galaxies within $0.5r_{200}$, and with velocity differences with respect to the cluster mean velocity less than twice the cluster velocity dispersion. The BCG is then defined as the first rank cluster galaxy using the K-band magnitudes. In twenty cases, the BCG was not observed spectroscopically due to constraints in the fiber positioning, and these clusters have been excluded from our analysis. Our final sample consists of 60 clusters with BCGs, as summarized in Table \ref{restab} and listed in Table \ref{bcgtab}. Our control sample consists of the 159 other bright ($M_{K}<-24$) galaxies within the same radial and velocity cuts described above. The BCGs are, of course, excluded from the control sample.
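The selection just described is straightforward to express in code. The following sketch (with invented array and function names, not taken from the survey pipeline) summarises the cuts and the BCG definition under the stated $H_0=100$~km/s/Mpc convention:

\begin{verbatim}
import numpy as np

def select_bcg_and_members(m_k, r_proj_mpc, v, v_cl, sigma_cl):
    # Bright galaxies (M_K < -24) within 0.5*r_200 of the X-ray
    # centre and within 2 sigma_cl of the cluster mean velocity;
    # the BCG is the brightest member in the K band, and the
    # remaining members form the control sample.
    r200 = np.sqrt(3.0) * sigma_cl / 1000.0   # Mpc
    members = ((m_k < -24.0)
               & (r_proj_mpc < 0.5 * r200)
               & (np.abs(v - v_cl) < 2.0 * sigma_cl))
    if not members.any():
        return None, members
    idx = np.where(members)[0]
    bcg = idx[np.argmin(m_k[idx])]
    return bcg, members
\end{verbatim}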
For the NFPS, we use the stellar absorption-corrected H$\beta$ emission as an indicator of star formation activity, since H$\alpha$ is not generally available in these spectra. Although H$\beta$ emission is relatively weak, the high signal-to-noise ratio of these spectra allows us to measure its strength reliably after correcting for underlying stellar absorption (see the errors quoted in Table \ref{bcgtab}). The observed galaxy spectra are divided by the best-fit absorption templates from the stellar population synthesis models of \citet{vaz99}, which have been redshifted and broadened to match the velocity dispersion of the observed galaxy. Subsequently, the H$\beta$ equivalent width is measured directly, without assuming a particular line profile, from the ratio of the observed spectrum to the best-fit model \citep{nel05}. A thorough discussion of the emission line measurements can be found in \citet{nel05}. For two of the NFPS BCGs, in Abell 780 and in Abell 1795, the nebular emission is strong enough that our standard methods for obtaining reliable velocity dispersions, and hence stellar absorption corrections, fail. For galaxies of similar magnitude, the typical stellar absorption at H$\beta$ is $\sim$1.5~\AA, with an uncertainty of 0.3~\AA. This error dominates our uncertainty in the total equivalent width of emission for these two special cases. For the BCGs in Abell 780 and Abell 1795, the H$\beta$ equivalent widths are 7.8~\AA~and 7.2~\AA, respectively. However, for clarity of presentation, these points are plotted as lower limits of 3.3~\AA~and 3.7~\AA~in Fig. \ref{relkcont} and Fig. \ref{hbrcont}. We define emission-line galaxies to be those with an equivalent width $>$ 0.5 \AA. Since the H$\alpha$ and [N{\sc ii}] lines are generally unavailable, we are unable to use \citet[hereafter BPT]{bpt81} diagrams to reliably distinguish emission due to star formation from that arising from AGN activity. \subsubsection{Cooling Flow definition} In general, we designate an NFPS cluster as ``cooling flow'' (CF) or ``non-cooling flow'' (non-CF) using mass deposition rates and X-ray cooling times from published catalogues \citep{per98,whi00,all01,bir04}. This is a somewhat subjective classification, complicated by the fact that not only are mass deposition rates calculated from {\it ROSAT} observations typically 2-10 times higher than those calculated from {\it Chandra} observations \citep{boh02}, but also higher resolution spectra from {\it XMM-Newton} are not well matched by an isobaric cooling model \citep{pet03}; thus the mass deposition rate may not be an exact indicator of a cooling-flow cluster. Therefore, whenever possible, we prefer to use recent cooling-flow designations based on the presence of a central temperature gradient in {\it XMM-Newton} or {\it Chandra} observations. For the rest, we are left with using the mass deposition rate based on {\it ROSAT} data as an indicator. For the 11 cases where observations are available from the {\it Chandra} or {\it XMM-Newton} satellites, we define a CF cluster to be one with a mass deposition rate $\dot{M} > 0$; otherwise, we require $\dot{M}>100\,M_{\sun}\,\mbox{yr}^{-1}$. Within this framework we have 14 CF clusters and 19 non-CF clusters. The CF status of another 27 clusters is unknown. Clearly this is not an unassailable definition; however, it is likely that most, if not all, of the clusters we classify as CF clusters really do have short central cooling times.
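In summary, the designation we adopt amounts to a simple decision rule; the following sketch is illustrative only (the instrument flag is a hypothetical input, and the thresholds are those quoted above).
\begin{verbatim}
def cf_status(mdot, instrument):
    """Cooling-flow designation adopted in the text: any positive mass
    deposition rate counts for Chandra/XMM-Newton-based values, while
    Mdot > 100 Msun/yr is required for ROSAT-based values.
    Returns None when the rate is unavailable (CF status unknown)."""
    if mdot is None:
        return None
    if instrument in ("Chandra", "XMM-Newton"):
        return mdot > 0.0
    return mdot > 100.0   # ROSAT-based rates are systematically higher
\end{verbatim}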
Some clusters with low mass deposition rates will undoubtedly fall in our non-CF sample. As we show below, however, our results are not sensitive to this definition. We discuss this point further in Section \ref{sec-cfdef}, where we also consider a continuous method of defining a CF cluster, based on the excess of the observed X-ray luminosity over predicted values \citep{mcc04}. \subsection{Data from the Sloan Digital Sky Survey}\label{sec-data} Our second sample of galaxy clusters is derived from the C4 catalogue \citep{mil05}, based on the third data release (DR3) of the SDSS. This release covers 5282 square degrees in imaging, and 4188 square degrees in spectroscopy \citep{aba05}. Imaging was taken in five optical bands, u', g', r', i' and z', with a median spatial resolution of 1.4" in r'. Spectra are observed through an aperture of 3" diameter, cover the wavelength range from 3800 to 9200 \AA, and have a spectral resolution of $\sim$3~\AA~in r'. Redshifts measured from these spectra are accurate to $\sim$ 30~km/s \citep{aba05}. The H$\alpha$ line strengths are measured by fitting Gaussians to the line profile in a standard pipeline \citep{sto02}. For $W_\circ(H\alpha)>5$~\AA, the equivalent width uncertainty is less than 20\%. For weaker lines, $W_\circ(H\alpha)<5$~\AA, the errors are known to be large \citep{gom03}. We have made no correction to the emission-line strengths for underlying stellar absorption. For our purposes this is safe to neglect because, whereas even a modest star formation rate generates considerable H$\alpha$ emission, the stellar absorption does not vary by more than $\sim1$~\AA~for moderately old populations. As described in \citet{mil05}, clusters and groups are identified as overdensities in the multi-dimensional space spanned by position, redshift, and five-colour photometry. There are 1106 clusters identified in the SDSS DR3 using this algorithm. We further select objects with redshifts $z<0.10$, to minimize the incompleteness of the sample. In order to reduce the number of clusters with uncertain velocity dispersions, we exclude clusters flagged as having significant substructure. Specifically, we include only clusters flagged as SUB=0, i.e.\ those for which the ratio of the standard deviation of the cluster velocity dispersion profile to the mean cluster velocity dispersion is less than 15 \citep{mil05}. This reduces the number of clusters in our sample to 825. \subsubsection{Cluster, BCG, and Control Definitions}\label{selectioncriteria} For the SDSS we define the BCGs and clusters in as close a manner as possible to the NFPS case. Again, the BCG is the brightest galaxy in the K band (and with $M_{K} <-24$) within half of r$_{200}$ of the geometric centre of the cluster, and within two times the cluster velocity dispersion. However, the geometric centres we use differ from those in the C4 catalogue. We start with the C4 catalogue geometric centres, measured using the luminosity-weighted average position of all the galaxies within 1~Mpc and four times the velocity dispersion. As this definition encompasses such a large area, multiple substructures along the line of sight can heavily influence the position of the centre; for example, the geometric centre may be placed in between two obvious subclumps. Thus, we iteratively recalculate a luminosity-weighted centre using only galaxies within two times the cluster velocity dispersion, and with magnitude $M_{K} <-24$.
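Schematically, the recentring proceeds as in the following Python sketch; the array names are hypothetical, and the aperture used to re-evaluate membership around the current centre is a simplifying assumption of the sketch, not a quoted survey parameter.
\begin{verbatim}
import numpy as np

def recentre(ra, dec, lum_k, m_k, dv, sigma_cl, r_ap=0.5, n_iter=10):
    """Iterative luminosity-weighted centre. Membership requires
    M_K < -24 and |dv| < 2 sigma_cl, as in the text; the aperture
    r_ap (degrees) around the current centre is an assumption."""
    sel0 = (m_k < -24.0) & (np.abs(dv) < 2.0 * sigma_cl)
    ra0 = np.average(ra[sel0], weights=lum_k[sel0])
    dec0 = np.average(dec[sel0], weights=lum_k[sel0])
    for _ in range(n_iter):
        # projected offset from the current centre, in degrees
        r = np.hypot((ra - ra0) * np.cos(np.radians(dec0)), dec - dec0)
        sel = sel0 & (r < r_ap)
        ra0 = np.average(ra[sel], weights=lum_k[sel])
        dec0 = np.average(dec[sel], weights=lum_k[sel])
    return ra0, dec0
\end{verbatim}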
Due to the incomplete spectral coverage of the central galaxies, there are 154 cases in which the brightest cluster galaxy was not observed spectroscopically. We exclude these clusters from the sample. To ensure that this does not introduce an important bias, we verified that the subset of BCGs present in the photometric catalogue but not in the spectroscopic catalogue has a ($u'-r'$) colour distribution equivalent to that of our final, spectroscopic sample. Finally, we also remove those clusters whose geometric centres are within 15' of a survey boundary, as well as those with measured velocity dispersions greater than 1200~km/s. The latter restriction is made because such high values are usually the result of significant contamination from line-of-sight substructure. These selections leave us with a final sample of 328 BCGs. Our control sample is built from the other bright galaxies near the centres of the clusters, as in the NFPS case. In order to increase the size of our control sample, we here include clusters where the brightest central galaxy was not measured spectroscopically. There are 526 control galaxies in 353 groups and clusters with velocity dispersions less than 1200~km/s. We note that the number of control galaxies per cluster is significantly larger in the X-ray luminosity selected NFPS clusters, probably because they are richer than the optically-selected SDSS clusters. Because the centres of the NFPS clusters are based on the X-ray centroid, whereas for the SDSS the geometric centre is used, it is important to compare the two definitions. In order to find X-ray centres for the SDSS clusters, we have matched them to X-ray cluster catalogues including: NORAS \citep{boh00}, REFLEX \citep{boh01}, BAX \citep{sad04}, XBACS \citep{ebe96}, RASSCALS \citep{mah99}, as well as \citet{pop04,mul98} and \citet{hor01}. Most of these catalogues are based on {\it ROSAT} observations, where the flux limit is high. This restricts our sample of SDSS clusters with X-ray detections to the small subset of massive clusters at $z \lesssim 0.03$. There are 35 X-ray cluster matches, i.e.\ cases where the X-ray centre is within half of r$_{200}$ of one of our 328 SDSS clusters. Fig. \ref{drsig} shows that for most of the matched clusters there is good agreement between the X-ray and geometric centres. However, some cases exhibit differences of up to 0.5~Mpc, especially for lower mass clusters. It is unlikely that this is caused by uncertainty in the X-ray centres, as the centres of the NORAS, BCS, and XBACS samples were found by determining the two-dimensional centre of mass using the Voronoi Tessellation and Percolation method \citep{ebe93}, and are generally accurate to about 1' \citep{ebe98}, corresponding to $\sim 90$~kpc at $z\sim 0.08$. More likely, the geometric centres do not trace the gravitational potential of the cluster as well as the X-ray centres do. We note that for many of the cases in which the centres are discrepant by $>200$~kpc, the geometric centre appears to be contaminated by in-falling groups, which is not surprising as it is measured using a typical line-of-sight projection of $\sim 3$~Mpc (for $\sigma=500$~km/s). \subsubsection{Emission Line diagnostics}\label{eline_sdss} Optical emission-line galaxies in the SDSS are identified from the H$\alpha$ line, which is the line in our wavelength range that is most sensitive to star formation activity.
To ensure a fair comparison with the NFPS, we must choose a threshold in H$\alpha$ equivalent width that is comparable to the H$\beta$ limit used in that survey. Recall that the lines are measured using different techniques, from spectra of different resolution and signal-to-noise, and obtained with different fibres; furthermore, the NFPS H$\beta$ measurements are corrected for underlying stellar absorption, while the SDSS H$\alpha$ lines are not. Therefore we opt for an empirical ``calibration'' between the two, by plotting the NFPS H$\beta$ equivalent widths against the SDSS H$\alpha$ equivalent widths for galaxies that appear in both surveys (Fig. \ref{cfewzoom}). Although there is considerable scatter, there is a strong correlation between the two lines, and we find that the H$\alpha$ index is about four times larger than the H$\beta$ index; encouragingly, this is comparable to the factor of 4.5 derived by comparing the H$\alpha$/H$\beta$ ratio for the subset of galaxies in the NFPS for which measurements of both emission lines are available \citep{nel05}. Thus, an H$\alpha$ equivalent width cut of 2~\AA\ is comparable to an H$\beta$ equivalent width of 0.5 \AA, and the fraction of galaxies in our samples above either of these thresholds is similar. Note that the points at $W_\circ(H\alpha)=0$ are non-detections (plotted with the average uncertainty of 0.15~\AA), and those with $W_\circ(H\alpha)<0$ are detected in absorption. We have experimented with SDSS H$\alpha$ equivalent width cuts of 2\ensuremath{\pm}1~\AA~and find that our final results do not change significantly. The good correlation also gives us additional confidence in the template correction used to correct the H$\beta$ equivalent widths for stellar absorption, which is relatively much more important here than for H$\alpha$ (the average stellar absorption strength is $\sim1$~\AA, with variations at the $\sim0.15$~\AA~level). Due to the proximity of the galaxies in our sample and the finite fibre size through which they are observed, the amount of extended line emission could be underestimated. However, as we do not know whether the emission is extended, we have not explicitly accounted for the finite fibre size, nor for the difference in fibre diameters between the two surveys. Here again, we argue that the effect on the line strength measurements is calibrated by our use of an empirical relation. \begin{figure} \centering \epsfysize=3in \epsfbox{dRsig.ps} \caption[Difference in X-ray and Geometric Centres]{ Cluster velocity dispersion as a function of the difference between the X-ray and geometric centres for the 35 SDSS BCGs that have available X-ray positions. The clusters with large discrepancies between the two centres generally have geometric centres strongly influenced by in-falling groups. \label{drsig}} \end{figure} \begin{figure} \centering \epsfysize=3in \epsfbox{cfzoom.ps} \caption[Relationship of H$\alpha$ and H$\beta$ W$_{o}$]{ The absorption-corrected NFPS H$\beta$ equivalent width versus the SDSS H$\alpha$ equivalent width (uncorrected for absorption) for galaxies present in both surveys. The points with W$_{o}$(H$\alpha$)~$= 0$ are non-detections (plotted with the average H$\alpha$ error of 0.15~\AA), and those with W$_{o}$(H$\alpha$) $< 0$ are detected in absorption. We identify line-emitting NFPS galaxies as those with W$_{o}$(H$\beta$) $\ge$ 0.5~\AA. Correspondingly, we use a value of W$_{o}$(H$\alpha$) $\ge$ 2~\AA\ for the SDSS galaxies.
The best-fit lines are constrained to pass through (0,0), are fit only to galaxies with W$_{o}$(H$\alpha$) $\ge$ 2~\AA, and ignore an outlier at W$_{o}$(H$\alpha$) $\approx$ 40~\AA. The two lines correspond to fitting W$_{o}$(H$\beta$) as a function of W$_{o}$(H$\alpha$) and vice versa. \label{cfewzoom}} \end{figure} H$\alpha$ line emission may arise either from ionization by hot stars or from ionization by AGN activity. We use the AGN classification taken from the emission-line analysis discussed in \citet{kau03}, based on the [N{\sc ii}]/H$\alpha$ and [O{\sc iii}]/H$\beta$ diagnostic ratios, which separates AGN, star-forming, and composite (intermediate between star-forming and AGN) regions on BPT diagrams. Table \ref{restab} shows that, in our SDSS sample, $\sim 65$\% of galaxies with emission have line ratios consistent with AGN or composite emission, and this fraction may be somewhat higher for the BCG population relative to the control galaxies. Using \citet{ho97} to separate Seyferts from LINERs, along with the \citet{kau03} definitions, we find that $\sim33$\% of emitting SDSS BCGs can be reliably classified as LINERs and $\sim27$\% as composite; Seyfert-like emission is negligible ($\sim3$\%). If we assume that the ionizing source for the HII regions is stellar, and that all of the H$\alpha$ emission falls within the fibre diameter, then the typical H$\alpha$ luminosities of our emitting BCGs correspond to star formation rates of $\sim0.5$ to $\sim1.6$~$M_{\sun}\,\mbox{yr}^{-1}$, adopting the \citet{ken98} calibration SFR$\,=7.9\times10^{-42}\,L_{\mathrm{H}\alpha}$, with $L_{\mathrm{H}\alpha}$ in erg/s and the SFR in $M_{\sun}\,\mbox{yr}^{-1}$. \subsubsection{Galaxy Colour} Fig. \ref{hacolb} shows the distribution of H$\alpha$ equivalent width as a function of ($u'-r'$) colour. Following \citet{str01}, we use the value of 2.2 for the ($u'-r'$) colour separation between blue and red galaxies. As the NFPS galaxies are red-selected, we include a corresponding colour cut of ($u'-r'$)~$>$~2.2 for the SDSS galaxies when comparing directly to the NFPS results (as in Section \ref{nfpresults}). This colour cut excludes 10 of the 328 SDSS BCGs (only one with X-ray data) as well as 53 control galaxies. As seen in Fig. \ref{hacolb}, most BCGs are redder than this cut whether or not they show optical emission. Thus, if the line emission is due to star formation, it does not dominate the global colours of these BCGs, which are presumably quite old. Consequently, results based on BCGs in the NFPS are not likely to be affected by the colour cut in that sample. For the control sample, most of the emission-line galaxies are bluer than our cut. Therefore we expect the fraction of control galaxies with emission, as presented in Section~\ref{results}, to be biased low relative to the BCGs. This is evident from Table~\ref{restab}: in the red-selected SDSS sample, the overall fraction of emission-line galaxies is similar ($\sim 11$\%) for both control galaxies and BCGs, while for the unrestricted sample the fraction of control galaxies with emission is $\sim 18$\%. This difference is not large enough to affect any of our conclusions in Section~\ref{results}, however. \begin{figure} \epsfysize=3in \epsfbox{chha_colouragn.ps} \caption{The logarithm of W$_{o}$(H$\alpha$) as a function of the ($u'-r'$) colour of the galaxies in the SDSS. The crosses are the control sample, and the filled circles are the BCGs. The colour break of 2.2 is shown as the vertical dotted line, and the horizontal dashed line separates the emitting galaxies from the quiescent ones.
The galaxies at Log[W$_{o}$(H$\alpha$)] = -1.7 are those with H$\alpha$ absorption.\label{hacolb}} \end{figure} \section{Results}\label{results} \label{nfpresults} \begin{table*} \centering \caption{ Summary of results. The first column gives the survey, NFPS or SDSS. SDSS (RED) includes only the red-selected SDSS galaxies, and SDSS (X-ray) only those with X-ray counterparts. Column 2 indicates whether the results are for the BCGs or the controls. Column 3 gives the total number of galaxies in each sample. The number of these which are emitting is shown in column 4, and column 5 gives the fraction of emitting galaxies. Column 6 lists the number of strongly emitting (W$_{o}$(H$\alpha$)$>2$~\AA) galaxies in each sample whose emission line ratios are characteristic of AGN activity (usually LINER). Column 7 gives the number of galaxies in our sample known to belong to a CF cluster. Column 8 gives the number of these galaxies in CF clusters which are also emitting, and the final column gives the fraction of emitting galaxies in CF clusters.} \label{restab} \begin{tabular}{llccccccc} \hline \hline Survey & Sample & Total & Emitting & Emitting & AGN or Comp. & Known & Emitting & Emitting\\ & & & & Fraction(\%) & W$_{o}$(H$\alpha$)$>2$~\AA & CF & CF & Fraction(\%)\\ \hline NFPS & BCGs & 60 & 12 & $20\pm6$ & N/A & 14 & 10 & $71^{+9}_{-14}$\\ & Controls & 159 & 15 & $9\pm2$ & N/A & 36 & 5 & $14\pm6$\\ SDSS & BCGs & 328 & 42 & $13\pm2$ & 31 & N/A & N/A & N/A\\ & Controls & 526 & 94 & $18\pm2$ & 57 & N/A & N/A & N/A\\ SDSS (RED) & BCGs & 318 & 35 & $11\pm2$ & 25 & N/A & N/A & N/A\\ & Controls & 446 & 51 & $11\pm2$ & 39 & N/A & N/A & N/A\\ SDSS (X-ray) & BCGs & 34 & 6 & $18^{+8}_{-5}$ & 3 & N/A & N/A & N/A\\ \hline \end{tabular} \end{table*} In this section we examine the line emission of the galaxies in our cluster samples as a function of X-ray luminosity, K-band magnitude, and distance from the centre of the cluster. The numbers of emission-line galaxies in both surveys are given in column 4 of Table \ref{restab}. There is a higher fraction of emitting galaxies among the NFPS BCGs: $20\pm6$\% are emitting, compared to only $9\pm2$\% of the controls. If we consider only those BCGs identified with a CF cluster, we find an even higher fraction showing emission, $71^{+9}_{-14}$\%, while the corresponding control fraction ($14\pm6$\%) is essentially unchanged within the uncertainties. On the other hand, only $11\pm2$\% of the red BCGs selected from the SDSS sample show emission, comparable to the control population ($11\pm2$\%). In this section we explore these trends, and the differences between the two samples, in more detail. Where sample sizes are small, we use errors derived from the binomial posterior probability distribution. \subsection{Dependence on the Presence of a Cooling Flow}\label{sec-cfdef} The most prominent result from the NFPS clusters is that the presence of a cooling flow is highly correlated with the presence of emission lines in the BCG. Fig. \ref{withmodel} shows the bolometric X-ray luminosity against the cluster velocity dispersion (a proxy for dynamical mass). Most of the cooling-flow clusters have larger X-ray luminosities for their mass, and it is these clusters that host a BCG with line emission. Therefore, in the rest of the NFPS analysis we separate our sample into CF and non-CF subsets.
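The asymmetric uncertainties quoted above (e.g.\ the $71^{+9}_{-14}$\% for CF BCGs) follow from this binomial posterior; a minimal sketch, assuming a flat prior on the fraction, is given below.
\begin{verbatim}
from scipy.stats import beta

def emitting_fraction(k, n, cl=0.68):
    """Binomial fraction k/n with an equal-tailed credible interval from
    the posterior Beta(k+1, n-k+1), i.e. assuming a flat prior."""
    post = beta(k + 1, n - k + 1)
    return k / n, post.ppf(0.5 * (1.0 - cl)), post.ppf(0.5 * (1.0 + cl))

print(emitting_fraction(10, 14))  # ~0.71, with bounds near 0.58 and 0.80
\end{verbatim}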
Notice that in Fig.~\ref{hbkcont} and Fig.~\ref{hbrcont} (which we discuss below), none of the emitting BCGs are in non-CF clusters. As mentioned in Section \ref{nfpss}, we use a rather strict cut for our definition of a CF, requiring a {\it ROSAT}-based $\dot{M}>100\,M_{\sun}\,\mbox{yr}^{-1}$. However, changing this arbitrary definition of a CF cluster (e.g.\ to those with $\dot{M}> 10\,M_{\sun}\,\mbox{yr}^{-1}$) does not significantly change our results. It would be useful to have a way of identifying likely cooling-flow clusters without the need for high-quality surface brightness and temperature maps. Recently, \citet{mcc04} have shown that CF clusters have significantly higher total X-ray luminosities, relative to their total dynamical mass, consistent with predictions from steady-state cooling models. On the other hand, non-CF clusters can be well modelled by cosmological haloes in which the gas has been preheated to an entropy level of $\sim 300-500$ keV cm$^2$ \citep[see also][]{bal06}. This suggests that one could use the excess X-ray luminosity relative to these preheated models as an indicator of CF status. The solid line in Fig. \ref{withmodel} represents the model for preheated clusters from \citet{bab02}. Indeed, this method yields results for the emission fraction which are very similar to those obtained when we use the mass deposition rate to define CF clusters. The separation is not as striking as in \citet{mcc04}, probably because the measured velocity dispersion can be systematically affected by substructure. \begin{figure} \centering \epsfysize=3in \epsfbox{withmodel.ps} \caption[Cluster Velocity Dispersion as a function of L$_{X}$]{The cluster velocity dispersion as a function of the bolometric X-ray luminosity. The solid line is the preheated model from \citet{bab02}, which is known to provide a good match to non-CF clusters \citep{mcc04,bal06}. The known CF clusters in our sample generally lie to the right of this model. \label{withmodel}} \end{figure} \subsection{Magnitude Dependence} We show in Fig. \ref{hbkcont} the line emission strength as a function of absolute K-band magnitude for the NFPS and SDSS BCGs and control galaxies. The NFPS clusters are separated into CF and non-CF cases. In the non-CF clusters there are no emitting BCGs, and the fraction of emission-line galaxies, shown in the bottom panels, is also very low for the control galaxies ($\sim 5$\%). On the other hand, in CF clusters $\sim 70$\% of the BCGs show emission, as noted in the previous section. Moreover, the BCGs with strong emission tend to be the brightest galaxies, $M_K<-25.5$, where, by definition, there are few control galaxies. Hence, this trend is quite different from that seen in the control sample, where almost all of the emission-line galaxies are fainter than $M_K=-25$. For magnitudes near $M_K\approx -25$, where there is substantial overlap between the two populations, the emission-line fractions of the two populations are similar in non-CF clusters. The results from the SDSS, which include all clusters regardless of X-ray luminosity or CF status, are consistent with the results for the total NFPS. There are somewhat fewer emission-line BCGs at a given magnitude than for the NFPS, presumably because CF clusters make up a smaller proportion of the sample in the SDSS. We will say more about this in Section~\ref{discussion}. Also highlighted in Fig.~\ref{hbkcont} and Fig.~\ref{hbrcont} are the BCGs with LINER-like emission.
Most of the emitting BCGs are more characteristic of LINER emission; this is especially true of the less luminous BCGs. \begin{figure*} \centering \epsfysize=9cm \epsfbox{K8.ps} \caption[Emission line strength as a function of K magnitude]{Line emission as a function of K-band magnitude. \underline{Top Panel}: The equivalent width of the line emission as a function of K-band magnitude for the NFPS galaxies on the left, and for the SDSS galaxies on the right. The NFPS galaxies are shown in subsets: a) all galaxies, including those in clusters where the CF status is unknown, b) those in known CF clusters, and c) those in known non-CF clusters. Filled circles represent BCGs, crosses control galaxies, and open circles indicate LINER emission (for the SDSS sample only). The dashed line represents the cut between emitting and non-emitting galaxies. The BCGs of Abell 780 and Abell 1795 have unreliable H$\beta$ equivalent widths, but nonetheless strong emission, and are represented by lower limits at W$_{o}$~$>3.3$~\AA~and~$>3.7$~\AA, respectively. \underline{Bottom Panel}: Using the same subsamples as in the top panel, we plot the fraction of line-emitting galaxies as a function of K-band magnitude. The solid line represents the BCGs, and the dotted line the control galaxies. \label{relkcont}\label{hbkcont}} \end{figure*} \subsection{Dependence on Location in Cluster}\label{sec-loc} In our samples, some BCGs are found close to the X-ray centre, while others lie several hundred kpc away. In this section we investigate whether the presence of emission lines in the BCG depends on its distance from the X-ray centre. This is illustrated in Fig.~\ref{hbrcont}, where we show the line emission as a function of cluster radius for the NFPS galaxies. {\it All of the strongly emitting BCGs are within 50~kpc of the X-ray centre}. As discussed in the previous section, these emitting galaxies are also usually found in cooling-flow clusters. In the rightmost panel we show the equivalent plot for the 34 BCGs in the SDSS sample for which we have X-ray centres. Although the sample is small, the results are consistent with those seen in the NFPS: only those BCGs that are close to the X-ray centre show significant line emission. \begin{figure*} \centering \epsfysize=9cm \epsfbox{r8.ps} \caption[Emission line strength as a function of Radius]{Line emission as a function of distance from the cluster X-ray centre. Subsample definitions and symbols are the same as in Fig.~\ref{relkcont}. \label{hbrcont}} \end{figure*} A Kolmogorov-Smirnov test on the H$\beta$ distributions of the non-CF BCGs and the controls shows no evidence for a difference between the two populations. To summarize, we conclude that an increased frequency of optical line emission in BCGs is observed only for those galaxies that lie within 50~kpc of the X-ray centre of a CF cluster. For the remainder, the frequency of emission in BCGs is consistent with that observed in the control population. \subsection{Dependence on Cluster Mass and Density}\label{sdss} The SDSS provides us with an optically-selected sample, spanning a wide range in velocity dispersion, that should be representative of the cluster population independent of X-ray properties. In this section we use this full sample (including those galaxies with ($u'-r'$) $<2.2$ that were excluded when comparing directly with the NFPS) to explore the effect of environment on the presence of line emission in BCGs.
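For reference, the nonparametric statistics used here (the two-sample Kolmogorov-Smirnov test above) and in the following subsection (the Spearman rank correlation) are standard; a minimal sketch with placeholder data is:
\begin{verbatim}
import numpy as np
from scipy.stats import ks_2samp, spearmanr

rng = np.random.default_rng(0)
ew_bcg = rng.normal(0.0, 0.1, 20)    # placeholder H-beta EWs, non-CF BCGs
ew_ctl = rng.normal(0.0, 0.1, 100)   # placeholder H-beta EWs, controls
print(ks_2samp(ew_bcg, ew_ctl))      # large p-value: no evidence of difference

sigma_bins = np.array([250., 400., 550., 700., 850.])  # km/s, bin centres
frac = np.array([0.12, 0.14, 0.10, 0.15, 0.13])        # emitting fractions
print(spearmanr(sigma_bins, frac))   # weak trend -> small |rho|
\end{verbatim}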
\begin{figure*} \epsfysize=3in \epsfbox{doublehist.ps} \caption{Left panel: the fraction of H$\alpha$-emitting galaxies in the SDSS as a function of group velocity dispersion. The solid line refers to the BCGs and the dotted line to the control galaxies. This adaptive histogram contains 85 galaxies per bin for the BCGs and 100 galaxies per bin for the controls. Right panel: the fraction of H$\alpha$-emitting galaxies in the SDSS as a function of galaxy density. This adaptive histogram contains 85 galaxies per bin for the BCGs and 95 galaxies per bin for the controls. \label{denshistb} \label{adapsighist} } \end{figure*} Since the cooling-flow status of a cluster might be correlated with its total mass, or central mass density, we wish to explore whether the trends we have observed merely reflect a more fundamental correlation with either of these quantities. First, in Fig. \ref{adapsighist}, we plot the fraction of BCGs with H$\alpha$ emission as a function of the cluster velocity dispersion. There is no strong trend; rather, a fraction of $\sim$10-15\% of the BCGs in each bin shows H$\alpha$ emission (the Spearman correlation coefficient is 0.17). We note that the frequency of radio-loud AGN is also found to be independent of velocity dispersion \citep{bes06}. The control galaxy population in the SDSS sample likewise shows no strong correlation between emission-line fraction and group velocity dispersion (the Spearman correlation coefficient is 0.38). For the control galaxies, the overall fraction of emitting galaxies is somewhat higher than for the BCGs, closer to 15-20\%. Next, we examine in Fig.~\ref{denshistb} the frequency of emission lines in BCGs as a function of the galaxy density, as measured by the distance to the fifth-nearest spectroscopic neighbour. Even in the densest regions this corresponds to a smoothing scale of $\sim200$~kpc, so the measurement here is of the total mass density on scales larger than the cooling radius. Generally, one would expect the central regions of CF clusters to be the densest environments, and if the presence of a cooling flow were correlated with the mass or galaxy density on larger scales, our previous results would lead us to expect the most star formation in the densest regions. On the contrary, we observe a clear correlation in which the emission-line fraction decreases with increasing number density, so that BCGs are {\it less} likely to show emission lines if they are found in the densest regions of clusters (for both the BCGs and the controls, the Spearman test yields a correlation coefficient of $\sim 0.9$, and the correlation is significant at the $\sim 95$\% confidence level). At all densities, the fraction of emission-line galaxies is higher in the control population than in the BCG population. The fact that we observe enhanced emission only for those galaxies within the much smaller 50~kpc scale of the X-ray centre of CF clusters (Section \ref{sec-loc}) is therefore very likely related directly to the presence of cooling X-ray gas on small scales, rather than to the overall gravitational potential. \section{Discussion}\label{discussion} The overall fraction of BCGs with emission lines is $\sim 13$\% in the SDSS, and $\sim 20$\% in the NFPS. The latter is in good agreement with \citet{cra99}, who find a fraction of 27\% emission-line BCGs in their sample of X-ray selected clusters. We do not know what the CF fraction is in an optically-selected sample, or as a function of mass.
Nonetheless, since the fraction of massive clusters that host a cooling flow is likely no more than about 50\% \citep{per98,mcc04,che07}, and CF clusters are systematically overluminous, we attribute the factor of two difference in emission fraction between our two surveys to the fact that the NFPS \citep[and][]{cra99} is X-ray selected, and therefore biased toward CF clusters. There is currently much observational and theoretical work exploring the possible feedback between a central galaxy's AGN, current star formation, and the cooling X-ray gas \citep[e.g.][]{sil98,bes06,cro06,bow06,sij06,del06}. AGN in BCGs are thought to play an important role in suppressing cooling and lowering mass deposition rates (and hence star formation rates). Our main result is that the majority of line-emitting BCGs are positioned close to the X-ray centres of clusters classified as hosting a cooling flow directly from their mass deposition rates (as defined in Section \ref{nfpss}). Furthermore, this result holds when we instead classify cooling-flow clusters by their excess X-ray luminosity relative to other clusters of similar dynamical mass (a definition explored at the end of Section \ref{sec-cfdef}). Similar conclusions, based on smaller samples, have been reached by others \citep[e.g.][]{joh87,mcn89,cra99,raf06}. Importantly, we have shown that, in the absence of a cooling flow, emission lines are rare in BCGs, regardless of the mass density or velocity dispersion of the cluster. Moreover, a control population of similarly bright, centrally located cluster galaxies does not exhibit an increased frequency of emission lines in CF clusters. We therefore conclude that the observed emission (arising from star formation and/or an AGN) is directly related to the presence of cooling gas. For the most part, we have not concerned ourselves with the origin of the observed emission, since starburst galaxies and optically-selected AGN are probably closely linked \citep{kau03}. However, it is worth investigating this further here. \citet{cra99} find that the very strongest H$\alpha$ emitters have star-formation-like emission-line ratios, while \citet{von06} find that most BCGs, which display weaker H$\alpha$ emission, are more characteristic of LINER activity. The six NFPS BCGs that have [NII]/H$\alpha$ line ratios available from \citet{cra99} lie in the regime straddling LINERs and Seyferts. Of the six emitting BCGs in the SDSS X-ray sample, the three stronger H$\alpha$ emitters are not classified as AGN, and are thus likely composite, whereas the somewhat weaker emitters are classified as AGN. Thus the emission-line BCGs found near the X-ray centres of galaxy clusters appear to be a heterogeneous class of objects. In the unrestricted SDSS sample, we find that only $\sim 13$\% of BCGs have line emission, compared to $\sim 18$\% of the controls. This exhibits the same trend as the \citet{von06} study of SDSS C4 cluster galaxies, in which detectable emission-line luminosities (with S/N $>3$) are found for 30\% of BCGs, as opposed to 40\% of the other massive, central galaxies (i.e.\ controls). Our numbers cannot be compared directly, as their cut counts more weakly emitting systems as emitters than does our stricter criterion of W$_{o}>2$~\AA. If we relax our definition of an emitting galaxy to one with W$_{o}>0$~\AA, we find that the percentage of line emitters increases to 27\% for BCGs, and 38\% for controls.
Our Fig.~\ref{denshistb} reiterates this point: in the SDSS sample, the BCGs have a lower fraction of line-emitting galaxies than the controls. This is also consistent with the findings of \citet{bes06}, namely that emission-line AGN activity is suppressed in galaxies near the centre of the cluster with respect to the other massive cluster galaxies. They point out that emission-line AGN and radio-loud AGN are partly independent populations, but that BCGs with emission-line AGN are more likely to host radio-loud AGN than other galaxies. Best et al. also find that radio-loud AGN are preferentially found in BCGs within 0.2~r$_{200}$ of the centre of the cluster, and that there is a strong dependence on galaxy stellar mass, with $\sim 40$\% of the most massive galaxies showing radio-loud AGN emission. It is therefore plausible that our emission-line BCGs in the NFPS (which are found at the centres of CF clusters) and in the SDSS (many of which show LINER-like activity) host radio-loud AGN. We therefore matched our NFPS BCGs to radio sources from the Faint Images of the Radio Sky at Twenty Centimeters survey \citep[hereafter FIRST]{wbh97} and the NRAO VLA Sky Survey \citep[hereafter NVSS]{con98}. We find that, indeed, all 12 of our H$\beta$-emitting BCGs are radio sources. Furthermore, of the 50 BCGs within the fields of these surveys, 28 have radio counterparts ($56\pm11$\%). Thus $\sim42\pm11$\% of the non-H$\beta$-emitting BCGs have radio emission. Again, the cooling-flow status of the cluster appears important: $\sim71^{+9}_{-14}$\% of CF BCGs are radio sources, and $\sim80^{+7}_{-16}$\% of central CF BCGs are radio sources. For non-CF BCGs, only $\sim32^{+12}_{-9}$\% have radio emission. For the controls, 36 of the 144 galaxies within the fields of the surveys are radio sources ($25\pm4$\%). Recently, \citet{lin06} have studied the radio-loud properties of a large X-ray selected cluster sample, and find the overall radio-loud fraction to be $\sim 35$\% for K-band selected BCGs within r$_{200}$ of the cluster centre (compared with $\sim 20$\% for bright cluster galaxies, excluding the BCGs). This is in reasonable agreement with our results, although the radio-loud BCG fraction we measure is somewhat higher. \citet{lin06} also find a strong trend with cluster mass (inferred from the X-ray luminosity), though the strength of the trend is sensitive to the radio power limit and the K-band luminosity of the galaxy. Our NFPS sample is too small to robustly identify such trends, but we note that the fraction of BCGs with radio sources in clusters with velocity dispersions $<600$~km/s is $54\pm15$\%, quite similar to that for BCGs in clusters with velocity dispersions $>600$~km/s, $58\pm15$\%. In the optically-selected SDSS, we find no significant trend with velocity dispersion, but an overall fraction of $\sim 40$\% of the BCGs with radio emission, and $\sim 25$\% of the controls. Because of the differences in sample selection and in the cluster mass estimators, we do not consider this a serious discrepancy with the results of \citet{lin06}. \section{Conclusions}\label{conclusion} We have used two large, homogeneous galaxy cluster surveys to investigate the incidence of optical emission lines amongst BCGs. The NFPS consists of 60 BCGs in X-ray selected clusters, while the SDSS sample is a larger, optically-selected sample of 328 BCGs.
From these data, we are able to draw the following conclusions: \begin{itemize} \item Of the 10 BCGs that lie within 50~kpc of the peak of the X-ray emission in a cluster with evidence for a significant cooling flow, all show optical emission lines. Moreover, of the 12 BCGs that show emission, all are located within 50~kpc of the X-ray emission peak, all are radio sources, and none are in clusters of known non-CF status (10 are in CF clusters, and 2 are in clusters with unknown CF status). \item Excluding the special circumstances noted above, the fraction of BCGs that exhibit optical emission lines is $\sim$10-20\%, and is always comparable to or lower than the fraction for control galaxies of similar luminosity and environment. \item For optically-selected cluster samples, which are dominated by non-CF clusters, the fraction of BCGs with emission does not correlate strongly with cluster mass or galaxy density. \end{itemize} We have therefore demonstrated a direct connection between the presence of cooling gas and enhanced optical emission in a centrally located galaxy. It would be very useful to obtain pointed X-ray observations of those SDSS clusters in which we have found a BCG with H$\alpha$ emission, to determine whether this correlation holds in optically-selected samples. These clusters would also be interesting targets for {\it Chandra} imaging of the X-ray emission morphology, since other massive clusters with H$\alpha$-emitting BCGs, such as Abell 426 and Abell 1795, show remarkable X-ray structures such as X-ray holes and cooling tails. \begin{table*} \caption{Table of NFPS BCGs. The cluster name is shown in column 1. Columns 2 and 3 give the position of the cluster X-ray centre. The cluster redshift is given in column 4; the cluster velocity dispersion and r$_{200}$ are given in columns 5 and 6. Column 7 gives the cooling flow mass deposition rate (MDR) in M$_{\odot}$/yr. Column 8 gives the cooling flow status of the cluster. Column 9 gives the reference for the MDR/CF status: b stands for \citet{bir04}, f for \citet{fuj06}, j for \citet{joh05}, h for \citet{hen04}, k for \citet{kem04}, m for \citet{mcc04}, p for \citet{per98}, r for \citet{sha04}, s for \citet{san06}, v for \citet{kan06}, w for \citet{whi00}, and o for other. Column 10 is the X-ray luminosity in units of 10$^{44}$erg/s, column 11 gives the name of the BCG, column 12 is the BCG K-band magnitude, column 13 gives the distance between the BCG and the cluster X-ray centre.
Columns 14 gives the H$\beta$ equivalent width, and column 15 gives the error, which is the noise in the line-free regions of the absorption-corrected spectrum.} \scriptsize \label{bcgtab} \begin{tabular}{lccccccccclcccc} \hline \hline Name & RA$_{x}$ & DEC$_{x}$ & z & $\sigma_{cl}$ & r$_{200}$ & MDR& CF &ref & L$_{X}$ & Name & M$_{K}$ & dr & $W_{o}$H$\beta$ & H$\beta$$_{err}$ \\ (Clus) & (deg) & (deg) & & (km/s) & (Mpc) & (M$_{\odot}$/yr) & & & & (BCG) & (BCG) & (Mpc) & (\AA) & (\AA) \\ \hline A0085 & 10.5 & -9.3 & 0.0557 & 736 & 1.3 & 108 & \checkmark & p/s & 4.920 & MCG--02-02-086 & -26.0 & 0.046 & 1.13 & 0.21 \\ A0119 & 14.1 & -1.2 & 0.0436 & 653 & 1.1 & 0 & X& p & 1.580 & UGC-00579 & -25.7 & 0.054 & -0.11 & 0.06 \\ A0133 & 15.7 & -21.9 & 0.0561 & 794 & 1.4 & 25 & X& b/m & 1.590 & ESO-541--G-013 & -25.6 & 0.017 & -0.08 & 0.09 \\ A0262 & 28.2 & 36.2 & 0.0155 & 432 & 0.7 & 2 & \checkmark& b/s & 0.230 & NGC-0708 & -24.8 & 0.005 & 1.76 & 0.06 \\ A0376 & 41.5 & 36.9 & 0.0482 & 975 & 1.7 & 42 & X & w & 0.680 & GIN-138 & -24.4 & 0.408 &-0.04 & 0.06 \\ A0407 & 45.5 & 35.8 & 0.0465 & 670 & 1.2 & N/A & ?& ?& 0.210 & UGC-02489-NED02 & -25.6 & 0.022 &-0.14 & 0.10 \\ A3128 & 52.6 & -52.5 & 0.0595 & 838 & 1.5 & N/A & ?& ?& 1.160 & 2MASX-J03295060-5234471 & -25.5 & 0.217 &-0.30 & 0.13 \\ RXJ0341 & 55.3 & 15.4 & 0.0288 & 502 & 0.9 & N/A & ?& ?& 0.250 & 2MASX-J03412829+1515326 & -24.5 & 0.231 & 0.04 & 0.11 \\ A3158 & 55.7 & -53.6 & 0.0586 & 814 & 1.4 & 292& \checkmark & w & 2.820 & ESO-156--G-008-NED01 & -25.7 & 0.069 &-0.01 & 0.09\\ A3266 & 67.9 & -61.4 & 0.0588 & 946 & 1.6 & 145 & X& w/m & 3.900 & ESO-118-IG-030-NED02 & -26.2 & 0.128 &-0.05 & 0.15 \\ A0496 & 68.4 & -13.2 & 0.0321 & 577 & 1.0 & 70 & \checkmark& o/m & 1.770 & MCG--02-12-039 & -25.5 & 0.031 & 1.21 & 0.18 \\ A3341 & 81.4 & -31.6 & 0.0376 & 500 & 0.9 & N/A & ?& ?& 0.310 & MCG--05-13-019 & -24.9 & 0.009 &-0.01 & 0.07 \\ A0548A & 87.2 & -25.4 & 0.0386 & 794 & 1.4 & 10 & X& w & 0.370 & ESO-488-IG-031 & -24.3 & 0.356 & 0.20 & 0.13 \\ A3376 & 90.4 & -40.0 & 0.0464 & 710 & 1.2 & 0 & X& w & 1.320 & ESO-307--G-013 & -25.3 & 0.470 &-0.03 & 0.06 \\ A3389 & 95.5 & -65.0 & 0.0270 & 626 & 1.1 & 25& X & w & 0.180 & NGC-2235 & -25.2 & 0.061 & 0.13 & 0.07 \\ A3391 & 96.6 & -53.7 & 0.0556 & 696 & 1.2 & 131& \checkmark & w & 1.370 & ESO-161-IG-007-NED02 & -26.4 & 0.055 & 0.14 & 0.10 \\ A3395 & 96.8 & -54.5 & 0.0491 & 640 & 1.1 & N/A & ?& ?& 1.610 & ESO-161--G-008 & -25.4 & 0.158 &-0.05 & 0.12 \\ A0576 & 110.4 & 55.8 & 0.0383 & 854 & 1.5 & 3 & X& p/k & 0.710 & CGCG-261-056-NED01 & -25.9 & 0.008 &-0.02 & 0.05 \\ UGC03957 & 115.2 & 55.4 & 0.0339 & 511 & 0.9 & N/A& ? 
& ?& 0.480 & UGC-03957 & -25.3 & 0.017 &-0.04 & 0.05 \\ A0602 & 118.4 & 29.4 & 0.0605 & 675 & 1.2 & N/A & ?& ?& 0.530 & 2MASX-J07532661+2921341 & -24.2 & 0.032 &-0.45 & 0.12 \\ Z1665 & 125.8 & 4.4 & 0.0296 & 437 & 0.8 & N/A & ?& ?& 0.160 & IC-0505 & -24.9 & 0.072 & 0.02 & 0.05 \\ A0754 & 137.3 & -9.7 & 0.0546 & 784 & 1.4 & 218& X & w/h & 4.460 & 2MASX-J09083238-0937470 & -25.7 & 0.328 &0.06 & 0.10 \\ A0757 & 138.4 & 47.7 & 0.0514 & 381 & 0.7 & N/A & ?& ?& 0.460 & 2MASX-J09134460+4742169 & -24.5 & 0.142 &-0.01 & 0.10 \\ A0780 & 139.5 & -12.1 & 0.0551 & 641 & 1.1 & 492 & \checkmark& w/m & 3.470 & Hydra-A & -25.1 & 0.015 & 7.8 & 0.30 \\ Z2844 & 150.7 & 32.7 & 0.0504 & 462 & 0.8 & N/A & ?& ?& 0.300 & NGC-3099 & -25.4 & 0.048 & 0.35 & 0.07 \\ A1367 & 176.2 & 19.8 & 0.0219 & 747 & 1.3 & 0 & X& o/m & 0.930 & NGC-3842 & -24.9 & 0.252 &-0.11 & 0.05 \\ Z4803 & 181.1 & 1.9 & 0.0206 & 474 & 0.8 & N/A & ?& ?& 0.170 & NGC-4073 & -25.4 & 0.000 &-0.01 & 0.05 \\ A1631A & 193.2 & -15.4 & 0.0461 & 531 & 0.9 & N/A & ?&?& 0.330 & 2MASX-J12523166-1512150 & -24.4 & 0.358 &-0.06 & 0.07 \\ A3528B & 193.6 & -29.0 & 0.0547 & 500 & 0.9 & N/A & ?& ?& 0.690 & ESO-443--G-004 & -25.8 & 0.005 &-0.14 & 0.07 \\ A3528A & 193.7 & -29.3 & 0.0535 & 698 & 1.2 & N/A & ?& ?& 1.100 & 2MASX-J12543999-2927327 & -24.0 & 0.482 &-0.36 & 0.25 \\ A3530 & 193.9 & -30.4 & 0.0544 & 436 & 0.8 & N/A & ?& ?& 0.380 & AM-1252-300-NED02 & -25.0 & 0.018 &N/A & N/A \\ A1656 & 194.9 & 27.9 & 0.0230 & 898 & 1.6 & 85 & X& w & 3.980 & NGC-4889 & -25.6 & 0.169 &-0.03 & 0.03 \\ A1668 & 195.9 & 19.3 & 0.0641 & 476 & 0.8 & N/A & ?& ?& 0.880 & IC-4130 & -25.2 & 0.027 & 2.39 & 0.09 \\ A3558 & 202.0 & -31.5 & 0.0476 & 814 & 1.4 & 235 & X& w/s & 3.450 & ESO-444--G-046 & -26.2 & 0.019 &-0.06 & 0.12 \\ A1736A & 201.7 & -27.1 & 0.0465 & 664 & 1.2 & 79 & X& w & 1.250 & IC-4252 & -25.8 & 0.512 &-0.02 & 0.04 \\ A3560 & 203.1 & -33.1 & 0.0487 & 548 & 0.9 & N/A & ?& ?& 0.730 & 2MASX-J13322574-3308100 & -25.2 & 0.097 &-0.05 & 0.06 \\ A3571 & 206.9 & -32.9 & 0.0392 & 913 & 1.6 & 130& X & w/s & 3.910 & ESO-383--G-076 & -26.0 & 0.022 &-0.07 & 0.14 \\ A1795 & 207.2 & 26.6 & 0.0627 & 725 & 1.3 & 18 & \checkmark& b/m & 6.330 & CGCG-162-010 & -25.7 & 0.016 & 7.2 & 0.30 \\ A3581 & 211.9 & -27.0 & 0.0225 & 525 & 0.9 & 18 & \checkmark& w/j & 0.320 & IC-4374 & -24.5 & 0.003 & 1.93 & 0.09 \\ A1983A & 223.2 & 16.7 & 0.0448 & 472 & 0.8 & N/A & ?& ?& 0.230 & ABELL-1983-1:[CBW93]-C & -24.2 & 0.050 & 0.14 & 0.09 \\ A1991 & 223.6 & 18.6 & 0.0589 & 454 & 0.8 & 37 & \checkmark& w/r & 0.710 & NGC-5778 & -25.4 & 0.023 & 0.61 & 0.10\\ A2052 & 229.2 & 7.0 & 0.0352 & 531 & 0.9 & 81 & \checkmark&b/m & 1.330 & UGC-09799 & -25.5 & 0.038 & 1.64 & 0.09 \\ A2063 & 230.8 & 8.6 & 0.0344 & 764 & 1.3 & 99 & \checkmark& w/v & 1.020 & CGCG-077-097 & -24.9 & 0.056 & 0.10 & 0.08 \\ A2107 & 234.9 & 21.8 & 0.0415 & 527 & 0.9 & 57& X & w/f & 0.570 & UGC-09958 & -25.5 & 0.014 &-0.18 & 0.06 \\ A2147 & 240.6 & 16.0 & 0.0370 & 711 & 1.2 & 119& X & w/s & 1.660 & UGC-10143 & -24.9 & 0.082 &-0.02 & 0.07 \\ A2151A & 241.2 & 17.7 & 0.0352 & 746 & 1.3 & 173 & \checkmark& w & 0.430 & NGC-6041A & -25.1 & 0.051 &-0.11 & 0.07 \\ A2199 & 247.2 & 39.5 & 0.0293 & 647 & 1.1 & 2 & \checkmark&b/m & 1.900 & NGC-6166-NED01 & -25.7 & 0.007 & 2.17 & 0.11 \\ RXJ1733 & 263.3 & 43.8 & 0.0319 & 468 & 0.8 & N/A & ?& ?& 0.270 & IC-1262 & -24.6 & 0.005 & 0.35 & 0.08 \\ RXJ1740 & 265.1 & 35.7 & 0.0434 & 556 & 1.0 & N/A & ?& ?& 0.270 & CGCG-199-007-NED01 & -24.9 & 0.013 &-0.12 & 0.08 \\ Z8338 & 272.7 & 49.9 & 0.0494 & 532 & 0.9 & N/A & ?& ?& 
0.450 & NGC-6582-NED02 & -25.7 & 0.098 &-0.09 & 0.04 \\ A3667 & 303.1 & -56.8 & 0.0549 & 852 & 1.5 & 196 & X& w/m & 5.020 & IC-4965 & -25.8 & 0.700 &N/A & N/A \\ A3716 & 312.9 & -52.7 & 0.0447 & 748 & 1.3 & N/A & ?& ?& 0.480 & ESO-187--G-020 & -25.2 & 0.098 &-0.07 & 0.08 \\ IIZW108 & 318.5 & 2.6 & 0.0482 & 399 & 0.7 & N/A & ?& ?& 1.090 & IC-1365-NED02 & -25.6 & 0.115 & 0.04 & 0.06 \\ A2399 & 329.3 & -7.8 & 0.0577 & 608 & 1.1 & N/A & ?& ?& 0.430 & 2MASX-J21572939-0747443 & -24.5 & 0.218 &-0.05 & 0.06 \\ A3880 & 337.0 & -30.6 & 0.0583 & 817 & 1.4 & N/A & ?& ?& 0.960 & PKS-2225-308 & -25.6 & 0.022 & 2.94 & 0.15 \\ Z8852 & 347.6 & 7.6 & 0.0402 & 771 & 1.3 & N/A & ?& ?& 0.470 & NGC-7503 & -25.6 & 0.108 & 0.08 & 0.07 \\ A2572B & 349.3 & 18.7 & 0.0386 & 138 & 0.2 & N/A & ?& ?& 0.280 & NGC-7602 & -24.5 & 0.105 &-0.04 & 0.05 \\ A2589 & 351.0 & 16.8 & 0.0411 & 583 & 1.0 & 0& X &o/m & 0.890 & NGC-7647 & -25.4 & 0.073 &-0.11 & 0.07 \\ A2634 & 354.6 & 27.0 & 0.0309 & 692 & 1.2 & 0 & X& w & 0.480 & NGC-7720-NED01 & -25.5 & 0.018 & 0.19 & 0.08\\ A4059 & 359.2 & -34.8 & 0.0496 & 556 & 1.0 & 7 & \checkmark& b & 1.680 & ESO-349--G-010 & -26.0 & 0.019 & 2.80 & 0.12 \\ \end{tabular} \end{table*} \section*{Acknowledgments} We are very grateful to C. Miller and R. Nichol for their help with the SDSS database, and the C4 cluster catalogue. We also thank R. Finn for useful discussions about the C4 clusters. LOVE wishes to thank C. Robert for helpful comments and support during her stay at the University of Waterloo. MJH and MLB acknowledge support from their respective NSERC Discovery grants. We also thank the anonymous referee for a careful reading of the manuscript and useful suggestions.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{Abstract} {\bf An unidentified line at an energy around 3.5~keV has been detected in the spectra of dark matter-dominated objects. A recent work \cite{Dessert1465} used 30~Msec of XMM-Newton blank-sky observations to constrain the admissible line flux, challenging its dark matter decay origin. We demonstrate that these bounds are overestimated by more than an order of magnitude due to improper background modeling. Therefore the dark matter interpretation of the 3.5~keV signal remains viable. } \vspace{10pt} An X-ray line at $E\simeq 3.5$~keV was found in 2014~\cite{Bulbul:2014sua,Boyarsky:2014jta}. Many consistency checks, as well as follow-up detections, have been reported, while non-detections have not ruled out the dark matter interpretation of the signal (see~\cite{Boyarsky:2018tvu} for a review). Ref.~\cite{Dessert1465} (\textbf{DRS20} in what follows) recently reported bounds on the decay lifetime, which are about an order of magnitude below those required for the dark matter interpretation. The improvement of these bounds compared with previous results is much stronger than one would expect based solely on the increase in exposure. We demonstrate that such a strong increase is mainly an artifact of overly restrictive background modeling. \begin{figure}[!t] \centering \includegraphics[width=0.75\textwidth]{fig1.pdf} \caption{Upper limits (95\% CL) on the extra line flux for the models described in Table~\protect\ref{tab:fits} (colors coincide with the model names). Data points show the energies where lines are detected at more than the $3\sigma$ level over the continuum of the corresponding color (errorbars are $\pm 1\sigma$).} \label{fig:different_PL} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{models.pdf} \caption{Shown are the actual spectra and the background models described in this Comment. The necessity for additional astrophysical lines above the continuum powerlaw is clearly seen. One can see that the level of the background continuum decreases with the addition of the astrophysical lines. The dashed magenta line shows the model with the line at $E_\text{fid}$ included. Vertical dashed lines indicate the modeling range of the ``blue'', ``green'', and ``red'' models.} \label{fig:models} \end{figure} \begin{table}[!t] \centering \begin{tabular}{|l|p{2.7cm}|c|c|c|c|} \hline \textbf{Model} & \textbf{Model} & \textbf{Interval} & $\chi^2$/dof & \textbf{Line at} & $95\%$CL at $E_{\rm fid}$\\ \textbf{name} & \textbf{components}&&& $E_{\rm fid}$? & $10^{-6}$~cm$^{-2}$s$^{-1}$ \\ \hline \color{blue}{\bf Blue} & Powerlaw& 3.3-3.8~keV & 88.76/96 & No & 0.16\\ \hline \color{green}{\bf Green} & Powerlaw\newline Line @ 3.3~keV\newline Line @ 3.68~keV & 3.3-3.8~keV & 68.86/96 & No ($0.8\sigma$) & 0.70 \\ \hline \color{red}{\bf Red} & Powerlaw\newline Line @ 3.3~keV\newline Line @ 3.68~keV & 3.3-3.8~keV & 68.86/94 & No ($1.3\sigma$) & 1.41\\ \hline \color{magenta}{\bf Magenta} & Powerlaw\newline Line @ 3.12~keV\newline Line @ 3.3~keV\newline Line @ 3.68~keV\newline Line @ 3.9~keV & 3.0-4.0~keV & 163.0/193 & Yes ($4.0\sigma$) & 1.72\\ \hline \end{tabular} \caption{Four background models used for the line search. } \label{tab:fits} \end{table} We use $17$~Msec of \emph{XMM-Newton} MOS observations pointing $20^\circ - 35^\circ$ off the Galactic Center.\footnote{For the list of observations see \cite{zenodo}. } This dataset contains 57\% of the total exposure of \textbf{DRS20}, including 503 of their 534 observations.
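For orientation, upper limits of this kind are typically obtained by profiling the fit statistic over the normalization of an extra narrow line at each trial energy; the sketch below is schematic (it assumes a user-supplied profiled-$\chi^2$ function and the common one-sided 95\% CL convention $\Delta\chi^2=2.71$), and is not the actual pipeline of either analysis.
\begin{verbatim}
import numpy as np

def flux_upper_limit(profiled_chi2, fluxes, delta=2.71):
    """Schematic one-sided 95% CL upper limit on an extra line flux.
    profiled_chi2(f) must return the chi^2 with the line normalization
    fixed to f and all other model parameters re-fitted (profiled)."""
    chi2 = np.array([profiled_chi2(f) for f in fluxes])
    chi2 -= chi2.min()                 # Delta chi^2 relative to best fit
    above = fluxes[chi2 > delta]
    return above.min() if above.size else None
\end{verbatim}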
Given the exposure ratio, we expect the flux upper limits to differ only by $\approx \sqrt{30/17}\approx 1.32$. Different dark matter profiles are consistent with each other in this region, making the limits more robust. {\em Our results are shown in Fig.~\ref{fig:different_PL}, while Table~\ref{tab:fits} summarizes the details of our modeling and lists the 95\% CL limits at the fiducial energy $E_{\rm fid} = 3.48$~keV}. First, we searched for a narrow line atop a folded powerlaw (plus an instrumental continuum fixed at high energies) across the interval 3.3-3.8~keV. Our limits (``blue model'') are consistent with \textbf{DRS20}: strong constraints around $3.5$~keV, with lines at $\sim 3.3$ and $3.68$~keV detected at more than the $3\sigma$ level, consistent with the significant weakening of the \textbf{DRS20} limits at these energies. Such lines (Ar XVIII and S~XVI complexes around 3.3~keV, and Ar XVII plus K XIX around 3.68~keV) are detected in astrophysical plasma both in galaxy clusters~\cite{Urban:2014yda,Bulbul:2014sua,Aharonian:2016gzq} and in our Galaxy \cite{2007PASJ...59S.237K,Boyarsky:2014ska,Ponti:2015tya,Boyarsky:2018ktr}. Besides, the presence of weak instrumental lines -- K~K$\alpha$ at 3.3~keV and Ca~K$\alpha$ at 3.7~keV -- has been reported, see \cite{Boyarsky:2018ktr} and refs.\ therein. Therefore, we next add to the model extra Gaussians at 3.3~keV and 3.68~keV (``green model'', Table~\ref{tab:fits}) and repeat the analysis in the 3.3-3.8~keV range. The bounds weaken by a factor of $\sim 4$ (Fig.~\ref{fig:different_PL}, green line) at $E_{\rm fid}$. This weakening is consistent with Fig.~S14(A) of \textbf{DRS20}. A background model without these lines raises the powerlaw continuum, which artificially lowers the upper limit on any additional line (see \cite{Ruchayskiy:2015onc} for an earlier discussion of this effect). Indeed, Fig.~S16(B) of \textbf{DRS20} demonstrates that the best-fit value of the line flux at $3.5$~keV, parametrized by $\sin^2(2\theta)$, is negative at the $\sim -1.5\sigma$ level -- the background is over-subtracted. We notice that the normalizations of the two lines at the edges of the interval were fixed to their best-fit values during this procedure. When, instead, we let the normalizations of the lines vary freely while adding an extra line around $3.5$~keV (as in~\cite{Bulbul:2014sua,Boyarsky:2014jta,Boyarsky:2014ska,Ruchayskiy:2015onc}), the upper limit on the flux weakens by an extra factor of $\sim 2$ (``red model''). The interval 3--4~keV contains two more known lines -- the Ca~XIX complex around $3.9$~keV and Ar~XVII plus S~XVI around $3.1$~keV, cf.\ \cite{Bulbul:2014sua,Boyarsky:2014ska,Boyarsky:2018ktr}. We repeat our analysis in this interval with two extra lines in the model (``magenta model''). We find a $4\sigma$ line at $E_{\rm fid}$, and the upper limit weakens accordingly. Further extending the model to 2.8--6.0~keV, carefully modeling all astrophysical and instrumental lines and accounting for all significant line-like residuals, does not change the results. The models described above are shown in the 3--4~keV energy band in Fig.~\ref{fig:models} with the corresponding colors. Fig.~\ref{fig:models} illustrates that the model with no astrophysical lines fails to adequately describe the data over a broader energy range, and overestimates the continuum level, as discussed above. \section*{Conclusion} We demonstrate that the constraints from long-exposure blank-sky observations \cite{Dessert1465} strongly depend on the background model.
Namely, the proper inclusion of the line complexes at 3.3~keV and 3.68~keV relaxes the bound by a factor of $\sim 8$. The extension of the fitting interval to 3--4~keV weakens the bound by more than an order of magnitude (magenta line in Fig.~\ref{fig:different_PL}, compared to the blue line reproducing \textbf{DRS20}) and also leads to a detection of the line at 3.5~keV at $4\sigma$. \textbf{DRS20} investigate the effects of these lines in the \textit{Supplementary Material}. However, they \textit{(i)} fix the line normalizations to their best-fit background values, reducing their effect (reproduced by our ``green'' model), and \textit{(ii)} choose to report the more stringent bounds as their final result. When claiming \emph{exclusions}, one should be careful to push all systematic uncertainties in the conservative direction. In this particular case, to claim the strongest ``powerlaw'' bound (as done in \textbf{DRS20}) one should \emph{prove} that the other known lines are not present in the particular dataset. Moreover, if the analysis in a wider energy interval (3--4~keV) gives a weaker constraint, we see no reason not to report it as the proper conservative bound. Furthermore, to interpret the exclusion in terms of the decaying dark matter lifetime, one needs to adopt the most conservative density profile. In particular, the local dark matter density was adopted in \textbf{DRS20} to be $0.4\,\mathrm{GeV/cm^3}$. It has a systematic uncertainty of a factor of $2-3$ \cite{PDG:2019}, see also the discussion in \cite{Abazajian:2020unr}, which should be propagated into the final conservative bound. The spectral resolution of modern X-ray satellites is below that required to resolve the intrinsic shape of astrophysical or putative dark matter decay lines. Future X-ray spectrometers will be able to finally settle this question. \section*{Other comments} Below we comment on other inconsistencies in \textbf{DRS20}. The conclusions above do not rely on them. \paragraph*{PN out-of-time events not subtracted?} For the PN camera, the out-of-time events were not subtracted. Indeed, the scripts \texttt{dl2dat.sh} and \texttt{spc2dat.py} of \textbf{DRS20} show that count rates from the files \texttt{pn*-obj.pi}, produced by the ESAS script \texttt{pn-spectra}, were used. Instead, according to the ESAS manual, the out-of-time-subtracted spectra produced by \texttt{pn\_back} (filename pattern \texttt{pn*-obj-os.pi}) should have been used. \paragraph*{Wrong PN count rate?} Fig.~2 of \textbf{DRS20} shows that the count rates of the stacked MOS and PN spectra are similar. However, it is known that the count rate of the PN camera is $\approx 3$ times higher than that of the MOS cameras, cf.\ \cite[Fig.~7 \& Table~2]{Carter:07} or \cite[Fig.~1]{Ruchayskiy:2015onc}. This difference is not explained in the text. \textbf{DRS20} showed the count rate for the PN camera of \texttt{ObsID 0653550301} (Fig.~S11), which we reproduced. The MOS count rate for the same observation is a factor of $\sim 3$ lower. \paragraph*{Acknowledgements.} The authors acknowledge communication with K.~N.~Abazajian and D.~Iakubovskyi. This project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (GA 694896) (AB and OR). The work of DM was supported by DFG through the grant MA~7807/2-1. OR acknowledges support from the Carlsberg Foundation. The authors acknowledge support by the state of Baden-W\"urttemberg through bwHPC.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction\label{sec:intro}} Studies of Bose-Einstein condensates (BEC) constitute one of the fastest-developing research directions. The major theoretical progress in this area has been stimulated by fast experimental advances, which enable the investigation of subtle phenomena of a fundamental nature~\cite{BEC_review},~\cite{BEC_review2}. In the semiclassical approach, the spatial and temporal evolution of the condensate wave function is commonly described by the Gross-Pitaevskii equation~\cite{Dalfovo:99}, which reflects the interplay between the kinetic energy of the condensate and the nonlinearity originating from the interaction potential, leading, among other phenomena, to the formation of localized structures, bright and dark solitons~\cite{Khayakovich:science:02,Strecker:02:nature}. So far, the main theoretical and experimental efforts have concentrated on condensates with contact (or hard-sphere) bosonic interactions which, in the case of attraction, may lead to collapse-like dynamics. Recently, systems exhibiting a nonlocal, long-range dipolar interaction~\cite{Goral:05} have also attracted significant attention. This interest has been stimulated by the successful condensation of chromium atoms, which exhibit an appreciable magnetic dipole moment~\cite{Griesmaier:05,Beaufils:08:pra,Stuhler:05}. The presence of a spatially nonlocal nonlinear interaction and, at the same time, the ability to externally control the character of the local (contact) interactions via Feshbach resonance techniques offer a unique opportunity to study the effect of nonlocality on the dynamics, stability and interaction of bright and dark matter-wave solitons~\cite{Pedri:05,Lahaye:07:nature,Koch:08:nphys,Pollack:09}. The enhanced stability of localized structures, including fundamental, vortex and rotating solitons, in nonlocal nonlinear media (not necessarily BECs) has already been pointed out in a number of theoretical works~\cite{Bang:02:pre,Nath:07:pra,Nath:08,Cuevas:09:pra,Lashkin:07:pra, Lashkin:08:pra_a,Zaliznyak:08:pla,Lashkin:09:pscr}. In particular, stable toroidal solitons were presented in~\cite{Zaliznyak:08:pla,Lashkin:09:pscr}. However, since the dipole-dipole interaction is spatially anisotropic, an additional trapping potential or a combination of attractive two-particle and repulsive three-particle interactions was necessary. Various trapping arrangements have been proposed to minimize or completely eliminate this anisotropy. In particular, O'Dell {\em et al.}~\cite{Odell:00} have recently suggested using a series of triads of orthogonally polarized laser beams illuminating a cloud of cold atoms along three orthogonal axes, so that the angular dependence of the dipole-dipole nonlinear term is averaged out. The resulting nonlocal interaction potential becomes effectively isotropic, of the form $1/r$. It has already been shown by Turitsyn~\cite{Turitsyn:85:tmf} that a purely attractive ``gravitational'' (or Coulomb) interaction potential prevents the collapse of nonlinear localized waves and gives rise to the formation of localized states -- bright solitons -- which can be supported without the need for an external trapping potential. If realized experimentally, such a trapping geometry would enable the study of effects akin to gravitational interaction.
A few recent works have dealt with this ``gravitational'' model of the condensate, looking, among other aspects, at the stability of localized structures such as fundamental solitons and two-dimensional vortices~\cite{Giovanazzi:01:pra,Papadopoulos:07:pra,Cartarius:08:pra,Keles:08:pra}. In this paper we study the formation of three-dimensional higher-order solitons in a BEC with a gravity-like attractive nonlocal nonlinear potential. In particular, we demonstrate the formation of toroidal vortex solitons and investigate their stability. We show that such a BEC supports robust localized structures even if the initial conditions are rather far from the exact soliton solutions. Furthermore, we also demonstrate that the presence of a repulsive contact interaction does not prevent the existence of those solutions, but allows control of their rotation. The paper is organized as follows. In Sec.~\ref{model} we briefly introduce a scaled nonlocal Gross-Pitaevskii equation (GPE). We discuss two different response functions: the above long-range $1/r$ response and the so-called Gaussian response, which yields a much shorter interaction range. In Sec.~\ref{azimuthons} we recall general properties of rotating soliton solutions (azimuthons), which are then approximated in Sec.~\ref{variational} by means of a variational approach. These variational approximations allow us to predict the rotation frequencies of the azimuthons, which are then confronted with results from rigorous numerical simulations. Finally, self-trapped higher-order three-dimensional rotating solitons are presented in Sec.~\ref{numerics}, where we show that such nonlocal BECs support robust localized structures. \section{Model \label{model}} We consider an atomic Bose-Einstein condensate with an isotropic interatomic potential consisting of both repulsive contact and attractive long-range nonlocal interaction contributions. Following O'Dell~{\em et al.}~\cite{Odell:00}, an attractive long-range interaction of ``gravitational'' form can be induced by triads of frequency-detuned laser beams, resulting in the following dimensionless Gross-Pitaevskii equation (GPE) for the condensate wave function $\psi\left(\mathbf{r},t\right)$: \begin{subequations}\label{eq:normalised_gpe} \begin{align} \partial_{t}\psi&=i\Delta\psi+i\Theta\psi\\ \Theta\left(\mathbf{r},t\right)&=\int\frac{\left|\psi\left(\mathbf{r}^{\prime},t\right)\right|^{2}}{\left|\mathbf{r}- \mathbf{r}^{\prime}\right|}d^{3}r^{\prime}-\left|\psi\right|^{2}. \end{align} \end{subequations} The nonlinear response $\Theta$ consists of both local and nonlocal contributions. Interestingly, for the ``gravitational'' nonlocal interaction, $\Theta$ contains no additional parameter (see also Appendix~\ref{1/r}). The ratio between the local and nonlocal terms is determined solely by the form of the wave function $\psi$. We will see later (Sec.~\ref{variational}) that for very broad solitons the local contact interaction $\sim |\psi|^{2}$ becomes negligible. In this paper we will also consider a second, different nonlocal model, the so-called Gaussian model of nonlocality. Although it is not motivated by a specific physical system, it serves as a popular toy model for the general class of nonlocal Schr\"odinger (Gross-Pitaevskii) equations in one- and two-dimensional problems \cite{Bang:02:pre,Krolikowski:04:job,Buccoliero:07,Buccoliero:07:PhysicaB,Skupin:08:oe}.
Here, we extend this classical model to three dimensions and, moreover, allow for an additional local repulsive term similar to the previous case, introducing \begin{equation} \label{eq:gauss} \Theta\left(\mathbf{r},t\right)=\left(\frac {1}{2\pi}\right)^{3/2}\int \left|\psi\left(\mathbf{r}^{\prime},t\right)\right|^{2} e^{-\frac {\left|\mathbf{r}- \mathbf{r}^{\prime}\right|^2}{2}}d^{3}r^{\prime}-\delta\left|\psi\right|^{2}. \end{equation} The additional parameter $\delta$ is necessary here to keep track of one of the two degrees of freedom of the Gaussian response, i.e., amplitude or width, which cannot be scaled out (see Appendix \ref{normgauss}). The value of $\delta$ determines the relative strength of the local repulsive term. Note that, compared to the above ``gravitational'' response, the interaction range of the Gaussian nonlocal response is significantly shorter due to its rapid decay for $r \rightarrow \infty$. As far as the stability of localized states is concerned, Turitsyn~\cite{Turitsyn:85:tmf} showed, using Lyapunov's method, that the ground state of the nonlocal Schr{\"o}dinger equation with a purely attractive $1/r$ kernel is stable (collapse arrest). A rather general estimate for non-negative response functions has been found in \cite{ginibre:80:MZ} for arbitrary dimensions. Bang {\em et al.}~\cite{Bang:02:pre} showed, using the same method, that for systems with arbitrarily shaped, nonsingular response functions with a positive-definite Fourier spectrum, collapse cannot occur. Obviously, the stability of the ground state is only a necessary but not sufficient condition for the stability of rotating higher-order states, which we will investigate in the following by means of numerical simulations. In \cite{Froelich:02:CMP}, the linear and global (modulational) stability of solutions of the Hartree equation under small perturbations was shown. \section{Rotating solitons \label{azimuthons}} It has been shown earlier that azimuthons, i.e., multi-peak solitons with an angular phase ramp, exhibit constant angular rotation and hence can be represented by a straightforward generalization of the usual (nonrotating) soliton ansatz that includes an additional parameter, the angular frequency $\Omega$~\cite{Desyatnikov:05,Skryabin:07:pre}. We write \begin{equation} \label{azimuthon_ansatz} \psi (r,z,\phi ,t) = U(r,z,\phi -\Omega t)\mathrm{e}^{iE t}, \end{equation} where $U$ is the complex amplitude, $E$ is the normalized chemical potential, $r=\sqrt{x^{2}+y^{2}}$, and $\phi$ denotes the azimuthal angle in the $(x,y)$ plane.
It can be shown that, by inserting the above ansatz into the nonlocal GPE (\ref{eq:normalised_gpe}), one can derive the formal relation for the rotation frequency~\cite{Rosanov:os:96:405,Skupin:08:oe} \begin{equation}\label{omega} \Omega=-\frac{IL-I^{\prime}M+XL-X^{\prime}M}{L^{2}-MM^{\prime}}, \end{equation} where the functionals $M, M^{\prime}, X, X^{\prime}, L, I, I^{\prime}$ represent the following integrals over the stationary amplitude profiles of the azimuthons: \begin{subequations}\label{int_system} \begin{align} M&=\int\left|U\right|^{2}\mathrm{d}^{3}\mathbf{r},\\ L&=-i\int U^{*}\partial_{\varphi}U\mathrm{d}^{3}\mathbf{r},\\ I&=\int U^{*}\Delta U\mathrm{d}^{3}\mathbf{r},\\ X&=\int \Theta \left( \mathbf {r} \right) \left|U\left(\mathbf{r}\right) \right|^{2}\mathrm{d}^{3}\mathbf{r},\\ M^{\prime}&=\int\left|\partial_{\varphi}U\right|^{2}\mathrm{d}^{3}\mathbf{r},\\ I^{\prime}&=i\int\partial_{\varphi}U^{*}\Delta U\mathrm{d}^{3}\mathbf{r},\\ X^{\prime}&=i\int \Theta \left( \mathbf {r} \right) U\left(\partial_{\varphi}U^{*}\right) \mathrm{d}^{3}\mathbf{r}. \end{align} \end{subequations} The first two conserved functionals, $M$ and $L$, have the straightforward physical meanings of ``mass'' (or ``number of particles'') and ``angular momentum'', respectively. In the next section, we will compute approximate azimuthon solutions and their rotation frequencies employing a specific ansatz for the stationary amplitude profile $U$. \section{Variational approach \label{variational}} In order to get some insight into possible localized states of the Gross-Pitaevskii equation, we resort first to the so-called Lagrangian (or variational) approach~\cite{variational}. It is easy to show that Eq.~(\ref{eq:normalised_gpe}) can be derived from the following Lagrangian density: \begin{equation} \begin{split} \mathcal{L}:=\frac{i}{2}\left(\psi\partial_{t}\psi^{*}- \psi^{*}\partial_{t}\psi\right)+ \left|\nabla\psi\right|^{2}- \frac{1}{2}\left|\psi\right|^{2} \Theta \left( \mathbf {r} , t \right). \end{split} \label{lagr_density} \end{equation} It has been shown before that rotating solitons, or `azimuthons', are associated with a nontrivial phase and amplitude structure~\cite{Buccoliero:07}. In two-dimensional optical problems, the simplest case is the state falling between an optical vortex (a ring-like pattern with a $2\pi$ angular phase shift) and an optical dipole in the form of two out-of-phase intensity peaks~\cite{Buccoliero:07,Buccoliero:07:PhysicaB}. In three dimensions, a reasonable ansatz for the corresponding localized solutions is \begin{equation}\label{ansatz2} \begin{split} \psi\left(r,z,\varphi,t\right) & := Ar\exp\left(-\frac{r^{2}+z^{2}}{2\sigma^{2}}\right)\mathrm{e}^{iEt} \\ & \quad \times \left[\cos\left(\varphi-\Omega t\right)+ip\sin\left(\varphi-\Omega t\right)\right], \end{split} \end{equation} where the parameter $p$ varies between zero and unity. For $p=0$, Eq.~(\ref{ansatz2}) describes a dipole structure consisting of two out-of-phase lobes, while for $p=1$ it is a three-dimensional vortex, i.e., a toroid-like structure with a zero in the center and an azimuthal (in the $(x,y)$ plane) phase ramp of $2\pi$. Using the ansatz Eq.~(\ref{ansatz2}), one can easily find that \begin{equation} IL-I^{\prime}M=0, \end{equation} which shows that only the nonlinear terms contribute to the frequency $\Omega$ (cf.\ Eq.~(\ref{omega})).
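As an explicit illustration of these functionals -- a short worked example we add for clarity, not part of the original derivation -- the mass and angular momentum of the ansatz Eq.~(\ref{ansatz2}) can be evaluated in closed form. Using $\int_{0}^{2\pi}(\cos^{2}\varphi+p^{2}\sin^{2}\varphi)\,\mathrm{d}\varphi=\pi(1+p^{2})$, $\int_{0}^{\infty}r^{3}e^{-r^{2}/\sigma^{2}}\,\mathrm{d}r=\sigma^{4}/2$ and $\int_{-\infty}^{\infty}e^{-z^{2}/\sigma^{2}}\,\mathrm{d}z=\sqrt{\pi}\,\sigma$, one finds
\begin{equation*}
M=\frac{\pi^{3/2}}{2}A^{2}\left(1+p^{2}\right)\sigma^{5},\qquad
L=\pi^{3/2}A^{2}p\,\sigma^{5},\qquad
\frac{L}{M}=\frac{2p}{1+p^{2}},
\end{equation*}
so the angular momentum per particle grows monotonically from $0$ for the dipole ($p=0$) to $1$ for the vortex ($p=1$), consistent with the interpretation of $p$ as interpolating between these two limiting structures.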
After inserting the ansatz Eq.~(\ref{ansatz2}) into the Lagrangian density $\mathcal{L}$ and integrating over the whole 3D space, we obtain the Lagrangian $L$, which is a function of the variational parameters $\sigma$ and $A$ only. Looking for the extrema of $L$ leads to a set of algebraic relations among the variational variables. \subsection{The ``gravitational'' response} In this case, the amplitude $A$ can be expressed as a function of $p$ and $\sigma$ as follows (see also Appendix \ref{convolution}): \begin{equation}\label{a_grav} A^{2}=\frac{5\sqrt{2}\left(1+p^{2}\right)}{\frac{49p^{4}+86p^{2}+49}{120}\pi\sigma^{6} -\frac{9p^{4}+6p^{2}+9}{32}\sigma^{4}}, \end{equation} and the energy $E$ is given by \begin{equation} E= \frac{15\left[2 \pi \left(49 p^4+86p^2+49\right)-\frac{15 p^4-10 p^2-15}{2\sigma^2}\right]} {4\sigma^2 \pi \left( 49 p^4+86 p^2+49 \sigma^2 \pi\right)-135-90p^2-135p^4} \end{equation} Because of the difference in the denominator, a localized solution (with finite amplitude) exists only if its width is greater than the critical value $\sigma_{cr}\left(p\right)$, \begin{equation} \sigma_{cr}=\frac{3}{2}\sqrt{\frac{5}{\pi}} \sqrt{\frac{3p^{4}+2p^{2}+3}{49p^{4}+86p^{2}+49}}. \end{equation} This threshold is an obvious consequence of the competition between the nonlocal and local interaction potentials, because the second term in the denominator of Eq.~(\ref{a_grav}) is due to the local contact interaction. While the former, being attractive, leads to spatial localization, the latter, which is repulsive, tends to counteract it. For small $\sigma$, the kinetic energy term is large and can be compensated only if the particle density is high enough. In this regime, the local repulsive interaction prevails over the attraction, leading to an expansion of the condensate until the condition for its localization (i.e., $\sigma > \sigma_{cr}$) is satisfied. The rotation frequency $\Omega$ is then given by the following relation: \begin{eqnarray}\label{eq:Omega} \Omega &=& A^2\frac{\sigma^{2}p\sqrt{2}\left(4\sigma^{2}\pi-5\right)}{80}. \end{eqnarray} Interestingly, this expression is not sign-definite, which means that we can expect both positive and negative rotation frequencies. In particular, the azimuthon with the ``stationary'' width $\sigma_s=\sqrt{5/4\pi}\approx0.63$ has no angular velocity. Again, this effect is due to the competition between the nonlocal and local contributions to $\Omega$ for small $\sigma$. The nonlocal attractive interaction gives a positive contribution to $\Omega$, the repulsive local interaction a negative one. The expression for $\Omega$ {\em without} repulsion can be obtained by the outlined variational procedure or by an asymptotic expansion ($\sigma\rightarrow \infty$) up to $\mathcal {O} \left( 1/\sigma^2 \right)$, $\Omega = \frac{60(1+p^{2})p}{(49p^{4}+86p^{2}+49)\sigma^2}.$ As expected, this quantity is strictly positive. Both curves of $\Omega$ versus $\sigma$ (for $p=0.7$), with and without the contact interaction, are shown in the right panel of Fig.~\ref{fig:properties_1_over_r}. We can see that the repulsive local interaction kicks in for $\sigma<1.5$. \begin{figure} \includegraphics[width=8cm]{mass_over_energy_1_over_r.eps} \caption{(color online) The left panels show the dependence of the mass $M$ (top) and the width $\sigma$ (bottom) on the chemical potential $E$. Black curves show results from the variational approach including local repulsion, dashed blue curves are without contact interaction.
The right panel shows the angular frequency $\Omega$ as a function of $\sigma$. Black dots denote results obtained from numerical simulations of the GPE (\ref{eq:normalised_gpe}). All plots are for $p=0.7$.} \label{fig:properties_1_over_r} \end{figure} As we observe in Fig.~\ref{fig:properties_1_over_r}, the mass behaves like $\sqrt{E}$ close to $E=0$, since generally $M \sim A^2 \sigma^5$, and for $\sigma \rightarrow \infty$ one finds $E\sim 1/\sigma^2$ and $M\sim1/\sigma$, whereas for $\sigma \rightarrow 0$ one finds $E\sim 1/\sigma^2$ and $M\sim\sigma$. The fact that the mass can become zero for $E \rightarrow 0$ is a well-known property of very long-range kernels, such as the Coulomb potential in three dimensions \cite{Froelich:02:CMP}. For shorter-ranged responses (e.g., the Gaussian response, see Fig.~\ref{fig:properties_gauss}), the mass attains its minimum at a finite value of $E$. In the limit of a solely attractive local interaction ($E\rightarrow0$, $\sigma \rightarrow \infty$), the mass is a monotonically decreasing function of $E$. \subsection{The Gaussian response} Repeating each step of the previous calculations for the Gaussian nonlocal response, one ends up again with expressions for the amplitude $A$ and the rotation frequency $\Omega$, given by \begin{equation} \label{eq:A_for_gauss_with_repulsion} A^2= \frac {\sqrt{2}\left(1+p^2\right)\left(\sigma^2+1\right)^{9/2}} {\frac{\left(9p^4+9+6p^2\right)\sigma^{13}}{160} +\frac{\left(4p^2+1+p^4\right)\sigma^{11}}{20} +\frac{\left(1+p^2\right)^2\sigma^9}{8}-\delta F_{\rm rep}} \end{equation} with $F_{\rm rep}=(\sigma^2+1)^{9/2}(9p^4+9+6p^2)\sigma^4/160$ and \begin{equation} \label{eq:omega_for_gauss_with_repulsion} \Omega = A^2 \frac{p\left(\sigma^7-\delta(\sigma^2+1)^{7/2}\right)\sigma^2\sqrt{2}}{16(\sigma^2+1)^{7/2}}. \end{equation} As already pointed out in Sec.~\ref{model}, the additional parameter $\delta$ is necessary due to an additional degree of freedom of the Gaussian response, and it fixes the ratio between repulsion and attraction (see Appendix \ref{normgauss}). Obviously, for $\delta=0$, the repulsive local contact interaction vanishes. Here, $\sigma_{s} = \sqrt{\delta^{2/7} /\left(1-\delta^{2/7}\right)} \approx 0.60$ for $\delta=0.01$. \begin{figure} \includegraphics[width=8cm]{mass_over_energy_gauss.eps} \caption{(color online) Same as Fig.~\ref{fig:properties_1_over_r}, but for the Gaussian response given in Eq.~(\ref{eq:gauss}). Black curves are for $\delta=0.01$, dashed blue curves are without repulsion ($\delta=0$). All plots are for $p=0.7$.} \label{fig:properties_gauss} \end{figure} We observe that $E\sim1/\sigma^2$, $A\sim1/\sigma^2$, $M\sim\sigma\sim1/\sqrt{E}$ for both small and large $\sigma$. Compared to the ``gravitational'' response, the range of this potential is much shorter. Hence, when considering large $\sigma$, the Gaussian response acts more and more like a local attractive response and higher-order solitons become unstable (see the end of Sec.~\ref{numerics}). \section{Numerical results \label{numerics}} In this section, the predictions of the variational approach are confronted with direct numerical simulations. The approximate solitons resulting from the variational approach are used as initial conditions in our three-dimensional code to compute their time evolution. In general, we find stable evolution; in particular, the characteristic shape of the initial conditions is preserved.
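To make the simulation procedure concrete, the following is a minimal split-step Fourier sketch of Eq.~(\ref{eq:normalised_gpe}) with the Gaussian response Eq.~(\ref{eq:gauss}), written in Python/NumPy. It is our illustration of the class of scheme suited to this problem, not the actual production code used here; the grid size, box length, time step and first-order splitting are assumptions made for brevity.
\begin{verbatim}
import numpy as np

# Illustrative parameters (not the production values): grid, box, time step,
# contact-interaction strength delta, and ansatz parameters p, sigma.
N, Lbox, dt, delta, p, sigma = 64, 16.0, 1e-3, 0.01, 0.7, 1.0
x = np.linspace(-Lbox/2, Lbox/2, N, endpoint=False)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
KX, KY, KZ = np.meshgrid(k, k, k, indexing='ij')
K2 = KX**2 + KY**2 + KZ**2

# Variational azimuthon ansatz (ansatz2) at t = 0 with unit amplitude:
# A r [cos(phi) + i p sin(phi)] exp(...) = (x + i p y) exp(...).
psi = (X + 1j*p*Y)*np.exp(-(X**2 + Y**2 + Z**2)/(2*sigma**2))

# Gaussian interaction kernel of Eq. (eq:gauss), tabulated and FFT'd once;
# ifftshift moves the kernel origin to index 0 for FFT-based convolution.
kernel = (2*np.pi)**(-1.5)*np.exp(-(X**2 + Y**2 + Z**2)/2)
kernel_hat = np.fft.fftn(np.fft.ifftshift(kernel))

def theta(psi):
    """Nonlocal response Theta: convolution of |psi|^2 with the Gaussian
    kernel (FFT product times the volume element), minus the local
    repulsive term."""
    rho = np.abs(psi)**2
    conv = np.real(np.fft.ifftn(np.fft.fftn(rho)*kernel_hat))*dx**3
    return conv - delta*rho

for step in range(1000):
    psi = psi*np.exp(1j*dt*theta(psi))                       # nonlinear step
    psi = np.fft.ifftn(np.exp(-1j*dt*K2)*np.fft.fftn(psi))   # kinetic step
\end{verbatim}
For the ``gravitational'' response, the same scheme applies with the Gaussian kernel replaced by $1/|\mathbf{r}|$, whose singularity is most conveniently handled in Fourier space via its transform $4\pi/k^{2}$.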
For rotating azimuthons, the angular velocities will be measured and compared to the ones obtained in the previous section. In Fig.~\ref{dynamics} we illustrate the temporal evolution of three-dimensional solitons for the ``gravitational'' response, i.e., solutions to Eq.~(\ref{eq:normalised_gpe}). The first two rows present the classical stationary soliton solutions, the torus and the dipole, respectively. Due to imperfections of the initial conditions obtained from the variational approach, we observe slight oscillations upon evolution, in particular for the dipole solutions (second row). Those oscillations are not present if we use numerically {\em exact} solutions (obtained from an iterative solver \cite{Skupin:pre:73:066603}) as initial conditions (not shown). In the last row of Fig.~\ref{dynamics} we show the evolution of an azimuthon ($p=0.7, \sigma=1$). The rotation of the amplitude profile is clearly visible. Again, we observe radial oscillations due to the imperfect initial condition, but the solution is robust. \begin{figure} \includegraphics[width=8cm]{3d_intensities_orient.eps} \caption{(color online) Dynamics of the three-dimensional stable solitons in a gravity-like BEC. Iso-surfaces of the normalized density $\left|\psi\right|^{2}$ are depicted for different evolution times; the interior density distribution is represented in grey-scales. The initial variational parameters used are $\sigma=1$ and $p=1$ (torus, iso-density surface at $\left|\psi\right|^{2}=0.76$) for the upper row, $p=0$ (dipole, iso-density surface at $\left|\psi\right|^{2}=1.41$) for the middle one, and finally $p=0.7$ (azimuthon, iso-density surface at $\left|\psi\right|^{2}=0.86$) for the lower one. The sense of the rotation ($\Omega=0.64$) is indicated by the arrows.} \label{dynamics} \end{figure} Figure~\ref{fig:1_r_sigma_06} shows the azimuthon rotation frequency as a function of the modulation parameter $p$. Solid lines represent predictions from the variational model, black dots represent rotation frequencies obtained from numerical simulations. As expected from two-dimensional nonlocal models \cite{Lopez:ol:31:1100,Skupin:08:oe}, the modulus of $\Omega$ increases with $p$. Our variational calculations predict that for small widths $\sigma$, when the repulsive interaction comes into play, the sense of the azimuthon rotation changes. In particular, we found a ``stationary'' width $\sigma_s$ at which the rotation frequency $\Omega$ vanishes. Indeed, full model simulations confirm this property: the first row of Fig.~\ref{fig:1_r_standing} and the left panel of Fig.~\ref{fig:1_r_sigma_06} show a very slow rotation with opposite orientation, so that the numerical stationary width lies between $0.6$ and $0.61$. Hence, we propose that tuning the strength of the contact interaction in experiments allows control of the azimuthon rotation. \begin{figure} \includegraphics[width=8cm]{plot_num_and_an_result.eps} \caption{(color online) Azimuthon rotation frequency $\Omega$ vs modulation parameter $p$ in a gravity-like BEC, for $\sigma=0.6\approx\sigma_s$ (left panel) and $\sigma=3$ (right panel). Black curves show results from the variational approach including local repulsion, dashed blue curves are without contact interaction. Black dots denote results obtained from numerical simulations of the GPE (\ref{eq:normalised_gpe}).
\label{fig:1_r_sigma_06}} \end{figure} \begin{figure} \includegraphics[width=8cm]{standing_and_other_orientation.eps} \caption{(color online) The upper row shows iso-density surfaces at $\left|\psi\right|^{2}=7.63$ for the very slowly rotating ($\Omega\approx0$) azimuthon with $p=0.7$ and $\sigma=0.61$. The lower row shows a fast counter-rotating ($\Omega=-2.24$) azimuthon with $p=0.7$, $\sigma=0.5$, and an iso-density surface at $\left|\psi\right|^{2}=32$. Same plot style as in Fig.~\ref{dynamics}. \label{fig:1_r_standing}} \end{figure} Furthermore, we observe that very narrow azimuthons ($\sigma \rightarrow \sigma_{cr}$) have negative $\Omega$ and rotate very fast (see Fig.~\ref{fig:properties_1_over_r}). This may be interesting for potential experiments, since the duration of BEC experiments is typically restricted to several hundred milliseconds. However, for azimuthons very close to $\sigma_{cr}$, the ansatz function Eq.~(\ref{ansatz2}) becomes less appropriate, and using variational initial conditions leads to very strong oscillations upon evolution, up to the point where it is no longer possible to properly identify the rotation frequency $\Omega$. Concerning the Gaussian nonlocal response, we find very similar evolution scenarios. The results shown in the left panel of Fig.~\ref{fig:comparison_numerical_and_an_results_gaussian_response} for the Gaussian response underline the observations from above; in particular, we also find nonrotating azimuthons at $\sigma=\sigma_s$. However, there are some important differences. First, it seems that our ansatz Eq.~(\ref{ansatz2}) is better suited to the Gaussian response: the radial oscillations we observed with the ``gravitational'' response are still present, but much weaker. The second difference is due to the fact that the Gaussian response has a much shorter range than the ``gravitational'' one. For large $\sigma$, the Gaussian kernel acts like an attractive local response. As a consequence, higher-order solitons become unstable in the sense that the two humps spiral out. We observe unstable evolution in numerical simulations for $\sigma\gtrsim0.9$ at $p=0.7$. The right panel of Fig.~\ref{fig:comparison_numerical_and_an_results_gaussian_response} visualizes the cause of this instability: for increasing $\sigma$, the resulting convolution term $\Theta$ [Eq.~(\ref{eq:gauss})], which is responsible for the self-trapping, becomes smaller in amplitude and asymmetric in the rotation plane, which eventually leads to the destabilization of the azimuthon. \begin{figure} \includegraphics[width=8cm]{potential_and_gauss_with_rep.eps} \caption{(color online) Azimuthon rotation frequency $\Omega$ vs modulation parameter $p$ in a BEC with the Gaussian nonlocal response, for $\sigma=0.6\approx\sigma_s$ (left panel). Black curves show results from the variational approach including local repulsion ($\delta=0.01$), dashed blue curves are without contact interaction ($\delta=0$). Black dots denote results obtained from numerical simulations (Eq.~\ref{eq:gauss}, $\delta=0.01$). The right panel shows profiles of the convolution term $\Theta$ [Eq.~(\ref{eq:gauss})] for $\sigma=0.6$ and $\sigma=1$. Solid lines correspond to profiles along the major axis of the resulting ellipsoid ($z=0$, $\varphi=0$), dotted lines to those along the minor axis ($z=0$, $\varphi=\pi/2$).
\label{fig:comparison_numerical_and_an_results_gaussian_response}} \end{figure} \section{Conclusion} We studied the formation of rotating localized structures in Bose-Einstein condensates with different nonlocal interaction potentials. We successfully used variational techniques to investigate their dynamics and showed numerically that such localized structures are indeed robust objects which persist over long evolution times, even if the initial conditions differ significantly from the exact soliton solutions. For rotating solitons (azimuthons), we derived analytical expressions for the angular velocity, in excellent agreement with rigorous three-dimensional numerical simulations. Furthermore, we showed that it is possible to control the rotation frequency by tuning the local contact interaction, which is routinely possible with Feshbach resonance techniques. In particular, we can change the sense of rotation, and we can find non-rotating azimuthons. We also identify parameter regions with particularly fast rotation, which may be important for the potential experimental observation of such solutions. By using different nonlocal kernel functions, we showed that rotating soliton solutions are generic structures in nonlocal GPEs. Hence, we conjecture that the phenomena observed in this paper are rather universal and apply to a general class of attractive nonlocal interaction potentials.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} A nonlocal quantum field theory and quantum gravity theory has been formulated that leads to a finite, unitary and locally gauge-invariant theory~\cite{Moffat,Moffat2,Moffat3,Woodard,Kleppe,Cornish,Cornish2,Hand,Woodard2,Clayton,Paris,Troost,Joglekar,Moffat4}. For quantum gravity, the finiteness of quantum loops avoids the problem of the non-renormalizability of local quantum gravity~\cite{Veltman,Sagnotti}. The finiteness of the nonlocal quantum field theory derives from the fact that factors of $\exp[{\cal K}(p^2)/\Lambda^2]$ attached to the propagators suppress any ultraviolet divergences in Euclidean momentum space, where $\Lambda$ is an energy scale factor. An important feature of the field theory is {\it that only the quantum loop graphs have nonlocal properties}; the classical tree graph theory retains full causal and local behavior. Consider first the 4-dimensional spacetime to be approximately flat Minkowski spacetime. Let us denote by $f$ a generic local field and write the standard local Lagrangian as \begin{equation} {\cal L}[f]={\cal L}_F[f]+{\cal L}_I[f], \end{equation} where ${\cal L}_F$ and ${\cal L}_I$ denote the free part and the interaction part of the action, respectively, and \begin{equation} {\cal L}_F[f]=\frac{1}{2}f_i{\cal K}_{ij}f_j. \end{equation} In a gauge theory, the action would be the Becchi-Rouet-Stora-Tyutin (BRST) gauge-fixed action, including the ghost fields required to fix the gauge~\cite{Becchi,Tyutin}. The kinetic operator ${\cal K}$ is fixed by defining a Lorentz-invariant distribution operator: \begin{equation} \label{distribution} {\cal E}\equiv \exp\biggl(\frac{{\cal K}}{2\Lambda^2}\biggr) \end{equation} and the operator: \begin{equation} {\cal O}=\frac{{\cal E}^2-1}{{\cal K}}=\int_0^1\frac{d\tau}{\Lambda^2}\exp\biggl(\tau\frac{{\cal K}}{\Lambda^2}\biggr). \end{equation} The regularized interaction Lagrangian takes the form \begin{equation} {\hat {\cal L}}_{I}=-\sum_n(-g)^nf{\cal I}[{\cal F}^n,{\cal O}^{(n-1)}]f, \end{equation} where $g$ is a coupling constant and ${\cal F}$ is a vertex function form factor. The decomposition of ${\cal I}$ at order $n=2$ is such that the operator ${\cal O}$ splits into two parts, ${\cal F}^2/{\cal K}$ and $-1/{\cal K}$. For Compton amplitudes, the first such term cancels the contribution from the corresponding lower-order channel, while the second term is just the usual local field theory result for that channel. The action is then invariant under an extended nonlocal gauge transformation. The precise results for QED were described in ref.~\cite{Moffat2}. The regularized action is found by expanding ${\hat{\cal L}}_I$ in an infinite series of interaction terms. Since ${\cal F}$ and ${\cal O}$ are entire functions of ${\cal K}$, the higher interactions are also entire functions of ${\cal K}$. This is important for preserving the Cutkosky rules and unitarity, for an entire function does not possess any singularities in the finite complex momentum plane. The Feynman rules are obtained as follows: every leg of a diagram is connected to a local propagator, \begin{equation} \label{regpropagator} D(p^2)=\frac{i}{{\cal K}(p^2)+i\epsilon} \end{equation} and every vertex has a form factor ${\cal F}^k(p^2)$, where $p$ is the momentum attached to the propagator $D(p^2)$, which has the form \begin{equation} {\cal F}^k(p^2)\equiv{\cal E}^k(p^2)=\exp\biggl(\frac{{\cal K}}{2\Lambda_k^2}\biggr), \end{equation} where $k$ denotes the particle nature of the external leg in the vertex.
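As a simple numerical illustration of how these form factors render loop integrals finite -- our toy example, not part of the original formalism -- consider a schematic Euclidean one-loop radial integral with a massless propagator, $\int d^4q\,D(q^2)\propto\int q\,dq$, which grows quadratically with the cutoff; attaching two vertex factors ${\cal E}^2(q^2)=\exp(-q^2/\Lambda^2)$ makes it converge to $\Lambda^2/2$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Schematic Euclidean loop integrand: q^3 * (1/q^2) = q (massless
# propagator, angular factors dropped). With two vertex form factors
# exp(-q^2/Lambda^2) the integral saturates at Lambda^2/2 instead of
# growing with the cutoff. Lambda = 1 in these units (our choice).
Lam = 1.0
for cutoff in (10.0, 100.0, 1000.0):
    bare = quad(lambda q: q, 0.0, cutoff)[0]
    regulated = quad(lambda q: q * np.exp(-q**2 / Lam**2), 0.0, cutoff)[0]
    print(f"cutoff={cutoff:7.1f}  bare={bare:11.1f}  "
          f"regulated={regulated:.6f}")
\end{verbatim}
The regulated column is already $\Lambda^2/2=0.5$ at the smallest cutoff, which is why the loop amplitudes no longer depend on an ultraviolet cutoff.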
The formalism is set up in Minkowski spacetime, and loop integrals are formally defined in Euclidean space by performing a Wick rotation. This facilitates the analytic continuation; the whole formalism could from the outset be developed in Euclidean space. We will demonstrate how the nonlocal transcendental entire function in momentum space, which generates the finite and unitary standard model (SM) and quantum gravity (QG) loops to all orders of perturbation theory, produces an exponential suppression of the very large vacuum density and cosmological constant estimated in local quantum field theory. This can solve the severe fine-tuning cosmological constant problem, avoiding a naturalness problem and the need for an anthropic or multiverse solution. \section{Nonlocal Quantum Gravity} We expand the metric around a smooth fixed background spacetime: \begin{equation} \label{background} g_{\mu\nu}={\bar g}_{\mu\nu}+h_{\mu\nu}. \end{equation} By restricting ourselves to perturbation theory and a fixed geometrical background, we lose general covariance (diffeomorphism invariance). However, we still maintain gauge invariance of the gravitational calculations under the gauge group of the fixed background metric; e.g., for a fixed Minkowski metric background the action is invariant under local Poincar\'{e} transformations, while for a de Sitter background metric the action will be invariant under the group of de Sitter transformations. Although we lose general covariance in our perturbation calculations of gravitational scattering amplitudes, the basic physical properties such as finiteness of loop amplitudes, gauge invariance and unitarity are expected to lead to correct and reliable physical conclusions. For the sake of simplicity, we shall only consider expansions about Minkowski spacetime. Let us define ${\bf g}^{\mu\nu}=\sqrt{-g}g^{\mu\nu}$, where ${\bf g}= {\rm det}({\bf g}^{\mu\nu})$ and $\partial_\rho{\bf g}={\bf g}_{\alpha\beta}\partial_\rho{\bf g}^{\alpha\beta}{\bf g}$. We can then write the local gravitational action $S_{\rm grav}$ in the form~\cite{Goldberg}: \begin{equation} \label{action} \begin{split} S_{\rm grav}=\int d^4x{\cal L}_{\rm grav}=\frac{1}{2\kappa^2}\int d^4x \Bigl[&({\bf g}^{\rho\sigma}{\bf g}_{\lambda\mu} {\bf g}_{\kappa\nu} -\frac{1}{2}{\bf g}^{\rho\sigma} {\bf g}_{\mu\kappa}{\bf g}_{\lambda\nu} -2\delta^\sigma_\kappa\delta^\rho_\lambda{\bf g}_{\mu\nu})\partial_\rho{\bf g}^{\mu\kappa} \partial_\sigma{\bf g}^{\lambda\nu} \\ &-\frac{2}{\alpha}\partial_\mu{\bf g}^{\mu\nu}\partial_\kappa{\bf g}^{\kappa\lambda} \eta_{\nu\lambda} +{\bar C}^\nu\partial^\mu X_{\mu\nu\lambda}C^\lambda\Bigr], \end{split} \end{equation} where $\kappa^2=32\pi G$ and we have added a gauge-fixing term with the parameter $\alpha$; $C^\mu$ is the Faddeev-Popov ghost field and $X_{\mu\nu\lambda}$ is a differential operator: \begin{equation} X_{\mu\nu\lambda}=\kappa(-\partial_\lambda\gamma_{\mu\nu} +2\eta_{(\mu\lambda}\gamma_{\kappa\nu)}\partial^\kappa) +(\eta_{(\mu\lambda}\partial_{\nu)}-\eta_{\mu\nu}\partial_\lambda). \end{equation} We expand the local interpolating graviton field ${\bf g}^{\mu\nu}$ as \begin{equation} {\bf g}^{\mu\nu}=\eta^{\mu\nu}+\kappa\gamma^{\mu\nu}+O(\kappa^2). \end{equation} Then, \begin{equation} {\bf g}_{\mu\nu}=\eta_{\mu\nu}-\kappa\gamma_{\mu\nu} +\kappa^2{\gamma_\mu}^\alpha{\gamma_\alpha}_\nu+O(\kappa^3). \end{equation} The gravitational Lagrangian density is expanded as \begin{equation} {\cal L}_{\rm grav}={\cal L}^{(0)}+\kappa{\cal L}^{(1)} +\kappa^2{\cal L}^{(2)}+\ldots.
\end{equation} In the limit $\alpha\rightarrow\infty$, the Lagrangian density ${\cal L}_{\rm grav}$ is invariant under the gauge transformation \begin{equation} \delta\gamma_{\mu\nu}=X_{\mu\nu\lambda}\xi^\lambda, \end{equation} where $\xi^\lambda$ is an infinitesimal vector quantity. To implement nonlocal quantum gravity, we introduce the ``stripping'' graviton propagator in the gauge $\alpha=-1$: \begin{equation} {\tilde D}_{\alpha\beta\mu\nu}(p) =\frac{1}{2}(\eta_{\alpha\mu}\eta_{\beta\nu}+\eta_{\alpha\nu}\eta_{\beta\mu} -\eta_{\alpha\beta}\eta_{\mu\nu}){\cal O}_0(p), \end{equation} while the ghost stripping propagator is given by \begin{equation} {\tilde D}^{\rm ghost}_{\mu\nu}(p)=\eta_{\mu\nu}{\cal O}_0(p), \end{equation} where \begin{equation} {\cal O}_0(p)=\frac{{\cal E}^2_0-1}{p^2}. \end{equation} We choose ${\cal E}_0^2=\exp(-p^2/\Lambda_G^2)$, and we see that the local propagator can be obtained from the nonlocal propagator minus the stripping propagator: \begin{equation} \frac{1}{p^2}=\frac{\exp(-p^2/\Lambda_G^2)}{p^2}-{\cal O}_0(p). \end{equation} The stripping propagators are used to guarantee that the tree-level graviton-graviton scattering amplitudes are identical to the local, point-like tree-level amplitudes, which couple only to physical gravitons. The graviton propagator in the fixed de Donder gauge $\alpha=-1$~\cite{Donder} in momentum space is given by \begin{equation} D_{\mu\nu\rho\sigma}(p) =\frac{\eta_{\mu\rho}\eta_{\nu\sigma}+\eta_{\mu\sigma}\eta_{\nu\rho} -\eta_{\mu\nu}\eta_{\rho\sigma}}{p^2+i\epsilon}, \end{equation} while the graviton ghost propagator in momentum space is \begin{equation} D^{\rm ghost}_{\mu\nu}(p)=\frac{\eta_{\mu\nu}}{p^2+i\epsilon}. \end{equation} The on-shell vertex functions are unaltered from their local antecedents, while virtual particles are attached to nonlocal vertex function form factors. This destroys the gauge invariance of, e.g., graviton-graviton scattering and requires an iteratively defined series of ``stripping'' vertices to ensure the decoupling of all unphysical modes. Moreover, the local gauge transformations have to be extended to nonlinear, nonlocal gauge transformations to guarantee the overall invariance of the regularized amplitudes. The quantum gravity perturbation theory is invariant under generalized, nonlinear, field-representation-dependent transformations, and it is finite to all orders. At the tree-graph level, all unphysical polarization states are decoupled and nonlocal effects occur only in graviton and graviton-matter loop graphs. Because the gravitational tree graphs are purely local, there is a well-defined classical GR limit. The finite quantum gravity theory is well-defined in four real spacetime dimensions or in any higher $D$-dimensional spacetime. We quantize by means of the path integral operation \begin{equation} \langle 0\vert T^*(O[{\bf g}])\vert 0\rangle_{\cal E}=\int[D{\bf g}] \mu[{\bf g}]({\rm gauge\, fixing}) O[{\bf g}]\exp(i\hat S_{\rm grav}[{\bf g}]). \end{equation} The quantization is carried out in the functional formalism by finding a measure factor $\mu[{\bf g}]$ to make $[D{\bf g}]$ invariant under the classical symmetry. Because we have extended the gauge symmetry to nonlinear, nonlocal transformations, we must also supplement the quantization procedure with an invariant measure \begin{equation} {\cal M}=\Delta({\bf g}, {\bar C}, C)D[{\bf g}_{\mu\nu}]D[{\bar C}_\lambda]D[C_\sigma] \end{equation} such that $\delta {\cal M}=0$.
\section{The Cosmological Constant Problem} The cosmological constant problem is considered to be the most severe hierarchy problem in modern physics~\cite{Weinberg,Polchinski,Martin,Burgess}. We can define an effective cosmological constant \begin{equation} \lambda_{\rm eff}=\lambda_0+\lambda_{\rm vac}, \end{equation} where $\lambda_0$ is the `bare' cosmological constant in Einstein's classical field equations, and $\lambda_{\rm vac}$ is the contribution that arises from the vacuum density, $\lambda_{\rm vac}=8\pi G\rho_{\rm vac}$. The observational bound on $\rho_{\rm vac}$ is \begin{equation} \label{vacbound} \rho_{\rm vac} \leq 10^{-47}\, ({\rm GeV})^4, \end{equation} corresponding to the bound on $\lambda_{\rm vac}$: \begin{equation} \label{lambdabound} \lambda_{\rm vac} \leq 10^{-84}\,{\rm GeV}^2. \end{equation} Zeldovich~\cite{Zeldovich} showed that the zero-point vacuum fluctuations must have a Lorentz-invariant form \begin{equation} T_{{\rm vac}\,\mu\nu}=\lambda_{\rm vac}g_{\mu\nu}, \end{equation} consistent with the equation of state $\rho_{\rm vac}=-p_{\rm vac}$. Thus, the vacuum within the framework of particle quantum physics has properties identical to the cosmological constant. In quantum theory, the second quantization of a classical field of mass $m$, treated as an ensemble of oscillators each with a frequency $\omega(k)$, leads to a zero-point energy $E_0=\sum_k\frac{1}{2}\hbar\omega(k)$. An evaluation of the vacuum density obtained from a summation of the zero-point energy modes gives \begin{equation} \rho_{\rm vac} =\frac{1}{(2\pi)^2}\int_0^{M_c}dkk^2(k^2+m^2)^{1/2} \sim\frac{M^4_c}{16\pi^2}, \end{equation} where $M_c$ is the cutoff. Taking $M_c\sim M_{\rm Planck}\sim 10^{19}$ GeV, we get a $\rho_{\rm vac}$ that is $\sim 122$ orders of magnitude greater than the observed value. Already at the level of the standard model, we get $\rho_{\rm vac}\sim (10^2\,{\rm GeV})^4$, which is $55$ orders of magnitude larger than the bound (\ref{vacbound}). To agree with the experimental bound (\ref{vacbound}), we would have to invoke a very finely tuned cancellation of $\lambda_{\rm vac}$ with the `bare' cosmological constant $\lambda_0$, which is generally conceded to be theoretically unacceptable. We adopt a model consisting of a photon field $A_\mu$ coupled to gravity. The effective field Lagrangian density is \begin{equation} \label{LA} {\cal L}_A=-\frac{1}{4}(-{\bf g})^{-1/2}{\bf g}^{\mu\nu}{\bf g}^{\alpha\beta}F_{\mu\alpha}F_{\nu\beta}, \end{equation} where \begin{equation} F_{\mu\nu}=\partial_\nu A_\mu-\partial_\mu A_\nu. \end{equation} We have \begin{equation} {\cal L}_A^{(0)}=-\frac{1}{4}\eta^{\mu\nu}\eta^{\alpha\beta}F_{\mu\alpha}F_{\nu\beta}, \end{equation} and \begin{equation} {\cal L}_A^{(1)}=-\frac{1}{4}\biggl(\eta^{\mu\nu}\gamma^{\alpha\beta}+\eta^{\alpha\beta}\gamma^{\mu\nu} -\frac{1}{2}\eta^{\mu\nu}\eta^{\alpha\beta}\gamma\biggr)F_{\mu\alpha}F_{\nu\beta}. \end{equation} We include in the Lagrangian density ${\cal L}_A^{(0)}$ an additional gauge-fixing piece $-\frac{1}{2}(\partial^\mu A_\mu)^2$. For this particular gauge, no Faddeev-Popov ghost particles or diagrams contribute to the lowest-order photon-graviton self-energy calculation. The local photon propagator has the form \begin{equation} D^{\rm A}_{\mu\nu}(p)=\frac{\eta_{\mu\nu}}{p^2+i\epsilon}.
\end{equation} The graviton-$A$-$A$ vertex in momentum space is given by \begin{align} {\cal V}_{\alpha\beta\lambda\sigma}(q_1,q_2) =\eta_{\lambda\sigma} q_{1(\alpha}q_{2\beta)}-\eta_{\sigma(\beta}q_{1\alpha)}q_{2\lambda} -\eta_{\lambda(\alpha}q_{1\sigma}q_{2\beta)}\nonumber\\ +\eta_{\sigma(\beta}\eta_{\alpha)\lambda}q_1{\cdot q_2} -\frac{1}{2}\eta_{\alpha\beta}(\eta_{\lambda\sigma} q_1{\cdot q_2}-q_{1\sigma}q_{2\lambda}), \end{align} where $q_1,q_2$ denote the momenta of the two photon lines connected to the graviton with momentum $p$. The lowest-order correction to the graviton vacuum loop will have the form \begin{align} \label{PolV} \Pi^{\rm GA}_{\mu\nu\rho\sigma}(p) =-\kappa^2\exp(-p^2/\Lambda_G^2)\int d^4q {\cal V}_{\mu\nu\lambda\alpha}(p,q){\cal F}(q^2) D^{A\,\lambda\delta}(q)\nonumber\\ \times{\cal V}_{\rho\sigma\kappa\delta}(p,q-p){\cal F}((q-p)^2) D^{A\,\alpha\kappa}(q-p). \end{align} Let us adopt the entire functions ${\cal F}_{\rm SM}(p^2)=\exp(-p^2/2\Lambda_{\rm SM}^2)$ and ${\cal F}_G(p^2)=\exp(-p^2/2\Lambda_G^2)$ in Euclidean momentum space, scaled by the SM energy scale $\Lambda_{\rm SM}$ and the gravitational energy scale $\Lambda_G$, respectively. We obtain \begin{align} \label{Ptensor} \Pi^{\rm GA}_{\mu\nu\rho\sigma}(p)=-\kappa^2\exp(-p^2/\Lambda_G^2) \int\frac{d^4q\eta^{\lambda\delta}\eta^{\alpha\kappa}}{q^2(q-p)^2}{\cal V}_{\mu\nu\lambda\alpha}(p,q)\nonumber\\ \times{\cal V}_{\rho\sigma\kappa\delta}(p,q-p)\exp\biggl(-q^2/2\Lambda^2_{\rm SM}\biggr) \exp\biggl(-(q-p)^2/2\Lambda^2_{\rm SM}\biggr). \end{align} As usual, we must add to (\ref{Ptensor}) the contributions from the tadpole photon-graviton diagrams and the invariant measure diagram. We observe that from power counting of the momenta in the integral (\ref{Ptensor}), we obtain \begin{align} \label{VacPol} \Pi^{\rm GA}_{\mu\nu\rho\sigma}(p)\sim \kappa^2\exp(-p^2/\Lambda_G^2)N_{\mu\nu\rho\sigma}(\Lambda_{\rm SM},p^2), \end{align} where $N_{\mu\nu\rho\sigma}(\Lambda_{\rm SM},p^2)$ is a finite contribution to $\Pi^{\rm GA}_{\mu\nu\rho\sigma}(p)$. The trace ${\Pi^{{\rm GA}\,\mu}}_\mu{}^\sigma{}_\sigma(p)$ vanishes at $p^2=0$, as it should because of gauge invariance at this order and the masslessness of the graviton. The photon vertex form factor, {\it when coupled to SM gauge bosons}, will have the form \begin{equation} {\cal E}^{\rm SM}(p^2) =\exp\biggl(-p^2/2\Lambda_{SM}^2\biggr). \end{equation} If we choose $\Lambda_{SM}\gtrsim 1$ TeV, then we will reproduce the low-energy SM experimental results, and ${\cal F}^{\rm SM}(p^2)$ becomes ${\cal F}^{\rm SM}(0)=1$ on the mass shell $p^2=0$~\cite{Moffat,Moffat2}. \section{Cosmological Constant Problem and Quantum Gravity} Can our quantum gravity theory solve the cosmological constant problem? The cosmological constant is a non-derivative coupling in the Lagrangian density ${\cal L}_{\rm grav}$: \begin{equation} \label{lambda} {\cal L}_\lambda=-\frac{4}{\kappa^2}\lambda\sqrt{-g}. \end{equation} In diagrammatic terms, it is a sum of zero-momentum and zero-temperature vacuum fluctuation loops coupled to external gravitons. The problem is to explain why the magnitude of $\lambda$ is suppressed to be zero or a very small value when compared to observation.
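As a quick numerical sanity check of the orders of magnitude quoted above -- a back-of-the-envelope sketch we add for illustration; only the exponents matter at this level -- one can compare the zero-point estimate $\rho_{\rm vac}\sim M_c^4/16\pi^2$ with the observational bound (\ref{vacbound}):
\begin{verbatim}
import math

rho_obs = 1e-47  # observational bound on the vacuum density, GeV^4
for name, M_c in (("Planck cutoff", 1e19), ("SM cutoff", 1e2)):  # GeV
    rho = M_c**4 / (16 * math.pi**2)  # rho_vac ~ M_c^4 / (16 pi^2)
    print(f"{name}: ~10^{math.log10(rho / rho_obs):.0f} above the bound")
\end{verbatim}
This reproduces the $\sim\!122$ and $\sim\!55$ orders of magnitude quoted above to within the precision of such an estimate (the exact exponent depends on whether the $1/16\pi^2$ prefactor and the precise value of $M_{\rm Planck}$ are kept).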
Let us initially consider the basic lowest-order vacuum fluctuation diagram computed from the matrix element in flat Minkowski spacetime: \begin{equation} \begin{split} \rho_{\rm vac}\sim\rho^{(2)}_{\rm vac}\sim g^2\int d^4pd^4p'd^4k\,\delta(k+p-p')\delta(k+p-p') \frac{1}{k^2+m^2}\,{\rm Tr}\biggl(\frac{i\gamma^\sigma p_\sigma-m_f}{p^2+m_f^2}\gamma^\mu\frac{i\gamma^\sigma p'_\sigma-m_f}{p^{'2}+m_f^2}\gamma_\mu\biggr)\\ \times\exp\biggl[-\biggl(\frac{p^2+m_f^2}{2\Lambda^2_{SM}}\biggr) -\biggl(\frac{p'^2+m^2_f}{2\Lambda^2_{SM}}\biggr) -\biggl(\frac{k^2+m^2}{2\Lambda^2_{SM}}\biggr)\biggr], \end{split} \end{equation} where $g$ is a coupling constant associated with the standard model. We have considered a closed loop made of an SM fermion of mass $m_f$, an anti-fermion of the same mass and an internal SM boson propagator of mass $m$; the scale is $\Lambda_{\rm SM}\sim 1$ TeV. This leads to the result \begin{equation} \begin{split} \rho_{\rm vac}\sim\rho^{(2)}_{\rm vac}\sim 16\pi^4g^2\delta^4(a)\int_0^{\infty}dpp^3\int_0^{\infty}dp'p^{'3} \biggl[\frac{-P^2+p^2+p^{'2}+4m_f^2}{(P+a)(P-a)}\biggr] \frac{1}{(p^2+m_f^2)(p'^2+m_f^2)}\\ \times \exp\biggl[-\frac{(p^2+p'^2+2m^2_f)} {2\Lambda^2_{SM}}-\frac{P^2+m^2}{2\Lambda^2_{SM}}\biggr], \end{split} \end{equation} where $P=p-p'$ and $a$ is an infinitesimal constant which formally regularizes the infinite volume factor $\delta^4(0)$. We see that $\rho_{\rm vac}\sim \Lambda_{\rm SM}^4$. By choosing our nonlocal energy scale for the standard model, $\Lambda_{\rm NL}\sim\Lambda_{\rm SM}\sim 1\, {\rm TeV}=10^3\,{\rm GeV}$, we have reduced the magnitude of the vacuum density by 64 orders of magnitude compared to having $\Lambda_{\rm SM}\sim \Lambda_{\rm Planck}\sim 10^{19}\,{\rm GeV}$. In Minkowski spacetime, the sum of all {\it disconnected} vacuum diagrams is a constant factor $C$ in the scattering S-matrix, $S'=SC$. Since the S-matrix is unitary, $\vert S'\vert^2=1$, we must conclude that $\vert C\vert^2=1$, and all the disconnected vacuum graphs can be ignored. This result is also known to follow from the Wick ordering of the field operators. However, due to the equivalence principle, {\it gravity couples to all forms of energy}, including the vacuum energy density $\rho_{\rm vac}$, so we can no longer ignore these virtual quantum fluctuations in the presence of a non-zero gravitational field. \begin{figure}[t] \begin{center}{\includegraphics[width=\linewidth]{selfEnergyGrav}}\end{center} \caption{Electron vacuum fluctuation loops coupled to gravitons generating a vacuum density.\label{fig:selfEnergyGrav.eps}} \end{figure} We can view the cosmological constant as a non-derivative coupling of the form $\lambda_0\sqrt{-g}$ in the Einstein-Hilbert action (see Fig.~1). Quantum corrections to $\lambda_0$ come from loops formed from massive SM states, coupled to external graviton lines at essentially zero momentum. The massive SM states are far off-shell. Experimental tests of the standard model involving gravitational couplings to the SM states are very close to being on-shell. Important quantum corrections to $\lambda_0$ are generated by a huge extrapolation to a region in which gravitons couple to SM particles which are far off-shell. To reduce the size of the vacuum density to agree with the observational bound, we must discover how gravity can couple to the vacuum energy density and generate an exponential damping of the very large $\rho_{\rm vacSM}$. This exponential suppression of $\rho_{\rm vacSM}$ can be produced by nonlocal QG.
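To quantify the suppression that is required -- a short estimate we add for clarity, using $\rho_{\rm vacSM}\sim\Lambda_{\rm SM}^4\sim10^{12}\,{\rm GeV}^4$ from above and the bound (\ref{vacbound}) -- the graviton form factor must supply
\begin{equation*}
\exp\biggl(-\frac{{\bar p}_G^2}{2\Lambda_G^2}\biggr)\sim\frac{10^{-47}\,{\rm GeV}^4}{10^{12}\,{\rm GeV}^4}=10^{-59}
\quad\Longrightarrow\quad
\frac{{\bar p}_G}{\Lambda_G}\sim\sqrt{2\times59\,\ln 10}\approx 16.5,
\end{equation*}
which anticipates the specific choice of ${\bar p}_G$ made below.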
There will be virtual graviton legs connected to the quantum gravity-standard model loops by a nonlocal vertex entire function, $\exp(-p_G^2/2\Lambda_G^2)$. We see from (\ref{VacPol}) that the standard model vacuum polarization and vacuum density are reduced by the nonlocal graviton vertex interaction: \begin{equation} \rho_{\rm vac}\sim \exp(-{\bar p_G}^2/2\Lambda_G^2)\rho_{\rm vacSM}, \end{equation} where ${\bar p_G}=\langle p_G^2\rangle^{1/2}$ is the root-mean-square of the virtual graviton momentum $p_G$. If we choose ${\bar p_G}=16.49\Lambda_G$, then we have \begin{equation} \label{VacSupp} \rho_{\rm vac}\sim \exp(-{\bar p_G}^2/2\Lambda_G^2)\rho_{\rm vacSM}\sim 10^{-47}\,{\rm GeV}^4, \end{equation} and we have reduced the cosmological constant contribution, $\lambda_{\rm vac}=8\pi G\rho_{\rm vac}$, to the observed bound $\lambda_{\rm vacObs}\leq 10^{-84}\,{\rm GeV}^2$, where we have used the nonlocal energy scale $\Lambda_{SM}\sim 1$ TeV in the coupling to standard model particles. The size of $\Lambda_G$ should be small enough to allow for soft graviton momenta. This can be achieved by choosing $\Lambda_G < 1$ MeV, so that the mean virtual graviton momentum ${\bar p}_G=16.5\Lambda_G < 17$ MeV. The size of the exponential suppression of the vacuum energy in (\ref{VacSupp}) can be related to a violation of the weak equivalence principle through the electrostatic energy associated with the vacuum polarization of atomic hydrogen coupled to external gravitons~\cite{Polchinski,Martin}, so the choice of $\Lambda_G$ can play an important role. However, the violation of the equivalence principle can be affected by the material environment, namely, by the difference between the atomic matter environment and the vacuum energy density in empty space at extragalactic distance scales. \section{Conclusions} The nonlocal formulation of quantum gravity provides a finite, unitary and locally gauge-invariant perturbation theory. The vertex functions associated with point-like interactions in local quantum field theory are replaced by smeared-out nonlocal vertex functions controlled by transcendental entire functions. The choice of entire function in momentum space, $\exp(-p^2/2\Lambda^2)$, with $\Lambda=\Lambda_{\rm SM}\sim 1$ TeV for the standard model and $\Lambda=\Lambda_G$ for quantum gravity, guarantees the finiteness of all quantum loops. We have demonstrated how the vacuum fluctuations involving SM loops can be exponentially damped by the entire functions for the graviton-standard model particle vertex functions. For a suitable mean value of the virtual graviton momenta, the exponential suppression can reduce the vacuum density fluctuations and the cosmological constant to agree with the cosmological observational bounds. \section*{Acknowledgements} The John Templeton Foundation is thanked for its generous support of this research. The research was also supported by the Perimeter Institute for Theoretical Physics. Research at the Perimeter Institute for Theoretical Physics is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation (MRI). I thank Martin Green and Viktor Toth for helpful discussions.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Deep Surrogate Assisted Generation of Environments (DSAGE)} \label{sec:algo} \noindent\textbf{Algorithm.} We propose the Deep Surrogate Assisted Generation of Environments (DSAGE) algorithm for discovering environments that elicit diverse agent behaviors. Akin to the \mapelites{} family of QD algorithms, DSAGE maintains a \textit{ground-truth archive} where solutions are stored based on their ground-truth evaluations. Simultaneously, DSAGE also trains and exploits a deep surrogate model for predicting the behavior of a fixed agent in new environments. The QD optimization occurs in three phases that take place in an outer loop: model improvement, model exploitation, and agent simulation (Fig.~\ref{fig:alg}). \aref{alg:dsage} provides the pseudocode for the DSAGE algorithm. The model exploitation phase (lines~\ref{alg:inner_start}--\ref{alg:inner_end}) is an inner loop that leverages existing QD optimization algorithms and the predictions of the deep surrogate model to build an archive -- referred to as the \textit{surrogate} archive -- of solutions. The QD algorithm returns a list of $B$ candidate solutions through the \textit{ask} method. These solutions are environment parameters, e.g., latent vectors, which are passed through the environment generator, e.g., a GAN, to create an environment (line~\ref{alg:inner_env_gen}). The surrogate model predicts ancillary agent behavior data (line~\ref{alg:inner_anc}), which guides its downstream prediction of the objective and the measure values (line~\ref{alg:inner_pred}). The \textit{tell} method adds each solution to the surrogate archive based on the predicted objective and measure values. The agent simulation phase (lines~\ref{alg:elite_sel}--\ref{alg:eval_end}) inserts a subset of solutions from the surrogate archive into the ground-truth archive. This phase begins by selecting the subset of solutions from the surrogate archive (line~\ref{alg:elite_sel}). The selected solutions are evaluated by generating the corresponding environments (line~\ref{alg:env_gen}) and simulating the fixed agent to obtain the true objective and measure values, as well as ancillary agent behavior data (line~\ref{alg:eval}). The evaluation data is appended to the dataset, and solutions that improve their corresponding cell in the ground-truth archive are added to that archive (lines~\ref{alg:data_add},~\ref{alg:real_archive_add}). In the model improvement phase (line~\ref{alg:sm_train}), the surrogate model is trained in a self-supervised manner through the supervision provided by the agent simulations and the ancillary agent behavior data. The algorithm is initialized by generating random solutions and simulating the agent in the corresponding environments (lines~\ref{alg:rand_gen}-\ref{alg:init_end}). Subsequently, every outer iteration (lines~\ref{alg:outer_start}-\ref{alg:outer_end}) consists of model exploitation, followed by agent simulation, and ends with model improvement.
\begin{algorithm} \LinesNumbered \DontPrintSemicolon \KwIn{ $N$: Maximum number of evaluations, $n_{rand}$: Number of initial random solutions, $N_{exploit}$: Number of iterations in the model exploitation phase, $B$: Batch size for the model exploitation QD optimizer } \KwOut{Final version of the ground-truth archive $\mathcal{A}_{gt}$} Initialize the ground-truth archive $\mathcal{A}_{gt}$, the dataset $\mathcal{D}$, and the deep surrogate model $sm$ \; $\mathbf{\Theta} \leftarrow$ \textit{generate\_random\_solutions}($n_{rand}$) \label{alg:rand_gen}\; \For{${\bm{\theta}} \in \mathbf{\Theta}$}{ $env \leftarrow g({\bm{\theta}})$ \label{alg:init_env_gen}\; $f, {\bm{m}}, \bm{y} \leftarrow$ \textit{evaluate}($env$) \label{alg:init_eval}\; $\mathcal{D} \leftarrow \mathcal{D} \cup ({\bm{\theta}}, f, {\bm{m}}, \bm{y})$ \label{alg:init_data_add}\; $\mathcal{A}_{gt} \leftarrow$ \textit{add\_solution}($\mathcal{A}_{gt}, ({\bm{\theta}}, f, {\bm{m}})$) \label{alg:init_real_archive_add}\; } \label{alg:init_end} $evals \leftarrow n_{rand}$ \; \While{$evals < N$}{ \label{alg:outer_start} Initialize a QD optimizer $qd$ with the surrogate archive $\mathcal{A}_{surrogate}$ \label{alg:inner_start}\tikzmark{top_exploit}\tikzmark{right}\; \For{$itr \in \{1, 2, \ldots, N_{exploit}\}$}{ $\mathbf{\Theta} \leftarrow$ \textit{qd.ask}($B$) \; \For{${\bm{\theta}} \in \mathbf{\Theta}$}{ $env \leftarrow g({\bm{\theta}})$ \label{alg:inner_env_gen}\; $\hat{y} \leftarrow$ \textit{sm.predict\_ancillary}($env$) \label{alg:inner_anc}\; $\hat{f}, \hat{{\bm{m}}} \leftarrow$ \textit{sm.predict}($env,\hat{y}$) \label{alg:inner_pred}\; \textit{qd.tell}(${\bm{\theta}}, \hat{f}, \hat{{\bm{m}}}$) \; } }\label{alg:inner_end} \tikzmark{bottom_exploit} $\mathbf{\Theta} \leftarrow$ \textit{select\_solutions}($\mathcal{A}_{surrogate}$) \label{alg:elite_sel}\tikzmark{top_sim}\; \For{${\bm{\theta}} \in \mathbf{\Theta}$}{ $env \leftarrow g({\bm{\theta}})$ \label{alg:env_gen}\; $f, {\bm{m}}, \bm{y} \leftarrow$ \textit{evaluate}($env$) \label{alg:eval}\; $\mathcal{D} \leftarrow \mathcal{D} \cup ({\bm{\theta}}, f, {\bm{m}}, \bm{y})$ \label{alg:data_add}\; $\mathcal{A}_{gt} \leftarrow$ \textit{add\_solution}($\mathcal{A}_{gt}, ({\bm{\theta}}, f, {\bm{m}})$) \label{alg:real_archive_add}\; $evals \leftarrow evals + 1$ \tikzmark{bottom_sim} \; } \label{alg:eval_end} \textit{sm.train}($\mathcal{D}$) \label{alg:sm_train} \tikzmark{top_imp}\tikzmark{bottom_imp} \AddNoteHacked{top_exploit}{bottom_exploit}{right}{algo_exploit}{Model Exploitation} \AddNoteHackedTwo{top_sim}{bottom_sim}{right}{algo_sim}{Agent Simulation} \AddNote{top_imp}{bottom_imp}{right}{algo_imp}{Model Improvement} \; } \label{alg:outer_end} \caption{Deep Surrogate Assisted Generation of Environments (DSAGE)} \label{alg:dsage} \end{algorithm} \noindent\textbf{Self-supervised prediction of ancillary agent behavior data.} By default, a surrogate model directly predicts the objective and measure values based on the initial state of the environment and the agent (provided in the form of a one-hot encoded image). However, we anticipate that direct prediction will be challenging in some domains, as it requires understanding the agent's trajectory in the environment. Thus, we provide additional supervision to the surrogate model in DSAGE via a two-stage self-supervised process. First, a deep neural network predicts ancillary agent behavior data. 
In our work, we obtain this data by recording the expected number of times the agent visits each discretized tile in the environment, resulting in an ``occupancy grid.'' We then concatenate the predicted ancillary information, i.e., the predicted occupancy grid, with the one-hot encoded image of the environment and pass them through another deep neural network to obtain the predicted objective and measure values. We use CNNs for both predictors and include details about the architecture in Appendix~\ref{app:model}. As a baseline, we compare our model with a CNN that directly predicts the objective and measure values without the help of ancillary data.

\noindent\textbf{Downsampling to select solutions from the surrogate archive.} After the model exploitation phase, the surrogate archive is populated with solutions that were predicted to be high-performing and diverse. Hence, a basic selection mechanism (line~\ref{alg:elite_sel}) would select all solutions from the surrogate archive, as in \dsame{}~\citep{zhang2021deep}. However, if the surrogate archive is heavily populated, full selection may result in a large number of ground-truth evaluations per outer-loop iteration, leading, for a fixed evaluation budget, to fewer outer iterations and hence less surrogate model training. To balance the trade-off between evaluating solutions from the surrogate archive and training the surrogate model, we select only a subset of solutions for evaluation by downsampling the surrogate archive. Downsampling uniformly divides the surrogate archive into sub-regions of cells and selects a random solution from each sub-region.

\section{Qualitative Analysis of the Algorithms} \label{app:archives} \fref{fig:heatmap_figure_maze} and \fref{fig:heatmap_figure_mario} show typical archives output by the algorithms in our experiments in the Maze and Mario domains, respectively.

\begin{figure}[h] \centering \includegraphics[width=0.909\linewidth]{figures/heatmap_figure/maze.pdf} \caption{Example archives generated by the algorithms in the Maze domain. Among the algorithms, DSAGE fills the largest portion of the archive, resulting in the highest QD score, while \mapelites{} fills the smallest portion, resulting in the lowest QD score. Since the objective only tests whether the level is valid, all levels in each archive have an objective of 1. Note that certain portions of the archive are physically impossible to obtain. For example, a maze with 0 wall cells would have the starting position and the goal at opposite corners, meaning that the mean agent path length must be at least 32.} \label{fig:heatmap_figure_maze} \end{figure}

\begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/heatmap_figure/mario.pdf} \caption{Example archives generated by all algorithms in the Mario domain. Compared to \cmame{}, DSAGE and its variants are more adept at finding levels with high numbers of jumps. The algorithms then differ in the objective values of the levels that have high numbers of jumps --- DSAGE Basic finds levels with low objective values, so its QD score is low. DSAGE-Only Down finds many levels with high numbers of jumps, but many of these levels have low objective values (hence the dark region in the top right of its archive), leading to a lower QD score than DSAGE, which primarily finds levels with high objective values.
Note that in our experiments, we never observed a level that caused Mario to jump more than 60 times, so the upper portion of all archives is unoccupied.} \label{fig:heatmap_figure_mario} \end{figure}

\section{Environment Details} \label{app:envs} \subsection{Maze} \paragraph{Environment.} The mazes that we consider in this paper are implemented as MiniGrid environments~\citep{gym_minigrid}. Each maze is a $16 \times 16$ grid containing walls and empty cells, two of which are the starting and the goal cells. An agent solving the maze starts at the starting cell and observes a $5 \times 5$ area around itself. The agent can move forward into an empty cell or turn left or right in its own cell. To maintain consistency with the MiniGrid environments, the agent is also allowed to pick up, drop, or toggle an object, or notify that a task is done. In the mazes generated by our work, all of those actions result in the agent staying in the same cell. A time limit of $T = 648$ steps is used, since an optimal agent is able to finish all possible $16 \times 16$ mazes within this duration. If the agent reaches the goal in $t \le T$ steps, it receives a reward of $1 - 0.9\,(t/T)$; otherwise, it receives no reward.

\paragraph{Environment generator.} The environment generator accepts a $16 \times 16$ bitmap denoting the walls and empty spaces as input. For better visualization, we add a wall surrounding the $16 \times 16$ region. We set the starting cell and goal cell to be the pair of empty cells that are furthest apart, as identified by the Floyd-Warshall algorithm~\citep{cormen2022introduction}.

\paragraph{Agent.} We select an agent from recent work on open-ended learning, ACCEL~\citep{parker2022evolving}, for the purpose of evaluation. Since individual ACCEL agents have a high variance in their performance, we evaluated agents trained with four different random seeds on three of the test mazes given in the original paper (\textit{Labyrinth}, \textit{16Rooms}, \textit{LargeCorridor}). We chose the best-performing agent of the four and fixed it for all our experiments. The selected agent was always able to reach the goal in those test mazes.

\subsection{Mario} \paragraph{Environment.} The Mario environments that we consider in this paper are implemented in the Mario AI Framework~\citep{marioframework,marioai}. Each level is a $16\times56$ grid of tiles, where each tile can be one of 17 different objects. The agent in each environment receives as input the current game state, consisting of all tiles that are visible on the screen, and outputs the action for Mario to take. Each episode runs for 20 time ticks.

\paragraph{Environment generator.} Drawing from prior work \citep{fontaine2021illuminating,volz2018mario}, the Mario environments are generated with a GAN pre-trained on human-authored levels with the WGAN algorithm \citep{arjovsky2017wgan,gulrajani2017wgan}. The GAN's generator takes as input a latent vector of size 32 and outputs a $16 \times 56$ level padded to $64 \times 64$. The GAN architecture is shown in \fref{fig:mariogan}.

\begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/mariogan.pdf} \caption{GAN architecture for generating Mario levels (BN stands for Batch Normalization \citep{ioffe2015batch}).} \label{fig:mariogan} \end{figure}

\paragraph{Agent.} In each environment, we run the A* agent developed by Robin Baumgarten \citep{baumgarten}.
This agent won the Mario AI competitions at the ICE-GIC conference and the IEEE Computational Intelligence in Games symposium in 2009. The trajectory taken by the agent in a level is stochastic due to randomness in the environment dynamics.

\section{Experimental Details} \label{app:exp} \paragraph{QD Optimization Algorithm.} In the Maze domain, we used the \mapelites{} algorithm to generate the wall and empty tiles of a maze. The first 100 solutions were generated by setting each cell to be either a wall cell or an empty cell uniformly at random. Every subsequent solution was generated by first choosing a random solution in the archive and then mutating 10 random cells to random values. The batch size was set to 150, i.e., 150 solutions were generated and evaluated in each iteration of the \mapelites{} algorithm. The archive was divided into $256 \times 162$ cells corresponding to the number of wall cells and the mean agent path length, respectively. In the Mario domain, we followed previous work~\citep{fontaine2021illuminating} and selected the \cmame{} algorithm for QD optimization. The archive was divided into $150 \times 100$ cells corresponding to the number of sky tiles and the number of jumps, respectively. The solutions, which are the input to a pre-trained GAN from previous work~\citep{fontaine2021illuminating}, were generated by 5 improvement emitters, each with a batch size of 30 and a mutation power of 0.2. In the baselines without a surrogate model, we ran the QD optimization algorithm until the number of ground-truth evaluations reached the given budget. For the other algorithms, we used the QD optimizer in the surrogate model exploitation phase and ran 10,000 iterations of the corresponding algorithm to create the surrogate archive. We implemented all QD algorithms in Python with the pyribs~\citep{pyribs} library.

\paragraph{Ancillary data and downsampling.} In both domains, we recorded and stored the average number of visits by the agent to each discretized tile in the environment as the ancillary data. Algorithms using downsampling chose a single random elite from every $8 \times 6$ cells in the Maze domain and every $5 \times 5$ cells in the Mario domain.

\paragraph{Surrogate Model Training.} At the start of each outer iteration, the deep surrogate model was trained on the most recent 20,000 data samples for 200 epochs with a batch size of 64. The surrogate model was updated by backpropagating the mean squared error loss between the predicted and the true objective, measures, and ancillary data. The model weights were then updated by the Adam~\citep{adam} optimizer with a learning rate of 0.001 and $(\beta_1, \beta_2) = (0.9, 0.999)$. We implemented the surrogate model with the PyTorch~\citep{pytorch} library.

\paragraph{Computational Resources.} For each algorithm-domain pair, we repeated the experiments 5 times and compared the mean performance. Experiments were run on two local machines and a high-performance cluster. The local machines had a 64-core (128-thread) AMD Ryzen Threadripper CPU and an NVIDIA GeForce RTX 3090 or RTX A6000 GPU; 16 CPU cores and one V100 GPU were allocated for each run on the cluster. Maze experiments without downsampling lasted 4--5 hours, while those with downsampling lasted around 30 hours. Mario experiments without downsampling took 2--3 hours, while those with downsampling took around 12 hours. A single ground-truth evaluation in the Maze domain took between 1 and 13 seconds, with a mean of 3.5 seconds.
The variation was mostly due to differences in agent performance, since mazes that were finished in fewer steps required fewer forward passes through the agent's policy network. Evaluations in the Mario domain took between 1 and 135 seconds, with an average of 53 seconds, depending on the generated level. In contrast, a complete inner loop involving the surrogate model exploitation phase (around 1,500,000 surrogate evaluations) finished in around 90 seconds.

\section{Searching for Additional Agent Behaviors} Here we present example results from DSAGE runs with different measures in the Maze and Mario domains. By searching for these measures with DSAGE, we discover environments that elicit a wide range of agent behaviors not presented in our main paper.

\subsection{Maze} \fref{fig:maze_repeated_exploration}, \ref{fig:maze_walls_exploration}, and \ref{fig:maze_walls_repeated} show results from DSAGE runs in the Maze domain with different measures.

\begin{figure} \centering \includegraphics[width=\linewidth]{figures/maze-heatmap/repeated-exploration/repeated-exploration.pdf} \caption{A DSAGE run in the Maze domain where the measures are (1) the fraction of reachable cells that the agent has visited (termed ``Maze exploration''; range [0, 1]) and (2) the number of times that the agent visits a cell it has already visited (termed ``Repeated visits''; range [0, 648]). Note that both of these measures are agent-based. In (a) and (b), the agent becomes stuck in a small portion of the maze, leading to many repeated visits but low maze exploration. Notably, the agent observes the goal multiple times in (b), but it never figures out how to go around the large wall that blocks it. In (c), the agent gets stuck in several traps (leading to repeated visits) but eventually makes its way to the goal. In (d), the agent heads directly to the goal, so it does not explore the maze much, and the only repeated visits it makes come from turning (when the agent turns, it stays in the same cell, which counts as a repeated visit). In (e) and (f), the agent visits multiple parts of the maze several times and is unable to reach the goal. In (g), the agent explores all of the space without revisiting many cells and eventually finds the goal. Finally, in (h), the agent has many repeated visits because it gets stuck at the beginning, but afterwards, it explores the rest of the maze and finds the goal. Refer to the supplemental material for videos of these agents.} \label{fig:maze_repeated_exploration} \end{figure}

\begin{figure} \centering \includegraphics[width=\linewidth]{figures/maze-heatmap/walls-exploration/walls-exploration.pdf} \caption{A DSAGE run in the Maze domain where the measures are (1) the fraction of reachable cells that the agent has visited (termed ``Maze exploration''; range [0, 1]) and (2) the number of wall cells in the maze (range [0, 256]). (a) and (b) are mazes where the wall cells define a straightforward path for the agent to follow to the goal, resulting in low maze exploration. In (c), the agent goes in circles in the bottom right corner, resulting in low exploration. (d) has a similar number of wall cells to (c), but the agent here is able to quickly find the goal, which also results in low exploration. (e) and (f) are two levels that are similar in terms of both measures yet have very different structures -- in particular, (f) has a much larger reachable space for the agent to explore.
In (g), the agent spends all its time exploring even though there are relatively few wall cells blocking its path. Finally, (h) has a similar number of wall cells to (g), but the agent heads almost directly to the goal. Refer to the supplemental material for videos of these agents.} \label{fig:maze_walls_exploration} \end{figure}

\begin{figure} \centering \includegraphics[width=\linewidth]{figures/maze-heatmap/walls-repeated/walls-repeated.pdf} \caption{A DSAGE run in the Maze domain where the measures are (1) the number of wall cells in the maze (range [0, 256]) and (2) the number of times that the agent visits a cell it has already visited (termed ``Repeated visits''; range [0, 648]). In (a), the agent gets stuck in the top right corner since it is surrounded by walls with only one path out, so it has many repeated visits. In (b), the agent repeatedly goes around the maze and even sees the goal several times, but it usually does not reach the goal. (c) and (d) are relatively easy for the agent --- since it finds the path quickly, it does not repeat many visits. (e) and (f) are cases where the agent gets stuck going in loops even though it is right next to the goal, which leads to many repeated visits. In (g), the agent makes several loops but eventually finds the goal. Finally, in (h), the agent goes directly to the goal, so it never repeats any visits. Refer to the supplemental material for videos of these agents.} \label{fig:maze_walls_repeated} \end{figure}

\subsection{Mario} \fref{fig:mario_enemies_sky_tiles} and \ref{fig:mario_jumps_enemies} show results from DSAGE runs in the Mario domain with different measures.

\begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/mario-heatmap/enemies-sky-tiles/enemies-sky-tiles.pdf} \caption{A DSAGE run in the Mario domain where the measures are (1) the number of sky tiles (exactly as in the main paper) and (2) the number of enemies Mario kills (range [0, 25]). Note that in (b), Mario kills many enemies because Mario repeatedly jumps on the bullets fired by the cannon at the end of the level. In (d), even though Mario kills multiple enemies, Mario cannot complete the level because the sky tiles form an unbreakable barrier. Refer to the supplemental material for videos of these agents.} \label{fig:mario_enemies_sky_tiles} \end{figure}

\begin{figure}[h] \centering \includegraphics[width=\linewidth]{figures/mario-heatmap/jumps-enemies/jumps-enemies.pdf} \caption{A DSAGE run in the Mario domain where the measures are (1) the number of times that Mario jumps (range [0, 100]) and (2) the number of enemies that Mario kills (range [0, 25]). Similar to the levels from our earlier experiment (\fref{fig:mario_heatmap}(c)), levels (c), (d), and (e) here have a ``staircase trap'' at the end which causes Mario to perform many jumps, where different trap structures result in different numbers of jumps. Note that in some environments, there appear to be more jumps than indicated in the measures because Mario bounces whenever Mario lands on and kills an enemy, but these bounces do not count as jumps. Refer to the supplemental material for videos of these agents.} \label{fig:mario_jumps_enemies} \end{figure}

\section{Deep Surrogate Model} \label{app:model} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{figures/cnn_v1.pdf} \caption{Architecture of the surrogate model.
The model predicts the occupancy grid (red arrows), which guides the downstream prediction of the objective and the measure values (blue arrows).} \label{fig:cnn} \end{figure}

In the DSAGE algorithm, we maintain a deep surrogate model (\fref{fig:cnn}) for predicting the objective and the measures resulting from simulating an agent's execution in the environment. The input to this model, a one-hot encoded image of the environment, is processed in two stages, as described in \sref{sec:algo}. The first stage predicts the ancillary agent behavior data in the form of an occupancy grid. The predictor consists of a $3 \times 3$ convolution (with Leaky ReLU activation) followed by two residual layers~\citep{he2016deep} and a $1 \times 1$ convolution. Since the occupancy grid depends on the layout of the environment, we believe that the residual layers' propagation of the input information is helpful for prediction.

The predicted occupancy grid and the one-hot encoded image of the environment are stacked and passed through another CNN that predicts the objective and the measure values. The architecture of this CNN is inspired by the discriminator architecture in prior work on generating Mario levels with a GAN~\citep{fontaine2021illuminating,volz2018mario}. The input is passed through layers of $4 \times 4$ convolutions with a stride of 2 and an increasing number of channels. Each convolution is followed by Batch Normalization~\citep{ioffe2015batch} and Leaky ReLU activation. Once the height and width of the output of a convolution have been reduced to 4, the output is flattened and passed through two fully connected layers to obtain the objective and the measure values.

DSAGE Basic and DSAGE-Only Down do not predict the occupancy grid. The surrogate model in those algorithms directly predicts the objective and the measure values, as denoted by the blue arrows.

\subsection{Evaluating the Prediction Performance} \noindent\textbf{Mean absolute error.} To test the prediction performance of the deep surrogate models trained by DSAGE and its variants, we select two separate runs of each algorithm. The datasets generated in the first run of each algorithm are combined into a single dataset. We then evaluate the trained surrogate models from the second run of each algorithm on the combined dataset by calculating the mean absolute error (MAE) between the predicted and the true objective and measure values for the solutions in the combined dataset.

\tref{tab:pred_results} shows the obtained MAEs in the Maze and the Mario domains. In both domains, we observe that the measures that depend on agent behavior (mean agent path length for Maze and number of jumps for Mario) are harder to predict than the ones that depend only on the environment (number of wall cells for Maze and number of sky tiles for Mario). Indeed, the MAEs for the number of wall cells in the Maze domain and the number of sky tiles in the Mario domain are much smaller than the MAEs for the mean agent path length and the number of jumps, respectively.

In the Maze domain, predicting ancillary agent behavior data helped improve the prediction of the mean agent path length. Both DSAGE and DSAGE-Only Anc have better predictions than their counterparts that do not predict ancillary data. Since the mean agent path length is a scaled version of the sum of the occupancy grid, having a good prediction of the occupancy grid makes the downstream prediction task much easier.
We believe that the additional supervision during training in the form of the occupancy grid guides the surrogate model towards understanding the layout of the maze and the agent's behavior. On the other hand, we see little improvement when predicting the number of jumps in the Mario domain. Here, downsampling provided a larger boost to the predictions, with DSAGE and DSAGE-Only Down making better predictions than their counterparts without downsampling. Since we do not store temporal information in the occupancy grid, predicting the number of jumps remains a challenging task even with an accurate prediction of the occupancy grid. We conjecture that the additional training of the surrogate model, which resulted from the increased number of outer iterations when downsampling, played a more important role in improving the predictions.

\begin{table} \centering \caption{Mean absolute error of the objective and measure predictions for the surrogate models.} \label{tab:pred_results} \scriptsize \begin{tabular}{L{0.15\textwidth}C{0.07\textwidth}C{0.12\textwidth}C{0.14\textwidth}C{0.07\textwidth}C{0.11\textwidth}C{0.11\textwidth}} \toprule & \multicolumn{3}{c}{Maze} & \multicolumn{3}{c}{Mario} \\ \cmidrule(l{2pt}r{2pt}){2-4} \cmidrule(l{2pt}r{2pt}){5-7} Algorithm & Objective MAE & Number of Wall Cells MAE & Mean Agent Path Length MAE & Objective MAE & Number of Sky Tiles MAE & Number of Jumps MAE \\ \midrule DSAGE & 0.03 & 0.37 & 96.58 & 0.10 & 1.10 & 7.16 \\ DSAGE-Only Anc & 0.04 & 0.96 & 95.14 & 0.20 & 1.11 & 9.97 \\ DSAGE-Only Down & 0.10 & 0.95 & 151.50 & 0.11 & 0.87 & 6.52 \\ DSAGE Basic & 0.18 & 5.48 & 157.69 & 0.20 & 2.16 & 10.71 \\ \bottomrule \end{tabular} \end{table}

\noindent\textbf{Correlation plots.} To further test whether DSAGE's predictions of some measures were more accurate in certain regions of the archive, we plot the true measure cell of each solution on the x-axis and the average of the corresponding predicted measure cells on the y-axis (\fref{fig:preds}). In this plot, accurate predictions would fall on the $x=y$ line (denoted in blue), and inaccurate ones would be above or below the line.

\begin{figure} \centering \includegraphics[width=\textwidth]{figures/pred_corrs.pdf} \caption{Correlation between the predicted and the true measure cells. The first row corresponds to DSAGE while the second row corresponds to DSAGE Basic. The columns correspond to the two measures in the Maze and the Mario domains, respectively. We observe that long agent path lengths and high numbers of jumps are more difficult to predict.} \label{fig:preds} \end{figure}

Once again, we see that the measures dependent on agent simulation, i.e., the mean agent path length in Maze and the number of jumps in Mario, are difficult to predict. Interestingly, we observe that accurately predicting large numbers of jumps and long agent path lengths is harder than predicting small values of these measures. Since the agent revisits tiles multiple times when the path length or the number of jumps is high, it becomes harder to extract useful information from the occupancy grid. We also believe that in these regions, minor environment differences can cause a large change in the measure value, making the prediction problem extremely difficult. For example, if a jump in Mario is barely possible, the agent might need to try multiple times. But if one block is removed to make the jump easier, the agent might be able to complete it in one try, drastically reducing the total number of jumps.
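To make the two-stage prediction pipeline described at the beginning of this section concrete, we include a simplified PyTorch-style sketch below. The channel counts, the hidden layer size, and the $16 \times 16$ input resolution are illustrative assumptions rather than our exact configuration; \fref{fig:cnn} shows the actual architecture.

\begin{verbatim}
# Simplified sketch of the two-stage surrogate model (sizes illustrative).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        # The skip connection propagates the environment layout forward.
        return nn.functional.leaky_relu(x + self.body(x))

class TwoStageSurrogate(nn.Module):
    def __init__(self, in_ch, n_measures=2, grid=16):
        super().__init__()
        # Stage 1: 3x3 conv -> two residual layers -> 1x1 conv (occupancy).
        self.occ = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.LeakyReLU(),
            ResidualBlock(64), ResidualBlock(64),
            nn.Conv2d(64, 1, 1))
        # Stage 2: strided 4x4 convs with BatchNorm and Leaky ReLU, applied
        # to the predicted occupancy grid stacked with the one-hot level.
        self.down = nn.Sequential(
            nn.Conv2d(in_ch + 1, 64, 4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.LeakyReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.LeakyReLU())
        flat = 128 * (grid // 4) ** 2  # flatten once spatial size reaches 4
        self.head = nn.Sequential(
            nn.Linear(flat, 256), nn.LeakyReLU(),
            nn.Linear(256, 1 + n_measures))  # objective + measures

    def forward(self, onehot):
        occ = self.occ(onehot)                      # predicted occupancy
        h = self.down(torch.cat([onehot, occ], 1))  # stack and downsample
        out = self.head(h.flatten(1))
        return out[:, :1], out[:, 1:], occ
\end{verbatim}

Training such a model against the mean squared error of all three outputs (objective, measures, and occupancy grid) corresponds to the loss described in Appendix~\ref{app:exp}.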
\begin{figure} \centering \includegraphics[width=\textwidth]{figures/archs_default.pdf} \caption{After running one surrogate model exploitation inner loop, we visualize the (a) surrogate archive, (b) positions of solutions from the surrogate archive in the ground-truth archive, (c) downsampled surrogate archive, and (d) positions of solutions from the downsampled surrogate archive in the ground-truth archive.} \label{fig:archs} \end{figure}

\noindent\textbf{Surrogate archive accuracy.} To understand how the surrogate model's accuracy affects the creation of the ground-truth archive, we run an additional surrogate model exploitation inner loop starting from a completed DSAGE run and obtain a surrogate archive. We evaluate all the solutions in the surrogate archive and add them to a separate archive based on the ground-truth objective and measures. Additionally, we downsample the surrogate archive and create a corresponding ground-truth archive from the selected solutions. \fref{fig:archs} shows the full surrogate archive (a), the downsampled surrogate archive (c), and the corresponding ground-truth archives (b, d) in the Maze domain.

We observe that many of the solutions from the surrogate archive end up in the same cell in the ground-truth archive, creating holes in the ground-truth archive. Only 47\% and 41\% of the solutions from the surrogate archive ended up in unique cells in the Maze and the Mario domains, respectively. On the other hand, when downsampling, the percentage of surrogate archive solutions filling unique cells in the ground-truth archive improved to 97\% and 94\% in the Maze and the Mario domains, respectively. Hence, downsampling reduces the number of unnecessary ground-truth evaluations.

In the Maze domain, only 0.06\% of the surrogate archive solutions ended up in the exact same cell of the ground-truth archive as predicted, and 4.6\% of the solutions were in the $8 \times 6$ neighborhood of the predicted cell (the area from which downsampled solutions are chosen). The average Manhattan distance between the predicted cell and the true cell was 53.8. In the Mario domain, 2.0\% of the solutions were exactly in the same cell, 23.3\% were in the $5 \times 5$ neighborhood, and the average Manhattan distance was 14.2. Despite the low accuracy of the surrogate model in terms of predicting the exact cell of the archive that a solution belongs to, the predictions were in the nearby region of the archive, as evidenced by the average Manhattan distances. Furthermore, we conjecture that the holes in the ground-truth archive from a single outer iteration (as seen in \fref{fig:archs}(b) and (d)) are filled by solutions from other outer iterations. Hence, the final ground-truth archive (\fref{fig:maze_heatmap},~\fref{fig:mario_heatmap}) is more densely filled, leading to better archive coverage and a higher QD-score.

\section{Ablation: Random Selection of Surrogate Archive Solutions} \label{app:random_sample} As discussed in \sref{sec:ablation}, selecting solutions from the surrogate archive with downsampling has several advantages that lead to better performance, with the major advantage being that downsampling increases the number of outer-loop iterations. However, we could also increase the number of outer iterations by choosing a different subset selection mechanism, including simply selecting solutions uniformly at random. Thus, we test DSAGE with the random selection mechanism as an additional baseline.
Namely, after every inner loop, we select a fixed number of solutions from the surrogate archive uniformly at random such that the number of outer iterations is approximately the same for both downsampling and random sampling.

\begin{table} \centering \caption{Mean and standard error of the QD-score and archive coverage attained by DSAGE and DSAGE with random sampling in the Maze and Mario environments over 5 trials.} \label{tab:rsample_results} \small \begin{tabular}{L{0.25\textwidth}C{0.17\textwidth}C{0.13\textwidth}C{0.17\textwidth}C{0.13\textwidth}} \toprule & \multicolumn{2}{c}{Maze} & \multicolumn{2}{c}{Mario} \\ \cmidrule(l{2pt}r{2pt}){2-3} \cmidrule(l{2pt}r{2pt}){4-5} Algorithm & QD-score & Archive Coverage & QD-score & Archive Coverage \\ \midrule DSAGE & 16,446.60 $\pm$ 42.27 & 0.40 $\pm$ 0.00 & 4,362.29 $\pm$ ~~72.54 & 0.30 $\pm$ 0.00 \\ DSAGE (random sampling) & 15,974.40 $\pm$ 78.71 & 0.39 $\pm$ 0.00 & 4,370.28 $\pm$ 107.87 & 0.30 $\pm$ 0.01 \\ \bottomrule \end{tabular} \end{table}

\tref{tab:rsample_results} shows the results obtained by DSAGE with downsampling and random sampling. We observe that the performance with random sampling is lower than that of downsampling in the Maze domain, but they are very close in the Mario domain. Hence, we can conclude that increasing the number of outer iterations is the largest contributor to the performance improvement, although downsampling has additional advantages that improve its performance in the Maze domain.

\section{Background and Related Work} \label{sec:background} \noindent\textbf{Quality diversity (QD) optimization.} QD optimization originated in the genetic algorithm community with diversity optimization~\citep{lehman2011abandoning}, the predecessor to QD. Later work introduced objectives to diversity optimization and resulted in the first QD algorithms: Novelty Search with Local Competition~\citep{lehman2011nslc} and \mapelites{}~\citep{mouret2015illuminating, cully2015}. The QD community has grown beyond its genetic algorithm roots, with algorithms being proposed based on gradient ascent~\citep{fontaine2021dqd}, Bayesian optimization~\citep{kent2020bop}, differential evolution~\citep{choi:gecco21}, and evolution strategies~\citep{fontaine2020covariance, conti2018ns, colas2020scaling}. QD algorithms have applications in damage recovery in robotics~\citep{cully2015}, reinforcement learning~\citep{tjanaka2022approximating, nilsson2021pga, cideron2020qdrl}, and generative design~\citep{gaier2018dataefficient, hagg:gecco21}.

Among the QD algorithms, those of particular interest to us are the model-based ones. Current model-based~\citep{bartz2016mobsurvey, Moerland2020ModelbasedRL} QD algorithms either (1) learn a surrogate model of the objective and measure functions~\citep{gaier2018dataefficient, hagg2020designing, cazenille2019exploring}, e.g., a Gaussian process or a neural network, (2) learn a generative model of the representation parameters~\citep{gaier2020blackbox, rakicevic2021poms}, or (3) draw inspiration from model-based RL~\citep{Keller2020ModelBasedQS, Lim2021DynamicsAwareQF}. In particular, Deep Surrogate Assisted MAP-Elites (DSA-ME)~\citep{zhang2021deep} trains a deep surrogate model on a diverse dataset of solutions generated by \mapelites{} and then leverages the model to guide \mapelites{}. However, DSA-ME has only been applied to Hearthstone deck building, a simpler prediction problem than predicting agent behavior in generated environments.
\noindent\textbf{Automatic environment generation.} Automatic environment generation algorithms have been proposed in a variety of fields. Methods from different communities often share generation techniques but differ in how each community applies the generation algorithms. For example, in the procedural content generation (PCG) field~\citep{shaker2016procedural}, an environment generator produces video game levels that result in player enjoyment. Since diversity of player experience and game mechanics is valued in games, many level generation systems incorporate QD optimization~\citep{gravina2019procedural, fontaine2021illuminating, earle2021illuminating, khalifa2018talakat, steckel2021illuminating, schrum2020cppn2gan, sarkar2021generating}. The procedural content generation via machine learning (PCGML)~\citep{summerville2018procedural, liu2021deep} subfield studies environment generators that incorporate machine learning techniques such as Markov chains~\citep{snodgrass2014experiments}, probabilistic graphical models~\citep{guzdial2016game}, LSTMs~\citep{summerville2016super}, generative models~\citep{volz2018mario, giacomello:gem18, torrado:cog20, sarkar2020conditional}, and reinforcement learning~\citep{khalifa2020pcgrl, earle2021learning}. Prior work~\citep{Karavolos2021AMS} has leveraged surrogate models trained on offline data to accelerate search-based PCG~\citep{togelius2011search}.

Environment generation methods have also been proposed by the scenario generation community in robotics. Early work explored automated methods for generating road layouts, vehicle arrangements, and vehicle behaviors for testing autonomous vehicles~\citep{arnold:safecomp13,mullins:av18, abey:av19, rocklage:av17,gambi:av19,sadigh2019verifying, fremont2019scenic}. Outside of autonomous vehicles, prior work~\citep{zhou2022rocus} evaluates robot motion planning algorithms by generating environments that target specific motion planning behaviors. In human-robot interaction, QD algorithms have been applied as environment generators to find failures in shared autonomy systems~\citep{fontaine:sa_rss2021} and human-aware planners tested in the collaborative Overcooked domain~\citep{fontaine2021importance}.

Environment generation can also help improve the generality of RL agents. Prior work proposes directly applying PCG level generation algorithms to improve the robustness of RL~\citep{risi2020increasing, justesen2018illuminating} or to benchmark RL agents~\citep{cobbe2020leveraging}. Paired Open-ended Trailblazer (POET)~\citep{wang2019paired, wang2020enhanced} coevolves a population of both agents and environments to discover specialized agents that solve complex tasks. POET inspired a variety of open-ended coevolution algorithms~\citep{gabor2019scenario, bossens2020qed, dharna2020co, dharna2022transfer}. Later work proposes the PAIRED~\citep{dennis2020emergent}, PLR~\citep{jiang2021prioritized,jiang2021replay}, and ACCEL~\citep{parker2022evolving} algorithms that train a single generally capable agent by maximizing the regret between a pair of agents. These methods generate environments in parallel with an agent to create an automatic training curriculum. However, the authors validate these methods on human-designed environments~\citep{kirk2021survey}. Our work proposes a method that automatically generates valid environments that reveal diverse behaviors of these more general RL agents.
\section{Domains} \label{sec:domains} We test our algorithms in two benchmark domains from previous work: a Maze domain~\citep{dennis2020emergent,parker2022evolving} with a trained ACCEL agent~\citep{parker2022evolving} and a Mario domain~\citep{togelius20102009,fontaine2021illuminating} with an A* agent~\citep{baumgarten}. We select these domains because, despite their relative simplicity (each environment is represented as a 2D grid of tiles), agents in these environments exhibit complex and diverse behaviors. Appendix~\ref{app:envs} contains additional details about the environments.

\noindent\textbf{Maze.} The environment generator directly outputs a $16\times16$ map of walls and empty cells. The starting cell and goal cell of the maze are set to the pair of reachable empty cells that are furthest apart in this map, as identified by the Floyd-Warshall algorithm~\citep{cormen2022introduction}. The agent needs to reach the goal cell from the starting cell within a specified time limit. We select the best trained ACCEL~\citep{parker2022evolving} agent out of four random seeds. Since we wish to generate valid environments, we set a binary objective function $f$ that is 1 if the generated environment is solvable, i.e., if the goal cell is reachable from the starting cell, and 0 otherwise. Furthermore, we wish to generate environments that vary with respect to the challenge they provide to the agent; thus, we selected as measures (1) \textit{number of wall cells} (ranges from 0 to 256), and (2) \textit{mean agent path length} (ranges from 0 to 648, where 648 indicates a failure to reach the goal). Since the ACCEL agent's policy is stochastic, we average the path length over 50 episodes (the validity of the level and the number of wall cells do not depend on the agent).

\noindent\textbf{Mario.} Drawing upon prior work \citep{fontaine2021illuminating,volz2018mario}, we generate environments for the Mario AI Framework~\citep{marioframework,marioai} by passing latent vectors to a GAN which was pre-trained on human-authored levels. Each level output by the GAN's generator is a $16\times56$ grid of tiles, where each tile can be one of 17 different objects. In each environment, we run the A* agent developed by Robin Baumgarten \citep{baumgarten}. Since we wish to generate playable levels, we set the objective as the \textit{completion rate}, i.e., the proportion of the level that the agent completes before dying. We additionally want to generate environments that result in qualitatively different agent behaviors; thus, we selected as measures (1) \textit{sky tiles}, the number of tiles of a certain type that are in the top half of the 2D grid (ranges from 0 to 150), and (2) \textit{number of jumps}, the number of times that the A* agent jumps during its execution (ranges from 0 to 100). Since the environment dynamics are stochastic, for the ground-truth evaluations the completion rate and the number of jumps are averaged over 5 episodes (the number of sky tiles does not depend on the agent's execution).

\section{Experiments} \label{sec:exp} \subsection{Experiment Design} \noindent\textbf{Independent variables.} In each domain (Maze and Mario), we follow a between-groups design, where the independent variable is the algorithm. We test the following algorithms:

\textit{DSAGE}: The proposed algorithm that includes predicting ancillary agent behavior data and downsampling the surrogate archive (\sref{sec:algo}).
\textit{DSAGE-Only Anc}: The proposed algorithm with ancillary data prediction and no downsampling, i.e., selecting all solutions from the surrogate archive.

\textit{DSAGE-Only Down}: The proposed algorithm with downsampling and no ancillary data prediction.

\textit{DSAGE Basic}: The basic version of the proposed algorithm that selects all solutions from the surrogate archive and does not predict ancillary data.

\textit{Baseline QD}: The QD algorithm without surrogate assistance. We follow previous work \citep{fontaine2021illuminating} and use \cmame{} for the Mario domain. Since \cmame{} operates only in continuous spaces, we use \mapelites{} in the discrete Maze domain.

\noindent\textbf{Dependent variables.} We measure the quality and diversity of the solutions with the QD-score metric~\citep{pugh2016qd} (\eref{eq:objective}). As an additional metric of diversity, we also report the archive coverage. We run each algorithm for 5 trials in each domain.

\noindent\textbf{Hypothesis.} \textit{We hypothesize that DSAGE will result in a better QD-score than DSAGE Basic in all domains, which in turn will result in better performance than the baseline QD algorithm.} We base this hypothesis on previous work~\citep{gaier2018dataefficient,zhang2021deep} that showed that surrogate-assisted \mapelites{} outperformed standard \mapelites{} in design optimization and Hearthstone domains. Furthermore, we expect that the additional supervision through ancillary agent behavior data and downsampling will result in DSAGE performing significantly better than DSAGE Basic.

\section{Societal Impacts} \label{sec:impact} By introducing surrogate models into quality diversity algorithms, we can efficiently generate environments that result in diverse agent behaviors. While we focused on an RL agent in a Maze domain and a planning agent in a Mario game domain, our method can be applied to a variety of agents and domains. This can help with testing the robustness of agents, attaining insights about their behavior, and discovering edge cases before real-world deployment. Furthermore, we anticipate that closing the loop between environment generation and agent training can improve the ability of agents to generalize to new settings and thus increase their widespread use.

Our work may also have negative impacts. Training agents in diverse environments can be considered a step towards open-ended evolution~\citep{stanley2017open}, which raises concerns about the predictability and safety of the emergent agent behaviors~\citep{ecoffet2020open,hendrycks2021unsolved}. The ability to discover corner cases that result in unwanted behaviors or catastrophic failures may also be used maliciously to reveal vulnerabilities in deployed agents~\citep{roy2018evolutionary}.

\section{Introduction} We present an efficient method of automatically generating a collection of environments that elicit diverse agent behaviors. As a motivating example, consider deploying a robot agent at scale in a variety of home environments. The robot should generalize by performing robustly not only in test homes, but also in any end user's home. To enable such generalization, the test environments should have good coverage for the robot agent. However, obtaining such coverage may be difficult, as the generated environments would depend on the application domain, e.g., kitchen or living room, and on the specific agent we want to test, since different agents exhibit different behaviors.
To enable generalization of autonomous agents to new environments with differing levels of complexity, previous work on open-ended learning~\citep{wang2019paired,wang2020enhanced} has integrated the environment generation and agent training processes. The interplay between the two processes acts as a natural curriculum for the agents to learn robust skills that generalize to new, unseen environments~\citep{dennis2020emergent,parker2022evolving,dharna2022transfer}. The performance of these agents has been evaluated either in environments from the training distribution~\citep{wang2019paired,wang2020enhanced,dharna2022transfer} or in suites of manually authored environments~\citep{dennis2020emergent,jiang2021replay,parker2022evolving}.

As a step towards testing generalizable agents, there has been increasing interest in competitions~\citep{perez2016general,hambro2022insights} that require agents to generalize to new game layouts. Despite the recent progress of deep learning agents in fixed game domains, e.g., in Chess~\citep{silver2018general}, Go~\citep{silver2016mastering}, StarCraft~\citep{vinyals2019alphastar}, and Poker~\citep{moravvcik2017deepstack,brown2019superhuman}, it has been rule-based agents that have succeeded in these competitions~\citep{hambro2022insights}. Such competitions also rely on manually authored game levels as a test set, handcrafted by a human designer.

While manually authored environments are important for standardized testing, creating these environments can be tedious and time-consuming. Additionally, manually authored test suites are often insufficient for eliciting the diverse range of possible agent behaviors. Instead, we would like an interactive test set that proposes an environment, observes the agent's performance and behavior, and then proposes new environments that diversify the agent behaviors, based on what the system has learned from previous execution traces of the agent.

\begin{figure} \centering \includegraphics[width=1.0\textwidth]{figures/front_fig_1.pdf} \caption{An overview of the Deep Surrogate Assisted Generation of Environments (DSAGE) algorithm. DSAGE exploits a deep surrogate model to fill an archive of solutions (blue arrows), which are then evaluated by simulating an agent (red arrows). The surrogate model is then trained on the data from the simulations (yellow arrows).} \label{fig:alg} \end{figure}

To address the problem of collecting environments with diverse agent behaviors, prior work frames it as a quality diversity (QD) problem~\citep{fontaine2021importance, fontaine2021illuminating, fontaine:sa_rss2021}. A QD problem consists of an objective function, e.g., whether the agent can solve the environment, and measure functions, e.g., how long the agent takes to complete its task. The measure functions quantify the behavior we would like to vary in the agent, allowing practitioners to specify the case coverage they would like to see in the domain they are testing. While QD algorithms can generate diverse collections of environments, they require a large number of environment evaluations to produce the collection, and each of these evaluations requires multiple time-consuming simulated executions of potentially stochastic agent policies.

We study how \textit{deep surrogate models that predict agent performance can accelerate the generation of environments that are diverse in agent behaviors}.
We draw upon insights from model-based quality diversity algorithms that have been previously shown to improve sample efficiency in design optimization~\citep{gaier2018dataefficient} and Hearthstone deckbuilding~\citep{zhang2021deep}. Environments present a much more complex prediction task because the evaluation of environments involves simulating stochastic agent policies, and small changes in the environment may result in large changes in the emergent agent behaviors~\citep{sturtevant2020unexpected}. We make the following contributions: (1) We propose the use of deep surrogate models to predict agent performance in new environments. Our algorithm, Deep Surrogate Assisted Generation of Environments (DSAGE) (\fref{fig:alg}), integrates deep surrogate models into quality diversity optimization to efficiently generate diverse environments. (2) We show in two benchmark domains from previous work, a Maze domain~\citep{dennis2020emergent,parker2022evolving} with a trained ACCEL agent~\citep{parker2022evolving} and a Mario domain~\citep{marioai,fontaine2021illuminating} with an A* agent~\citep{baumgarten}, that DSAGE outperforms state-of-the-art QD algorithms in discovering diverse agent behaviors. (3) We show with ablation studies that training the surrogate model with ancillary agent behavior data and downsampling a subset of solutions from the surrogate archive results in substantial improvements in performance, compared to the surrogate models of previous work~\citep{zhang2021deep}. \section{Limitations and Future Work} \label{sec:limitations} Automatic environment generation is a rapidly growing research area with a wide range of applications, including designing video game levels~\citep{shaker2016procedural,summerville2018procedural,liu2021deep}, training and testing autonomous agents~\citep{risi2020increasing,cobbe2020leveraging,wang2019paired,dennis2020emergent,parker2022evolving}, and discovering failures in human-robot interaction~\citep{fontaine:sa_rss2021,fontaine2021illuminating}. We introduce the DSAGE algorithm, which efficiently generates a diverse collection of environments via deep surrogate models of agent behavior. Our paper has several limitations. First, occupancy grid prediction does not encode temporal information about the agent. On one hand, this prediction allows us to avoid the compounding error problem of model-based RL~\citep{xiao2019learning}. On the other hand, forgoing temporal information makes it harder to predict some behaviors, such as the number of jumps in Mario. We will explore this trade-off in future work. Furthermore, we have studied 2D domains where a single ground-truth evaluation lasts between a few seconds and a few minutes. We are excited about the use of surrogate models to predict the performance of agents in more complex domains with expensive, high-fidelity simulations~\citep{wurman2022outracing}. \section{Problem Definition} \label{sec:problem} \noindent\textbf{Quality diversity (QD) optimization.} We adopt the QD problem definition from previous work~\citep{fontaine2021dqd}. A QD optimization problem specifies an objective function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ and a joint measure function ${\bm{m}}:\mathbb{R}^n\rightarrow\mathbb{R}^m$. For each element $s \in S$, where $S \subseteq \mathbb{R}^m$ is the range of the measure function, the QD goal is to find a solution ${\bm{\theta}} \in \mathbb{R}^n$ such that ${\bm{m}}({\bm{\theta}})=s$ and $f({\bm{\theta}})$ is maximized. 
Since the range of the measure function can be continuous, we restrict ourselves to algorithms from the \mapelites{} family \citep{cully2015,mouret2015illuminating} that discretize this space into a finite number of $M$ cells. A solution ${\bm{\theta}}$ is mapped to a cell based on its measure ${\bm{m}}({\bm{\theta}})$. The solutions that occupy cells form an \textit{archive} of solutions. Our goal is to find solutions ${\bm{\theta}}_i, i \in \{1,\ldots,M\}$ that maximize the objective $f$ for all cells in the measure space. \begin{equation} \max_{{\bm{\theta}}_i} \sum_{i=1}^M f({\bm{\theta}}_i) \label{eq:objective} \end{equation} The computed sum in \eref{eq:objective} is defined as the QD-score~\citep{pugh2016qd}, where empty cells have an objective value of 0. A second metric of the performance of a QD algorithm is the coverage of the measure space, defined as the proportion of cells that are filled in by solutions: $\frac{1}{M}\sum_{i=1}^M{\bm{1}}_{{\bm{\theta}}_i}$.

\noindent\textbf{QD for environment generation.} We assume a single agent acting in an environment parameterized by ${\bm{\theta}} \in \mathbb{R}^n$. The environment parameters can be locations of different objects or latent variables that are passed as inputs to a generative model~\citep{goodfellow:nips14}.\footnote{For consistency with the generative model literature, we use $\mathbf{z}$ instead of ${\bm{\theta}}$ when denoting latent vectors.} A QD algorithm generates new solutions ${\bm{\theta}}$ and evaluates them by simulating the agent on the environment parameterized by ${\bm{\theta}}$. The evaluation returns an objective value $f$ and measure values ${\bm{m}}$. The QD algorithm attempts to generate environments that maximize $f$ but are diverse with respect to the measures ${\bm{m}}$.

\subsection{Analysis} \label{sec:results} \fref{fig:plots} summarizes the results obtained by the five algorithms on the Maze and the Mario domains. One-way ANOVA tests showed a significant effect of the algorithm on the QD-score for the Maze ($F(4, 20) = 126.93, p<0.001$) and Mario ($F(4, 20) = 142.09, p<0.001$) domains. Post-hoc pairwise comparisons with Bonferroni corrections showed that DSAGE outperformed DSAGE Basic and Baseline QD in both the Maze and the Mario domains ($p<0.001$). Additionally, DSAGE Basic outperformed \mapelites{} in the Maze domain ($p<0.001$), while it performed significantly worse than its QD counterpart, \cmame{}, in the Mario domain ($p=0.003$).

These results show that deep surrogate assisted generation of environments results in significant improvements compared to quality diversity algorithms without surrogate assistance. They also show that adding ancillary agent behavior data and downsampling are important in both domains. Without these components, DSAGE Basic has limited or no improvement compared to the QD algorithm without surrogate assistance.

To assess the quality of the trained surrogate model, we create a combined dataset consisting of data from one run of each surrogate-assisted algorithm. We use the dataset to evaluate the surrogate models trained from separate runs of DSAGE and DSAGE Basic. The model learned by DSAGE Basic fails to predict the agent-based measures well: it has a mean absolute error (MAE) of 157.69 for the mean agent path length in Maze and an MAE of 10.71 for the number of jumps in Mario. In contrast, the model learned by DSAGE makes more accurate predictions, with an MAE of 96.58 for the mean agent path length and of 7.16 for the number of jumps.
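For reference, the statistical analysis in this section (a one-way ANOVA over the per-trial QD-scores, followed by Bonferroni-corrected pairwise comparisons) can be reproduced with a few lines of SciPy. The sketch below uses placeholder QD-score arrays, and its use of pairwise $t$-tests as the post-hoc test is an assumption of the sketch rather than a statement of our exact procedure.

\begin{verbatim}
# Sketch: one-way ANOVA + Bonferroni-corrected pairwise comparisons.
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind

qd_scores = {  # placeholder values: five per-trial QD-scores per algorithm
    "DSAGE": [16400, 16500, 16450, 16420, 16460],
    "DSAGE Basic": [14000, 14100, 13950, 14050, 14020],
    "Baseline QD": [13000, 13100, 12950, 13050, 13020],
}

F, p = f_oneway(*qd_scores.values())  # effect of algorithm on QD-score
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

pairs = list(combinations(qd_scores, 2))
for a, b in pairs:
    _, p_pair = ttest_ind(qd_scores[a], qd_scores[b])
    print(a, "vs", b, "corrected p =", min(1.0, p_pair * len(pairs)))
\end{verbatim}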
We explore the contribution of each component of DSAGE with an ablation study in \sref{sec:ablation}.

\subsection{Ablation Study} \label{sec:ablation} \sref{sec:algo} describes two key components of DSAGE: (1) self-supervised prediction of ancillary agent behavior data, and (2) downsampling to select solutions from the surrogate archive. We perform an ablation study by treating the inclusion of ancillary data prediction (ancillary data / no ancillary data) and the method of selecting solutions from the surrogate archive (downsampling / full selection) as independent variables. A two-way ANOVA for each domain showed no significant interaction effects. We perform a main effects analysis for each independent variable.

\noindent\textbf{Inclusion of ancillary data prediction.} A main effects analysis for the inclusion of ancillary data prediction showed that algorithms that predict ancillary agent behavior data (DSAGE, DSAGE-Only Anc) performed significantly better than their counterparts with no ancillary data prediction (DSAGE-Only Down, DSAGE Basic) in both domains ($p<0.001$). \fref{fig:plots} shows that predicting ancillary agent behavior data also results in a larger mean coverage for Maze, while it yields little or no improvement for Mario. The reason is that in the Maze domain, the mean agent path length is a scaled version of the sum of the agent's tile occupancy frequencies; hence, the two-stage process, which predicts the occupancy grid first, is essential for improving the accuracy of the CNN. On the other hand, the presence of a jump in Mario depends not only on cell occupancy, but also on the structure of the level and the sequence of the occupied cells.

\noindent\textbf{Method of selecting solutions from the surrogate archive.} A main effects analysis for the method of selecting solutions from the surrogate archive showed that the algorithms with downsampling (DSAGE, DSAGE-Only Down) performed significantly better than their counterparts with no downsampling (DSAGE-Only Anc, DSAGE Basic) in both domains ($p<0.001$). A major advantage of downsampling is that it decreases the number of ground-truth evaluations in each outer iteration. Thus, for a fixed evaluation budget, downsampling results in a greater number of outer iterations. For instance, in the Maze domain, runs without downsampling had only 6--7 outer iterations, while runs with downsampling had approximately 220 outer iterations. More outer iterations lead to more training and thus higher accuracy of the surrogate model. In turn, a more accurate surrogate model will generate a better surrogate archive in the inner loop.

The second advantage of downsampling is that it selects solutions evenly from all regions of the measure space, thus creating a more balanced dataset. This helps train the surrogate model in parts of the measure space that are not frequently visited. We include an additional baseline in Appendix~\ref{app:random_sample} in which we select solutions uniformly at random from the surrogate archive instead of downsampling. Furthermore, if instead of downsampling we sampled multiple solutions from nearby regions of the surrogate archive, the prediction errors could cause the solutions to collapse to a single cell in the ground-truth archive, resulting in many solutions being discarded.

Overall, our ablation study shows that both predicting the occupancy grid as ancillary data and downsampling the surrogate archive independently help improve the performance of DSAGE.
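To illustrate the downsampling mechanism analyzed above, the sketch below selects one random elite from every fixed-size sub-region of a 2D archive. The nested-list archive representation and the \texttt{downsample} helper are simplifications for illustration; our implementation builds on the pyribs library (Appendix~\ref{app:exp}).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=42)

def downsample(archive, region_rows, region_cols):
    """Select one random occupied cell (elite) per sub-region.

    `archive` is a 2D nested list where empty cells are None. Region sizes
    follow the paper: 8x6 for the Maze domain, 5x5 for the Mario domain.
    """
    n_rows, n_cols = len(archive), len(archive[0])
    selected = []
    for r0 in range(0, n_rows, region_rows):
        for c0 in range(0, n_cols, region_cols):
            occupied = [archive[r][c]
                        for r in range(r0, min(r0 + region_rows, n_rows))
                        for c in range(c0, min(c0 + region_cols, n_cols))
                        if archive[r][c] is not None]
            if occupied:
                selected.append(occupied[rng.integers(len(occupied))])
    return selected

# e.g., for the 256 x 162 Maze surrogate archive: downsample(archive, 8, 6)
\end{verbatim}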
\subsection{Qualitative Results} \fref{fig:maze_heatmap} and \fref{fig:mario_heatmap} show example environments generated by DSAGE in the Maze and the Mario domains. Having the mean agent path length as a measure in the Maze domain results in environments of varying difficulty for the ACCEL agent. For instance, we observe that the environment in \fref{fig:maze_heatmap}(a) has very few walls, yet the ACCEL agent gets stuck in the top half of the maze and is unable to find the goal within the allotted time. On the other hand, the environment in \fref{fig:maze_heatmap}(d) is cluttered and has multiple dead ends, yet the ACCEL agent is able to reach the goal.

\fref{fig:mario_heatmap} shows that the generated environments result in qualitatively diverse behaviors for the Mario agent as well. Level (b) has only a few sky tiles and is mostly flat, resulting in a small number of jumps. Level (c) has a ``staircase trap'' on the right side, forcing the agent to perform repeated jumps to escape and complete the level. We include videos of the playthroughs in the supplemental material.

\begin{figure} \centering \includegraphics[width=\linewidth]{figures/maze-heatmap/main/maze-heatmap.pdf} \caption{Archive and levels generated by DSAGE in the Maze domain. The agent's initial position is shown as an orange triangle, while the goal is a green square.} \label{fig:maze_heatmap} \end{figure}

\begin{figure} \centering \includegraphics[width=\linewidth]{figures/mario-heatmap/main/mario-heatmap.pdf} \caption{Archive and levels generated by DSAGE in the Mario domain. Each level shows the path Mario takes, starting on the left of the level and finishing on the right.} \label{fig:mario_heatmap} \end{figure}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
The network tomography scheme is a method to calculate link variables such as delay or bandwidth, using end-to-end measurements. In this paper, we intend to solve a congestion detection problem using the network tomography scheme, based on end-to-end measurements. Our goal is to decrease the number of required measurements in order to identify all congested links. Since only a small number of network links may become congested, it is assumed that the delay vector (which contains the delays of all links) is sparse.

To solve this problem, we employ Compressive Sensing (CS) \cite{CSIntro}, which has received significant attention in recent years. Since its inception, CS has shown remarkable results in reconstructing a high dimensional sparse signal with a small set of measurements. A significant issue in this field is the sample complexity, i.e., the number of measurements required for high quality signal recovery, which is commonly addressed by solving LASSO \cite{CSIntro}. Although CS is a new sampling scheme in signal processing, it has been applied to many network applications, especially in the area of network tomography. Fault diagnosis \cite{NetTom2}, traffic estimation \cite{NetTom1}, data localization \cite{CSP2P}, and congestion detection \cite{CSoverGraph} are some well-known applications in this field.

The state-of-the-art CS-based algorithm in network tomography \cite{CSoverGraph} used random walks to gather end-to-end delays and employed LASSO in its model for congestion detection. However, it has two major drawbacks: a relatively high number of random walk measurements (much higher than the total number of network links), and low accuracy in detecting the congested links when the number of available measurements is small. In this paper, we address these drawbacks by using prior knowledge corresponding to the correlation between link betweenness centrality (the number of shortest paths that traverse the link) and link congestion. We also introduce a new CS objective function in which we consider the dependency between congestion and betweenness centrality. We further show that the proposed objective function can be considered as the Elastic-Net model \cite{ENet}, which is a more stable alternative to LASSO.

We have simulated the proposed Compressive Sensing in Congestion Detection (CSCD) model by using real data with various configurations. We have compared the F-Score (of the retrieved congested links) in our model with \cite{CSoverGraph}, in terms of the number of measurements (random walks) and various sparsity levels of the network delays, and we achieve significant improvements.

The rest of the paper is organized as follows: in Section \ref{Related Work}, we describe the related work in network tomography, congestion detection, and compressive sensing. Section \ref{Problem Definition} gives an introduction to compressive sensing and the problem we try to solve in this paper. In Section \ref{The Proposed Model}, we introduce the proposed compressive sensing model. The comparison with the previous model appears in Section \ref{Evaluation}. Finally, we conclude the paper in Section \ref{Conclusion}.

\section{Related Work}
\label{Related Work}
Compressive sensing in network tomography was first discussed in \cite{GroupTest1}. The authors discussed a group testing problem on Erd\H{o}s-R\'enyi random graphs.
They applied an OR operation (instead of summation) on the gathered measurements and calculated the required number of measurements when the link variables are binary. In \cite{CSoverGraph}, the authors gathered end-to-end delay information by applying random walks to the network and recovered the sparse delay values of network links using compressive sensing. Although they obtain a better theoretical order for the number of measurements compared to \cite{GroupTest1}, they use a high number of random walks to achieve a good recovery percentage in practice. In their theoretical proof, they require the graph to be highly connected, which may not hold in all real networks. Moreover, their practical results are obtained on specific graphs such as the complete graph. In our model, however, we address these practical issues by employing the relationship between link betweenness centrality and link congestion in the network.

As noted in \cite{BC1}, link betweenness centrality and congestion are two highly correlated network measurements. In \cite{BC1}, \cite{BC2}, and \cite{BC3}, link betweenness centrality is used as the only element to detect the congested links in the network. Although betweenness centrality is effective for this purpose, none of these papers consider end-to-end measurements in finding the congested links. Later in this paper, we show how effective it is to employ both link betweenness centrality and end-to-end measurements for congestion detection.

\section{Problem Definition}
\label{Problem Definition}
We consider a real communication network as an undirected graph with the set of links $E$ and vertices $V$. We assume each link can transfer data in both directions between two connected nodes. We are going to identify the congested links using end-to-end delays of $M$ random walk paths in the network ($M<|E|$). Thus, we should first recover the delay of each network link. Let the vector $\mathbf y_{M \times 1}$ denote the observed end-to-end delays, where $\mathbf y_i$ represents the end-to-end delay of path $i$. We also define $\mathbf x_{N \times 1}$ as the links' delay vector, where $\mathbf x_j$ represents the delay of link $j$. Since the number of congested links in a network is small, $\mathbf x$ is considered a $k$-sparse vector ($k \ll N$ and $N=|E|$).

The goal is to recover the vector $\mathbf x_{N \times 1}$, which contains the delays of all $N$ links of the network. Moreover, by recovering $\mathbf x$, the congested links can be recognized through their high delay values. To recover this vector, the compressive sensing method is used. The recovery process starts with the following equation:
\begin{equation}
\label{y_Ax}
\mathbf y = A \mathbf x + \epsilon
\end{equation}
where $\epsilon$ is the noise vector that is experienced in the random walk measurements, and $A$ is an $M \times N$ matrix. The element in the $i^{th}$ row and $j^{th}$ column of $A$ is equal to $1$ if the $j^{th}$ link of the network appears in the $i^{th}$ random walk path, and $0$ otherwise. We employ $M$ random walks over the network graph and gather information regarding the end-to-end delay of each path ($\mathbf y_i$). Having $A$ and $\mathbf y$, we may reconstruct $\mathbf x$ by minimizing $\lVert\mathbf x\rVert_{0}$ under the constraint of Eq. \eqref{y_Ax}. However, as stated in \cite{l0norm}, this is an NP-hard optimization problem.
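To make this measurement model concrete, the following sketch simulates the construction of $A$ and $\mathbf y$ from random walks on a graph; the topology, walk length, sparsity, and noise level are illustrative assumptions.
\begin{verbatim}
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(60, 0.1, seed=0)        # illustrative topology
edges = list(G.edges())
edge_index = {frozenset(e): idx for idx, e in enumerate(edges)}
N, M, steps = len(edges), 40, 15                 # M < N measurements

def random_walk_row():
    """One row of A: flag the links visited by a single random walk."""
    row = np.zeros(N)
    node = rng.choice([v for v in G.nodes() if G.degree(v) > 0])
    for _ in range(steps):
        neighbors = list(G.neighbors(node))
        if not neighbors:
            break
        nxt = rng.choice(neighbors)
        row[edge_index[frozenset((node, nxt))]] = 1.0
        node = nxt
    return row

A = np.vstack([random_walk_row() for _ in range(M)])

# Simulate a k-sparse link-delay vector and noisy path delays y = Ax + eps.
x = np.zeros(N)
congested = rng.choice(N, size=5, replace=False)
x[congested] = rng.uniform(5.0, 10.0, size=5)
y = A @ x + rng.normal(0.0, 0.01, size=M)
\end{verbatim}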
In compressive sensing, the aforementioned optimization problem is solved by minimizing $\lVert\mathbf x\rVert_{1}$ instead of $\lVert\mathbf x\rVert_{0}$, which results in an optimization problem with computational complexity of $O(N^3)$ \cite{l0norm}, known as LASSO:
\begin{equation}
\label{CSEq}
\hat{\mathbf x} = \arg\min\limits_{\mathbf x}~~ (\lambda \lVert \mathbf x \rVert_{1} + \gamma \lVert A \mathbf x - \mathbf y \rVert^{2}_{2})
\end{equation}

\section{The Proposed Model}
\label{The Proposed Model}
We employ Bayesian theory in our model, as used in many compressive sensing applications \cite{Bayesian1} \cite{Bayesian2} \cite{Bayesian3}. The previous applications, however, have not considered prior knowledge on $\mathbf x$ other than its sparsity. The main goal of our model is congestion detection with a small number of random walk measurements. We employ the betweenness centrality \cite{BetweenC} of the congested links as prior knowledge in the $\ell^1\text{-norm}$ minimization.

Let $D$ denote the maximum delay that each link can tolerate in the network, and let $b_i$, $0 \leq b_i \leq B$, denote the betweenness centrality of link $i$, where $B$ is the maximum link betweenness centrality in the network. We use linear interpolation to capture the relation between the betweenness centrality and the prior belief about the delay values of network links. In this way, we obtain a scaled version of the link betweenness centralities ($\mathbf s$). With a high probability, a link with the maximum link betweenness ($B$) has the highest delay ($D$) in the network \cite{CongBetween}. Moreover, a link with the lowest link betweenness (0) is more likely to have no delay. Thus, $(0,0)$ and $(B,D)$ are two points on the interpolating line. Therefore, by considering the link betweenness values as the X-axis ($b_{i}$) and our prior belief about links' delay as the Y-axis ($s_{i}$) in a two-dimensional space, $s_{i}$ is given by:
\begin{equation}
s_{i} = D \times \frac{b_{i}}{B}
\end{equation}

In order to recover $\mathbf x$ from $\mathbf y$, it is critical that $\mathbf x$ be sparse. Since $\mathbf s$ is also a sparse vector (because it is highly correlated with $\mathbf x$), $\mathbf x - \mathbf s$ should be sparse too. Therefore, we define the probability density function of $\mathbf x$ as follows:
\begin{equation}
\label{ProbX}
P(\mathbf x) ~\propto ~\exp ~ \left\{ -\left( \frac{\lVert\mathbf x - \mathbf s\rVert_1}{k_1} + \frac{\lVert\mathbf x-\mathbf s\rVert^2_2}{k_2} \right)\right\}
\end{equation}
where $k_1$,$k_2 \in \mathbb{R}^\text{+}$. It penalizes non-sparse choices of $\mathbf x - \mathbf s$ (by the first term) and also vectors $\mathbf x$ which are not similar to $\mathbf s$ (by the second term).

By observing $\mathbf y$, we intend to find the $\mathbf x$ with the highest probability. Thus, we may maximize $P(\mathbf x|\mathbf y)$ by using the Maximum a Posteriori (MAP) estimation as follows:
\begin{equation}
\label{PX_Y}
\max\limits_{\mathbf x}~~\left(P(\mathbf x|\mathbf y) = \frac{P(\mathbf y|\mathbf x)~P(\mathbf x)}{P(\mathbf y)}\right)
\end{equation}
Since the goal of Eq.
\eqref{PX_Y} is to find the $\mathbf x$ maximizing $P(\mathbf x|\mathbf y)$, by eliminating the terms that do not depend on $\mathbf x$ and taking the logarithm, we obtain:
\begin{equation}
\label{LogP_X}
\max\limits_{\mathbf x}~\log \left( P(\mathbf y|\mathbf x)~ P(\mathbf x) \right) = \max\limits_{\mathbf x}~\left( \log P(\mathbf y|\mathbf x)+\log P(\mathbf x) \right)
\end{equation}
On the other hand, by observing $\mathbf y$, we intend to find $\mathbf x$ from Eq. \eqref{y_Ax}. We assume $\epsilon$ has a normal distribution with zero mean and covariance matrix $\sigma^2I$. Therefore:
\begin{equation}
\label{PY_X}
\mathbf y|\mathbf x \sim \mathcal{N}(A\mathbf x,\sigma^2 I)
\end{equation}
According to Eqs. \eqref{ProbX} and \eqref{PY_X}, Eq. \eqref{LogP_X} may be expanded as follows:
\begin{align*}
&\max\limits_{\mathbf x}~\left( \log P(\mathbf y|\mathbf x)+\log P(\mathbf x)\right) \equiv \\
&\min\limits_{\mathbf x}~ \Bigg(-\log \left\{ \frac{\exp \left( -(\mathbf y-A\mathbf x)^{\text{T}} \frac{1}{2\sigma^2} I (\mathbf y-A\mathbf x)\right)}{(2\pi)^{\frac{M}{2}} \det(\sigma^2 I)^\frac{1}{2}} \right\} - \\
&~\quad\quad\quad\quad\log ~ \left\{ \exp \left\{ -\left( \frac{\lVert\mathbf x - \mathbf s\rVert_1}{k_1} + \frac{\lVert\mathbf x-\mathbf s\rVert_2^2}{k_2}\right) \right\} \right\} + C \Bigg)
\end{align*}
where $C \in \mathbb{R}$ and $M$ is the number of random walk measurements. Therefore, the optimization problem becomes:
\begin{equation}
\label{OurCSEq}
\hat{\mathbf x} = \arg\min\limits_{\mathbf x}~~\left( \lambda \lVert \mathbf x - \mathbf s \rVert_{1} + \gamma \lVert A \mathbf x - \mathbf y \rVert^{2}_{2} + \alpha \lVert \mathbf x - \mathbf s \rVert_{2}^{2} \right)
\end{equation}
where $\gamma = \frac{1}{\sigma^2}$, $\lambda = \frac{1}{k_{1}}$, and $\alpha = \frac{1}{k_{2}}$.

An important goal in CS is model consistency \cite{ENet}, which means that the support of the recovered $\mathbf x$ converges to the support of the original $\mathbf x$ as the number of random walk measurements goes to $\infty$. To verify this property, we show that our problem is equivalent to the one discussed in \cite{ENet}, known as Elastic-Net:
\begin{equation}
\hat{\boldsymbol \beta} = \arg\min\limits_{\boldsymbol \beta}~~\left( \lambda \lVert \boldsymbol \beta \rVert_{1} + \gamma\lVert A\boldsymbol \beta - \mathbf y^{\prime}\rVert^{2}_{2} + \alpha \lVert \boldsymbol \beta \rVert_2^2 \right)
\label{ENetEq}
\end{equation}
To increase the recovery accuracy, achieve model consistency, and overcome LASSO limitations \cite{ENet}, Elastic-Net is used as an alternative to LASSO. Consider $\boldsymbol \beta = \mathbf x - \mathbf s$ and $\mathbf y^{\prime} = \mathbf y - A\mathbf s$. Since $\mathbf s$ is a constant vector, it is easy to show that Eq. \eqref{OurCSEq} and Eq. \eqref{ENetEq} are two equivalent optimizations. Thus, our model is in the form of Elastic-Net.
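As a minimal illustration of this reduction, the sketch below (reusing $G$, \texttt{edges}, $A$, and $\mathbf y$ from the previous sketch) builds the prior $\mathbf s$ from edge betweenness centralities and recovers $\mathbf x$ with an off-the-shelf Elastic-Net solver; the hyperparameter values, the detection threshold, and the mapping drawn in the comments between the paper's $(\lambda, \gamma, \alpha)$ and scikit-learn's \texttt{(alpha, l1\_ratio)} are illustrative assumptions.
\begin{verbatim}
import numpy as np
import networkx as nx
from sklearn.linear_model import ElasticNet

# Prior s: linear interpolation between (0, 0) and (B, D) applied to
# the edge betweenness centralities, i.e. s_i = D * b_i / B.
D = 10.0                               # illustrative maximum link delay
bc = nx.edge_betweenness_centrality(G)
b = np.array([bc[e] for e in edges])
s = D * b / b.max()

# Substituting beta = x - s and y' = y - A s turns Eq. (OurCSEq) into
# the Elastic-Net problem of Eq. (ENetEq).
y_prime = y - A @ s

# scikit-learn minimizes 1/(2M)||y' - A beta||^2 + a*r*||beta||_1
# + 0.5*a*(1-r)*||beta||^2, so (a, r) play the roles of the paper's
# (lambda, alpha) up to the 1/(2M) scaling of the data-fit term (gamma).
model = ElasticNet(alpha=0.05, l1_ratio=0.9, fit_intercept=False)
model.fit(A, y_prime)
x_hat = model.coef_ + s

# Declare links congested when the recovered delay is large
# (illustrative threshold).
detected = np.where(x_hat > 0.5 * D)[0]
\end{verbatim}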
As mentioned before, we have $M$ random walk measurements, $N$ network links, and sparsity value $k$. Without loss of generality, assume that only the first $k$ elements of $\mathbf x$ are non-zero. Then $\mathbf x_{1} = (x_{1}, ... ,x_{k})$, $\mathbf x_{2} = (x_{k+1}, ... ,x_{N})$, $A_{(1)}$ contains the first $k$ columns of $A$, and $A_{(2)}$ contains the last $N-k$ columns of $A$. Therefore, by defining $C_{11} = \frac{1}{M}A_{(1)}^{\text{T}} A_{(1)}$, $C_{12} = \frac{1}{M}A_{(1)}^{\text{T}} A_{(2)}$, $C_{22} = \frac{1}{M}A_{(2)}^{\text{T}} A_{(2)}$, and $C_{21} = \frac{1}{M}A_{(2)}^{\text{T}} A_{(1)}$, the Irrepresentable Condition (IC) can be shown to be a necessary and sufficient condition for LASSO's model consistency \cite{ENet}:
\begin{equation}
\label{IC_eq}
\exists~\eta>0 : \Bigg\lVert C_{21} (C_{11})^{-1}~\mathrm{sign}(\mathbf x_{1}) \Bigg\rVert_{\infty} \leq 1 - \eta
\end{equation}
Moreover, according to Corollary 1 in \cite{ENet}, if the Elastic Irrepresentable Condition (EIC) is satisfied, the Elastic-Net model has model consistency for suitable values of $\lambda$, $M$, and $\alpha$. The EIC is as follows:
\begin{equation}
\label{EIC_eq}
\Bigg\lVert C_{21} \Big(C_{11} + \frac{\alpha}{M} \mathbf I\Big)^{-1}\Bigg(\mathrm{sign}(\mathbf x_{1})+\frac{2\alpha}{\lambda}\mathbf x_{1} \Bigg) \Bigg\rVert_{\infty} \leq 1 - \eta
\end{equation}
where $\eta > 0$. In the next section, we show that in our model EIC is satisfied with a higher probability than IC.

\section{Evaluation}
\label{Evaluation}
\subsection{Simulation Framework}
In order to evaluate the proposed model, we have performed extensive simulations (30 runs) in MATLAB, using two real datasets. The first dataset describes a mobile operator network with 273 links and 158 nodes, corresponding to the network devices in Mobile Switching Centers (MSCs) placed in 40 cities. The second dataset describes a data network with 366 links and 277 nodes located in more than 50 cities. We have also considered the variance of the results by measuring the related error bars. For the first dataset, we set $\lambda = 10^{-3}$, $\alpha = 10^{-5}$, and $\gamma = 1$. For the second dataset, we set $\lambda = \alpha = \gamma = 1$. Through several simulations, we found these to be nearly the best configurations for both LASSO and our model. The number of steps in each random walk is set to 15.

\subsection{Validation}
\label{Validation}
To evaluate the proposed model, named Compressive Sensing in Congestion Detection (CSCD), we have used the $\text{F-Score}$ measure, which corresponds to the harmonic mean of precision and recall. Precision measures the ratio of correctly detected congested links to all detected congested links (both correctly and wrongly detected), and recall refers to the ratio of correctly detected congested links to the total number of actually congested links.

First, we have evaluated the proposed CSCD model and LASSO through model consistency. Assuming $\eta = 0.01$, EIC in Eq. \eqref{EIC_eq} and IC in Eq. \eqref{IC_eq} are verified for our model and LASSO, respectively. At the end of all 30 simulation runs, the percentage of times that EIC and IC are satisfied is computed.
\begin{figure}[h]
\centering
\includegraphics[width=0.27\textwidth]{EIC-IC.eps}~
\caption{Comparison of the probability that EIC holds in Elastic-Net and IC holds in LASSO, for various numbers of random walks in the first dataset}
\label{ENet-LASSO}
\end{figure}
It has to be mentioned that CS-over-Graphs in \cite{CSoverGraph} uses LASSO in its model.
Choosing $\lambda = \sqrt{M} \log M$, sparsity $k = 8$, and $N = 273$ links in the first dataset, and increasing $M$ such that $M \rightarrow \infty$, Fig.~\ref{ENet-LASSO} shows that model consistency holds with a higher probability in the CSCD model. The rest of this section evaluates the F-Score of our model in various settings.

Fig. \ref{F-ScoreRW} illustrates an improvement of the F-Score performance of the CSCD model by an average of 5\% (Dataset 1) and 10\% (Dataset 2) for various numbers of random walks, compared to the Compressive Sensing over Graphs method (CS-over-Graphs) presented in \cite{CSoverGraph}. The number of random walks changes from 10\% to 90\% of the total number of network links. Although we have evaluated the F-Score for various numbers of random walk steps, the results were similar to those shown in Fig. \ref{F-ScoreRW}. Clearly, for a lower number of random walks, we have a lower number of measurements, and thus fewer samples. Since the proposed model simultaneously employs the prior knowledge based on link betweenness centrality, it performs better than the CS-over-Graphs model \cite{CSoverGraph}, where only the sparsity information ($\lVert\mathbf x\rVert_{0}$) is used. However, as the number of random walks grows, our F-Score gets closer to that of CS-over-Graphs, because at higher numbers of random walks the prior knowledge becomes less significant.
\begin{figure}[h]
\centering
\includegraphics[width=0.28\textwidth]{FScore-RW_D1D2.eps}
\caption{F-Score versus number of random walks in the two datasets}
\label{F-ScoreRW}
\end{figure}

As illustrated in Fig. \ref{F-ScoreSparsity}, we have also evaluated the F-Score of CSCD in terms of the sparsity of the congested links in the network ($\lVert {\mathbf x}\rVert_0$). The F-Score of the CSCD model is improved by an average of 11\% in the first dataset, and 9\% in the second one, compared to the CS-over-Graphs model.
\begin{figure}[h]
\centering
\includegraphics[width=0.28\textwidth]{FScore-Sparsity_D1D2.eps}
\caption{F-Score versus sparsity in the two datasets}
\label{F-ScoreSparsity}
\end{figure}

In Fig. \ref{RWSparsity}, we have measured the required number of random walks in terms of the sparsity of the congested links in the network ($\lVert\mathbf x\rVert_0$). With the sparsity varying from 5\% to 30\% of the network links, we have calculated the least required number of random walks for which the F-Score equals 50\%. The number of random walks in CSCD is decreased by an average of 16\% in the first dataset and 15\% in the second one, compared to CS-over-Graphs.
\begin{figure}[h]
\centering
\includegraphics[width=0.28\textwidth]{Sparsity_RW_D1D2.eps}~
\caption{Required number of random walks versus sparsity in the two datasets}
\label{RWSparsity}
\end{figure}

We also compare the F-Score of our model with an algorithm that uses only the Betweenness Centrality (BC) measurement for congestion detection (the BC algorithm), as employed in \cite{BC1}, \cite{BC2}, and \cite{BC3}. The result is illustrated in Fig. \ref{MereBC}.
\begin{figure}[h]
\centering
\subfigure[]{\label{RW-BC}\includegraphics[width=0.27\textwidth]{BC-FScore-RW_D1.eps}}~
\subfigure[]{\label{Sparsity-BC}\includegraphics[width=0.27\textwidth]{BC-FScore-Sparsity_D1.eps}}
\caption{Comparison of our model with the BC algorithm in terms of (a) the required number of random walks (b) sparsity}
\label{MereBC}
\end{figure}
Since betweenness centrality is independent of the number of random walks, it remains constant in Fig. \ref{RW-BC}. As shown in Fig.
\ref{MereBC}, CSCD outperforms the algorithms based on mere betweenness centrality by an average of 12\% on F-Score for different sparsities and 69\% on F-Score for various numbers of random walks.

\section{Conclusion}
\label{Conclusion}
In this paper, we introduced a new objective function based on the concepts of compressive sensing in a network tomography application. We used link betweenness prior knowledge in our objective function, which results in a decrease in the required number of measurements for detecting the congested links of the network. Based on extensive simulation results, we verified a significant improvement in the accuracy of detecting the congested links in two real datasets.
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction}
\IEEEPARstart{T}{he} interest in autonomous driving has continuously increased during the last two decades. However, to be adopted, such critical systems need to be safe. Concerning the perception of the ego-vehicle environment, the literature has investigated two different types of methods. On the one hand, traditional analytical methods generally rely on hand-crafted designs and features. On the other hand, learning methods aim at designing their own appropriate representation of the observed scene.

\textbf{Analytical methods} have demonstrated their usefulness for several tasks, including keypoint detection \cite{lowe_distinctive_2004}, \cite{karpushin_keypoint_2016}, optical flow, depth map estimation, background subtraction, geometric shape detection, tracking, and simultaneous localization and mapping (SLAM) \cite{bresson_simultaneous_2017}. Those methods have the advantage of being easily explainable. However, it is difficult to apply them to high dimensional data for semantic scene analysis. For example, identifying the other road users or understanding the large variety of situations present in an urban scene requires extracting complex patterns from high dimensional data captured by camera sensors.

\textbf{Learning methods} are nowadays the most suitable, in terms of prediction performance, for the complex pattern recognition tasks \cite{kirillov_panoptic_2018} involved in autonomous vehicle scene analysis and understanding. However, state-of-the-art results are often obtained with large and fully labeled training datasets \cite{cordts_cityscapes_2016}. Hand-labeling a large dataset for a given specific application has a cost. Another difficulty is to apprehend the learned representations from end to end. To overcome the former limitation, transfer learning and weakly supervised learning methods have been proposed. Some of them can exploit partially labeled datasets \cite{niu_theoretical_2016}, \cite{chiaroni_learning_2018}, or noisy labeled datasets \cite{ma_dimensionality-driven_2018}, \cite{chiaroni_hallucinating_2019}. Concerning the latter problem, under mild theoretical assumptions on the learning model, we can interpret the predicted outputs. For instance, it is possible to automatically detect overfitting during training \cite{houle_local_2017}, to estimate the fraction of mislabeled examples \cite{jain_estimating_2016}, or to estimate the uncertainty in the prediction outputs \cite{gal_uncertainty_2016}.

In addition to the difficulty of obtaining a large labeled training dataset, another challenge of learning methods is to \textbf{cope with unpredictable events}. Indeed, scenes unseen during training can appear frequently in the context of the autonomous vehicle. For instance, an accident on the road can change drastically the appearance and the location of potential obstacles. Thus, even if it is possible to predict when the model does not know what it observes, it may be useful to confirm this through an analytical process and to adapt the learning model to the novel situation.

It turns out that \textbf{self-supervised learning (SSL) methods}, which combine analytical and learning techniques, have shown in the literature the ability to address such issues. For instance, the SSL system in \cite{dahlkamp_self_supervised_2006} won the 2005 DARPA Grand Challenge thanks to its adaptability to changing environments.
SSL for autonomous driving perception is most often based on learning from data which is automatically labeled by an upstream method, similarly to feature learning in \cite{jing_self-supervised_2019}. In this paper, we address the following aspects of SSL:
\begin{itemize}
\item abilities such as sequential environment adaptation at application time, referred to as online learning, self-supervised evaluation, no need for hand-labeled data, fostering of multimodal techniques \cite{dahlkamp_self_supervised_2006}, and self-improvement. For example, iterative learning progressively reduces corrupted predictions \cite{zhong_self_supervised_2017};
\item tasks made possible thanks to those advantages, such as depth map estimation \cite{garg_unsupervised_2016}, \cite{zhong_self_supervised_2017}, temporal predictions \cite{dequaire_deep_2017}, moving obstacle analysis \cite{bewley_online_2014}, and long-range vision \cite{dahlkamp_self_supervised_2006}, \cite{hadsell_learning_2009}. For example, the SSL system in \cite{hadsell_learning_2009} learns to extrapolate the appearance of obstacles and traversable areas observable by stereo-vision at short range, in order to identify them at longer distances, beyond the detection range of the stereo-vision.
\end{itemize}

\begin{figure*}[h]
\begin{center}
\begin{minipage}[c]{0.321\linewidth}
\centering
\centerline{\includegraphics[width=\linewidth]{illustrations/2006_darpa_free.jpg}}
\centerline{(a)}\medskip
\end{minipage}
\begin{minipage}[c]{0.25\linewidth}
\centering
\centerline{\includegraphics[width=\linewidth]{illustrations/Lagr-robot_free.jpg}}
\centerline{(b)}\medskip
\end{minipage}
\begin{minipage}[c]{0.215\linewidth}
\centering
\centerline{\includegraphics[width=\linewidth]{illustrations/passat_sensors}}
\centerline{(c)}\medskip
\end{minipage}
\caption{Some self-driving cars. (a) is the self-driving car \textit{Stanley} that won the \textit{DARPA Grand Challenge} using a SSL system equipped with a calibrated monocular camera and a LIDAR sensor \cite{dahlkamp_self_supervised_2006}. (b) is the autonomous mobile robot \textit{LAGR}. It integrates another SSL vision approach \cite{hadsell_learning_2009}, able to identify online the obstacles and the traversable road, from short-range stereo-vision up to long-range monocular vision. (c) is the car equipped with the perception sensors used to generate the KITTI dataset \cite{geiger_are_2012}.}
\label{fig:Self_driving_cars}
\end{center}
\end{figure*}

While each of the cited SSL techniques is designed for a specific use case, they present some similarities. In particular, a shared underlying idea is to: learn to predict, from given spatio-temporal information (e.g. a single camera frame \cite{dahlkamp_self_supervised_2006}, \cite{hadsell_learning_2009}, \cite{guizilini_online_2013}, \cite{garg_unsupervised_2016}, \cite{pathak_learning_2017}), something (e.g. traversable area segmentation \cite{dahlkamp_self_supervised_2006}, \cite{hadsell_learning_2009}, depth estimation \cite{garg_unsupervised_2016}, or moving obstacle segmentation \cite{guizilini_online_2013}, \cite{pathak_learning_2017}) that can be automatically labeled in another way using additional spatio-temporal information (e.g. a stereo-vision camera \cite{hadsell_learning_2009}, \cite{garg_unsupervised_2016}, a temporal sequence \cite{ondruska_deep_2016}, or a depth sensor \cite{dahlkamp_self_supervised_2006}). We propose to highlight those inter-dependencies hereafter.
In this way, we aim at providing the reader with analytical, learning, and hybrid tools which are transversal to the final application use cases. In addition, the limitations of the presented frameworks are discussed and highlighted in Table \ref{tab:differences_A_L_SSL}, as well as the perspectives of improvement for self-evaluation, self-improvement, and self-adaptation, in order to address future autonomous driving challenges.

\begin{table*}[h]
\centering
\caption{Comparison of state-of-the-art Analytical, Learning and SSL methods for autonomous vehicle perception challenges ('+' inappropriate, '++' intermediary, '+++' appropriate).}
\resizebox{17.2cm}{!}{
\begin{tabular}{c||c|c|c|c|c}
\toprule
Methodology & no hand-labeling & dense complex pattern analysis & online self-evaluation and adaptation & knowledge extrapolation & low-cost sensor requirements \\
\midrule \midrule
Analytical & +++ & + & ++ & + & + \\
Supervised learning & + & +++ & + & + & +++ \\
Self-Supervised Learning & +++ & ++ & +++ & +++ & ++ \\
\bottomrule
\end{tabular}}
\label{tab:differences_A_L_SSL}
\end{table*}

The outline of this article is as follows. After this introduction, we present in Sec. \ref{section:analytical} and \ref{section:learning} some analytical and learning perception tools relevant to SSL. We follow in Sec. \ref{section:ssl} with the presentation of existing SSL techniques for some autonomous driving perception applications. Finally, we conclude with a discussion focusing on limitations and future challenges in Sec. \ref{section:limit}.

\section{Analytical methods}
\label{section:analytical}
Before the recent growing interest in deep learning methods, many analytical methods (without learning) were proposed, bringing baseline reference tools for multiple challenging perception tasks in the context of autonomous driving. Some of the most investigated tasks considered in this article are briefly introduced hereafter:
\begin{itemize}
\item \textbf{Keypoint feature detection:} Before analyzing the sensor data at a relatively high level, analytical techniques often require performing spatial or temporal data matching using \textbf{feature detection} methods. More specifically, these methods consist in detecting and extracting local features in the sensor data. These hand-crafted features can be small regions of interest \cite{harris1988combined}. In order to enable the matching of sensor data captured from the same scene with different spatial or temporal points of view, such features need to be as invariant as possible to scale, translation, and rotation transformations. The most common sensor data is an image captured by a camera. In this case, competitive feature detectors include SIFT \cite{lowe_distinctive_2004}, SURF \cite{bay2006surf}, and ORB \cite{rublee2011orb}. When a depth sensor is also available, the depth information can be exploited in order to further improve feature detection. For instance, the TRISK method \cite{karpushin_keypoint_2016} is specifically designed for RGB-D images. More recently, LIDAR has enabled the acquisition of point clouds. To tackle this new form of sensor data, some feature detection techniques are derived from image ones (e.g. Harris and SIFT). Alternatively, some new approaches such as ISS \cite{zhong2009intrinsic} are exclusively designed for point clouds.
From a practical point of view, implementations of common image feature detectors can be found in image processing libraries such as OpenCV\footnote{https://opencv.org/}, and in point cloud libraries such as PCL\footnote{http://pointclouds.org/}; a minimal detection-and-matching sketch is given at the end of this section. Feature detectors are exploited by several autonomous driving perception techniques requiring matching of sensor data, including optical flow, disparity map, visual odometry, SLAM and tracking techniques.
\item \textbf{Optical flow} is a dense \cite{farneback_two-frame_2003} or sparse \cite{lucas1981iterative} motion pattern. It can be obtained by computing point or feature transformations throughout a temporal image sequence captured from a static or mobile ego-camera point of view. In the context of autonomous driving perception, optical flow is relevant for background subtraction, and for motion estimation of the ego-vehicle and surrounding moving obstacles, as proposed by Menze et al. \cite{menze2015object}. It can also be exploited, in the case of a monocular mobile camera without any additional information, for relative depth map estimation \cite{prazdny1980egomotion} of the surrounding static environment.
\item \textbf{Depth map estimation} aims at providing image pixel depths, namely the relative or absolute distance between the camera and the captured objects. Several techniques exist to address this task. One of the most common and effective approaches is to compute a disparity map from a stereo-camera. Combined with the extrinsic camera parameters, such as the baseline distance separating both cameras, the disparity map can be converted into an inversely proportional absolute depth map. Another approach is to project LIDAR points on some of the camera image pixels. It also requires extrinsic spatial and temporal calibrations between both sensors. As mentioned previously, a relative depth map of a static scene can also be derived from the optical flow obtained with a moving camera. Under some assumptions, for example with additional accurate GPS and IMU sensor information concerning the absolute pose transformations of the moving camera, the absolute depth map can then be obtained. The depth map can also be directly obtained with some RGB-D sensors. A depth map is valuable for identifying the 3D shape of objects in the scene. More specifically, in autonomous driving, an absolute depth map is relevant for estimating the distance between the ego-vehicle and detected obstacles. However, we should note that absolute depth map estimation is more challenging than relative depth map estimation, as at least two jointly calibrated sensors are necessary. Consequently, it implies a relatively higher financial cost in production. Moreover, extrinsic calibrations can be sensitive to physical shocks on the ego-vehicle. Finally, such sensor fusions can only offer depth estimation in a limited range, due to fixed baselines with stereo cameras, or sparse point cloud projections with dispersive LIDAR sensors. Nevertheless, a relative depth map is sometimes sufficient to detect obstacles and traversable areas. For example, considering the traversable area as a set of planes in the depth map 3D point cloud projection, some template matching techniques can be used \cite{hadsell_learning_2009}.
\item \textbf{Geometric shape detection} techniques such as the Hough transform and RANSAC \cite{fischler1981random} initially aimed at identifying basic geometric shapes such as lines for lane marking detection, ellipses for traffic light detection, or planes for road segmentation.
In order to deal with sophisticated template matching tasks, techniques such as the Hough transform have been generalized to arbitrary shape detection \cite{ballard1981generalizing}. Nonetheless, these techniques require an exact model definition of the shapes to be detected. Consequently, they are sensitive to noisy data and are impractical for the detection of complex and varying shapes such as the obstacles encountered in the context of autonomous driving. Indeed, such objects typically suffer from outdoor illumination changes, background clutter, or non-rigid transformations.
\item \textbf{Motion tracking} aims at following some data points, features or objects over time. Tracking filters, such as the Extended Kalman Filter (EKF), predict the next motion using prior motion knowledge. Conversely, object tracking can be achieved by feature or template matching between consecutive video frames. Tracking pixel points and features is relevant for dense or sparse optical flow, as well as for visual odometry estimation \cite{scaramuzza2011visual}. Conversely, tracking obstacle objects is very important in autonomous driving for modeling or anticipating their trajectories in the ego-vehicle environment. However, on the whole, while some techniques integrate uncertainty, they remain limited when dealing with complex real motion patterns. Predicting pedestrian and driver behaviour typically requires knowledge about the context. Moreover, the appearance of mobile obstacles can drastically change depending on their orientation.
\item \textbf{SLAM techniques:} The complementarity between the above enumerated concepts has been demonstrated through the problem of \textit{simultaneously} \textit{localizing} the ego-vehicle \textit{and mapping} the surrounding environment (SLAM) \cite{bresson_simultaneous_2017}. Feature matching provides the pose transformations of the moving ego-vehicle. In turn, 3D scaled projections of depth maps, combined with the successive estimated poses, provide the environment mapping. Tracking filters and template matching may offer some robustness against sensor data noise and drifting localization estimation, as respectively proposed in the EKF SLAM \cite{davison2007monoslam} and SLAM$++$ \cite{Salas_Moreno_2013_CVPR} approaches.
\end{itemize}

To summarize, analytical methods can successfully deal with several perception tasks of significant interest in the context of autonomous driving. In particular, a self-driving vehicle embedding these techniques is able to carry out physical analysis such as the 3D reconstruction modelling of the environment, and dynamic estimations concerning the ego-vehicle and the encountered surrounding mobile obstacles. Moreover, these techniques have the advantage of being easily explainable in terms of design. This facilitates the identification and prevention of failure modes. However, some critical limitations persist nowadays:
\begin{itemize}
\item A lack of landmarks and salient features, combined with the presence of dynamic obstacles, may entail a severe degradation of feature detection and matching.
\item Severely noisy sensor data induces the same risks.
\item It is impossible to achieve dense real-time semantic scene analysis of environments including a wide range of complex shape patterns.
\end{itemize}
Learning methods, by recognizing and predicting complex patterns with generalization abilities, aim at overcoming such issues, as developed in the next section.
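To close this section, here is the minimal detection-and-matching sketch announced in the keypoint item above, using OpenCV's ORB detector in Python; the file names are placeholders and the number of retained matches is an arbitrary choice.
\begin{verbatim}
import cv2

# Load two consecutive grayscale frames (placeholder file names).
img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and compute their binary descriptors.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors with the Hamming distance and keep the best matches;
# such correspondences can feed optical flow, visual odometry or SLAM.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
good = matches[:100]

vis = cv2.drawMatches(img1, kp1, img2, kp2, good, None)
cv2.imwrite("matches.png", vis)
\end{verbatim}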
\section{Learning methods}
\label{section:learning}
Learning methods have demonstrated state-of-the-art prediction performances for semantic analysis tasks during the last decade. Autonomous driving is a key application which can greatly benefit from these recent developments. For instance, learning methods have been investigated in this context for identifying the observed scene context using classification, for detecting the other road users surrounding the ego-vehicle, for delineating the traversable area surface, and for dynamic obstacle tracking.
\begin{itemize}
\item \textbf{Classification} aims at predicting, for a given input sensor sample, an output class label. In order to deal with high dimensional data containing complex patterns, the first stage is generally to extract relevant features using hand-crafted filters or learned feature extractors. For image feature extraction, the state-of-the-art techniques use Convolutional Neural Network (CNN) architectures. They are composed of a superposition of consecutive layers of trainable convolutional filters. Then, a second stage is to apply a learning classifier on the feature maps generated as output of these filters. Some commonly used classifiers are the Support Vector Machine (SVM) and the Multi-Layer Perceptron (MLP). Both require a training which is most of the time performed in a fully supervised way on labeled data. The CNN and MLP deep learning models are trained by backpropagating the output prediction error on the trainable weights up to the input. Concerning the evaluation of these models, a test dataset is required, which is labeled as well. The \textit{Accuracy} metric is commonly used for evaluating the prediction performances, while the F1-Score, a harmonic mean of precision and recall, is relevant for information retrieval. An image classification application example in autonomous driving is categorizing the context of the driven road \cite{teichmann_multinet:_2016}.
\item \textbf{Detection} generally localizes the regions of interest in visual sensor data, which in turn can be classified. A commonly used strategy, invariant to scale and translation, applies an image classifier on sliding windows over an image pyramid. Several more advanced and competitive image detection techniques, such as Faster R-CNN \cite{ren_faster_2015} or Yolo \cite{redmon_you_2016}, have been developed more recently, and have been adapted for road user detection \cite{teichmann_multinet:_2016}.
\item \textbf{Segmentation:} As its name suggests, this task provides a segmentation of visual sensor data. Three distinct applications can be considered:
\begin{itemize}
\item \textit{Semantic segmentation} assigns a semantic class label to each pixel. An example is road segmentation \cite{teichmann_multinet:_2016}. State-of-the-art methods for autonomous vehicle perception can exploit an auto-encoder architecture, but also dilated or atrous convolutions, as well as an image context modeling strategy, as reviewed in \cite{garcia2018survey}. In the context of image segmentation, these models are trained to predict as output a pixel-wise classification of the input image.
\item \textit{Instance segmentation} aims at detecting and segmenting each object instance. Examples include foreground segmentation and object detection of potentially moving obstacles \cite{He_2017_ICCV}.
\item \textit{Panoptic segmentation} \cite{kirillov_panoptic_2018} is a unification of the two previously mentioned segmentation tasks.
\end{itemize}
Some models dealing with these segmentation tasks have been adapted for performing per-pixel regression tasks such as dense optical flow estimation \cite{Dosovitskiy_2015_ICCV} or depth map estimation \cite{deep_sup_depth_map_est}.
\item \textbf{Temporal object tracking} follows the spatial location of selected objects along a temporal data sequence. State-of-the-art learning techniques use variants of the Recurrent Neural Network (RNN) model \cite{milan_online_2017}. Compared to standard filtering techniques, RNNs have the ability to learn complex and relatively long-term temporal patterns in the context of autonomous driving.
\end{itemize}

These methods can be combined in a unified framework, for instance by sharing the same encoded latent feature maps, as proposed in MultiNet \cite{teichmann_multinet:_2016} for joint real-time scene classification, vehicle detection and road segmentation. While demonstrating competitive prediction performances, the above mentioned learning techniques are fully supervised. In other words, they share the limitation of requiring large-scale fully annotated training datasets. In order to alleviate this issue, some other learning strategies have been investigated:
\begin{itemize}
\item \textbf{Weakly supervised learning:} These techniques can be trained with a partially labeled dataset \cite{niu_theoretical_2016}, and possibly with a fraction of corrupted labels \cite{ma_dimensionality-driven_2018}, \cite{chiaroni_hallucinating_2019}. Advantageously, these approaches drastically reduce the need for labeled data.
\item \textbf{Clustering:} These approaches can be defined as an unlabeled classification strategy that aims at gathering the data without supervision, depending on their similarities. A huge advantage is that no labels are required. However, if it is necessary to associate the resulting clusters with semantic meanings understandable by humans, then a final step of punctual per-cluster hand-labeling is required. State-of-the-art methods \cite{caron_deep_2018} dealing with complex real images mix trainable feature extractors with standard clustering methods such as a Gaussian Mixture Model (GMM) \cite{moon_expectation_maximization_1996}.
\item \textbf{Pre-training:} Some relevant generic visual feature extractors can be obtained by performing a preliminary pre-training of the CNN model on unlabeled or labeled data coming from the target application domain \cite{hadsell_learning_2009} or even from a different one \cite{godard_digging_2018}.
\end{itemize}

We also note that, in order to apprehend the learned representations from end to end, it is possible to identify training overfitting \cite{houle_local_2017} of deep learning models without validation test supervision. Furthermore, some learning approaches can estimate the prior of a noisy labeled training dataset \cite{jain_estimating_2016} or the model uncertainty \cite{gal_uncertainty_2016}, \cite{kendall_bayesian_2015}. Now that the considered analytical and learning methods have been treated separately, the next section shows the complementarity between these two types of approaches through several Self-Supervised Learning (SSL) systems developed in the context of autonomous driving perception.
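Before moving on, the sketch below gives a concrete flavour of how such learned segmentation models are used in practice, by running a torchvision semantic segmentation network, pre-trained on generic labeled data, on a single road-scene image; the chosen model, weights, and file name are illustrative assumptions and do not correspond to the specific models used by the cited works.
\begin{verbatim}
import torch
from torchvision import models, transforms
from PIL import Image

# Load a semantic segmentation model pre-trained on a generic dataset.
model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

# Standard ImageNet normalization expected by torchvision backbones.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("road_scene.png").convert("RGB")  # placeholder file
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    out = model(batch)["out"]          # (1, num_classes, H, W) logits
labels = out.argmax(dim=1)[0]          # per-pixel class prediction
\end{verbatim}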
\section{SSL Autonomous Driving Applications}
\label{section:ssl}
In the context of autonomous driving applications, we can organize the Self-Supervised Learning (SSL) perception techniques into two main categories:
\begin{itemize}
\item High-level scene understanding:
\begin{itemize}
\item road segmentation, in order to discriminate the traversable path from obstacles to be avoided
\item dynamic obstacle detection and segmentation
\item obstacle tracking and motion anticipation predictions
\end{itemize}
\item Low-level sensor data analysis, with a particular focus on dense depth map estimation, which provides potentially relevant input information for dealing with the previously enumerated scene understanding challenges.
\end{itemize}

\subsection{Scene understanding}
In order to navigate safely, smoothly, or swiftly when required, a self-driving car must perform path planning adapted to the surrounding environment. The planned trajectories must pass through traversable areas, while ensuring that surrounding static and dynamic obstacles are avoided. For this purpose, it is necessary to detect and delineate these potential obstacles in advance, but also to anticipate the future positions of the mobile ones.

\subsubsection{Traversable area segmentation}
A traversable area can be identified by performing its segmentation over the mapped physical environment. Two different strategies have been successively applied. The former is mainly dedicated to offroad unknown terrain crossing. It entails fully self-supervised training systems (i.e. without hand-labeled data). The latter, which appeared more recently, is dedicated to urban road analysis. The main difference is that the SSL online systems are initialized with a supervised pre-training on hand-labeled data. This preliminary step aims at compensating, with prior knowledge, for the lack of landmarks on urban asphalt roads having uniform textures.

\textbf{SSL offroad systems:} A road segmentation approach is proposed in \cite{lieb_adaptive_2005} by exploiting past temporal information concerning the road appearance in monocular camera images. It considers the close observable area in front of the car on the current monocular camera frame as a traversable road. Next, it propagates optical flow on this area from the current frame back to previously captured frames. Then, it can deduce the appearance this close area had when it was spatially farther away in the past. This past appearance of the current close traversable area is exploited for producing horizontal line templates using the SSD (sum of squared differences) matching measure. It is combined with a Hough transform-based horizon detector to define the horizontal lines of image pixels on which to apply the horizontal 1-D template matching. Next, under the assumption that the current distant traversable area has roughly the same appearance as the current close area had in the past, the 1D templates are applied over the current frame to segment the distant traversable area. If the best template matching measure changes abruptly, then it is assumed that the ego-vehicle is going off the road or that the road appearance has suddenly and drastically changed.

The approach in \cite{lieb_adaptive_2005} is relevant for providing a long-range road image segmentation using a monocular camera only. However, a major issue is the critical assumption considering the close area as always traversable. If the road aspect changes suddenly, then it is impossible with this SSL strategy to correctly segment the novel road region.
Another SSL road segmentation approach is proposed in \cite{dahlkamp_self_supervised_2006} to deal with this issue. Instead of using temporal information with the assumption that the close area is always traversable, a LIDAR sensor is used, in addition to the monocular camera, for detecting the obstacles close to the ego-vehicle. Projected onto the camera images, the LIDAR depth points enable automatic and sparse labeling of the close traversable area on image pixels. Then, a learning Gaussian mixture model (GMM) is trained online to recognize the statistical appearance of these sparse, analytically labeled pixels. Next, the trained model is applied to the camera pixels which cannot benefit from the sparse LIDAR point projection, in order to classify them as road pixels or not. In this way, the vehicle can anticipate far obstacles observable in the monocular camera images, but not in the dispersive LIDAR data. This SSL system enabled the \textit{Stanley} self-driving car, presented in Figure \ref{fig:Self_driving_cars}(a), to win the \textit{DARPA Grand Challenge}\footnote{https://www.darpa.mil/about-us/timeline/-grand-challenge-for-autonomous-vehicles} by smoothing the trajectories and increasing the vehicle speed thanks to the anticipation of distant obstacles. This highlighted the interest of combining multiple sensors in a self-driving car.

More recently, with the growing interest in deep learning methods, Hadsell et al. \cite{hadsell_learning_2009} proposed to use a CNN classifier model instead of the earlier template matching or GMM learning techniques. Moreover, an additional paired camera (i.e. stereo-camera) replaces the LIDAR sensor of \cite{dahlkamp_self_supervised_2006}. As offroad terrain traversable areas are not always completely flat, a multi-ground plane segmentation is performed in \cite{hadsell_learning_2009} on the short-range point cloud projection obtained with the stereo-vision depth map, by using a Hough transform plane detector. This technique provides several automatic labels for the image patches observable in the short-range region. Then, addressing the long-range vision segmentation, the authors first train a classifier to predict the patch labels automatically estimated within the short-range region. Next, the trained classifier predicts the same labels on the long-range observable image region patches by using a sliding window classification strategy. Concerning the prediction performances, the authors have demonstrated that the online fine-tuning of the classifier, and the offline pre-training of its convolutional layers using an unsupervised autoencoder architecture, can both be beneficial. Moreover, an interesting point to note is that instead of using uncertainty or noisy-label learning techniques, the authors created transition class labels for the boundary image surfaces separating the obstacles from the traversable area. Finally, from an initial 11-12 meter short-range stereo-vision, the developed SSL system is able to extrapolate a long-range vision up to 50-100 meters.

Nonetheless, in order to estimate the short-range stereo 3D reconstruction, including the planar sets of points corresponding to the offroad traversable area, this approach requires the presence of salient visual features in the road regions. This may be impractical, for instance, on the uniform visual texture of asphalt roads commonly encountered in urban scenarios, as illustrated in Fig. \ref{fig:sift_urban_road_distr}.
\begin{figure*}[h]
\begin{center}
\begin{minipage}[c]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=\linewidth]{illustrations/unknown_gray}}
\centerline{(a)}\medskip
\end{minipage}
\begin{minipage}[c]{0.49\linewidth}
\centering
\centerline{\includegraphics[width=\linewidth]{illustrations/sift_keypoints}}
\centerline{(b)}\medskip
\end{minipage}
\caption{Salient feature locations in an urban ego-vehicle environment. (a) is an arbitrary frame, extracted from the KITTI dataset \cite{geiger_are_2012}, illustrating an urban asphalt road with the surrounding environment. (b) shows keypoints detected on the left input image using the SIFT detector. The keypoint distribution is dense in the offroad region, and sparse on the asphalt road in the image center.}
\label{fig:sift_urban_road_distr}
\end{center}
\end{figure*}

\textbf{Pre-trained SSL urban road systems:} Some other online SSL techniques deal with this issue by exploiting a classifier pre-trained offline on hand-labeled data \cite{zhou_road_2010}, \cite{roncancio_traversability_2014}. The automatic labeling step previously performed with analytical methods is replaced in \cite{zhou_road_2010} by an SVM classifier pre-trained offline using a human-annotated dataset. In this way, this approach is also compatible with uniform asphalt road surfaces. However, compared to the previously presented SSL offroad approaches, it requires hand-labeled data. A hybrid path segmentation technique is proposed in \cite{roncancio_traversability_2014}. It combines a 3D traversability cost map obtained by stereo-vision, and an SVM classifier pre-trained offline over a human-annotated dataset. Several different ground surfaces are considered to train the classifier: asphalt, big gravel, small gravel, soil, grass, bushes and stones. The strategy is as follows. The SVM predictions refine the cost map online concerning the flat regions. In turn, the 3D traversability cost map, obtained without supervision, is exploited to correct online some misclassifications of the pre-trained classifier.

To sum up these SSL road segmentation approaches, we can notice that while the sensor data and the analytical and learning models differ, the online process remains essentially the same. The first stage always consists in generating automatic labels by using additional temporal \cite{lieb_adaptive_2005}, sensor \cite{dahlkamp_self_supervised_2006}, \cite{hadsell_learning_2009}, or prior knowledge information \cite{zhou_road_2010}, \cite{roncancio_traversability_2014}. Then, a second stage trains or updates online a classifier, such that it can be used to provide a long-range or refined road segmentation. Overall, while the short-range vision based on depth sensors aims at ensuring the reliable detection of close obstacles, using such SSL vision techniques in static environments directly enables anticipating the path planning evolution at long range. Consequently, it is possible to increase the maximum speed of the self-driving car \cite{dahlkamp_self_supervised_2006}, while preserving smooth trajectories \cite{hadsell_learning_2009}. Now that we have presented some SSL techniques dealing with limited depth sensors in static environments, we focus on dynamic obstacles, as they represent the other potential road users interacting with the ego-vehicle in the shared surrounding environment.
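Before doing so, the following sketch illustrates, in the spirit of \cite{dahlkamp_self_supervised_2006}, the appearance-learning stage shared by these road segmentation systems: a Gaussian mixture is fitted on pixel colours self-labeled as close traversable area (e.g. by projected LIDAR points), and then scores the remaining pixels; the mask source, the number of mixture components, and the likelihood threshold are illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def long_range_road_mask(rgb, close_road_mask, n_components=3,
                         quantile=0.05):
    """Extrapolate a sparse, automatically labeled road region.

    `rgb` is an (H, W, 3) image and `close_road_mask` an (H, W) boolean
    mask of pixels labeled traversable by a short-range sensor (e.g.
    projected LIDAR points); both layouts are assumptions of this sketch.
    """
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    labeled = pixels[close_road_mask.reshape(-1)]

    # Fit the appearance model only on the self-labeled road pixels.
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full")
    gmm.fit(labeled)

    # Score every pixel and keep those at least as likely as the bulk
    # of the training pixels (threshold set from a low quantile).
    thresh = np.quantile(gmm.score_samples(labeled), quantile)
    scores = gmm.score_samples(pixels)
    return (scores >= thresh).reshape(rgb.shape[:2])
\end{verbatim}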
\subsubsection{Dynamic obstacles analysis}
This section starts by presenting an SSL approach \cite{guizilini_online_2013} based on a binary per-pixel segmentation of dynamic obstacles. Then, we introduce its extension \cite{bewley_online_2014} to dynamic obstacle instance segmentation, such that the different road users can be separated.

\textbf{SSL for dynamic obstacle pixel-wise segmentation:} A pixel-level binary segmentation of dynamic obstacles is proposed in \cite{guizilini_online_2013}, using temporal image sequences captured with a monocular camera installed on a mobile urban vehicle. The approach first separates the sparse dynamic keypoint features from the static ones, by applying a RANSAC technique over the optical flow between consecutive frames. Then, the automatically produced per-pixel dynamic labels are transferred as input to a learning Gaussian Process (GP) model. Next, the trained model extrapolates this knowledge to label as dynamic the pixels sharing the same visual properties as the ones previously identified automatically as dynamic. The whole process is achieved during an online procedure. The system is evaluated on a hand-labeled dataset. This SSL strategy has the advantage of providing background subtraction from a moving camera, while extrapolating a dense per-pixel segmentation of the dynamic obstacles from sparse detected keypoints. However, this technique cannot provide per-obstacle analysis, as it merely predicts a binary mask of pixels corresponding to dynamic obstacles.

The technique in \cite{bewley_online_2014} extends the previous approach to SSL multi-instance segmentation. The authors apply, over the mobile keypoints detected by \cite{guizilini_online_2013}, a clustering method using the tracked keypoint information, such as spatial location and motion pattern features. The multi-instance segmentation of dynamic obstacles is evaluated on a hand-labeled video sequence of the KITTI dataset \cite{geiger_are_2012}.

Overall, the authors state that some issues shared with analytical methods persist in their approach. If the shadows of dynamic obstacles are projected on the background, then the latter is considered as dynamic as well. Moreover, the segmentation of distant dynamic obstacles can be missed if the corresponding keypoint variations are considered as noise, due to the difficulty of detecting the corresponding slight optical flow variations. Furthermore, if a dynamic obstacle, either large or close to the sensor, represents the majority of the image keypoints, then this obstacle is likely to be treated as the static background scene. Nonetheless, it is important to bear in mind that these approaches present state-of-the-art competitive performances for dynamic obstacle detection and segmentation without training or pre-training on annotated data. In addition, the method in \cite{bewley_online_2014} provides interesting tools to analyze the dynamic obstacles on the move, for example to track them separately and learn to predict their intentions. The next focus is on SSL techniques designed for object tracking and temporal predictions in urban road scene evolution, including dynamic obstacles.
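Before that, a minimal version of the first stage discussed above — separating dynamic keypoints from the dominant background motion, in the spirit of \cite{guizilini_online_2013} — can be sketched with OpenCV as follows; approximating the ego-motion-induced image motion with a single RANSAC homography is an illustrative simplification.
\begin{verbatim}
import cv2
import numpy as np

def dynamic_keypoints(prev_gray, curr_gray):
    """Return tracked points split into static (background) and dynamic."""
    # Track sparse corners between the two frames (Lucas-Kanade flow).
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    assert p0 is not None, "no corners found in the previous frame"
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                             p0, None)
    ok = status.ravel() == 1
    p0, p1 = p0[ok], p1[ok]

    # Fit the dominant (ego-motion induced) image motion with RANSAC;
    # inliers are treated as static scene, outliers as dynamic obstacles.
    _, inliers = cv2.findHomography(p0, p1, cv2.RANSAC, 3.0)
    inliers = inliers.ravel().astype(bool)
    return p1[inliers], p1[~inliers]
\end{verbatim}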
\subsubsection{Temporal tracking predictions} In order to deal with object appearance changes, a competitive SSL tracking technique \cite{kala_tld_2012} proposes an online adaptive strategy combining real-time tracking, learning, and object detection modules. However, in the context of autonomous driving, it may often be necessary to simultaneously track, and even anticipate, the trajectories of several surrounding road users. Moreover, being able to consider the interactions between road users requires complex motion pattern analysis. It turns out that some SSL approaches propose to deal with this challenge by focusing the prediction effort on the entire scene in a unified way, rather than on every obstacle independently. The SSL \textit{deep tracking} system \cite{ondruska_deep_2016}\footnote{Such an approach could be categorized as unsupervised. However, we make the choice in this article to consider that exploiting, during training, additional future temporal information that is not available during the prediction step is a type of self-supervision.} learns to predict the future state of a 2D LIDAR occupancy grid. This is achieved by training an RNN on the latent space of a CNN autoencoder (AE) applied to the input occupancy grid, treated as an image. Each cell of the grid is represented by a pixel, which can be color-coded as occluded, void, or as an obstacle surface. Consequently, the model can be trained end-to-end by learning to predict the next occupancy grid states using the past and current grid states. Only the prediction error of non-occluded cells is backpropagated during training. By definition, this system can perform a self-evaluation by computing a per-pixel photometric error between the predicted occupancy grid and the occupancy grid actually observed at the same future instant. This technique has the advantage of being compatible with complex motion patterns, compared to Bayesian and Kalman tracking techniques. In addition, the training process enables the prediction of obstacle trajectories even during occlusions. The major interest of \textit{deep tracking} is that, as the model learns to predict a complete scene, it naturally considers interactions between the dynamic obstacles present in the scene. In \cite{dequaire_deep_2017}, the \textit{deep tracking} model is extended to a real mobile LIDAR sensor by adding a spatial transformer module in order to take into consideration the displacements of the ego-vehicle with respect to its environment during object tracking. In turn, these tracking approaches provide the tools to collect motion pattern information about surrounding dynamic obstacles, such that this information may help to classify obstacles depending on their dynamic properties \cite{fathollahi_autonomous_2016}. \subsection{Low-level sensor data analysis} This section addresses sensor data analysis for low-level information estimation in the context of autonomous driving. Compared to the previous methods, recent attention has mainly focused on SSL depth map estimation from monocular or stereo cameras. \subsubsection{SSL Depth map estimation} The self-supervised depth map estimation approach presented in \cite{garg_unsupervised_2016} predicts a depth map from a monocular camera without relying on annotated depth maps. The pose transformation between the left and right cameras is known. The SSL strategy is as follows.
First, the left camera frame is provided as input to a CNN model trained from scratch to predict the corresponding depth map. Second, an inverse warping is performed by combining the predicted left depth map with the right camera frame in order to output a synthesized frame similar to the input left frame. In this way, an SSL photometric reconstruction error can be computed at the output of the decoder part (a schematic implementation is sketched at the end of this section). Next, this per-pixel error is directly used to train the encoder weights using the Stochastic Gradient Descent (SGD) optimization technique. While requiring neither pre-training nor annotated ground-truth depths, this approach achieves prediction performance comparable with state-of-the-art fully supervised monocular techniques. However, the ground-truth pose transformation, related to the inter-view displacement between both cameras, is required. Following a similar idea, another technique is proposed in \cite{zhong_self_supervised_2017}. It is trained to reconstruct, from a given frame, the second frame taken from a different point of view. It generates a depth map using a stereo camera during the training step, but also during the prediction step. This makes the approach more robust, such that it becomes competitive with standard stereo matching techniques. Moreover, while keeping two cameras and the ground-truth pose transformation at prediction time is a constraint, it enables online learning in return. This may be interesting for dealing with novel ego-vehicle environments unseen during training. To overcome the necessity of the ground-truth pose transformation, Zhou et al. \cite{zhou_unsupervised_2017} propose to predict, from a temporal sequence of frames, the depth map with a learning model, and the successive camera pose transformations with another learning model. Both models are trained together end-to-end to synthesize the novel view of the next frame. However, the pose transformation estimation implies that the predicted depth map is only defined up to a scale factor. A more modular technique \cite{godard_digging_2018} exploits either temporal monocular sequences of frames as in \cite{zhou_unsupervised_2017}, the paired frames of a stereo camera as in \cite{zhong_self_supervised_2017}, or both temporal and stereo information jointly. This framework also deals with the false depth estimation of moving obstacles by ignoring, during training, the pixels not varying between two consecutive temporal frames. It also deals with pixels occluded when the captured point of view changes by using a minimum re-projection loss. To summarize, low-level analysis techniques for depth map estimation have demonstrated that SSL strategies, without using ground-truth labels, can provide solutions competitive with fully supervised state-of-the-art techniques. Overall, the SSL techniques presented in this section support the following conclusion: by exploiting the complementarity between analytical and learning methods, it is possible to address several autonomous driving perception tasks, without necessarily requiring an annotated dataset. The presented methodologies are summarized in Fig. \ref{fig:SSL_generic_methodology} along with Table \ref{tab:Methods_connections}.
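The photometric self-supervision signal shared by these depth approaches can be sketched as follows. This is a minimal PyTorch illustration of a per-pixel reconstruction error combining SSIM and L1 terms, with the per-pixel minimum over several reprojections used by \cite{godard_digging_2018} to handle occlusions; the $3{\times}3$ pooling window and the weighting $\alpha=0.85$ are common but illustrative choices, not necessarily those of the cited works.
\begin{verbatim}
import torch
import torch.nn.functional as F

def ssim(x, y):
    # Simplified single-scale SSIM with 3x3 average-pooling windows.
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    sx = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sy = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sxy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    s = ((2 * mu_x * mu_y + c1) * (2 * sxy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sx + sy + c2))
    return ((1 - s) / 2).clamp(0, 1)

def photometric_error(pred, target, alpha=0.85):
    # Per-pixel SSIM/L1 mixture, averaged over colour channels.
    l1 = (pred - target).abs().mean(1, keepdim=True)
    return alpha * ssim(pred, target).mean(1, keepdim=True) \
        + (1 - alpha) * l1

def min_reprojection_loss(reprojections, target):
    # Per-pixel minimum over several synthesized views handles occlusions.
    errs = torch.cat([photometric_error(r, target) for r in reprojections], 1)
    return errs.min(1).values.mean()
\end{verbatim}
The reprojections themselves are produced by inverse warping with the predicted depth and pose, as described above.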
\begin{figure*}[ht] \begin{center} \begin{minipage}[c]{0.25\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{illustrations/sup_only}} \end{minipage} \begin{minipage}[c]{0.06\linewidth} \centering \centerline{ }\medskip \end{minipage} \begin{minipage}[c]{0.25\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{illustrations/ana_only}} \end{minipage} \begin{minipage}[c]{0.06\linewidth} \centering \centerline{ }\medskip \end{minipage} \begin{minipage}[c]{0.34\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{illustrations/ssl_only}} \end{minipage} \begin{minipage}[c]{0.32\linewidth} \centering \centerline{(a) Supervised learning}\medskip \end{minipage} \begin{minipage}[c]{0.32\linewidth} \centering \centerline{(b) Analytical}\medskip \end{minipage} \begin{minipage}[c]{0.32\linewidth} \centering \centerline{(c) SSL}\medskip \end{minipage} \caption{Function diagrams showing the common connections depending on the strategy. Functional blocks represent a single monocular camera frame $S_1$, additional sensor data (e.g. temporal frame sequence, stereo-camera, or lidar data) $S_n$, a Learning model $L$, an Analytical method $A$, and an Evaluation method $E$.} \label{fig:SSL_generic_methodology} \end{center} \end{figure*} \begin{table*}[h] \centering \caption{Functional block connections of the presented SSL methodologies depending on the application. Experimental datasets exploited and relative prediction performances are reported whenever available. * refers to supervised methods.} \resizebox{17.75cm}{!}{ \begin{tabular}{c|ccccccccc|c|c} \toprule SSL Methodologies &$S_1 \rightarrow L$&$S_n \rightarrow L$&$S_1 \rightarrow A$&$S_n \rightarrow A$&$S_n \rightarrow E$&$A \rightarrow L$&$L \rightarrow E$&$A \rightarrow E$&$E \rightarrow L$& datasets & performances \\ \midrule \midrule (Off)road segmentation & & & & & & & & & & & \\ \cite{lieb_adaptive_2005}, \cite{dahlkamp_self_supervised_2006}, \cite{hadsell_learning_2009}, \cite{zhou_road_2010}, \cite{roncancio_traversability_2014} & $\surd$ & & $\surd$ & $\surd$ & & $\surd$ &$\surd$ & $\surd$ & $\surd$ & - & - \\ \midrule Dynamic obstacles & & & & & & & & & & KITTI \cite{geiger_are_2012} & \\ analysis \cite{guizilini_online_2013}, \cite{bewley_online_2014} & $\surd$ & & $\surd$ & $\surd$ & & $\surd$ & & & & Sydney \cite{guizilini_online_2013} & - \\ \midrule Temporal tracking & & & & & & & & & &Oxford Robotcar& \\ predictions \cite{ondruska_deep_2016}, \cite{dequaire_deep_2017} & $\surd$ & $\surd$ & & & $\surd$ & & $\surd$ & & $\surd$ & dataset \cite{RobotCarDatasetIJRR} & - \\ \midrule Depth map estimation & & & & & & & & & & KITTI & \cite{fu2018deep}*$>$\cite{godard_digging_2018}$>$\cite{garg_unsupervised_2016}$>$ \\ \cite{garg_unsupervised_2016}, \cite{zhou_unsupervised_2017}, \cite{zhong_self_supervised_2017}$^1$, \cite{godard_digging_2018}$^1$ & $\surd$ & $\surd^1$ & & & $\surd$ & & $\surd$ & & $\surd$ & Make3D \cite{saxena2008make3d} &\cite{zhou_unsupervised_2017}$>$\cite{eigen2014depth}* \\ \bottomrule \end{tabular}} \label{tab:Methods_connections} \end{table*} \section{Limitations and future challenges} \label{section:limit} In the context of autonomous driving, some limitations remain in the presented SSL perception systems; these open up future research perspectives. \textit{Catastrophic forgetting:} During the online learning procedure, the trainable weights of the model may undergo unnecessary repetitive updates to detect a given pattern throughout the environment exploration.
In fact, when a learning model is continuously specialized to deal with the latest data, the likelihood increases that the model simultaneously forgets potentially relevant, formerly learned patterns. It turns out that it is possible to deal with this \textit{catastrophic forgetting} issue when using neural networks \cite{kirkpatrick_overcoming_2017}. For future research directions, it may be interesting to combine such incremental learning techniques with the presented SSL frameworks. Concerning the scene depth map estimation solely based on temporal analysis: \begin{itemize} \item the presence of dynamic obstacles in the scene during the learning stage can result in poor estimates of the observed scene. As discussed in \cite{guizilini_online_2013}, further research on SSL for delineating potentially dynamic obstacles in the sensor data may help to deal with this issue. \item the current state-of-the-art techniques cannot estimate the real depth map without requiring a supervised scaling factor. The latter is generally obtained by estimating the real metric values of the pose transformation between two consecutive camera viewpoints. As proposed in the supervised detector \textit{Deep MANTA} \cite{Chabot_2017_CVPR}, it may be interesting to recover this scale factor by using some template matching techniques on the observable objects of the scene. \end{itemize} Concerning the online self-evaluation, some of the presented systems require an analytically obtained baseline reference \cite{hadsell_learning_2009}. However, since the analytical processes used as ground-truth labeling techniques are likely to generate noisy labels, it may be interesting to investigate, in future research, how to evaluate this prior noise from the learning model viewpoint \cite{jain_estimating_2016}, and how to deal with it \cite{chiaroni_hallucinating_2019}. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{\label{sec:intro}Introduction} Data from experiments at the Large Hadron Collider (LHC) probe some of the most fundamental questions in modern science, such as the nature of dark matter, the potential unification of forces, and the matter/anti-matter imbalance in the observable universe. In recent years, the integration of machine learning into data analyses has catalyzed scientific advances in particle physics, as the machine-learned models are able to handle data of greater complexity and higher dimensionality than previously feasible~\cite{Baldi_2016, Guest_2016, Guest_2018, Larkoski_2020}. However, the improved power of such models often comes at the cost of reduced interpretability. As most models are trained on simulated samples using supervised learning techniques, physicists are rightly concerned that models may base their classification or regression decisions on portions of the feature space which are poorly described by simulations, or where the modeling is theoretically uncertain. For this reason, it is important that physicists be able to understand the nature of the information being used, providing confidence in the network decisions, and allowing the assessment of systematic uncertainties. One important application of machine learning is the classification of quark versus gluon jets~\cite{PhysRevD.44.2025, Gallicchio_2011, Gallicchio_2013, Aad_2014, Gras_2017, Kasieczka_2019, Andrews_2020}. While significant efforts have been made to improve classification performance, less attention~\cite{Choi:2018dag} has been paid to understanding the Quantum Chromodynamics (QCD) nature of the information being used. Two main classes of QCD effects can be considered: perturbative and non-perturbative. Although there is no universal and gauge-invariant way to distinguish between the two, physicists have relied on principles such as infrared and collinear (IRC) safety to identify perturbative effects, as IRC-safety ensures that observables are invariant to soft emissions and arbitrary collinear parton splittings. Studies in simulated quark/gluon dijet samples indicate that the task of quark/gluon tagging should be dependent only on IRC-safe observables, as the likelihood for quark vs. gluon classification is IRC-safe~\cite{Larkoski:2019nwj}. Nevertheless, networks trained on high-level information of quark/gluon jet samples simulated with hadronization effects often outperform those trained on high-level information of samples simulated before hadronization (``at parton level''), indicating that quark/gluon tagging is highly sensitive to both perturbative and non-perturbative effects~\cite{Larkoski:2019nwj, Gras_2017}. Given that the current theoretical understanding of non-perturbative effects like hadronization is limited, and that there is currently no unambiguous definition of quark and gluon jets at the hadron level, the modeling of non-perturbative effects can vary between parton-shower event generators, hindering our ability to calculate the systematic uncertainties of the low-level models. Here we focus directly on the question of the nature of the QCD information used by networks that learn to classify quark and gluon jets from low-level calorimeter information.
Our strategy is to identify the source and importance of the perturbative and non-perturbative effects by comparing the performance of networks whose internal structures enforce IRC-safety to those which are unconstrained, on benchmark problems with and without hadronization effects, and as a function of jet energy. A drop in performance for the networks that enforce IRC-safety relative to the unconstrained networks is attributed to IRC-unsafe information. We then attempt to more specifically identify the nature of the IRC-unsafe information by employing networks whose internal structure enforces prescribed IRC-unsafe dependencies on transverse momentum and angular distance metrics. This strategy allows us to reveal the nature of the IRC-unsafe information by narrowing down its energy and angular dependence, enabling us to map it to well-known IRC-unsafe jet variables, such as tower multiplicity, IRC-unsafe generalized angularities~\cite{Larkoski_2014}, and energy-flow polynomials (EFPs)~\cite{Komiske_2018}. Capturing the information used by low-level networks into high-level observables is vital for future applications of machine learning classifiers~\cite{Faucett:2020vbu}, as it enables physicists to both understand the nature of the information and improve confidence in the network's decisions by allowing for intentional inclusion or exclusion of such information. The rest of this paper is organized as follows. Section \ref{sec:StrategyAndMethods} lays out the strategy used to reveal the reliance of the low-level networks on IRC-safe and IRC-unsafe information, and for capturing this information into high-level observables. Section \ref{sec:Data} describes the data generation settings. Results and discussion are presented in Section \ref{sec:Results}, and conclusions in Section \ref{sec:Conclusions}. \section{\label{sec:StrategyAndMethods} Strategy and Methods} Two key elements for understanding the nature of the information used by machine learning classifiers for jet tagging are the inherent constraints of the classifier and the jet representation. In recent studies, many jet representations have been considered; popular choices include: jet images~\cite{de_Oliveira_2016, Baldi_2016, Komiske_2017}, unordered sets of constituents~\cite{Komiske_2019, Qu_2020}, and ordered sets of constituents~\cite{Guest_2016, Louppe_2019, Cheng_2018, egan2017long}. In order to compare each of the learning strategies on equal footing and to maintain the maximal amount of information, we choose to represent the jets as unordered sets of calorimeter towers. The towers in the sets are characterized by their three-momenta -- ($p_\textrm{T}, \eta, \phi$) -- and are centered with respect to the $E-$scheme jet axis. Only towers within a radial distance of $R =\sqrt{\Delta\phi^2+\Delta\eta^2} < 0.4$ from the jet axis are kept. The first step in our strategy focuses on assessing the importance of IRC-safe information calculated from the unordered sets of calorimeter towers. First, we estimate an effective upper limit on the performance of quark/gluon classifiers by employing Particle-Flow Networks (PFNs)~\cite{Komiske_2019}. PFNs have consistently achieved top performances for quark/gluon jet classification ~\cite{Komiske_2019, Kasieczka_2019, Qu_2020, bogatskiy2020lorentz}, so their performance is taken to be the benchmark to which all other networks in this paper are compared. 
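As a concrete illustration of the permutation-invariant summation structure summarized in Eq.~\ref{eq:PFN} below, the following PyTorch sketch implements a minimal PFN-like classifier. The layer widths are illustrative assumptions rather than the configuration used in this paper; the \textsc{EnergyFlow} package of Ref.~\cite{Komiske_2019} provides reference implementations.
\begin{verbatim}
import torch
import torch.nn as nn

class MiniPFN(nn.Module):
    """Minimal permutation-invariant model: F( sum_i Phi(p_i) ).
    Layer widths are illustrative, not those used in this paper."""
    def __init__(self, latent=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, latent), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                               nn.Linear(64, 1))

    def forward(self, towers, mask):
        # towers: (batch, n_towers, 3) = (pT, eta, phi);
        # mask: (batch, n_towers) flags real (non-padded) towers.
        per_tower = self.phi(towers) * mask.unsqueeze(-1)
        return torch.sigmoid(self.f(per_tower.sum(dim=1)))
\end{verbatim}
Restricting the inputs of \texttt{phi} to the angles alone and weighting the sum by a power of $p_{\mathrm{T}}$ yields the (IRC-safe or $\kappa$-generalized) EFN structures discussed next.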
Next, we consider Energy-Flow Networks (EFNs)~\cite{Komiske_2019} which, like PFNs, treat jets as unordered sets of constituents, but use an architecture which constrains internal functions to forms which enforce IRC-safety. We assess the importance of IRC-safe information by comparing the difference in performance between the PFNs and the EFNs. A reduced performance by the EFNs relative to the PFNs would suggest that the PFNs are relying on IRC-unsafe information. The second step focuses on exploring the nature of the IRC-unsafe information. We begin by introducing EFN$[\kappa]$s, a generalization of EFNs whose architecture constrains the models to use internal functions with a given energy weighting exponent, $\kappa$. Variation of the network performance with $\kappa$ reveals the nature of the functional forms which capture the IRC-unsafe information. We then attempt to map this information onto families of IRC-unsafe observables, which can be concatenated to IRC-safe observables to match the performance of the PFNs. Two families of IRC-safe observables are used: N-subjettiness variables in combination with jet mass, and IRC-safe EFPs. Similarly, two families of IRC-unsafe observables are used: IRC-unsafe generalized angularities and IRC-unsafe EFPs, both in combination with tower multiplicity. The networks employed in our analysis of the information used during quark/gluon jet classification are detailed below. \subsection{Particle-Flow Networks} The power of PFNs relies on their ability to learn virtually any symmetric function of the towers. Their mathematical structure is naturally invariant under permutation of the input ordering, as it is built on a summation over the towers. PFNs can be mathematically summarized as \begin{equation} \textrm{PFN} : F \left( \sum_{i \in \text{jet}} \Phi(p_{\textrm{T}i}, \eta_i, \phi_i) \right), \label{eq:PFN} \end{equation} where $\Phi$ represents the per-tower latent space and $F$ the event-level latent space. The transverse momentum, pseudorapidity, and azimuthal angle of tower $i$ are respectively given by $p_{\textrm{T}i}$, $\eta_i$, and $\phi_i$, and this notation is used throughout this paper when indexing over the towers in a jet. We place no constraints on the nature of the latent spaces, giving the network great flexibility. For this reason, they are a useful probe of the effective upper limit of the performance in the classification tasks when minimal constraints are applied to the nature of the learning method\footnote{We verified in several cases that similar performance is achieved by convolutional neural networks operating on jet images, but due to the significantly increased computational cost and number of parameters, we set them aside as probes of the upper bound in favor of PFNs.}. \subsection{Energy-Flow Networks} Similar in structure to PFNs, EFNs are constructed such that the event-level latent space learns functions which have a linear energy factor. Mathematically, EFNs can be summarized as \begin{equation} \textrm{EFN} : F \left( \sum_{i \in \text{jet}} p_{\textrm{T}i} \Phi(\eta_i, \phi_i) \right), \label{eq:EFN} \end{equation} where unlike in PFNs, $\Phi \left( \eta_i, \phi_i \right)$ is a function only of the angles, weighted by a linear term in transverse momentum, $p_{\textrm{T}i}$, as required for IRC-safety. \subsection{IRC-unsafe Energy-Flow Networks} In anticipation of the importance of IRC-unsafe information, we introduce a generalization of EFNs which includes non-linear energy weighting exponents. 
Mathematically, Eq. \ref{eq:EFN} is modified to be non-linear in $p_{\textrm{T}i}$ as \begin{equation} \textrm{EFN}[\kappa] : F \left( \sum_{i \in \text{jet}} p_{\textrm{T}i}^{\kappa} \Phi(\eta_i, \phi_i) \right), \label{eq:EFNkappa} \end{equation} where EFN$[\kappa]$ refers to the modified form of an EFN with an energy factor of degree $\kappa$, which is referred to as the \textit{energy weighting exponent} in this paper\footnote{Two equivalent approaches can be used to implement EFN$[\kappa]$: (1) elevating the towers' transverse momenta from $p_{\textrm{T}}$ to $p_{\textrm{T}}^{\kappa}$ and then passing them as input to the EFN, and (2) directly modifying the architecture of the EFN to use $p_{\textrm{T}}^{\kappa}$ as the weighting parameter of $\Phi(\eta_i, \phi_i)$ in Eq. \ref{eq:EFN}.}. This modification allows us to isolate the critical values of $\kappa \neq 1$ that capture most of the IRC-unsafe information needed for the quark/gluon classification task. Note that EFN$[\kappa]$ can also be used to explore IR-safe energy weighting factors by setting $\kappa > 1$. \subsection{\label{sec:JetsAsHlFeatures} High-level Observables} The literature on strategies for quark/gluon classification using high-level observables is quite mature, providing many families of high-level observables which reduce the large dimensionality of the input space into one-dimensional observables that retain useful information for classification. These observables have the significant advantages that they are physically interpretable, compact, and allow for reasonable assessment of systematic uncertainties due to, for example, mismodeling in simulation. The disadvantage is that they are limited to those ideas conceived of by human physicists. In our study, the observables are calculated directly from the calorimeter towers, and are paired with fully connected Dense Neural Networks (DNNs) or with Linear Discriminant Analysis (LDA) classification models in the cases where the observables are linearly separable, such as EFPs. We consider the following IRC-safe observables that are traditionally used in quark/gluon studies: \begin{itemize} \item N-subjettiness variables~\cite{Thaler_2011, Thaler_2012}, which provide a measure of the degree to which the radiation within a jet is aligned along $N$ candidate subjet axes, and are defined as \begin{equation} \tau_N[\beta] = \sum\limits_{i \in \textrm{jet}} p_{\textrm{T}i} \textrm{min} \{ R_{i, 1}^{\beta}, R_{i, 2}^{\beta}, \ldots , R_{i, N}^{\beta} \}, \label{eq:Nsub} \end{equation} where $R_{i, J}$ is the angular distance between subjet axis $J$ $(J \leq N)$ and tower $i$. The parameter $\beta$ $(\beta>0)$ is referred to as the \textit{angular weighting exponent} in this paper. Following~\cite{Datta_2017}, we compute the first 18 N-subjettiness observables with respect to the $k_\textrm{T}$ axis: \begin{flalign*} \{ & \tau_1{[\beta=\frac{1}{2}]}, \tau_1{[\beta=1]}, \tau_1{[\beta=2]}, \ldots, \\ & \tau_6{[\beta=\frac{1}{2}]}, \tau_6{[\beta=1]}, \tau_6{[\beta=2]} \}. \end{flalign*} \end{itemize} \begin{itemize} \item Jet mass ($m_{\textrm{jet}}$), which has been found to be a powerful quark/gluon jet discriminant~\cite{Gallicchio_2011}. \end{itemize} \begin{itemize} \item IRC-safe Energy-Flow Polynomials~\cite{Komiske_2018}, which are sets of non-isomorphic multigraphs that linearly span the space of IRC-safe observables.
For a multigraph $G$ with $V$ vertices and edges $(k, l) \in G$, the corresponding EFP observable is defined as \begin{equation} \textrm{EFP}[\beta] = \sum\limits_{i_1 \in \textrm{jet}} \cdots \sum\limits_{i_V \in \textrm{jet}} p_{\textrm{T}i_1} \cdots p_{\textrm{T}i_V} \prod\limits_{(k,l) \in G} R_{i_k, i_l}^{\beta} \label{eq:EFPsafe} \end{equation} where $R_{i j}$ is the angular distance between particles $i$ and $j$. Following~\cite{Komiske_2018}, the optimal performance for quark/gluon jet classification using IRC-safe EFPs is achieved with $\beta=\frac{1}{2}$; we employ the same set for our studies, with a maximum number of edges of $d \leq 5$, where $d$ corresponds to the degree of the angular monomial. \end{itemize} Distributions of a selection of N-subjettiness and IRC-safe EFP observables are shown in Fig.~\ref{fig:DistributionsSafe}. \begin{figure*} \centering \includegraphics{IRCsafe_dists_500GeV.pdf} \caption{ Distributions of jet mass and select N-subjettiness observables (top), and IRC-safe EFP observables (bottom), for quark- and gluon-initiated jets with $p_{\textrm{T}} \in [500, 550]$ GeV. The shape of the EFP multigraphs is shown for illustrative purposes.} \label{fig:DistributionsSafe} \end{figure*} To capture the IRC-unsafe information, we consider the following IRC-unsafe observables: \begin{itemize} \item IRC-unsafe Generalized Angularities~\cite{Larkoski_2014}, which have a simple form that allows for easy interpretation, are defined as \begin{equation} \lambda[\kappa, \beta] = \sum\limits_{i \in \text{jet}} p_{\textrm{T}i}^{\kappa} \left( \frac{R_{i,\textrm{jet}}}{R} \right)^{\beta}, \label{eq:GenAng} \end{equation} where $R_{i, \text{jet}}$ is the radial distance from tower $i$ to the $k_T$ jet axis, and $R$ is the jet radius. \end{itemize} \begin{itemize} \item IRC-unsafe Energy-Flow Polynomials~\cite{Komiske_2018}, which have a similar form to the IRC-safe EFPs in Eq. \ref{eq:EFPsafe}, but with a non-linear energy weighting exponent ($\kappa \neq 1$). Following the notation in Eq. \ref{eq:EFPsafe}, the IRC-unsafe EFPs are defined as \begin{equation} \textrm{EFP}[\kappa, \beta] = \sum\limits_{i_1 \in \textrm{jet}} \cdots \sum\limits_{i_V \in \textrm{jet}} p_{\textrm{T}i_1}^{\kappa} \cdots p_{\textrm{T}i_V}^{\kappa} \prod\limits_{(k,l) \in G} R_{i_k, i_l}^{\beta}. \label{eq:EFPunsafe} \end{equation} \end{itemize} \begin{itemize} \item Tower multiplicity $(n_{\mathrm{t}})$, which counts the number of towers in a jet and has also been found to be a powerful quark/gluon jet discriminant~\cite{Gallicchio_2011}. \end{itemize} Distributions of a selection of IRC-unsafe generalized angularity and IRC-unsafe EFP observables are shown in Fig.~\ref{fig:DistributionsUnsafe}. \begin{figure*} \centering \includegraphics{IRCunsafe_dists_500GeV.pdf} \caption{ Distributions of tower multiplicity and select IRC-unsafe Generalized Angularity observables (top), and IRC-unsafe EFP observables (bottom), for quark- and gluon-initiated jets with $p_{\textrm{T}} \in [500, 550]$ GeV. The shape of the EFP multigraphs is shown for illustrative purposes.} \label{fig:DistributionsUnsafe} \end{figure*} A summary of all the models trained with high-level observables is shown in Table~\ref{tab:ModelSummaries}.
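To make the structure of these observables concrete, the following numpy sketch evaluates the tower multiplicity, a generalized angularity (Eq.~\ref{eq:GenAng}), and the simplest one-edge EFP from arrays of tower kinematics. It is a naive implementation for illustration only: the jet axis is approximated by the origin of the centered coordinates, and the nested EFP sums are written out directly rather than using the efficient algorithms of \cite{Komiske_2018}.
\begin{verbatim}
import numpy as np

def simple_observables(pt, eta, phi, kappa=0.5, beta=0.5, R=0.4):
    """Toy evaluation of jet observables from tower arrays.
    pt, eta, phi: 1D arrays of tower kinematics, centered on the jet axis."""
    n_t = len(pt)                                # tower multiplicity
    r = np.hypot(eta, phi)                       # distance to the jet axis
    lam = np.sum(pt**kappa * (r / R)**beta)      # generalized angularity
    # Simplest (two-vertex, one-edge) EFP: sum_ij pTi^k pTj^k Rij^beta.
    rij = np.hypot(eta[:, None] - eta[None, :], phi[:, None] - phi[None, :])
    efp = np.sum(np.outer(pt**kappa, pt**kappa) * rij**beta)
    return n_t, lam, efp
\end{verbatim}
Setting $\kappa=1$ in the EFP term recovers the IRC-safe case of Eq.~\ref{eq:EFPsafe} for that multigraph.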
{\renewcommand{\arraystretch}{1.2} \begin{table*} \caption{\label{tab:ModelSummaries} Summary of the models trained on IRC-safe and IRC-unsafe jet observables.} \begin{tabular}{llr} \hline \hline Model Name & Description & \makecell[l]{Number of \\ Observables}\\ \hline DNN[safe] & DNN trained on N-subjettiness variables and jet mass. & 19 \\ DNN[safe, $n_{\mathrm{t}}$, $\lambda[\frac{1}{2}, \beta]$] & \makecell[l]{DNN trained on N-subjettiness variables, jet mass, tower multiplicity, and \\ a generalized angularity variable with $\kappa=\frac{1}{2}$ and $\beta \in \{\frac{1}{2}, 1, 2\}$.} & 21 \\ LDA[safe] & LDA trained on IRC-safe EFP variables ($d \leq 5$) with $\beta=\frac{1}{2}$. & 102 \\ LDA[safe, $n_{\mathrm{t}}$, EFP[$\frac{1}{2}$, $\beta$]] & \makecell[l]{LDA trained on IRC-safe EFP variables ($d \leq 5$) with $\beta=\frac{1}{2}$, tower multiplicity, \\ and IRC-unsafe EFP variables ($d \leq 5$) with $\kappa=\frac{1}{2}$ and $\beta \in \{\frac{1}{2}, 1, 2\}$.} & 205 \\ \hline \hline \end{tabular} \end{table*} } \section{Datasets and Training} \label{sec:Data} Samples of light quark (\textit{u, d, s}) jets and gluon jets are generated in dijet events from $pp$ collisions at $\sqrt{s}=14$ TeV. Collisions and immediate decays are generated with \textsc{Madgraph5} v2.6.5~\cite{Alwall_2014}, while showering and hadronization are simulated with \textsc{Pythia} v8.235~\cite{Sj_strand_2008}. The light quark-initiated jets come from the parton level hard-processes $pp \rightarrow qq$ and $q\bar{q}$, while the gluon-initiated jets come from $pp \rightarrow gg$. Mixed quark and gluon states are not generated to minimize ambiguity, as precise theoretical definitions of quark/gluon jet labels are generally elusive~\cite{Gras_2017}, though operational jet flavor definitions~\cite{Metodiev_2018, Komiske_2018_opdef} may be used to classify quark and gluon jets directly in LHC data. To compare the effects of particle hadronization on quark/gluon jet tagging, the events are generated with and without hadronization effects, respectively corresponding to ``hadron'' and ``parton'' level events, by toggling the {\sc HadronLevel:all} switch in \textsc{Pythia}. Jets are then passed through the \textsc{Delphes} v3.4.2~\cite{deFavereau:2013fsa} detector simulator, using the standard ATLAS card, to simulate interactions with the detector\footnote{Note that detector simulations such as Delphes~\cite{deFavereau:2013fsa} may introduce low-$p_{\textrm{T}}$ cutoffs, which effectively act as controlled cutoffs for IRC-unsafe observables.}. No additional $pp$ interactions (pileup) are considered, as many studies have shown effective mitigation techniques to attenuate the effects of pileup~\cite{Aad_2016, Komiske_pileup_2017, martinez2019pileup}. Jets are reconstructed from calorimeter towers with the anti-$k_\textrm{T}$ clustering algorithm~\cite{Cacciari:2008gp}, as implemented in \textsc{FastJet} v3.3.2~\cite{Cacciari:2011ma}, with a distance parameter of $R=0.4$ and disregarding neutrinos. In each event, only the hardest jet with absolute pseudorapidity $|\eta| < 2.0$ is kept. To study energy dependence, three ranges of jet $p_\textrm{T}$ are considered: $200-220$ GeV, $500-550$ GeV, and $1000-1100$ GeV, with the threshold applied to reconstructed jets. For efficiency of generation, similar thresholds are applied at parton-level, but with a window 20\% broader to avoid distortions.
For each jet $p_\textrm{T}$ range, 650k quark jets and 650k gluon jets are generated; these are split into datasets with 1M events for training, 200k for testing, and 100k for validation. The sets of unordered towers used as inputs to the low-level networks -- PFN, EFN, and EFN${[\kappa]}$ -- are preprocessed by normalizing the sum of the $p_\textrm{T}$ of the towers in the sets to unity. The observables used as inputs to the high-level classifiers -- DNN and LDA -- are preprocessed by subtracting the mean and dividing by the standard deviation of the distributions in the training set. See the Appendix for details on network architectures and training. The performance of the various classification strategies is compared using the area under the receiver operating characteristic curve (AUC) of each network. The statistical uncertainty on the strategies is measured using bootstrapping (sketched schematically below) to be $\pm$ 0.002 or less, unless otherwise specified. \section{\label{sec:Results}Results and Discussion} The PFNs, which provide a loose upper limit, perform well, with classification power increasing with jet $p_\textrm{T}$ as shown in Tab.~\ref{tab:AUCs_hadON}. The EFNs, which are limited to IRC-safe information, show a small but statistically significant drop in relative performance. The difference in the EFNs' and PFNs' internal constraints allows us to conclude that the difference in performance is due to the use of IRC-unsafe information by the PFNs, which grows modestly in importance with jet $p_\textrm{T}$. To understand the physical source of the IRC-unsafe information, we train EFN and PFN networks on events at parton and hadron level. In the parton level events, the PFN-EFN gap vanishes (see Tab.~\ref{tab:AUCs_hadOFF}), confirming the results of Ref.~\cite{Bieringer:2020tnw} and demonstrating the central conclusions of Ref.~\cite{Larkoski:2019nwj}; without hadronization effects only IRC-safe information is needed for quark-gluon tagging. In addition, this comparison reveals that the IRC-unsafe information is introduced in the non-perturbative hadronization process. The contrast in quark and gluon jets simulated with and without hadronization effects can be seen in observables sensitive to the number of non-perturbative emissions, such as tower multiplicity, as illustrated in Fig. \ref{fig:nTowers_had_noHad}. The number of towers in quark and gluon jets increases when including hadronization effects, indicating that despite low-$p_{\textrm{T}}$ detector cutoffs, calorimeter towers may be sensitive to non-perturbative emissions, resulting in statistically significant contributions to quark/gluon jet classifiers.
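The bootstrapped AUC uncertainties quoted throughout can be estimated schematically as follows; this is a minimal sketch using scikit-learn, and the number of resamplings is an illustrative choice rather than the value used in this paper.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc(y_true, y_score, n_boot=200, seed=0):
    """Mean and standard deviation of the AUC over bootstrap
    resamples (with replacement) of the test set."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    return np.mean(aucs), np.std(aucs)
\end{verbatim}
With test sets of the size used here, each resample contains both classes, so the AUC is always well defined.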
\begin{table} \caption{\label{tab:AUCs_hadON} Comparison of the quark-gluon classification performance of EFN and PFN networks, via AUC, on jets with hadronization effects included.} \begin{ruledtabular} \begin{tabular}{rccc} Jet $p_{\textrm{T}}$ Range & EFN & PFN &$\Delta$(PFN-EFN) \\ \hline 200-220 GeV & 0.814 $\pm$ 0.001 & 0.828 $\pm$ 0.001 & 0.014 $\pm$ 0.002\\ 500-550 GeV & 0.819 $\pm$ 0.001 & 0.838 $\pm$ 0.001 & 0.019 $\pm$ 0.002\\ 1000-1100 GeV & 0.827 $\pm$ 0.001 & 0.848 $\pm$ 0.001 & 0.021 $\pm$ 0.002\\ \end{tabular} \end{ruledtabular} \end{table} \begin{table} \caption{\label{tab:AUCs_hadOFF}Comparison of the quark-gluon classification performance of EFN and PFN networks, via AUC, on jets with no hadronization effects included.} \begin{ruledtabular} \begin{tabular}{rccc} Jet $p_{\textrm{T}}$ Range & EFN & PFN &$\Delta$(PFN-EFN) \\ \hline 200-220 GeV & 0.739 $\pm$ 0.001 & 0.737 $\pm$ 0.001 & -0.002 $\pm$ 0.002\\ 500-550 GeV & 0.753 $\pm$ 0.001 & 0.750 $\pm$ 0.001 & -0.003 $\pm$ 0.002\\ 1000-1100 GeV & 0.759 $\pm$ 0.001 & 0.758 $\pm$ 0.001 & -0.001 $\pm$ 0.002\\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \centering \includegraphics{nTower_dists_500GeV.pdf} \caption{Distributions of tower multiplicity ($n_{\mathrm{t}}$) for quark- and gluon-initiated jets with $p_{\textrm{T}} \in [500, 550]$ GeV, simulated with hadronization effects (solid line) and without hadronization effects (dashed line).} \label{fig:nTowers_had_noHad} \end{figure} \begin{table*} \caption{ Comparison of the quark-gluon classification performance of EFN${[\kappa]}$, classified by $p_{\textrm{T}}$ range and $\kappa \in \{-1, -\frac{1}{2}, -\frac{1}{4}, \frac{1}{4}, \frac{1}{2}, \frac{3}{2}\}$. See Fig.~\ref{fig:KappaSearch} for a visual representation and comparison to EFP and PFN performance.} \label{tab:AUCs_KappaSearch} \begin{ruledtabular} \begin{tabular}{ccccccc} Jet $p_{\textrm{T}}$ range & \vspace{0.05cm} \makecell[c]{EFN${[\kappa=-1]}$} & \makecell[c]{EFN${[\kappa=-\frac{1}{2}]}$} & \makecell[c]{EFN${[\kappa=-\frac{1}{4}]}$} & \makecell[c]{EFN${[\kappa=\frac{1}{4}]}$} & \makecell[c]{EFN${[\kappa=\frac{1}{2}]}$} & \makecell[c]{EFN${[\kappa=\frac{3}{2}]}$} \\ \hline 200-220 GeV & 0.785 $\pm$ 0.001 & 0.788 $\pm$ 0.001 & 0.794 $\pm$ 0.001 & 0.817 $\pm$ 0.001 & 0.821 $\pm$ 0.001 & 0.815 $\pm$ 0.001 \\ 500-550 GeV & 0.796 $\pm$ 0.001 & 0.802 $\pm$ 0.001 & 0.811 $\pm$ 0.001 & 0.830 $\pm$ 0.001 & 0.831 $\pm$ 0.001 & 0.811 $\pm$ 0.001 \\ 1000-1100 GeV & 0.801 $\pm$ 0.001 & 0.809 $\pm$ 0.001 & 0.822 $\pm$ 0.002 & 0.842 $\pm$ 0.001 & 0.841 $\pm$ 0.001 & 0.812 $\pm$ 0.001 \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{figure*} \includegraphics{kappa_search.pdf} \caption{ Comparison of the quark-gluon classification performance, measured by AUC, of the EFN${[\kappa]}$s, for several choices of the energy weighting exponent $\kappa \in \{-1, -\frac{1}{2}, -\frac{1}{4}, \frac{1}{4}, \frac{1}{2}, \frac{3}{2}\}$, which reveals the exponent necessary to exceed the performance of the IRC-safe EFN (dashed green) and approach the performance of the IRC-unsafe PFN (solid red). Solid blue lines are a quadratic interpolation of the measurements of the EFN${[\kappa]}$ performance at each value of $\kappa$, also given in Table~\ref{tab:AUCs_KappaSearch}. Three panels show the performance in the distinct jet $p_{\textrm{T}}$ ranges. } \label{fig:KappaSearch} \end{figure*} PFNs are very flexible networks, allowing a vast space of possible functions. 
To understand how the PFNs capture the IRC-unsafe information, we seek to narrow the scope of possible functional forms. First, we attempt to narrow down the energy weighting exponents of the necessary IRC-unsafe information by comparing the performance of EFN${[\kappa]}$s for the range of values $\kappa \in \{-1, -\frac{1}{2}, -\frac{1}{4}, \frac{1}{4}, \frac{1}{2}, \frac{3}{2}\}$. The selected range covers both softer and harder radiation, as large values of ${\kappa}$ accentuate harder hadrons while small values of ${\kappa}$ accentuate softer hadrons. As shown in Tab.~\ref{tab:AUCs_KappaSearch} and Fig.~\ref{fig:KappaSearch}, EFN${[\kappa]}$ performs well for energy weighting exponents close to zero, with the best performing values in the range $\frac{1}{4} \leq \kappa \leq \frac{1}{2}$. This indicates that the IRC-unsafe information in the PFN-EFN gap is mainly due to soft radiation, and can potentially be captured by observables with small energy weighting exponents. In addition, we note how soft radiation becomes more relevant with higher jet $p_{\textrm{T}}$, as the EFN${[\kappa=\frac{1}{4}]}$ and the EFN${[\kappa=\frac{1}{2}]}$ increasingly outperform the EFN as energy increases. For simplicity, we take $\kappa=\frac{1}{2}$ to be the critical energy weighting exponent as it consistently outperforms or effectively matches the other $\kappa$ values. Having isolated the critical energy weighting exponent which captures the IRC-unsafe information, the next step is to identify the critical angular weighting exponent, $\beta$. However, unlike the energy weighting, the EFN structure does not allow us to easily constrain the critical angular weighting values. Instead, we search for a set of observables with specific angular weighting exponents which can be combined with IRC-safe observables to approximate the PFN performance. We consider IRC-unsafe observables with energy weighting exponent $\kappa=\frac{1}{2}$ and angular weighting exponent $\beta \in \{\frac{1}{2}, 1, 2\}$, to cover narrow- and wide-angle radiation. A summary of the high-level models and the corresponding IRC-safe and IRC-unsafe observables used in the search is shown in Table \ref{tab:ModelSummaries}. The results for the traditional features (N-subjettiness, jet mass, tower multiplicity, and IRC-unsafe generalized angularities) are shown in Table \ref{tab:AUCs_BetaSearch_traditional} and illustrated in Figure \ref{fig:BetaSearch_traditional}. These traditional observables fail to capture sufficient IRC-safe and IRC-unsafe information to match the PFN in all energy ranges. The results for the LDA models using EFPs and tower multiplicity are shown in Table \ref{tab:AUCs_BetaSearch_EFPs} and illustrated in Figure \ref{fig:BetaSearch_EFPs}. In contrast to the traditional features, IRC-safe EFPs largely capture the IRC-safe information used by the EFNs; in addition, there is a boost in performance when combining them with IRC-unsafe EFPs with small angular weighting exponents such as $\beta=\frac{1}{2}$, nearly matching the PFN performances\footnote{LDA models trained on IRC-safe and IRC-unsafe EFPs with $d \leq 6$ and $d \leq 7$ are also considered, in each case providing a marginal improvement in AUC, at the cost of significantly more EFP variables. LDA models with EFPs with $d \leq 5$ are thus chosen in this paper as they result in a good approximation of the PFN performances while keeping the number of EFP variables manageable.}.
The boost in performance provided by the IRC-unsafe observables increases with energy range, which is consistent with the results illustrated in Fig. \ref{fig:KappaSearch}, corroborating the importance of IRC-unsafe observables for jets with higher $p_\textrm{T}$. Although less compact than the traditional observables, EFPs are more effective at capturing the necessary information for quark/gluon classification. \begin{table*}[!ht] \caption{\label{tab:AUCs_BetaSearch_traditional} AUCs of the DNN models trained on IRC-safe jet mass and N-subjettiness variables, and their combinations with IRC-unsafe generalized angularities with $\kappa=\frac{1}{2}$ and $\beta \in \{\frac{1}{2}, 1, 2\}$.} \begin{ruledtabular} \begin{tabular}{ccccc} Jet $p_{\textrm{T}}$ range & DNN[safe] & DNN[safe, $n_{\mathrm{t}}$, $\lambda[\frac{1}{2}, \frac{1}{2}]$] & DNN[safe, $n_{\mathrm{t}}$, $\lambda[\frac{1}{2}, 1]$] & DNN[safe, $n_{\mathrm{t}}$, $\lambda[\frac{1}{2}, 2]$] \\ \hline 200-220 GeV & 0.804 $\pm$ 0.001 & 0.809 $\pm$ 0.001 & 0.809 $\pm$ 0.001 & 0.810 $\pm$ 0.001 \\ 500-550 GeV & 0.815 $\pm$ 0.001 & 0.822 $\pm$ 0.001 & 0.824 $\pm$ 0.002 & 0.824 $\pm$ 0.001 \\ 1000-1100 GeV & 0.822 $\pm$ 0.002 & 0.829 $\pm$ 0.002 & 0.831 $\pm$ 0.001 & 0.831 $\pm$ 0.001 \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*}[!ht] \caption{\label{tab:AUCs_BetaSearch_EFPs} AUCs of the LDA models trained on IRC-safe EFPs, and their combinations with tower multiplicity and IRC-unsafe EFPs with $\kappa=\frac{1}{2}$ and $\beta \in \{\frac{1}{2}, 1, 2\}$, with $d \leq 5$ edges.} \begin{ruledtabular} \begin{tabular}{ccccc} Jet $p_{\textrm{T}}$ range & LDA[safe] & LDA[safe, $n_{\mathrm{t}}$, EFP[$\frac{1}{2}, \frac{1}{2}$]] & LDA[safe, $n_{\mathrm{t}}$, EFP[$\frac{1}{2}, 1$]] & LDA[safe, $n_{\mathrm{t}}$, EFP[$\frac{1}{2}, 2$]] \\ \hline 200-220 GeV & 0.816 $\pm$ 0.001 & 0.821 $\pm$ 0.001 & 0.820 $\pm$ 0.001 & 0.818 $\pm$ 0.001 \\ 500-550 GeV & 0.825 $\pm$ 0.001 & 0.835 $\pm$ 0.001 & 0.834 $\pm$ 0.001 & 0.830 $\pm$ 0.001 \\ 1000-1100 GeV & 0.826 $\pm$ 0.001 & 0.844 $\pm$ 0.001 & 0.842 $\pm$ 0.001 & 0.836 $\pm$ 0.001 \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{figure*}[!ht] \includegraphics{beta_search_nsub_genAng.pdf} \caption{AUCs of the DNN models trained on IRC-safe N-subjettiness and IRC-unsafe generalized angularity observables with $\kappa=\frac{1}{2}$ and $\beta \in \{\frac{1}{2}, 1, 2\}$, in combination with the tower multiplicity.} \label{fig:BetaSearch_traditional} \end{figure*} \begin{figure*}[!ht] \includegraphics{beta_search_efps.pdf} \caption{AUCs of the LDA models trained on IRC-safe EFP and IRC-unsafe EFP observables with $\kappa=\frac{1}{2}$ and $\beta \in \{\frac{1}{2}, 1, 2\}$, in combination with the tower multiplicity.} \label{fig:BetaSearch_EFPs} \end{figure*} \section{\label{sec:Conclusions}Conclusions} In this work, we have confirmed that state-of-the-art machine learning models for quark/gluon jet classification are sensitive to perturbative and non-perturbative effects, the latter rooted in the hadronization process. Moreover, we have shown that the reliance of the networks on the non-perturbative IRC-unsafe information grows with jet $p_\textrm{T}$. Although the IRC-unsafe observable space is in principle infinite, its energy and angular dependence can be narrowed down by utilising the strategies introduced in this paper.
By comparing the performance of networks whose architecture constrains the models to learn functions with prescribed energy weighting forms, EFN${[\kappa]}$s, we have found that most of the IRC-unsafe information can be captured by observables with small energy weighting exponents ($\frac{1}{4} \leq \kappa \leq \frac{1}{2}$). Similarly, we have performed a grid search of high-level observables with narrow categories of angular weighting factors to delimit the angular dependence of the IRC-unsafe information. The results show that most of this information can be captured by small angular weighting factors ($\beta=\frac{1}{2}$). This indicates that, as expected, the IRC-unsafe information relevant for quark/gluon jet classification is due to soft, narrow-angle radiation. Understanding the nature of the information used by deep neural networks trained for the classification of quark- vs. gluon-initiated jets, and mapping it into physically interpretable and compact jet observables, is an extremely powerful tool that could be used for analyses searching for signals beyond the Standard Model. The strategy presented in this paper allows for the interpretation of the information learned by PFNs in terms of high-level physics observables which provide a sense of the nature of the machine-learned information. This information was found to be both IRC-safe and IRC-unsafe, corresponding to perturbative and non-perturbative hadronization effects. The strategy proposed in this paper allows physicists to control and assess the systematic uncertainties of the networks by confidently including or excluding information from the learning process. In addition, this strategy can easily be extended to other analyses where having robust and interpretable observables that match the performance of deep neural networks would be a powerful tool. \begin{acknowledgments} We wish to thank Andrew Larkoski, Ian Mount, Benjamin Nachman, Joakim Olsson, Tilman Plehn, and Jesse Thaler for their valuable feedback and insightful discussions. We also thank Wenjie Huang for his work on the initial stages of this paper. This material is based upon work supported by the National Science Foundation under grant number 1633631. DW and MF are supported by the DOE Office of Science. The work of JC and PB was supported in part by NSF grant 1839429 to PB. \end{acknowledgments}
\section{Introduction} Despite observational advances in characterising exoplanets and the environments in which they form (protoplanetary discs), a coherent picture that explains the origin and diversity of planets from their disky forebears remains elusive. We know planets must sculpt the discs in which they form, and indeed we see structures in observed protoplanetary discs; however, a lack of understanding means that using these observed structures to learn about the details of planet formation is challenging. There is a handful of disc systems which show IR colours and SEDs that are inconsistent with primordial discs: discs that are optically thick from small to large radii \citep[e.g.][]{Strom1989,Skrutskie1990}. Instead, these objects show inner regions that are heavily depleted in dust, while returning to primordial levels in the outer regions \citep[e.g.][]{Calvet2005,Espaillat2014,Owen2016}. Given these discs are partially cleared (in at least the dust), they have been termed ``transition'' discs. It is now known that transition discs do not represent a homogeneous class \citep[e.g.][]{OC12}. While many transition discs are believed to be protoplanetary discs caught in the act of clearing through photoevaporation \citep{Cieza2008,Merin2010,Owen2011}, a significant subset does not match any of the characteristics that would be associated with a protoplanetary disc in the throes of destruction \citep[e.g.][]{Kim2009,Espaillat2010,OC12}. Approximately half of transition discs surveyed by \citet{OC12} had large accretion rates ($\dot{M}\gtrsim10^{-9}$ M$_\odot$~yr$^{-1}$), large cleared dust cavities (out to $\gtrsim 10$~AU) and they were often the brightest of all class II discs at mm wavelengths \citep{Andrews2011,OC12,Ansdell2016}; for this reason they have been termed ``mm-bright transition discs'' \citep{Owen2016}. The majority of mechanisms invoked to create a transition disc signature do so by removing dust from the inner regions, and preventing its resupply, by trapping dust outside some radius in a pressure-enabled dust trap. Indeed this is true for the two most commonly invoked mechanisms: photoevaporation \citep[e.g.][]{Clarke2001,Alexander2007,Owen2011,Gorti2015} and gap formation by giant planets \citep[e.g.][]{Calvet2005,Rice2006,Zhu2011,Zhu2012,Owen2014}. Several of the mm-bright transition discs have been imaged at high resolution using {\it ALMA}, and in these high resolution images they show strong axisymmetric {\it and} non-axisymmetric emission features \citep[e.g.][]{Casassus2013,vanderMarel2013,Perez2014,vanderMarel2015,Andrews2016,Canovas2016}; see \citet{Casassus2016} for a recent review. These features have been linked to signs of planet formation, although as yet there is no clear understanding of how to link these observations to theory. A common interpretation of the non-axisymmetric structures is that they are caused by large-scale vortices \citep[e.g.][]{vanderMarel2013,Ataiee2013,LyraLin2013}. Since these vortices represent local pressure maxima that orbit the star with roughly the local Keplerian velocity, they can efficiently trap dust particles \citep[e.g.][]{Meheut2012,LyraLin2013,Zhu2014}. The strong dust density contrast that can result from dust particle trapping in vortices can lead to strong azimuthal surface brightness differences. The Rossby Wave Instability \citep[RWI;][]{Lovelace1999,Li2000} provides a mechanism to generate vortices in astrophysical discs that contain radial structure.
Sharp radial features give rise to an extremum in potential vorticity that leads to vortex formation. Non-linear hydrodynamic simulations \citep[e.g.][]{Li2001} show that several small-scale vortices emerge from the linear instability before they grow and merge, resulting in one large-scale vortex, which can trap particles \citep[e.g.][]{Lyra2009,Meheut2012,Zhu2014}. Transition discs are believed to contain a cavity edge in the gas, and such drops in gas surface density have recently been observed in CO emission \citep{vanderMarel2015,vanderMarel2016}\footnote{Note: CO is likely to be optically thick, and at this stage it is still difficult to directly transform CO structures into gas structures.}. Such an axisymmetric gas structure is necessary to explain many of the observed features of transition discs; however, it remains to be seen whether the observed density drops are steep enough to trigger the RWI. While massive planets (of order Jupiter mass and higher) inserted into discs in simulations are known to produce deep and sharp cavities, sharp enough to trigger the RWI, it is unclear whether this is likely to occur in reality. This is because massive planets are inserted into disc simulations instantaneously (or grown over several orbits), necessarily producing a transient phase with a very sharp cavity which is RWI unstable. In a realistic scenario, however, a planet accretes and grows on a time-scale comparable to the evolution of the protoplanetary disc itself. Viscosity present in the disc and, in fact, the vortices initially created by any planet gap can relax the gas to a much smoother distribution, which would be {\it stable} to the RWI over the long-term. This raises the intriguing question: if a transition disc cavity is stable to the RWI, or only unstable for a short period of time, why do transition discs, which are observable on time-scales $\gtrsim 10^4$ orbits, display non-axisymmetric structures generated by RWI vortices? This question is perhaps the biggest weakness of the RWI mechanism in explaining the observed transition disc structures \citep{Hammer2016}. The RWI can also be triggered in protoplanetary discs at the edge of the dead-zone \citep[e.g.][]{Lyra2009}, where the change in viscosity leads to a sharp change in the surface density \citep[e.g.][]{Gammie1996}. Recent hydrodynamic simulations have shown that large-scale vortices can also be produced this way \citep[e.g.][]{Regaly2012,Flock2015,Lyra2015,Ruge2016} through the RWI. Recent work \citep{Ruge2016,Pinilla2016} has shown that the dead-zone model can produce rings and gaps in scattered-light and mm images, although it remains to be seen if the model can explain the large drops in dust-opacity in the inner disc sufficient to reproduce the SEDs of transition discs. Finally, vortices can also be formed through baroclinic instabilities \citep[e.g.][]{Klahr2003,Lesur2010,Raettig2015}. By trapping dust at some radius, while it continues to drift in from larger radii, transition discs will have significant increases in the dust-to-gas ratio in the dust trap, from the standard ISM value of 0.01, to values that can approach unity \citep[e.g.][]{Pinilla2012}. Therefore, it has been suggested that these transition disc dust traps are likely to be sites of increased planetesimal and planet formation. Indeed, \citet{Lyra2008} demonstrated that it is possible to get direct collapse to form planetary embryos in RWI generated vortices.
One interesting advance in the theory of planet formation is the concept of pebble accretion \citep[e.g.][]{Ormel2010,Johansen2010,Lambrechts2012}. Dust particles that have gas-drag induced stopping times comparable to their orbital times can be rapidly accreted by planetary embryos. This is because gas pressure gradients induced by the planetary embryo's presence cause dust particles to be accreted if they approach within the proto-planet's Hill sphere, decreasing the growth time to $\sim 10^{4}$~years --- significantly faster than the standard planetesimal accretion times. Since pebble accretion is very sensitive to dust particle size --- it is only efficient for dust particles that are close to optimally coupled to the gas --- it can only work where there are large numbers of dust particles close to this size. The dust traps in transition discs {\it inevitably} provide a reservoir of these pebbles, as those dust particles that are most efficiently accreted are also those most efficiently trapped. Here we explore the possibility of low-mass planet formation in the pressure traps of transition discs, and argue that it is likely to naturally arise, while perhaps also explaining a variety of observational signatures that are now commonly associated with transition discs. We are agnostic about the mechanism that creates the transition disc itself, but argue that if low-mass planet formation begins in transition discs as we suggest, it is: i) likely to be rapid; and ii) if the disc is sufficiently massive, able to generate vortices that can lead to non-axisymmetric structures similar to those recently observed. We structure our work as follows: in Section~\ref{sec:mechanism} we discuss the physical picture and motivate the basic principles of our new mechanism. In Section~\ref{sec:sims} we present numerical simulations to look at the non-linear long term evolution. We discuss our results in Section~\ref{sec:results} and summarise in Section~\ref{sec:summary}. \section{Physical Mechanism} \label{sec:mechanism} The characteristic feature of transition discs is the significant drop in opacity at small radii relative to a primordial disc, with a return to primordial values at larger radii. This opacity drop is interpreted as a significant removal of dust (and hence opacity) close-in, while returning to standard values further out in the disc. The changeover from the optically-thin dust in the inner regions to the optically-thick outer regions is known to be sharp \citep[e.g.][]{Andrews2011}. Thus the standard explanation for such a dust distribution is that there is a ``dust-trap'', where a pressure maximum can overcome the gas accretion flow and trap dust particles in the vicinity of the pressure maximum due to gas-drag. This is in contrast to a gradual change in the dust properties that might result from, for example, dust evolution \citep[e.g.][]{Dullemond2005,Birnstiel2012}. If an axisymmetric maximum is generated in the gas surface density, which in turn creates a pressure maximum, dust particles will drift towards this pressure maximum, where they feel no gas drag. The time-scale on which dust particles drift towards, and are trapped within, the pressure trap is determined by the particles' Stokes number, or the non-dimensional stopping time ($\tau_s$). Dust particles with $\tau_s$ greater than the viscous $\alpha$ can become trapped \citep[e.g.][]{Birnstiel2013}.
However, it is those particles with $\tau_s$ near unity ($\sim10^{-1}$--$10$) that are {\it rapidly} and {\it efficiently}} trapped in a pressure maximum, on a time-scale $\lesssim100$ orbits. Particles with Stokes numbers far from unity are not strongly trapped; for very small Stokes numbers diffusion and advection dominate, and the dust closely follows the gas distribution. The size of a dust particle $s$ can be related to the Stokes number (in the Epstein drag limit, relevant for small dust particles in the outer disc) as:
\begin{equation} s\approx2\,{\rm mm}\, \tau_s\left(\frac{\Sigma_g}{1~{\rm g~cm}^{-2}}\right)\left(\frac{\rho_d}{3~{\rm g}~{\rm cm}^{-3}}\right)^{-1} \end{equation}
where $\Sigma_g$ is the local gas surface density and $\rho_d$ is the internal density of a dust grain. Thus, for typical parameters found in most transition discs, it is the mm- to cm-sized grains that are trapped in the pressure traps. This back-of-the-envelope argument agrees with early mm observations of transition discs \citep[e.g.][]{Andrews2011}, which showed that the mm-sized grain population is confined to narrow rings indicative of pressure trapping \citep{Pinilla2012}.

Dust trapping can increase the surface density of particles with stopping times close to unity to very high values, of the order of 10~g~cm$^{-2}$ \citep[e.g.][]{Pinilla2012} in the most massive transition discs. Thus, the dust-to-gas ratio in the trap is significantly enhanced above the standard ISM value of 0.01. Such environments are ripe for the formation and growth of planetesimals and planetary embryos, either through coagulation or through direct collapse via mechanisms such as the streaming instability \citep[e.g.][]{Youdin2005,Johansen2007}. The streaming instability is likely to be important in these dust traps since it is most strongly triggered in environments with large dust-to-gas ratios \citep{Johansen2009}. We suggest that, should a planetary embryo reach a mass high enough to undergo pebble accretion, it will grow extremely rapidly, as we now demonstrate.

\subsection{Pebble Accretion in a Dust Trap}

Pebble accretion is a mechanism by which a planetary embryo can rapidly grow to significant mass \citep[e.g.][]{Lambrechts2012}. For dust particles that are coupled to the gas on time-scales comparable to the orbital time (i.e. $\tau_s\sim 1$), the embryo can accrete particles from impact parameters significantly larger than its physical radius, owing to gravitational focussing aided by gas drag. In fact, gas drag enables the embryo to accrete particles from radii out to its Hill sphere ($R_H=a(M_p/3M_*)^{1/3}$), with all particles with $\tau_s$ in the range 0.1--1 being accreted if they approach the embryo's Hill sphere \citep{Ormel2010,Lambrechts2012}. The rate at which embryos accrete through pebble accretion depends on whether accretion occurs in a largely planar or spherical manner \citep{Morbidelli2015,Bitsch2015}. In the planar case, the embryo accretes from the full height of the pebble disc, whereas in the spherical case it accretes from only a fraction of the disc's height and is therefore less efficient. The transition from spherical to planar accretion occurs approximately when the Hill sphere is larger than the scale height of the pebbles \citep{Morbidelli2015}.
In terms of planet mass, this condition reads:
\begin{equation} M_p \gtrsim 0.03~{\rm M}_\oplus \left(\frac{M_*}{1~{\rm M}_\odot}\right)\left(\frac{H/R}{0.1}\right)^3\left(\frac{\alpha}{10^{-3}}\right)^{3/2} \end{equation}
As we shall see later, we are mainly concerned with accretion onto embryos with masses $\gtrsim 1$~M$_\oplus$; thus, for simplicity, we assume pebble accretion always takes place in a 2D fashion, with an accretion rate approximately given by:
\begin{equation} \dot{M}_{\rm peb}\approx 2\Omega R_H^2 \Sigma_{\rm peb} \end{equation}
As $R_H\propto M_p^{1/3}$, this means that at late times the planet mass will grow as $t^3$, provided it does not locally reduce the pebble surface density. The corresponding growth time-scale is:
\begin{eqnarray} t_{\rm acc}&\approx& 3\times 10^{3}\,\,{\rm yrs}\,\left(\frac{\Sigma_p}{3\,{\rm g~cm}^{-2}}\right)^{-1}\left(\frac{M_p}{5\,{\rm M}_\oplus}\right)^{1/3}\nonumber \\ &&\times \left(\frac{a}{20~{\rm AU}}\right)^{-1/2}\left(\frac{M_*}{1~{\rm M}_\odot}\right)^{1/6} \end{eqnarray}
Such a time-scale is clearly very rapid: it could significantly deplete the local reservoir of pebbles by turning them all into a low-mass planet and, as we shall argue in the next section, such a rapid accretion rate will have a significant thermal impact on the surrounding disc. First, however, we will explore the time to deplete the dust trap using a simple model.

Assuming that the dust trap is in a balance between turbulent diffusion and gas drag, and contains a mass in pebbles $M_{\rm peb}$, the surface density of pebbles in the trap is given by $\Sigma_p=M_{\rm peb}/(2\pi R H_p)$, where $H_p$ is the radial width of the dust trap. Without considering the feedback on the gas, $H_p$ could, in principle, be very small compared to the disc's vertical scale height ($H$). Such a small $H_p$ would result in a dust-to-gas ratio well above unity, meaning dust drag becomes much less effective, and as such the assumptions used in deriving a thin $H_p$ break down. Thus, we expect $H_p$ to become fixed at the gas radial scale length when the dust-to-gas ratio approaches unity. As the dust traps in transition discs are ordinarily expected to have dust-to-gas ratios well above the ISM value, for simplicity we set $H_p\sim H$ and leave a more detailed calculation to future work. Therefore, both the pebble reservoir and the forming planet in a transition disc dust trap will evolve according to the following coupled equations:
\begin{eqnarray} \frac{{\rm d}M_p}{{\rm d}t}&=&2\Omega a^2 \left(\frac{M_p}{3M_*}\right)^{2/3}\Sigma_p\label{eqn:planet_evolve}\\ \frac{{\rm d}\Sigma_p}{{\rm d}t}&=&-\left(\frac{H}{R}\right)^{-1}\frac{\dot{M}_p}{2\pi a^2}\label{eqn:pebble_evolve} \end{eqnarray}
Equation~\ref{eqn:pebble_evolve} implies that at late times the pebble surface density drops exponentially, with a characteristic time-scale comparable to $t_{\rm acc}$ for a planet mass of roughly $M_{\rm peb}$. Therefore, we expect the planet mass to grow as $t^3$ until its mass becomes comparable to the mass originally in the pebble reservoir, at which point the reservoir is rapidly depleted in an exponential fashion. We are agnostic about the initial embryo mass, since the strong temporal scaling of the growth makes our calculations largely insensitive to this value, particularly at late times, when the $t^3$ growth effectively erases the initial condition by depleting all of the available mass into the planet. Depending on the pebble surface density and separation, however, the absolute time-scale over which this occurs varies.
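To illustrate the behaviour of this coupled system, the short Python sketch below integrates Equations~\ref{eqn:planet_evolve} \& \ref{eqn:pebble_evolve} with an off-the-shelf ODE solver. This is not the code used to produce Figure~\ref{fig:evolve_res}; the stellar mass, separation, aspect ratio and initial conditions are merely nominal assumptions.
\begin{verbatim}
# Minimal sketch: integrate the coupled planet-growth /
# pebble-depletion equations for nominal, assumed parameters.
import numpy as np
from scipy.integrate import solve_ivp

G, Msun, Mearth = 6.674e-8, 1.989e33, 5.972e27   # cgs
AU, yr = 1.496e13, 3.156e7

Mstar = 1.0 * Msun     # assumed stellar mass
a     = 20.0 * AU      # assumed trap location
h     = 0.1            # assumed aspect ratio H/R
Omega = np.sqrt(G * Mstar / a**3)

def rhs(t, y):
    Mp, Sigma_p = y
    Sigma_p = max(Sigma_p, 0.0)
    # Planar pebble accretion: Mdot = 2 Omega R_H^2 Sigma_p
    Mdot = 2.0 * Omega * a**2 * (Mp / (3.0 * Mstar))**(2.0/3.0) * Sigma_p
    # Reservoir (radial width H_p ~ H) is depleted as the planet grows
    dSigma_dt = -Mdot / (2.0 * np.pi * a**2 * h)
    return [Mdot, dSigma_dt]

y0 = [1e-4 * Mearth, 3.0]   # embryo mass [g], Sigma_p [g/cm^2]
sol = solve_ivp(rhs, [0.0, 1.0e6 * yr], y0, rtol=1e-8)
print("Planet mass after 1 Myr: %.1f M_Earth" % (sol.y[0, -1] / Mearth))
\end{verbatim}
For these nominal values the solution displays the expected behaviour: $t^3$ growth at early times, followed by exponential depletion of the reservoir once the planet mass approaches the mass initially present in pebbles.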
Nonetheless, this is always a rapid evolution, occurring in under $\sim 10^5$ years for a wide range of plausible surface densities and separations.

{ Pure coagulation could give rise to the initial embryo, bypassing the fragmentation barrier (which resides at sizes of order centimetres) via the sweep-up mechanism (e.g.\ Windmark et al.\ 2012a,b; Dr{\c a}{\.z}kowska et al.\ 2013), whereby larger particles can break through the fragmentation barrier by ``sweeping up'' smaller ones. We also highlight two other possible avenues for the formation of the initial embryos that could be present in transition disc dust traps.} First, direct gravitational collapse itself could proceed: although in a slightly different scenario (inside an RWI vortex), \citet{Lyra2008} showed that, at dust surface densities not too dissimilar to those considered here, direct collapse could produce embryos of order $10^{-2}$~M$_\oplus$ and above. Second, as we have hinted at above, the dust traps are also prime sites for planetesimal and embryo formation through the streaming instability. \citet{Johansen2015} and \citet{Simon2016} showed through numerical simulations that one can grow embryos up to masses $\sim 10^{-4}$ M$_\oplus$. { The streaming instability and coagulation can also work together to promote embryo formation \citep{Drakowska2014}.} Any of these scenarios, or perhaps others, could generate embryos massive enough to begin pebble accretion. Further, since the absolute growth rate increases with mass, the first ``lucky'' embryo to start accreting will always remain considerably more massive than all others. In the following calculations, we will assume starting embryo masses in the range $10^{-4}$--$10^{-3}$ M$_\oplus$, in accord with the values from the embryo formation models described above. The gestation of these embryos into planets in our scenario is demonstrated in Figure~\ref{fig:evolve_res}, where we numerically evaluate Equations~\ref{eqn:planet_evolve} \& \ref{eqn:pebble_evolve} for parameters expected in transition discs.
\begin{figure*} \centering \includegraphics[width=\textwidth]{Evolution_2} \caption{The evolution of the planet mass (top row) and pebble surface density (bottom row) as a function of time. The solid lines show a starting embryo mass of $10^{-4}$~M$_\oplus$ and the dashed lines show a starting embryo mass of $10^{-3}$~M$_\oplus$. The line colour indicates the initial pebble surface density of 0.1 (red), 1.0 (blue) \& 10.0 (black) g~cm$^{-2}$. Each column shows the calculations performed at separations of 10, 20 \& 50 AU.}\label{fig:evolve_res} \end{figure*}
Therefore, in this picture the embryos grow by depleting the local pebble reservoir, reaching planetary masses very quickly ($\sim 10^5$~years). As seen in the Figure, this model naturally and rapidly produces super-Earth and Neptune-mass planets in these dust traps at large radii ($\gtrsim 10$~AU) from the host star. This formation channel could be one of the dominant modes of low-mass planet formation at large separations.

\subsection{Thermal Consequences}\label{sec:thermal}

Apart from providing the ideal site for the rapid formation of low-mass planets, and thereby exploiting the well-known virtues of pebble accretion, perhaps the most interesting and novel consequence of this scenario is the thermal feedback from the rapid accretion.
The energy resulting from the rapid accretion of pebbles will be liberated as radiation at the planet's surface, necessarily resulting in a large accretion luminosity. We will argue that if the low-mass planet can disrupt the thermal structure of the disc outside its Hill sphere (inside which the planet's gravity dominates), this can give rise to a source of vorticity strong enough to grow large scale vortices. In the outer regions of a protoplanetary disc we expect the disc to be optically thin to IR radiation \citep[e.g.][]{Chiang1997}. Thus, the temperature structure of an optically thin disc with background temperature $T_d$ surrounding a source of luminosity is given by:
\begin{equation} T=T_d\left[1+\left(\frac{R_T}{r}\right)^2\right]^{1/4}\label{eqn:temp_profile} \end{equation}
where $R_T$ is the radius at which the black-body temperature due to the irradiation from the planet alone equals the background disc temperature, and is given by:
\begin{equation} R_T=\left(\frac{L}{16\pi\sigma T_d^4}\right)^{1/2} \end{equation}
Thus, assuming an accretion luminosity of $L=GM_p\dot{M}_{\rm peb}/R_p$, we can find the ratio of $R_T$ to the Hill radius as:
\begin{eqnarray} \frac{R_T}{R_H}&=&\left(\frac{2GM_p\Omega\Sigma_{\rm peb}}{16\pi\sigma T_d^4R_p}\right)^{1/2}\\ &\approx & 5 \left(\frac{M_p}{5~{\rm M}_\oplus}\right)^{1/2}\left(\frac{R_p}{1.5~{\rm R}_\oplus}\right)^{-1/2}\left(\frac{\Omega}{3\times10^{-9}~{\rm s}^{-1}}\right)^{1/2}\nonumber \\&&\times \left(\frac{T_d}{30~{\rm K}}\right)^{-2}\left(\frac{\Sigma_{\rm peb}}{3~{\rm g~cm}^{-2}}\right)^{1/2} \end{eqnarray}
Since the pressure trap can enhance $\Sigma_{\rm peb}$ to high values \citep[in some cases $\gtrsim 10$~g~cm$^{-2}$, e.g.][]{Pinilla2012}, the accreting planet will disrupt the temperature of the disc in regions significantly outside its Hill sphere. In fact, in the above case, the temperature at the edge of the Hill sphere will be more than twice the local disc temperature. Another relevant scale is $R_T/H$, which for the above case is of order unity:
\begin{eqnarray} \frac{R_T}{H}&=&\left[\frac{2GM_p^{5/3}\Omega \Sigma_{\rm peb}}{16(3M_*)^{2/3} \pi\sigma T_d^4R_p(H/R)^2}\right]^{1/2}\\ &\approx&1.0 \left(\frac{H/R}{0.1}\right)^{-1}\left(\frac{M_p}{5~{\rm M}_\oplus}\right)^{5/6}\left(\frac{M_*}{1~{\rm M}_\odot}\right)^{-1/3}\nonumber\\&\times&\left(\frac{R_p}{1.5~{\rm R}_\oplus}\right)^{-1/2}\left(\frac{\Omega}{3\times10^{-9}~{\rm s}^{-1}}\right)^{1/2}\nonumber\\&\times&\left(\frac{T_d}{30~{\rm K}}\right)^{-2}\left(\frac{\Sigma_{\rm peb}}{3~{\rm g~cm}^{-2}}\right)^{1/2} \end{eqnarray}
For distances from the planet $\gtrsim H$, the background Keplerian velocity, due to the shear, is supersonic with respect to the planet. If the temperature disturbance can reach this radius, it could cause wave breaking and significantly adjust the disc structure. However, given the weak scaling of $R_T/H$ with the parameters of the problem, this is only likely to occur in the most extreme cases.

\subsection{Radiative and advective time-scales}

Since the gas in the neighbourhood of the planet is orbiting at slightly different velocities to the planet due to the background shear, there is the obvious question of whether there is time for the gas parcels to be heated to the equilibrium temperature given by Equation~\ref{eqn:temp_profile}. One requires the radiative time-scale to be shorter than the advective time-scale for the gas to be close to local radiative equilibrium.
To first order, the relative azimuthal velocity of a gas parcel with respect to the planet is given by:
\begin{equation} v_{\rm rel}=\frac{3}{2}\Omega x_p \end{equation}
where $x_p=|R-a|$ is the distance from the planet along the radial co-ordinate connecting star and planet. The advective time $t_{\rm adv}\approx x_p/ v_{\rm rel}$ is therefore independent of distance from the planet. The radiative time-scale, by contrast, is approximately:
\begin{equation} t_{\rm rad}\approx\frac{4\pi r^2 k_bT_d}{\mu L \kappa} \end{equation}
The ratio of radiative to advective time-scales is thus:
\begin{eqnarray} \frac{t_{\rm rad}}{t_{\rm adv}}&=&\frac{3k_b\Omega}{2\mu \sigma T_d^3 \kappa} \left(\frac{r}{R_T}\right)^2\\ &\approx& 0.16 \left(\frac{r}{R_T}\right)^2 \left(\frac{\Omega}{3\times10^{-9}~{\rm s}^{-1}}\right)\\ &&\times\left(\frac{T_d}{30~{\rm K}}\right)^{-3}\left(\frac{\kappa}{0.1~{\rm cm^2~g^{-1}}}\right)^{-1} \end{eqnarray}
Therefore, for nominal parameters we expect the surrounding gas to be in radiative equilibrium over the entire region that is actively heated by the planet, unless the opacity is low; since we are in a dust trap with a high dust-to-gas ratio, however, we expect the opacity to be high rather than low. { For example, for an MRN particle distribution ($n(a)\propto a^{-3.5}$, as expected for a collisional cascade) with a maximum particle size of 1~mm, \citet{dalessio2001} find an opacity of $\sim 0.1$~cm$^2$~g$^{-1}$ at FIR and mm wavelengths when the dust-to-gas ratio is $\sim 100$}. We thus conclude that the gas will indeed reach the equilibrium temperature given by Equation~\ref{eqn:temp_profile}, and the disc will have a ``hot-spot'' associated with the rapid planet formation.

\subsection{Implications of the hot-spot}

The fact that the flow is no longer barotropic means no steady-state solution exists and, as we shall see in the numerical calculations, the temperature disturbance launches waves. However, we can gain a great deal of insight by considering how a temperature bump affects the flow structure along a closed streamline in the disc\footnote{Note the following argument is {\it not} strictly a correct solution of the fluid equations in a rotating disc (we will use numerical simulations for that later), but is merely an illustrative calculation to gain insight.}. The vertically integrated steady-state fluid equations along a streamline with path element ${\rm d}\ell$ are given by:
\begin{eqnarray} \frac{\partial}{\partial \ell}\left(\Sigma u \right)&=&0\label{eqn:cont2d}\\ u\frac{\partial u}{\partial \ell}+\frac{1}{\Sigma}\frac{\partial \mathcal{P}}{\partial \ell}&=&0\label{eqn:euler2d} \end{eqnarray}
where $\mathcal{P}$ is the vertically integrated 2D pressure. Adopting a locally isothermal equation of state, such that the 2D pressure is given by $\mathcal{P}=\Sigma c_s^2$, where $c_s$ is the isothermal sound speed, we can combine Equations~\ref{eqn:cont2d} \& \ref{eqn:euler2d} to obtain a standard ``nozzle'' expression:
\begin{equation} \left(\frac{u^2}{c_s^2}-1\right)\frac{\partial \log u}{\partial \ell}=-\frac{\partial \log c_s^2}{\partial \ell} \label{eqn:euler_mom} \end{equation}
As discussed above, the relative velocity of gas parcels with respect to the planet does not become transonic until a distance of $\sim H$; therefore, the majority of the heated gas will have a significantly sub-sonic velocity with respect to the planet and temperature bump.
This is crucial: for the flow to adjust to the new temperature profile and reach dynamical equilibrium, we require the sound-crossing time to be shorter than the flow time, a condition obviously satisfied in the sub-sonic limit. Working in the sub-sonic limit with $r\lesssim H$, Equation~\ref{eqn:euler_mom} implies that $\partial \log u/\partial \ell \approx \partial \log c_s^2/\partial \ell$. Since Equation~\ref{eqn:cont2d} relates the gradient of the surface density to that of the velocity, along with the equation of state we find:
\begin{eqnarray} \frac{\partial \log \Sigma}{\partial \ell}&\approx&-\frac{\partial \log c_s^2}{\partial \ell}\\ \frac{\partial \log \mathcal{P}}{\partial \ell}&=&\frac{\partial \log \Sigma}{\partial \ell}+\frac{\partial \log c_s^2}{\partial \ell}\approx 0 \end{eqnarray}
Namely, the 2D pressure remains approximately constant as a gas parcel passes through the temperature bump, while the surface density drops, inversely tracking the temperature profile. This is an important result, as it means the flow in the vicinity of the planet has become {\it baroclinic}. The implications of this fact become clear if we inspect the inviscid vortensity equation for a 2D disc:
\begin{equation} \frac{D}{Dt}\left(\frac{\bm{\omega}}{\Sigma}\right)=\frac{\bm{\nabla}\Sigma\times\bm{\nabla} \mathcal{P}}{\Sigma^3} \end{equation}
where $\bm{\omega}$ is the vorticity, which is only non-zero in the $\bm{\hat{z}}$ direction in our 2D disc. If the flow were completely barotropic, the vortensity ($\bm{\omega}/\Sigma$) would be conserved. However, along a gas parcel's orbit in the vicinity of the planet, $\bm{\nabla}\Sigma\times\bm{\nabla}\mathcal{P}\ne\bm{0}$, and the hot spot can source vortensity, allowing vortex structures to grow. Therefore, in this work we present a mechanism for the generation and maintenance of vortices in astrophysical accretion discs. We discuss the picture specifically in the context of transition discs, as this is likely to be the observationally relevant case; however, we note it should occur more generally, whenever $R_T\gtrsim R_H$ for a forming planet, or wherever there is a luminous point source in an astrophysical accretion disc.\footnote{Our framework describes the general phenomenon of hot-spot induced anti-cyclones in astrophysical accretion discs. Here, the vortex is induced by planet formation in a transition disc, but in other systems we anticipate a different heat source and different disc type.}

\section{Numerical Calculations}
\label{sec:sims}

In order to investigate the long-term evolution of a disc with a planet-induced temperature bump, and to see whether large scale vortices can grow, we must perform numerical simulations. We work in 2D with the {\sc fargo} code \citep{Masset2000}, adopting a locally isothermal equation of state ($\mathcal{P}=\Sigma c_s^2$); we have modified the code to include a 2D locally isothermal temperature distribution. We consider the evolution of a low-mass planet that induces a temperature bump within a transition-disc-like gas structure.
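The degree of baroclinicity is also a useful numerical diagnostic: the source term on the right-hand side of the vortensity equation can be estimated directly from 2D snapshots of the surface density and pressure (the right-hand panel of Figure~\ref{fig:baroclinic} below shows this quantity for our Standard run). The sketch below is illustrative only, assuming a locally Cartesian patch with uniform grid spacing rather than the polar grid actually used:
\begin{verbatim}
# Minimal sketch (assumes a uniform, locally Cartesian patch,
# not the actual polar grid): |grad(Sigma) x grad(P)| / Sigma^3.
import numpy as np

def baroclinic_source(Sigma, P, dx, dy):
    # Finite-difference gradients of surface density and 2D pressure
    dS_dx, dS_dy = np.gradient(Sigma, dx, dy, edge_order=2)
    dP_dx, dP_dy = np.gradient(P, dx, dy, edge_order=2)
    # z-component of the 2D cross product of the two gradients;
    # this vanishes identically for a barotropic flow P = P(Sigma)
    cross_z = dS_dx * dP_dy - dS_dy * dP_dx
    return np.abs(cross_z) / Sigma**3
\end{verbatim}
Any non-zero residual of this quantity traces the source of vortensity.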
{ To construct the simplest possible model, in order to isolate and investigate the physics of our new process, we neglect planet migration, dust-gas coupling, dust evolution, and cooling, all of which should be addressed in future work.}

\subsection{Setup}

As the starting point for our background transition disc gas structure we take a standard protoplanetary disc model with surface density and passively heated temperature profiles:
\begin{eqnarray} \Sigma_b&=&\Sigma_1\left(\frac{R}{R_1}\right)^{-1}\\ T_b&=&T_1\left(\frac{R}{R_1}\right)^{-1/2} \end{eqnarray}
where the normalising temperature at $R_1$ is chosen such that $H/R=0.1$ at $R_1$. In order to insert a gas cavity into the disc, mimicking the gas structure that is likely present in a transition disc, we modify our background surface density structure ($\Sigma_b$) to the form:
\begin{equation} \Sigma=\frac{\Sigma_b}{2}\left[1+\erf\left(\frac{R-R_0}{\sqrt{2}\sigma}\right)+\epsilon\right] \end{equation}
where $R_0$ is a parameter that sets the location of the peak of the surface density distribution, which we take to occur at $R_1$, such that $R_0\approx0.75 R_1$, and $\epsilon$ is a small number, set to $10^{-3}$, chosen to prevent the surface density from becoming zero at small radii { in the initial surface density distribution}. The parameter $\sigma$ controls the smoothness over which the surface density declines. If $\sigma$ is too small, the disc will be unstable to the standard RWI and will naturally form vortices. This is obviously not what we want to investigate here, so we choose a value of $\sigma=0.3R_1$. We note we are not appealing to any physical model here (e.g. photoevaporation or giant planets); rather, we want to remain agnostic and study a disc that has the general features one might expect in a transition disc, while not starting from a disc structure that is RWI unstable. The angular velocity is initially set to the Keplerian value, { suitably adjusted for the pressure gradient to maintain radial dynamical balance, using the numerical procedure supplied in {\sc fargo}.} We also adopt wave-damping boundary conditions. We have checked that our initial surface density and temperature profile is stable by evolving it forward for 500 orbits at $R_1$. In order to isolate the physics of our new mechanism we ignore disc self-gravity and the indirect potential at this stage, meaning our choice of surface density normalisation is arbitrary, and we scale all results in terms of $\Sigma_1$. We insert a planet in the disc with mass ratio $q=M_p/M_*$ on a fixed circular orbit at $R_1$, and the local temperature is modified such that it has the profile:
\begin{equation} T=T_b\left[1+\left(\frac{R_T^2}{r^2+s^2}\right)\exp\left(\frac{-r^2}{2(fR_T)^2}\right)\right]^{1/4}\label{eqn:T_numerics} \end{equation}
where $r$ is the distance from the centre of the planet. The temperature expression (Equation~\ref{eqn:T_numerics}) contains two additional terms not present in Equation~\ref{eqn:temp_profile}. The factor $s$ is a smoothing length that prevents the temperature profile from diverging on the grid, and the exponential term provides a cut-off in the temperature profile, at a radius of $fR_T$, beyond which gas parcels are so far from the planet that they do not spend sufficient time in its vicinity to be heated.
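For concreteness, the initial surface density and the imposed temperature field amount to only a few lines of code. The sketch below, written in code units with $R_1=\Sigma_1=T_1=1$, is illustrative only and is not the actual {\sc fargo} implementation:
\begin{verbatim}
# Minimal sketch (code units R1 = Sigma1 = T1 = 1; illustrative only):
# cavity-like initial surface density and hot-spot temperature field.
import numpy as np
from scipy.special import erf

R0, width, eps = 0.75, 0.3, 1.0e-3  # cavity parameters (units of R1)

def sigma_init(R):
    sigma_b = R**-1.0               # background power-law profile
    return 0.5 * sigma_b * (1.0 + erf((R - R0)
                                      / (np.sqrt(2.0) * width)) + eps)

def temperature(R, r, R_T, f, s):
    # R: cylindrical radius; r: distance from the planet's centre
    T_b = R**-0.5                   # passively heated background
    bump = (R_T**2 / (r**2 + s**2)) * np.exp(-r**2 / (2.0 * (f * R_T)**2))
    return T_b * (1.0 + bump)**0.25
\end{verbatim}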
While we shall see that the choice of $f$ does affect our results, $s$ does not, as the smoothing occurs on the grid scale, well inside the planet's Hill sphere.
\begin{table*} \centering \begin{tabular}{l|ccccccc} Simulation & $N_R\times N_\phi$ & $q$ & $R_T/R_1$ & $R_T/R_H$ & $f$ & $s$ & $\alpha$ \\ \hline Ctrl1 & $768\times1024$ & 0 & --- & --- & ---& ---& $1\times10^{-6}$ \\ Ctrl2 & $768\times1024$ & $1.5\times10^{-5}$ & 0 & 0 & ---& ---& $1\times10^{-6}$ \\ Standard & $1024 \times 1408$ & $1.5\times10^{-5}$ & 0.06 & 3.5 & $\infty$ & $9\times10^{-3}$ & $1\times10^{-6}$ \\ CutOff & $1024 \times 1408$ & $1.5\times10^{-5}$ & 0.06 & 3.5 & 1 & $9\times10^{-3}$ & $1\times10^{-6}$ \\ LowLum & $1532 \times 2056$ & $4.5\times10^{-5}$ & 0.01 & 0.4 & 1 & $5\times10^{-3}$ & $1\times10^{-6}$ \\ Viscous1 & $1024 \times 1408$ & $1.5\times10^{-5}$ & 0.06 & 3.5 & $\infty$ & $9\times10^{-3}$ & $1\times10^{-4}$ \\ Viscous2 & $1024 \times 1408$ & $1.5\times10^{-5}$ & 0.06 & 3.5 & $\infty$ & $9\times10^{-3}$ & $5\times10^{-4}$ \\ Viscous3 & $1024 \times 1408$ & $1.5\times10^{-5}$ & 0.06 & 3.5 & $\infty$ & $9\times10^{-3}$ & $1\times10^{-3}$ \\ \hline \end{tabular} \caption{Description of the simulation parameters. The second column lists the simulation resolution, columns 3--7 list the physical and numerical parameters described above, and the final column lists the $\alpha$ viscosity included.}\label{tab:sim_params} \end{table*}
All simulations are performed on a polar grid with logarithmically-spaced cells in the radial direction and uniformly-spaced cells in the azimuthal direction. This choice keeps the ratio of the cell side lengths approximately constant, so the cells remain approximately square over the entire domain. The radial grid stretches from 0.1$R_1$ to 10$R_1$, and we consider the full $2\pi$ in the azimuthal direction. We use the {\sc fargo} transport algorithm to speed up time-stepping, and work in a frame that is co-rotating with the planet. An explicit viscosity is included using an $\alpha$ viscosity law. The parameters for the simulation runs are described in Table~\ref{tab:sim_params}. We pick our temperature smoothing length $s$ to correspond to approximately two grid cells, and we also smooth the planet's potential with a length scale of 0.5 times the planet's Hill radius, using the {\sc RocheSmoothing} facility implemented in {\sc fargo}. The planets are inserted at zero time with their full masses, whereas the temperature hot spot's size $R_T$ is grown linearly over the first 50 orbits to the desired value.

\subsection{Simulation Results}

We evolved all of our simulations for 300 orbits to assess the impact of a hot spot around a planet in a transition-disc-like gas structure. As discussed in Section~\ref{sec:thermal}, the hot spot results in a surface density decrease that scales with the temperature increase. The resulting flow properties in the vicinity of the planet for our Standard case are shown in Figure~\ref{fig:baroclinic}. The properties are shown after 50 orbits, once the hot spot has grown to its full size, but before the resulting dynamics begin to dominate the flow.
\begin{figure*} \centering \includegraphics[width=\textwidth]{Baroclinic_2} \caption{Zoom-in on the planet showing the surface density (left), temperature (centre) and magnitude of the baroclinic vector (right). The simulation shown is the Standard case with $R_T/R_H=3.5$ and $f=\infty$, and is shown after 50 orbits, once the hot spot has grown to its full size, but before large scale vortex formation has occurred.
{ The units $\Sigma_1$ and $T_1$ are defined in Equations 24 \& 25 respectively as the surface density and temperature at $R_1$ of the background, power-law disc profile.}}\label{fig:baroclinic} \end{figure*}
Figure~\ref{fig:baroclinic} shows the surface density (left), temperature (centre) and baroclinic vector (right) in a zoom-in on the planet in the Standard run. We see a significant reduction in the surface density in the vicinity of the planet that mirrors the temperature increase, as expected. We also note that the temperature hot-spot launches waves that dominate over the density waves driven purely by the planet's gravity. This surface density drop and temperature increase result in a significant baroclinic term that is 2--3 orders of magnitude larger than that arising from the planetary wakes. This source of vorticity extends well outside the planet's Hill sphere (which has a radius of $\sim 0.017R_1$), and thus gas parcel orbits intersect this region every orbit, allowing for the growth of vorticity.
\begin{figure*} \centering \includegraphics[width=\textwidth]{Standard_surf_evol} \caption{Snapshots of the surface density distribution every 60 planetary orbits for the Standard run with $M_p/M_*=1.5\times10^{-5}$, $R_T/R_H=3.5$ and very low viscosity of $\alpha=10^{-6}$. The planet is located at a position $[0,-1]$.}\label{fig:Standard_Sevol} \end{figure*}
\begin{figure*} \centering \includegraphics[width=\textwidth]{Standard_vortencity_evol} \caption{Snapshots of the vortensity (scaled to the value of the disc with a Keplerian velocity profile) shown every 60 orbits for the Standard run. The planet is located at a position $[0,-1]$.}\label{fig:vortensity_evol} \end{figure*}
The evolution of the surface density and vortensity of the Standard run are shown every 60 orbits in Figures~\ref{fig:Standard_Sevol} \& \ref{fig:vortensity_evol} respectively. These show that the hot spot rapidly produces a number of small scale vortices that grow (through interactions with the hot spot) and merge. After approximately 150 orbits (100 orbits since the hot spot reached full size), the disc contains a significant large scale anti-cyclonic vortex that has grown to the maximal width of $\sim 2H$. The azimuthal surface density enhancement has reached roughly a factor of two over the original transition disc gas structure. Over the continued evolution the vortex grows in strength, and after 300 orbits the vortex has begun to migrate. Once the vortices have merged into a single large scale vortex, it has a pattern speed with respect to the planet in the range $|\Omega_{\rm pattern}/\Omega_p-1|\approx0.01-0.05$, close enough to Keplerian that it should efficiently trap particles \citep{Ataiee2013}.
\begin{figure*} \centering \includegraphics[width=\textwidth]{Panel_big_2} \caption{Snapshots of the surface density distribution for each simulation shown after 150 and 300 orbits. The planet is located at a position $[0,-1]$. The simulation parameters are indicated in Table~\ref{tab:sim_params}.}\label{fig:evol_all} \end{figure*}
The surface density snapshots at 150 and 300 orbits are shown for all our simulations in Figure~\ref{fig:evol_all}. In the three cases where there is no significant source of baroclinicity (Ctrl1, Ctrl2 and LowLum), no large scale vortex forms, as expected.
In Ctrl1 there is no planet and, since in all simulations we have deliberately selected a starting gas profile that is not steep enough to be RWI unstable, the disc profile evolves unchanged for many hundreds of orbits. In Ctrl2 the planet has a mass ratio of $M_p/M_*=1.5\times10^{-5}$ (well below the thermal gap-opening mass); while the planet launches standard density waves as expected, it does not affect the large scale dynamics of the disc. In simulation LowLum we have $R_T<R_H$, and the thermal impact of the planet is contained well within its Hill sphere. As such, no source of significant baroclinicity intersects circular gas parcel orbits and, as expected, no large scale vortex is generated. In the CutOff simulation, where we mimic the effect of the finite thermal inertia of the gas with an exponential cut-off to the temperature profile, the weakened source of vorticity results in a slightly weaker vortex; however, the differences are generally small and a large scale vortex still forms after $\sim 100$ orbits.

\subsubsection{Effect of Viscosity}

Viscosity is known to affect vortex growth and survival in protoplanetary discs, and indeed this is what our simulations with significant viscosity indicate. We find that with $\alpha=10^{-4}$, in the Viscous1 run, vortex growth is barely suppressed compared to the Standard run with negligible viscosity. In Viscous2, with $\alpha=5\times10^{-4}$, large scale vortex formation still occurs but is suppressed, with an azimuthal surface density contrast of $\sim 1.3$ compared with $\sim 2$ in the Standard run. In the most viscous run, Viscous3, with $\alpha=10^{-3}$, we find that there is a weak vortex present at 150 orbits, with an azimuthal density contrast of $\sim 1.05-1.1$; however, it does not grow to become a strong large scale vortex, and after 300 orbits the disc has returned to being close to axisymmetric. At this high viscosity the gas surface density bump is also significantly weakened by standard radial viscous transport, allowing the small vortices initially generated to migrate into the inner regions of the disc. Thus, while higher viscosity certainly makes large scale vortex formation more difficult, it is unclear whether a large scale vortex would have formed had some process (e.g. photoevaporation, a dead-zone or a massive planet) maintained the gas profile during the simulation.

\section{Discussion}
\label{sec:results}

We have shown that ``transition'' discs are prime sites for the growth of low-mass planets by pebble accretion. This planet formation scenario has two important, perhaps mutually exclusive, implications. Firstly, if the planet were able to accrete pebbles at the standard rate for the entire time planet formation is occurring, it would quickly deplete the local pebble reservoir, on a very rapid $\lesssim 10^{5}$~year time-scale. Secondly, if the accretion results in a large accretion luminosity, such that it can heat the disc material outside the planet's Hill radius, it can lead to large scale vortex formation. The first implication raises an interesting question: if the pebble depletion time-scale is so fast ($\lesssim 10^{5}$~years), how is it that we see a number of transition discs with large mm-fluxes (and hence large reservoirs of pebbles) that almost certainly have lifetimes $\gtrsim 10^{5}$ years? We hypothesise that the answer to this question lies in the second consequence.
The observed, probably long lived, ``transition'' discs that have large mm-fluxes \citep[e.g.][]{Andrews2011,OC12,Owen2016} are those likely to have the highest pebble accretion rates, and as such the discs most prone to vortex formation. Vortex formation results in an azimuthal pressure trap that can very efficiently trap pebbles \citep[e.g.][]{Meheut2012,Ataiee2013,Birnstiel2013,Zhu2014}. After all, the same dust particles that are likely to be caught in the dust trap are those which will accrete onto the planet, and are also those most likely to be trapped in the vortex. Since the vortex can migrate slightly, and does not necessarily have a pattern speed that is exactly Keplerian (as seen in our simulations), the pebbles that could accrete onto the planet are instead trapped in the vortex, and can only accrete onto the planet for a small fraction of the time, or not at all if the vortex and planet have migrated apart. Therefore, vortex formation and subsequent particle trapping may be the only way to ensure that these mm-bright transition discs remain long-lived.

In fact, we can calculate the critical pebble surface density for a transition disc to form a vortex and, in principle, remain long lived. We do this by assuming the threshold occurs when $R_T/R_H>1$. The Hill radius scales with planet mass as $M_p^{1/3}$, while $R_T$ scales as $M_p^{17/24}$ (where the solid planet mass-radius relationship can be approximated as $M_p=A_{\rm MR}R_p^4$ in the range 1--10 M$_\oplus$, using the profiles from \citealt{Fortney2007}); thus, the limiting case for a disc to cross the $R_T/R_H>1$ threshold occurs at the highest mass the planet can possibly reach. Therefore, if we approximate the maximum mass a pebble-accreting planet can reach as $M_p\approx2\pi aH_p\Sigma_{\rm peb}$ (i.e. the mass it would reach if it had accreted all the pebbles), and assume it is accreting from a reservoir of pebbles with surface density $\Sigma_{\rm peb}$, then we estimate the critical pebble surface density for vortex formation to occur before the entire reservoir is sequestered into a planet as:
\begin{equation} \Sigma^{\rm crit}_{\rm peb}=\left[\frac{16\pi\sigma T_d^4}{2G\Omega A_{\rm MR}^{1/4}\left(2\pi a H_p\right)^{3/4}}\right]^{4/7}\label{eqn:sigma_crit} \end{equation}
Given a $T\propto R^{-1/2}$ temperature profile for the passively heated, flared disc \citep[e.g.][]{Kenyon1987}, Equation~\ref{eqn:sigma_crit} depends on only a few parameters, such that:
\begin{equation} \Sigma^{\rm crit}_{\rm peb}\approx 0.3\, {\rm g\, cm^{-2}} \left(\frac{H_p}{H}\right)^{-3/7}\left(\frac{a}{20\,{\rm AU}}\right)^{-5/7}\left(\frac{M_*}{1\,{\rm M}_\odot}\right)^{-4/7}\label{eqn:sigma_crit2} \end{equation}
where we have left $H_p/H$ as a free parameter here, although we suspect it to be close to unity, as discussed above. This critical surface density threshold can be compared to many of the well-known mm-bright transition discs. This comparison is shown in Figure~\ref{fig:Sigma_crit_compare}, where we plot the peak surface density at the edge of the cavity determined from mm imaging by \citet{Andrews2011} (using their model fits), compared to the result from Equation~\ref{eqn:sigma_crit2}. { This is done by taking the surface density models provided in \citet{Andrews2011}, which are obtained from fits to the mm image and spectral energy distribution. The pebble surface density is then taken to be the surface density in mm-sized particles at the peak of the profile.
As such, the values are uncertain due to several factors: (i) the simple surface density profile assumed by \citet{Andrews2011}; (ii) uncertainties in the underlying dust-particle distribution, which could contribute to the mm-flux; and (iii) the trap could be optically thick. }
\begin{figure} \centering \includegraphics[width=\columnwidth]{Threshold_compare_2} \caption{The peak surface density of mm particles determined from mm imaging for well-known mm-bright ``transition'' discs taken from \citet{Andrews2011}, shown as points. The point sizes indicate the stellar mass. The lines show the minimum pebble surface density for vortex formation given by Equation~\ref{eqn:sigma_crit2}, for different stellar masses: 0.5 (dot-dashed), 1.0 (solid) \& 2.0 M$_\odot$ (dashed). { The labels show the individual source names.} }\label{fig:Sigma_crit_compare} \end{figure}
Figure~\ref{fig:Sigma_crit_compare} shows that the vast majority of mm-bright ``transition'' discs have mm-sized particle surface densities sufficiently high that vortex formation due to low-mass planet formation is possible.

Finally, it is well known that high viscosities can prevent vortex formation in the planet-induced RWI mechanism and can also dissipate vortices \citep[e.g.][]{ValBorro2007,Fu2014b,ZhuStone14}. Our results are also consistent with these previous RWI results, in that large scale vortex formation is suppressed when the viscous alpha parameter is $>10^{-3}$. While typical values required to explain the global evolution of protoplanetary discs are of this order \citep[e.g.][]{Hartmann1998,Owen2011}, there is every reason to expect the viscosity to be lower in dust traps and in the outer regions of protoplanetary discs. Firstly, non-ideal MHD effects are important in the outer regions of protoplanetary discs \citep[e.g.][]{Armitage2011}, and in ambipolar-diffusion-dominated discs vortices are known to form and survive for an observable length of time \citep{ZhuStone14}. Secondly, the enhanced dust content in the dust trap is known to suppress the strength of MRI turbulence \citep[e.g.][]{Jacquet2012}. Therefore, it is not unreasonable to assume that the viscosity in the neighbourhood of the dust trap is sufficiently low to allow the formation of vortices.

\subsection{Long term evolution}

The long term evolution of the disc will be strongly controlled by whether it is able to form a vortex or not. As discussed above, if a large scale vortex is able to form it will trap all the pebbles in the vortex itself. Since none of the vortices seen in the simulations are co-located with the orbiting planet \citep[see][for a discussion of how the planet and vortex may interact]{Ataiee2014}, the vortex will starve the planet of pebbles and the rapid pebble accretion will cease. Our simulations suggest that vortex formation is fairly rapid ($\sim 100$ orbits). This is similar to the dust trapping time-scale; thus, we suspect that the pebbles will be easily trapped in the vortex. At this stage the pebbles will no longer be available to rapidly accrete onto the planet. If the planet's accretion source is quickly shut off, the planet will still be luminous for a short period of time, as the gaseous envelope previously supported by the accretion contracts towards the planet.
This contraction will maintain the luminosity for a time roughly similar to the gaseous envelope's Kelvin-Helmholtz time-scale ($t_{\rm KH}$), which for a planet with an envelope mass considerably less than its solid mass is given by \citep[e.g.][]{Lee2015}:
\begin{equation} t_{\rm KH}=\frac{GM_p^2X_{\rm env}}{R_pL}=X_{\rm env}t_{\rm acc} \end{equation}
Thus, the Kelvin-Helmholtz time-scale is shorter than the accretion time-scale by the gas envelope to planet mass ratio. For these masses and luminosities $X_{\rm env}$ will be of order 0.01, indicating that the luminosity output of the planet will drop on a time-scale of $\sim 10-100$ orbits. This means that once the large scale vortex has trapped the majority of the pebbles, the source of the baroclinicity generating the vortex will disappear on a time-scale comparable to the vortex formation time-scale, with $R_T\propto t_{\rm KH}^{-1/2}$. Since a gravitationally contracting object evolves such that $t_{\rm KH}$ is approximately the time it has been cooling, $R_T$ will decrease as the inverse square root of time. The vortex lifetime is then finite: without a source of baroclinicity, viscosity, instabilities \citep[e.g.][]{Lesur2009}, or even the inertia of the dust itself \citep[e.g.][]{Fu2014} can destroy a large scale vortex on a time-scale of roughly thousands of orbits, which is short compared to the disc's lifetime ($> 10^4$ orbits). We can therefore expect vortex generation and dissipation to occur several times during the disc's lifetime: pebble accretion onto a planet generates a vortex, which traps the pebbles, suppressing both the accretion and the generation of vorticity; the vortex then dissipates after some time, releasing the dust particles into an axisymmetric ring that can again feed pebble accretion onto a planet, forming another vortex. The entire process can repeat until the pebble reservoir is too heavily depleted to permit vortex formation.

If the pebble reservoir is too small to permit vortex formation { (although, as demonstrated by Equation~\ref{eqn:sigma_crit2}, still significant enough to make the disc appear mm-bright)}, then, as discussed in Section~\ref{sec:mechanism}, pebble accretion onto a planet will rapidly deplete the local reservoir on a time-scale $\lesssim 10^{5}$~years. { This means that the time a disc in this model would spend at intermediate mm-fluxes would be short.} Once the planet has depleted its local reservoir, particles at larger orbital separations will continue to drift into the dust trap. At large radii the maximum dust particle size is limited by drag rather than fragmentation \citep[e.g.][]{Birnstiel2012b}. Therefore, we can imagine that as dust particles drift in from large radii they will grow, and when they arrive in the vicinity of the planet they will be readily accreted, as they will have naturally reached a size with $\tau_s=1$ at the planet's location. In this stage, the planet's accretion rate will be limited by the rate at which dust particles drift into the trap, rather than by the standard pebble accretion rate. Since these planets will necessarily be low-mass ($< 10$~M$_\oplus$), they are unlikely to be able to accrete a significant gas envelope \citep[e.g.][]{Rafikov2006,Piso2014}. Thus, unlike the dust particles, which could in principle be rapidly sequestered into low-mass planets, the gas will remain largely unchanged.
This means that this process may ultimately result in a dust-poor, gas-rich disc that is rich in low-mass planets, a type of disc for which observational constraints are currently poor.

The final concern is migration. These planets have masses low enough that they are unlikely to open a gap in the gas disc; the gap-opening mass is typically $\sim 0.1$~M$_J$ at tens of AU \citep[e.g.][]{Crida2006}. Therefore, migration is likely to take place in the type-I regime. While the migration rates for low-mass planets are still greatly uncertain in realistic protoplanetary discs, even the isothermal type-I migration rates, which are likely to strongly over-predict the true rates, give time-scales of $\sim 10^5$~years for a few-Earth-mass planet at tens of AU \citep[e.g.][]{Ward1997}. This means that a forming planet is able to initiate vortex formation without migrating away on the time-scale required for the vortex to form. However, migration might push the formed planet inside the transition disc cavity after the vortex has formed but before it has dissipated. The vortex will certainly qualitatively affect the migration of such a low-mass planet (we note again that we do not consider migration in the simulations presented here), as demonstrated by \citet{Ataiee2014}. This might mean that over the Myr-long lifetime of planet formation, the cycles of vortex generation and dissipation in the disc may deposit a handful of low-mass planets inside the transition disc cavity, which could subsequently scatter.

\subsection{Observational implications}

While we have suggested that transition discs are prime sites for planet formation through pebble accretion, the fact that it is so rapid means that, if left to proceed unchecked in discs with the parameters of standard mm-bright transition discs ($a\sim 10-50$~AU, $\Sigma_{\rm peb}\sim 1-10$~g~cm$^{-2}$), it would deplete the entire disc on an incredibly short time-scale of $\lesssim 10^{5}$~years. We suggest that the reason these mm-bright transition discs can exist for long enough to be observable is that, at the dust surface densities expected in these discs, the act of low-mass planet formation in their dust traps should result in large-scale vortex formation. As discussed above, these vortices should trap all the pebbles and prevent further planet growth. Therefore, many of these discs should spend { some} fraction of their lifetimes with large scale vortices, which will be observable as large scale asymmetries when the discs are imaged at mm wavelengths. { Among the mm-bright transition discs imaged at high resolution, IRS~48 \citep{vanderMarel2013} and HD~142527 \citep{Casassus2013} show strong asymmetries, while LkH$\alpha$~330 \citep{Isella2013} and SAO~206462 \citep{Perez2014} show weaker asymmetries. Many other observed transition discs do not show any evidence of an asymmetry (e.g. LkCa~15 and SR~24S, \citealt{vanderMarel2015}; Sz~91, \citealt{Canovas2016}; DoAr~44, \citealt{vanderMarel2016}). In recent ALMA surveys of protoplanetary discs \citep[e.g.][]{Pascucci2016,Ansdell2016} several more transition-disc-like structures have been detected, many of which show no evidence for an asymmetry at $\sim 0.3''$ resolution. To date it is unclear exactly what fraction of transition discs show asymmetries.
The reason for this is partly the unsystematic nature of the observations, taken at a variety of resolutions, and partly the imprecise definition of a ``transition disc'': for example, should discs with large mm-holes but primordial SEDs be counted in this sample \citep[see][for a discussion of these discs]{Andrews2011,Owen2016}? What can be said with any confidence is that a moderate fraction of mm-bright transition discs show asymmetries.} Finally, exactly what kind of asymmetry our vortices produce will depend on the observational wavelength, the surface density distribution and other factors (e.g. dust growth and destruction) that we do not consider here. Therefore, we must wait until coupled dust-gas simulations are performed before we can draw any hard conclusions from the (incomplete) transition disc statistics we have today.

We suspect the hot-spot generated by the planet will fade on a time-scale of $\sim 100$ orbits. Thus, if the vortex lifetime is significantly longer than 100 orbits, it is very unlikely that the increase in local disc temperature could be observed directly in the continuum image, as $R_T$ will have contracted to well inside the planet's Hill sphere once the vortex has trapped all the dust particles. However, it is possible that the hot spot would generate some chemical fingerprint that outlasts the hot spot itself, spreads into a ring, and could serve as a signature of this process.

The critical pebble surface density for vortex formation derived in Equation~\ref{eqn:sigma_crit2} is similar to the value required to give a mm-flux at 140~pc of $\sim 30$~mJy \citep[e.g.][]{Andrews2005}. This mm-flux is the value determined by \citet{OC12} to discriminate between mm-bright and mm-faint transition discs. \citet{OC12} and \citet{Owen2016} argued that mm-bright transition discs are likely to be rare and long lived (lifetimes $> 10^{5}-10^{6}$ years), whereas mm-faint transition discs are thought to be common (in the sense that all discs experience this phase) and short lived, with lifetimes $\lesssim 10^{5}$~years. Here we suggest that this critical pebble surface density for vortex formation could explain why mm-bright transition discs are likely long lived while mm-faint transition discs are likely short lived. Since mm-bright transition discs can form vortices, they can prevent their dust from being rapidly turned into planets, producing many cycles of planet formation, vortex formation and subsequent destruction.

Finally, one obvious consequence of this mechanism is the production of low-mass ($\lesssim$ Neptune mass) planets with orbital separations of tens of AU. There is currently limited observational sensitivity to the low-mass exoplanet population at large separations. However, micro-lensing surveys have detected Neptune-mass planets at separations of $\sim$10~AU \citep[e.g.][]{Gaudi2012, Shvartzvald2016} and have suggested that such planets are common \citep[e.g.][]{Gould2006}. The sensitivity of these experiments will only improve with future surveys, hopefully revealing the full mass spectrum and semi-major axis distribution of these systems.

\subsection{Future Directions}

In this work we have mainly argued that transition disc dust traps are likely to be prime sites of planet formation by pebble accretion. The rapid nature of the planetary accretion is likely to have many interesting consequences.
Here we have argued that, by modifying the temperature structure outside the gravitational influence of the planet, the accretion can provide a source of vorticity that allows the growth of large scale vortices. In the simulations performed in this work we have described a highly idealised setup, in which we impose a local hot-spot that is not coupled to the subsequent dynamics. Specifically, we do not attempt to model the coupled evolution of the dust, gas and planetary accretion in a self-consistent way. While the time-scales suggest that large scale vortex formation is likely to occur, the finer details of the results require further investigation. The vortex strength, how the vortex traps dust particles, and when and how accretion onto the planet is shut off once the pebbles are trapped in the vortex, all require coupled dust and gas simulations, possibly including grain growth. Such simulations will be able to investigate the cycle of planet formation, vortex growth, planet migration and dissipation that allows these discs to exist for the $>10^5-10^6$~years that we have postulated above.

Furthermore, in order to isolate the physics of the problem at hand, we have neglected the self-gravity of the disc and the indirect potential, both of which could affect the planetary dynamics and migration. In massive discs, vortex formation could result in the vortices transitioning to global ``fast'' modes \citep{Mittal2015}, similar to the transition of RWI-generated vortices into fast modes discussed by \citet{Zhu16a,Zhu16b}. It may be interesting to investigate the interaction between pebble-accretion-generated vortices and the indirect potential, to see if fast modes can be triggered in this case, even without the original transition disc cavity.

Finally, we note that while we have primarily focused on pebble accretion and vortex generation in transition discs, the mechanism of vorticity generation is not limited specifically to transition discs. Indeed, pebble accretion has been invoked to solve numerous planet formation problems at various locations and rates throughout primordial protoplanetary discs \citep[e.g.][]{Bitsch2015}. In our work, the gas cavity prevents the vortices from rapidly migrating, allowing them to grow to large sizes. In a primordial disc, we speculate that smaller vortices may be generated, which can still trap particles and migrate. By trapping particles, this could affect the pebble accretion rates assumed in such calculations. This may be particularly important at smaller radii, as our critical vortex formation threshold scales as $R^{-5/7}$, whereas disc surface densities are thought to fall more steeply with radius: mm observations of protoplanetary discs suggest an $R^{-1}$ decline \citep{Andrews2009}, and the Minimum Mass Solar Nebula (MMSN) scales as $R^{-3/2}$. Therefore, the prospects for vortex generation in a primordial disc that is forming planets through pebble accretion are certainly worth investigating.

\section{Summary}
\label{sec:summary}

We have argued that the dust traps created within ``transition'' discs can serve as planet incubators, and consequently as vortex generators. If a small ($\gtrsim 10^{-4}$~M$_\oplus$) embryo forms, it will undergo rapid growth through ``pebble accretion''. The dust trap naturally filters the dust particle size distribution, such that most of the dust mass in the trap will be at the preferred size to undergo pebble accretion, namely particles with Stokes numbers $\sim 1$.
The high surface density of pebbles means accretion is extremely rapid, and the embryo can grow to masses $>1$~M$_\oplus$ on short time-scales ($10^4-10^5$~years). Thus, massive transition discs are prime sites for the formation of low-mass ($\lesssim 10$~M$_\oplus$) planets. Depending on the exact frequency of massive transition discs, planet formation in a transition disc dust trap could be a dominant mechanism of low-mass planet formation at large separation.

Furthermore, we argue that the accretion luminosity liberated during the formation of the planet is large enough to heat the surrounding disc well outside the planet's gravitational influence. This makes the disc {\it locally} baroclinic and unstable to vortex formation. By performing numerical simulations we show that these vortices will grow and merge until one large scale vortex is formed in about 100 orbits, but only if the planetary accretion can raise the temperature of the disc outside the planet's Hill sphere. We suggest that this mechanism naturally explains the observed asymmetries in transition discs, as planet formation by rapid pebble accretion is difficult to prevent at the dust densities expected in transition disc dust traps, and thus seemingly inevitable. Furthermore, our mechanism does not require the sharp density contrast needed by the Rossby Wave Instability, which may be difficult to generate in an actual protoplanetary disc.

{ This new mechanism hinges on the production of a low-mass embryo that can start undergoing pebble accretion (once the embryo forms, it is difficult to imagine it could not undergo pebble accretion). We have not attempted to address the issue of the embryo's formation directly here. Several mechanisms do exist which would produce embryos of the correct size (such as coagulation, the streaming instability, direct gravitational collapse, or some combination of them), but they require certain conditions to be met. For example, the streaming instability requires low turbulence and high dust-to-gas ratios. While the pressure traps qualitatively provide many of these special conditions, it is impossible to say without quantitative calculations whether the observed ``transition'' discs satisfy these requirements at all times. }

Finally, we hypothesise a cycle of planet formation, vortex generation, dust trapping and vortex dispersal in which the duty cycle of vortex observability will be high. Rapid planet formation will heat the disc and generate a large scale vortex. This large scale vortex will then trap the dust particles and prevent further pebble accretion and vortex growth. The vortex will then live for some time before being destroyed, releasing all the dust particles into the axisymmetric dust trap and allowing the cycle to restart. Thus, our mechanism does not require a long vortex lifetime to be observable, just a lifetime greater than a few hundred orbits. Calculating this cycle end-to-end will require more detailed, probably coupled, dust and gas simulations; however, it presently offers a promising resolution to a number of outstanding observational and theoretical puzzles.

\section*{Acknowledgements}
The authors are grateful to the referee for advice that improved the manuscript. We are grateful to Richard Booth, Subo Dong, Ruobing Dong, Kaitlin Kratter, Tim Morton, Ruth Murray-Clay, Roman Rafikov, Giovanni Rosotti and Zhaohuan Zhu for interesting discussions.
JEO acknowledges support by NASA through Hubble Fellowship grant HST-HF2-51346.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. JAK gratefully acknowledges support from the Institute for Advanced Study. \input{ref.tex} \bsp \label{lastpage} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Scholars have recently drawn attention to a range of controversial issues posed by the use of computer vision for automatically generating descriptions of people in images. These include the essentializing nature of classification, especially with respect to identity-related concepts like gender and race \cite{bowker2000sorting,scheuerman2019computers}; the presence of derogatory and offensive labels in training datasets \cite{hanley2020ethical,birhane2021large,crawford2019excavating}; and biases in labeling practice that negatively impact marginalized groups \cite{buolamwini2018gender}. Critics also warn of the privacy implications of such tools \cite{keyes2018misgendering,birhane2021large}. Despite these urgent concerns, automated image description has become an important tool to ensure equitable access to information for blind and low vision (BLV) people. For many years, BLV people navigating the web have relied on \textit{alt text}---textual descriptions of visual content, attached to images as an "alt" attribute in HTML code. Alt text allows BLV people to have these descriptions read out to them by a screen reader. Traditionally, alt text---when it is produced at all---has been written manually and voluntarily by the person uploading the image. But in recent years, in the interest of addressing the paucity of user-provided alt text, platforms have turned to computer vision to automate this process, systematically assigning descriptions to images that otherwise might not receive them. A number of companies have embraced computer vision to improve the coverage and quality of alt text, including Google and Microsoft \cite{GoogleAccessibilityTeam, MicrosoftAutoDescriptions}. In 2016, Facebook announced its launch of "automatic alt text" (AAT), a feature that would provide its BLV users with descriptions of every photo on the platform \cite{wu2017automatic}. Creating descriptions of images, however, is not a straightforward task. The process of determining what information to include in a description is both technically difficult and ethically fraught. This is especially so when describing people; characteristics used to describe people's identities change over time, and visual markers of identity are often tied to social constructs with troubling histories. Organizations creating alt text thus face difficult questions: Which features of an image are salient enough to merit description in alt text? How should they be described, and what can (or should) be omitted? What values should inform image description practices? Who benefits and who is harmed by different policies? And how sensitive should these determinations be to different contexts? When it released AAT, Facebook expressed normative concerns about many of these questions, and acknowledged the various trade-offs they entailed and open questions that remained \cite{wu2017automatic}. But Facebook is, of course, not the first organization to be confronted with versions of these questions. Museums have long navigated these tensions in their own practices of describing images in text, and have developed specific principles and guidelines to aid in their determinations, along with explicit justifications for their normative choices. To be sure, technology companies and museums are not completely analogous; the two face very different constraints in approaching the task of image description, and we do not intend to suggest that platforms should (or could) adopt museums' approaches whole cloth.
Yet museums' approaches are still instructive, revealing the fault lines that platforms must navigate in making their own design choices, and offering some strategies from which platforms might learn. In this respect, we situate this paper alongside scholarship like \cite{jo2020lessons}, which compares machine learning dataset collection to that of archives and calls "on the machine learning community to take lessons from other disciplines that have longer histories with similar concerns" \cite{jo2020lessons}. We further aim to put our work into conversation with research by Facebook itself \cite{wu2017automatic} which acknowledges some of these tensions, as well as a line of work by scholars exploring the ethical stakes of image description and access by BLV users \cite{stangl2020person, macleod2017understanding, bennett2019point}. A notable contemporaneous work in this line of scholarship is \cite{bennett2021s}, which explores the preferences of BIPOC, nonbinary, and transgender users of screen readers, with particular focus on how race, gender, and disability are described in alt text. We aim to complement Bennett et al.'s rich qualitative work from users' perspectives with our focus on comparative organizational practices. We proceed as follows. We begin by reviewing the scholarly research on the normative dimensions of computer vision, image description, and accessibility for BLV users. We then present a study of the policies and normative justifications that two different types of organizations have invoked in deciding how to describe images---especially images of people---in text. First, we analyze the policies that Facebook adopts with respect to race, gender, age, and physical characteristics and the company's decisions around whether to include or omit descriptions of this type from alt text. We then present an alternative---and manual---approach practiced in the museum community, focusing on how museums determine what to include in descriptions when the primary goal is to serve BLV people. We compare the similarities and differences between the policies adopted by Facebook and museums---and the expressed reasoning behind these policies. \section{Related Work} \subsection{The production and prevalence of alt text} Alt text online is infrequently available and of poor quality \cite{guinness2018caption, gleason2019s}. Responding to this problem, researchers have developed tools, strategies, and applications to increase its availability at scale. Some techniques to increase the quality and quantity of alt text are semi-automated; previous work has explored how to draw on crowd workers \cite{salisbury2017toward}, friends, and volunteers \cite{brady2015gauging} to provide image descriptions. These semi-automated systems have limitations. Highlighting the length of time it takes to provide descriptions to BLV people, for instance, researchers have noted that VizWiz, a crowdsourcing tool, takes more than two minutes to provide a description of a single image \cite{bigham2010vizwiz}. Researchers also report that relying on others can feel like a social burden \cite{rzeszotarski2014estimating} and describe the difficulty of implementing semi-automated systems at scale \cite{huang2016there}. Recent work has focused on addressing these issues by creating more fully automated systems. These systems rely primarily on machine learning, computer vision, and natural language processing techniques. 
Researchers have advanced the general techniques of object recognition for automatic caption generation \cite{fang2015captions,karpathy2015deep,tran2016rich}. Recently, these have been applied specifically to the production of alt text, as in the case of Facebook's AAT, which makes descriptions instantly available for all photos on its platform \cite{wu2017automatic}. Several companies have developed publicly available computer vision APIs that include object recognition and captioning, including Google Cloud, Microsoft Azure, and Amazon Rekognition \cite{GoogleCloudAPI, MicrosoftAzure, AmazonRekognition}. \subsection{What users want from alt text} Previous work has established that BLV people are interested in being presented with information about people in images \cite{zhao2017understanding} and that preferences about the type, quantity, and conditions under which information is presented vary across contexts. Stangl et al. \cite{stangl2020person} explore the gap between what is typically offered in alt text and the stated preferences of BLV users. They find that BLV users want to be presented with detailed information about people in images but that preferences vary across contexts (e.g., dating websites, social media, news sources, etc.). On social networking sites, users want information such as the image subjects' gender, race, and physical appearance \cite{stangl2020person}. In other work, BLV users express that they feel it is more appropriate to describe people with concrete, visual details than with identity categories, although they qualify that in some contexts this information might be worth including, such as when users post photos in which their race, gender, or disability is central to the photos' meaning \cite{bennett2021s}. Researchers have also found that BLV people want contextual information about people while navigating through public space; Branham et al. \cite{branham2017someone} find that BLV users would like to be able to identify people they know, as well as learn people's physical attributes and demographic information. A small percentage of respondents said they wanted information that could allow them to meet strangers in emergency situations or find attractive strangers. This work affirms research indicating that preferences for information about people, including attributes like race and gender, vary based on the context in which the image is presented \cite{petrie2005describing,zhao2017understanding, bennett2021s, stangl2020person}. \subsection{Normative implications of computer vision} A rich line of scholarship has explored the social consequences of classification, an act which is inescapably political; categories map insufficiently onto the complexity of human experience and impose an external (and often rigid) viewpoint on who or what "counts" and who is sufficiently "like" someone else to be grouped together in a taxonomy \cite{bowker2000sorting, crawford2019excavating}. The acts of translation and categorization inherent to computer vision are likewise necessarily reductive and discretizing; they inevitably do some violence both to the richness and depth of visual media and to the indeterminacy of identity \cite{hoffmann2020terms}. Classification of people at various stages of the pipeline leads to many types of normative concerns \cite{hanley2020ethical,kazimzade2020biased,yang2020towards}.
The process of labeling images of people is inherently subjective \cite{miceli2020between, van2018talking, otterbacher2018social} and biased along dimensions of race and gender with respect to both the image subject \cite{van2016stereotyping, barlas2019social, kyriakou2019fairness} and the labeler \cite{otterbacher2019we}. Unique challenges surface when the process of describing images---particularly images of people---is automated via computer vision. Prior work demonstrates that computer vision disproportionately misidentifies or fails to identify members of marginalized communities, such as people of color \cite{buolamwini2018gender} or non-binary people \cite{scheuerman2019computers,keyes2018misgendering}, and reduces gender to a stable, binary classification system \cite{hamidi2018gender, scheuerman2019computers}. Other scholars have remarked on the dangers of using computer vision to draw inferences about personal characteristics that are not visually evident \cite{yang2020towards, van2016stereotyping}, including sexual orientation and criminality \cite{wang2018deep,hashemi2020criminal}. The output of computer vision models can reinforce harmful stereotypes \cite{barocas2017problem}, subject people to offensive description \cite{crawford2019excavating,birhane2021large}, and otherwise raise serious questions about representational justice \cite{hoffmann2018data}. These areas of research demonstrate how organizations implementing automated alt text must balance multiple interests. On one hand, organizations may seek to improve accessibility by providing the rich information desired by BLV users; on the other, they may want to do so in a way that attends to concerns about the dangers of using computer vision for describing people. In what follows, we consider how these tensions are instantiated in the alt text practices and policies developed by two very different types of organizations. \section{Methods} To illustrate how these tensions are confronted in practice, we analyze policies related to describing people in alt text in two types of organizations: Facebook's AAT system and museums' guidelines for composing alternative descriptions of images in their collections. We focus on the Museum of Contemporary Art in Chicago (MCA) and Cooper Hewitt Smithsonian Design Museum in New York City, each of which has played a prominent role in developing alt text guidelines in the museum community. We describe the policies each organization has adopted for producing alt text, then interrogate and compare the normative reasoning underlying these design choices, to the extent we can derive it from organizations' explanations of their goals and rationales. Facebook does not publish its policies about how it describes images of people in AAT; museums make their guidelines publicly available. Thus, our analysis involved first identifying all available information related to Facebook's policies for describing images of people with AAT and, where available, its stated rationales for these policies. We conducted a systematic review of public sources describing Facebook's AAT tool, including Facebook's published research paper on the topic \cite{wu2017automatic}, the company website and blog posts, and broader media coverage. Our search involved querying Google News and Factiva for articles including the terms "Facebook" and either "automated alt text" or "automated alternative text," as well as site-specific Google searches of Facebook's website and blog posts.
We read each result and noted any mention of Facebook's policies or rationale around describing images of people with AAT. Altogether we reviewed 539 responsive results; though the vast majority of these were duplicative, this review gives us reasonable confidence that we have comprehensively surveyed the landscape and located Facebook's public statements about AAT. Importantly, our analysis here focuses on publicly available sources. We did not formally audit Facebook's AAT system, nor did we secure "insider" knowledge about how its alt text practices were determined, how they are put into practice, or the rationales justifying those choices. In both cases, our analysis centers on stated policies rather than enacted practices, since our primary interest is in organizations' expressed normative reasoning (rather than specific workings of the systems in practice). \section{Automatic Alt Text at Facebook} Facebook introduced AAT in 2016, with the goal of ensuring that BLV users could "get the same enjoyment and benefit out of the platform as people who can [see photos]" \cite{wu2017automatic,VergeFacebook}. In its previous research, the company found that BLV users felt frustrated or isolated by their experience engaging with images on the platform \cite{zhao2017effect,FbookEngBlog}. While algorithmically generating alt text for every user-uploaded image on the platform posed an extremely difficult technical challenge, the accessibility team pursued the effort as one core to the company's mission, asserting that accessibility is a requirement of "connecting the world" \cite{Wapo} and emphasizing the team's ethos that what "visually appears for sighted people gets translated well into something that's non-visual" \cite{TechCrunchFacebook}. While media coverage of AAT was positive, highlighting Facebook's accessibility team's promotion of inclusivity and equity \cite{VergeFacebook,Engadget,Wapo}, feedback from the BLV community was mixed, largely due to the sparsity of detail in image descriptions. Accessibility scholar and advocate Chancey Fleet has noted that Facebook's AAT is "famously useless in the blind community" despite "garner[ing] a ton of glowing reviews from mainstream outlets" \cite{Chancey}. There was also criticism from the broader public, reflecting expectations that Facebook should provide richer descriptions. A platform-wide outage on July 3, 2019, exposed AAT's shortcomings to sighted users who had not previously encountered the feature \cite{VergeOutage}. During the outage, Facebook users could access the platform but could not upload or see images. Instead, they saw blank white boxes with small, blue alt text describing the images that were not displayed. This led to a flurry of posts on Twitter in which Facebook users commented on the poor quality of the AAT, noting its vagueness and questioning whether it constituted an "inclusive experience" \cite{annatweet,MykalTweet}. In his 2017 exhibit "Image May Contain," the artist-researcher Lior Zalmanson highlighted the tool's shortcomings by illustrating how AAT describes famous photographs of historical significance, showing how it "flattens [images'] meaning by taking them out of their social context" \cite{LiorTheFix}. For example, the image of President John F. Kennedy "driving through the streets of Dallas in a Lincoln Continental just moments before he was assassinated" was translated by AAT to "Ten people, car." Reflecting on the exhibit, Fleet noted how the descriptions are "stripped of valence" \cite{TWIMLAI}.
The criticisms can be attributed, in part, to technical limitations, recognized by Facebook itself, such as the choice to restrict alt text to tags rather than generating rich and fluid captions, as the system is "limited by the accuracy and consistency of existing natural language caption generation systems" \cite{wu2017automatic}. Furthermore, Facebook notes the challenge of applying existing algorithms for caption generation to AAT's design and implementation; these models, which were "designed and evaluated for sighted people" \cite{wu2017automatic}, tend to identify and describe objects more relevant for sighted users than BLV users. Ultimately, some of the public's criticism---which highlighted how flat, brittle, and unnatural the descriptions read---can be explained by these technical challenges. Other aspects of Facebook's AAT design, however, stem from the company's policy decisions dictating what descriptions should contain. These policy decisions are distinct from the system's technical constraints: even if a company could theoretically generate long, vivid automatic descriptions, it would nonetheless face the question of what concepts should be described, and how. Such decisions are inescapable: other companies with comparable object detection services apply policies of their own around which identity attributes they will return in image description and alt text. In 2020, Google adjusted its Cloud Vision API so that it would no longer return tags for gender, such as `man' or `woman,' but would instead return "non-gendered label[s] such as 'person' […] given that a person's gender cannot be inferred by appearance" \cite{GoogleGender}. Microsoft, which offers image captioning services to "improve accessibility" through Azure AI, appears to include gender in some instances and not others \cite{MicrosoftAzure}. As of April 2021, Amazon Rekognition makes "gender binary (male/female) predictions based on the physical appearance of a face in a particular image" \cite{AmazonRekognition}. \subsection{Facebook's AAT policies} The primary means by which Facebook operationalizes its AAT policies is a blocklist that prevents alt text from including information about certain "sensitive" categories. Facebook applies this blocklist to a set of pre-defined prominent and frequent concepts. In addition, Facebook employs an "accuracy threshold" below which it will not return information in certain categories. These processes, detailed in \cite{wu2017automatic}, are elaborated below. Acknowledging that there is an "infinite number of details in a given photo," Facebook first had 30 human annotators review a subset of 200,000 randomly selected photos and then select three to ten "prominent" concepts per image. It then sorted the concepts by frequency and chose the top 200 as "concept candidates." It then filtered those concept candidates to exclude those that "could have a disputed definition or that were hard to define visually," like "concepts associated with personal identity," including gender-related concepts (to which we return below); "fuzzy and context-dependent" adjectives like `young' and `happy'; and concepts that are difficult for an algorithm to learn, like `landmark' \cite{wu2017automatic}. Applying this blocklist to the list of 200 concept candidates resulted in the removal of 103 concept candidates, leaving 97 concepts total. Importantly, this list purposefully excludes concepts not because they were not prominent or frequent, but due to other considerations.
For example, the company considers gender an identity attribute and will categorically omit it from AAT descriptions. While the company does not outline explicit policies for describing age or physical appearance (other than that `young' is an excluded "fuzzy and context-dependent" adjective), final concepts do include age-related words like `baby' and `child' and concepts which could be considered gender proxies, like `beard' and `jewelry' \cite{FbookEngBlog}. In addition to categorical omission of some sensitive attributes via the blocklist, Facebook adopts a policy regarding the confidence threshold for predicting the concept accurately, which further distinguishes concepts by sensitivity. The company states that the algorithm must be at least approximately 80\% confident about its classification before it would display a concept in AAT \cite{wu2017automatic}. When it isn't confident, "Facebook simply won't suggest a description"; as the project lead notes, "[i]n some cases, no data is better than bad data" \cite{VergeFacebook}. In its research paper, the company does not mention its policies on ethnicity or race. However, in other sources, a Facebook representative has stated that AAT might return race if the model had sufficient confidence: "in sensitive cases---including ones involving race, the company […] will require a much higher level of confidence before offering a suggestion" \cite{VergeFacebook}. Facebook's research paper notes that the requisite confidence level is set as high as 98\% for "a few more sensitive concepts" \cite{wu2017automatic}. In January 2021, Facebook announced that it was increasing the number of concepts that it includes in AAT from 200 to 1200, but the company has not commented on any changes in its policies around blocklisted or sensitive concepts, like gender or race. Facebook also announced that its new model is trained to perform consistently across the dimensions of skin tone, age, and gender \cite{facebookai2021}. In its 2017 research paper, Facebook stated that at that time it chose not to implement facial recognition until it could responsibly assess the privacy implications of doing so, despite requests for such a feature from research participants \cite{wu2017automatic}. Since then, the company has introduced the tool across the platform as an opt-in feature, and incorporated it into the AAT functionality \cite{facebookai2021}.
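Although Facebook has not published the implementation of this pipeline, the publicly described logic (a categorical blocklist plus tiered confidence thresholds) can be summarized in a short illustrative sketch. Everything in the sketch other than the two threshold values reported in the sources above is our assumption: the function and variable names are hypothetical, and the concept lists are stand-ins rather than Facebook's actual lists.

\begin{verbatim}
# Illustrative sketch of a blocklist-plus-thresholds alt text policy.
# Not Facebook's code: names and concept lists are hypothetical; only
# the two threshold values come from the public statements cited above.

DEFAULT_THRESHOLD = 0.80    # baseline confidence ("80 percent confident")
SENSITIVE_THRESHOLD = 0.98  # "as high as 98%" for more sensitive concepts

BLOCKLIST = {"woman", "man", "girl", "boy", "young"}  # categorically omitted
SENSITIVE = {"hypothetical_race_concept"}             # held to the higher bar

def compose_alt_text(predictions):
    """predictions: iterable of (concept, confidence) pairs from a model."""
    shown = []
    for concept, confidence in predictions:
        if concept in BLOCKLIST:
            continue  # omitted regardless of model confidence
        bar = SENSITIVE_THRESHOLD if concept in SENSITIVE else DEFAULT_THRESHOLD
        if confidence >= bar:
            shown.append(concept)
    if not shown:
        return ""  # "no data is better than bad data"
    return "Image may contain: " + ", ".join(shown)  # hedged prefix

# Example: the gender tag is dropped outright; low-confidence tags withheld.
print(compose_alt_text([("tree", 0.91), ("woman", 0.99), ("smiling", 0.79)]))
# -> Image may contain: tree
\end{verbatim}

The sketch makes the structure of the policy explicit: the blocklist encodes a categorical normative judgment that is insensitive to model confidence, while the thresholds encode a judgment about tolerable error rates. The two are independent levers, a distinction that matters for the analysis that follows. \subsection{Normative justifications} In its research paper and public statements, Facebook explicitly acknowledges a set of normative concerns and considerations that reflect the interests of various parties---including the subjects depicted in the image, the BLV user receiving the image description, and the person who uploaded the image \cite{facebookai2021, wu2017automatic, VergeFacebook}. Regarding image subjects, Facebook expresses concern around offensive mistagging or essentialization. For example, Facebook's paper notes that while participants in user studies desired more detailed descriptions of image subjects, Facebook was hesitant to do so because of "the consequence[s] [of] wrong or potentially offensive tags" \cite{wu2017automatic}. By categorically omitting gender because "gender identification is more complex, personal, and culture-dependent than what may appear visually on photos," the authors express concern about mistagging or essentializing image subjects.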
The authors acknowledge that in addition to inherently offensive concepts, there are also concepts which may become offensive when assigned to an object; they use the example of tagging a `person' as a `horse.' The privacy interests of image subjects are also implicated in Facebook's expression of hesitation regarding the integration of face recognition into the AAT tool. The authors frame this concern as a tradeoff against BLV users' desire for more informative alt text. As to BLV users, the authors highlight concerns around providing incorrect or possibly misleading information about an image, impeding effective social interaction or even resulting in BLV users being led to believe something embarrassing or offensive about the people depicted \cite{wu2017automatic}. The authors frame the latter problem as a "value/risk" tradeoff, expressing the concern that the cost of "algorithmic failure" (i.e., inaccurate tags) is uniquely high in the case of alt text. BLV people cannot visually assess whether a description is inaccurate, and the misunderstanding could lead a BLV person to make "inappropriate comments" about other users' photos \cite{wu2017automatic}. The authors specifically refer to the now-infamous situation in which Google's image tagging feature misidentified Black people as `gorillas' \cite{Googleapologises}, and express concern about the prospect of misleading BLV users with such an error; they cite this concern as justification for the high levels of confidence required before displaying certain image tags. Finally, Facebook expresses some concerns around the interests of the person who uploaded the image to which AAT is being applied (whom it refers to as the "photo owner"). In discussing the possibility of giving image uploaders the capacity to review and verify possible descriptions before making them available via AAT, Facebook expresses concern that doing so may "create a feeling of social debt for blind users" and that such a process could result in "a significant amount of work" for image owners, given that many photos will not be consumed by AAT users \cite{wu2017automatic}. Facebook also expresses concern about whether applying alt text to an image uploader's photo could undermine their agency or "creative ownership" over an image. \section{Manual Practices in Museums} In light of the challenges Facebook encounters in its policy choices around AAT, we consider a separate context with a well-developed tradition of attaching text descriptions to images: museums. Museum practices offer a distinctive perspective on how to implement appropriate alt text policies, and provide an important point of comparison for Facebook's practices. We reviewed the image description guidelines for two museums, MCA and Cooper Hewitt, each of which has garnered attention for its alt text practices. As an initial matter, it is notable that museums may offer multiple versions of textual descriptions for a single image; Cooper Hewitt offers both short and long descriptions for orienting BLV people to exhibits and artworks. While screen readers default to providing the short description, guests can opt into being read a longer and more detailed description \cite{Artsy}. Here, we consider both types. An explicitly articulated set of normative values underpins museums' image description practices.
Cooper Hewitt presents a guiding philosophy for how it approaches composing descriptions of images which include people: \begin{quote} ``One should identify the visual appearance of persons, especially when it is important to the understanding of the content of an image. Instincts for political correctness tend to inadvertently result in redacting information; it is a general rule that if you are able to see something it should be shared with those who cannot see it'' \cite{CHGuidelines}. \end{quote} \noindent Cooper Hewitt's guidelines recommend that all visual information should be available to BLV people in text descriptions. This principle is one often shared by individuals within the accessibility community. Sina Bahram, an accessibility consultant who advised both MCA and Cooper Hewitt on their guidelines, emphasizes the importance of rendering visual information explicit in certain contexts: \begin{quote} ``When I think of a picture or a photo of somebody using a wheelchair, I'm not necessarily interested in identifying disability. It would be, however, critical, in certain contexts to know that that person is using a wheelchair and in other contexts it may not be of interest, but I don't want to err on the side of, `Oh, we shouldn't talk about this because it's difficult.' Because I think that leads us to implicit censorship, and that's not what we want to do'' \cite{TWIMLAI}. \end{quote} Museums codify this philosophy in guidelines that offer recommendations for describing people in alt text. Both museums suggest describing an individual as a `person' without attributed gender, unless gender is "clearly evident and verifiable" (MCA) \cite{MCAGuidelines} or "clearly performed and/or verifiable" (Cooper Hewitt) \cite{CHGuidelines}. The "or" in Cooper Hewitt's guidelines would seem to allow for the use of gender even when not verifiable, so long as it is "clearly performed"---and this is well reflected in the examples included in the guidelines, which use gender throughout. Similarly, both museums recommend avoiding any racial or ethnic terms unless race or ethnicity is "obvious and known"; in keeping with the guidance to rely on visual information, they instead recommend describing skin tone. Cooper Hewitt's guidelines note that "where skin tone is obvious, one can use more specific terms such as black and white, or where known and verified, ethnic identity can be included with the visual information: Asian, African, Latinx/o/a (also see gender), etc."~\cite{CHGuidelines}. Bahram elaborates: ``I think that it is important to identify what is visually apparent in images when we're working with institutions on describing photos in the art context. We describe skin color, but we don't describe ethnicity because one can be seen, and the other one is inferred"~\cite{TWIMLAI}. He emphasizes this critical distinction between what information "can be seen" and what cannot; skin color is visually perceived and therefore should be described, whereas a person's ethnicity, unless clearly represented through some other contextual cue, is not purely visual information and is therefore not amenable to description. While museums recommend explicit guidelines for composing image description, they also acknowledge that these guidelines are a "living document" and must be constantly revisited to "engage with contemporary dialogues" \cite{CHGuidelines}.
They emphasize the importance of "regularly review[ing] and updat[ing] these guidelines and glossaries of terms to sensitively describe people without implications of judgement" noting that "image description inherently intersects with questions of race, gender, and identity" \cite{CHGuidelines}. \section{Comparing Policies} There are a number of similarities and differences to note about the policies adopted by Facebook and museums---and some of the expressed reasoning behind these policies. We summarize these in Table \ref{tab:commands} and highlight key points below. Facebook and museums both appear willing to use race tags under certain circumstances, but rely on different justifications for doing so. Media coverage suggests that Facebook would tag race if its model is sufficiently confident, with the confidence threshold set especially high \cite{VergeFacebook}. In contrast, museums seem to endorse its use only when it is known or verified. Museums are more permissive when it comes to skin tone, however: when it is clearly visible, they suggest that it should be noted in alt text. Nearly all of the alt text for example images in Cooper Hewitt's guidelines includes details about skin tone. Facebook, to our knowledge, has made no public statements about its willingness to describe skin tone. Facebook and museums depart substantially when it comes to their policies on gender description. Facebook categorically refuses to use gender terms. Museums, in contrast, endorse their use under certain conditions: Cooper Hewitt notes that gender can be named if "clearly performed and/or verifiable," while MCA only allows its use if "evident and verifiable." Museums seem to embrace the use of age descriptions with no obvious restrictions, while Facebook purposefully removed `young' from AAT's set of available concepts, suggesting that it hesitates to infer people's age. Facebook does use certain concepts that still indicate age (e.g., `child' and `baby'); notably, unlike `young,' these do not seem to be filtered out as part of its process of removing tags with ``disputed definition or that [are] hard to define visually'' \cite{wu2017automatic}. Cooper Hewitt is unique in suggesting that alt text should describe apparent physical disabilities. Note, however, that in its examples, Cooper Hewitt's alt text does not use the term `disabled' or name a specific physical disability; the examples instead mention the appearance of a wheelchair. It is thus unclear whether Cooper Hewitt endorses using the language of disability or prefers that alt text only describe observable properties like assistive devices. The MCA guidelines do not remark on disability, nor does Facebook. Both Facebook and museums, however, seem committed to describing physical features of particular importance, including those that might serve as the basis for inferences about identity categories like gender and race. For example, Facebook gives `beard' and `jewelry' as examples of concepts included in AAT; the company has also suggested that it would like to include attributes like hair color \cite{Engadget, TechCrunchFacebook, MashableFacebook}. As mentioned, both museum guidelines advocate in favor of describing skin tone, which can clearly serve as the basis for inference about race; MCA's guidelines even mention hair color. Finally, while both museums advocate in favor of identifying recognizable (i.e., public, historical, etc.) 
figures by name, Cooper Hewitt says that doing so does not obviate the need to also describe the person's physical features. Its example of alt text appropriate for a painting of Michelle Obama names her, but also mentions her skin tone. Facebook originally hesitated to use facial recognition to identify people by name---either when describing users' friends or even public figures. Matt King, an Accessibility Specialist at Facebook, even expressed frustration that "[i]f everyone else can see that's a picture of Donald Trump, I should be able to know that too" \cite{Wapo}. The company has since changed its policy to allow for the use of facial recognition if users that appear in photos have opted into the feature. It is unclear, however, whether Facebook reports any other details about people if they have been named. \section{Axes of difference} \subsection{Manual versus automatic alt text} Can the differences in Facebook's and museums' policies simply be explained by the fact that one relies on manual annotations by humans and the other on automatic descriptions based on computer vision---under the assumption that the normative implications of each technique are so distinct as to demand different policies? Humans can be more discerning than computer vision models---considering a wider range of details in an image, taking context into greater account, or relying on other expert knowledge. Humans have the capacity to reflect critically on their assumptions, to recognize their own uncertainty, and to decide when to seek out more information to make an accurate determination. They may also have the means to do so, perhaps through contact with the person whose identity is in question or by locating sources in which the person has shared details about their identity. Humans will be aware of the normative implications of making certain kinds of mistakes (e.g., describing a person as an animal). And humans can even decide to abstain from offering descriptions given their uncertainty, or decide to communicate that uncertainty in the descriptions themselves---something that might be especially important when further information is unavailable. In practice, however, both manual and automatic approaches might still result in people being described in terms not of their own choosing. Museums could just as easily subject a person to gender or racial classification as Facebook's AAT---and indeed examples of alt text for many images in the museum guidelines suggest that they are willing to do so in some circumstances. That museums might be more accurate or more circumspect in describing the gender or race of a person than Facebook does not sidestep the normative objections to engaging in gender and racial classification in the first place \cite{keyes2018misgendering,hamidi2018gender}. Notably, many of the findings from Bennett et al.'s interviews that express concerns about ascribing characteristics to people were not limited to automated alt text; they applied equally well to manual alt text \cite{bennett2021s}. Furthermore, humility---in the form of recognizing and communicating uncertainty---is not the preserve of human annotators. Computer vision models can be designed to quantify the certainty of their inferences. As mentioned earlier, Facebook spent a good deal of time using such quantifications of uncertainty to decide what should serve as a minimum threshold of confidence to report certain inferences. 
In fact, Facebook even experimented with reporting uncertainty along with each tag in the alt text, but its study revealed that users found such information "cumbersome and hard to interpret," and thus the company decided to omit it \cite{wu2017automatic}. The company instead starts each alt text with "Image may contain" to stress uncertainty generally \cite{FbookEngBlog,wu2017automatic}. Thus, the differences in Facebook and museums' policies are not entirely explained by the fact that the former relies on automatic descriptions, while the latter relies on manual descriptions. \subsection{Social media versus museums} Social networking sites and museums serve very different functions in society. Perhaps it is unsurprising that the normative issues at stake when producing alt text are also very different across these organizations---and that their policy positions differ accordingly. Online social networks are places where people go to connect with others, but also cultivate their identity and express themselves \cite{bennett2021s}. Museums are cultural institutions that aim to curate important artistic contributions or cultural artifacts, make them available to broad audiences, and provide some interpretation and context to help patrons appreciate their meaning and significance. Facebook may naturally want to stake out a policy position that reinforces the view that it is merely a neutral conduit for social interactions among peers. In contrast, the very nature of museums is that they choose where to direct patrons' attention and that they play an active role in shaping the interpretation of displayed works. Even though the Cooper Hewitt guidelines caution that "[b]ringing interpretive knowledge to a description is not always preferred" when crafting alt text---encouraging annotators to focus on what is visually evident and to defer interpretation to patrons---the museum still allows that there may be circumstances "when interpretive knowledge aids visual understanding of the content [and that] it should be incorporated" \cite{CHGuidelines}. No such imperative seems to exist at Facebook. \begin{table*} {\footnotesize \centering \caption{Policies for Describing People by Organization} \label{tab:commands} \resizebox{0.99\textwidth}{!}{% \begin{tabular}{p{0.08\linewidth}p{0.22\linewidth}p{0.33\linewidth}p{0.30\linewidth}} \toprule \multicolumn{1}{c}{\hspace{1cm}} & \multicolumn{1}{c}{\textbf{Facebook}} & \multicolumn{1}{c}{\textbf{Cooper Hewitt}} & \multicolumn{1}{c}{\textbf{Museum of Contemporary Art Chicago}} \\ \midrule \\ \textbf{Race/\hspace{1cm} Ethnicity/\hspace{1cm} Skin Tone} & ``By default, Facebook will only suggest a tag for a photo if it is 80 percent confident that it knows what it's looking at. But in sensitive cases---including ones involving race […] it will require a much higher level of confidence before offering a suggestion. When it isn't confident, Facebook simply won't suggest a description."~\cite{VergeFacebook} & ``When describing the skin tone of a person use non-ethnic terms such as ``light-skinned'' or ``dark-skinned'' when clearly visible. Because of its widespread use, we recommend the emoji terms for skin tone as follows: \includegraphics[height=1.1em]{light.png} Light Skin Tone, \includegraphics[height=1.1em]{Medium-Light.png} Medium-Light Skin Tone, \includegraphics[height=1.1em]{medium.png} Medium Skin Tone, \includegraphics[height=1.1em]{Medium-Dark.png} Medium-Dark Skin Tone, \includegraphics[height=1.1em]{dark.png} Dark Skin Tone.
Also, where skin tone is obvious, one can use more specific terms such as black and white, or where known and verified, ethnic identity can be included with the visual information: Asian, African, Latinx/o/a (also see gender), etc.''~\cite{CHGuidelines} & ``Demographic: race. This is in development, but for the time being identify clearly visible visual appearance when it is important to the understanding of the content. Default to ``light-skinned'' and ``dark-skinned,'' when clearly visible. Where obvious and known, use more definite terms; e.g. black, Latino, Asian, etc.'' \cite{MCAGuidelines} \\ \\ \hline \\ \textbf{Gender} & ``[W]e decided to leave out gender-related concepts such as woman/man, girl/boy, as gender identification is more complex, personal, and culture-dependent than what may appear visually on photos.'' \cite{wu2017automatic} & ``No assumptions should be made about the gender of a person represented. Although, where gender is clearly performed and/or verifiable, it should be described. When unknown, a person should be described using ``they, them'' and ``person'' and their physicality expressed through the description of their features, which inadvertently tend to indicate masculine or feminine characteristics. The use of masculine and feminine are problematic and should be avoided unless necessary for describing the performance of gender.'' \cite{CHGuidelines} & ``Where necessary for understanding content gender may be described, but no assumptions should be made. Our default should be ``person'' except where gender is clearly evident and verifiable.'' \cite{MCAGuidelines} \\ \\ \hline \\ \textbf{Age} & ``We ended up with a list of 97 concepts […] including people (e.g., people count, smiling, child, baby)'' \cite{wu2017automatic} & ``Describe the age of represented people in an image using terminology such as baby, toddler, child, youth, teen, young person, adult, older person.'' \cite{CHGuidelines} & ``Use terms that indicate age: baby, toddler, child, youth, teen, young adult, adult, older person.'' \cite{MCAGuidelines} \\ \\ \hline \\ \textbf{Disability} & & ``Not only […] prominent features or physical stature, but also physical disabilities [should be described].'' \cite{CHGuidelines} & \\ \\ \hline \\ \textbf{Physical Features} & ``The current list of concepts covers a wide range of things that can appear in photos, such as people's appearance (e.g., baby, eyeglasses, beard, smiling, jewelry)'' \cite{FbookEngBlog} & ``When particular features are immediately noticeable, or mutually agreed upon salient features of a known person are visually present, they should be described.'' \cite{CHGuidelines} & ``need to create reference list for […] hair color'' \cite{MCAGuidelines} \\ \\ \hline \\ \textbf{Identity} & ``And since people who use Facebook often share photos of friends and family, our AAT descriptions used facial recognition models that identified people (as long as those people gave explicit opt-in consent).''~\cite{facebookai2021} & ``When describing an image of a recognizable person, identify them by name, but also describe their physical attributes. If an individual is not a public figure, and the context does not imply the importance of who is represented, it may not be appropriate to identify the individual.'' \cite{CHGuidelines} & ``Feel free to identify clearly recognizable figures, e.g.
Jesus, Bozo the Clown, Madonna, Anne Kaplan, and Sammy Davis Jr.'' \cite{MCAGuidelines} \\ \\ \bottomrule \end{tabular}}} \end{table*} The structural positions of stakeholders also differ across these organizations. On Facebook, users are often both the producers and consumers of visual media---and thus hyper-attuned to presentations of the self and others; in museums, patrons are almost always only consumers of the displayed works. The purpose of alt text in these contexts is thus very different: in one, to facilitate social interaction among people in structurally similar positions; in the other, to facilitate lay appreciation of an expertly curated set of cultural artifacts. If patrons' \textit{own} identities are at stake in the alt text generated in museums, it is only by virtue of how museums describe \textit{others} that belong to the groups with whom patrons may identify. Though museums are surely attuned to the concerns and interests of artists and the creators of other artifacts---as well as the possibly real people represented in these works---such concerns are different than those museums have for their patrons. There are also many \textit{practical} reasons why Facebook and museums have adopted quite different practices and why these might account for notable policy differences. Facebook may feel compelled to adopt automated methods, given the scale of the task, but also be more conservative in its use of identity terms, given that computer vision models will likely perform less well than humans and fail to engage in the types of deliberation described above. Facebook has committed itself to generating alt text for all images on its platforms, the scale of which is far larger than any collection that a museum might attempt to annotate. Three hundred and fifty million images are uploaded to Facebook each day \cite{FacebookStats}; the MCA's collection stands at 2,500 pieces \cite{Collection}. Facebook generates alt text for visual content as soon as it has been uploaded to the site; museums have the comparative luxury of taking some time to catalog a new piece in their collection, including adding alt text before making the piece publicly available. For Facebook, manually producing alt text at this scale and speed is likely infeasible---and certainly would be if the goal were to produce alt text of the type produced by museums. \section{Sources of uncertainty} The goal of comparing Facebook and museums' policies is not to suggest that they should be identical, or that one is clearly preferable to the other. There are many good reasons why they differ. Yet not all of these differences can be explained away by the fact that they adopt different techniques (one manual, the other automatic) to generate alt text or that they serve very different social functions. In drawing out these differences, we have tried to throw into greater relief the range of choices and normative considerations that go into alt text, whether explicitly or perhaps unwittingly. In what follows, we offer a deeper analysis of the normative reasoning that might---and should---undergird these choices. As we have described, Facebook's statements evince a range of justifications for its decision-making around tagging identity categories in AAT. 
Several related, but analytically separable, sources of potential uncertainty emerge, which we might group into three categories: (a) uncertainties related to technical accuracy; (b) uncertainties related to the ontology and epistemology of social categories; and (c) uncertainties related to social context and salience. \textbf{Limits on technical accuracy.} The first source of uncertainty relates to limitations on computer vision's ability to reliably classify images. A good deal of Facebook's discussion of identity categories invokes technical thresholds that must be met before it may display certain identity-related terms. As described, heightened confidence levels---up to 98\%---are required for particularly ``sensitive'' concepts, seemingly including race \cite{VergeFacebook, wu2017automatic}. Through these restrictions, Facebook imposes a requisite level of confidence that its algorithm is \textit{accurately} classifying images according to technical criteria. The heightened accuracy required for certain identity-related concepts suggests that Facebook recognizes that erroneous tags around these concepts can cause particular harm: it may be more troubling to have a sensitive personal characteristic mistagged than, for example, to have Facebook mistake an apple for a pear. This concern is further reflected in Facebook's discussion of the relative risks of returning no tags for an image versus returning inaccurate tags for an image (balancing precision and recall): as Facebook notes, "in some cases, no data is better than bad data" \cite{VergeFacebook}. Ensuring a higher degree of technical accuracy, then, is viewed as a partial means of mitigating the potential harms of AAT. \textbf{Ontological and epistemological limits of social categories.} A quite different source of uncertainty relates to the nature of the characteristics that comprise social identity. Rather than being stable, discrete categories with fixed meanings---as they are often treated in computer vision models---race, gender, disability, and other identity categories are complex social constructs, the meanings of which are fluid and contextual. As many scholars have noted, identity categories are unstable, historically and politically contingent, and premised on social hierarchies; they cannot be merely affixed to individuals as stable biophysical attributes \cite{benthall2019racial, scheuerman2020we, whittaker2019disability}. A related concern is epistemological. Even if we \textit{were} to treat race, gender, age, disability, or other identity categories as fixed categories, we would still be limited in what can be learned \textit{visually} about these attributes. While certain markers of gender and race may be expressed visually (as Benthall and Haynes describe it, the category of race is partially based on "the assigning of social meanings to … visible features of the body" \cite{benthall2019racial}), they encompass a much broader range of nonvisible dimensions, and cannot be reduced to externally visible phenotypic characteristics. Many disabilities are "invisible"---that is, not readily apparent from physical appearance---which can itself impose particular burdens of recognition, legitimation, and access \cite{davis2005invisible}. 
As Kate Crawford and Trevor Paglen write, computer vision tends to rely on the assumption that concepts are not only themselves "fixed, universal, and have some sort of […] grounding and internal consistency" but also that there are "fixed and universal correspondences between images and concepts" such that "the underlying essence [of a concept] expresses itself visually" \cite{crawford2019excavating}. Both of these assumptions often fail---and fail consequentially---in the context of identity categories. Some of Facebook's policies allude to these ontological and epistemological constraints---for instance, its decision to omit gender tags from image descriptions because "gender identification is more complex, personal, and culture-dependent than what may appear visually on photos" \cite{wu2017automatic}. This reasoning seems to evince a hybrid recognition of both ontological and epistemological constraints, noting that gender is neither a construct that can be consistently applied from person to person, nor one that can be readily identified visually. A similar logic seems to undergird Facebook's decision to redact ``fuzzy and context-dependent'' adjectives like `young' \cite{wu2017automatic}: what does `young' mean? What are the boundaries between `young' and `not young' and how readily can `young-ness' be inferred from a photo? Facebook's policies seem to acknowledge that there may be more consistent agreement at one pole of a malleable identity concept---about the `young-ness' of, say, a `baby,' a tag which Facebook \textit{will} return---but avoids tags where line-drawing is made more difficult due to the nature of the construct and its (in)visibility. \textbf{Social context and salience.} Finally, normative considerations around alt text involve uncertainties related to social contexts, and what features are useful or appropriate to mark or make explicit within them. At least two contexts are at issue. The first involves what is being depicted in the photograph being described by AAT. Race, gender, age, disability and other identity categories may be central to understanding the social meaning of an image. The museum guidelines make this explicit, clarifying that visual appearance should be described especially to the extent that it is important to understanding an image's meaning. Zalmanson's work underscores the point with the photo of the Kennedy assassination described as `ten people, car': the absurdity of the caption stems from its remove from the social meaning and historical import of the image. For some images, the race and gender of depicted people are clearly salient, and key to understanding what the image conveys; for others, they are incidental, and it might even be offensive or essentializing to make reference to them. But unlike manual practices in museums, which are informed by knowledgeable human annotators who can consider the meaning of each piece of art individually, Facebook's AAT tool is not equipped to make such nuanced judgments. Facebook did consider the ``salience'' of different features of an image in constructing its initial list of concepts, in that it asked human annotators to tag a random selection of 200,000 photos publicly posted to Facebook with a limited number of tags (i.e., the most salient tags for the image), the top 200 most common of which were added to the initial concept candidate list \cite{wu2017automatic}.
But this is a rather limited notion of salience in the context-specific sense in which we intend it here: while used to construct a list of concepts to be applied globally, it cannot address the specificities of how these tags contribute to the social meaning of particular images. An independent consideration related to salience is the relational context in which the BLV user is consuming the image. The relationships among the BLV user, the depicted image subject, and the photo uploader are complex and variable. It may or may not be useful or appropriate for Facebook to call out identity concepts within different relational contexts. Bennett et al.'s \cite{bennett2021s} interview subjects describe how knowing the identity categories of the people with whom they interact on social media can in some cases be useful for facilitating code-switching and creating a base level of understanding for community-building (though they acknowledge that doing so often involves reliance on assumptions). In other relational contexts, identity categories are less salient, and it may be irrelevant or inappropriate to make explicit reference to them. Facebook's introduction of face recognition into its AAT model (as well as the identification of image subjects by name) seems geared toward addressing the relevance of relational context. Facebook also relies on this reasoning in explaining its recent decision to permit users to expand an AAT image description into additional detail, noting that screen reader users "[want] more information when an image is from friends or family, and less when it's not" \cite{facebookai2021}. To illustrate how these sources of uncertainty apply to image descriptions, consider the question of whether an AAT system should return the tag `wheelchair.' We might reasonably assume that it is feasible for a model to accurately tag most images of wheelchairs with high confidence, surpassing the first hurdle (technical accuracy). We might also assume that we can reasonably delimit a category of objects that comprises `wheelchairs' and that we can detect visually what objects belong to such a category (addressing ontological and epistemological concerns). `Wheelchair' is in these senses an easier case than `woman' or `young' or `Asian'---and it might seem reasonable, then, to include a `wheelchair' tag in AAT. But we might still question the inclusion of a `wheelchair' tag based on concerns about its \textit{salience} in social and relational contexts. Is the existence of a wheelchair in an image salient enough to be explicitly named? In some cases, the answer may be yes---for instance, in Bennett et al.'s study, the majority of participants \textit{did} describe "disability-related access technologies" in images of themselves \cite{bennett2021s}. But in others, naming the existence of a wheelchair might draw undue and irrelevant attention to the (inferred) disability of a person depicted in an image. Recall the explanation given by Sina Bahram, accessibility consultant for museums, in determining whether to describe wheelchairs in the museum context: "When I think of a picture or a photo of somebody using a wheelchair, I'm not necessarily interested in identifying disability. It would be, however, critical, in certain contexts to know that that person is using a wheelchair and in other contexts it may not be of interest" \cite{TWIMLAI}.
Even if we can say with high accuracy and without ontological or epistemological constraints that an object is a wheelchair, salience considerations may caution against its inclusion in AAT in some cases. These various sources of uncertainty---the technical limitations of computer vision; the ontological instability of identity categories, and epistemological constraints on what can be visually observed about them; and the salience of identity characteristics in relation to the variable social meanings of images, both in their content and in the relational contexts in which they are shared---represent independent considerations that might guide policy choices about how to describe people in alt text. In practice, Facebook (and other companies creating alt text) must be sure neither to conflate these ideas in reasoning about their policies, nor to let resolution on one ground (e.g., accuracy) be considered normatively sufficient. Rather, each concern must be considered independently when formulating an appropriate alt text policy. \section{Navigating Uncertainty} Navigating these different sources of uncertainty is itself an uncertain process. There do not seem to be any obvious or easy answers to the challenges posed by each---and certainly not to all three. We discuss two strategies below: one that has already achieved a good deal of traction (limiting descriptions to directly observable physical features) and another that is more speculative (relying on facial recognition to name people rather than describe them)---and the potential limitations and dangers of both approaches. \subsection{Directly Observable Physical Features} One approach that might seem to allow Facebook to sidestep some of these concerns is to describe only that which is \textit{visually observable}, rather than imposing identity-related tags---while recognizing that some of these descriptions might facilitate inferences about the race, gender, age, disability, etc.\ of the people so described. For example, rather than attempting to resolve the gender identity of a person who appears in a photo, Facebook could instead describe the clothing, hairstyle, and jewelry that the person is wearing, understanding and accepting that such descriptions may encourage BLV users to draw their own inferences about the person's gender. This could also happen unintentionally, of course: Facebook may have no explicit goal of facilitating such inferences, but might nonetheless describe these visual features simply in the interest of providing more detailed descriptions. It is not clear, for example, whether Facebook's decision to include `beard' and `jewelry' \cite{FbookEngBlog} was motivated by any specific concern with facilitating inferences about gender or by a desire to provide details beyond gender itself that its annotators deemed especially salient. Recall that the Cooper Hewitt guidelines suggest such description of visual features in place of gender in some cases: ``No assumptions should be made about the gender of a person represented […] When unknown, a person should be described using `they, them' and `person' and their physicality expressed through the description of their features, which inadvertently tend to indicate masculine or feminine characteristics'' \cite{CHGuidelines}.
To illustrate this principle, the guidelines offer an example of alt text that could be generated for a photograph of people whose genders are unknown, which includes descriptive details that may be read as gender-suggestive: ``The person on the left, who is wearing a halter top, leans down to crank a lever as they look over their bare shoulder at the person on the right who pushes their long hair over their shoulder laughing'' \cite{CHGuidelines}. Cooper Hewitt here describes certain visual details that annotators themselves may recognize as pertinent to gender expression, while stopping short of ascribing gender to image subjects or intentionally providing clues about gender. This approach was also favored by nearly all of the interview subjects in Bennett et al.'s study. Of the 25 subjects they interviewed, ``24 participants argued AI-generated descriptions would be more respectful if they used appearance rather than identity presumptive language,'' with many agreeing that ``[a]pproximating skin tone and describing hairstyles may help describers and viewers avoid assuming race; describing clothing, accessories and hairstyles can help describers and viewers avoid gender assumptions, and describing access technologies can help describers and viewers avoid assuming disability'' \cite{bennett2021s}. Bennett et al. emphasize that ``the preference upon which participants largely converged was that the language of appearance versus that which presumes identity is different'' \cite{bennett2021s}. What might account for this degree of agreement among certain aspects of Facebook's policy, the museums' guidelines, and the views expressed by Bennett et al.'s participants? Describing physical characteristics that might serve as the basis for inference about identity categories, while refusing the explicit language of identity categories, seems to strike an interesting balance between competing commitments to autonomy: on the one hand, it still provides people depicted in images with the opportunity to self-define in the identity categories of their own choosing (or to not use any such terms at all), while on the other, it provides BLV users with information that might allow them to make their own inferences about people's identity on the basis of appearance (or to refrain from doing so), just as sighted users could. We might consider it paternalistic for Facebook to deprive BLV users of all visual information that might, for example, insinuate gender---even if that information can serve as the basis for (sometimes incorrect or misguided) inferences, as it can for sighted people. Doing so would deprive BLV users of the opportunity to make their own judgments, leaving it to Facebook to impose ideological constraints on the propriety of inferring personal characteristics from visual data. This reasoning is evident in the argument made by one of Bennett et al.'s research participants: ``If somebody [sighted] sees a photo and has some kind of clues, we should be given comparable information. If you don't provide that, it does feed into, maybe indirectly, blind people don't need this information or they don't make judgements based on this information […W]e've gotta find a way to provide it in a way that's not overly prescriptive'' \cite{bennett2021s}.
Allowing BLV users to draw their own inferences from visual data, rather than Facebook drawing conclusions for them, also permits these inferences to be made individually and privately (i.e., in the heads of BLV users), while Facebook's are made in public (i.e., visible to the people being described). This too has normative implications for image subjects: seeing that one has been ascribed a specific gender by Facebook---and knowing that such ascriptions may have been accessed by other Facebook users---can be essentializing and stigmatizing in a way that individual BLV users' inferences might not be. Even when attempting to defer judgment to BLV users, though, Facebook continues to exercise a good deal of influence: the company still determines what visual details to describe in alt text and thus the basis upon which any subsequent inferences might be made by BLV users. The visual cues upon which sighted people rely to draw inferences about identity are numerous, varied, and often subtle---so much so that sighted people might be unable to fully articulate the set of cues upon which they rely when drawing such inferences. Facebook's model, even with its recently expanded set of concepts, cannot provide descriptions of the full range of visual markers that serve as such cues. Rather, the model is likely to provide details on only fairly obvious and crude cues (e.g., clothing, hairstyle, and jewelry), limiting how discerning any inference about the identity categories of the person so described can be. In this respect, Facebook's BLV users would not be in the same situation as its sighted users; BLV users' capacity to infer identity categories---and how such inferences might be drawn---would remain deeply dependent on Facebook's choices about what descriptors it provides in alt text.\footnote{What, if any, inferences BLV users may draw from these descriptors warrants further study. Both \cite{wu2017automatic} and \cite{bennett2021s} repeatedly remark on the skepticism and caution that BLV users exhibit when presented with limited or unreliable information.} Thus, even though Facebook may be able to defer questions of accuracy, ontology, and epistemology to BLV users themselves by limiting alt text to descriptions of directly observable physical features, the company cannot escape the question of what exactly warrants description and the social meaning of such descriptions. In other words, it cannot avoid questions of salience. Moreover, in the context of limited information, those descriptors that \textit{are} provided may take on outsized importance. For example, if alt text includes `person' and `red nail polish,' the nail polish description may be over-read as an indicator of gender, and perhaps reify certain mappings of appearance to gender (even if unreliable). Determinations of what visual descriptors to provide must also consider what information is \textit{not} being provided, and how different visual descriptors might interact with each other. (For example, the gender inference insinuated by `person' and `long hair' may differ from that insinuated by `person,' `long hair,' and `guitar.') The inferences to be drawn from each descriptor are not made independently.
Finally---and fundamentally---presenting this choice as one between the use of social categories and the description of visual details that might serve as the basis for inference about social categories relies on the idea that there is a clear separation between social categories and directly observable properties. Some features are so common in a particular context or among a particular community that they are not even remarked upon. In a highly homogeneous society, for example, there may be no reason to have concepts concerned with the tone of a person's skin, the shape of a person's eyes, or the size of a person's nose. Their salience as directly observable properties is a result of the fact that they have come to serve as important visual markers for social differentiation. What is ``directly observable'' encodes particular beliefs about what is an appropriate way to parse the visual world, given the relevance of these features to the social categories that we rely on to make sense of others. \subsection{Facial Recognition} Facial recognition offers another possible strategy for navigating these uncertainties. Rather than trying to infer identity categories (`woman') or trying to describe relevant physical characteristics (`long hair'), those producing alt text could instead simply try to identify the person in question (`Margot Hanley'). Doing so would mean that alt text could report the name of the person, which might allow BLV users to bring prior knowledge to bear about that person's identity---and perhaps justify providing less or no description of that person's appearance. In so doing, it might sidestep some of the questions of salience as well as accuracy, ontology, and epistemology, as there would be less or no need to decide which physical features are worthy of description. In their paper introducing AAT, \cite{wu2017automatic} explain that ``while facial recognition turned out to be one of the most requested features from our participants and current technology is viable,'' the company was not then prepared to adopt the technique because of its privacy implications for other users. As mentioned, the company has since adopted facial recognition on an opt-in basis, allowing users to elect to be identified automatically in AAT. To our knowledge, Facebook does not use identification specifically to avoid providing other details in its alt text, but it could choose to do so. (Recall that the Cooper Hewitt guidelines warn against this approach, stating that alt text should report identity categories and physical details even when a person has been identified by name.) An alternative approach would be to rely on facial recognition to determine who appears in an image and then match that person's identity to any information that the person might have shared elsewhere (e.g., in the gender field on Facebook). In that case, the alt text could include the name of the person along with other shared details about their identity categories and appearance. Despite the potential appeal of facial recognition as a way to deal with these various sources of uncertainty, there are practical and normative limits to this approach. On social media, naming people in place of describing their identity or physical appearance is likely to offer much of value only if the named people are known to BLV users. And even then, BLV users may still be interested in the physical appearance of those with whom they are close (e.g., when a friend changes the color of their hair).
Beyond social media and other settings where images largely include family, friends, and associates (e.g., photo management software), the value of this approach will be even more limited, as BLV users will benefit from facial recognition only when the identified person is a public figure. From a normative perspective, this approach may also put other people in an uncomfortable bind, effectively presenting opting in to facial recognition as a way to forestall the harms that might arise from attempts to describe them in other terms---neither of which they may welcome. Given the serious concerns that BLV users have already expressed about the manual alt text methods that place burdens on others \cite{rzeszotarski2014estimating}, organizations should be careful not to put BLV users in a position where they are made to feel that they are forcing such a decision on their social contacts \cite{Lehrer-Stein_2020}. \section{Conclusion} In this paper, we have explored the tensions that emerge when using computer vision to produce alt text descriptions of people, including identity categories like race, gender, age, and disability. We proposed museums as an apt point of comparison, as museums have long navigated these tensions and have developed specific principles and guidelines to aid in their determinations. By comparing the organizations' policies, we surfaced the normative and practical factors underlying their different approaches. We explained how different forms of uncertainty underlie policy choices about image descriptions, and explored the challenges associated with possible strategies to overcome them. \section*{Acknowledgements} This work was supported by the National Science Foundation (CHS-1901151), the John D. and Catherine T. MacArthur Foundation, and the Brown Institute for Media Innovation. We thank Sina Bahram, Ben Bianchi, Su Lin Blodgett, A. Feder Cooper, Carter Donohoe, Jacob Ford, Jared Katzman, Kristen Laird, Ruth Starr, Emily Tseng, Hanna Wallach, Meg Young, Freds Madison Avenue, and members of the Artificial Intelligence, Policy, and Practice initiative at Cornell University for valuable discussions. \bibliographystyle{plain}
\section{Introduction} Let $G$ be a finite simple graph with vertex set $V(G)$ and edge set $E(G)$. The cardinality of a set $X$ is denoted by $|X|$. Throughout the paper, we assume all graphs are finite and simple. The aim of this article is to find adequate formulae for the chromatic numbers of covering graphs. The chromatic number $\chi(G)$ of a graph $G$ is the smallest number of colors needed to color the vertices of $G$ so that no two adjacent vertices share the same color. Since the exploratory paper by Dirac \cite{dirac}, the chromatic number has been at the center of graph theory research. Its rich history can be found in several articles \cite{HM, thmas}. The concept of covering graphs is relatively new \cite{GTP, GTB}. Its precise definition can be given as follows. For a graph $G$, we denote the set of all vertices adjacent to a vertex $v\in V(G)$ by $N(v)$ and call it the \emph{neighborhood} of $v$. A graph $\widetilde G$ is called a \emph{covering} of $G$ with a projection $p:\widetilde G \to G$ if there is a surjection $p:V(\widetilde G)\to V(G)$ such that $p|_{N(\tilde v)}:N(\tilde v) \to N(v)$ is a bijection for every vertex $v\in V(G)$ and every $\tilde v \in p^{-1}(v)$. In particular, if $p$ is two-to-one, then the projection $p:\widetilde G \to G$ is called a \emph{double covering} of $G$. Several structures and properties of graphs behave nicely under covering projections. The characteristic polynomials of a covering graph $\widetilde G$ and of its base graph $G$ are closely related \cite{FKL:Character, KL:Character, MS:Character}. The enumeration of non-isomorphic covering graphs has been well studied \cite{Jones:Iclass, KL:Iclass}. Amit, Linial, and Matousek determined the asymptotic behavior of the chromatic numbers of $n$-fold coverings without considering isomorphism types~\cite{ALM}. We will relate the chromatic number to double covering graphs as follows. A \emph{signed graph} is a pair $G_\phi = (G,~\phi)$ of a graph $G$ and a function $\phi : E(G) \to {\B{Z}_2}$, where $\B{Z}_2=\{1,-1\}$. We call $G$ the \emph{underlying graph} of $G_\phi$ and $\phi$ the \emph{signing} of $G$. A signing $\phi$ is in fact a $\B{Z}_2$-voltage assignment of $G$, as defined by Gross and Tucker~\cite{GTP}. It is known \cite{GTP, GTB} that every double covering of a graph $G$ can be constructed as follows: let $\phi$ be a signing of $G$. The double covering $G^\phi$ of $G$ derived from $\phi$ has vertex set $V(G^\phi)$ and edge set $E(G^\phi)$ given by \begin{eqnarray*} V(G^\phi)&=& \{ v_g \,|\, v\in V(G) ~\mathrm{and}~ g\in {\B{Z}_2}\}, \\ E(G^\phi)&=& \{ (u_g, v_{\phi(u,v)g}) \,|\, (u,v)\in E(G) ~\mathrm{and}~ g\in {\B{Z}_2}\}. \end{eqnarray*} Two double coverings $p^\phi:G^\phi\to G$ and $p^\psi:G^\psi\to G$ are \emph{isomorphic} if there exists a graph isomorphism $\Phi : G^\phi \to G^\psi$ such that $p^\psi \circ \Phi = p^\phi$, i.e., the diagram in Figure~\ref{cd} commutes.
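To make the derived double covering concrete, the following short Python sketch builds $V(G^\phi)$ and $E(G^\phi)$ from a vertex list, an edge list, and a signing. The function and variable names are ours, chosen for illustration only; they do not come from the literature cited above.
\begin{verbatim}
# A sketch of the derived double covering G^phi.  A graph is given by
# a list of vertices and a list of edges (u, v); the signing phi maps
# each edge to +1 or -1.

def double_covering(vertices, edges, phi):
    """Return the vertex set and edge set of G^phi."""
    V = {(v, g) for v in vertices for g in (1, -1)}
    E = set()
    for (u, v) in edges:
        for g in (1, -1):
            # the derived edge (u_g, v_{phi(u,v) g})
            E.add(frozenset({(u, g), (v, phi[(u, v)] * g)}))
    return V, E

# Example: the triangle with every edge signed -1 lifts to the 6-cycle,
# while the all-(+1) signing lifts to two disjoint triangles.
tri = [("a", "b"), ("b", "c"), ("c", "a")]
V, E = double_covering(["a", "b", "c"], tri, {e: -1 for e in tri})
\end{verbatim}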
\begin{figure} $$ \begin{pspicture}[.4](0,0.25)(3,2.75) \qline(.6,2.5)(2.4,2.5)\psline[arrowscale=1.5]{->}(2.2,2.5)(2.4,2.5) \qline(.3,2.3)(1.4,.65)\psline[arrowscale=1.5]{->}(1.2,.95)(1.4,.65) \qline(2.6,2.15)(1.6,.65)\psline[arrowscale=1.5]{->}(1.8,.95)(1.6,.65) \rput(.7,2.5){\rnode{a1}{$$}} \rput(.7,2.5){\rnode{a2}{$$}} \rput(0.3,2.1){\rnode{a3}{$$}} \rput(2.7,2.1){\rnode{a4}{$$}} \rput(0.2,2.5){\rnode{c1}{$G^\phi$}} \rput(2.8,2.5){\rnode{c2}{$G^\psi$}} \rput(1.5,0.45){\rnode{c3}{$G$}} \rput[b](1.5,2.6){\rnode{c4}{$\phi$}} \rput[tr](0.7,1.4){\rnode{c5}{$p^\phi$}} \rput[tl](2.4,1.4){\rnode{c6}{$p^\psi$}} \end{pspicture} $$ \caption{Commuting diagram of isomorphic coverings.} \label{cd} \end{figure} For a spanning subgraph $H$ of $G$, colorings $f$ and $g$ of $H$ are \emph{compatible} in $G$ if for each edge $(u,v)\in E(G)-E(H)$, $f(u)\not=g(v)$ and $f(v)\not=g(u)$. The smallest number of colors such that $H$ has a pair of compatible colorings is called the \emph{chromatic number of $H$ with respect to $G$} and denoted by $\chi_{G}(H)$. Since $(f|_{H},f|_{H})$ is a pair of compatible colorings of $H$ for any spanning subgraph $H$ of $G$ and any coloring $f$ of $G$, one can find $\chi_G(H) \leq \chi(G)$ for any spanning subgraph $H$ of $G$. We remark that $\chi_{G}(G)=\chi(G)$ for any graph $G$, and that $\chi_{G}(\C{N}_{|V(G)|})=2$ if $G$ has at least one edge, where $\C{N}_n$ is the null graph on $n$ vertices. \begin{figure} $$ \begin{pspicture}[.4](-1.8,-1.8)(1.5,1) \rput(.5,0){\rnode{a1}{$$}} \rput(0,.5){\rnode{a2}{$$}} \rput(-.5,0){\rnode{a3}{$$}} \rput(0,-.5){\rnode{a4}{$$}} \rput(1,0){\rnode{b1}{$$}} \rput(.5,.5){\rnode{b2}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-.5,.5){\rnode{b4}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(-.5,-.5){\rnode{b6}{$$}} \rput(0,-1){\rnode{b7}{$$}} \rput(.5,-.5){\rnode{b8}{$$}} \ncline{b1}{b3} \ncline{b1}{b5} \ncline{b1}{b7} \ncline{b5}{b3} \ncline{b5}{b7} \rput(0,-1.5){\rnode{c4}{$G$}} \pscircle[linewidth=2.5pt](1,0){.1} \pscircle[fillstyle=solid,fillcolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=black](0,-1){.1} \end{pspicture} \begin{pspicture}[.4](-1.5,-1.8)(4.5,1) \rput(.5,0){\rnode{a1}{$$}} \rput(0,.5){\rnode{a2}{$$}} \rput(-.5,0){\rnode{a3}{$$}} \rput(0,-.5){\rnode{a4}{$$}} \rput(1,0){\rnode{b1}{$$}} \rput(.5,.5){\rnode{b2}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-.5,.5){\rnode{b4}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(-.5,-.5){\rnode{b6}{$$}} \rput(0,-1){\rnode{b7}{$$}} \rput(.5,-.5){\rnode{b8}{$$}} \ncline{b5}{b7} \ncline{b5}{b3} \ncline{b5}{b1} \rput(0,-1.5){\rnode{c4}{$(H,f)$}} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,-1){.1} \rput(3.5,0){\rnode{e1}{$$}} \rput(2,.5){\rnode{e2}{$$}} \rput(2.5,0){\rnode{e3}{$$}} \rput(2,-.5){\rnode{e4}{$$}} \rput(4,0){\rnode{f1}{$$}} \rput(3.5,.5){\rnode{f2}{$$}} \rput(3,1){\rnode{f3}{$$}} \rput(2.5,.5){\rnode{f4}{$$}} \rput(2,0){\rnode{f5}{$$}} \rput(2.5,-.5){\rnode{f6}{$$}} \rput(3,-1){\rnode{f7}{$$}} \rput(3.5,-.5){\rnode{f8}{$$}} \ncline{f5}{f7} \ncline{f5}{f3} \ncline{f5}{f1} \rput(3,-1.5){\rnode{c4}{$(H,g)$}} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](4,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](3,1){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](2,0){.1} 
\pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](3,-1){.1} \end{pspicture} $$ \caption{A spanning subgraph $H$ of $G$ with $\chi_{G}(H)=2$.} \label{exam1} \end{figure} In Section~\ref{basic}, we recall some basic properties. We show that the chromatic numbers of the double coverings of a given graph $G$ can be computed from the numbers $\chi_G(H)$ for the spanning subgraphs $H$ of $G$. In Section~\ref{estimation}, we estimate the number $\chi_G(H)$. We discuss a generalization to $n$-fold covering graphs in Section \ref{fremark}. \section{Basic properties} \label{basic} Let $\phi$ be a signing of $G$. We define the \emph{support} of $\phi$ to be the spanning subgraph of $G$ whose edge set is $\phi^{-1}(-1)$, and denote it by $spt(\phi)$. Similarly, we define the \emph{co-support} of $\phi$ to be the spanning subgraph of $G$ whose edge set is $\phi^{-1}(1)$, and denote it by $cospt(\phi)$. Any spanning subgraph $H$ of $G$ can be described as the co-support $cospt(\phi)$ of a signing $\phi$ of $G$. Let $\phi=\phi_H$ be the signing of $G$ with $cospt(\phi_H)=H$. Let $f$ and $g$ be compatible $\chi_G(H)$-colorings of $H$. We define a function $$h:V(G^\phi)\to \{1,2,\ldots,\chi_G(H)\}$$ by $h(v_1)=f(v)$ and $h(v_{-1})=g(v)$ for each $v\in V(G)$. Then, by the compatibility of $f$ and $g$, $h$ is a $\chi_G(H)$-coloring of $G^\phi$. Hence, $\chi(G^\phi)\le \chi_G(H)$. Conversely, let $h$ be a $\chi(G^\phi)$-coloring of $G^\phi$. We define two $\chi(G^\phi)$-colorings $f$ and $g$ of $H$ by $f(v)=h(v_1)$ and $g(v)=h(v_{-1})$ for each $v\in V(G)$. Then $f$ and $g$ are compatible because $h$ is a coloring of $G^\phi$. Hence, $\chi_G(H)\le \chi(G^\phi)$. Now, we have the following theorem. \begin{thm}\label{doub} Let $H$ be a spanning subgraph of a graph $G$. Then $$\chi_G(H)=\chi(G^{\phi_H}),$$ where $\phi_H$ is the signing of $G$ with $cospt(\phi_H)=H$. \end{thm} It is not hard to see that the graph $G$ in Figure \ref{exam1} has two non-isomorphic connected double coverings. We exhibit spanning subgraphs $H_1, H_2$ of $G$ corresponding to these two non-isomorphic connected double coverings of $G$, together with their chromatic numbers with respect to $G$, in Figure~\ref{exam2}.
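Theorem~\ref{doub} can be checked by brute force on small graphs. The sketch below (plain Python; the exponential search and the $4$-vertex example are our own choices, not taken from the paper) computes $\chi_G(H)$ straight from the definition of compatible colorings and compares it with the chromatic number of $G^{\phi_H}$:
\begin{verbatim}
from itertools import product

def proper(col, edges):
    return all(col[u] != col[v] for u, v in edges)

def chromatic_number(vertices, edges):
    for k in range(1, len(vertices) + 1):
        if any(proper(dict(zip(vertices, c)), edges)
               for c in product(range(k), repeat=len(vertices))):
            return k

def chi_G(vertices, G_edges, H_edges):
    cross = [e for e in G_edges if e not in H_edges]   # E(G) - E(H)
    for k in range(1, len(vertices) + 1):
        cols = [dict(zip(vertices, c))
                for c in product(range(k), repeat=len(vertices))]
        cols = [c for c in cols if proper(c, H_edges)]
        if any(all(f[u] != g[v] and f[v] != g[u] for u, v in cross)
               for f in cols for g in cols):
            return k

def cover(vertices, G_edges, H_edges):   # G^{phi_H}: +1 on H, -1 elsewhere
    V = [(v, s) for v in vertices for s in (1, -1)]
    E = [((u, s), (v, s if (u, v) in H_edges else -s))
         for (u, v) in G_edges for s in (1, -1)]
    return V, E

V = [0, 1, 2, 3]
G_E = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 0)]   # a triangle plus a path
H_E = [(0, 1), (1, 2)]                           # spanning: vertex 3 isolated
cV, cE = cover(V, G_E, H_E)
assert chi_G(V, G_E, H_E) == chromatic_number(cV, cE) == 2
\end{verbatim}
Both sides equal $2$ on this example, as Theorem~\ref{doub} requires.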
\begin{figure} $$ \begin{matrix} & \begin{pspicture}[.4](-1,-1.5)(1,1) \rput(.5,0){\rnode{a1}{$$}} \rput(0,.5){\rnode{a2}{$$}} \rput(-.5,0){\rnode{a3}{$$}} \rput(0,-.5){\rnode{a4}{$$}} \rput(1,0){\rnode{b1}{$$}} \rput(.5,.5){\rnode{b2}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-.5,.5){\rnode{b4}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(-.5,-.5){\rnode{b6}{$$}} \rput(0,-1){\rnode{b7}{$$}} \rput(.5,-.5){\rnode{b8}{$$}} \ncline{b1}{b3} \ncline{b1}{b7} \ncline{b5}{b3} \ncline{b5}{b7} \ncline{b2}{b4} \ncline{b6}{b8} \rput(0,-1.5){\rnode{c4}{$\chi(G^{\phi_{H_1}})=3$}} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](1,0){.1} \pscircle[fillstyle=solid,fillcolor=lightgray,linecolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,-1){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](.5,.5){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-.5,.5){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-.5,-.5){.1} \pscircle[fillstyle=solid,fillcolor=lightgray,linecolor=black](.5,-.5){.1} \end{pspicture} & \begin{pspicture}[.4](-1,-1.5)(1,1) \rput(.5,0){\rnode{a1}{$$}} \rput(0,.5){\rnode{a2}{$$}} \rput(-.5,0){\rnode{a3}{$$}} \rput(0,-.5){\rnode{a4}{$$}} \rput(1,0){\rnode{b1}{$$}} \rput(.5,.5){\rnode{b2}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-.5,.5){\rnode{b4}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(-.5,-.5){\rnode{b6}{$$}} \rput(0,-1){\rnode{b7}{$$}} \rput(.5,-.5){\rnode{b8}{$$}} \ncline{b1}{a1} \ncline{b1}{b3} \ncline{b1}{b7} \ncline{b5}{b3} \ncline{b5}{b7} \ncline{b5}{a3} \ncline{a1}{a2} \ncline{a2}{a3} \ncline{a3}{a4} \ncline{a1}{a4} \rput(0,-1.5){\rnode{c4}{$\chi(G^{\phi_{H_2}})=2$}} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,-1){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](.5,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,.5){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](-.5,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,-.5){.1} \end{pspicture} \\ \begin{pspicture}[.4](-1,-.6)(1,.6) \rput(.5,0){\rnode{a1}{$$}} \rput(0,.5){\rnode{a2}{$$}} \rput(-.5,0){\rnode{a3}{$$}} \rput(0,-.5){\rnode{a4}{$$}} \rput(1,0){\rnode{b1}{$$}} \rput(.5,.5){\rnode{b2}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-.5,.5){\rnode{b4}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(-.5,-.5){\rnode{b6}{$$}} \rput(0,-1){\rnode{b7}{$$}} \rput(.5,-.5){\rnode{b8}{$$}} \ncline{b1}{b3} \ncline{b1}{b5} \ncline{b1}{b7} \ncline{b5}{b3} \ncline{b5}{b7} \rput(0,-1.5){\rnode{c4}{$\chi(G)=3$}} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=lightgray,linecolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,-1){.1} \end{pspicture} & & \\ &\begin{pspicture}[.4](-1.2,-1.2)(1.2,1) \rput(.5,0){\rnode{a1}{$$}} \rput(0,.5){\rnode{a2}{$$}} \rput(-.5,0){\rnode{a3}{$$}} \rput(0,-.5){\rnode{a4}{$$}} \rput(1,0){\rnode{b1}{$$}} \rput(.5,.5){\rnode{b2}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-.5,.5){\rnode{b4}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(-.5,-.5){\rnode{b6}{$$}} \rput(0,-1){\rnode{b7}{$$}} \rput(.5,-.5){\rnode{b8}{$$}} \ncline{b1}{b3} \ncline{b1}{b7} 
\ncline{b5}{b3} \ncline{b5}{b1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=lightgray,linecolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,-1){.1} \end{pspicture} \begin{pspicture}[.4](-1.2,-1.2)(1.4,1) \rput(.5,0){\rnode{a1}{$$}} \rput(0,.5){\rnode{a2}{$$}} \rput(-.5,0){\rnode{a3}{$$}} \rput(0,-.5){\rnode{a4}{$$}} \rput(1,0){\rnode{b1}{$$}} \rput(.5,.5){\rnode{b2}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-.5,.5){\rnode{b4}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(-.5,-.5){\rnode{b6}{$$}} \rput(0,-1){\rnode{b7}{$$}} \rput(.5,-.5){\rnode{b8}{$$}} \ncline{b1}{b3} \ncline{b1}{b7} \ncline{b5}{b3} \ncline{b5}{b1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](1,0){.1} \pscircle[fillstyle=solid,fillcolor=lightgray,linecolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,-1){.1} \end{pspicture} & \begin{pspicture}[.4](-1.4,-1.2)(1.2,1) \rput(.5,0){\rnode{a1}{$$}} \rput(0,.5){\rnode{a2}{$$}} \rput(-.5,0){\rnode{a3}{$$}} \rput(0,-.5){\rnode{a4}{$$}} \rput(1,0){\rnode{b1}{$$}} \rput(.5,.5){\rnode{b2}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-.5,.5){\rnode{b4}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(-.5,-.5){\rnode{b6}{$$}} \rput(0,-1){\rnode{b7}{$$}} \rput(.5,-.5){\rnode{b8}{$$}} \ncline{b5}{b7} \ncline{b5}{b3} \ncline{b5}{b1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,-1){.1} \end{pspicture} \begin{pspicture}[.4](-1.2,-1.2)(1.2,1) \rput(.5,0){\rnode{a1}{$$}} \rput(0,.5){\rnode{a2}{$$}} \rput(-.5,0){\rnode{a3}{$$}} \rput(0,-.5){\rnode{a4}{$$}} \rput(1,0){\rnode{b1}{$$}} \rput(.5,.5){\rnode{b2}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-.5,.5){\rnode{b4}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(-.5,-.5){\rnode{b6}{$$}} \rput(0,-1){\rnode{b7}{$$}} \rput(.5,-.5){\rnode{b8}{$$}} \ncline{b5}{b7} \ncline{b5}{b3} \ncline{b5}{b1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](1,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,-1){.1} \end{pspicture}\\ & \chi_{G}(H_1)=3 & \chi_{G}(H_2)=2 \end{matrix} $$ \caption{$\chi_{G}(H)$ and the chromatic number of double coverings $G^{\phi_H}$ of a graph $G$.} \label{exam2} \end{figure} For a subset $X \subset V(G)$ and a spanning subgraph $H$ of $G$, let $H_{X}$ denote the new spanning subgraph of $G$ defined as follows: two vertices both in $X$ or both in $V(G)-X$ are adjacent in $H_{X}$ if and only if they are adjacent in $H$, while a vertex in $X$ and a vertex in $V(G) - X$ are adjacent in $H_{X}$ if and only if they are adjacent in $G$ but not in $H$, i.e., adjacent in the complement $\overline{H}(G)$ of $H$ in $G$. Two spanning subgraphs $H$ and $K$ of $G$ are \emph{Seidel switching equivalent} in $G$ if there exists a subset $X \subset V(G)$ such that $H_{X}=K$. Clearly, Seidel switching equivalence is an equivalence relation on the set of spanning subgraphs of $G$, and the equivalence class $[H]$ of a spanning subgraph $H$ of $G$ is $\{H_{X}: X \subset V(G)\}$.
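The Seidel switch $H_X$ is easy to compute mechanically. The following sketch (ours; plain Python, with edges stored as frozensets and the same small example graph as in the earlier sketch) returns the edge set of $H_X$ directly from the definition above:
\begin{verbatim}
def seidel_switch(G_edges, H_edges, X):
    """Edge set of H_X; all edge sets are sets of frozensets."""
    HX = set()
    for e in G_edges:
        u, v = tuple(e)
        if (u in X) != (v in X):      # e crosses the cut (X, V - X):
            if e not in H_edges:      # adjacent iff in G but not in H
                HX.add(e)
        elif e in H_edges:            # inside X or V - X: copied from H
            HX.add(e)
    return HX

G_E = {frozenset(p) for p in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 0)]}
H_E = {frozenset(p) for p in [(0, 1), (1, 2)]}
print(seidel_switch(G_E, H_E, {1}))   # empty: H_{{1}} is the null graph
\end{verbatim}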
For a signing $\phi :E(G) \to {\B{Z}}_2$ and for any $X \subset V(G)$, let $\phi_X$ be the signing obtained from $\phi$ by reversing the sign of each edge having exactly one end point in $X$. If $\psi = \phi_X$ for some $X\subset V(G)$, then $\phi$ and $\psi$ are said to be \emph{switching equivalent} \cite{CW}. It is clear that, for a subset $X \subset V(G)$ and a spanning subgraph $H$ of $G$, $H_X= cospt((\phi_H)_X)$. By a slight modification of the proof of Corollary $4$ of~\cite{KHLS}, we obtain the following theorem. \begin{thm}\label{switchequi} Let $G$ be a graph. Let $H, K$ be spanning subgraphs of $G$. Then the following statements are equivalent. \begin{description} \item[{\rm (1)}] Two graphs $H$ and $K$ are Seidel switching equivalent. \item[{\rm (2)}] Two signings $\phi_H$ and $\phi_K$ are switching equivalent. \item[{\rm (3)}] Two double coverings $G^{\phi_H}$ and $G^{\phi_K}$ of $G$ are isomorphic as coverings. \end{description} \end{thm} The following corollary follows easily from Theorems \ref{doub} and \ref{switchequi}. \begin{cor}\label{switchcon} Let $H$ and $K$ be two spanning subgraphs of a graph $G$. If they are Seidel switching equivalent, then $$\chi_G(H) =\chi_G(K).$$ \end{cor} The converse of Corollary~\ref{switchcon} is not true in general. We provide two non-switching equivalent spanning subgraphs $H, K$ of $G$ with $\chi_G(H) =\chi_G(K)=3$ in Figure~\ref{nonswitching}. \begin{figure} $$ \begin{pspicture}[.4](-1.8,-1.8)(1.5,1) \rput(1,0){\rnode{b1}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(0,-1){\rnode{b7}{$$}} \ncline{b1}{b3} \ncline{b1}{b5} \ncline{b1}{b7} \ncline{b5}{b3} \ncline{b5}{b7} \rput(0,-1.5){\rnode{c4}{$G$}} \pscircle[linewidth=2.5pt](1,0){.1} \pscircle[fillstyle=solid,fillcolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=black](0,-1){.1} \end{pspicture} \hskip 1cm \begin{pspicture}[.4](-1.8,-1.8)(1.5,1) \rput(1,0){\rnode{b1}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(0,-1){\rnode{b7}{$$}} \ncline{b1}{b5} \ncline{b1}{b7} \ncline{b5}{b3} \ncline{b5}{b7} \rput(0,-1.5){\rnode{c4}{$H$}} \pscircle[linewidth=2.5pt](1,0){.1} \pscircle[fillstyle=solid,fillcolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=black](0,-1){.1} \end{pspicture} \hskip 1cm \begin{pspicture}[.4](-1.8,-1.8)(1.5,1) \rput(1,0){\rnode{b1}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(0,-1){\rnode{b7}{$$}} \ncline{b1}{b3} \ncline{b1}{b5} \ncline{b5}{b3} \ncline{b5}{b7} \rput(0,-1.5){\rnode{c4}{$K$}} \pscircle[linewidth=2.5pt](1,0){.1} \pscircle[fillstyle=solid,fillcolor=black](0,1){.1} \pscircle[fillstyle=solid,fillcolor=black](-1,0){.1} \pscircle[fillstyle=solid,fillcolor=black](0,-1){.1} \end{pspicture} $$ \caption{Two non-switching equivalent subgraphs $H, K$ in $G$ with $\chi_G(H) =\chi_G(K)$.} \label{nonswitching} \end{figure} For a coloring $f$ of $H$, let $\C{I}_f$ be the number of colors $i$ in $\{1, 2, \ldots, \chi(H)\}$ such that the preimage $f^{-1}(i)$ is independent in $\overline{H}(G)$ (and hence also in $G$). \begin{cor}\label{up-low-bds} Let $G$ be a connected graph and let $H$ be a spanning subgraph of $G$. Then $$\max_{K \in [H]} \{\chi(K)\} \leq \chi_G(H) \leq \min_{K \in [H], f} \{\chi(G), 2\chi(K)-\C{I}_f \},$$ where $f$ runs over all $\chi(K)$-colorings of $K$. \end{cor} \begin{pf} It is clear that $\chi(H)\leq \chi_G(H)\leq \chi(G)$.
Let $f$ be a $\chi(H)$-coloring of $H$ such that $$\{ i \,|\, f^{-1}(i) ~\mathrm{is} ~\mathrm{independent}~\mathrm{in}~G \} = \{\chi(H), \chi(H)-1, \ldots, \chi(H)-\C{I}_f +1\}.$$ We define a function $g:V(H)\to \{1,2,\ldots, 2\chi(H)-\C{I}_f\}$ as follows: for a vertex $v$ in $V(H)$, $$ g(v)=\left\{ \begin{array}{ll} f(v) & \mbox{if $\chi(H)-\C{I}_f +1 \le f(v)\le \chi(H) $,} \\[1ex] f(v)+\chi(H) & \mbox{otherwise.}\end{array}\right.$$ Then $g$ is a coloring of $H$, and $f$ and $g$ are compatible. Now, the corollary follows from Corollary~\ref{switchcon}. \end{pf} For a partition $\C{P}=\{V_1, V_2, \ldots, V_k\}$ of the vertex set $V(G)$ of $G$, we define a new simple graph $G/\C{P}$ as follows: the vertex set of $G/\C{P}$ is $\{V_1, V_2, \ldots, V_k\}$, and there is an edge between two vertices $V_i$ and $V_j$, $i\neq j$, in $G/\C{P}$ if and only if there exist two vertices $v_i\in V_i$ and $v_j\in V_j$ such that $v_i$ and $v_j$ are adjacent in $G$. We call $G/\C{P}$ the \emph{quotient graph} associated with the partition $\C{P}$. For a subset $S$ of $V(G)$, let $G[S]$ be the subgraph of $G$ whose vertex set is $S$ and whose edge set is the set of those edges of $G$ that have both ends in $S$. We call $G[S]$ the \emph{subgraph induced by $S$}. \begin{cor}\label{h/p=bip} Let $\C{P}=\{V_1,V_2, \ldots, V_k\}$ be a partition of the vertex set of a connected graph $G$ and let $H=\cup_{i=1}^kG[V_i]$ be the disjoint union of the induced subgraphs $G[V_i]$. If $G/\C{P}$ is bipartite, then $\chi(G)=\chi_G(H).$ \end{cor} \begin{pf} Let $\{V_{i_1}, V_{i_2}, \ldots, V_{i_l}\}$ be one part of the bipartition of the vertex set of the bipartite graph $G/\C{P}$. Then $H_{\cup_{j=1}^{l} V_{i_j}}=G$, and the corollary follows from Corollary~\ref{switchcon} or~\ref{up-low-bds}. \end{pf} The following theorem gives a necessary and sufficient condition for the bipartiteness of covering graphs. \begin{thm} [\cite{AKLS:BIPARTITE}] Let $G$ be a non-bipartite graph with a generating voltage assignment $\nu$ in $\mathcal{A}$ which derives the covering graph $\tilde{G}$. Then $\tilde{G}$ is bipartite if and only if there exists a subgroup $\mathcal{A}_e$ of index two in $\mathcal{A}$ such that, for every cycle $C$, $\nu(C) \in \mathcal{A}_e$ if and only if the length of $C$ is even. \label{thm45} \end{thm} It is obvious that $\chi_G(H)=1$ if and only if $G$ is a null graph. In Theorem~\ref{bipar2}, we find a necessary and sufficient condition for $\chi_G(H)=2$. \begin{thm} Let $G$ be a connected graph having at least one edge and let $H$ be a spanning subgraph of $G$. Then $\chi_G(H)=2$ if and only if either $G$ is bipartite or $H\in [\C{N}_{|V(G)|}]$, where $\C{N}_{|V(G)|}$ is the null graph on $|V(G)|$ vertices. \label{bipar2} \end{thm} \begin{pf} Let $G$ be a bipartite graph and $H$ be a spanning subgraph of $G$. Then there exists a graph $K$ in the switching class $[H]$ of $H$ in $G$ such that $K$ has at least one edge, and by Corollary~\ref{up-low-bds}, $\chi_G(H)=2$. We recall that $G$ itself is a spanning subgraph of $G$ and that $\chi_G(G)=\chi(G)$. Therefore, if a graph $G$ has at least one edge, then $G$ is bipartite if and only if $\chi_G(H)=2$ for every spanning subgraph $H$ of $G$. Now suppose that $G$ is not bipartite and that $\chi_G(H)=2$. Then, by Theorem~\ref{doub}, the double covering $G^{\phi_H}$ is bipartite (cf.~\cite{GTP}); write $\phi=\phi_H$. It follows from the connectedness of $G$ that there exists a subset $Y$ of $V(G)$ such that $cospt(\phi_Y)$ is connected. Since $G^\phi$ and $G^{\phi_Y}$ are isomorphic, by Theorem~\ref{switchequi}, $G^{\phi_Y}$ is bipartite.
We note that $cospt(\phi_Y)$ is isomorphic to a subgraph of $G^{\phi_Y}$ and hence it is bipartite. Let $e$ be an edge of $G$ such that one end is in $Y$ and the other is in $V(G)-Y$. If $\phi_Y(e)=-1$, then there exists an even cycle which contains the edge $e$ as the only edge whose value under $\phi_Y$ is $-1$. It follows from Theorem~\ref{thm45} that $G^{\phi_Y}$ is not bipartite, a contradiction. This implies that, for an edge $e$ of $G$, $\phi_Y(e)=1$ if and only if one end of $e$ is in $Y$ and the other is in $V(G)-Y$. Let $X$ be a part of the bipartition of $cospt(\phi_Y)$, i.e., every edge $e$ in $cospt(\phi_Y)$ has one end in $X$ and the other end in $V(G)-X$. Then $cospt((\phi_{Y})_{_X})=\C{N}_{|V(G)|}$. Notice that $cospt((\phi_{Y})_{_X})=cospt(\phi_Z)$, where $Z=(Y-X)\cup (X-Y)$. Since $\phi=\phi_H$, this means that $H_Z=\C{N}_{|V(G)|}$, and hence $H$ is Seidel switching equivalent to $\C{N}_{|V(G)|}$. This completes the proof of the theorem. \end{pf} \section{Computations of $\chi_G(H)$} \label{estimation} In this section, we aim to estimate the number $\chi_G(H)$ for any spanning subgraph $H$ of $G$. Let $H$ be a spanning subgraph of $G$, and let $F_1, F_2, \ldots, F_k$ be the components of the complement $\overline{H}(G)$ of $H$ in $G$. Then $\C{P}_{\bar{H}}=\{V(F_1)$, $V(F_2)$, $\ldots$, $V(F_k)\}$ is a partition of the vertex set $V(H)=V(G)$. Now, we consider a $\chi(H/\C{P}_{\bar{H}})$-coloring $c$ of the quotient graph $H/\C{P}_{\bar{H}}$. Then $c$ induces a partition $\C{P}_c=\{c^{-1}(1)$, $\ldots$, $c^{-1}(\chi(H/\C{P}_{\bar{H}}))\}$ of the vertex set of $H/\C{P}_{\bar{H}}$. By composing the quotient map $G\rightarrow H/\C{P}_{\bar{H}}$ with $c:H/\C{P}_{\bar{H}} \rightarrow \{1, 2, \ldots,\chi(H/\C{P}_{\bar{H}})\}$, we obtain a partition of $H$, which, by a slight abuse of notation, we again denote by $\C{P}_c$. One can notice that each vertex of $H/\C{P}_c$ can be considered as a union of some of the vertex sets $V(F_1), \ldots, V(F_k)$. For each $i=1, \ldots, \chi(H/\C{P}_{\bar{H}})$, let $H_c(i)=H[c^{-1}(i)]$, where we consider $c^{-1}(i)$ as a subset of $V(H)=V(G)$. A coloring $f$ of $H$ \emph{respects the coloring $c$} of $H/\C{P}_{\bar{H}}$ if $|f(H_c(i))|=\chi(H_c(i))$ and $f(H_c(i))\cap f(H_c(j))=\emptyset$ for any $1\le i\not=j\le\chi(H/\C{P}_{\bar{H}})$. For a coloring $f$ which respects $c$, let $\C{I}_f(i)$ be the number of colors $i_k$ in $f(V(H_c(i)))=\{i_1, i_2, \ldots, i_{\chi(H_c(i))}\}$ such that the vertex set $f^{-1}(i_k)$ is independent in $\overline{H}(G)$, and let $\C{D}_f(i)=\chi(H_c(i))-\C{I}_f(i)$ for each $i=1, \ldots, \chi(H/\C{P}_{\bar{H}})$. Let $$\Delta_S= \max\left\{0,\, 2\,\max\{s\,|\, s\in S\}-\sum_{s\in S} s \right\}$$ for any finite set $S$ of natural numbers. \begin{thm}\label{complem} Let $G$ be a connected graph and let $H$ be a spanning subgraph of $G$. Then $$\begin{array}{lcl} \chi_G(H) & \le & \displaystyle \min_c\left\{ \displaystyle \sum_{i=1}^{\chi(H/\C{P}_{\bar{H}})}\!\!\! \chi(H_c(i)) + \Delta_{\{\chi(H_c(i))\,|\,i=1,2, \ldots, \chi(H/\C{P}_{\bar{H}})\}},\right.\\[3ex] & & \hspace{1.5cm} \left.\displaystyle \sum_{i=1}^{\chi(H/\C{P}_{\bar{H}})}\!\!\! \chi(H_c(i))+ \min_{f}\left\{\Delta_{\{\C{D}_f(i)\,|\,i=1,2, \ldots, \chi(H/\C{P}_{\bar{H}})\}} \right\}\right\},\end{array}$$ where $c$ runs over all ${\chi(H/\C{P}_{\bar{H}})}$-colorings of $H/\C{P}_{\bar{H}}$ and $f$ runs over all colorings of $H$ which respect $c$. \end{thm} \begin{pf} Let $c$ be a $\chi(H/\C{P}_{\bar{H}})$-coloring of $H/\C{P}_{\bar{H}}$ and let $f$ be a coloring of $H$ which respects $c$.
First, we want to show that $$\chi_G(H)\le\sum_{i=1}^{\chi(H/\C{P}_{\bar{H}})} \chi(H_c(i)) + \Delta_{\{\chi(H_c(i))\,|\,i=1,2, \ldots, \chi(H/\C{P}_{\bar{H}})\}}.$$ Without loss of generality, we may assume that $\chi(H_c(1))\ge \chi(H_c(2))\ge \ldots \ge \chi(H_c(\chi(H/\C{P}_{\bar{H}})))$. Let the image $$f(V(H_c(i)))=\{\sum_{j=1}^{i-1}\chi(H_c(j))+1, \ldots, \sum_{j=1}^i\chi(H_c(j))\}$$ and let $$\ell=\Delta_{\{\chi(H_c(i))\,|\,i=1,2, \ldots, \chi(H/\C{P}_{\bar{H}})\}}.$$ Then $$\ell=\max\{\chi(H_c(1))-\sum_{i=2}^{\chi(H/\C{P}_{\bar{H}})}\chi(H_c(i)),\,0\}.$$ We define $g:V(H) \to \{1,2,\ldots, n+\ell\}$ by $g(v)=f(v)-\chi(H_c(1))$, where $$n=\sum_{i=1}^{\chi(H/\C{P}_{\bar{H}})}\chi(H_c(i))$$ and the arithmetic is done modulo $n+\ell$. Then $g$ is a coloring of $H$. Since $f(V(H_c(i)))\cap g(V(H_c(i)))=\emptyset$ and each edge in $E(G)-E(H)=E(\overline{H}(G))$ connects two vertices in $H_c(i)$ for some $i=1,2,\ldots, \chi(H/\C{P}_{\bar{H}})$, we can see that $f$ and $g$ are compatible. Hence, $$\chi_G(H)\le \sum_{i=1}^{\chi(H/\C{P}_{\bar{H}})}\chi(H_c(i))+\ell.$$ Next, we want to show that $$\chi_G(H)\le\sum_{i=1}^{\chi(H/\C{P}_{\bar{H}})} \chi(H_c(i)) + \Delta_{\{\C{D}_f(i)\,|\,i=1,2, \ldots, \chi(H/\C{P}_{\bar{H}})\}}.$$ Again, without loss of generality, we may assume that $\C{D}_f(1)\ge \C{D}_f(2)\ge \cdots \ge \C{D}_f(\chi(H/\C{P}_{\bar{H}}))$. Now, we aim to define another coloring $g$ of $H$ such that $f$ and $g$ are compatible. To do this, first, for the vertices $v$ of $H$ such that the set $f^{-1}(f(v))$ is independent in $\overline{H}(G)$, we define $g(v)=f(v)$. Next, by a method similar to that of the first case, we can extend the function $g$ to the whole graph $H$ so that $f$ and $g$ are compatible colorings of $H$. Finally, by taking the minimum value over all $\chi(H/\C{P}_{\bar{H}})$-colorings $c$ of $H/\C{P}_{\bar{H}}$ and all colorings $f$ of $H$ which respect $c$, we obtain the theorem. \end{pf} The following example shows that the upper bound in Theorem~\ref{complem} is sharp. \begin{exmp} Let $m,n$ be integers with $2\le m \le n$. Let $K_{m-1}$ be the complete graph on the $m-1$ vertices $v_1,\ldots, v_{m-1}$. Let $H_m$ be the spanning subgraph of $K_{n}$ obtained by adding the $n-m+1$ isolated vertices $v_{m},\ldots, v_n$ to $K_{m-1}$. Then $\chi_{K_n}(H_m)=m$. \label{example32} \end{exmp} \begin{pf} To show $m\le \chi_{K_n}(H_m)$, we set $X=V(K_{m-1})$. Then $\chi((H_m)_X)=m$ and hence $m\le \chi_{K_n}(H_m)$ by Corollary~\ref{up-low-bds}. We can show that $\chi_{K_n}(H_m)\le m$ by the two methods contained in the proof of Theorem~\ref{complem}. For the first method, we replace $H_m$ by $(H_m)_X$. We observe that $\overline{(H_m)_X}(K_n)=K_{n-m+1}\cup\{v_1,\ldots, v_{m-1}\}$ and $(H_m)_X/\C{P}_{\overline{(H_m)_X}}=K_{m}$. Let $c$ be an $m$-coloring of $(H_m)_X/\C{P}_{\overline{(H_m)_X}}$ such that $V(((H_m)_X)_c(i))=\{v_i\}$ for each $i=1,2,\ldots, m-1$ and $V(((H_m)_X)_c(m))=\{v_{m}, \ldots, v_n\}$. We note that $\chi(((H_m)_X)_c(i))=1$ for each $i=1,2,\ldots, m$. Since $\Delta_{\{1,1,\ldots,1\}}=0$, by Theorem~\ref{complem} we have $\chi_{K_n}(H_m)=\chi_{K_n}((H_m)_X)\le m$. For the second method, let $c$ be the trivial coloring of $H_m/\C{P}_{\overline{H_m}}=K_1$ and let $f$ be an $(m-1)$-coloring of $H_m$ such that $f(v_i)=i$ for each $i=1,2,\ldots, m-1$ and $f(v_{m})=f(v_{m+1})=\cdots =f(v_n)=1$. Then $f$ respects $c$ and $$\C{D}_f(1)=\chi((H_m)_c(1))-\C{I}_f(1)=\chi(H_m)-\C{I}_f(1)=(m-1)-(m-2)=1.$$ Since $\Delta_{\{1\}}=2-1=1$, by Theorem~\ref{complem} we have $\chi_{K_n}(H_m)\le m$.
\end{pf} Example \ref{example32} can be generalized to the following corollary. \begin{cor}\label{kpart} Let $H$ be a complete $m$-partite graph which is a spanning subgraph of $K_n$. Then $\chi_{K_n}(H)=m$. \end{cor} \begin{pf} We observe that the complement $\overline{H}(K_n)$ of $H$ is also a spanning subgraph of $K_n$, having exactly $m$ components, the vertex set of each being a part of $H$. It is not hard to show that $H/\C{P}_{\bar{H}}$ is also a complete $m$-partite graph (indeed, $K_m$) and that $\chi(H_c(i))=1$ for each $i=1,\ldots, m$. By Theorem~\ref{complem}, $\chi_G(H)\le m$. Since $\chi(H)=m$, Corollary~\ref{up-low-bds} completes the proof. \end{pf} If each component of a spanning subgraph $H$ of a graph $G$ is an induced subgraph of $G$, we can obtain an upper bound for the chromatic number of $H$ with respect to $G$ which is simpler than that in Theorem~\ref{complem}. \begin{thm}\label{part} Let $\C{P}=\{V_1,V_2, \ldots, V_k\}$ be a partition of the vertex set of a connected graph $G$. Let $H$ be the disjoint union of the induced subgraphs $G[V_1]$, $G[V_2]$, $\ldots$, $G[V_k]$. Then we have $$\max_{V_i, V_j}\{ \chi(G[V_i\cup V_j])\}\le \chi_G(H)\le \max_{V_i, V_j}\{ \chi(G[V_i])+\chi(G[V_j])\},$$ where $V_i$ and $V_j$ run over all pairs of adjacent vertices in $G/\C{P}$. \end{thm} \begin{pf} Let $V_i$ and $V_j$ be two adjacent vertices in $G/\C{P}$. Then $G[V_i\cup V_j]$ is a subgraph of $H_{V_i}$. By Corollary~\ref{up-low-bds}, $\chi(G[V_i\cup V_j])\le \chi(H_{V_i}) \le \chi_G(H)$, and hence $$\max\{\chi(G[V_i\cup V_j])\,|\, \mbox{\rm $V_i$ is adjacent to $V_j$ in $G/\C{P}$}\}\le \chi_G(H).$$ For the second inequality, let $$M= \max\{ \chi(G[V_i])+\chi(G[V_j])\,|\, V_i ~\mathrm{is}~\mathrm{adjacent}~\mathrm{to}~V_j~\mathrm{in} ~G/\C{P}\}.$$ By the definition of $M$, there exist $s$, $t$ and an $M$-coloring $f:V(H)\to$ $\{1$, $2$, $\ldots$, $M\}$ of $H$ such that $$ \chi(G[V_s])+\chi(G[V_t])=M,$$ and $$f(G[V_i])=\{1,2,\ldots,\chi(G[V_i])\}$$ for each $i=1$, $2$, $\ldots$, $k$. We note that $f$ may not be surjective. We define another $M$-coloring $g$ of $H$ such that $g(G[V_i])=\{M, M-1, \ldots, M-\chi(G[V_i])+1\}$. Now, we aim to show that $f$ and $g$ are compatible. Let $uv$ be an edge of $G$ which is not in $E(H)$. Then there exist $i\not=j$ such that $u\in V_i$ and $v\in V_j$, and by the definition of $G/\C{P}$, $V_i$ is adjacent to $V_j$ in $G/\C{P}$. If $f(V_i)\cap g(V_j)\not=\emptyset$, then, by the construction of $f$ and $g$, $M<\chi(G[V_i])+\chi(G[V_j])$. This contradicts the choice of $M$. Thus, $f(V_i)\cap g(V_j)=\emptyset$. Similarly, we can see that $g(V_i)\cap f(V_j)=\emptyset$. Therefore, $f(u)\not=g(v)$ and $g(u)\not=f(v)$, i.e., $f$ and $g$ are compatible. This completes the proof. \end{pf} By Corollary~\ref{h/p=bip} and Theorem~\ref{part}, we have the following corollaries. \begin{cor}\label{h/p=bip'} Let $\C{P}=\{V_1,V_2, \ldots, V_k\}$ be a partition of the vertex set of a connected graph $G$. If $G/\C{P}$ is bipartite, then $$\max_{V_i, V_j}\{ \chi(G[V_i\cup V_j])\}\le \chi(G)\le \max_{V_i, V_j}\{ \chi(G[V_i])+\chi(G[V_j])\},$$ where $V_i$ and $V_j$ run over all pairs of adjacent vertices in $G/\C{P}$. \end{cor} \begin{cor}\label{kn1} Let $H$ be a spanning subgraph of a connected graph $G$ such that $H$ has $k$ components $H_1, H_2, \ldots, H_k$ with $\chi(H_i)\ge \chi(H_{i+1})$ for each $i=1,2, \ldots, k-1$.
\begin{enumerate} \item[{\rm (1)}] If the complement $\overline{H}(G)$ of $H$ in $G$ is the complete $k$-partite graph, then we have $\chi_G(H)=\chi(H_1)+\chi(H_2)$. \item[{\rm (2)}] If each component of $H$ is a complete graph, i.e., $H_i=K_{\ell_i}$ for each $i=1,2, \ldots, k$, then we have $\chi_G(H)\le \ell_1+\ell_2$. In particular, if $G$ is the complete graph $K_n$, then we have $\chi_G(H)=\ell_1+\ell_2$. \end{enumerate} \end{cor} \begin{pf} We observe that, in either case, each component $H_i$ of $H$ is an induced subgraph $G[V(H_i)]$ for each $i=1,2,\ldots,k$, and $\C{P}=\{V(H_i)\,|\, i=1,2, \ldots, k\}$ forms a partition of $V(G)$. (1) If the complement $\overline{H}(G)$ is the complete $k$-partite graph, then $H/\C{P}$ is the complete graph $K_k$, i.e., each pair of vertices of $H/\C{P}$ is adjacent in $H/\C{P}$. Since $\chi(G[V(H_1)\cup V(H_2)])=\chi(H_1)+\chi(H_2)$ and $\max\{ \chi(G[V(H_i)])+\chi(G[V(H_j)])\,|\, 1\le i\not=j \le k \}= \chi(H_1)+\chi(H_2)$, by Theorem~\ref{part} we have $\chi_G(H)=\chi(H_1)+\chi(H_2)$. (2) If $H_i=K_{\ell_i}$ for each $i=1,2, \ldots, k$, then, by Theorem~\ref{part}, we have $\chi_G(H)\le \ell_1+\ell_2$. If $G$ is the complete graph $K_n$, then the complement $\overline{H}(G)$ of $H$ in $G$ is the complete $k$-partite graph and, by (1), we have $\chi_G(H)=\ell_1+\ell_2$ \cite{AKLS:BIPARTITE}. \end{pf} \section{Further remarks} \label{fremark} \begin{figure} \begin{align*} \begin{pspicture}[.4](-1.8,-1.8)(1.5,1) \rput(.5,0){\rnode{a1}{$$}} \rput(0,.5){\rnode{a2}{$$}} \rput(-.5,0){\rnode{a3}{$$}} \rput(0,-.5){\rnode{a4}{$$}} \rput(1,0){\rnode{b1}{$$}} \rput(.5,.5){\rnode{b2}{$$}} \rput(0,1){\rnode{b3}{$$}} \rput(-.5,.5){\rnode{b4}{$$}} \rput(-1,0){\rnode{b5}{$$}} \rput(-.5,-.5){\rnode{b6}{$$}} \rput(0,-1){\rnode{b7}{$$}} \rput(.5,-.5){\rnode{b8}{$$}} \ncline{b1}{b3}\middlearrow \ncline{b5}{b1}\middlearrow \ncline{b1}{b7}\middlearrow \ncline{b5}{b3}\middlearrow \ncline{b5}{b7}\middlearrow \rput(0,-1.5){\rnode{c1}{$(G, \phi)$}} \rput(0,-.2){\rnode{c2}{$id$}} \rput(-.65,.69){\rnode{c3}{$id$}} \rput(-.65,-.69){\rnode{c4}{$id$}} \rput(1.1,.8){\rnode{c5}{$(12)(34)$}} \rput(1.1,-.7){\rnode{c6}{$(1234)$}} \pscircle[linewidth=2.7pt](1,0){.1} \pscircle[linewidth=2.7pt](0,1){.1} \pscircle[linewidth=2.7pt](-1,0){.1} \pscircle[linewidth=2.7pt](0,-1){.1} \end{pspicture} \begin{pspicture}[.4](-3.2,-1.8)(2.2,1) \psline(-.75,1)(0,.25)(.75,1)(1.75,0)(.75,-1)(0,-.25)(-.75,-1)(-1.75,0)(-.75,1)(-.75,.5)(-1.25,0)(-.75,-.5)(-.75,-1) \psline(-.75,.5)(-.25,0)(-.75,-.5) \psline(.75,1)(.75,.5)(1.25,0)(.75,-.5)(.75,-1) \psline(.75,.5)(.25,0)(.75,-.5) \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](1.75,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](.75,.5){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](.75,-.5){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,.25){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,-.25){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-.75,.5){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-.75,-.5){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-1.75,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](-.75,1){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](-.75,-1){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](-1.25,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](-.25,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](.25,0){.1}
\pscircle[fillstyle=solid,fillcolor=white,linecolor=black](1.25,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](.75,1){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](.75,-1){.1} \rput(0,-1.5){\rnode{c1}{$G^{\phi}$}} \end{pspicture} \\ \begin{pspicture}[.4](-1,-1.8)(1,1) \rput(.7,0){\rnode{a1}{$$}} \rput(0,.7){\rnode{a2}{$$}} \rput(-.7,0){\rnode{a3}{$$}} \rput(0,-.7){\rnode{a4}{$$}} \ncline{a3}{a2} \ncline{a3}{a1} \ncline{a3}{a4} \rput(0,-1.2){\rnode{c4}{$(H,f_1)$}} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](.7,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,.7){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-.7,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,-.7){.1} \end{pspicture} \begin{pspicture}[.4](-1,-1.8)(1,1) \rput(.7,0){\rnode{a1}{$$}} \rput(0,.7){\rnode{a2}{$$}} \rput(-.7,0){\rnode{a3}{$$}} \rput(0,-.7){\rnode{a4}{$$}} \ncline{a3}{a2} \ncline{a3}{a1} \ncline{a3}{a4} \rput(0,-1.2){\rnode{c4}{$(H,f_2)$}} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](.7,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,.7){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](-.7,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,-.7){.1} \end{pspicture} \begin{pspicture}[.4](-1,-1.8)(1,1) \rput(.7,0){\rnode{a1}{$$}} \rput(0,.7){\rnode{a2}{$$}} \rput(-.7,0){\rnode{a3}{$$}} \rput(0,-.7){\rnode{a4}{$$}} \ncline{a3}{a2} \ncline{a3}{a1} \ncline{a3}{a4} \rput(0,-1.2){\rnode{c4}{$(H,f_3)$}} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](.7,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,.7){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](-.7,0){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](0,-.7){.1} \end{pspicture} \begin{pspicture}[.4](-1,-1.8)(1,1) \rput(.7,0){\rnode{a1}{$$}} \rput(0,.7){\rnode{a2}{$$}} \rput(-.7,0){\rnode{a3}{$$}} \rput(0,-.7){\rnode{a4}{$$}} \ncline{a3}{a2} \ncline{a3}{a1} \ncline{a3}{a4} \rput(0,-1.2){\rnode{c4}{$(H,f_4)$}} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](.7,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,.7){.1} \pscircle[fillstyle=solid,fillcolor=white,linecolor=black](-.7,0){.1} \pscircle[fillstyle=solid,fillcolor=darkgray,linecolor=black](0,-.7){.1} \end{pspicture} \end{align*} \caption{A permutation voltage assignment $\phi$ of $G$, compatible colorings of $H = cospt(\phi)$ with $\chi_{G}(H)=2$ and its corresponding $4$-fold covering graph $G^{\phi}$.} \label{exam3} \end{figure} \subsection{Existence of a spanning subgraph $H$ of $G$ with $\chi_G(H) = m$ for any $m$ with $2\le m \le \chi(G)$} For $n\ge 2$ and for any spanning subgraph $H$ of a complete graph $K_n$, we have $2\le \chi_{K_n}(H)\le n$. Conversely, we showed in Example \ref{example32} that for any integer $m$ between $2$ and $n$, there exists a spanning subgraph $H_m$ of $K_n$ such that $\chi_{K_n}(H_m)=m$. One can ask whether this can be extended to an arbitrary connected graph. Let $G$ be a connected graph. For any $m$ with $2\le m \le \chi(G)$, let $H$ be an $m$-critical subgraph of $G$, that is, $\chi(H)=m$ and $\chi(S) <m$ for any proper subgraph $S$ of $H$. Let $\tilde{H}$ be the spanning subgraph of $G$ obtained by adding the $|V(G)|-|V(H)|$ vertices of $G$ not in $H$ to $H$ as isolated vertices. By Theorem \ref{part}, $\chi_G(\tilde{H})=m$.
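An $m$-critical subgraph of $G$ can be extracted greedily: repeatedly delete any edge whose removal keeps the chromatic number at least $m$; when no further deletion is possible, the remaining graph has chromatic number exactly $m$ and removing any edge lowers it. The following brute-force sketch (ours; exponential, so for tiny graphs only) illustrates this on $K_5$:
\begin{verbatim}
from itertools import product

def chrom(vertices, edges):
    vs = list(vertices)
    for k in range(1, len(vs) + 1):
        for c in product(range(k), repeat=len(vs)):
            col = dict(zip(vs, c))
            if all(col[u] != col[v] for u, v in edges):
                return k

def m_critical(vertices, edges, m):
    E = set(edges)
    assert chrom(vertices, E) >= m
    changed = True
    while changed:
        changed = False
        for e in sorted(E):
            if chrom(vertices, E - {e}) >= m:   # e is not needed
                E.remove(e)
                changed = True
    return E   # chi = m, and removing any edge drops chi below m

K5 = {(u, v) for u in range(5) for v in range(5) if u < v}
print(m_critical(range(5), K5, 3))   # a single triangle: K_3 is 3-critical
\end{verbatim}
Adding the remaining vertices of $G$ back as isolated vertices then yields the spanning subgraph $\tilde{H}$ of the construction above.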
\subsection{n-fold covering graphs} For $n$-fold covering graphs, let $S_n$ denote the symmetric group on the $n$ elements $\{1,2,\ldots, n\}$. Every edge of a graph $G$ gives rise to a pair of oppositely directed edges. We denote the set of directed edges of $G$ by $D(G)$. By $e^{-1}$ we mean the reverse edge to an edge $e$. Each directed edge $e$ has an initial vertex $i_e$ and a terminal vertex $t_e$. A {\it permutation voltage assignment} $\phi$ on a graph $G$ is a map $\phi :D(G) \rightarrow S_n$ with the property that $\phi(e^{-1})=\phi(e)^{-1}$ for each $e \in D(G)$. The {\it permutation derived graph} $G^{\phi}$ is defined as follows: $V(G^{\phi})=V(G) \times \{1, \dots ,n\}$, and for each edge $e \in D(G)$ and $j \in \{1, \dots ,n\}$ let there be an edge $(e,j)$ in $D(G^{\phi})$ with $i_{(e,j)} = (i_e,j)$ and $t_{(e,j)}=(t_e,\phi(e)j)$. The natural projection $p_{\phi} :G^{\phi} \rightarrow G$ is a covering. In \cite{GTP,GTB}, Gross and Tucker showed that every $n$-fold covering ${\widetilde G}$ of a graph $G$ can be derived from a permutation voltage assignment. Let $H$ be the spanning subgraph of a graph $G$ which is the co-support of $\phi$, i.e., $V(H)=V(G)$ and $E(H)=\phi^{-1}(id)$, where $id$ is the identity element of $S_n$ and, for $E(H)$, we identify each pair of oppositely directed edges in $\phi^{-1}(id)$. Then our chromatic number of $H$ with respect to $G$ extends naturally as follows: colorings $f_1, f_2, \ldots, f_n$ of $H$ are \emph{compatible} if, for each $e^{+}=(u,v)\in D(G)-D(H)$, $f_i(u)\neq f_{\phi((u,v))(i)}(v)$ for $i=1$, $2$, $\ldots$, $n$. The smallest number of colors such that $H$ has an $n$-tuple of compatible colorings is called the \emph{$n$-th chromatic number of $H$ with respect to $G$} and is denoted by $\chi_{G}(H)$. Unlike the case of double coverings, estimating the $n$-th chromatic numbers of $H$ with respect to $G$ is not easy. The asymptotic behavior of the chromatic numbers of non-isomorphic $n$-fold coverings could be very fascinating compared with the result of Amit, Linial, and Matousek~\cite{ALM}. We conclude the discussion with an example. It is easy to see that all odd-fold covering graphs of the graph $G$ in Figure \ref{exam1} have chromatic number $3$. We provide a $4$-fold covering graph derived from the permutation voltage assignment $\phi$ in Figure \ref{exam3}, together with $4$ compatible colorings $f_1$, $f_2$, $f_3$ and $f_4$ of the spanning subgraph $H= cospt(\phi)$. \vskip .5cm \noindent{\bf Acknowledgements} The authors would like to thank Younghee Shin for her attention to this work. We also thank the referees, who were very helpful and constructively critical during the refereeing and revising process. The \TeX\, macro package PSTricks~\cite{PSTricks} was essential for typesetting the equations and figures. \bibliographystyle{amsalpha}
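\medskip \noindent As a final illustration of the permutation derived graph defined above, the following sketch (ours; plain Python, with permutations stored as tuples $p$ acting by $j\mapsto p[j]$) builds $G^{\phi}$ from one dart per edge; a triangle carrying the voltages $id$, $id$ and a $3$-cycle in $S_3$ lifts to a $9$-cycle:
\begin{verbatim}
def derived_graph(vertices, darts, phi, n):
    """One dart (u, v) per edge suffices: frozensets remove duplicates,
    provided phi[(v, u)] is the inverse permutation of phi[(u, v)]."""
    V = [(v, j) for v in vertices for j in range(n)]
    E = {frozenset({(u, j), (v, phi[(u, v)][j])})
         for (u, v) in darts for j in range(n)}
    return V, E

id3 = (0, 1, 2)
cyc = (1, 2, 0)                 # the 3-cycle j -> j+1 (mod 3)
phi = {("a", "b"): id3, ("b", "c"): id3, ("c", "a"): cyc}
V, E = derived_graph("abc", list(phi), phi, 3)
print(len(V), len(E))           # 9 vertices, 9 edges: a 9-cycle
\end{verbatim}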
\section{Introduction} \label{Sec: Introduction} Studies of the rotation curves of spiral galaxies during the 1960s-1980s led to the conclusion that about 80 percent of the total mass of galaxies is in the form of dark matter (DM) \cite{1970ApJ...159..379R, Freeman1970,1980ApJ...238..471R}. Subsequent studies used different methods (e.g. gravitational lensing \cite{Natarajan2017,Cho2017}, cosmic microwave background analysis \cite{PlanckCollaboration2015}, the Lyman-alpha forest \cite{Viel2009,Garzilli2019,Baur2016}, N-body simulations of the universe and of galaxies \cite{Vogelsberger2020}, etc.) to infer that DM must be present in both the large-scale and the small-scale structures (e.g. dwarf galaxies) of the universe. \\ From the DM density profiles of galaxies (e.g. the NFW or Einasto density profiles), it follows that DM is distributed non-uniformly throughout galaxies: the DM density is highest in the central regions and decreases toward the edges. It is therefore logical to suppose that all astronomical objects inside galaxies, including stars and stellar clusters, are immersed in DM, and thus that DM affects the physics of everything inside galaxies. \\ Over time, stars inside galaxies can absorb DM particles from their surroundings. The DM particles gathered inside stars are predicted to affect the physics of stars mainly in two ways: \begin{itemize} \item $\: \:$ DM particles can transfer energy (whether they annihilate or not) between different layers of a star. In this way, the temperature profile, pressure profile, chemical composition, and many other physical parameters of the star can be altered in comparison to the standard stellar evolutionary model. \item $\: \:$ If DM particles annihilate inside stars, they can act as a new source of energy, besides the energy that comes from the baryonic energy production cycles, e.g. the pp, CNO and triple-$\alpha$ cycles. If this is the case, DM can also alter the luminosity and temperature of stars on the H-R diagram. In this way, stars with the same mass and the same initial chemical composition, but surrounded by different DM densities, will follow different evolutionary paths on the H-R diagram (from this, one can infer that the non-uniform distribution of DM inside GCs may be one of the reasons for the presence of multiple stellar populations in GCs; see Sec.~\ref{Sec: Results and simulations} for more details, and see the papers \cite{TurckChieze:2012dc,2009MNRAS.394...82S} for detailed reviews of DM effects on stars). \end{itemize} In the last few decades, signs of DM effects on stars have been investigated in the literature. For instance: \begin{itemize} \item In simulated dwarf galaxies, the DM halo around a dwarf galaxy is heated up by the stars inside it; the more the dwarf galaxy evolves, the more its DM halo is heated \cite{Read2019}. \item Stars near the Galactic massive black hole simultaneously show signs of both young and old stars (a problem known as the ``paradox of youth'' in the community). Considering DM effects on stars, it is possible to solve this problem \cite{Hassani_2020zvz}.
\item In 1978, Steigman proposed DM effects on the sun as a possible solution to the solar neutrino problem \cite{Hassani_2020zvz} (though, after the discovery of neutrino oscillations by the Super-Kamiokande experiment in 1998 \cite{1998PhRvL..81.1562F}, this problem has been regarded as solved by neutrino oscillations rather than by DM \cite{Fisher1999}). \item In addition to normal stars, DM effects on other celestial bodies, like the moon \cite{2020PhRvD.102b3024C, 2020PhLB..80435403G}, planets \cite{Leane2021, 2012JCAP...07..046H}, neutron stars (NS) \cite{Rezaei_2017, REZAEI20181, 2018JCAP...09..018B, 2018JHEP...11..096K, 2018ApJ...863..157C, 2013PhRvD..87l3507B, Raj2018, Joglekar2020, Baryakhtar2017}, white dwarf stars (WD) \cite{2019JCAP...08..018D, 2018PhRvD..98f3002C, 2018PhRvD..98k5027G, 2016MNRAS.459..695A}, black holes \cite{2012PhRvD..85b3519M, 2009JCAP...08..024U, Belotsky2014} and binary star systems \cite{Hassani2020b}, have been investigated in the literature. \end{itemize} Globular clusters (GCs) are among the oldest members of our galaxy, the Milky Way \cite{VandenBerg2013}. According to the traditional view of GCs, it is believed that \cite{Gratton2019}: \begin{itemize} \item Stars inside GCs have a similar chemical composition, as they evolved from the same giant molecular cloud. \item Stars inside GCs have nearly the same age. \item Stars inside GCs are located at nearly the same distance from the Earth. \end{itemize} However, modern spectroscopic and photometric studies of GCs do not support this traditional view. If this classical view of GCs were correct, the H-R diagram of a typical GC would be a narrow sequence. Instead, most H-R diagrams of GCs split into at least two separate sequences. As an example, Fig. (\ref{Fig: Color_Magnitude_Diagram_NGC2808}) illustrates the color-magnitude diagram of the $ \omega $ Centauri GC. In this figure, stars with higher metallicities are less luminous than the low-metallicity stars. This causes the H-R diagram of the $ \omega $ Centauri GC to be thicker than anticipated according to the traditional view of globular clusters. The presence of multiple stellar populations can also be detected from spectroscopic analyses of globular clusters \cite{Masseron2019, Gratton2011, Wang2020}. As an example, the elemental abundances of the stars of the $ \omega $ Centauri GC (or NGC 5139, which is also the largest known GC of our Galaxy) vary from star to star \cite{Masseron2019, Gratton2011}.\\ \begin{figure*} \centering \includegraphics[width=1.6 \columnwidth]{Color_Magnitude_Diagram_NGC2808.png} \caption{\label{Fig: Color_Magnitude_Diagram_NGC2808} (Colour online) The color-magnitude diagram of the $ \omega $ Centauri GC (or NGC 5139, the largest known GC of our Galaxy), which shows the presence of multiple stellar populations. Stars with different [Fe/H] values are color-coded. At least three separate generations of stars are detected in this GC \cite{Gratton2011}. Figure from Ref. \cite{Johnson2020}.} \end{figure*} These findings, and many more, challenge our current view of stellar and stellar-cluster evolution. Many models have been developed to resolve the discrepancies between the observed and the theoretically predicted features of GCs, but each model has its own deficiencies, and no model has been able to solve the multiple stellar populations problem (MSPP) completely. Figure 6 of Ref.
\cite{Bastian2018a} summarizes a list of some well-known models developed to solve the MSPP in GCs, together with their successes and shortcomings. \\ As an example, one of the first models proposed to solve the MSPP is the ``asymptotic giant branch stars (or AGB stars) model'' \cite{Cottrell1981}. According to this model, first-generation (1G) stars form inside the GC's initial giant molecular cloud. High-mass stars evolve faster than low-mass ones. When the high-mass stars reach the AGB phase, they usually become long-period variable stars and lose mass through stellar winds. The material ejected from these high-mass stars then pollutes the interstellar medium, where the second generation (2G) of stars is forming. In this way, the initial chemical compositions of the 1G and 2G stars are not the same, although they formed inside the same giant molecular cloud \cite{Bastian2018a}. The AGB scenario solves some aspects of the MSPP successfully, but it has shortcomings too. For instance, the fractions of 1G and 2G stars are almost the same inside most of the observed GCs, whereas the AGB scenario predicts that only a small fraction of 2G stars can be produced in this way (a problem known as the ``mass-budget problem'' in the community). In addition, the AGB scenario cannot reproduce the observed Na-O anti-correlation seen in the spectra of many GCs that host multiple stellar populations \cite{Bastian2018a,Conroy2011}.\\ Another example of the scenarios proposed to solve the MSPP is the ``fast-rotating massive stars (FRMSs) scenario'' \cite{Decressin2007, Decressin2007a}. High-mass stars burn Hydrogen in their central regions through the CNO energy production cycle, whose by-product elements differ from those of the pp cycle \cite{2012sse..book.....K}. The fast rotation of high-mass stars brings the enriched material from the central regions to the outer regions of the stars. The material ejected from the FRMSs then pollutes the interstellar medium of the host GC. As in the AGB scenario, the 2G of stars forms and evolves from the polluted material produced by the 1G FRMSs. Like the AGB scenario, the FRMSs scenario, besides its successes, suffers from the mass-budget problem \cite{Bastian2018a, Conroy2011}. In the current study, we investigate the possibility of solving the MSPP using DM, using the previously predicted DM effects on stars to provide an alternative possible solution. \\ In Sec. (\ref{Sec: DM effects on stars}), as mentioned above, we review DM effects on stars. Sec. (\ref{Sec: DM in globular clusters}) is devoted to a discussion of the DM content of GCs and the DM density distribution inside GCs. The results of our simulations are presented in Sec. (\ref{Sec: Results and simulations}). Finally, a discussion of our simulations and results is presented in Sec. (\ref{Sec: Discussions}). \section{Dark Matter effects on stars} \label{Sec: DM effects on stars} Over time, stars inside galaxies can absorb and accrete DM particles. By definition, the capture rate of DM particles by a massive body (a star, planet, white dwarf, etc.) is the total number of DM particles that are (after weak interactions with baryonic matter) absorbed and gravitationally bound by that massive body per unit time \cite{Gould1987}. The capture of DM particles by Hydrogen atoms inside stars can be calculated using Eq.
(11) of the paper \cite{Hassani2020b}: \begin{multline} \label{Eq: CR_by_Hydrogen_atoms} C_{\chi ,H} = \left [ 4\sqrt{6\pi } \frac{\rho_{\chi}}{m_{\chi}} \frac{1}{\overline{v}_{\chi }v_{\ast}} exp(-\frac{3v^{2}_{\ast}}{2\overline{v}^{2}_{\chi}}) \right ] \\ \left [ \sigma_{\chi,SI} + \sigma_{\chi,SD} \right ] \left [ \int_{0}^{R_{\ast}} n_{H}(r) r^{2} dr \right ] \times \\ [ \int_{0}^{\infty } exp(-\frac{3u^{2}}{2\overline{v}^{2}_{\chi}}) sinh(\frac{3uv_{\ast}}{\overline{v}^{2}_{\chi}}) (v_{e}^{2}-\frac{\mu_{-,H}^{2}}{\mu_{H}}u^{2}) \theta (v_{e}^{2}-\frac{\mu_{-,H}^{2}}{\mu_{H}}u^{2}) \\ du ]. \end{multline} The capture of DM particles by elements heavier than Hydrogen can be calculated using Eq. (12) of the same paper \cite{Hassani2020b}: \begin{multline} \label{Eq: CR_by_heavier_elements} C_{\chi ,i} = \left [ 8\sqrt{6\pi } \frac{\rho_{\chi}}{m_{\chi}^{2}} \frac{E_{0}}{\overline{v}_{\chi }v_{\ast}} \frac{\mu^{2}_{+,i}}{\mu_{i}} exp(-\frac{3v^{2}_{\ast}}{2\overline{v}^{2}_{\chi}}) \right ] \\ \left [ \sigma_{\chi,SI} A_{i}^{2} (\frac{m_{\chi}m_{n,i}}{m_{\chi}+m_{n,i}})^{2}(\frac{m_{\chi}+m_{p}}{m_{\chi}m_{p}})^{2} \right ] \left [ \int_{0}^{R_{\ast}} n_{i}(r) r^{2} dr \right ] \times \\ [ \int_{0}^{\infty } exp(-\frac{3u^{2}}{2\overline{v}^{2}_{\chi}}) sinh(\frac{3uv_{\ast}}{\overline{v}^{2}_{\chi}}) \: \times \\ \left \{ exp(-\frac{m_{\chi}u^{2}}{2E_{0}}) - exp(-\frac{m_{\chi}u^{2}}{2E_{0}}\frac{\mu_{i}}{\mu^{2}_{+,i}}) exp(-\frac{m_{\chi}v_{e}^{2}}{2E_{0}}\frac{\mu_{i}}{\mu^{2}_{-,i}} (1-\frac{\mu_{i}}{\mu^{2}_{+,i}})) \right \} \\ du ]. \end{multline} In Eqs. (\ref{Eq: CR_by_Hydrogen_atoms}) and (\ref{Eq: CR_by_heavier_elements}), $ \rho_{\chi} $ is the DM density at the location of the star, $ m_{\chi} $ is the mass of the DM particles, $ m_{n,i} $ is the nuclear mass of the element $i$, $ m_{p} $ is the mass of a proton, $ A_{i} $ is the atomic number of the element $i$, $ \overline{v}_{\chi} $ is the velocity dispersion of the DM particles, $ v_{\ast} $ is the velocity of the star relative to the DM halo, $ u $ is the velocity of the DM particles (the velocity distribution of DM particles at the location of the star is usually taken to be a Maxwell--Boltzmann distribution \cite{2009MNRAS.394...82S}), $ v_{e} $ is the escape velocity from the surface of the star, $ \sigma_{\chi,SD} $ is the spin-dependent DM-nucleon scattering cross-section, $ \sigma_{\chi,SI} $ is the spin-independent DM-nucleon scattering cross-section, $ n_{H}(r) $ is the number density of Hydrogen atoms at distance $r$ from the centre of the star, $ n_{i}(r) $ is the number density of the element $i$ at distance $r$ from the centre of the star, $ R_{\ast} $ is the radius of the star, $ \theta $ is the step function, $ E_{0} = (3 \hbar^{2})/(2 m_{n,i}(0.91 m_{n,i}^{1/3}+0.3)^{2})$ is the characteristic coherence energy, a constant (see Ref. \cite{Gould1987} for more details), and $ \mu_{i}$ and $\mu_{\mp,i}$ are defined by $ \mu_{i} = m_{\chi}/m_{n,i} $ and $ \mu_{\mp,i} = (\mu_{i}\mp1)/2 $. \\ Each of Eqs. (\ref{Eq: CR_by_Hydrogen_atoms}) and (\ref{Eq: CR_by_heavier_elements}) consists of four brackets. The first two brackets are constant and can be calculated analytically. The third brackets are integrals over the distance $r$ from the centre of the star. Although the third brackets cannot, in general, be calculated analytically, they can be evaluated numerically using state-of-the-art stellar evolutionary codes. In this study, we embedded Eqs.
(\ref{Eq: CR_by_Hydrogen_atoms}) and (\ref{Eq: CR_by_heavier_elements}) in the latest version (version MESA-r21.12.1) of the MESA stellar evolutionary code to calculate the capture rate of DM particles by stars. MESA is a publicly available, open-source code that can simulate the evolution of stars from very low-mass to very high-mass ones ($ 10^{-3} - 10^{3} \: M_{\odot} $). For the full capabilities of the MESA code, see its official papers \cite{MESA_2015,2013ApJS..208....4P,2011ApJS..192....3P,MESA_2019,MESA_2018}. \\ If DM particles annihilate inside stars, then they can act as a new source of energy inside stars (besides the energy sources that come from the pp and CNO baryonic energy production cycles). By multiplying the total capture rate from Eqs. (\ref{Eq: CR_by_Hydrogen_atoms}) and (\ref{Eq: CR_by_heavier_elements}) by $ m_{\chi} c^{2} $, it is possible to calculate the extra luminosity that is produced by DM particle annihilation: \begin{equation} \label{Eq: DM_particles_annihilation} L_{\chi} = \Big( C_{\chi,H} + \sum_{i} C_{\chi,i} \Big) \, m_{\chi} c^{2}, \end{equation} where $ L_{\chi} $ is the luminosity produced through DM particle annihilation and $ c $ is the speed of light. \section{DM in globular clusters} \label{Sec: DM in globular clusters} In this section, we make an order-of-magnitude estimate of the average DM density inside GCs. Although it is believed that most of the DM content of GCs has been stripped away by tidal interactions with their host galaxies, a typical GC can still keep about 20 percent of its initial DM content \cite{Baumgardt2008}. \\ To estimate the average DM density of a typical GC, consider the $ \omega $ Centauri GC (or NGC 5139) as an example. Its physical parameters are presented in table (\ref{Tab: physical_parameters_of_GCs_in_the_milky_way}). According to table (\ref{Tab: physical_parameters_of_GCs_in_the_milky_way}), the total mass of this GC is about $ M = 3.34 \times 10^{6} \: M_{\odot} $ and its V-band mass-to-light ratio is about $ \Upsilon $ = 2.68 $ (M_{\odot}/L_{\odot}) $. That $ \Upsilon $ is bigger than one ($ \Upsilon > 1$) means that we receive no light from about 63 percent (that is, $ (2.68-1)/2.68 \times 100 \simeq 63 \% $) of the total mass of the $ \omega $ Centauri GC. Assuming that about 20 percent of the total mass of the $ \omega $ Centauri GC is in the form of DM (and that the rest of the dark mass is in the form of white dwarfs, neutron stars, interstellar gas, etc.), the total DM mass of this GC becomes: \\ \begin{equation} \label{Eq: DM_Mass_of_w_centauri} M_{\chi, \omega \: Centauri} = \dfrac{20}{100} \times 3.34 \times 10^{6} M_{\odot} = 0.67 \times 10^{6} M_{\odot}. \end{equation} Then, the average DM density inside the $ \omega $ Centauri GC becomes: \begin{equation} \label{Eq: DM_Density_of_w_centauri} \overline{\rho}_{\chi, \omega \: Centauri} = \dfrac{\dfrac{1}{2} M_{\chi, \omega \: Centauri}}{\dfrac{4}{3} \pi R^{3}} \simeq 70 \: (M_{\odot}/pc^{3}). \end{equation} In Eq. (\ref{Eq: DM_Density_of_w_centauri}), $R$ is the half-mass radius of the $ \omega $ Centauri GC. We assumed that half of the total mass of this GC lies within the half-mass radius; for this reason, the numerator of Eq. (\ref{Eq: DM_Density_of_w_centauri}) is multiplied by $\dfrac{1}{2}$. According to Eqs. (\ref{Eq: DM_Density_of_w_centauri}) and (\ref{Eq: DM_Density_Sun}), the average DM density inside the $ \omega $ Centauri GC is about $ 7 \times 10^{3} $ times higher than the DM density at the sun's location.
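The estimate of Eqs. (\ref{Eq: DM_Mass_of_w_centauri}) and (\ref{Eq: DM_Density_of_w_centauri}) is easy to script for any cluster in the catalogue. A minimal sketch (plain Python; the DM fraction $f_{dm}=0.2$ and the assumption that half of the cluster mass lies within the half-mass radius are the same assumptions as above, and the function name is ours):
\begin{verbatim}
import math

def mean_dm_density(M_total_msun, R_halfmass_pc, f_dm=0.2):
    """Average DM density within the half-mass radius, in Msun/pc^3."""
    M_dm = f_dm * M_total_msun                      # Eq. (4)
    volume = 4.0 / 3.0 * math.pi * R_halfmass_pc ** 3
    return 0.5 * M_dm / volume                      # Eq. (5)

rho = mean_dm_density(3.34e6, 10.36)                # omega Centauri values
print(rho, rho / 0.01)   # ~70 Msun/pc^3, i.e. ~7e3 times the solar value
\end{verbatim}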
The DM density at the Sun's location has been estimated to be about \cite{Salucci2010}: \begin{equation} \label{Eq: DM_Density_Sun} \overline{\rho}_{\chi, \odot} = 0.43 \: (GeV/cm^3 ) \simeq 0.01 \: (M_{\odot}/pc^{3}). \end{equation} Assuming that about 20 percent of the total mass of the Milky Way's GCs is in the form of DM, the average DM density for several other GCs is calculated and presented in table (\ref{Tab: physical_parameters_of_GCs_in_the_milky_way}). \begin{table*} \centering \caption{Physical parameters of some known GCs in the Milky Way galaxy. Data are from the online catalogue of GC parameters, which is publicly and freely available at: \url{https://people.smp.uq.edu.au/HolgerBaumgardt/globular/parameter.html}. The catalogue's reference papers are: \cite{Baumgardt2018,Baumgardt2020b,Baumgardt2017}} \begin{threeparttable} \begin{tabular}{|| c | c | c | c | c | c | c || } \hline \hline \shortstack{ Name } & \shortstack{ Distance \tnote{*} \\ (kpc) } & \shortstack{ Radius \tnote{**} \\ (pc) } & \shortstack{ Mass \tnote{***} \\ $(M_{\odot})$ } & \shortstack{$(M/L)_{V}$ \\ $ (M_{\odot}/L_{\odot}) $} & \shortstack{ $\overline{\rho}_{\chi}$ \tnote{****} \\ $(M_{\odot}/pc^{3})$ } & \shortstack{ $\overline{\rho}_{\chi}$ \tnote{*****} \\ $(\overline{\rho}_{\chi,\odot})$ } \\ \hline $ \omega $ Centauri & 5.24 & 10.36 & $ 3.34 \times 10^{6} $ & 2.68 & 70 & $ 7 \times 10^{3} $ \\ NGC 6535 & 6.5 & 3.65 & $ 1.31 \times 10^{4} $ & 3.93 & $ 6.4 $ & $ 6.4 \times 10^{2} $ \\ NGC 6121 & 1.93 & 3.69 & $ 9.3 \times 10^{4} $ & 2.02 & $ 44 $ & $ 4.4 \times 10^{3} $ \\ NGC 5466 & 16.0 & 14.03 & $ 5.47 \times 10^{4} $ & 1.52 & $ 0.47 $ & $ 4.7 \times 10 $ \\ NGC 6642 & 8.05 & 1.51 & $ 6.45 \times 10^{4} $ & 2.79 & $ 447 $ & $ 4.47 \times 10^{4} $ \\ NGC 6316 & 11.6 & 4.77 & $ 5.09 \times 10^{5} $ & 2.85 & $ 112 $ & $ 1.1 \times 10^{4} $ \\ NGC 3201 & 4.6 & 6.78 & $ 1.41 \times 10^{5} $ & 2.37 & $ 11 $ & $ 1.1 \times 10^{3} $ \\ NGC 1851 & 11.33 & 2.90 & $ 2.81 \times 10^{5} $ & 1.91 & $ 2.7 $ & $ 2.7 \times 10^{2} $ \\ Ter 4 & 6.7 & 6.06 & $ 7.95 \times 10^{4} $ & 15.34 & $ 8.5 $ & $ 8.5 \times 10^{2} $ \\ \hline \end{tabular} \begin{tablenotes} \item[*] Distance from the Sun \item[**] Half-mass radius of the GCs \item[***] Total mass of the GCs \item[****] Estimated average DM density of the GCs \item[*****] Estimated average DM density of the GCs in units of the average DM density at the Sun's location, $ \overline{\rho}_{\chi, \odot} \simeq 0.01 \: (M_{\odot}/pc^{3}) $ \cite{Salucci2010} \end{tablenotes} \end{threeparttable} \label{Tab: physical_parameters_of_GCs_in_the_milky_way} \end{table*} The mass-to-light ratio of the central regions of the $ \omega $ Centauri GC was estimated to be about $ (M/L)_V = 6.7 \: (M_{\odot}/L_{\odot}) $, which is higher than the average mass-to-light ratio of the whole GC \cite{Watkins2013}. This means that the DM density in the central regions of this GC is higher than the average DM density of the whole cluster. \\ If DM affects the physics of stars inside GCs, then the evolutionary courses of stars must deviate from the standard stellar evolutionary model. The higher the DM density surrounding a star, the larger the deviation from the standard stellar evolutionary model must be.
In Sec. (\ref{Sec: Results and simulations}), the results of our simulations show that stars with the same mass and the same initial chemical composition, but in different DM density environments (corresponding to different locations inside GCs), follow different evolutionary paths on the H-R diagram. We use these results to propose DM effects on stars as a possible solution to the MSPP in GCs. \\ The calculations of this section aim to show that the average DM density inside the Milky Way's GCs is usually several orders of magnitude higher than the average DM density that surrounds the Sun. Hence, DM effects on stars inside GCs can be significant and must be taken into account. In the rest of this study, we suppose that DM is distributed non-uniformly inside GCs, i.e. its density is higher in the central regions of the GCs. As a result, stars at different locations inside a GC, with the same mass and the same chemical composition, will follow different evolutionary paths on the H-R diagram. \section{Results of simulations} \label{Sec: Results and simulations} In our simulations, weakly interacting massive particles (WIMPs) with mass $ m_{\chi} = 100 \: GeV \: c^{-2} $ are considered as the DM candidate. In Eqs. (\ref{Eq: CR_by_Hydrogen_atoms}) and (\ref{Eq: CR_by_heavier_elements}), the spin-dependent and spin-independent scattering cross-sections are taken to be $ \sigma_{\chi,SD} = 10^{-38} \: cm^{2}$ and $ \sigma_{\chi,SI} = 10^{-44} \: cm^{2} $, respectively. These values are the upper limits determined by experimental DM detection searches \cite{2008PhRvL.100b1303A, 2008PhRvL.101i1301A}. The escape velocity from the surface of the stars, $v_{e}$, is calculated while each star is evolving; we used MESA's built-in functions to compute $v_{e}$ at each time-step of the evolution. In addition, the velocity distribution of DM particles is assumed to be a Maxwell--Boltzmann distribution with a velocity dispersion $ \overline{v}_{\chi} = 270 \: km \: s^{-1}$ \cite{2009MNRAS.394...82S}. The velocity of the stars relative to the DM halo of the GCs is taken to be $ v_{\ast} = 20 \: km \: s^{-1} $. \\ After implementing Eq. (\ref{Eq: DM_particles_annihilation}) in the MESA stellar evolution code and running it, the results of the simulations for a one-solar-mass star are presented in Fig. (\ref{Fig: H_R_Diagram_1_Msun}). In this figure, blue lines represent the evolutionary path of a one-solar-mass star according to the standard stellar evolutionary model (i.e. when DM effects are not taken into account), so the blue lines are the same in all sub-plots of Fig. (\ref{Fig: H_R_Diagram_1_Msun}). \begin{figure*} \centering \includegraphics[width=2 \columnwidth]{Fig_1_WIMPy_1M_Sun.png} \caption{\label{Fig: H_R_Diagram_1_Msun} (Colour online) The evolutionary path of a one-solar-mass star on the H-R diagram in different DM density environments. In all sub-plots, blue lines represent the evolutionary path of a one-solar-mass star according to the standard stellar evolutionary model ($ \rho_{\chi} = 0 $); the blue lines are therefore the same in all sub-plots. Red lines represent the evolutionary path of a one-solar-mass star when DM effects are taken into account ($ \rho_{\chi} \neq 0 $). From sub-plot \ref{Fig: H_R_Diagram_1_Msun}-b to sub-plot \ref{Fig: H_R_Diagram_1_Msun}-d the DM density increases;
accordingly, the deviation between the blue and red lines increases as well.} \end{figure*} \begin{figure*} \centering \includegraphics[width=2 \columnwidth]{Fig_2_WIMPy_Stars_with_different_masses.png} \caption{\label{Fig: H_R_Diagram_WIMPy_Stars} (Colour online) The evolutionary paths of stars with different masses on the H-R diagram. As in Fig. (\ref{Fig: H_R_Diagram_1_Msun}), blue lines represent the evolutionary paths of stars according to the standard stellar evolutionary model ($ \rho_{\chi} = 0 $), and red lines represent the evolutionary paths of stars when DM effects are taken into account ($ \rho_{\chi} \neq 0 $). Sub-plots \ref{Fig: H_R_Diagram_WIMPy_Stars}-a, \ref{Fig: H_R_Diagram_WIMPy_Stars}-b, \ref{Fig: H_R_Diagram_WIMPy_Stars}-c and \ref{Fig: H_R_Diagram_WIMPy_Stars}-d represent the evolution of stars with masses $0.5 \: M_{\odot}$, $ 1.0 \: M_{\odot}$, $ 2.0 \: M_{\odot} $ and $ 3.0 \: M_{\odot} $, respectively. The figure shows that, when DM effects are considered, the evolutionary paths of stars deviate from the standard stellar evolutionary model.} \end{figure*} Red lines in Fig. (\ref{Fig: H_R_Diagram_1_Msun}) represent the evolutionary paths of one-solar-mass stars when DM effects are taken into account. Each red line corresponds to the evolutionary path of a star with a different ambient DM density (i.e. a different value of $ \rho_{\chi} $). As the DM density increases from Fig. (\ref{Fig: H_R_Diagram_1_Msun}-b) to Fig. (\ref{Fig: H_R_Diagram_1_Msun}-d), the deviation between the red and blue lines increases. We can therefore say that a non-uniform distribution of DM inside GCs causes stars with the same mass and the same initial chemical composition, but with different values of $ \rho_{\chi} $, to follow different evolutionary paths on the observed H-R diagram of a GC (e.g. Fig. (\ref{Fig: Color_Magnitude_Diagram_NGC2808})). \\ Fig. (\ref{Fig: H_R_Diagram_WIMPy_Stars}) presents the results of our simulations for stars with different masses. As in Fig. (\ref{Fig: H_R_Diagram_1_Msun}), blue lines represent the evolution of stars according to the standard stellar evolutionary model, and red lines represent the evolution of stars when DM effects are taken into account. Fig. (\ref{Fig: H_R_Diagram_WIMPy_Stars}) shows that, when DM effects are taken into account, stars with the same mass (in each sub-plot) and the same initial chemical composition follow different evolutionary paths on the H-R diagram. \\ Assuming that the DM density distribution inside GCs is not uniform, stars at different locations inside a GC are immersed in different DM density environments. Hence, stars with the same mass and the same initial chemical composition will follow different evolutionary paths on the H-R diagram. These results might answer the question of why we see different generations of stars inside GCs. We discuss this further in Sec. (\ref{Sec: Discussions}). \section{Discussions and Conclusions} \label{Sec: Discussions} According to the results of our simulations (which are in agreement with the results of previous works, e.g. \cite{2009MNRAS.394...82S}), the presence of DM inside GCs causes the evolutionary paths of stars to deviate from the standard stellar evolutionary model.
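To give a feeling for the magnitudes involved, the extra luminosity of Eq. (\ref{Eq: DM_particles_annihilation}) can be evaluated for an assumed capture rate. The following back-of-the-envelope Python snippet uses a purely hypothetical capture rate of $10^{30}$ particles per second, which is not a value taken from our simulations:
\begin{verbatim}
# Hypothetical capture rate -> extra luminosity, L_chi = C * m_chi * c^2
C_capture  = 1.0e30      # assumed capture rate [1/s] (hypothetical)
m_chi_GeV  = 100.0       # WIMP mass [GeV]
GeV_to_erg = 1.602e-3    # 1 GeV in erg
L_sun      = 3.828e33    # solar luminosity [erg/s]

L_chi = C_capture * m_chi_GeV * GeV_to_erg   # [erg/s]
print(f"L_chi = {L_chi:.2e} erg/s = {L_chi / L_sun:.2e} L_sun")
\end{verbatim}
For this assumed rate the DM heating is only $\sim4\times10^{-5}\:L_{\odot}$; whether the DM luminosity is dynamically important therefore depends entirely on the capture rate set by Eqs. (\ref{Eq: CR_by_Hydrogen_atoms}) and (\ref{Eq: CR_by_heavier_elements}) and on the ambient DM density.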
The higher the DM density around a star, the more its evolution deviates from the standard stellar evolutionary model (see Fig. (\ref{Fig: H_R_Diagram_1_Msun}) and Fig. (\ref{Fig: H_R_Diagram_WIMPy_Stars})). \\ It is anticipated that GCs inside galaxies have lost most of their DM content through tidal interactions with their host galaxies \cite{Wirth2020}. However, in Sec. (\ref{Sec: DM in globular clusters}) we discussed the possibility that GCs may have kept some portion of their initial DM content. The mass-to-light ratios of most of the Milky Way's GCs are greater than one (see table (\ref{Tab: physical_parameters_of_GCs_in_the_milky_way})), which means that we do not receive any light from part of the mass of these GCs. Assuming that just 20 percent of this dark part of the GCs is in the form of DM, we estimated the average DM density for some GCs. In almost all cases, $ \overline{\rho}_{\chi} $ is several orders of magnitude larger than the average DM density around the Sun (see table (\ref{Tab: physical_parameters_of_GCs_in_the_milky_way}) for the estimated values of $ \overline{\rho}_{\chi} $). \\ Assuming that DM is distributed non-uniformly inside GCs (e.g. following an NFW or Einasto DM density profile \cite{Merritt2006a}), stars with the same mass and the same initial chemical composition will follow different evolutionary paths on the H-R diagram. As an example, a one-solar-mass star near the central regions of a GC (where the star is surrounded by a higher DM density) will follow a different evolutionary path on the H-R diagram compared to a one-solar-mass star that evolves near the outer regions of the same GC (see Fig. (\ref{Fig: H_R_Diagram_1_Msun})). \\ In addition to the location of stars on the H-R diagram, the presence of DM can alter the amount of time that stars spend in each evolutionary phase (e.g. the main-sequence phase or the red-giant phase). As an example, consider the main-sequence phase. By definition, it is the phase during which most of the energy of a star comes from Hydrogen fusion in its core. Depending on their mass, stars convert Hydrogen to Helium through the pp or CNO energy production cycles \cite{2000itss.bookP}. Both the pp and CNO energy production cycles are strong functions of the temperature (that is, $\varepsilon_{pp} \varpropto T^{4} $ and $\varepsilon_{CNO} \varpropto T^{16} $, \cite{2000itss.bookP}). Thus, if DM particles annihilate inside stars, they alter the temperature of the stellar core, which causes stars to consume Hydrogen at different rates compared to models without DM. We therefore infer that the presence of DM affects the elemental abundances of stars too. \\ Our overall result from this discussion is that, if the presence of DM can alter the temperature of stars, then it can alter the age, chemical composition, luminosity, and many other physical parameters of stars as well. \\ Likewise, if the presence of DM alters the rates of the pp and CNO energy production cycles inside stars, then the chemical compositions of stars can be affected by the presence of DM too. However, because of the lack of knowledge about the exact physical nature of DM, it is hard to say more about the exact consequences of DM effects on stars. \\ Our overall conclusion is that, if the presence of DM alters the luminosity, temperature, chemical composition, age, etc. of stars, then its presence can be considered as a possible solution to the multiple stellar populations problem in GCs.
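As a quick numerical illustration of the temperature sensitivity quoted above, a modest change in the core temperature translates into a large change in the nuclear energy generation rates:
\begin{verbatim}
# eps_pp ~ T^4 and eps_CNO ~ T^16: boost factors for small changes in T
for dT in (0.01, 0.05, 0.10):    # fractional change in core temperature
    print(f"dT/T = {dT:4.0%}:"
          f" pp x{(1 + dT)**4:.2f},"
          f" CNO x{(1 + dT)**16:.2f}")
\end{verbatim}
A 5 percent increase in core temperature already boosts the CNO rate by a factor of about 2.2, while the pp rate changes by only about 22 percent.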
\section{Acknowledgement} \label{Sec_ACKNOWLEDGMENTS} Special thanks are due to Dr Amin Rezaei Akbarieh from the University of Tabriz, Iran; Prof. Kenneth Freeman from the Research School of Astronomy and Astrophysics, Australian National University; Prof. Nate Bastian from Liverpool John Moores University, England; and Dr Marco Taoso from the National Institute of Nuclear Physics (INFN), Turin, Italy, for their helpful discussions during this research. The figures of this work were generated using Python's visualization library Matplotlib v3.2.1 \cite{Hunter2007}. This research also made use of the Python data analysis library Pandas \cite{McKinney2010}. \section{Data availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{apsrev4-2_16}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The joint detection of a gravitational wave (GW) event GW170817 and a short gamma-ray burst (SGRB) GRB 170817A confirms that at least some SGRBs originate from double neutron star (NS) mergers \citep{Abbott2017a,Abbott2017b,Goldstein2017,Zhang2018}. Later, another NS merger event, GW190425, was discovered \citep{Abbott2020}, and a sub-threshold GRB, GBM-190816, was reported to be possibly associated with a sub-threshold GW event \citep{Yang2019,Goldstein2019}. While the GW observations alone can provide constraints on the NS equation of state \citep[e.g.][]{Abbott2017a,Abbott2020}, joint GW-EM detections would provide further useful information about the physics of NS mergers and the associated GRBs, including the jet launching mechanism, jet structure, jet composition, as well as the GRB radiation mechanism \cite[e.g.][]{Troja2017,Zhang2018,Mooley2018a,Mooley2018b,Gill2019,Geng2019,Zhang2019,Yang2019,Troja2019,Ryan2020}. Copious electromagnetic (EM) signals are expected to be generated before and after the NS merger \citep[for reviews, see][]{Berger2014,Fernandez2016,Zhang2018book,Metzger2019}. Prior to the merger, EM signals can be produced by the interaction between the magnetospheres of the two NSs \citep{Hansen2001,Lai2012,Palenzuela2013,Wang2016,Wang2018} or by possible crust cracking of one or both NSs \citep{Tsang2012,Suvorov2020}. These mechanisms can lead to gamma-ray signals, which could be observed as precursor emission of SGRBs \citep[e.g.][]{Troja2010,Wang2018}. Precursor emission of SGRBs can also be produced after the merger. If the main SGRB emission is produced by the standard GRB mechanism (e.g. synchrotron radiation in an internal shock or magnetic dissipation site), a thermal precursor may be produced either as the shock breaks out from the surrounding ejecta or as the fireball ejecta reaches its photosphere radius \citep[e.g.][]{Meszaros2000,Ramirez-Ruiz2002}. Many efforts have been made to search for precursor emission of GRBs. Precursors were first identified in long GRBs \citep[e.g.][]{Lazzati2005,Hu2014,Zhang2018a}. For NS-merger-origin SGRBs, intense, short $\gamma$-ray emission is expected to occur shortly after the merger. The detection of precursor emission is therefore of great interest for diagnosing the physical processes right before or shortly after the merger. Observationally, identifying a weak signal before the main SGRB signal often suffers from instrumental biases, such as the energy range and sensitivity. The {\em Fermi}/Gamma-ray Burst Monitor (GBM) covers a broad energy band (from $\sim8$ keV to $40$ MeV), while the {\em Swift}/Burst Alert Telescope (BAT) is more sensitive in the $15-150$ keV range. Thus {\em Swift}/BAT would have a higher chance of detecting soft, weak precursors. Indeed, \cite{Troja2010} found that $\sim8\%-10\%$ of SGRBs detected by {\em Swift}/BAT are associated with precursor activities, while in the SPI-ACS/INTEGRAL SGRB catalog, only $<0.4\%$ of the SGRBs are found to have precursor emission \citep{Minaev2017}\footnote{One event, GRB 100717, was actually regarded as a long GRB in the {\em Fermi}/GBM catalog. However, \cite{Wang2018} analyzed the spectra of GRB 100717 and found that this event can be well explained as an SGRB with a precursor generated by the magnetospheric interaction between two merging NSs.}. Recently, \cite{Zhong2019} analyzed the {\em Swift} and {\em Fermi}/GBM SGRB data and found that $2.7\%$ of SGRBs have precursor emission.
In this paper, we study the precursor emission of SGRBs in detail, both observationally and theoretically. In Section 2, we first perform a systematic search for precursors in the {\em Fermi}/GBM SGRB catalog and then perform detailed data analyses to extract the temporal and spectral information of both the precursor and the main SGRB emission. In Section 3, we discuss the validity of several precursor models and constrain these models using the observations. The conclusion and discussion are presented in Section 4. \section{Data analysis and results} \subsection{Precursor emission in {\em Fermi}/GBM SGRB sample} SGRBs are usually classified based on the duration criterion $T_{90}\lesssim 2$ s. However, since the duration of GRB 170817A (associated with GW170817) is $2.05$ s \citep{Zhang2018}, in this paper we adopt a more conservative criterion, $T_{90}\lesssim3$ s, to identify SGRB candidates. As of April 2020, {\em Fermi}/GBM had detected 529 such SGRB candidates \citep[][see also the online catalog\footnote{https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html}]{Bhat2016}. GBM consists of twelve sodium iodide (NaI) detectors (sensitive to the 8 keV - MeV band) pointing in different directions and two bismuth germanate (BGO) detectors (sensitive to the 200 keV - 40 MeV band). We select the two NaI detectors that have the smallest angular separations with respect to the sky position of the corresponding GRB. The Time-Tagged Event data from the two NaI detectors, which provide the arrival times and energies of individual photons, are used to construct the light curve. We select the data with photon arrival time between $T_0-50$ s and $T_0+30$ s, where $T_0$ is the GRB trigger time. Using the Bayesian Block (BB) algorithm \citep{Scargle2013} in the Astropy package \citep{Astropy2018}, we segment the photons into a sequence of time blocks, as shown in Fig. \ref{fig:1}. We then search for precursor emission in these time block sequences. A precursor is defined as the first pulse in the light curve. It must satisfy the following three requirements: (1) the peak flux is lower than that of the main pulse; (2) the flux during the waiting time period (the time interval between the precursor and the main pulse) is consistent with the background level; (3) the significance level is larger than $3\sigma$. The first two requirements are the common definitions used to identify precursor emission in SGRBs. The third is imposed in our study to reduce false-alarm signals. The second or main pulse is regarded as the main SGRB. To further strengthen the connection between the SGRB and the precursor, we also examined whether the precursor emission is significant only in the detectors in which the main pulse is bright. Then we follow the common definition of $T_{90}$ to calculate the durations of the precursor emission $(T_{\rm pre})$ and the main SGRB ($T_{\rm GRB}$), as well as the waiting time $(T_{\rm wt})$ in between. The significance level of the precursor depends on the time-bin size, the energy band, and the background level. We take the background data from two time intervals, i.e., 30 s before the precursor and 30 s after the main SGRB. We then simultaneously vary the energy band and bin size (limited to $<0.5T_{\rm pre}$) to determine the maximum significance level. \subsection{Properties of the precursor and the main SGRB emission} Using the above three requirements, we identify 16 precursor events of SGRBs in the {\em Fermi}/GBM catalogue, accounting for 3.0\% of the full sample.
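As a concrete reference for the segmentation step described above, the block decomposition can be reproduced with Astropy's implementation of the Bayesian Block algorithm. The following Python sketch runs on a simulated toy event list; it is not our actual pipeline and uses no GBM data:
\begin{verbatim}
import numpy as np
from astropy.stats import bayesian_blocks

rng = np.random.default_rng(0)
# Toy photon list: flat background plus a weak pulse at t ~ 0 s
# (precursor-like) and a bright pulse at t ~ 1 s (main-pulse-like)
bkg  = rng.uniform(-50.0, 30.0, size=4000)
weak = rng.normal(0.0, 0.05, size=60)
main = rng.normal(1.0, 0.10, size=600)
t = np.sort(np.concatenate([bkg, weak, main]))

# Unbinned event data ("events" fitness); p0 is the false-alarm probability
edges = bayesian_blocks(t, fitness='events', p0=0.01)
rates = np.histogram(t, bins=edges)[0] / np.diff(edges)
for lo, hi, rate in zip(edges[:-1], edges[1:], rates):
    print(f"[{lo:+8.2f}, {hi:+8.2f}] s : {rate:8.1f} events/s")
\end{verbatim}
In the real search, the blocks before and between the pulses are then tested against the background level and the $3\sigma$ significance requirement stated above.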
Although we set $3\sigma$ as the threshold, we find that all precursors in our sample have significance levels $\gtrsim4.5\sigma$. The light curves of these SGRBs, obtained using both the ordinary histograms and the BB algorithm, as well as the evolution of the hardness ratio, are shown in Fig. \ref{fig:1}. To further study their spectral properties, we employ the McSpecFit package \citep{Zhang2018a} to perform the spectral fitting for the precursor and main SGRB emission components using the data from the two nearest NaI detectors and one BGO detector. This package includes various spectral models, such as blackbody, BAND \citep{Band1993}, BAND+blackbody, power-law (PL), PL+blackbody, exponential cutoff power-law (CPL), and CPL+blackbody. The Bayesian information criterion (BIC) is used to indicate the goodness of fits to these models, where $2\leq\Delta {\rm BIC}<6$ gives positive evidence and $\Delta {\rm BIC}\geq6$ gives strong positive evidence in favor of the model with the lower BIC \citep{Robert1995}. Here we adopt $\Delta {\rm BIC}=6$ to select the best-fit model, and for those with $\Delta {\rm BIC}<6$, we show the two favoured models. The main features, including the durations and best-fitting spectral models of the precursor and main SGRB emission components, are listed in Table \ref{tab:pre}. Fig. \ref{fig:2} shows the statistics of the durations. We find that in most cases the spectral models of both the precursor and the main SGRB can be constrained, while in four cases (GRB170802638, GRB180511437, GRB181126413, and GRB191221802) only the spectral models of the main SGRB can be constrained. Most precursors can be fitted by the blackbody, PL or CPL models with $\Delta {\rm BIC}\gtrsim2$. Note that for GRB081216531, although both blackbody and CPL models are favored with $\Delta {\rm BIC}=5.9$, the spectral index of the CPL model ($N(E)\propto E^{2.1}$) would suggest a blackbody origin. Three typical precursor spectra are shown in Fig. \ref{fig:GRB081216531} - \ref{fig:GRB160804180} as examples. The best-fitting models for the main SGRBs are usually the CPL or BAND models with $\Delta {\rm BIC}\gtrsim2$, but some can be fitted with the blackbody, PL, or CPL+blackbody models. Most precursors have spectra different from those of the main SGRBs, except GRB160804180 (both are CPL or BAND models) and GRB170709334 (both are blackbody or CPL models). In the top panels of Fig. \ref{fig:2}, we show the histograms of $T_{\rm pre}$, $T_{\rm wt}$, and $T_{\rm GRB}$. In the bottom panels, we directly compare these three timescales in scatter plots. One can see that $T_{\rm wt}\sim T_{\rm GRB}\sim T_{\rm pre}$ is generally satisfied. The differential number distributions of $T_{\rm pre}$ and $T_{\rm wt}$ seem to be consistent with normal distributions, but more data are required to draw a firmer conclusion. The precursor component has a typical duration of $T_{\rm pre}\lesssim0.7$ s, with a significant outlier, GRB180511437, which has $T_{\rm pre}\approx2.8$ s. In most cases, the waiting time satisfies $T_{\rm wt}<2$ s, but there are two significant outliers: GRB180511437 with $T_{\rm wt}\approx13$ s and GRB191221802 with $T_{\rm wt}\approx19$ s. Using the linear regression method, we find a linear correlation in logarithmic scale, i.e. $T_{\rm wt}\approx2.8T_{\rm GRB}^{1.2}$, with the correlation coefficient being $r=0.75$. However, there is also an outlier, GRB191221802, with $T_{\rm wt}/T_{\rm GRB}\approx52$.
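This regression is straightforward to reproduce from the central values of $T_{\rm GRB}$ and $T_{\rm wt}$ listed in Table \ref{tab:pre} (measurement uncertainties are ignored in this simple sketch):
\begin{verbatim}
# Log-log regression of T_wt against T_GRB using the central values
# from Table 1 (16 events, in catalogue order; errors ignored)
import numpy as np

T_grb = np.array([0.24, 0.30, 0.12, 0.09, 1.03, 0.09, 1.03, 0.48,
                  0.21, 0.08, 0.26, 0.15, 0.33, 3.33, 0.46, 0.37])
T_wt  = np.array([0.53, 0.52, 0.08, 0.34, 1.17, 0.22, 1.10, 1.26,
                  0.64, 0.03, 0.17, 0.17, 1.85, 12.72, 0.85, 19.36])

slope, intercept = np.polyfit(np.log10(T_grb), np.log10(T_wt), 1)
r = np.corrcoef(np.log10(T_grb), np.log10(T_wt))[0, 1]
print(f"T_wt ~ {10**intercept:.1f} * T_GRB^{slope:.1f}, r = {r:.2f}")
\end{verbatim}
which returns coefficients close to the quoted $T_{\rm wt}\approx2.8T_{\rm GRB}^{1.2}$ and $r\approx0.75$.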
\section{Physical Implications for Precursor Emission in SGRBs} It has been argued that the classification of SGRBs based on $T_{90}$ could be biased for some GRBs, especially those at high redshifts (e.g., $z\gtrsim1$). These apparent SGRBs could be intrinsically from collapsars, with the observed light curve being just the ``tip-of-iceberg'' \cite[e.g.][]{Zhang2009,Virgili2011,Bromberg2013,Lv2014} of an emission episode with a longer duration. In our sample, the redshift of most events is unknown, except for GRB090510016, which has a spectroscopic redshift $z=0.903$ \citep{Rau2009}. Therefore, we calculate the amplitude $f$-factor for these SGRBs \citep[see more details in][]{Lv2014} to assess the probability that some of them might be disguised SGRBs. We find that eight (four) of them have $f\gtrsim1.5(2)$, as listed in Tab. \ref{tab:pre}. These numbers are large enough to support their NS merger origins \citep{Lv2014}. In the following, we mainly discuss the precursor models based on the NS merger scenario, keeping in mind that in rare cases a collapsar origin of the SGRB cannot be ruled out. \subsection{Precursor models} Within the framework of NS mergers, several scenarios have been discussed in the literature that may give rise to precursor emission before the main SGRB. We discuss four possibilities below. The first two are pre-merger models and the last two are post-merger models. \begin{itemize} \item {\bf The pre-merger NS crust cracking model}: For this mechanism, the dissipated energy is likely emitted as thermal radiation, since the crust is highly optically thick. The energy released in this process is found to be $E_{\rm cc,46}=E_{\rm cc}/10^{46} {\rm erg} \lesssim 1$ \citep{Troja2010,Tsang2012}. This would heat the crust to $T_{\rm c}=E_{\rm cc}/C\lesssim 2.8\times10^8$ K, where $C\approx 10^{29}T_{\rm c}$ erg/K \citep{Yakovlev1999}. The corresponding luminosity from the crust surface is \begin{equation} L_{\rm cc}\approx 4\pi R_*^2 a T_{\rm c}^4\lesssim 4.5\times 10^{42} E_{\rm cc,46}^2 ~{\rm erg~s}^{-1}, \label{L_cc} \end{equation} where $a$ is the Stefan-Boltzmann constant, and the NS radius is assumed to be $R_*=10^6$ cm. \item {\bf The pre-merger magnetosphere interaction model}: The luminosity of the magnetospheric interaction between two NSs can be estimated as \citep{Lai2012,Palenzuela2013,Wang2018} \begin{equation} L_{\rm MI}\approx 2.0\times10^{46} \eta B_{*,13}^2 (a/ 30\;\!{\rm km})^{-7}~ {\rm erg~s}^{-1},\label{L_MI} \end{equation} where $B_*=10^{13}B_{*,13}$ G is the magnetic field of the main NS, $a$ is the separation between the two NSs, and the efficiency parameter $0.01\lesssim\eta\lesssim1$ depends on the magnetic field structure of the binary system. \item {\bf The post-merger shock breakout (SBO) model}: The SBO of the jet or cocoon from the fast component of the NS merger ejecta can release a minute fraction ($\zeta=10^{-4}\zeta_{-4}$) of the total kinetic energy of the outflow, i.e. $E_{\rm SBO}=\zeta E_{\rm iso}$ \citep{Gottlieb2018,Bromberg2018}. The luminosity of an SBO may be estimated as \begin{equation} L_{\rm SBO}\approx E_{\rm SBO}/t_{\rm SBO}=10^{47} \zeta_{-4} L_{\rm j,50}T_{\rm GRB} t_{\rm SBO,-1}^{-1}~{\rm erg~s}^{-1}, \label{L_SB} \end{equation} where we used $E_{\rm iso}=L_{\rm j}T_{\rm GRB}$, and $L_{\rm j}=10^{50} L_{\rm j,50}~{\rm erg/s}$ is the isotropic-equivalent jet luminosity.
The SBO takes place at a radius of $R_{\rm SBO}\approx\Gamma^2_{\rm SBO}ct_{\rm SBO}$, where $\Gamma_{\rm SBO}$ is the Lorentz factor of the emitting region, which is $\Gamma_{\rm SBO}\sim10$ for a jet breakout and $\Gamma_{\rm SBO}\sim3$ for a cocoon breakout, and $t_{\rm SBO}=0.1t_{\rm SBO,-1}$ s is the SBO timescale. The observed spectrum is quasi-thermal with a temperature $T_{\rm SBO}\sim \Gamma_{\rm SBO}(1-50)$ keV \citep{Gottlieb2018,Bromberg2018}. \item {\bf The post-merger fireball photosphere model}: The luminosity of the photospheric radiation of a GRB fireball can be expressed as \begin{equation} L_{\rm ph}=10^{50}\xi L_{\rm j,50}~{\rm erg/s},\label{eq:Lph} \end{equation} where $\xi={\rm min}[1,(R_{\rm c}/R_{\rm ph})^{2/3}]$ with $R_{\rm c}$ and $R_{\rm ph}$ being the coasting radius and photosphere radius, respectively \citep[see Section 7.3.3 of ][and references therein]{Zhang2018book}. This leads to a quasi-blackbody spectrum with a temperature \begin{equation} kT_{\rm ph}=\xi kT_0= 40.9\xi_{-1}L_{\rm j,50}^{1/4}R_{0,7}^{-1/2}~{\rm keV},\label{eq:Tph} \end{equation} where $T_0$ and $R_0=10^7R_{0,7}$ cm are the initial temperature and the size of the fireball. \end{itemize} Recently, \cite{Dichiara2020} performed a systematic search for SGRBs in the local Universe based on the {\em Swift} catalog, and found that the four closest SGRBs could be located at distances of 100-200 Mpc. The sensitivity of {\em Fermi}/GBM is roughly $0.5~{\rm cm}^{-2} {\rm s}^{-1}$ assuming a photon energy of $100$ keV\footnote{see https://fermi.gsfc.nasa.gov/science/instruments/table1-2.html}. The corresponding threshold luminosity for events detectable at a luminosity distance of $D>100$ Mpc is \begin{equation} L_{\rm th} \sim 10^{47} ~ {\rm erg \ s^{-1}} (D / 100 \ {\rm Mpc})^2. \end{equation} Comparing this with the predicted luminosities of the four precursor models, one can see that the crust cracking model predicts precursor emission too faint to be detectable. For SGRBs at cosmological distances ($D > 100$ Mpc), only the SBO and fireball photosphere models can give rise to bright enough precursors. The magnetosphere interaction model may be relevant to the precursor emission of some SGRBs if the sources are nearby and the surface magnetic field of the primary NS is strong enough (e.g. $B_s > 10^{13}$ G). \subsection{Constraints on GRB models} Some precursors in our sample can be explained by the blackbody model with $\Delta {\rm BIC}\gtrsim2$, especially GRB081216531 and GRB141102536 with $\Delta {\rm BIC}\gtrsim6$ (see Table \ref{tab:pre}). This is consistent with the SBO and fireball photosphere models. The observed flux ratio between the precursor and the main SGRB is about $0.01\lesssim L_{\rm pre}/L_{\rm j}<1$ in our sample. For the SBO model, this requires $10\lesssim \zeta_{-4}T_{\rm GRB} t_{\rm SBO,-1}^{-1}<10^3$. For the fireball photosphere model, the flux ratio as well as the precursor temperature can be well explained by the model with $1>\xi\gtrsim0.01$. The observed duration of the photospheric radiation is characterized by $t_{\rm ph}\approx R_{\rm ph}(1+z)/(\Gamma^{2}c)$, where $z$ is the redshift, $\Gamma$ is the bulk Lorentz factor of the jet, and the photosphere radius is $R_{\rm ph}=5.9\times10^{13}L_{\rm j,50}\Gamma_1^{-3}~{\rm cm}$, where $\Gamma_1=\Gamma/10$ \cite[e.g.][]{Meszaros2000,Rees2005,Zhang2018book}.
Our sample shows that $t_{\rm ph}\approx0.1t_{\rm ph,-1}$ s, which gives an interesting constraint on the bulk Lorentz factor of the SGRB outflow, i.e. \begin{equation} \Gamma=28.8L_{\rm j,50}^{1/5} t_{\rm ph,-1}^{-1/5}. \end{equation} This result is consistent with Eq. (1) of \cite{Troja2010}. Note that this interpretation requires a matter-dominated jet, with the main SGRB signal originating from internal shocks \citep{Meszaros2000,Zhang2018book}. For the post-merger precursor models, the waiting time between the precursor and the main burst corresponds to the observer-frame time for the jet to propagate from the precursor radius $R_{\rm pre}$ (the photospheric radius or the SBO radius) to the jet dissipation radius $R_{\rm GRB}$, i.e., $T_{\rm wt}=(R_{\rm GRB}-R_{\rm pre})(1+z)/(\Gamma^{2}c)$. Observations show $T_{\rm wt}\sim T_{\rm pre}$ (see Fig. \ref{fig:2}), which indicates that $R_{\rm GRB}\sim2R_{\rm ph}$ for the fireball photosphere model, and $R_{\rm GRB}\sim \Gamma^2 \Gamma^{-2}_{\rm SBO} R_{\rm SBO}$ for the SBO model. However, we should keep in mind that the definitions of $T_{\rm pre}$ and $T_{\rm GRB}$ here are based on $T_{90}$, which could underestimate the intrinsic durations of the precursor and the main burst and overestimate the waiting time. The main GRB signal is expected to be non-thermal, which is consistent with our spectral fits for most events. One exception is GRB170709334, which favors thermal spectra for both the precursor and the main GRB. This may correspond to an SBO precursor with a fireball-photosphere-induced main pulse, or to two episodes of central engine activity with the internal shock emission suppressed. In some cases, the precursor emission has a non-thermal spectrum, especially GRB111117510 and GRB160804180 with $\Delta {\rm BIC}\gtrsim6$. These cases may be explained by the NS magnetospheric interaction model (assuming that the sources are nearby). For NS mergers with a surface magnetic field $B_{*,13}\gtrsim1$ for the primary NS, the typical spectrum may be approximately described by a synchrotron radiation spectrum with a photon index around $-2/3$ peaking at $\sim$MeV, because of the effect of synchrotron-pair cascades \citep{Wang2018PRD,Wang2018}. Such a model can well explain the photon indices and peak energies of the non-thermal precursor bursts, e.g., GRB111117510, GRB140209313, and GRB160804180 \citep{Wang2018}. The precursor emission time for this magnetospheric interaction model roughly coincides with the gravitational wave radiation chirp signal time. Hence, the waiting time between the precursor and the main burst should correspond to the time delay between the GW signal and the SGRB signal. This timescale consists of three parts \citep{Zhang2019}: the time ($\Delta t_{\rm jet}$) for the jet to be launched by the central engine, the time ($\Delta t_{\rm bo}$) for the jet to propagate through and break out from the circum-burst medium, and the time ($\Delta t_{\rm GRB}$) for the jet to reach the energy dissipation radius (e.g., the photospheric radius or the internal shock radius). The last term is $\Delta t_{\rm GRB}/(1+z)\sim T_{\rm GRB}/(1+z) \sim0.01-1$ s, while the first two terms depend on the jet launch models. According to Table 1 in \cite{Zhang2019}, for most models, $(\Delta t_{\rm jet}+\Delta t_{\rm bo})/(1+z)=0.01-1$ s. Consequently, one would also expect $T_{\rm wt}\sim T_{\rm GRB}$.
An exception is the SMNS/SNS magnetic model, in which a uniform-rotation-supported supramassive NS (SMNS) is formed after the NS merger, which subsequently becomes a stable NS (SNS). In this model, the waiting time is dominated by the term $\Delta t_{\rm jet}/(1+z)=0.01-10$ s, which is mainly contributed by the time needed to clean the environment to launch a relativistic jet \citep{Metzger2011,Zhang2019}. In this case, one expects $T_{\rm wt}\gg T_{\rm GRB}$. In our sample, we find that most events satisfy $T_{\rm wt}\sim T_{\rm GRB}$, except GRB191221802, which has $T_{\rm wt}/T_{\rm GRB}\approx52$ and $T_{\rm wt}=19.36_{-3.19}^{+1.24}$ s. We also notice that for GRB090510016, \cite{Troja2010} found two precursors in the {\em Swift} data, but only the second precursor can be found in the {\em Fermi} data (consistent with our results). Its first precursor is found to have $T_{\rm wt}/T_{\rm GRB}\approx40$ and $T_{\rm wt}\approx12$ s, while its second precursor in our analysis is consistent with the photospheric radiation of the fireball. Therefore, its first precursor, with a long waiting time ($T_{\rm wt}/T_{\rm GRB}\gg1$), could originate from the NS magnetospheric interaction, with the long waiting time caused by the jet launch mechanism in the SMNS/SNS magnetic model. In conclusion, according to this model, an SNS engine might have been formed after the merger in events with $T_{\rm wt}/T_{\rm GRB}\gg1$, e.g. GRB090510016 and GRB191221802. \section{Conclusions and discussion} In this paper, we performed a stringent search for precursor emission of short GRBs in the {\em Fermi}/GBM data and found that 16 out of 529 (3.0\%) SGRBs have precursors with significance $\gtrsim4.5\sigma$. The light curves are shown in Fig. \ref{fig:1} and the properties of the precursor and main SGRB emission are listed in Tab. \ref{tab:pre}. As shown in Fig. \ref{fig:2}, the timescales are roughly comparable to each other, $T_{\rm wt}\sim T_{\rm GRB}\sim T_{\rm pre}$, and there is a linear correlation (correlation coefficient $r=0.75$) $T_{\rm wt}\approx2.8T_{\rm GRB}^{1.2}$ in logarithmic scale, but with a significant outlier, GRB191221802, with $T_{\rm wt}/T_{\rm GRB}\approx52$. In most cases, we find $T_{\rm pre}\lesssim0.7$ s and $T_{\rm wt}<2$ s, but there are significant outliers, i.e. $T_{\rm pre}\approx2.8$ s and $T_{\rm wt}\approx13$ s for GRB180511437, and $T_{\rm wt}\approx19$ s for GRB191221802. Most precursors favour blackbody, CPL and/or PL spectra with $\Delta {\rm BIC}\gtrsim2$. In particular, GRB081216531 and GRB141102536 favour the blackbody model with $\Delta {\rm BIC}\gtrsim6$, and GRB111117510 and GRB160804180 favour the CPL model with $\Delta {\rm BIC}\gtrsim6$. The thermal spectra can be explained within the SBO model and the photospheric radiation fireball model, while the non-thermal ones may be explained by the NS magnetospheric interaction model. The crust cracking mechanism generally predicts emission too faint to be detected at a cosmological distance. For the SBO model, we constrain $10^{-2}T_{\rm GRB}/T_{\rm pre}\lesssim \zeta<1$. This is larger than the expected value of $\zeta\sim10^{-4}-10^{-3}$ \citep{Gottlieb2018,Bromberg2018}. One possible explanation is that the jet is viewed slightly off-axis, so that the observed luminosity of the main pulse is smaller than the jet luminosity. For the photospheric radiation mechanism, a matter-dominated jet is preferred. We constrain the jet Lorentz factor to be $\Gamma=28.8L_{\rm j,50}^{1/5} t_{\rm ph,-1}^{-1/5}$.
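For completeness, the numerical coefficient of this constraint follows directly from $t_{\rm ph}\approx R_{\rm ph}/(\Gamma^{2}c)$ with $R_{\rm ph}=5.9\times10^{13}L_{\rm j,50}\Gamma_1^{-3}$ cm. A short Python check (redshift factor neglected; fiducial $L_{\rm j,50}=1$ and $t_{\rm ph}=0.1$ s assumed):
\begin{verbatim}
# Gamma^5 = 5.9e13 * 10^3 * L_j50 / (c * t_ph), from t_ph ~ R_ph/(Gamma^2 c)
c = 3.0e10      # speed of light [cm/s]
L_j50 = 1.0     # jet luminosity in units of 1e50 erg/s
t_ph = 0.1      # photospheric (precursor) duration [s]

Gamma = (5.9e13 * 1e3 * L_j50 / (c * t_ph)) ** 0.2
R_ph = 5.9e13 * L_j50 * (Gamma / 10.0) ** -3
print(f"Gamma ~ {Gamma:.1f}, R_ph ~ {R_ph:.2e} cm")
\end{verbatim}
which returns $\Gamma\approx28.8$ and $R_{\rm ph}\approx2.5\times10^{12}$ cm, in agreement with the scaling above.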
However, as noted by \cite{Troja2010}, such a Lorentz factor is much smaller than that of typical SGRBs and thus may have difficulty explaining the SGRB properties. For example, observations show that the Lorentz factor of GRB090510016 should be $\Gamma\gtrsim 10^3$ \citep{Ackermann2010}. For the NS magnetospheric interaction model, we find that it can provide a constraint on the jet launch mechanism. More specifically, we find that events with $T_{\rm wt}/T_{\rm GRB}\gg1$ can be well explained by the time delay required to launch a relativistic jet in the SMNS/SNS magnetic model. As a consequence, in GRB090510016 and GRB191221802 an SNS might have formed after the merger, with a magnetically powered jet. We also notice that the possibility that some events in our sample are from collapsars cannot be excluded. For example, it is unclear whether GRB180511437 is a short GRB or not, as our study shows $T_{\rm GRB}=3.33_{-0.24}^{+0.18}$ s, which is $>2$ s, even though it has $T_{\rm GRB}= 1.98\pm0.97$ s in the {\em Fermi} GBM Burst Catalog\footnote{see the webpage \url{https://heasarc.gsfc.nasa.gov/db-perl/W3Browse/w3query.pl}}. Besides, the precursor of GRB160804180 can also be explained by the BAND model with $\Delta {\rm BIC}=5.9$, and might therefore be an early episode of activity from the central engine. Furthermore, it has also been suggested that GRB090510016 could be of collapsar origin based on the study of its afterglow \citep{Zhang2009,Panaitescu2011}. Such grey-zone cases can be better studied when multi-wavelength/multi-messenger information (e.g. host galaxy identifications) becomes available \citep[e.g.][]{Liye2020,Dichiara2020}. Although only 3.0\% of the SGRBs detected by Fermi/GBM have detectable precursor emission, we note that the precursor emission from SBOs (especially cocoon breakouts) and NS magnetospheric interactions can subtend a solid angle much larger than the jet opening angle. Therefore, searches for EM counterparts of NS mergers in the local Universe will very likely detect such precursor emission with or without detecting the main SGRBs. GW170817/GRB 170817A may be such a case. \begin{figure} \centering \begin{subfigure} \centering \includegraphics[width=\textwidth]{figsub1.eps} \end{subfigure} \vskip-0.3cm \begin{subfigure} \centering \includegraphics[width=\textwidth]{figsub2.eps} \end{subfigure} \vskip-0.3cm \begin{subfigure} \centering \includegraphics[width=\textwidth]{figsub3.eps} \end{subfigure} \vskip-0.3cm \begin{subfigure} \centering \includegraphics[width=\textwidth]{figsub4.eps} \end{subfigure} \caption{The light curves of SGRBs from the NaI detector with the highest significance for their precursors. We use both the traditional histogram and the BB algorithm. The traditional histograms are obtained for the specific energy range that optimizes the significance level of the precursor. Note that for GRB 150922234, the peak flux of the precursor appears larger than that of the main pulse in this band, but it is smaller in the light curve of the full energy band (and hence is defined as precursor emission). The hardness ratio (hard/soft) is the ratio of the numbers of hard photons ($50-800$ keV) and soft photons ($10-50$ keV). \label{fig:1}} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{fig2.eps} \caption{The top panels show the histograms of $T_{\rm pre}$, $T_{\rm wt}$, and $T_{\rm GRB}$. The bottom panels show comparisons of these timescales, and the black lines represent the equality line. The red line in the bottom right panel is $T_{\rm wt}\approx2.8T_{\rm GRB}^{1.2}$.
\label{fig:2}} \end{figure} \begin{sidewaystable} \scriptsize \caption{ The SGRBs with precursor emission and their properties} \label{tab:pre} \begin{tabular}{ccccccccc} \hline Name$^a$ & $T_{\rm pre}$ & Best-fit models of precursors$^b$&$\Delta {\rm BIC}$ & $T_{\rm wt}$ & $T_{\rm GRB}$ & Best-fit model of main pulse$^b$&$\Delta {\rm BIC}$ & $f$-factor\\ & (s) & Energy unit (keV)& & (s) & (s) & Energy unit (keV)& &\\\hline GRB081216531 & $0.15_{-0.03}^{+0.05}$ & Blackbody: $kT=18.66_{-2.11}^{+3.30}$; & $\geq5.9$ & $0.53_{-0.05}^{+0.04}$ & $0.24_{-0.02}^{+0.02}$ & CPL + Blackbody: $\Gamma_{ph}= 0.08_{-0.03}^{+0.27}$, &$\geq5.0$&$2.0\pm0.1$\\ & &CPL: $\Gamma_{ph}=2.1^{+1.20}_{-1.73}$, $E_{p}=72.49^{+17.17}_{-8.61}$ & & & & $E_{p}=265.94_{-30.44}^{+36.65}$, $kT=340.85_{-21.47}^{+31.98}$; & &\\ &&&&&&CPL: $\Gamma_{ph}=-0.50^{+0.06}_{-0.06}$, $E_{p}=1219.0^{+103.1}_{-114.5}$&\\\hline GRB090510016$^{*,c}$ & $0.05_{-0.03}^{+0.07}$& Blackbody: $kT=120.43_{-56.11}^{+103.18}$; &$\geq3.0$ & $0.52_{-0.08}^{+0.04}$ & $0.30_{-0.01}^{+0.01}$ & CPL: $\Gamma_{ph}=-0.61^{+0.03}_{-0.02}$, $E_{p}=2999.44^{+0.55}_{-62.77}$&$\geq79.7$&$2.6\pm1.0$\\ &&PL: $\Gamma_{ph}=-1.13^{+1.89}_{-4.19}$&&&&&\\\hline GRB100223110 & $0.02_{-0.01}^{+0.03}$ & Blackbody: $kT=66.02_{-15.62}^{+135.35}$; &$\geq1.7$ & $0.08_{-0.03}^{+0.02}$ & $0.12_{-0.01}^{+0.01}$ & CPL: $\Gamma_{ph}=-0.18^{+0.11}_{-0.12}$, $E_{p}=1101.63_{-107.93}^{+181.12}$;&$\geq5.8$&$1.4\pm0.1$\\ & & PL: $\Gamma_{ph}=-1.17^{+0.09}_{-4.01}$ & && \multicolumn{3}{c}{BAND: $\alpha=-0.19^{+0.12}_{-0.11}$,$\beta=-13.65^{+7.55}_{-3.52}$, $E_p=1122.3^{+153.4}_{-123.9}$} &\\\hline GRB100827455 & $0.11_{-0.04}^{+0.05}$ & Blackbody: $kT=98.60_{-37.80}^{+145.67}$; & $\geq4.1$& $0.34_{-0.06}^{+0.06}$ & $0.09_{-0.01}^{+0.02}$ & Blackbody: $kT=168.19_{-57.24}^{+82.07}$; &$\geq1.7$ & $1.4\pm0.3$\\ & & PL: $\Gamma_{ph}=-1.47^{+0.14}_{-3.74}$ & & & & PL: $\Gamma_{ph}=-1.11^{+0.17}_{-3.74}$;&&\\ \hline GRB101208498 & $0.17_{-0.08}^{+0.12}$ & Blackbody: $kT=9.74_{-1.68}^{+1.90}$; &$\geq1.1$& $1.17_{-0.14}^{+0.10}$ & $1.03_{-0.04}^{+0.03}$ & CPL: $\Gamma_{ph}= -0.77_{-0.07}^{+0.06}$, $E_{p}=148.24_{-6.77}^{+9.76}$;&$\geq3.8$&$3.6\pm0.2$\\ & & PL: $\Gamma_{ph}=-2.20^{+0.20}_{-0.44}$; & & & \multicolumn{3}{c}{BAND: $\alpha=-0.67^{+0.17}_{-0.17}$, $\beta=-2.63^{+0.34}_{-14.26}$, $E_p=127.6^{+29.4}_{-15.9}$}&\\\hline GRB111117510$^*$ & $0.18_{-0.03}^{+0.05}$ & CPL: $\Gamma_{ph}= -0.47_{-0.32}^{+0.22}$, $E_{p}=576.84_{-91.69}^{+442.45}$; &$\geq6.0$& $0.22_{-0.06}^{+0.03}$ & $0.09_{-0.01}^{+0.01}$ & Blackbody: $kT=55.31_{-6.86}^{+10.38}$; &$\geq2.1$& $1.3\pm0.1$\\ &&&&&&CPL: $\Gamma_{ph}= -0.02_{-0.48}^{+0.70}$, $E_{p}=254.25_{-39.43}^{+104.53}$&&\\\hline GRB140209313$^{*,d}$ & $0.61_{-0.08}^{+0.08}$ & CPL: $\Gamma_{ph}=-1.07^{+0.92}_{-0.60}$, $E_{p}=114.74_{-49.19}^{+1526.38}$; &$\geq2.3$ & $1.10_{-0.08}^{+0.08}$ & $1.03_{-0.06}^{+0.04}$ & BAND: $\alpha=-0.31^{+0.06}_{-0.05}$,&$\geq61.9$ & $7.3\pm0.6$\\ & & PL: $\Gamma_{ph}=-1.74^{+0.06}_{-0.10}$ && & &$\beta=-2.44^{+0.07}_{-0.08}$, $E_p=139.66^{+6.33}_{-5.77}$;&& \\\hline GRB141102536$^*$ & $0.06_{-0.06}^{+0.10}$ & Blackbody: $kT=83.92_{-12.07}^{+38.77}$; &$\geq6.0$& $1.26_{-0.15}^{+0.11}$ & $0.48_{-0.04}^{+0.04}$ & CPL: $\Gamma_{ph}= -0.52^{+0.15}_{-0.15}$, $E_{p}=402.76_{-46.93}^{+88.86}$;& $\geq5.7$& $1.6\pm0.1$\\ &&&&&\multicolumn{3}{c}{BAND: $\alpha=-0.53^{+0.14}_{-0.16}$,$\beta=-3.53^{+1.19}_{-13.6}$, $E_p=405.9^{+91.9}_{-48.8}$}& \\\hline GRB150604434 & $0.17_{-0.01}^{+0.25}$ & Blackbody: $kT=124.78_{-17.16}^{+31.32}$; &$\geq3.1$&
$0.64_{-0.29}^{+0.02}$ & $0.21_{-0.02}^{+0.03}$ & CPL: $\Gamma_{ph}= -0.35^{+0.24}_{-0.28}$, $E_p=414.84^{+198.21}_{-73.32}$;&$\geq5.4$ &$1.5\pm0.1$\\ &&CPL: $\Gamma_{ph}= 0.04^{+0.79}_{-0.40}$, $E_p=637.59^{+260.37}_{-142.66}$&&&\multicolumn{3}{c}{BAND: $\alpha=-0.13^{+0.08}_{-0.49}$, $\beta=-2.25^{+0.31}_{-14.98}$, $E_p=293.4^{+310.1}_{-95.8}$}&\\\hline GRB150922234 & $0.05_{-0.01}^{+0.01}$ & PL: $\Gamma_{ph}= -1.91^{+0.35}_{-2.85}$; &$\geq2.2$& $0.03_{-0.01}^{+0.01}$ & $0.08_{-0.01}^{+0.01}$ & CPL: $\Gamma_{ph}= -0.23^{+0.21}_{-0.17}$, $E_p=474.00^{+86.67}_{-64.00}$;&$\geq4.8$ &$1.3\pm0.1$\\ &&Blackbody: $kT=7.65_{-5.99}^{+212.99}$&&&&CPL + Blackbody: $\Gamma_{ph}= 0.37_{-0.30}^{+1.13}$, &\\ &&&&&& $E_p=651.1^{+85.2}_{-429.2}$, $kT=39.63_{-37.5}^{+463.1}$ & \\\hline GRB160804180 & $0.16_{-0.02}^{+0.02}$ & CPL: $\Gamma_{ph}=-0.46^{+0.21}_{-0.41}$, $E_p=343.30^{+292.95}_{-58.10}$; &$\geq5.9$& $0.17_{-0.02}^{+0.02}$ & $0.26_{-0.02}^{+0.02}$ & CPL: $\Gamma_{ph}=-0.24^{+0.17}_{-0.19}$, $E_p=619.80^{+163.11}_{-77.88}$;&$\geq5.9$&$1.6\pm0.1 $\\ &\multicolumn{3}{c}{BAND: $\alpha=-0.54^{+0.31}_{-0.35}$, $\beta=-18.57^{+13.64}_{-1.41}$, $E_p=359.4^{+312.0}_{-78.5}$}&&\multicolumn{3}{c}{BAND: $\alpha=-0.23^{+0.16}_{-0.20}$, $\beta=-19.55^{+13.61}_{-0.45}$, $E_p=623.6^{+153.9}_{-83.5}$}&\\\hline GRB170709334 & $0.46_{-0.27}^{+0.01}$ & Blackbody: $kT=62.44_{-7.05}^{+23.18}$; &$\geq5.1$& $0.17_{-0.07}^{+0.30}$ & $0.15_{-0.04}^{+0.07}$ &Blackbody: $kT=88.49_{-10.64}^{+16.88}$;&$\geq4.4$&$1.3\pm0.1$ \\&&CPL: $\Gamma_{ph}=-0.52^{+1.92}_{-0.23}$, $E_p=723.04^{+453.48}_{-460.33}$&&&&CPL: $\Gamma_{ph}=0.63^{+0.82}_{-0.58}$, $E_p=380.01^{+119.84}_{-54.90}$ &&\\\hline GRB170802638 & $0.15_{-0.11}^{+0.17}$ & \multicolumn{2}{c}{unconstrained} & $1.85_{-0.21}^{+0.14}$ & $0.33_{-0.04}^{+0.04}$ & CPL: $\Gamma_{ph}=-0.62^{+0.07}_{-0.09}$, $E_p=799.50^{+155.17}_{-85.57}$;&$\geq5.5$&$1.5\pm0.1$\\ &&\multicolumn{2}{c}{}&&&CPL + Blackbody: $\Gamma_{ph}= 0.01_{-0.01}^{+0.17}$, &\\ &&\multicolumn{2}{c}{}&&& $E_p=269.3^{+24.9}_{-36.5}$, $kT=339.0_{-51.9}^{+65.6}$ &\\\hline GRB180511437 & $2.80_{-1.69}^{+1.38}$& \multicolumn{2}{c}{unconstrained} & $12.72_{-1.57}^{+1.80}$ & $3.33_{-0.24}^{+0.18}$ &CPL: $\Gamma_{ph}=-0.81^{+0.22}_{-0.27}$, $E_p=119.70^{+31.63}_{-15.83}$; &$\geq5.5$&$1.4\pm0.1$\\ &&\multicolumn{2}{c}{}&&\multicolumn{3}{c}{BAND: $\alpha=-0.413^{+0.41}_{-0.65}$, $\beta=-2.68^{+0.51}_{-14.44}$, $E_p=87.1^{+60.7}_{-13.4}$}&\\\hline GRB181126413$^*$ & $0.72_{-0.27}^{+0.18}$ & \multicolumn{2}{c}{unconstrained}& $0.85_{-0.29}^{+0.40}$ & $0.46_{-0.13}^{+0.11}$ & Blackbody: $kT=24.52_{-2.04}^{+3.16}$;&$\geq6.2$&$1.2\pm0.1$\\ \hline GRB191221802 & $0.03_{-0.03}^{+0.59}$&\multicolumn{2}{c}{unconstrained}& $19.36_{-3.19}^{+1.24}$ & $0.37_{-0.13}^{+0.26}$& Blackbody: $kT=67.21_{-9.46}^{+22.62}$;&$\geq1.0$ &$1.1\pm0.1$\\ &&\multicolumn{2}{c}{} & & & CPL: $\Gamma_{ph}=-0.57^{+0.48}_{-0.53}$, $E_p=471.92^{+945.05}_{-126.29}$; &\\\hline \end{tabular} {\\The durations of the precursor ($T_{\rm pre}$), waiting time ($T_{\rm wt}$), and the main SGRB ($T_{\rm GRB}$) are based on $T_{90}$ analyses. The best-fit models are obtained with the BIC.\\ $^a$ The GRBs marked with `$^*$' also triggered {\em Swift}, and can be found at \url{https://swift.gsfc.nasa.gov/archive/grb_table/}. \\ $^b$ For the blackbody model, $k$ and $T$ are the Boltzmann constant and temperature, respectively. 
PL ($N(E)\propto E^{\Gamma_{ph}}$) and CPL ($N(E)\propto E^{\Gamma_{ph}}\exp{[-E(2+\Gamma_{ph})/E_p]}$) represent power-law and cutoff power-law models with photon indices $\Gamma_{ph}$, and $E_{p}$ is the peak energy for the CPL model. For the unconstrained events, we find that both blackbody and PL models are favored, but there are too few photons to provide a robust constraint on the parameters. The $\Delta {\rm BIC}$ between the best-fit model and the other models is also presented, and for $\Delta {\rm BIC}<6$, the two favoured models are provided.\\ $^c$ The redshift of GRB090510016 is 0.903 \citep{Rau2009}. \cite{Troja2010} found that there are two precursors in this burst in the {\em Swift} data. \\ $^d$ From the {\em Swift} observation, GRB140209313 was found to be an SGRB with extended emission, which has durations of $T_{90}=21.25 \pm 7.98$ s and $T_{50}=0.61 \pm 0.07$ s, respectively \citep{Palmer2014}. } \end{sidewaystable} \acknowledgements We thank the referee for valuable comments. J.S.W. is supported by the China Postdoctoral Science Foundation (Grants 2018M642000, 2019T120335). B.B.Z. acknowledges the support from the Fundamental Research Funds for the Central Universities (14380035). This work is also supported by the National Key Research and Development Programs of China (2018YFA0404204) and the National Natural Science Foundation of China (Grant No. 11833003). \begin{figure}[tbp] \centering \includegraphics[angle=0,width=0.3\textwidth]{GRB081216531bbody_int0_countspec_plot.eps} \includegraphics[angle=0,width=0.3\textwidth]{GRB081216531bbody_int0_phspec_plot.eps} \includegraphics[angle=0,width=0.38\textwidth]{GRB081216531bbody_int0_marg.eps} \caption{Constraints on the blackbody model of the precursor of GRB081216531.}\label{fig:GRB081216531} \end{figure} \begin{figure}[tbp] \centering \includegraphics[angle=0,width=0.3\textwidth]{GRB090510016bbody_int0_countspec_plot.eps} \includegraphics[angle=0,width=0.3\textwidth]{GRB090510016bbody_int0_phspec_plot.eps} \includegraphics[angle=0,width=0.38\textwidth]{GRB090510016bbody_int0_marg.eps} \caption{Constraints on the blackbody model of the precursor of GRB090510016.}\label{fig:GRB090510016} \end{figure} \begin{figure}[tbp] \centering \includegraphics[angle=0,width=0.3\textwidth]{GRB160804180cutoffpl_fit_int0_countspec_plot.eps} \includegraphics[angle=0,width=0.3\textwidth]{GRB160804180cutoffpl_fit_int0_phspec_plot.eps} \includegraphics[angle=0,width=0.38\textwidth]{GRB160804180cutoffpl_fit_int0_marg.eps} \caption{Constraints on the CPL model of the precursor of GRB160804180.}\label{fig:GRB160804180} \end{figure} \software{McSpecFit;~Astropy} \bibliographystyle{aasjournal}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec1} \IEEEPARstart{F}{ace} super-resolution (FSR), a.k.a. face hallucination, refers to a technology for obtaining high-resolution (HR) face images from input low-resolution (LR) face images. In practical application scenarios, due to the inherent differences in the hardware configuration, placement position, and shooting angle of the image capture devices, the quality of the captured face images is inevitably poor. Low-quality images seriously affect downstream tasks such as face analysis and face recognition. Different from general image SR, the core goal of FSR is to reconstruct as much as possible the facial structure information (i.e., the shapes of face components and the face outline) that is missing in the degraded observation. Although these structures only occupy a small part of the face, they are the key to distinguishing different faces. Compared with other areas in a face image, the facial features and contours of a person are usually more difficult to restore since they often span a large area and require more global information. Most previous FSR algorithms~\cite{ma2020deep,hu2020face,cai2019fcsr} mainly adopted the strategy of successive multi-task training. These methods used facial landmark heatmaps or parsing maps during training to constrain the performance of the FSR reconstruction network. However, they also need extra labeled data to achieve this goal. Besides, in previous FSR methods~\cite{chen2018fsrnet,xin2020facial}, the encoding and decoding parts are connected in series. This kind of connection cannot fully utilize the low-level features, and the low-level features also cannot thoroughly guide the learning of the high-level features, resulting in unsatisfactory performance on the FSR task. In addition, many FSR networks~\cite{yu2016ultra,zhang2018super,dogan2019exemplar,kim2019progressive,chen2020learning} have been built using Convolution Neural Networks (CNNs), exploiting the powerful local modeling capability of CNNs to predict fine-grained facial details. However, the human face usually has a fixed geometric structure~\cite{jiang2014face,chen2020robust,gao2021constructing}. Therefore, focusing only on the extraction of local information while ignoring the relationships between regions (global information) will inevitably affect the restoration of the global facial structure, leading to blurry effects in the generated faces. As we know, local methods (such as CNN-based methods) mainly focus on local facial details, while global methods (such as Transformer-based methods) usually capture global facial structures. How to collaboratively make full use of local and global features, and how to efficiently aggregate the abundant multi-scale features, is therefore important. To achieve this, in this work, we propose an efficient CNN-Transformer Cooperation Network (CTCNet) for FSR. Like most previous FSR models, our CTCNet also uses an encoder-decoder structure. Specifically, in the encoder and decoder branches, the specially designed Local-Global Feature Cooperation Modules (LGCMs) are used for feature extraction. LGCM is composed of a Facial Structure Attention Unit (FSAU) and a Transformer block. Among them, FSAU is specially designed to extract key face component information, and Transformer blocks are introduced to explore long-distance visual relation modeling.
The combination of FSAU and the Transformer block can simultaneously capture local facial texture details and global facial structures. Meanwhile, instead of using successive connections, we design a Multi-scale Feature Fusion Unit (MFFU) to flexibly fuse the features from different stages of the network. In addition, we use Feature Refinement Modules (FRMs) between the encoder and decoder branches to further enhance the extracted features, thus further improving the performance of CTCNet. In summary, the main contributions of this work are as follows \begin{figure*}[t] \centerline{\includegraphics[width=18cm]{img/Network.png}} \caption{The complete structure of the proposed CNN-Transformer Cooperation Network (CTCNet).} \label{Network structure} \end{figure*} \begin{itemize} \item We propose an efficient Local-Global Feature Cooperation Module (LGCM). The combination of CNN and Transformer structures enables it to simultaneously capture local facial texture details and global facial structures, which benefits high-quality face super-resolution reconstruction. \item We propose an elaborately designed Multi-scale Feature Fusion Unit (MFFU) to fuse the dense features from different scales and depths of the network. This operation ensures that our model can obtain rich features to better reconstruct high-quality images. \item We propose a Feature Refinement Module (FRM) to strengthen the different facial structure information and further enhance the extracted features. \item We devise a novel CNN-Transformer Cooperation Network (CTCNet) for face super-resolution. Without relying on any prior information, our CTCNet achieves state-of-the-art performance in terms of various metrics. Meanwhile, the extended model, the CNN-Transformer Cooperation Generative Adversarial Network (CTCGAN), can generate more realistic face images. \end{itemize} \section{Related Work} \label{sec2} \subsection{Face Super-Resolution} \label{sec21} Recently, due to the powerful feature representation capabilities of deep convolution neural networks (CNNs), significant progress has been made in image super-resolution~\cite{wang2020deep,li2021beginner}, which has also greatly promoted the progress of Face Super-Resolution (FSR). For example, Yu et al.~\cite{yu2016ultra} introduced a discriminative generative network to ultra-resolve blurry face images. Zhang et al.~\cite{zhang2018super} proposed a super-identity CNN, which introduced a super-identity loss to assist the network in generating super-resolved face images with more accurate identity information. Huang et al.~\cite{huang2017wavelet} turned to the wavelet domain and proposed WaveletSRNet, which is capable of predicting the wavelet coefficients of HR images. Lu et al.~\cite{lu2021face} devised a split-attention in split-attention network, based on their designed external-internal split attention group, for clear facial image reconstruction. In addition, some scholars have considered the particularity of the FSR task and proposed FSR models guided by facial priors (e.g., face parsing maps and landmarks). For instance, Yu et al.~\cite{yu2018face} directly aggregated the extracted features with facial component heatmaps in the middle of the network and achieved good results. Chen et al.~\cite{chen2018fsrnet} proposed the first end-to-end face super-resolution convolution network, which utilized facial parsing maps and landmark heatmaps to guide the super-resolution process.
Kim et al.~\cite{kim2019progressive} also used face key-point maps and face heatmaps to construct a facial attention loss and used it to train a progressive generator. To tackle face images that exhibit large pose variations, Hu et al.~\cite{hu2020face} introduced 3D facial priors to better capture sharp facial structures. Ma et al.~\cite{ma2020deep} designed an iterative collaboration method that alternates between facial recovery and landmark estimation. Li et al.~\cite{li2020learning} incorporated face attributes and face boundaries in a successive manner, together with self-attentive structure enhancement, to super-resolve tiny LR face images. Although these models have achieved promising results, they require additional annotations on the dataset, and the accuracy of the priors greatly limits the quality of the reconstruction results. \subsection{Attention Mechanism} \label{sec22} In the past few decades, the attention mechanism has made prominent breakthroughs in various visual understanding tasks, such as image classification~\cite{woo2018cbam,hu2018squeeze,wang2020eca} and image restoration~\cite{zhang2018image,dai2019second,niu2020single,chen2020learning}. The attention mechanism can give more attention to key features, which is beneficial to feature learning and model training. Zhang et al.~\cite{zhang2018image} proved that by considering the interdependence between channels through a channel attention mechanism, high-quality images can be reconstructed. Chen et al.~\cite{chen2020learning} presented a facial spatial attention mechanism, which uses an hourglass structure to form the attention maps, so that the convolutional layers can adaptively extract local features related to critical facial structures. Recently, Transformers~\cite{vaswani2017attention,devlin2018bert} have also been widely used in computer vision tasks, such as image recognition~\cite{dosovitskiy2020image,touvron2021training}, object detection~\cite{carion2020end,zhu2020deformable}, and image restoration~\cite{liang2021swinir,lu2021efficient,wang2021uformer,zamir2021restormer}. The key idea of the Transformer is the self-attention mechanism, which can capture long-range correlations between words/pixels. Although pure Transformers have great advantages in distilling the global representation of images, depending only on image-level self-attention still causes the loss of local fine-grained details. Therefore, how to effectively combine the global information and local features of the image is important for high-quality image reconstruction, which is also the goal of this work. \section{CNN-Transformer Cooperation Network} \label{sec3} In this section, we first depict the overall architecture of the proposed CNN-Transformer Cooperation Network (CTCNet). Then, we introduce each module in the network in detail. Finally, we introduce the loss functions used for supervised CTCGAN training. \subsection{Overview of CTCNet} \label{sec31} As shown in Fig.~\ref{Network structure}, the proposed CTCNet is a U-shaped symmetrical hierarchical network with three stages: an encoding stage, a bottleneck stage, and a decoding stage. Among them, the encoding stage is designed to extract local and global features at different scales, and the decoding stage is designed for feature fusion and image reconstruction. Meanwhile, multi-scale connections are used between the encoding stage and the decoding stage to achieve feature aggregation.
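To make the overall data flow concrete, the following PyTorch-style skeleton sketches the three-stage layout described above. It is only an illustration of the wiring, not our released implementation: the \texttt{block} stand-ins replace the LGCM and FRM designs detailed below, a plain concatenation replaces the MFFU, and the channel width \texttt{c}, as well as the pre-upsampling of $I_{LR}$ to the HR size implied by the global residual, are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

def block(c_in, c_out):
    # Stand-in for LGCM/FRM internals (their actual designs follow below).
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, 1, 1),
                         nn.LeakyReLU(0.2, inplace=True))

class CTCNetSkeleton(nn.Module):
    def __init__(self, c=64, num_frm=4):
        super().__init__()
        self.head = nn.Conv2d(3, c, 3, 1, 1)  # shallow feature extraction
        # Encoding stage: LGCM + downsampling; channels double, size halves.
        self.enc = nn.ModuleList([block(c * 2**i, c * 2**i) for i in range(3)])
        self.down = nn.ModuleList(
            [nn.Conv2d(c * 2**i, c * 2**(i + 1), 3, 2, 1) for i in range(3)])
        # Bottleneck stage: stacked FRMs.
        self.mid = nn.Sequential(*[block(8 * c, 8 * c) for _ in range(num_frm)])
        # Decoding stage: upsampling + fusion + LGCM; channels halve.
        self.up = nn.ModuleList(
            [nn.ConvTranspose2d(c * 2**(i + 1), c * 2**i, 6, 2, 2)
             for i in reversed(range(3))])
        self.dec = nn.ModuleList(
            [block(2 * c * 2**i, c * 2**i) for i in reversed(range(3))])
        self.tail = nn.Conv2d(c, 3, 3, 1, 1)

    def forward(self, x):          # x: I_LR, assumed pre-upsampled to HR size
        f, skips = self.head(x), []
        for enc, down in zip(self.enc, self.down):
            f = enc(f)
            skips.append(f)        # kept for the multi-scale connections
            f = down(f)
        f = self.mid(f)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            f = dec(torch.cat([up(f), skip], dim=1))  # concat in place of MFFU
        return x + self.tail(f)    # global residual: I_SR = I_LR + I_Out
\end{verbatim}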
To better describe the model, we define ${I_{LR}}$, ${I_{SR}}$, and ${I_{HR}}$ as the LR input image, the recovered SR image, and the ground-truth HR image, respectively. \subsubsection{Encoding Stage} As mentioned above, the encoding stage is designed for feature extraction. Therefore, given a degraded image ${I_{LR}}$ as the input, we first apply a ${3\times3}$ convolutional layer to extract shallow features. After that, the extracted features are passed through 3 encoding stages. Each encoding stage includes one specially designed Local-Global Feature Cooperation Module (LGCM) and one downsampling block. Among them, the LGCM consists of a Facial Structure Attention Unit (FSAU) and a Transformer block. The downsampling block consists of a ${3\times3}$ convolutional layer with stride 2, a LeakyReLU activation function, and a ${3\times3}$ convolution with stride 1, in which the first convolution uses stride 2 to extract feature information and reduce the spatial size simultaneously. Therefore, after each encoding stage, the size of the output feature maps is halved while the number of output channels is doubled. For instance, given the input feature maps ${I_{LR} \in \mathbb{R}^{C \times H \times W}}$, the $i$-th stage of the encoder produces feature maps ${I_{en}^{i} \in \mathbb{R}^{2^{i}C \times \frac{H}{2^{i}} \times \frac{W}{2^{i}}}}$. \subsubsection{Bottleneck Stage} A bottleneck stage lies between the encoding and decoding stages, where all encoded features converge. In order to make these features better utilized in the decoding stage, we introduce Feature Refinement Modules (FRMs) to further refine and enhance the encoded features. With the help of FRMs, our model can focus on more facial structures and continuously strengthen different facial structure information. \begin{figure*}[t] \centering \includegraphics[width=18cm]{img/FSAU.png} \caption{The architecture of the proposed Facial Structure Attention Unit (FSAU).} \label{FSAU} \end{figure*} \subsubsection{Decoding Stage} In the decoding stage, we focus on feature utilization and aim to reconstruct high-quality face images. To achieve this, we introduce a novel module, called the Multi-scale Feature Fusion Unit (MFFU). Specifically, the decoder takes the latent features of the LR image as inputs and progressively fuses them through MFFUs to reconstruct the SR representations. As shown in Fig.~\ref{Network structure}, each decoder stage consists of an upsampling block, an MFFU, and an LGCM. Among them, the upsampling block consists of a ${6\times6}$ transposed convolutional layer with stride 2, a LeakyReLU activation function, and a ${3\times3}$ convolution with stride 1, in which the transposed convolutional layer uses stride 2 to extract feature information and increase the spatial size simultaneously. Therefore, each decoder stage halves the number of output feature channels while doubling the size of the output feature maps. It is worth mentioning that each MFFU simultaneously fuses features at the different scales extracted in the encoding stage. Therefore, all local and global features at different scales can be fully used to reconstruct high-quality face images. At the end of the decoding stage, we use a ${3\times3}$ convolutional layer to convert the learned features into the final SR features $I_{Out}$. Finally, the high-quality SR face image is obtained as ${I_{SR}=I_{LR}+I_{Out}}$.
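For reference, a minimal sketch of the two resizing blocks just described is given below; the LeakyReLU slope, the padding values, and the assumption that the channel change happens in the strided (transposed) convolution are our own illustrative choices.
\begin{verbatim}
import torch.nn as nn

def downsample_block(c_in):
    # 3x3 conv (stride 2) -> LeakyReLU -> 3x3 conv (stride 1);
    # halves the spatial size and doubles the channels.
    return nn.Sequential(
        nn.Conv2d(c_in, 2 * c_in, 3, stride=2, padding=1),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(2 * c_in, 2 * c_in, 3, stride=1, padding=1))

def upsample_block(c_in):
    # 6x6 transposed conv (stride 2) -> LeakyReLU -> 3x3 conv (stride 1);
    # doubles the spatial size and halves the channels.
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_in // 2, 6, stride=2, padding=2),
        nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(c_in // 2, c_in // 2, 3, stride=1, padding=1))
\end{verbatim}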
Given a training dataset ${\left\{I_{LR}^{i}, I_{HR}^{i}\right\}_{i=1}^{N}}$, we optimize our CTCNet by minimizing the following pixel-level loss function: \begin{equation} {\mathcal{L}(\Theta) = \frac{1}{N}\sum_{i=1}^{N}\left \|F_{CTCNet}(I_{LR}^{i},\Theta ) -I_{HR}^{i}\right\|_{1}}, \end{equation} where $N$ denotes the number of training images. ${I_{LR}^{i}}$ and ${I_{HR}^{i}}$ are the LR image and the ground-truth HR image of the $i$-th sample, respectively. Meanwhile, $F_{CTCNet}(\cdot)$ and $\Theta$ denote the CTCNet and its network parameters, respectively. \subsection{Local-Global Feature Cooperation Module (LGCM)} As one of the most important modules in CTCNet, the LGCM is designed for local and global feature extraction. As shown in Fig.~\ref{Network structure}, the LGCM consists of a Facial Structure Attention Unit (FSAU) and a Transformer block, which are used for local and global feature extraction, respectively. \subsubsection{Facial Structure Attention Unit (FSAU)} \label{sec32} In FSR, the main challenge is how to extract the key facial features (such as the eyes, eyebrows, and mouth) and make the network pay more attention to them. To achieve this, we propose the Facial Structure Attention Unit (FSAU), which makes our model extract as much useful information as possible for better detail restoration. As shown in Fig.~\ref{FSAU}, the FSAU mainly consists of one Attention Unit and two Adaptive Feature Distillation Units (AFDUs). In the Attention Unit, we use channel attention nested in spatial attention to better extract spatial features and promote channel information interaction, since combining the two attention mechanisms promotes the representation power of the extracted features. Specifically, we first adopt the hourglass structure to capture facial landmark features at multiple scales, since the hourglass structure has been successfully used in human pose estimation and FSR tasks~\cite{chen2017adversarial,newell2016stacked}. After that, in order to make the module focus on the features of the critical facial components, we introduce the channel attention (CA) mechanism~\cite{zhang2018image} to pay more attention to the channels containing landmark features. Then, we use an additional ${3\times3}$ convolutional layer and a Sigmoid function to generate the spatial attention maps of the key components of the face. Finally, to alleviate the problem of vanishing gradients, we also add a residual connection between the input of the hourglass and the output of the CA. In addition, we introduce Adaptive Feature Distillation Units (AFDUs) at the beginning and end of the Attention Unit for local feature extraction. As shown in Fig.~\ref{FSAU} (b), to save memory and parameters, we first use a Reduction operation to halve the number of feature maps and then restore it with an Expansion operation; both operations consist of a ${3\times3}$ convolutional layer. Meanwhile, we apply a concatenation operation to aggregate the input of the Reduction and the output of the Expansion along the channel dimension, followed by a ${1\times1}$ convolutional layer and a ${3\times3}$ convolutional layer. The ${1\times1}$ convolution is used to fully utilize the hierarchical features, while the ${3\times3}$ convolution is dedicated to reducing the number of feature maps. After that, a CA module is employed to highlight the channels with high activation values, and a ${3\times3}$ convolutional layer is used to refine the extracted features.
Finally, the residual learning mechanism~\cite{he2016deep} is also introduced to learn the residual information from the input and stabilize the training. \begin{figure}[t] \centering \includegraphics[width=8cm, trim=0 0 50 0]{img/Transformer.png} \caption{The architecture of (a) the Transformer block, (b) GDFN, and (c) MDTA.} \label{Transformer Block} \end{figure} \subsubsection{Transformer Block} \label{sec33} As mentioned above, the FSAU is mainly designed for local feature extraction. However, this is far from enough to restore high-quality face images, since the global facial structure (such as the face contour) will be ignored due to the limited receptive field of CNNs. To solve this problem, we introduce a Transformer block to collaboratively learn long-range dependencies in the image. Motivated by~\cite{zamir2021restormer}, in the multi-head self-attention part we use Multi-Dconv Head Transposed Attention (MDTA) to alleviate the time and memory complexity issues. Specifically, to make up for the limitations of the Transformer in capturing local dependencies, depth-wise convolution is introduced to enhance the local features before the global attention map is generated. As depicted in Fig.~\ref{Transformer Block} (c), different from the original Transformer block, which obtains the $query$ ($Q$), $key$ ($K$), and $value$ ($V$) directly through linear layers, a ${1\times1}$ convolutional layer is used to aggregate pixel-level cross-channel context and a ${3\times3}$ depth-wise convolutional layer is utilized to encode channel-level spatial context, generating ${Q,K,V\in \mathbb{R} ^{C\times H\times W}}$. Given the input feature $X\in \mathbb{R}^{C\times H\times W}$ and the layer-normalized tensor $X^{'}\in \mathbb{R}^{C\times H\times W}$, we have \begin{equation} {Q = H_{dconv}^{3\times3}(H_{pconv}^{1\times1}(X^{'}))}, \end{equation} \begin{equation} {K = H_{dconv}^{3\times3}(H_{pconv}^{1\times1}(X^{'}))}, \end{equation} \begin{equation} {V = H_{dconv}^{3\times3}(H_{pconv}^{1\times1}(X^{'}))}, \end{equation} where ${H_{pconv}^{1\times1}}(\cdot)$ is the ${1\times1}$ point-wise convolutional layer and ${H_{dconv}^{3\times3}}(\cdot)$ is the ${3\times3}$ depth-wise convolutional layer. By calculating the correlation between $Q$ and $K$, we obtain global attention weights from different locations, thereby capturing the global information. Next, we reshape $Q$, $K$, and $V$ into ${\hat{Q} \in \mathbb{R} ^{C\times HW}}$, ${\hat{K} \in \mathbb{R} ^{HW\times C}}$, and ${\hat{V} \in \mathbb{R} ^{C\times HW}}$, respectively. Thus the dot-product interaction of ${\hat{Q}}$ and ${\hat{K}}$ generates a transposed-attention map of size ${\mathbb{R}^{C\times C}}$, rather than the huge size ${\mathbb{R}^{HW\times HW}}$. After that, the global attention weights are multiplied with $\hat{V}$ to obtain the weighted integrated features $X_{w}\in \mathbb{R}^{C\times HW}$; since $Q$, $K$, and $V$ are generated with depth-wise convolutions, the weighted features also retain valuable local context. Finally, we reshape $X_{w}$ into $\hat{X_{w}}\in \mathbb{R}^{C\times H\times W}$ and use a ${1\times1}$ convolutional layer to realize feature communication. The above procedure can be formulated as follows: \begin{equation} {X_{w} = \operatorname{Softmax}(\hat{Q}\cdot\hat{K}/\sqrt{d})} \cdot \hat{V}, \end{equation} \begin{equation} {Y_{M} = H_{pconv}^{1\times 1}(R(X_{w}))}, \end{equation} where ${Y_{M}}$ denotes the output of MDTA and ${R(\cdot)}$ stands for the reshaping operation.
Here, ${\sqrt{d}}$ is a temperature parameter that controls the magnitude of the dot product of ${\hat{Q}}$ and ${\hat{K}}$ before the Softmax function is applied. \begin{figure}[t] \centering \includegraphics[width=9.3cm]{img/FEU.png} \caption{The architecture of the proposed FEU.} \label{FEU} \end{figure} \begin{figure*}[t] \centerline{\includegraphics[width=16cm]{img/MFFU.png}} \caption{Schematic diagram of how the Multi-scale Feature Fusion Unit (MFFU) aggregates features from different scales.} \label{MFFU} \end{figure*} At the same time, we also introduce depth-wise convolutions into the Gated-Dconv Feed-Forward Network (GDFN) to encode information from spatially neighboring pixel positions, which is responsible for learning local image structures for effective restoration. Given the input ${x}$, we have \begin{equation} {x^{'} = H_{dconv}^{3\times 3} (H_{pconv}^{1\times 1}(x))}, \end{equation} \begin{equation} {Y_{G} = H_{pconv}^{1\times 1} (x^{'} \cdot \phi (x^{'}))}, \end{equation} where ${\phi}$ denotes the GELU non-linearity~\cite{Dan2016gauss} and $Y_{G}$ denotes the output of the GDFN. With the help of the FSAU and the Transformer block, the LGCM is able to capture both local features and global relationships of faces, which benefits high-quality image reconstruction. \subsection{Feature Refinement Module (FRM)} \label{sec34} In the bottleneck stage, we introduce well-designed Feature Refinement Modules (FRMs) to continuously refine and enhance the important encoded features of the face. As shown in Fig.~\ref{Network structure}, each FRM encompasses an FSAU and a Feature Enhancement Unit (FEU). To reduce the computational burden and feature redundancy of the network, we use a double-branch structure in the FEU. As shown in Fig.~\ref{FEU}, the first branch mainly uses AFDUs to extract information at the original scale, while the second branch extracts features from the downsampled feature maps, which are then upsampled and fused with the outputs of the first branch. In comparison with general residual learning, we also add a feature self-calibration path to the residual connection to fully mine the hierarchical features and stabilize the training. The above operations can be expressed as \begin{equation} F_{in}^{\prime}=f_{a}\left(F_{in}\right), \quad F_{low}^{\prime}=f_{a}\left(\downarrow F_{in}\right), \quad F_{low}^{\prime \prime}=f_{a}(F_{low}^{\prime}), \end{equation} \begin{equation} F_{in}^{\prime\prime}=H_{conv}^{1\times1}\left(H_{cat}\left(f_{a}\left(F_{in}^{\prime}\right),\uparrow f_{a}\left(F_{low}^{\prime}\right)\right)\right), \end{equation} \begin{equation} F_{in}^{\prime\prime\prime}=H_{conv}^{1\times1}\left(H_{cat}\left(f_{a}\left(F_{in}^{\prime\prime}\right), \uparrow f_{a}\left(F_{low}^{\prime \prime}\right)\right)\right), \end{equation} \begin{equation} F_{out}=f_{a}\left(F_{in}^{\prime \prime \prime}\right)+F_{in} \cdot \sigma\left(H_{conv}^{1 \times 1}\left(F_{in}\right)\right), \end{equation} where ${f_{a}(\cdot)}$ denotes the operation of the AFDU, ${H_{cat}(\cdot)}$ indicates the feature concatenation operation along the channel dimension, ${H_{conv}^{1 \times 1}(\cdot)}$ stands for the ${1\times1}$ convolutional layer, and ${\sigma}$ denotes the Sigmoid function.
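As an illustration, the FEU equations above translate into the following PyTorch sketch. The AFDU is replaced by a plain convolutional stand-in, and average pooling and bilinear interpolation are assumed for the $\downarrow$ and $\uparrow$ operators, which the text does not pin down; spatial sizes are assumed even.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class FEU(nn.Module):
    def __init__(self, c):
        super().__init__()
        def afdu():  # stand-in for the AFDU of Fig. FSAU(b)
            return nn.Sequential(nn.Conv2d(c, c, 3, 1, 1),
                                 nn.LeakyReLU(0.2, inplace=True))
        self.a1, self.a2, self.a3, self.a4 = afdu(), afdu(), afdu(), afdu()
        self.low1, self.low2, self.low3 = afdu(), afdu(), afdu()
        self.fuse1 = nn.Conv2d(2 * c, c, 1)   # H_conv^{1x1} giving F_in''
        self.fuse2 = nn.Conv2d(2 * c, c, 1)   # H_conv^{1x1} giving F_in'''
        self.calib = nn.Conv2d(c, c, 1)       # feature self-calibration path

    def forward(self, x):
        up = lambda t: F.interpolate(t, scale_factor=2, mode='bilinear',
                                     align_corners=False)
        f1 = self.a1(x)                               # F_in'
        low1 = self.low1(F.avg_pool2d(x, 2))          # F_low' (down: avg-pool)
        low2 = self.low2(low1)                        # F_low'' = f_a(F_low')
        f2 = self.fuse1(torch.cat([self.a2(f1), up(low2)], 1))  # F_in''
        f3 = self.fuse2(torch.cat([self.a3(f2),
                                   up(self.low3(low2))], 1))    # F_in'''
        # F_out = f_a(F_in''') + F_in * sigmoid(conv1x1(F_in))
        return self.a4(f3) + x * torch.sigmoid(self.calib(x))
\end{verbatim}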
\subsection{Multi-scale Feature Fusion Unit (MFFU)} \label{sec35} In order to make full use of the multi-scale features extracted in the encoding stage, we introduce a multi-scale feature fusion scheme in the decoding stage to give the network better feature propagation and representation capabilities. Specifically, our main goal is to explore and exploit the features from the encoding stage during the decoding process. However, the sizes of these features are different, so how to integrate them effectively is critically important. Taking an input image of size ${128\times128}$ as an example, the feature maps obtained in the encoding stages have sizes ${128\times128}$, ${64\times64}$, and ${32\times32}$, respectively, while the feature maps in the decoding stage have sizes ${32\times32}$, ${64\times64}$, and ${128\times128}$, successively. To solve this problem, we design the Multi-scale Feature Fusion Unit (MFFU). The schematic diagram of how the MFFU aggregates features from different scales is shown in Fig.~\ref{MFFU}. As shown in the figure, we first use upsampling and downsampling operations to rescale the feature maps with inconsistent sizes. After unifying the sizes of all feature maps, we concatenate the four types of feature maps along the channel dimension. Then, we use a ${1\times1}$ convolutional layer to generate the preliminary fusion result. Finally, we assign an attention weight to each channel through the CA mechanism. Based on the size of the feature maps, the fusion scheme can be divided into three situations. For the sake of simplicity, we only give the formulation of the case in Fig.~\ref{MFFU} (b), which can be defined as: \begin{equation} {E_{128_{-}64}=H_{conv}^{k3s2}\left(E_{128}\right)}, \end{equation} \begin{equation} {E_{32_{-}64}=H_{deconv}^{k6s2p2}\left(E_{32}\right)}, \end{equation} \begin{equation} De_{64}^{\prime}=H_{{conv}}^{k1s 1}\left(H_{cat}\left(E_{128\_64}, E_{32\_64}, E_{64}, D_{64}\right)\right), \end{equation} \begin{equation} De_{64}=CA\left(De_{64}^{\prime}\right), \end{equation} where ${E_{k}(k=32,64,128)}$ represents the feature maps from the previous three encoding stages with the size of ${k\times k}$, and ${D_{64}}$ represents the original feature maps of the current decoder stage with the size of ${64\times 64}$. ${E_{m\_n}}$ indicates that the size of the feature maps has changed from ${m\times m}$ to ${n\times n}$. ${H_{conv}^{k3s2}(\cdot)}$ denotes the ${3\times3}$ convolution operation with stride 2, while ${H_{deconv}^{k6s2p2}(\cdot)}$ denotes the ${6\times6}$ transposed convolution operation with stride 2 and padding 2. ${H_{cat}(\cdot)}$ denotes the concatenation operation along the channel dimension. ${De_{64}^{'}}$ represents the preliminary fusion result and ${De_{64}}$ the final fusion result.
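A sketch of this ${64\times64}$ fusion case in PyTorch is given below. The channel widths assigned to the rescaling convolutions (the equations leave them unspecified; we follow the encoder's $C$, $2C$, $4C$ doubling scheme) and the reduction ratio inside the CA block are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style CA; reduction ratio 16 is an assumption.
    def __init__(self, c, reduction=16):
        super().__init__()
        self.weight = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c // reduction, c, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.weight(x)

class MFFU64(nn.Module):
    # Fusion at the 64x64 decoder stage, i.e. case (b) of the MFFU figure.
    def __init__(self, c):
        super().__init__()
        self.down = nn.Conv2d(c, 2 * c, 3, stride=2, padding=1)   # E_128 -> 64
        self.up = nn.ConvTranspose2d(4 * c, 2 * c, 6, stride=2,
                                     padding=2)                   # E_32 -> 64
        self.fuse = nn.Conv2d(8 * c, 2 * c, 1)                    # 1x1 fusion
        self.ca = ChannelAttention(2 * c)

    def forward(self, e128, e64, e32, d64):
        x = torch.cat([self.down(e128), self.up(e32), e64, d64], dim=1)
        return self.ca(self.fuse(x))   # De_64 = CA(De'_64)
\end{verbatim}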
\subsection{Model Extension} \label{sec36} As we know, Generative Adversarial Networks (GANs) have been proven effective in recovering photo-realistic images~\cite{ledig2017photo,wang2018esrgan}. Therefore, we also extend our model with a GAN and propose an extended model, named the CNN-Transformer Cooperation Generative Adversarial Network (CTCGAN). In CTCGAN, we use our CTCNet as the generative model and adopt the discriminative model in the conditional manner of~\cite{isola2017image}. The loss function adopted in training the CTCGAN consists of three parts: \subsubsection{Pixel Loss} As in CTCNet, we use the pixel-level loss to constrain the low-level information between the SR image and the HR image. It can be defined as \begin{equation} {\mathcal{L}_{pix}=\frac{1}{N} \sum_{i=1}^{N}\left\|G(I_{LR}^{i})-I_{HR}^{i}\right\|_{1}}, \end{equation} where $G(\cdot)$ indicates the CTCGAN generator. \subsubsection{Perceptual Loss} The perceptual loss is mainly used to promote the perceptual quality of the reconstructed SR images. Specifically, we use a pre-trained face recognition VGG19~\cite{simonyan2014very} to extract facial features, which allows us to calculate the feature-level similarity of the two images.
The perceptual loss can be defined as \begin{equation} {\mathcal{L}_{pcp}=\frac{1}{N}\sum_{i=1}^{N}\sum_{l=1}^{L_{VGG}}\frac{1}{M_{VGG}^{l}}\left\|f_{VGG}^{l}\left(I_{SR}^{i}\right)-f_{VGG}^{l}\left(I_{HR}^{i}\right)\right\|_{1}}, \end{equation} where $f_{VGG}^{l}(\cdot)$ is the $l$-th layer in $VGG$, $L_{VGG}$ denotes the total number of layers in $VGG$, and $M_{VGG}^{l}$ indicates the number of elements in $f_{VGG}^{l}$. \subsubsection{Adversarial Loss} The principle of a GAN is that the generator $G$ strives to create fake images, while the discriminator $D$ tries to distinguish them from real ones. In other words, the discriminator $D$ aims to distinguish the super-resolved SR image from the HR image by minimizing \begin{equation} {\mathcal{L}_{dis}=-\mathbb{E}\left[\log\left(D\left(I_{H R}\right)\right)\right]-\mathbb{E}\left[\log\left(1-D\left(G\left(I_{L R}\right)\right)\right)\right]}. \end{equation} In turn, the generator tries to minimize \begin{equation} \mathcal{L}_{adv}=-\mathbb{E}\left[\log \left(D\left(G\left(I_{L R}\right)\right)\right)\right]. \end{equation} Therefore, CTCGAN is optimized by minimizing the following overall objective function: \begin{equation} {\mathcal{L}=\lambda _{pix}\mathcal{L}_{pix}+\lambda_{pcp}\mathcal{L}_{pcp}+\lambda_{adv}\mathcal{L}_{adv}}, \end{equation} where $\lambda _{pix}$, $\lambda_{pcp}$, and $\lambda_{adv}$ indicate the trade-off parameters for the pixel loss, the perceptual loss, and the adversarial loss, respectively. \section{Experiments} \label{sec4} \subsection{Datasets} \label{sec41} In our experiments, we use CelebA~\cite{liu2015deep} for training and evaluate the model on the Helen~\cite{le2012interactive} and SCface~\cite{grgic2011scface} datasets. The heights and widths of the face images in CelebA are inconsistent. Therefore, we crop each image around its center point and resize it to ${128\times128}$ pixels, which is used as the HR image. We then downsample these HR images to ${16\times16}$ pixels via bicubic interpolation and treat them as the LR inputs. We use 18,000 samples of the CelebA dataset for training, 200 samples for validation, and 1,000 samples for testing. Furthermore, we also directly test the model trained on CelebA on the Helen and SCface datasets. \subsection{Implementation Details} \label{sec42} We implement our model using the PyTorch framework. Meanwhile, we optimize our model with Adam, setting ${\beta _{1} = 0.9}$ and ${\beta _{2} = 0.99}$. The initial learning rate is set to ${2\times 10^{-4}}$. For CTCGAN, we empirically set ${\lambda _{pix} =1}$, ${\lambda _{pcp} =0.01}$, and ${\lambda _{adv} =0.01}$. We also use Adam to optimize both $G$ and $D$, with ${\beta _{1} = 0.9}$ and ${\beta _{2} = 0.99}$. The learning rates of $G$ and $D$ are set to ${1\times 10^{-4}}$ and ${4\times 10^{-4}}$, respectively. To assess the quality of the SR results, we employ four objective image quality assessment metrics: Peak Signal to Noise Ratio (PSNR), Structural Similarity (SSIM)~\cite{wang2004image}, Learned Perceptual Image Patch Similarity (LPIPS)~\cite{zhang2018unreasonable}, and Visual Information Fidelity (VIF)~\cite{sheikh2006image}.
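To connect the three loss terms, a condensed PyTorch sketch of the CTCGAN objectives with the trade-off weights listed above is shown below; \texttt{disc} is assumed to output probabilities in $(0,1)$, and \texttt{vgg\_feats} is a stand-in for the pre-trained VGG19 feature extractor (both are assumptions, not our released code).
\begin{verbatim}
import torch
import torch.nn.functional as F

def generator_loss(sr, hr, disc, vgg_feats,
                   lam_pix=1.0, lam_pcp=0.01, lam_adv=0.01):
    eps = 1e-8
    l_pix = F.l1_loss(sr, hr)                     # pixel loss (L1)
    # Perceptual loss: L1 in VGG feature space; the elementwise mean plays
    # the role of the 1/M^l normalisation.
    l_pcp = sum(F.l1_loss(fs, fh)
                for fs, fh in zip(vgg_feats(sr), vgg_feats(hr)))
    l_adv = -torch.log(disc(sr) + eps).mean()     # -E[log D(G(I_LR))]
    return lam_pix * l_pix + lam_pcp * l_pcp + lam_adv * l_adv

def discriminator_loss(disc, sr, hr):
    eps = 1e-8
    return (-torch.log(disc(hr) + eps).mean()     # -E[log D(I_HR)]
            - torch.log(1 - disc(sr.detach()) + eps).mean())
\end{verbatim}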
\begin{table}[t] \begin{center} \caption{Verification of the effectiveness of LGCM.} \setlength{\tabcolsep}{3mm} \renewcommand\arraystretch{1} \begin{tabular}{c|cccc} \hline Methods & PSNR${\uparrow}$ & SSIM${\uparrow}$ & VIF${\uparrow}$ & LPIPS${\downarrow}$ \\ \cline{2-5} \hline \hline w/o LGCM & 27.56 &0.7867 & 0.4487 &0.2051 \\ LGCM w/o TB & 27.82 &0.7964 &0.4707 &0.1833 \\ LGCM w/o FSAU & 27.83 &0.7972 & 0.4637 &0.1845 \\ LGCM &\bf{27.90} &\bf{0.7980} &\bf{0.4721} &\bf{0.1797} \\ \hline \end{tabular} \label{Effects of LGCM} \end{center} \end{table} \subsection{Ablation Studies} \label{sec43} \subsubsection{Effectiveness of LGCM} The LGCM is the most important module in CTCNet, designed to extract the local features and global relationships of the image. At the same time, it is a new attempt to combine CNN and Transformer structures. To verify the effectiveness of the LGCM and the feasibility of this combination, we carry out a series of ablation studies in this part. As described above, the LGCM contains an FSAU and a Transformer Block (TB). Therefore, we design three modified models. The first model removes all LGCMs in the encoding and decoding stages, marked as ``w/o LGCM''. The second model removes all FSAUs while retaining the Transformer Blocks, marked as ``LGCM w/o FSAU''. The third model removes all Transformer Blocks while retaining the FSAUs, marked as ``LGCM w/o TB''. In Table~\ref{Effects of LGCM}, we show the results of these modified networks. According to the table, we have the following observations: (a) By comparing the first and the last lines in Table~\ref{Effects of LGCM}, we can observe that the introduced LGCM significantly improves the performance of the model, which fully verifies its effectiveness; (b) By comparing the first three lines, we can see that the performance of the model can also be improved by introducing the FSAU or the TB alone. This is because both the local features and the global relationships of the image are helpful for image reconstruction; (c) By comparing the last three lines, we can clearly observe that both the FSAU and the TB play a unique role in the FSR task. This is because the FSAU captures local details while the TB simultaneously captures global facial structures, providing complementary information for the final SR image reconstruction; using only one of them cannot achieve the best results. This further verifies the effectiveness of the LGCM and the feasibility of combining CNNs with Transformers. \subsubsection{Effectiveness of FRM} To evaluate the effectiveness of the FRM, we vary the number of FRMs in the bottleneck stage. In this part, we gradually increase the number of FRMs and denote the model with $N$ FRMs as CTCNet-VN, where ${N\in\left\{ 0,2,4,6\right\}}$. From Table~\ref{Numbers of LFRM}, we can observe that the model achieves the worst results when all FRMs are removed (CTCNet-V0). This illustrates the necessity of the FRM in CTCNet. Meanwhile, it can be observed that the model performance improves as the number of FRMs increases within a certain range. However, we also notice that when the number of FRMs exceeds $4$, the performance decreases while the model size keeps growing. Therefore, we set $N=4$ to achieve a good balance between model performance and size. Meanwhile, from Fig.~\ref{fNumbers of LFRM}, we can intuitively see that as the number of FRMs gradually increases from $0$ to $4$, the facial contours gradually become clearer, which fully demonstrates the effectiveness of stacking multiple FRMs.
\begin{table}[t] \begin{center} \caption{Performance with different numbers of FRMs.} \setlength{\tabcolsep}{2.8mm} \renewcommand\arraystretch{1} \begin{tabular}{c|cccc} \hline Methods &PSNR/SSIM${\uparrow}$ &VIF${\uparrow}$ &LPIPS${\downarrow}$ &Parameters${\downarrow}$ \\ \cline{2-5} \hline \hline CTCNet-V0 &27.77/0.7954 &0.4683 &0.1856 &\bf{10.416M} \\ CTCNet-V2 &27.83/0.7965 &0.4692 &0.1858 &16.014M \\ CTCNet-V4 &\bf{27.87/0.7979} &\bf{0.4728} &\bf{0.1834} &21.613M \\ CTCNet-V6 &27.85/0.7967 &0.4691 &0.1872 &27.212M \\ \hline \end{tabular} \label{Numbers of LFRM} \end{center} \end{table} \begin{figure}[t] \centering \includegraphics[width=0.99\columnwidth]{img/Number_of_LFRM.png} \caption{Visual comparisons with different numbers of FRMs on the CelebA dataset for $\times 8$ SR.} \label{fNumbers of LFRM} \end{figure} \begin{table}[t] \begin{center} \caption{Performance of different feature fusion methods in the MFFU. The last line is the strategy used in our final model.} \setlength{\tabcolsep}{4.1mm} \renewcommand\arraystretch{1} \begin{tabular}{cccc|cc} \hline MSC &Concat &Add &CA &PSNR${\uparrow}$ &SSIM${\uparrow}$ \\ \hline \hline ${\times}$ & ${\times}$ & ${\times}$ & ${\times}$ & 27.76 &0.7961 \\ $\surd$ & $\surd$ & ${\times}$ & ${\times}$ & 27.84 &0.7969 \\ $\surd$ & ${\times}$ & $\surd$ & ${\times}$ & 27.82 &0.7955 \\ $\surd$ & ${\times}$ & $\surd$ & $\surd$ & 27.83 &0.7960 \\ $\surd$ & $\surd$ & ${\times}$ & $\surd$ & \bf{27.87} &\bf{0.7979}\\ \hline \end{tabular} \label{Effects of MFFU} \end{center} \end{table} \subsubsection{Effectiveness of MFFU} The MFFU is specially designed for multi-scale feature fusion. In this part, we conduct a series of experiments to demonstrate the effects of the Multi-Scale Connections (MSC) and of various feature fusion methods in the MFFU. The first experiment verifies the necessity of the MSC. The second and third experiments preserve the MSC but only use the concatenation or addition operation to achieve multi-scale feature fusion. The last two experiments use channel attention to reweight the channels after the concatenation or addition operation. From Table~\ref{Effects of MFFU}, it can be observed that: (a) using a multi-scale feature fusion strategy effectively improves model performance, which proves the importance of multi-scale features for image reconstruction; (b) using the Channel Attention (CA) mechanism has a positive effect on model performance; (c) the effect of combining the concatenation operation and CA is apparent. This further verifies that adopting a suitable feature fusion strategy provides substantial help for the subsequent reconstruction process. \begin{table}[t] \begin{center} \caption{Study of each component in FSAU.} \setlength{\tabcolsep}{4mm} \renewcommand\arraystretch{1} \begin{tabular}{cc|cccc} \hline CA &SA &PSNR${\uparrow}$ &SSIM${\uparrow}$ &VIF${\uparrow}$ &LPIPS${\downarrow}$ \\ \cline{2-5} \hline \hline ${\times}$ &${\times}$ &27.80 &0.7989 &0.4701 &0.1874 \\ $\surd$ &${\times}$ &27.83 &0.7966 &0.4673 &0.1881 \\ ${\times}$ &$\surd$ &27.82 &0.7964 &0.4676 &0.1908 \\ $\surd$ &$\surd$ &\bf{27.87} &\bf{0.7979} &\bf{0.4728} &\bf{0.1834} \\ \hline \end{tabular} \label{Effects of FSAU} \end{center} \end{table} \subsubsection{Study of FSAU} In the FSAU, we nest the channel attention mechanism inside the spatial attention mechanism to better extract spatial features and promote channel information interaction.
To prove the effectiveness of this nested structure, we remove the channel attention and the spatial attention respectively and perform ablation studies. Table~\ref{Effects of FSAU} shows the benefits brought by the channel and spatial attention mechanisms. Adding channel attention or spatial attention alone only slightly improves the PSNR value, by 0.03dB and 0.02dB, respectively. However, when the nested structure is used, the PSNR value increases from 27.80dB to 27.87dB. Therefore, we can conclude that better performance is gained by applying the channel and spatial attention mechanisms simultaneously. \begin{table}[t] \begin{center} \caption{Study of each component in FEU.} \setlength{\tabcolsep}{3.5mm} \renewcommand\arraystretch{1} \begin{tabular}{c|cccc} \hline Methods & PSNR${\uparrow}$ & SSIM${\uparrow}$ & VIF${\uparrow}$ & LPIPS${\downarrow}$ \\ \cline{2-5} \hline \hline FEU w/o AFDU & 27.77 &0.7947 & 0.4628 &0.1952 \\ FEU w/o path & 27.80 &0.7959 & 0.4659 &0.1907 \\ FEU w/o dual & 27.81 &0.7951 &0.4679 &0.1933 \\ FEU &\bf{27.87} &\bf{0.7979} &\bf{0.4728} &\bf{0.1834} \\ \hline \end{tabular} \label{Effects of FEU} \end{center} \end{table} \begin{table}[t] \begin{center} \caption{Comparison results of GAN-based methods for 8$\times$ SR on the CelebA and Helen test sets.} \setlength{\tabcolsep}{2.2mm} \renewcommand\arraystretch{1} \begin{tabular}{c|c|cccc} \hline Methods & DataSet & PSNR${\uparrow}$ & SSIM${\uparrow}$ &FID${\downarrow}$ &VIF${\uparrow}$ \\ \cline{3-6} \hline \hline FSRGAN &\multirow{3}{*}{CelebA} &26.49 &0.7719 &30.60 &0.3857 \\ SPARNetHD &&27.08 &0.7661 &29.07 &0.4202 \\ CTCGAN (Ours) && \textbf{27.78} & \textbf{0.7898} & \textbf{25.96} & \textbf{0.4367} \\ \hline \hline FSRGAN & \multirow{4}{*}{Helen} &25.02 &0.7279 &146.55 &0.3400 \\ DICGAN &&25.59 &0.7398 &144.25 &0.3925 \\ SPARNetHD &&25.86 &0.7518 &149.54 &0.3932 \\ CTCGAN (Ours) &&\textbf{26.41} &\textbf{0.7776} & \textbf{118.05} & \textbf{0.4112} \\ \hline \end{tabular} \label{Tab_GAN} \end{center} \vspace{-0.2cm} \end{table} \subsubsection{Study of FEU} The FEU is an essential part of the FRM, which uses a double-branch structure to enhance feature extraction. As mentioned earlier, the FEU mainly includes several AFDUs and a feature self-calibration path. In this part, we conduct three ablation experiments to verify the effectiveness of the AFDU, the dual-branch structure, and the feature self-calibration path in the FEU. From Table~\ref{Effects of FEU}, we can see that: (a) if we do not use the AFDU in the FEU, the performance drops sharply; the usage of the AFDU increases the PSNR value by 0.1dB; (b) compared with a simple single-branch structure (without the downsampling and upsampling operations), using the dual-branch structure improves the PSNR value by 0.06dB, further verifying that multi-scale feature extraction often yields better feature representations; (c) the usage of the feature self-calibration path increases the PSNR value by 0.07dB, since this path highlights the helpful features with higher activation values.
\begin{table*}[t] \centering \caption{Quantitative comparisons for $\times$8 SR on the CelebA and Helen test sets.} \setlength{\tabcolsep}{3mm} \renewcommand\arraystretch{1} \scalebox{1}{ \begin{tabular}{p{2cm}|p{1.3cm}p{1.3cm}p{1.3cm}p{1.3cm}|p{1.3cm}p{1.3cm}p{1.3cm}p{1.3cm}p{1.3cm}} \toprule \multirow{2}{*}{Methods} & \multicolumn{4}{c|}{$CelebA$} &\multicolumn{4}{c}{$Helen$}\\ & PSNR${\uparrow}$ & SSIM${\uparrow}$ & VIF${\uparrow}$ & LPIPS${\downarrow}$ &PSNR${\uparrow}$ & SSIM${\uparrow}$ & VIF${\uparrow}$ & LPIPS${\downarrow}$ \\ \hline \hline Bicubic & 23.61 & 0.6779 & 0.1821 & 0.4899 & 22.95 & 0.6762 & 0.1745 & 0.4912 \\ SAN~\cite{dai2019second} & 27.43 & 0.7826 & 0.4553 & 0.2080 & 25.46 & 0.7360 & 0.4029 & 0.3260 \\ RCAN~\cite{zhang2018image} & 27.45 & 0.7824 & 0.4618 & 0.2205 & 25.50 & 0.7383 & 0.4049 & 0.3437 \\ HAN~\cite{niu2020single} & 27.47 & 0.7838 & 0.4673 & 0.2087 & 25.40 & 0.7347 & 0.4074 & 0.3274 \\ SwinIR~\cite{liang2021swinir} & 27.88 & 0.7967 & 0.4590 & 0.2001 & 26.53 & 0.7856 & 0.4398 & 0.2644 \\ FSRNet~\cite{chen2018fsrnet} & 27.05 & 0.7714 & 0.3852 & 0.2127 & 25.45 & 0.7364 & 0.3482 & 0.3090 \\ DICNet~\cite{ma2020deep} & - & - & - & - & 26.15 & 0.7717 & 0.4085 & 0.2158 \\ FACN~\cite{xin2020facial} & 27.22 & 0.7802 & 0.4366 & 0.1828 & 25.06 & 0.7189 & 0.3702 & 0.3113 \\ SPARNet~\cite{chen2020learning} & 27.73 & 0.7949 & 0.4505 & 0.1995 & 26.43 & 0.7839 & 0.4262 & 0.2674 \\ SISN~\cite{lu2021face} & 27.91 & 0.7971 & 0.4785 & 0.2005 & 26.64 & 0.7908 & 0.4623 & 0.2571 \\ \midrule CTCNet (Ours) &{\bf28.37} &{\bf0.8115} &{\bf0.4927} &{\bf0.1702} &{\bf27.08} &{\bf0.8077} &{\bf0.4732} &{\bf0.2094} \\ \bottomrule \end{tabular} } \label{compare_CelebA_Helen} \end{table*} \subsection{Comparison with Other Methods} \label{sec44} In this part, we compare our CTCNet with other state-of-the-art (SOTA) methods, including the general image SR methods SAN~\cite{dai2019second}, RCAN~\cite{zhang2018image}, and HAN~\cite{niu2020single}; the recent FSR methods FSRNet~\cite{chen2018fsrnet}, DICNet~\cite{ma2020deep}, FACN~\cite{xin2020facial}, SPARNet~\cite{chen2020learning}, and SISN~\cite{lu2021face}; and the pioneering Transformer-based image restoration method SwinIR~\cite{liang2021swinir}. For a fair comparison, all models are trained on the same CelebA dataset. \begin{figure*}[t] \centerline{\includegraphics[width=18cm]{img/compare_CelebA_0213.png}} \caption{Visual comparisons for $\times$8 SR on the CelebA test set. Obviously, our CTCNet can reconstruct clearer face images.} \label{compare_CelebA} \end{figure*} \begin{figure*}[t] \centerline{\includegraphics[width=18cm]{img/compare_Helen_0213.png}} \caption{Visual comparisons for $\times$8 SR on the Helen test set. Obviously, our CTCNet can reconstruct clearer face images.} \label{compare_Helen} \end{figure*} \subsubsection{Comparison on CelebA dataset} The quantitative comparisons with other SOTA methods on the CelebA test set are provided in Table~\ref{compare_CelebA_Helen}. From the table, we can see that CTCNet significantly outperforms the competing methods in terms of PSNR, SSIM, VIF, and LPIPS, which fully verifies its effectiveness. Meanwhile, from the visual comparisons in Fig.~\ref{compare_CelebA}, we can see that most of the previous methods cannot clearly restore the eyes and nose, while our CTCNet better restores the facial structures and generates more precise results. The reconstructed face images are closer to the real HR images, which further proves the effectiveness and excellence of CTCNet.
\subsubsection{Comparison on Helen dataset} In this part, we directly use the model trained on the CelebA dataset to test performance on the Helen test set, in order to study the generality of CTCNet. Table~\ref{compare_CelebA_Helen} lists the quantitative experimental results on the Helen test set for $\times$8 SR. According to the table, our CTCNet still achieves the best results on the Helen dataset. From Fig.~\ref{compare_Helen}, we can also observe that the performance of most competing methods degrades sharply: they cannot restore faithful facial details, and the face shapes are blurred. On the contrary, our CTCNet can still restore realistic facial contours and facial details. This further verifies the effectiveness and generality of CTCNet. \begin{figure} \centering \includegraphics[width=1\columnwidth]{img/compare_GAN.png} \caption{Visual comparison of different GAN-based methods on the Helen test set. Obviously, our CTCGAN can reconstruct high-quality face images with clear facial components.} \label{compare_GAN} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{img/compare_SCface.png} \caption{Visual comparison of the respective methods on real-world surveillance scenarios for $\times$8 SR. Obviously, our CTCNet can reconstruct clearer and more accurate eyes.} \label{compare_SCface} \end{figure} \subsubsection{Comparison with GAN-based methods} As mentioned above, we also propose an extended model named CTCGAN. In this part, we compare our CTCGAN with three popular GAN-based FSR models: FSRGAN~\cite{chen2018fsrnet}, DICGAN~\cite{ma2020deep}, and SPARNetHD~\cite{chen2020learning}. As is well known, GAN-based SR methods usually have superior visual quality but lower quantitative scores (such as PSNR and SSIM). Therefore, we also introduce the Fr\'echet Inception Distance (FID)~\cite{obukhov2020quality} as an additional metric to evaluate the performance of GAN-based SR methods. In Table~\ref{Tab_GAN}, we provide the quantitative comparisons of these models on the CelebA and Helen test sets. Obviously, our CTCGAN gains much better performance than the other methods in terms of PSNR, SSIM, FID, and VIF. Meanwhile, the qualitative comparisons on the Helen test set are provided in Fig.~\ref{compare_GAN}. According to the figure, the competing methods cannot generate realistic faces and produce undesirable artifacts and noise. In contrast, our CTCGAN can restore the key facial components and the texture details in the mouth and eyes. This fully demonstrates the effectiveness and excellence of our CTCGAN. \subsubsection{Comparison on real-world surveillance faces} As we know, restoring face images from real-world surveillance scenarios is still a huge challenge. All the above experiments are simulated cases, which cannot fully mimic real-world scenarios. To further verify the effectiveness of our CTCNet, we also conduct experiments on real-world low-quality face images selected from the SCface dataset~\cite{grgic2011scface}. The images in SCface are captured by surveillance cameras and inherently have low resolutions, hence no manual downsampling is required.
\begin{table*}[t] \begin{center} \caption{Comparison results for the average similarity of face images super-resolved by different methods.} \setlength{\tabcolsep}{2.5mm} \renewcommand\arraystretch{1} \scalebox{1}{ \begin{tabular}{p{1.5cm}|p{1.1cm}|p{1.1cm}|p{1.1cm}|p{1.1cm}|p{1.1cm}|p{1.1cm}|p{1.1cm}|p{1.1cm}|p{1.1cm}|p{1.1cm}} \hline \multirow{2}{*}{Methods} & \multicolumn{10}{c}{Average Similarity} \\ \cline{2-11} &Case 1 &Case 2 &Case 3 &Case 4 &Case 5 &Case 6 &Case 7 &Case 8 &Case 9 &Case 10\\ \cline{2-11} \hline \hline SAN~\cite{dai2019second} &0.8897 &0.9061 &0.9029 &0.8940 &0.8889 &0.9061 &0.9042 &0.8844 &0.9026 &0.9107 \\ RCAN~\cite{zhang2018image} &0.8927 &0.9000 &0.9038 &0.8957 &0.8963 &0.9090 &0.9028 &0.8807 &0.9045 &0.9064 \\ HAN~\cite{niu2020single} &0.8909 &0.9096 &0.8977 &0.9074 &0.8914 &0.9020 &0.9061 &0.8740 &0.8950 &0.9121 \\ SwinIR~\cite{liang2021swinir} &0.9087 &0.9196 &0.8991 &0.9079 &0.9105 &0.9040 &0.9119 &0.8939 &0.9080 &0.9093 \\ FSRNet~\cite{chen2018fsrnet} &0.8996 &0.8844 &0.9017 &0.8971 &0.8927 &0.9061 &0.8908 &0.8977 &0.9040 &0.9064 \\ DICNet~\cite{ma2020deep} &0.8859 &0.8814 &0.8692 &0.8760 &0.8736 &0.8755 &0.8837 &0.8743 &0.8687 &0.8914 \\ FACN~\cite{xin2020facial} &0.9048 &0.9009 &0.9040 &0.9017 &0.9058 &0.8985 &0.8970 &0.8906 &0.8687 &0.9007 \\ SPARNet~\cite{chen2020learning} &0.9089 &0.9188 &0.8995 &0.9015 &0.9075 &0.8980 &0.9077 &0.9067 &0.9025 &0.9142 \\ SISN~\cite{lu2021face} &0.9127 &0.9206 &0.9086 &0.9049 &0.9080 &0.8999 &0.9175 &0.9098 &0.9060 &0.9227 \\ \hline CTCNet &\bf{0.9278} &\bf{0.9219} &\bf{0.9129} &\bf{0.9165} &\bf{0.9243} &\bf{0.9194} &\bf{0.9228} &\bf{0.9136} &\bf{0.9106} &\bf{0.9280} \\ \hline \end{tabular}} \label{Average_Similarity} \end{center} \end{table*} In this part, we try to restore SR face images with more texture details and good facial structures. A visual comparison of the reconstruction performance is given in Fig.~\ref{compare_SCface}. We can see that the facial-prior-based methods reconstruct unsatisfactory results. The reason may be that estimating accurate priors from real-world LR face images is difficult, and inaccurate prior information brings misleading guidance to the reconstruction process. In comparison, benefiting from the CNN-Transformer cooperation mechanism, which is the prominent difference between CTCNet and other methods, our CTCNet can recover cleaner facial details and more faithful facial structures. We also verify the superiority of our CTCNet on downstream tasks such as face matching. The high-definition frontal face images of the test candidates are selected as the source samples, while the corresponding LR face images captured by the surveillance camera are treated as the target samples. To make the experiments more convincing, we conducted 10 cases. In each case, we randomly select five pairs of candidate samples and calculate the average similarity. The quantitative results are given in Table~\ref{Average_Similarity}. Our method achieves higher similarity in each case, which further indicates that CTCNet can also produce more faithful HR faces in real-world surveillance scenarios, making it highly practical and applicable. \section{Conclusions} In this work, we proposed a novel CNN-Transformer Cooperation Network (CTCNet) for face super-resolution. CTCNet uses a multi-scale connected encoder-decoder architecture as the backbone and exhibits extraordinary results.
Specifically, we designed an efficient Local-Global Feature Cooperation Module (LGCM), which consists of a Facial Structure Attention Unit (FSAU) and a Transformer block, to simultaneously focus on local facial details and global facial structures. Meanwhile, to further improve the restoration results, we presented a Multi-scale Feature Fusion Unit (MFFU) to adaptively and elaborately fuse the features from different scales and depths. Extensive experiments on both simulated and real-world datasets have demonstrated the superiority of our CTCNet over other competitive methods in terms of quantitative and qualitative comparisons. Furthermore, its reconstructed images show excellent results in downstream tasks such as face matching, which fully demonstrates its practicality and applicability. \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section*{} \vspace{-1cm} \footnotetext{\textit{$^{a}$~Centro de Física Teórica e Computacional, Faculdade de Ciências, Universidade de Lisboa, P-1749-016 Lisboa, Portugal; E-mail: [email protected]}} \footnotetext{\textit{$^{b}$~Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, P-1749-016 Lisboa, Portugal. }} \section{Introduction} Systems formed by units or particles capable of transforming the energy of the environment into directed motion are known as active matter. These intrinsically non-equilibrium systems are characterized by complex dynamical behavior. Examples that occur naturally are dense suspensions of bacteria, microtubule-kinesin mixtures, and shoals of fish (see \cite{Ramaswamy_2017,BechingerRMP2016} and references therein). While a detailed description of a particular system is daunting, particle-based simulations of simple models have exploded in the last decade and reported a range of new phenomena. Motility-induced phase separation (MIPS) is one of the most striking~\cite{CatesAnnRev2015}. A different approach focused on the phenomenology observed in dense systems, which results from the interplay of self-propulsion and alignment interactions due to the shape of the particles, and their hydrodynamic coupling to the suspending medium~\cite{Doostmohammadi2018,Ramaswamy_2017,BechingerRMP2016,MarchettiRMP2013}. Here, we follow this route and consider momentum-conserving (or wet~\cite{Doostmohammadi2016}) active systems with nematic order, which may be described in the hydrodynamic limit by the continuum equations of liquid crystals (LCs) with an extra stress term that accounts for the activity. While passive LCs~\cite{p1995physics, beris1994thermodynamics} have been studied for decades, active nematics are currently a hot topic of fundamental research. Indeed, only recently a second active stress term was proposed, which may have a profound effect on the behaviour and stability of active nematics~\cite{MaitraPNAS2018}. One question that has been investigated concerns the conditions to observe directed coherent flow, which is key to applications in micromotors and micromachines~\cite{Wueaal1979, Opathalage4788}. While in the bulk active turbulence may be unavoidable for any degree of activity~\cite{Doostmohammadi2018,Ramaswamy_2017,BechingerRMP2016,MarchettiRMP2013}, confinement in narrow channels results in a spontaneous symmetry breaking and opens a window to observe directed flow at low but non-zero activities~\cite{VoituriezEPL2005, marenduzzo07steady, EdwardsEPL2009}. As the activity increases, the directed flow changes to oscillatory flow, dancing disclinations, and ultimately turbulent regimes, both in the deep nematic phase~\cite{ShendrukSM2017, Giomi_2012, Wensink14308} and at temperatures above the passive nematic-isotropic transition~\cite{ChandragiriSoftMatter2019}. The fact that the activity, by itself, induces local nematic ordering and even active turbulence raises the question of the coexistence between active nematic and isotropic phases and of the behaviour of the interfaces between them. Although this is a topic of intense research for MIPS in active systems with scalar order parameters~\cite{CatesAnnRev2015, SolonPRE2018, SolonNJP2018, TjhungPRX2018}, it has not been addressed for active nematics, despite the fact that an intrinsic free energy has been proposed for these systems~\cite{ThampiEPL2015}.
In addition to fundamental issues related to the existence and characterization of phase coexistence in active nematics, interfaces of LCs have played a prominent role in applications. For example, they can be easily controlled in order to modify their optical properties~\cite{Blow_2013} and provide an efficient means to transport colloids~\cite{refId0}. Interfaces of active LCs are expected to play similar roles. Previous studies of active and passive LCs have focused on the dynamical behaviour of droplets of one in the other, with the observation that passive droplets may be driven through the active fluid while active droplets may rotate in passive nematics~\cite{PhysRevLett.113.248303, C4SM00937A, C7SM00325K, C7SM01019B}. The coexistence of active nematic and isotropic phases and the dynamics of their interfaces have not been investigated. \begin{figure}[t] \center \includegraphics[width=\linewidth]{active-particle.eps} \caption{A and B) Velocity field in a domain of a contractile and an extensile system. C) Directors in a narrow channel, where the strong anchoring at the walls dominates the planar interfacial anchoring. D) Directors in a channel, where the planar interfacial anchoring dominates. E) Direction of the active force, at the centre of the interface, for extensile and contractile systems with planar interfacial anchoring. F) Direction of the active force, at the centre of the interface, for extensile and contractile systems with perpendicular interfacial anchoring. } \label{active-particle-fig} \end{figure} \begin{figure}[b] \center \includegraphics[width=\linewidth]{scheme.eps} \caption{Homeotropic channel setup and initial conditions. The nematic phase, with uniform director perpendicular to the walls, is set in the center of the channel with isotropic fluid elsewhere. The anchoring at the walls is fixed (strong) and perpendicular (homeotropic). Periodic boundary conditions are applied in the $x$ direction and no-slip at the walls. } \label{setup-fig} \end{figure} In this paper, we consider the dynamics of the interface between the nematic and the isotropic phases of an active LC confined in channels with strong homeotropic anchoring at the walls. The system exhibits a nematic ordering transition, driven by the temperature at fixed (low) activity. The velocity of the interface at the passive NI temperature increases from zero (i) linearly with the activity for contractile systems and (ii) quadratically for extensile ones. In the former, the nematic order increases, while it decreases in the latter. This behavior is unexpected, as extensile systems favor nematic order, but it is readily understood as a result of the active forces at the interface. The effect is reversed in planar channels, as the active forces at the interface are also reversed, as we show here. At fixed activity, the velocity of the interface depends linearly on the temperature. In the stable regime, the temperature can be tuned to observe a static interface in the channel, providing an operational definition for the coexistence of confined active nematic and isotropic phases. Beyond the stable regime, extensile nematics exhibit an interfacial instability in homeotropic and planar channels. We performed computer simulations of the hydrodynamic equations of active liquid crystals using a hybrid method combining lattice Boltzmann~\cite{succi2018lattice, kruger2016lattice} (LB) and a finite-difference (FD) predictor-corrector scheme~\cite{vesely2001computational}.
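As a concrete illustration of the FD half of this hybrid scheme, the NumPy sketch below integrates the relaxational limit of the order-parameter dynamics, $\partial_t Q_{\alpha\beta} = \Gamma H_{\alpha\beta}$ (no flow), with the free energy defined in Sec.~\ref{methods-sec}, using a Heun predictor-corrector step on a periodic grid. All parameter values are illustrative, and the coupling to the LB flow solver is omitted; this is a minimal sketch, not our production code.
\begin{verbatim}
import numpy as np

# Illustrative parameters in lattice units (not the values used here).
A0, gamma, K, Gamma, dt, N = 0.1, 2.7, 0.04, 0.34, 0.5, 64

def molecular_field(Q):
    """H = -dF/dQ + (delta/3) Tr(dF/dQ) for the Landau-de Gennes energy."""
    trQ2 = np.einsum('ab...,ab...->...', Q, Q)
    Q2 = np.einsum('ag...,gb...->ab...', Q, Q)
    dfdQ = A0 * (1 - gamma / 3) * Q - A0 * gamma * Q2 + A0 * gamma * trQ2 * Q
    lap = sum(np.roll(Q, s, axis=ax) for ax in (2, 3) for s in (1, -1)) - 4 * Q
    dfdQ -= K * lap                      # one-constant elastic contribution
    H = -dfdQ
    H -= np.trace(H) / 3 * np.eye(3)[:, :, None, None]   # remove the trace
    return H

def heun_step(Q):
    """Predictor-corrector (Heun) step for dQ/dt = Gamma * H."""
    H1 = molecular_field(Q)
    Qp = Q + dt * Gamma * H1                             # predictor
    return Q + 0.5 * dt * Gamma * (H1 + molecular_field(Qp))

# Uniaxial nematic slab (director along x, S = 1/3) in an isotropic bath.
n = np.array([1.0, 0.0, 0.0])
Qn = (1 / 3) * (np.outer(n, n) - np.eye(3) / 3)
Q = np.zeros((3, 3, N, N))
Q[:, :, N // 4: 3 * N // 4, :] = Qn[:, :, None, None]
for step in range(1000):
    Q = heun_step(Q)
\end{verbatim}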
There are challenges associated with the simulation of interfaces in multiphase LB models, such as the control of spurious velocities and the difficulty of setting the physical values of both the viscosity and the surface tension~\cite{0953-8984-25-24-245103}. To overcome these problems, we introduced various improvements, as discussed in the following. Our paper is organized as follows. In Sec.~\ref{methods-sec}, we describe the equations of motion and the numerical method. In Sec.~\ref{stable-int-ec}, we describe the results of numerical simulations of passive and active nematic interfaces in channels and show how the active interfaces may become static through a shift in the temperature. In Sec.~\ref{flow-states-sec}, we discuss the interfacial instability observed in extensile nematics and briefly mention the flow states that occur at higher activities. In Sec.~\ref{other-setups-sec}, we investigate the effects of the anchoring at the channel walls and check the effect of the interaction between interfaces. In Sec.~\ref{conclusions} we summarize and conclude$^\dag$\footnotetext{\dag~Electronic Supplementary Information (ESI) available: details of the MRT model and videos of the interfacial dancing state and interfacial instability are given in the ESI. See DOI: 00.0000/00000000.}. \begin{figure}[t] \center \includegraphics[width=\linewidth]{passive.eps} \caption{Velocity of the (right) passive interface, $v_i$, at temperatures $\gamma$ close to $\gamma_{NI}=2.7$. The squares are the measured velocities and the solid line is a linear fit, which yields the coexistence temperature $\gamma_{NI}^{ch}$ where the passive interface in the channel is static. The inset shows the (right) interface position as a function of time at different values of $\gamma$ (from top to bottom): 2.68968, 2.68966, 2.68964, 2.68962, 2.68960.} \label{passive-fig} \end{figure} \begin{figure*}[htb] \center \includegraphics[width=\linewidth]{zeta-normal.eps} \caption{Representative flow states in a channel with homeotropic anchoring at the walls at different activities. The system is at the coexistence temperature of the passive system in the channel ($\gamma_{NI}^{ch}$). The active nematic is flow aligning ($\xi=0.7$) and the screenshots were taken after $t=2\times 10^{6}$ iterations. A) For a contractile system with $\zeta=-0.002$, the nematic phase expands. B) For an extensile system with $\zeta = 0.0005$, the nematic phase contracts. C) At $\zeta=0.002$, the interfacial dancing state with one pair of defects is observed. D) At $\zeta=0.0075$, the system is in the dancing flow state (no interface). E) At $\zeta=0.05$, active turbulence is observed. Left: the lines represent the director field and the colors represent the scalar order parameter normalized to the minimum and maximum values, which are, from A to E, ($0$, $0.33$), ($0$, $0.33$), ($-0.03$, $0.34$), ($-0.10$, $0.45$) and ($-0.23$, $0.71$). Right: the lines indicate the direction of the velocity field and the norm of the vorticity is color coded, where the minimum and maximum values are, from A to E, ($0$, $1.86\times10^{-4}$), ($0$, $5.48\times 10^{-5}$), ($0$, $1.98\times 10^{-3}$), ($4.68\times 10^{-6}$, $0.0118$), ($1.53\times 10^{-6}$, $0.159$).} \label{zeta-normal} \end{figure*} \section{Methods} \label{methods-sec} \subsection{Equations of motion} We employ the Landau-de Gennes free energy, $\mathcal{F} = \int_V f\,d^3 r $, to describe the passive liquid crystal at equilibrium.
The free energy density, $f$, is the sum of two terms: $f = f_{bulk} + f_{el}$, where $f_{bulk}$ is the bulk free energy and $f_{el}$ is the elastic energy, i.e., the energy cost associated with distortions of the uniform alignment of the director field. These energy densities are given by: \begin{align} f_{bulk} =& \frac{A_0}{2}\left( 1- \frac{\gamma}{3} \right) Q_{\alpha \beta} Q_{\alpha \beta} - \frac{A_0\gamma}{3} Q_{\alpha \beta} Q_{\beta \gamma} Q_{\gamma \alpha} \nonumber \\ &+ \frac{A_0\gamma}{4} (Q_{\alpha \beta} Q_{\alpha \beta} )^2, \nonumber \\ f_{el} =& \frac{K}{2} (\partial _\gamma Q_{\alpha \beta}) (\partial _\gamma Q_{\alpha \beta}). \end{align} The tensor order parameter is assumed to be uniaxial, both in the initial conditions and in the interpretation of the results: $Q_{\alpha \beta} = S(n_\alpha n_\beta - \delta_{\alpha \beta}/3)$, where $S$ is the scalar order parameter and $n_\alpha$ is the $\alpha$-component of the director. The parameter $\gamma$ controls the magnitude of the order and depends on the temperature for thermotropic LCs or on the density for lyotropic ones, $K$ is the elastic constant and $A_0$ is a constant. In this one-constant approximation the anchoring at the NI interface is planar~\cite{p1995physics}. Here the Greek indices denote Cartesian coordinates and summation over repeated indices is implied. The coexistence between the bulk nematic and isotropic phases occurs at $\gamma_{NI}=2.7$ and $S=1/3$, as obtained by minimizing $f_{bulk}$. For $\gamma>\gamma_{NI}$ the bulk equilibrium phase is nematic, while for $\gamma<\gamma_{NI}$ it is isotropic. The tensor order parameter evolves according to the Beris-Edwards equation: \begin{align} \partial _t Q_{\alpha \beta} + u _\gamma \partial _\gamma Q_{\alpha \beta} - S_{\alpha \beta}(\mathbf{W}, \mathbf{Q}) = \Gamma H_{\alpha\beta} , \label{beris-edwards} \end{align} where $\Gamma$ is the collective rotational diffusion constant. The co-rotational term, $S_{\alpha \beta}$, describes the response of the LC to gradients in the velocity field $\mathbf{u}$ and is given by: \begin{align} S_{\alpha \beta} =& ( \xi D_{\alpha \gamma} + W_{\alpha \gamma})\left(Q_{\beta\gamma} + \frac{\delta_{\beta\gamma}}{3} \right) + \left( Q_{\alpha\gamma}+\frac{\delta_{\alpha\gamma}}{3} \right)(\xi D_{\gamma\beta}-W_{\gamma\beta}) \nonumber \\& - 2\xi\left( Q_{\alpha\beta}+\frac{\delta_{\alpha\beta}}{3} \right)(Q_{\gamma\epsilon} \partial _\gamma u_\epsilon), \end{align} where $W_{\alpha\beta} = (\partial _\beta u_\alpha - \partial _\alpha u_\beta )/2$ and $D_{\alpha\beta} = (\partial _\beta u_\alpha + \partial _\alpha u_\beta )/2$. The aligning parameter $\xi$ depends on the particle shape: it is positive for rod-like particles and negative for disk-like ones. The molecular field is given by: \begin{align} H_{\alpha\beta} = -\frac{\delta \mathcal{F}}{\delta Q_{\alpha\beta}} + \frac{\delta_{\alpha\beta}}{3} \Tr \left( \frac{\delta \mathcal{F}}{\delta Q_{\gamma \epsilon}} \right). \end{align} The Navier-Stokes and continuity equations give the time evolution of the velocity field in terms of the stress tensor $\Pi_{\alpha\beta}$, which depends on the tensor order parameter: \begin{align} \partial _t \rho + \partial_\alpha (\rho u_\alpha) =0, \end{align} \begin{align} \rho \partial _t u_\alpha + \rho u_\beta \partial_\beta u_{\alpha} = \partial _\beta \Pi_{\alpha\beta} + \eta \partial_\beta\left( \partial_\alpha u_\beta + \partial_\beta u_\alpha \right).
\label{navier-stokes} \end{align} The stress tensor is the sum of active and passive terms: $\Pi_{\alpha\beta} = \Pi^{\text{passive}}_{\alpha\beta} + \Pi^{\text{active}}_{\alpha\beta}$. In the simplest description of the hydrodynamics of active LCs, which accounts for the lowest order contribution with the appropriate symmetry, the active term is given by~\cite{PhysRevLett.92.118101}: \begin{align} \Pi^{\text{active}}_{\alpha\beta} = -\zeta Q_{\alpha\beta}, \label{active-pressure-eq} \end{align} while the passive one is: \begin{align} \Pi^{\text{passive}}_{\alpha\beta} =& -P_0 \delta_{\alpha\beta} + 2\xi \left( Q_{\alpha\beta} +\frac{\delta_{\alpha\beta}}{3} \right)Q_{\gamma\epsilon}H_{\gamma\epsilon} \nonumber \\ &- \xi H_{\alpha\gamma} \left( Q_{\gamma\beta}+\frac{\delta_{\gamma\beta}}{3} \right) - \xi \left( Q_{\alpha\gamma} +\frac{\delta_{\alpha\gamma}}{3} \right) H_{\gamma \beta} \nonumber \\ &- \partial _\alpha Q_{\gamma\nu} \,\frac{\delta \mathcal{F}}{\delta (\partial_\beta Q_{\gamma\nu})} + Q_{\alpha\gamma}H_{\gamma\beta} - H_{\alpha\gamma}Q_{\gamma\beta} . \label{passive-pressure-eq} \end{align} In these equations $\eta$ is the shear viscosity, $P_0$ is the hydrostatic pressure, and $\zeta$ is the activity parameter. Figs.~\ref{active-particle-fig}A and B show schematic representations of the velocity field for positive and negative $\zeta$. For $\zeta>0$, the velocity field resembles that of pushers and the system is extensile, while for $\zeta<0$, it resembles the flow of pullers and the system is contractile. \begin{figure}[t] \center \includegraphics[width=\linewidth]{stable.eps} \caption{Position of the interfaces of extensile nematics with different activities, at $\gamma_{NI}^{ch}$. The interfaces become static due to their mutual interaction, through the velocity field, when they are sufficiently close. The inset shows the scalar order parameter and the directors for the active nematic with $\zeta=0.001$ at $t=10^{7}$. The nematic is flow aligning with $\xi=0.7$. } \label{stable-fig} \end{figure} \begin{figure}[t] \center \includegraphics[width=\linewidth]{ux-zeta.eps} \caption{Velocity of stable interfaces for flow aligning nematics, $\xi=0.7$, with different activities. The velocities are calculated while the interfaces are far apart, from $t=50000$ to $t=150000$. The blue line is a linear fit that illustrates the linear regime of the interfacial velocity for contractile nematics. The dashed line is a quadratic fit showing that the linear regime for extensile nematics is greatly reduced.} \label{ux-zeta-fig} \end{figure} \begin{figure*}[htb] \center \includegraphics[width=\linewidth]{zeta-dg-neg.eps} \caption{Phase diagram of active nematics in a channel. A) Time evolution of the interface position at $y=L_Y/2$ at different temperatures for a contractile nematic with $\zeta=-0.002$. The solid lines are linear fits from which the velocity is obtained. B) Phase diagram. $\Delta \gamma = \gamma - \gamma_{NI}^{ch}$ is the shift from the passive nematic-isotropic temperature in the channel required to observe a static active interface for active nematics. The solid line is a fit illustrating a linear phase diagram for contractile nematics while the dashed line is a quadratic fit illustrating a more complex behaviour for extensile nematics. Above the coexistence line the system is nematic and below it is isotropic.
The inset shows the interfacial velocity as a function of $\Delta \gamma$ at four activities with the respective linear fit shown in the same color (from top to bottom): -0.0025, -0.0010, 0.0003, 0.0009. The nematics are flow aligning with $\xi=0.7$.} \label{zeta-dg-neg} \end{figure*} \subsection{Hybrid method} \label{hybrid-method-sec} To perform the numerical simulations, we solved the hydrodynamic equations of liquid crystals using a hybrid method of LB and FD. This approach is common in the literature and has successfully reproduced analytical and experimental results~\cite{Doostmohammadi2018,PhysRevLett.110.048303, marenduzzo07steady}. The LB method has many advantages compared to other computational fluid dynamics methods, namely the ease of treating complex boundary conditions, its performance on parallel architectures and the possibility of including many physical models~\cite{kruger2016lattice, succi2018lattice, PhysRevB.96.184307}, which justifies its use to simulate liquid crystals. The first approaches were fully LB, meaning that both the Navier-Stokes and Beris-Edwards equations were solved using LB~\cite{Denniston2002}\footnote{In fact, the LB solves the Boltzmann equation and recovers the macroscopic equations of motion, e.g. the Navier-Stokes equation, in the macroscopic limit, as can be shown through the Chapman-Enskog expansion~\cite{kruger2016lattice}.}. Later, the Beris-Edwards equation was solved with FD instead, since this is more efficient in memory usage and, furthermore, eliminates spurious terms arising from the coupling between these two equations of motion in the Chapman-Enskog expansion~\cite{marenduzzo07steady}. In the LB part (which solves the Navier-Stokes equation) of the models used in the literature, the stress tensor was implemented in the equilibrium distribution and the D3Q15 lattice was used to discretize the velocity space~\cite{Denniston2002, denniston2004lattice, marenduzzo07steady}. There are additional challenges associated with the simulation of interfaces in multiphase LB models, such as the control of spurious velocities and the difficulty of setting the physical values of both the viscosity and the surface tension~\cite{0953-8984-25-24-245103}. To overcome these problems, we introduced three main improvements. First, we adopted the D3Q19 lattice, since it is more isotropic than the D3Q15 and resolves the angular distribution of the velocity field with finer resolution. Second, we use the multi-relaxation-time (MRT) collision operator, which has superior stability and accuracy and allows us to independently choose the relaxation rates of the hydrodynamic moments~\cite{doi:10.1098/rsta.2001.0955}. The MRT is known to reduce, by orders of magnitude, the spurious velocities in the pseudopotential LB models, which are a class of multiphase and multicomponent models that suffer badly from this problem~\cite{PhysRevE.82.046708, AMMAR201773, PhysRevE.73.047701}. Third, in our model, the stress tensor is implemented in the force term, which reduces the spurious velocities in the free energy models~\cite{10.1142/S0217979203017448}. The discrete form of the Boltzmann equation with the MRT collision operator reads as follows: \begin{align} &f_i(\mathbf{x}+\mathbf{c}_i \Delta t, t+\Delta t) - f_i(\mathbf{x}, t) \nonumber \\ &= -\mathbf{M}^{-1}\mathbf{R}\mathbf{M} [ f_i(\mathbf{x}, t) - f_i^{eq}(\mathbf{x}, t) ]\Delta t + \mathcal{S}_i, \end{align} where the index $i=\{0, \ldots, 18\}$ runs over the velocity vectors.
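To make the collision step concrete, the following minimal Python sketch applies the MRT relaxation in moment space. The transformation matrix and relaxation rates here are placeholders (with $\mathbf{M}$ the identity, MRT reduces to BGK); the actual D3Q19 matrices used in this work are given in the ESI.
\begin{verbatim}
import numpy as np

Q = 19                              # D3Q19 velocity set
tau = 1.5                           # relaxation time used in this work
M = np.eye(Q)                       # placeholder: the real transform
                                    # matrix is given in the ESI
R = np.diag(np.full(Q, 1.0 / tau))  # placeholder relaxation rates
C = np.linalg.inv(M) @ R @ M        # collision matrix M^{-1} R M

def mrt_collide(f, feq, S):
    # f, feq, S: arrays of shape (Q, n_sites); S is Guo's source term
    return f - C @ (f - feq) + S
\end{verbatim}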
The transformation matrix $\mathbf{M}$ and the relaxation matrix $\mathbf{R}$ are given in the Electronic Supplementary Information. The source term, $\mathcal{S}_i$, is given by Guo's forcing scheme, the equilibrium moments of which are also given in the Electronic Supplementary Information (see also Ref.~\cite{kruger2016lattice} for a detailed description of the method). The equilibrium distribution is the Maxwell-Boltzmann distribution expanded up to second order in Hermite polynomials~\cite{COELHO2018144}: \begin{align} f^{eq}_i = \rho w_i \left[ 1+ \frac{\mathbf{c}_i\cdot\mathbf{u}}{c_s^2} + \frac{(\mathbf{c}_i\cdot\mathbf{u})^2}{2c_s^4} - \frac{\mathbf{u}^2}{2c_s^2} \right], \end{align} where $\mathbf{c}_i$ and $w_i$ are the velocity vectors and the discrete weights of the lattice, and $c_s=1/\sqrt{3}$ is the speed of sound in the D3Q19 lattice. The density $\rho$ and the macroscopic velocity $\mathbf{u}$ are calculated from the distribution functions: \begin{align} \rho = \sum _i f_i, \quad \rho \mathbf{u} = \sum _i \mathbf{c}_i f_i + \frac{\mathbf{F} \Delta t}{2} . \label{density-vel-eq} \end{align} Notice that the velocity passed to the FD method is the one calculated with Eq.~\eqref{density-vel-eq}, which is corrected by the term $\mathbf{F} \Delta t/2$ to ensure second-order accuracy. The force is calculated as $F_\alpha = \partial _\beta (\Pi_{\alpha\beta} + \rho {c_s}^2 \delta_{\alpha\beta})$, where $\Pi_{\alpha\beta}$ is given by Eqs.~\eqref{active-pressure-eq} and \eqref{passive-pressure-eq}. In the FD part, Eq.~\eqref{beris-edwards} is discretized on the same grid as the LB and solved through the predictor-corrector algorithm using second-order differences. Therefore, both methods (FD and LB) are second-order accurate and should provide consistent solutions of the equations of motion. \section{Stable interface} \label{stable-int-ec} In this section, we simulate stable active interfaces (low activities) in a channel and compare the results with those of passive interfaces. We show that the active interfaces move with a velocity that depends on the activity and that they may be stabilized by a shift in the temperature. \subsection{Channel setup and initial conditions} \label{channel-setup-sec} We start by simulating an open channel, with the nematic phase between $x_1=3L_X/8$ and $x_2=5L_X/8$ and the isotropic fluid elsewhere. The fluid is confined between two flat walls, at a fixed distance $L_Y$, with infinitely strong homeotropic anchoring (see Fig.~\ref{setup-fig}) and periodic boundary conditions in the $x$ direction. The initial velocity is set to zero and the density to $\rho=1$ everywhere. The following parameters are used: $\tau=1.5$, $L_X\times L_Y\times L_Z= 270\times 45 \times 1$, $K=0.04$, $A_0=0.1$ and $\Gamma=0.34$. The other parameters change for each simulation and are given in the captions of the corresponding figures. Our results are given in lattice units: the distance between nodes is $\Delta x = 1$ and the time step is $\Delta t = 1$ (see Refs.~\cite{Thampi_2015, Doostmohammadi2018} to transform these to physical units).
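A minimal sketch of these LB building blocks, with the velocity vectors $\mathbf{c}_i$ and weights $w_i$ assumed to be supplied as arrays (the array layout and function names are our choices for illustration):
\begin{verbatim}
import numpy as np

cs2 = 1.0 / 3.0   # squared speed of sound of the D3Q19 lattice

def f_equilibrium(rho, u, c, w):
    # rho: (n,), u: (n, 3), c: (Q, 3) velocity vectors, w: (Q,) weights
    cu = u @ c.T                               # c_i . u, shape (n, Q)
    u2 = np.sum(u * u, axis=1, keepdims=True)  # u^2, shape (n, 1)
    return rho[:, None] * w * (1.0 + cu / cs2
                               + cu**2 / (2.0 * cs2**2)
                               - u2 / (2.0 * cs2))

def moments(f, c, F, dt=1.0):
    # density and velocity with the half-force correction of
    # Eq. (density-vel-eq), which ensures second-order accuracy
    rho = f.sum(axis=1)                        # f: (n, Q)
    u = (f @ c + 0.5 * dt * F) / rho[:, None]  # F: (n, 3) body force
    return rho, u
\end{verbatim}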
Since the interfacial profile approximately follows a hyperbolic tangent, we initialize the scalar order parameter as: \begin{align} S = \frac{S_n}{2}\left[\tanh\left( \frac{x-x_1}{2 \lambda} \right) - \tanh\left(\frac{x-x_2}{2 \lambda}\right) \right], \label{S-initial-eq} \end{align} where $\lambda=\sqrt{27 K/(A_0 \gamma)}$ is the correlation length (equal to 2 at the bulk nematic-isotropic coexistence $\gamma=2.7$) and the scalar order parameter of the nematic phase, $S_n$, is computed through the minimization of the free energy: \begin{align} S_n = \frac{\gamma + \sqrt{3(3\gamma^2-8\gamma )}}{4 \gamma}. \label{Sn-eq} \end{align} At bulk coexistence, $\gamma=2.7$, this gives $S_n =1/3$. This expression for $S_n$ is used in the initial conditions and its value is fixed at the walls. We find the position of the interface, $x_{i}$, by fitting \begin{align} S = \frac{S_n}{2}\left[ 1-\tanh\left( \frac{x-x_{i}}{2 \lambda} \right ) \right] \label{x-interface-eq} \end{align} to the interface on the right (the interface on the left is a mirror image in the stable regime). Other fields proportional to $S$, such as $Q_{yy}$, can also be used to obtain the position of the interface. In the passive case, the interface relaxes and becomes concave~\cite{refId0}, indicating that the nematic wets the channel walls. Indeed, a thin film of nematic is observed between the channel walls and the isotropic phase for all the parameters considered here. In the simulations of active nematics, we turn on the activity after 20000 iterations, allowing for the relaxation of the passive interface. \subsection{Passive interface in a channel} \label{passive-sec} In a liquid confined between two walls, the liquid-vapour coexistence temperature changes due to the interaction of the fluid with the walls, an effect known as capillary condensation~\cite{doi2013soft}. This also happens in nematics and has been called capillary nematisation. At the bulk NI transition temperature ($\gamma_{NI}$) the interface between the confined nematic and isotropic phases will move, leading to the expansion of the nematic if the walls favor the ordered phase~\cite{croxton1986fluid}. We calculate numerically the NI coexistence temperature under confinement, $\gamma^{ch}_{NI}$, as the temperature at which the passive interface is static. We start by evaluating the position of the interface at different times through a fit of $Q_{yy}$ using Eq.~\eqref{x-interface-eq}. Fig.~\ref{passive-fig} shows the interfacial velocity at five different temperatures. Since the dependence is linear, it is straightforward to calculate the NI coexistence temperature in the channel, $\gamma^{ch}_{NI}\approx 2.68964$, as the temperature at which the interfacial velocity vanishes. At temperatures $\gamma > \gamma^{ch}_{NI}$ the velocity is positive for the interface on the right (nematic expansion) and for $\gamma < \gamma^{ch}_{NI}$ the velocity is negative (nematic contraction). The interface is stable for a range of temperatures $\vert \Delta \gamma \vert < 0.02$ around $\gamma^{ch}_{NI}$. At temperatures higher or lower than these, one of the two phases becomes mechanically unstable, the interface disappears and the system becomes nematic ($\gamma > \gamma^{ch}_{NI}$) or isotropic ($\gamma < \gamma^{ch}_{NI}$). For the narrow channels considered here, the strong anchoring at the walls dominates the interfacial anchoring and the anchoring at the interface is dictated by the wall anchoring (see Fig.~\ref{active-particle-fig}C).
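The measurement just described reduces to two fits. A possible implementation reads as follows (the function names and the use of SciPy are our choices, not part of the simulation code):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def tanh_profile(x, x_i, S_n, lam):
    # Eq. (x-interface-eq): profile of the right interface
    return 0.5 * S_n * (1.0 - np.tanh((x - x_i) / (2.0 * lam)))

def interface_position(x, S):
    # extract x_i from an order-parameter profile S(x)
    p0 = (x[len(x) // 2], 1.0 / 3.0, 2.0)   # initial guess
    popt, _ = curve_fit(tanh_profile, x, S, p0=p0)
    return popt[0]

def coexistence_temperature(gammas, velocities):
    # linear fit v(gamma) = a*gamma + b; static interface at v = 0
    a, b = np.polyfit(gammas, velocities, 1)
    return -b / a
\end{verbatim}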
\begin{figure}[t] \center \includegraphics[width=\linewidth]{curvature-zeta.eps} \caption{Curvature at the center of static active interfaces stabilized by a temperature shift as a function of the activity, for flow aligning nematics with $\xi=0.7$. The inset shows the shape of the interface at three different activities. The curvature was calculated at $t=10^{7}$.} \label{curvature-fig} \end{figure} \subsection{Active interface and stabilization by temperature} \label{stabilization-temp-sec} In order to proceed, we consider the interface of an active nematic at the coexistence temperature of the passive system, $\gamma = \gamma^{ch}_{NI}$. \begin{figure*}[t] \center \includegraphics[width=\linewidth]{instability.eps} \caption{Interfacial instabilities in extensile nematics with $\xi=0.7$. Screenshots were taken at $t=10^{7}$ for systems with different activities at $\gamma = \gamma^{ch}_{NI}$. A) At $\zeta=0.0011$ the interface is stable and static due to the interaction between the two interfaces. B) At $\zeta=0.0012$, the interface breaks the symmetry with respect to the centre of the channel but remains static. C) At $\zeta=0.0013$, an interfacial instability is formed and the system becomes disordered (the interface disappears). D) At $\zeta=0.0014$ the interface is highly asymmetric but still static. Finally, at $\zeta=0.0015$ an interfacial dancing state is observed. Left: the lines represent the director field and the colors represent the scalar order parameter normalized to the minimum and maximum values, which are, from A to D, ($-2.73\times 10^{-3}$, $0.33$), ($-4.36\times 10^{-3}$, $0.33$), ($8.45 \times 10^{-5}$, $0.33$), ($-0.0291$, $0.33$). Right: the lines indicate the direction of the velocity field and the norm of the vorticity is color coded, where the minimum and maximum values are, from A to D, ($0$, $1.35\times 10^{-4}$), ($0$, $2.76\times 10^{-4}$), ($0$,$0$), ($0$, $1.11\times 10^{-3}$).} \label{instability-fig} \end{figure*} \begin{figure}[t] \center \includegraphics[width=\linewidth]{size.eps} \caption{Threshold activity $\zeta_{inst}$ of the interfacial instability as a function of the channel height for flow aligning nematics with $\xi=0.7$. The slope of the linear fit is -1.8.} \label{size-fig} \end{figure} \begin{figure*}[t] \center \includegraphics[width=\linewidth]{vort.eps} \caption{Vorticity of systems with different activities at $t=2\times10^{6}$. A) Extensile nematics with $\xi =0.7$, at higher resolution (more points) than the color map; B) Color map for extensile nematics with different $\xi$'s. The blue region corresponds to the stable interface moving with constant velocity; the green region corresponds to the interfacial dancing state; the red region corresponds to dancing and turbulent states (no interface); the black region corresponds to systems with zero vorticity (isotropic system and no interface). } \label{xi-zeta-fig} \end{figure*} The orientation of the directors close to the interface is determined by a competition between the anchoring at the walls~\cite{batista2014effect} and the interfacial passive and active anchorings~\cite{C7SM00325K}. The active anchoring tends to align the directors parallel to the interface in extensile systems and normal to it in contractile ones. In narrow channels with strong anchoring, such as the ones considered here, the effect of the wall anchoring is dominant and the director field is uniform (Fig.~\ref{active-particle-fig}C).
In wider channels and/or for weak anchoring at the walls, the directors are expected to align parallel to the interface, both in passive LCs with a single elastic constant $K$ and in extensile nematics with the same elasticity (Fig.~\ref{active-particle-fig}D). At low activities a stable interface is formed (Fig.~\ref{zeta-normal}A and B). The interface moves with constant velocity and constant shape, while the bulk remains at rest. The domain of the nematic phase expands for contractile systems ($\zeta<0$) and contracts for extensile ones ($\zeta>0$). This behavior is counterintuitive, since bulk extensile systems favor nematic ordering while contractile ones disfavor it. To understand this result, we calculate the active forces at the interface. Projecting the active force $\mathbf{F}^{\text{active}} = -\zeta \nabla \cdot \mathbf{Q}$ onto the outward normal to the NI interface, $\mathbf{m}$, we have: \begin{align} \mathbf{F}^{\text{active}}_{\bot} &= -\zeta \Big[ (\mathbf{n}\cdot\mathbf{m}) (\mathbf{n}\cdot \boldsymbol{\nabla} S + S \, \boldsymbol{\nabla} \cdot \mathbf{n} ) \nonumber \\ & + S \,\mathbf{m}\cdot (\mathbf{n}\cdot \boldsymbol{\nabla})\mathbf{n} - \frac{1}{3} \textbf{m}\cdot \boldsymbol{\nabla} S \Big] \textbf{m}. \label{active-force-complete} \end{align} An estimate of the force is easily obtained by assuming that the interface is circular, with radius $R$, and that the director is parallel to it, $\mathbf{n}\cdot\mathbf{m}=0$ (Fig.~\ref{active-particle-fig}E), which is a reasonable approximation at the center of the channel. The normal force is then given by the sum of two terms: \begin{align} \mathbf{F}^{\text{active}}_{\bot} = -\zeta \left( \frac{\vert\boldsymbol{\nabla}S \vert}{3} + \frac{S}{R} \right) \mathbf{m}. \label{active-force} \end{align} The first of these terms is the active force due to the gradient of the nematic order, while the second is due to the interfacial curvature. For extensile nematics, the normal active force acts inwards, while for contractile ones it acts outwards, revealing the mechanism behind the contraction and expansion of the nematic phase observed in the simulations. In addition, the force is linear in $\zeta$ and increases as $R$ decreases. We note that the second term of the active force, which increases the curvature of a curved interface with planar anchoring, acts in a similar way to the active forces that enhance bend fluctuations in extensile nematics and are ultimately responsible for the bend instability of unconfined extensile nematics. By contrast, the first term of the force, proportional to $\vert\boldsymbol{\nabla}S \vert$, is absent deep in the nematic, where the scalar order parameter is nearly uniform. At the interface, however, this term becomes increasingly important as the curvature decreases, and it is the only active force at a flat interface. Similar active forces were reported in Ref.~\cite{PhysRevLett.113.248303} for interfaces in active nematic emulsions, characterized by an additional conserved order parameter. One important difference between our results and those of Ref.~\cite{PhysRevLett.113.248303} is that the interfaces considered here are stable due to the confinement in the channel. In what follows we show that the active interfacial motion can be stopped by a shift in temperature, in a way reminiscent of the behaviour of passive nematic-isotropic interfaces near coexistence.
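As a back-of-the-envelope check of Eq.~\eqref{active-force}, the snippet below evaluates the normal force for representative values; we estimate $\vert\boldsymbol{\nabla}S\vert \sim S_n/(4\lambda)$ from the slope of the tanh profile at the interface centre, and the radii used are only indicative:
\begin{verbatim}
# sign and magnitude of the normal active force, Eq. (active-force)
S_n, lam = 1.0 / 3.0, 2.0       # order parameter and correlation length
grad_S = S_n / (4.0 * lam)      # slope of the tanh profile at the centre

def normal_active_force(zeta, R):
    # circular interface of radius R with planar anchoring (n . m = 0)
    return -zeta * (grad_S / 3.0 + S_n / R)

for zeta, R in [(-0.0025, 30.0), (0.0009, 10.0)]:
    # positive (outward) force -> nematic expands; negative -> contracts
    print(zeta, normal_active_force(zeta, R))
\end{verbatim}
With these numbers the force is outward for the contractile case and inward for the extensile one, consistent with the expansion and contraction observed in the simulations.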
We start by noting that, for extensile nematics ($\zeta>0$), the interfaces move towards each other and, when they are sufficiently close, the velocity field of one pushes the other, rendering the interfaces static without the need for a temperature shift (see Fig.~\ref{stable-fig}). This effect does not occur for contractile systems ($\zeta<0$), as the two interfaces move in opposite directions. We are interested in the dynamics of a single interface, far from surfaces or other interfaces. Thus, in Fig.~\ref{ux-zeta-fig} we plot the interfacial velocity measured when the interfaces are far apart and the interaction between them is negligible. For each system (or activity), we changed the temperature around $\gamma^{ch}_{NI}$ and measured the interfacial velocity. We found that the interfacial velocity is linear in $\Delta \gamma$, for both contractile and extensile nematics. It is then straightforward to calculate the shift $\Delta \gamma$ required to observe a static interface for a given activity (Fig.~\ref{zeta-dg-neg}A). In Fig.~\ref{zeta-dg-neg}B we plot the phase diagram for active nematics in a channel, in the form of the temperature shift, $\Delta \gamma$, that renders the interface static. For contractile nematics the coexistence line is approximately linear. For extensile ones, the linear relation is observed only in a narrow range of activities, becoming quadratic afterwards. This quadratic behaviour correlates with the quadratic dependence of the interfacial velocity on the activity observed in extensile nematics (Fig.~\ref{ux-zeta-fig}). In order to check that the combined effects of temperature and activity lead to a static interface, we performed longer simulations (up to $t=10^7$ time steps) at the values of $\Delta\gamma$ given in Fig.~\ref{zeta-dg-neg}B for different activities. We found that the interfacial velocities are indeed very close to zero at these temperatures (which could be refined iteratively). The shape of the static interface changes with the activity and is plotted in the inset of Fig.~\ref{curvature-fig}. Although there is no net flow under these conditions, the vortices near the interface (see Fig.~\ref{zeta-normal}) become stronger as the activity increases. We quantify the dependence of the static interfacial shape on the activity through the curvature at the center of the channel ($y=L_Y/2$). The interfacial curvature is defined as: \begin{align} \kappa = \frac{\vert x_{i}^{\prime\prime}(y) \vert }{\left[ 1+(x_{i}^\prime (y))^2 \right]^{\frac{3}{2}}}, \end{align} where $x_{i}(y)$ is the position of the interface, and is plotted in Fig.~\ref{curvature-fig}. For extensile nematics the curvature increases with the activity. Assuming that the interface is circular, we can estimate its radius and find: $R (\zeta=-0.0025) \approx 33$ and $R (\zeta=0.0009) \approx 12$. We can now understand why the interfacial velocity has a linear regime and a non-linear one (Fig.~\ref{ux-zeta-fig}). When the curvature is small (large $R$), the second term in Eq.~\eqref{active-force} is small and the force is approximately linear in $\zeta$. When the curvature increases (small $R$), the second term in Eq.~\eqref{active-force}, which depends on the activity (Fig.~\ref{curvature-fig}), becomes relevant and the active force becomes non-linear. \section{Flow states and interfacial instability} \label{flow-states-sec} In Fig.~\ref{zeta-normal} we give an overview of the different states observed as the activity increases, from contractile to extensile nematics.
The top panels (A and B) correspond to contractile (A) and extensile (B) nematics, where a stable interface, with fixed shape, was found to move at constant velocity in the channel. These states are observed at low activities. At higher activities, the interface of extensile nematics exhibits an instability. A complex interface with a persistent spatio-temporal structure (panel C) is observed for a range of activities beyond the instability. At even higher activities the interface cannot be sustained, as the isotropic phase in the channel becomes unstable, and complex flow states, dancing (D) and active turbulence (E), are observed in line with earlier reports~\cite{Doostmohammadi2018, 10.1039/C8SM02103A}. Stable interfaces have a shape that is invariant in time and mirror symmetric with respect to the centre of the channel. An interfacial instability occurs when the system breaks one or both of these symmetries (Fig.~\ref{instability-fig}) at a specific value of the activity, $\zeta_{inst}$. For the initial conditions described above, the interfaces move towards each other, interact through the velocity field and become static, as illustrated in Fig.~\ref{stable-fig}. Fig.~\ref{instability-fig}B shows that at $\zeta_{inst} \approx 0.0012$ the interfaces break the mirror symmetry and one of the two vortices close to the interface becomes stronger. At $\zeta = 0.0013$, the system becomes isotropic and the interface disappears. This suggests that the system is multistable in this range of parameters and that the final state may be affected by finite size effects. At $\zeta=0.0014$ (see Fig.~\ref{instability-fig}D) a strongly distorted interface reappears. The instability is driven by the active force that increases with the interfacial curvature, which in turn increases the curvature of interfaces with planar anchoring. This instability is closely related to the bend instability for extensile systems reported previously in unconfined active nematic emulsions~\cite{PhysRevLett.113.248303,C7SM00325K}, with one important difference: the active force that drives the interfacial instability is also responsible for the re-appearance of the nematic phase, and the interfacial state persists at long times. A distinct instability in active nematics, where the fluid starts moving coherently across the channel, was reported deep in the ordered phase, at higher activities that scale with the inverse of the square of the channel height~\cite{VoituriezEPL2005}. This scaling was verified numerically in the non-linear regime in Ref.~\cite{marenduzzo07steady} and is also approximately verified for the interfacial instability. In Fig.~\ref{size-fig} we plot the threshold activity $\zeta_{inst}$ for channels with different heights. For each channel height $L_Y$, we recalculated $\gamma^{ch}_{NI}$, since the passive NI temperature depends on the channel height. We found that the threshold activity decreases approximately as ${L_Y}^{-2}$, in line with the scaling reported previously. This is to be expected, as the active length associated with the active vortices scales as $\sqrt{K/\zeta}$, with the ratio of the elastic constant to the activity, and the activity number, the ratio of the channel height to the active length, is nearly constant, $A=L_Y \sqrt{\zeta/K} \approx 7.8$, at the interfacial instability threshold. At higher activities, in the slab geometry, an interfacial dancing state (Fig.~\ref{zeta-normal}C) is observed.
This state is characterized by two NI interfaces and a pair of $+1/2$ defects moving in complex oscillatory trajectories. The defects detach from one interface, move through the nematic and are absorbed at the other interface. This state resembles the bulk dancing state, confined by the NI interfaces. The interfaces are no longer concave with constant curvature, unlike the stable interfaces discussed in Sec.~\ref{stable-int-ec}, but they are not transient. The interfacial dancing state is stable over a finite range of activities: for a flow aligning nematic with $\xi=0.7$ and a channel with $L_Y=45$, the state is observed for activities in the range $0.0015 < \zeta < 0.0021$. At even higher activities, the interface disappears (the isotropic phase becomes unstable) and the previously reported dancing (Fig.~\ref{zeta-normal}D) and active turbulent states (Fig.~\ref{zeta-normal}E) are observed~\cite{Doostmohammadi2018, 10.1039/C8SM02103A}. We used the spatial average of the vorticity norm in the steady state, calculated at $t=2\times10^{6}$, as an order parameter to characterize the states of extensile nematics: $\langle \vert \boldsymbol{\omega} \vert \rangle$, where $\boldsymbol{\omega} = \nabla \times \mathbf{u}$. In Fig.~\ref{xi-zeta-fig}A, we plot the average vorticity as a function of the activity for nematics with $\xi=0.7$. The results reveal that the average vorticity $\langle \vert \boldsymbol{\omega} \vert \rangle$ increases as the activity increases. More interestingly, the vorticity exhibits jumps between dynamical states. The first jump, at $\zeta\approx 0.0012$, occurs between the stable interface and the interfacial dancing states. As discussed previously, we have observed multi-stability and a sequence of intermediate states (see Fig.~\ref{instability-fig}), including isotropic states with zero vorticity, in the transition region. States with zero vorticity are represented by the black squares in the right panel of the figure, Fig.~\ref{xi-zeta-fig}B. The second jump occurs between the interfacial dancing and the dancing states, where the nematic fills the channel and the interface is no longer stable. In Fig.~\ref{xi-zeta-fig}B we plot the vorticity at $t=2\times 10^{6}$ for active nematics with different alignment parameters and activities. At the bulk NI temperature, the flow is tumbling if the alignment parameter $\xi <0.43$~\cite{marenduzzo07steady}, but the stability of the flow states does not depend strongly on it. However, most of the black squares (isotropic) are in the flow tumbling regime. \section{Other channels: planar anchoring and closed channel} \label{other-setups-sec} \begin{figure}[t] \center \includegraphics[width=\linewidth]{parallel.eps} \caption{Channel with planar anchoring. A) Contraction of the nematic phase for contractile systems with $\zeta=-0.001$ after 250000 iterations (the final state is isotropic); B) Expansion of the nematic phase for extensile systems, $\zeta = 0.001$, after 250000 iterations (the final state is nematic); C) Interfacial dancing state, $\zeta=0.0025$, at $t=5\times10^6$; D) Dancing state, $\zeta=0.003$, at $t=5\times10^6$. Here $\xi=0.5$. The lines represent the director field and the colors represent the scalar order parameter normalized to the minimum and maximum values, which are, from A to D, ($-6.64\times 10^{-5}$, $0.33$), ($-2.17\times 10^{-3}$, $0.33$), ($-0.0289$, $0.34$), ($-0.0637$, $0.37$).} \label{parallel-fig} \end{figure} \begin{figure}[t] \center \includegraphics[width=\linewidth]{closed.eps} \caption{Closed channel with homeotropic anchoring.
Panels A to C illustrate the flow states in a closed channel at $t=5\times10^6$. A) Expansion of the nematic phase for contractile systems, $\zeta=-0.005$; B) Contraction of the nematic phase for extensile systems, with a symmetric but static interface close to the left border, $\zeta = 0.001$; C) Dancing state at $\zeta=0.003$. Panels D to G depict the interfacial instability in the closed channel for an extensile system with $\zeta=0.002$ at: D) $t=15000$, E) $t=30000$, F) $t=40000$, G) $t=5\times 10^6$. Note that the instability in the closed channel occurs by breaking time invariance through the creation of a defect that relaxes the interfacial curvature. The interface breaks the mirror symmetry only when it interacts with the left wall. Here $\xi=0.5$. The lines represent the director field and the colors represent the scalar order parameter normalized to the minimum and maximum values, which are, from A to D (colors for D-G are the same), ($-8.07\times 10^{-4}$, $0.33$), ($-1.96\times 10^{-3}$, $0.33$), ($-0.0719$, $0.38$), ($-0.0564$, $0.34$). } \label{closed-fig} \end{figure} Finally, we consider other channels to investigate the effects of the anchoring at the walls and of the open boundary condition. We start by considering planar channels with strong anchoring and scalar order parameter given by Eq.~\eqref{Sn-eq}. For narrow channels the confined nematic is uniform, as the alignment by the walls dominates the interfacial and the active anchorings. We initialize the director field in this configuration. We consider a slab geometry with the nematic in the center of the channel surrounded by the isotropic phase. All other parameters are as in Sec.~\ref{channel-setup-sec}. Fig.~\ref{parallel-fig} illustrates a sequence of states that resemble those found for homeotropic channels. At low activities, the interface is stable and propagates with a constant, activity-dependent velocity (see Fig.~\ref{parallel-fig}A and B). However, the nematic phase now contracts for contractile systems and expands for extensile ones. As before, we use Eq.~\eqref{active-force-complete} to calculate the active force for a circular interface, now with a normal director field, $\mathbf{n}\cdot \mathbf{m}=1$ (Fig.~\ref{active-particle-fig}F). The active force is now: \begin{align} \mathbf{F}^{\text{active}}_{\bot} = \zeta \left( \frac{2\vert\boldsymbol{\nabla}S \vert}{3} + \frac{S}{R} \right) \mathbf{m}. \label{active-force-planar} \end{align} Clearly, the active forces at interfaces with planar and homeotropic anchoring point in opposite directions (see Eq.~\eqref{active-force}). In planar channels, the active force at the interface is also larger, in line with the larger velocities observed in the simulations. In Fig.~\ref{parallel-fig}C, we illustrate the interfacial dancing state observed in the planar channel. We note the formation of islands of nematic close to the boundaries and the presence of $+1/2$ defects that move in directions opposite to those observed in the homeotropic channel (as a result of the opposite signs of the vorticities). The dancing state, shown in Fig.~\ref{parallel-fig}D, is similar in both channels. Finally, we consider a closed channel with only one interface (see Fig.~\ref{closed-fig}). The left boundary is nematic with planar anchoring and the right one is isotropic, with no-slip boundary conditions at the four walls. The system is initialized with the left half of the channel in the uniform nematic state, with directors parallel to the interface, and the right half isotropic.
We note that the stable interface behaves in the same way as described in Sec.~\ref{stable-int-ec}. Furthermore, the interfacial dancing state appears even when the channel is closed (Fig.~\ref{closed-fig}C), but manifests itself through a different route (Fig.~\ref{closed-fig}D to G): the interface does not break the mirror symmetry but becomes increasingly curved until a defect is formed and is ejected into the nematic phase. When the interface approaches the left rigid boundary, it finally breaks the mirror symmetry and becomes asymmetric as in Fig.~\ref{instability-fig}. \section{Summary and conclusion}\label{conclusions} To summarize, we studied the behavior of nematic-isotropic interfaces in active liquid crystals confined in a narrow channel. We found that, at high activities, the interface disappears and the system becomes nematic in the previously reported dancing or active turbulent flow states. At lower activities, there are two flow states where an interface is present: the stable interface and the interfacial dancing state. In the stable interface regime with homeotropic anchoring, the right interface propagates forwards for contractile nematics and backwards for extensile ones, as a result of the active force at the interface. The moving active interfaces may be stopped by a shift in the temperature. We calculated this shift for a wide range of activities and checked the robustness of this stabilization mechanism by performing longer simulations. The static active interfaces are flatter than the passive ones for contractile nematics and more strongly curved for extensile ones. The curvature increases with the activity and, above a critical curvature, the interface becomes unstable. In addition, we observed an interfacial dancing state similar to the unconfined dancing state. In this state, $+1/2$ defects are continuously formed at the NI interface and are ejected into the bulk nematic. We also explored other channels and initial conditions to verify the robustness of the results. To perform the simulations, we used an improved hybrid model based on the lattice Boltzmann method and on a predictor-corrector FD scheme. Among the improvements are: the use of a more isotropic lattice (D3Q19); a multi-relaxation-time collision operator, which is more accurate and stable since it allows the choice of different relaxation rates for the hydrodynamic moments; and the implementation of the stress tensor in the force term, reducing spurious velocities. These improvements were required to obtain a reliable description of the interfacial dynamics, since these systems are prone to spurious numerical effects. A final word on experiments. Although the motivation of the work reported here was theoretical, the relevance of interfaces in active nematic experiments is clear. As mentioned in the introduction, the active turbulent state may be observed at temperatures above the passive NI transition rather than deep in the bulk nematic phase. Under these conditions, interfaces may affect transient or steady dynamical states, as described here. Indeed, an experimental study of the propagation of active-passive interfaces in bacterial swarms has recently been reported~\cite{Patteson2018}. The methods and models proposed in our work may be used to address a number of questions raised by these and related experiments. \section*{Conflicts of interest} There are no conflicts to declare.
\section*{Acknowledgements} We acknowledge financial support from the Portuguese Foundation for Science and Technology (FCT) under the contracts PTDC/FIS-MAC/28146/2017 (LISBOA-01-0145-FEDER-028146) and UID/FIS/00618/2019. Margarida Telo da Gama (MTG) would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme ``The mathematical design of new materials'', where most of this work was carried out. This programme was supported by EPSRC Grant Number EP/R014604/1. MTG's participation in the programme was supported in part by a Simons Foundation Fellowship. We thank Prof. Mykola Tasinkevych for fruitful discussions. \balance
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} Planar semiconductor microcavities have attracted much attention as they provide a method to enhance and control the interaction between light and electronic excitations. When the microcavity mode (cavity photon) is resonant with the excitonic transition, two different regimes can be distinguished based on the competition between the processes of exciton-photon coupling and damping (both photon damping and exciton dephasing). The weak coupling regime corresponds to the damping prevailing over the light-matter interaction, and the latter then simply modifies the radiative decay rate and the emission angular pattern of the cavity mode. In contrast, in the strong coupling regime the damping processes are weak in comparison with the exciton-photon interaction, and the true eigenstates of the system are mixed exciton-photon states, cavity polaritons. This results, in particular, in the appearance of a gap in the spectrum of the excitations, whose magnitude is set by the Rabi splitting energy. In inorganic semiconductors the strong coupling regime has been investigated extensively, both experimentally and theoretically,\cite{burweis95,weisrar96,skol98,koch99,Kavokin,Bass03,weisben06} and the dynamics of microcavity polaritons is now reasonably well understood.\cite{bottle} These studies continue to expand because of the prospect of important applications such as the polariton laser related to the polariton condensation in the lowest energy state.\cite{imam96,yama00,savv00,stev00,savv00a,alex01,deng02,Kavokin,rubo03,laus04} In another development, organic materials have been utilized in microcavities as optically active semiconductors. In many organic materials excitons are known to be small-radius states, Frenkel excitons, which interact much more strongly with photons than large-radius Wannier-Mott excitons in inorganic semiconductors. The cavity polaritons, therefore, may exhibit much larger Rabi splittings, on the order of a few hundred meV; large splittings have in fact been observed experimentally.\cite{lidz98,lidz99,tart01,hobs02,scho02,lidz02,taka03,lidz02a,lidz03,holm04,holm05} At the same time, Frenkel excitons typically also feature substantially stronger interactions with phonons and disorder: electronic resonances in both disordered and crystalline organic systems are frequently found to be rather broad and dispersionless. It is thus likely that manifestations of exciton-polaritons in organic microcavities could be quite different from those of their inorganic counterparts. In this paper we are concerned with the nature and dynamics of the low-energy exciton-polaritons in organic microcavities, the states of particular importance for the problem of polariton condensation. Our goal here is to illustrate some qualitative features of the dynamics in both perfect and disordered systems and, thereby, to draw attention to the need for more detailed experimental and theoretical studies to elucidate conditions for the polariton laser operation based on organic systems.
The bare planar cavity photons are coherent wave excitations with a continuous spectrum, whose energy $\epsilon (\mathbf{k})=\epsilon_{k}$ depends on the magnitude $k=|\mathbf{k}|$ of the 2$d$ wave vector $\mathbf{k}$: \begin{equation}\label{phot} \epsilon_{k}=\left(\Delta^2 + \hbar^2 c^2 k^2 /\epsilon \right)^{1/2}, \end{equation} where $\Delta$ is the cutoff energy for the lowest transverse quantization photon branch we restrict our attention to, $c$ is the speed of light and $\epsilon$ the appropriate dielectric constant. In the vicinity of the excitonic resonance, $\epsilon_{k} \approx \varepsilon$ ($\varepsilon$ being the exciton energy), strong exciton-photon coupling leads to the formation of new mixed states, exciton-polaritons, whose two-branch ($E_{\pm}$) energy spectrum features a gap as, e.g., illustrated in Fig.~\ref{Spectrum}. Especially interesting are systems with detuning $|\Delta-\varepsilon|$ small in comparison with the Rabi splitting. In the absence of exciton scattering processes, cavity polaritons are also clearly coherent excitations that can be well characterized by wave vectors $\mathbf{k}$ and have energies $E_{\pm}(k)$. Frenkel exciton scattering (due to phonons and/or disorder) in organic systems with weak intermolecular interactions results in exciton localization: excitons propagate not as coherent wave packets but by hopping; in such a regime the wave vector $\mathbf{k}$ is no longer a ``good'' quantum number. \begin{figure} \includegraphics[scale=0.7]{Spectrum.eps} \caption{\label{Spectrum}The energy spectrum of the polaritonic eigenstates in a 1$d$ model microcavity described by the Hamiltonian (\ref{h1}) with $N=1500$ molecular sites and $N=1500$ photon modes. Parameters of the systems are as follows: average exciton energy $\varepsilon=2$ eV, cavity photon cutoff energy $\Delta=1.9$ eV, dielectric constant $\epsilon=3$, the exciton-photon interaction energy $\gamma=0.15$ eV. The dashed lines show the energy eigenvalues in the system without exciton disorder, $\sigma=0$, the solid lines correspond to the spectrum in the system with disorder, $\sigma=0.03$ eV. Here the energies are shown as a function of state ``number'' sorted in an increasing energy order, separately for the lower (LP) and upper (UP) polariton branches. In the system without disorder, the state numbers would be immediately convertible to the corresponding wave vectors. On this scale, the UP branches of perfect and disordered systems are hardly distinguishable.} \end{figure} As we are actually interested in the lowest energy polariton states, we will constrain further discussion mostly to the lower polariton (LP) branch. One can quickly notice that, since the photon energy (\ref{phot}) rapidly increases with $k$, higher-$k$ photon states interact only very weakly with the exciton states. Therefore a very large number of eigenstates of the system with energies close to the upper end of the LP branch (see Fig.~\ref{Spectrum}) are essentially reflective of the bare localized exciton states with an incoherent propagation mode. For the remaining, fewer and lower in energy, states of the LP branch, however, the exciton-photon interaction is strong, and one is then faced with an interesting interplay of the bare localized nature of the ``exciton part'' of the polariton and the bare coherent character of its ``photon part''.
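The two polariton branches in the perfect system can be illustrated with the simplest $2\times 2$ coupled-mode approximation (our sketch, neglecting the weak $k$-dependence of the coupling near resonance), using the parameters of Fig.~\ref{Spectrum}:
\begin{verbatim}
import numpy as np

eps_x, Delta, gam, eps_r = 2.0, 1.9, 0.15, 3.0  # eV, Fig. (Spectrum)
hbar_c = 1.9732696e-5                           # hbar*c in eV*cm

def photon(k):
    # Eq. (phot); k in cm^-1
    return np.sqrt(Delta**2 + (hbar_c * k)**2 / eps_r)

def branches(k):
    # eigenvalues of [[eps_k, gam], [gam, eps_x]]
    ek = photon(k)
    avg, det = 0.5 * (ek + eps_x), 0.5 * (ek - eps_x)
    split = np.sqrt(det**2 + gam**2)
    return avg - split, avg + split             # E_-, E_+

k = np.linspace(0.0, 3.0e5, 400)                # cm^-1
E_lp, E_up = branches(k)                        # LP bottom near 1.79 eV
\end{verbatim}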
Transparent physical arguments were used in Ref.~\onlinecite{ALL}, involving the indeterminacy (broadening) of the \textit{polariton} wave vector $k$ owing to exciton dephasing, starting from the general relation:\cite{peierls} \begin{equation}\label{dk} \delta k=\frac{dk}{dE}\,\delta E(k)=\frac{\delta E(k)}{\hbar v_{g}(k)}, \end{equation} where $v_{g} (k)$ is the group velocity of parent polaritons in the perfect system and $\delta E(k)$ is the energy broadening due to scattering. Based on Eq.~(\ref{dk}), one can at least distinguish polaritonic states where $k$ is relatively well-defined in the sense of \begin{equation}\label{dk1} \delta k \ll k. \end{equation} If $\delta E(k)$ is weakly $k$-dependent then, evidently from Eq.~(\ref{dk}), condition (\ref{dk1}) would necessarily be violated in regions of the spectrum where the group velocity $v_{g}(k)$ vanishes. For the LP branch, as seen in Fig.~\ref{Spectrum}, this occurs both at its lower- and higher-energy ends. Reference \onlinecite{ALL} provided estimates of the corresponding ``end-points'' $k_\mathrm{min}$ and $k_\mathrm{max}$, where $\delta k \simeq k$, for organic planar microcavities within the macroscopic electrodynamics description of polaritons. It was then anticipated that exciton scattering would render eigenstates corresponding to parent polaritons with $k < k_\mathrm{min}$ and $k > k_\mathrm{max}$ spatially localized, in accordance with uncertainty relations like $\delta x\delta k_{x} \sim 1$. As we discussed above, at the higher-energy end ($k > k_\mathrm{max}$) of the LP branch, the eigenstates are practically bare exciton states in nature. At the lower-energy end ($k < k_\mathrm{min}$), one would deal with localized polaritons having comparable exciton and photon contents, particularly for detuning $|\Delta-\varepsilon|$ small with respect to the Rabi splitting. Further numerical calculations\cite{ML,AL} for 1$d$ microcavities with diagonal exciton disorder confirmed this qualitative picture of the polariton states. Below, we will use a similar 1$d$ microcavity model to illustrate the nature of the low-energy LP states as well as the time evolution of low-energy wave packets. \section{Dynamics of low-energy wave packets in perfect microcavities} Before proceeding with a model analysis of polaritons in a disordered system, we will briefly discuss the time evolution of wave packets in perfect microcavities, where all polaritons are coherent states well characterized by their wave vectors. Not only will this establish a comparative benchmark, but it is also useful in itself, as such dynamics reflects features of the polariton spectrum, and hence of the exciton-photon hybridization. Of course, specific features of the low-energy wave packets stem from the fact that the polariton dispersion near the bottom of the LP branch ($\mathbf{k} \simeq 0$) is manifestly parabolic: \begin{equation}\label{disp} \omega(\mathbf{k})\simeq \om_{0} +\alpha k^{2}, \ \ \ \alpha=\hbar/2M, \end{equation} $M$ being the cavity polariton effective mass, which makes the broadening of wave packets a relevant factor. Consider now a wave packet formed with the states close to the branch bottom: \begin{equation}\label{pack2d} U(\vvr,t)=\pwm{\om_{0} t} \int d\mathbf{k} \, A(\mathbf{k}) \, \pw{\left(\mathbf{k}\cdot\mathbf{r} - \alpha k^2 t \right)}.
\end{equation} It is convenient to choose the weight amplitude function $A(\mathbf{k})$ to be Gaussian: $A(\mathbf{k})=\left(\beta/2\pi^3\right)^{1/2} \exp\left[-\beta (\mathbf{k}-\mathbf{k}_{0})^{2}\right]$, centered at wave vector $\mathbf{k}_{0}$. With this amplitude function, Eq.~(\ref{pack2d}) yields \begin{equation}\label{pack2da} \left|U(\vvr,t)\right|^2 = C(t) \,\exp\left[- \frac{\beta (\mathbf{r}-2\alpha \mathbf{k}_{0} t)^2}{2(\beta^2 + \alpha^2 t^2)} \right] \end{equation} for the time evolution of the spatial ``intensity'' of the wave packet, $$ C(t)=\frac{\beta}{2\pi(\beta^2+\alpha^2 t^2)}. $$ Equation (\ref{pack2da}) describes a Gaussian-shaped wave packet in 2$d$ whose center $$ \mathbf{r}_{c}(t)=\vv_{g} t, \ \ \ \vv_{g}=2\alpha \mathbf{k}_{0}, $$ moves with the group velocity consistent with the dispersion (\ref{disp}), and whose linear width increases with time in accordance with the 1$d$ variance \begin{equation}\label{var} s(t)=\left(\beta + \alpha^2 t^2/\beta \right)^{1/2}. \end{equation} The total energy in the wave packet is conserved: with our choice of the amplitude function, $$ \int d\mathbf{r} \left|U(\vvr,t)\right|^2 =1. $$ Of course, Eq.~(\ref{pack2da}) can be derived as a product of two independent 1$d$ normalized evolutions such as \begin{equation}\label{pak1da} \left|U(x,t)\right|^2 =\sqrt{C(t)} \,\exp\left[- \frac{\beta (x-2\alpha k_{0} t)^2}{2(\beta^2 + \alpha^2 t^2)} \right], \end{equation} which will be relevant in our discussion of 1$d$ microcavities; in this case the 1$d$ packet amplitude function is \begin{equation}\label{ampl1} A(k)=\left(\beta/2\pi^3\right)^{1/4} \exp\left[-\beta (k-k_{0})^{2}\right]. \end{equation} The spatial broadening (\ref{var}) features an initial value of $\beta^{1/2}$ and a characteristic time $t_{b}=\beta/\alpha$ such that, at times $t \gg t_{b}$, the variance grows linearly with velocity $v_{b}=\alpha \beta^{-1/2}$. To appreciate the scales, some rough estimates can be made. For an effective polariton mass $M$ on the order of $10^{-5}\, m_{0}$ ($m_{0}$ being the vacuum electron mass), the parameter $\alpha=\hbar/2M \simeq 5\cdot 10^{4}$ cm$^2$/s. Estimates in Ref.~\onlinecite{ALL}, made with the Rabi splitting and detuning $\sim 100$ meV, yielded for microcavities with disordered organics $k_\mathrm{min} \sim 10^{4}$ cm$^{-1}$. Then for wave packets satisfying $1 \lesssim \beta^{1/2}k_\mathrm{min} \lesssim 10$, the characteristic time would be $0.2 \lesssim t_{b} \lesssim 20$ ps and the corresponding velocity $5\cdot 10^8 \gtrsim v_{b} \gtrsim 5\cdot 10^7$ cm/s. In our 1$d$ numerical example below, we will use a value of the parameter $\beta$ within the range just discussed. We note that by changing the physical parameters of the microcavity and the organic material, as well as the conditions for the polariton excitation, one can influence the dynamics described above. One should also be aware that the evolution times are limited by the actual lifetimes $\tau$ of small wave-vector cavity polaritons. Long lifetimes $\tau$ on the order of 10 ps can be achieved only in microcavities with high quality factors $Q=\omega \tau$. \section{Time evolution in a 1$d$ microcavity with diagonal disorder} Finding polariton states in disordered planar microcavities microscopically is a difficult task that we do not attempt in this paper. As a first excursion into the study of disorder effects on polariton dynamics, here we will follow Ref.~\onlinecite{ML} to explore the dynamics in a simpler microscopic model of a 1$d$ microcavity.
Such microcavities are interesting in themselves and can have experimental realizations; from the results known in the theory of disordered systems,\cite{lee1985} one can also anticipate that certain qualitative features may be common to 1$d$ and 2$d$ systems. The microscopic model we study is set up in the following Hamiltonian: \begin{eqnarray} H & = & \sum_{n} (\varepsilon + \varepsilon_{n})\ada_{n} + \sum_{k} \epsilon_{k} \bkdb_{k} \nonumber \\ & + & \gamma \sum_{nk} \sqrt{\frac{\varepsilon}{N\epsilon_{k}}} \left(\pw{kna}\,a_{n}^{\dag}b_{k} + \pwm{kna}\,\bkda_{n} \right) \label{h1}. \end{eqnarray} It consists of a lattice of $N$ ``molecular sites'' spaced by distance $a$ and comprises the exciton part ($a_{n}$ is the exciton annihilation operator on site $n$), the photon part ($b_{k}$ is the photon annihilation operator with wave vector $k$ and a given polarization) as well as the ordinary exciton-photon interaction. The cavity photon energy $\epsilon_{k}$ is defined by Eq.~(\ref{phot}), $\varepsilon$ represents the average exciton energy, while $\varepsilon_{n}$ are the on-site exciton energy fluctuations. We will use uncorrelated normally distributed $\varepsilon_{n}$ with zero mean and variance $\sigma^2$: \begin{equation}\label{dis} \langle \varepsilon_{n}\varepsilon_{m} \rangle = \sigma^2 \delta_{nm}. \end{equation} The exciton-photon interaction is written in such a form that $2\gamma$ yields the Rabi splitting energy in the perfect system. We chose to use the same number $N$ of photon modes; the wave vectors $k$ are discrete with increments of $2\pi/Na$. Our approach is to straightforwardly find the normalized polariton eigenstates $|\WF_{i}\rangle$ ($i$ is the state index) of the Hamiltonian (\ref{h1}) and then use them in the site-coordinate representation: \begin{equation}\label{wf} \Psi (n) = \left(\Psi_{\mathrm{p}} (n), \, \Psi_{\mathrm{e}} (n) \right), \ \ \sum_{n}|\Psi (n)|^2 = 1, \end{equation} where $\Psi_{\mathrm{p}}$ and $\Psi_{\mathrm{e}}$ respectively describe the photon and exciton parts of the polariton wave function and $n$ denotes the $n$th site. We have tried various numerical parameters in the model Hamiltonian with qualitatively consistent results; the parameters used in this paper have been chosen, on the one hand, to be reasonably comparable with the experimental data and, on the other hand, to better illustrate our point within a practical computational effort. It should be kept in mind, though, that we consider a model system and the numerical values of the results may differ, likely within an order of magnitude, for various systems. The numerical parameters are indicated in the caption of Fig.~\ref{Spectrum} and have been used to calculate the eigenstates of the Hamiltonian (\ref{h1}) with $N=1500$ for a cavity of physical length $L=Na=150$ $\mu$m and a small negative detuning $(\Delta - \varepsilon)=-0.1$ eV. Figure \ref{Spectrum} compares the energy spectra in the perfect microcavity and in the cavity with one realization of the excitonic disorder, Eq.~(\ref{dis}), with $\sigma=0.03$ eV. It is apparent that the effect of this amount of disorder on the polariton energy spectrum \textit{per se} is relatively small, except in the higher-energy region of the LP branch where the eigenstates, as we discussed, are practically of a pure exciton nature.
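A minimal numerical sketch of this diagonalization (our illustration; $N$ is reduced here to keep the $2N\times 2N$ problem fast):
\begin{verbatim}
import numpy as np

# one-excitation sector of the Hamiltonian (h1): basis of N site
# excitons followed by N photon modes
N, a = 300, 100e-7                    # sites, lattice spacing (cm)
eps_x, Delta, gam, sigma = 2.0, 1.9, 0.15, 0.03   # energies in eV
eps_r, hbar_c = 3.0, 1.9732696e-5     # hbar*c in eV*cm

rng = np.random.default_rng(1)
eps_n = eps_x + sigma * rng.standard_normal(N)    # diagonal disorder
j = np.arange(N) - N // 2
k = 2.0 * np.pi * j / (N * a)                     # photon wave vectors
eps_k = np.sqrt(Delta**2 + (hbar_c * k)**2 / eps_r)

n = np.arange(N)
V = gam * np.sqrt(eps_x / (N * eps_k)) * np.exp(1j * np.outer(n * a, k))

H = np.zeros((2 * N, 2 * N), dtype=complex)
H[:N, :N] = np.diag(eps_n)            # excitons
H[N:, N:] = np.diag(eps_k)            # photons
H[:N, N:] = V                         # exciton-photon coupling
H[N:, :N] = V.conj().T

E, Psi = np.linalg.eigh(H)            # Psi[:N, i], Psi[N:, i]: exciton
                                      # and photon amplitudes of state i
\end{verbatim}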
The lower-energy part of the LP branch, however, corresponds to the polariton states $\Psi$ (\ref{wf}) in which the exciton and photon are strongly coupled ($\gamma=0.15 > |\Delta -\varepsilon|=0.1$ eV) with comparable weight contributions in $\Psi_{\mathrm{p}}$ and $\Psi_{\mathrm{e}}$. A dramatic effect of the disorder is the strongly localized character of the polaritonic eigenstates near the bottom of the LP branch, as illustrated in Fig.~\ref{Wfunctions}(a) (needless to \begin{figure} \includegraphics[scale=0.7]{Wfunctions.eps} \caption{\label{Wfunctions}Examples of the spatial structure of the photon part $|\WFp|^{2}$ of the polariton eigenstates in a 1$d$ microcavity with disorder. (a) Four states, shown by different lines, from the very bottom of the LP polariton branch with energies within the range of 1.76-1.77 eV (see the spectrum in Fig.~\ref{Spectrum}). The inset shows one of these states in more detail. Dots in the inset correspond to the sites of the underlying lattice. (b) One state with a higher energy close to 1.82 eV. The inset shows part of the spatial structure of this state in more detail. Dots correspond to the lattice sites and spatial oscillations of the wave function are clearly seen.} \end{figure} say that the same behavior is observed for the states near the bottom of the UP branch, which we are not concerned with in this paper). Of course, both the photon $\Psi_{\mathrm{p}}$ and exciton $\Psi_{\mathrm{e}}$ parts of a localized polariton state are localized on the same spatial scale; however, the exciton part of the spatial wave function is more ``wiggly'', reflecting the individual site energy fluctuations more strongly. For better clarity, in both Figs.~\ref{Wfunctions} and \ref{Evol} we show only the smoother-behaving photon parts $\Psi_{\mathrm{p}}$. Panel (a) of Fig.~\ref{Wfunctions} displays examples of $|\WFp|^{2}$ for four states from the very bottom of the LP branch in a realization of the disordered system that are localized at different locations of the 150 $\mu$m sample. The inset to this panel shows the spatial structure of one of these states in more detail; it demonstrates both the spatial scale $l$ of localization in this energy range ($l \sim 1$ $\mu$m with the parameters used) and the ``macroscopic'' size of the localized state in comparison with the lattice spacing: $l \gg a=100$ nm. The states at the bottom of the LP branch can be characterized as strongly localized in the sense of $k l \lesssim 1$, where $k$ is a typical wave vector of the parent polariton states in the perfect system. This feature may be contrasted with the behavior at somewhat higher energies and at higher $k > k_\mathrm{min}$ of the parent states, where the disorder-induced indeterminacy of the $k$-vector becomes small, satisfying Eq.~(\ref{dk1}), so that $k$ would appear to be a good quantum number. As is known,\cite{lee1985} however, multiple scattering should still lead to spatial localization of the eigenstates, now on a spatial scale $l$ such that $k l \gg 1$. Panel (b) of Fig.~\ref{Wfunctions} illustrates the spatial structure of such a state with much larger $l$ than in panel (a). The inset to panel (b) shows that the wave function in this case, within the localization length, exhibits multiple oscillations with a period on the order of $1/k$, which produces the black appearance on the scale of the whole panel (b).
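The degree of localization seen in Fig.~\ref{Wfunctions} can also be quantified state by state. One convenient measure (not the one used for the figures, which simply display the wave functions) is the participation ratio; a minimal sketch, continuing the diagonalization snippet above:
\begin{verbatim}
# Participation ratio as a crude localization length, in units of the
# lattice spacing; continues the diagonalization snippet above.
def site_weights(psi_col, k, a):
    """|Psi(n)|^2 of Eq. (wf): exciton part plus the Fourier-synthesized
    photon part of a single eigenvector psi_col of length 2N."""
    N = psi_col.size // 2
    n = np.arange(N)
    psi_e = psi_col[:N]
    psi_p = np.exp(1j * np.outer(n * a, k)) @ psi_col[N:] / np.sqrt(N)
    w = np.abs(psi_p) ** 2 + np.abs(psi_e) ** 2
    return w / w.sum()

def participation_ratio(psi_col, k, a):
    w = site_weights(psi_col, k, a)
    return 1.0 / np.sum(w ** 2)   # ~1 for one site, ~N for a plane wave
\end{verbatim}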
Having calculated all the eigenstates of the system, we are now in a position to study the time evolution of an initial polariton excitation, which we choose in the form of a wave packet $|\WF^{0}\rangle$ built out of the low-energy polariton states $|\WF^{0}_{i}\rangle$ of the perfect system: \begin{equation}\label{ini} |\WF^{0}\rangle = \sum_{i} A_{i}\,|\WF^{0}_{i}\rangle =\sum_{i} B_{i}\,|\WF_{i}\rangle. \end{equation} Polaritons in the perfect system are ordinary plane waves, and we used a discretized analog of Eq.~(\ref{ampl1}) for the amplitude function $A_{i}$; the result is a Gaussian-shaped wave packet, as illustrated in Fig.~\ref{Evol} by the long-dashed lines for the photon part of the polariton wave function. The amplitudes $B_{i}$ in Eq.~(\ref{ini}) are, on the other hand, the expansion coefficients of the same initial excitation over the eigenstates $|\WF_{i}\rangle$ of the system with disorder. The time evolution of the initial excitation in the perfect system is then given by \begin{equation}\label{evol0} |\WF^{0}(t)\rangle = \sum_{i} A_{i}\,\pwm{E^{0}_{i}t/\hbar} \, |\WF^{0}_{i}\rangle, \end{equation} while the evolution in the disordered system is given by \begin{equation}\label{evol1} |\WF(t)\rangle = \sum_{i} B_{i}\,\pwm{E_{i}t/\hbar} \, |\WF_{i}\rangle, \end{equation} where $E_{i}^{0}$ and $E_{i}$ are the respective eigenstate energies. \begin{figure} \includegraphics[scale=0.7]{Evol.eps} \caption{\label{Evol}Examples of the time evolution of spatially identical wave packets built out of the polariton eigenstates of a perfect 1$d$ microcavity, as in Eq.~(\ref{ampl1}), with the parameter $\beta^{1/2}=5$ $\mu$m. For panels (a) and (b), the initial packet has zero total momentum, $k_{0}=0$; for panel (c) the initial packet has a finite momentum determined by $k_{0}=10^4$ cm$^{-1}$. Only the photon part $|\WFp|^{2}$ of the polariton wave function is displayed. The initial packets are shown by long-dashed lines; results of the evolution after the indicated times $t$ are shown by solid lines for the disordered system and by short-dashed lines for the perfect microcavity (except in panel (a), where the latter practically coincides with the initial packet).} \end{figure} Of course, the evolution of the low-energy wave packet (\ref{evol0}) in the perfect microcavity takes place in accordance with our generic continuum description in Eq.~(\ref{pak1da}) (barring small differences that may be caused by deviations from the purely parabolic spectrum). This is clearly seen in panels (b) and (c) of Fig.~\ref{Evol}, where the photon part of $|\WF^{0}(t)\rangle$ at the indicated times $t$ is displayed by the short-dashed lines: mere broadening of the wave packet with no momentum ($k_{0}=0$) in panel (b), and both broadening and translational displacement ($k_{0} \neq 0$) in panel (c). On the time scale of panel (a), the $|\WF^{0}(t)\rangle$ state has practically not evolved from $|\WF^{0}\rangle$ yet and is not shown in that panel. The time evolution of exactly the same initial polariton packets is drastically different in the disordered system; the corresponding spatial patterns of the photon part of $|\WF(t)\rangle$ are shown in Fig.~\ref{Evol} with solid lines. First of all, the initial packet is quickly (within a fraction of a ps) transformed into a lumpy structure reflecting the multitude of the localized polariton states within the spatial region of the initial excitation.
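As an aside on the numerics: the evolutions (\ref{evol0}) and (\ref{evol1}) amount to attaching phase factors to the expansion coefficients, so a single diagonalization gives the state at any time. A minimal sketch, reusing \texttt{E} and \texttt{psi} from the snippet above:
\begin{verbatim}
# Sketch of Eq. (evol1): spectral time evolution of an initial state psi0
# expressed in the same basis as the columns of psi; t in seconds.
hbar_eVs = 6.5821e-16   # eV s

def evolve(psi0, E, psi, t):
    B = psi.conj().T @ psi0                  # coefficients B_i of Eq. (ini)
    return psi @ (B * np.exp(-1j * E * t / hbar_eVs))
\end{verbatim}
The evolution (\ref{evol0}) in the perfect system is obtained in exactly the same way from the eigenpairs of the disorder-free Hamiltonian.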
Note that in our illustration here we intentionally chose the initial amplitude function (\ref{ampl1}) with the parameter $\beta^{1/2}=5$ $\mu$m large enough for the spatial size of the initial excitation to be much larger than the size of the individual localized polaritons at these energies (compare to Fig.~\ref{Wfunctions}(a)). Importantly, however, while displaying some internal dynamics (likely resulting from the overlap of various localized states), this lumpy structure does not propagate well beyond the initial excitation region over longer times. This is especially evident in comparison with the perfect system, where the broadening and motion of the packets are apparent (panels (b) and (c) of Fig.~\ref{Evol}). We have run simulations over extended periods of time ($\sim 100$ ps) with the result that $|\WF(t)\rangle$ remains essentially localized within the same spatial region. Of course, some details of $|\WF(t)\rangle$ depend on the initial excitation (see, e.g., the somewhat broader localization region in Fig.~\ref{Evol}(c) for the initial excitation with a momentum corresponding to $k_{0}=10^{4}$ cm$^{-1}$), but the long-term localization in the disordered system appears robust in all our simulations. It would be interesting to extend the dynamical studies to include higher-energy states such as the one in Fig.~\ref{Wfunctions}(b); this is, however, beyond the scope of the present paper. \section{Concluding remarks} The nature and dynamics of low-energy cavity polariton states are important for various physical processes in microcavities, particularly for the problem of condensation of polaritons into the lowest energy state(s). As was demonstrated in Ref.~\onlinecite{ALL}, low-energy polaritons in organic microcavities should be especially susceptible to effects of scattering/disorder in the exciton subsystem. The problem of disorder effects on polaritons in organic microcavities appears quite interesting, as organic materials would typically feature both strong exciton-photon coupling and substantial static and/or dynamic exciton scattering. In this paper we have continued the line of study of Ref.~\onlinecite{ML}, looking in some more detail at disorder effects on polaritons in a 1$d$ model microcavity. Our numerical analysis has brought further evidence that low-energy polariton states in organic microcavities can be strongly localized in the sense of $l \lesssim \lambda$, where $l$ is the spatial size of the localized states and $\lambda$ the wavelength of the parent polariton waves. (We have also found indications of weaker localization at higher polariton energies, in the sense of $l \gg \lambda$.) Our illustrations have included demonstrations of localization not only via the spatial appearance of the polariton eigenstates but also via the time evolution of different low-energy wave packets. On physical grounds,\cite{ALL} one should expect that low-energy polaritons in 2$d$ organic microcavities would also be rendered strongly localized by disorder, as would also follow from the general ideas of the theory of localization.\cite{lee1985} Further work on microscopic models of 2$d$ polariton systems is required to quantify their localization regimes. The strongly localized nature of low-energy polariton states should affect many processes, such as light scattering and nonlinear phenomena, as well as temperature-induced diffusion of polaritons.
Manifestations of the statistics of localized polaritons (Frenkel excitons are paulions, exhibiting properties intermediate between Fermi and Bose particles) in the problem of condensation also appear interesting and important. We note that one can exercise experimental control over the degree of exciton-photon hybridization and disorder by modifying the size of the microcavity for various organic materials, making such systems a fertile ground for detailed experimental and theoretical research into their physics. One last remark. While we have specifically discussed exciton-photon polaritons in organic systems, it is clear that some aspects have a generic character and could be applicable to other systems. This concerns, for instance, inorganic semiconductor microcavities. Both the exciton-photon coupling and the magnitude of disorder there, however, are much smaller, which would likely make localization effects relevant only at very low temperatures. As one recent example of a very different kind of system, we mention hybrid modes in chains of noncontacting noble-metal nanoparticles, where the interaction of photons and nanoparticles leads to plasmon-polaritons.\cite{citrin2006} \section{Acknowledgements} VMA's work was supported by the Russian Foundation for Basic Research and the Ministry of Science and Technology of the Russian Federation. He is also grateful to G.~C.~La Rocca for discussions. The authors thank D.~Basko for reading and commenting on the manuscript.
\section{Introduction} Whereas lattice gauge theory (LGT) was initially formulated to study gauge-invariant quantities in the non-perturbative regime, it has long been recognized that LGT could be a useful tool for studying gauge-variant quantities such as Green functions, both in the non-perturbative and in the perturbative regimes. The SU(3) gluon propagator in momentum space was first considered \cite{BER94} to gain some insight into the physics of confinement. Much work was then devoted to the study of its infrared behavior (for a review see \cite{MAN99}). Subsequent studies \cite{LPTX99,LPTX00} were focused on the ultraviolet behavior and have been able to compare quantitatively the large-momentum dependence of the lattice gluon propagator with perturbative predictions beyond one-loop order. The result for $\Lambda_{\ms}$ was found to depend strongly upon the order of the perturbation theory and upon the renormalisation scheme used in the parametrization. This strong dependence raised the question of whether the energy windows in these calculations were large enough for perturbative QCD to be a valid approximation. On the other hand, as shown by Gribov \cite{GRI78}, the infrared behavior of the gluon propagator is closely related to the singularity structure of the ghost propagator inferred from the gauge-fixing ambiguities. As is well known, the Landau gauge, which is presently the only covariant gauge for which there exist effective local gauge-fixing algorithms on the lattice, suffers from these ambiguities. The comprehensive theoretical study by Zwanziger \cite{ZWA94} of the Faddeev-Popov operator on the lattice in Landau gauge spurred the first numerical study of the ghost propagator \cite{SUM96} in SU(2) and SU(3) gauge theories. Most subsequent activity has been dedicated to the SU(2) lattice gauge theory in the infrared region, mainly for technical reasons, as we shall explain below. There are relatively few numerical studies of the SU(3) ghost propagator, which are either focused on the infrared region and the Gribov copy problem \cite{STE05a,STE05b,STE05c} or have performed only a qualitative perturbative description in the quenched approximation \cite{JAP04a,JAP04b} and, quite recently, in the unquenched case also \cite{JAP05}. It is important to make the study of the SU(3) ghost propagator in the ultraviolet region more quantitative, for comparison with the gluon propagator. Lattice results at small distances may be described by perturbation theory, and the independent extraction of the $\Lambda_{\text{QCD}}$ scale from the two propagators would provide a self-consistency test of the analysis and of the lattice approach. It would be particularly significant to confirm or refute, from the study of the lattice propagators alone, the need for the non-perturbative power corrections found in the study of the three-gluon coupling on the lattice \cite{BOU00}. The paper is organized as follows. We begin by recalling in section~\ref{sec:charge} the method used to relate lattice data for the ghost propagator to its perturbative renormalization description. We then proceed by exhibiting in section~\ref{sec:lattice} the salient features of our lattice calculation, particularly of our implementation of the Faddeev-Popov operator on the lattice. The following section outlines the general method that we devised previously \cite{LPTX99,LPTX00} to eliminate hypercubic artifacts from two-point functions and extrapolate the lattice data towards the continuum.
This extrapolation is crucial for a successful quantitative description. The results are discussed in section~\ref{sec:analysis}, which contains several subsections where the analysis is performed in different renormalization schemes up to four-loop order. In particular the scheme dependence is thoroughly investigated and used to probe the effects of the truncation of the perturbative series. We conclude in section \ref{sec:conclusion} with a comparison of the different methods to compute the $\Lambda_{\text{QCD}}$ scale on the lattice. \section{Renormalization description of the ghost propagator} \label{sec:charge} Let $\Gamma^{(n)}_{B}$ be some gauge-fixed multiplicatively renormalizable one-particle irreducible $n$-point bare Green function defined in euclidean momentum space and in some regularization scheme with cut-off $\Lambda$. Let $s$ denote some polarization state and kinematical configuration of the external particles contributing to $\Gamma^{(n)}_{B}$. Let $p$ denote a scale transformation on $s$ and $g_{B}$ denote the bare coupling. It is well known that, in any renormalization scheme $R$ defined by some renormalization conditions on state $s$ at the renormalization point $p=\mu$, we have \begin{eqnarray} \label{eq:renormalization} \Gamma^{(n)}_{B}(p,s,g_{B},\Lambda) = Z_{\Gamma,R}(\mu,s,g_{R},\Lambda) \Gamma^{(n)}_{R}(p,s,g_{R},\mu) + {\cal O}(\Lambda^{-1}) \end{eqnarray} where $Z_{\Gamma,R}$ is the renormalization constant in scheme $R$, $\Gamma^{(n)}_{R}$ is the renormalized Green function and $g_{R}(\mu)$ is the renormalized coupling. We omit the dependence on the gauge parameter for simplicity of notation since we will specialize to the Landau gauge. The explicit dependence on $\mu$ drops out of the renormalized Green function $\Gamma^{(n)}_{R}$ at the renormalization point $p=\mu$. It follows that \begin{align} \label{eq:evolution} \begin{split} \lim_{\Lambda\rightarrow\infty} \frac{d\ln\left(\Gamma^{(n)}_{B}(\mu,s,g_{B},\Lambda)\right)} {d\ln \mu^{2}} &= \lim_{\Lambda\rightarrow\infty} \frac{d\ln\left(Z_{\Gamma,R}(\mu,s,g_{R},\Lambda)\right)} {d\ln \mu^{2}} + \frac{d\ln\left(\Gamma^{(n)}_{R}(s,g_{R})\right)} {d\ln \mu^{2}} \\ &\equiv \gamma_{\Gamma,R}(g_{R}) + \frac{d g_{R}}{d \ln\mu^{2}}\frac{\partial\ln\Gamma^{(n)}_{R}}{\partial g_{R}} \end{split} \end{align} The arbitrariness in the choice of the renormalization scheme $R$ has prompted attempts at determining the ``best'' schemes for describing the $q^{2}$-evolution of bare Green functions on the lattice. Clearly it is always possible to find a change of coupling which best approximates a given set of data at a given order of perturbation theory, within some prescribed criteria. Rather than pursuing this route, we will follow the standard wisdom which consists in choosing renormalization conditions appropriate to the continuum quantity under scrutiny. Momentum subtraction schemes have long been used to define renormalization conditions suited to the description of the renormalization dependence of ``physical'' quantities. They are defined by setting some of the 2- and 3-point functions to their tree-level values. In the $\mom$ schemes, for these Green functions, Eq.\,(\ref{eq:evolution}) simplifies to \begin{align} \label{eq:mom} \lim_{\Lambda\rightarrow\infty} \frac{d\ln\left(\Gamma^{(n)}_{B}(\mu,s,g_{B},\Lambda)\right)} {d\ln \mu^{2}} &= \frac{d\ln(Z_{\Gamma,MOM})}{d \ln \mu^{2}} = \gamma_{\Gamma,MOM}(g_{MOM}) \end{align} Infinitely many MOM schemes can be defined which differ by the subtraction point of the vertices.
We have shown in \cite{LPTX99} that the $\momg$ scheme, defined by subtracting the transversal part of the three-gluon vertex at the asymmetric point where one external momentum vanishes, appears to provide a much better estimate of the asymptotic behavior of the gluon propagator in Landau gauge than the $\ms$ scheme. For the study of the asymptotic behavior of the ghost propagator in Landau gauge, it therefore seems natural to use a $\momc$ scheme defined by subtracting the ghost-gluon vertex at the asymmetric point where the momentum of the external gluon vanishes. Comparison of the two $\mom$ schemes should provide us with an estimate of the systematic error entailed by the truncation of the perturbative series. \begin{figure}[h] \centering \psfig{figure=3GluonVertex.eps, width=4cm, height=4cm} \hskip 2cm \psfig{figure=GluonGhostVertex.eps, width=4cm, height=4cm} \caption{$\momg$ scheme (left) and $\momc$ scheme (right).} \label{fig:mom} \end{figure} The perturbative calculation of the gluon, ghost and quark self-energies and of all 3-vertices appearing in the QCD Lagrangian has been done at three-loop order in the $\ms$ scheme and in a general covariant gauge at the asymmetric point with one vanishing momentum \cite{CHE00}. These three-loop results allow one to relate the coupling constants of any $\mom$-like scheme to the $\ms$ scheme at three-loop order. For the $\momg$ and $\momc$ schemes defined above these relations read respectively, in Landau gauge and in the quenched approximation ($n_{f}=0$), with $\displaystyle{h = \frac{g^{2}}{16\pi^{2}}}$: \begin{align} \label{eq:momg} \begin{split} h_{\momg} &= h_{\ms} + \frac{70}{3}\,h^{2}_{\ms} + \left(\frac{516217}{576}-\frac{153}{4}\zeta_{3}\right)h^{3}_{\ms} \\ &+ \left(\frac{304676635}{6912} - \frac{299961}{64}\zeta_{3} - \frac{81825}{64}\zeta_{5} \right)h^{4}_{\ms} \end{split} \\ \label{eq:momc} \begin{split} h_{\momc} &= h_{\ms} + \frac{223}{12}\,h^{2}_{\ms} + \left(\frac{918819}{1296}-\frac{351}{8}\zeta_{3}\right)h^{3}_{\ms} \\ &+ \left(\frac{29551181}{864} - \frac{137199}{32}\zeta_{3} - \frac{74295}{64}\zeta_{5} \right)h^{4}_{\ms} \end{split} \end{align} The very large coefficients of these perturbative expansions explain the difficulty the $\ms$ scheme has in approaching asymptotic scaling below 10 GeV. The recent calculation \cite{CHE05} of the anomalous dimensions of the gluon and ghost fields in the $\ms$ scheme at four-loop order, together with the knowledge of the $\beta$-function \cite{RIT97}, makes it possible to perform the analysis of the lattice data for the gluon and ghost propagators up to four-loop order also in the $\momg$ and $\momc$ schemes. The numerical coefficients of the $\beta$-function defined as \begin{align} \label{eq:beta_f} \beta(h) = \frac{dh}{d\ln \mu^{2}} = - \sum_{i=0}^{n} \beta_{i}h^{i+2} + {\cal O}(h^{n+3}) \end{align} are: \begin{align} \label{eq:coeffs} \beta_{2}^{\momg} = 2412.16,\ \ \beta_{2}^{\momc} = 2952.73, \ \ \beta_{3}^{\momg} = 84353.8,\ \ \beta_{3}^{\momc} = 101484.
\end{align} For completeness we also give the logarithmic derivatives of the renormalisation constants of the gluon and ghost fields in the MOM schemes, expanded in the renormalized coupling of the $\ms$ scheme up to four-loop order: \begin{align} \label{eq:Z3} \begin{split} \frac{d\ln(Z_{3,MOM})}{d \ln \mu^{2}} &= \frac{13}{2}\,h_{\ms} + \frac{3727}{24}\,h^{2}_{\ms} + \left(\frac{2127823}{288} - \frac{9747}{16}\zeta_{3}\right) h^{3}_{\ms} \\ &+ \left(\frac{3011547563}{6912} - \frac{18987543}{256}\zeta_{3} - \frac{1431945}{64}\zeta_{5}\right) h^{4}_{\ms} \end{split} \\ \label{eq:Z3c} \begin{split} \frac{d\ln(\widetilde{Z}_{3,MOM})}{d \ln \mu^{2}} &= \frac{9}{4}\,h_{\ms} + \frac{813}{16}\,h^{2}_{\ms} + \left(\frac{157303}{64} - \frac{5697}{32}\zeta_{3}\right) h^{3}_{\ms} \\ &+ \left(\frac{219384137}{1536} - \frac{9207729}{512}\zeta_{3} - \frac{221535}{32}\zeta_{5}\right) h^{4}_{\ms} \end{split} \end{align} \section{Lattice calculation} \label{sec:lattice} \subsection{Faddeev-Popov operator on the lattice} \label{sec:FP} The ghost propagator is defined on the lattice as \begin{align} \label{eq:Ghost} G(x-y)\delta^{ab} \equiv \left<\left(M^{-1}\right)^{ab}_{xy}\right> \end{align} where the action of the Faddeev-Popov operator $M$ on an arbitrary element $\eta$ of the Lie algebra ${\cal SU}$(N) of the gauge group SU(N), in a Landau gauge fixed configuration, is given by \cite{ZWA94}: \begin{align} \label{eq:FP1} \nonumber (M\eta)^{a}(x) &= \frac{1}{N}\sum_{\mu}\biggl\{ G_{\mu}^{ab}(x) \left(\eta^{b}(x+\hat{\mu})-\eta^{b}(x)\right) - (x \leftrightarrow x-\hat{\mu}) \\ &\qquad\qquad + \frac{1}{2}f^{abc}\bigl( \eta^{b}(x+\hat{\mu})A_{\mu}^{c}(x) - \eta^{b}(x-\hat{\mu})A_{\mu}^{c}(x-\hat{\mu}) \bigr) \biggr\} \end{align} and where, with antihermitian generators $T^{a}$, \begin{align} \label{eq:defa} G_{\mu}^{ab}(x) &= \frac{1}{2}\tr\left(\left\{T^{a},T^{b}\right\} \left(U_{\mu}(x)+U_{\mu}^{\dagger}(x)\right)\right) \\ \label{eq:defb} A_{\mu}^{c}(x) &= -\tr\left(T^{c}\left(U_{\mu}(x)-U_{\mu}^{\dagger}(x)\right)\right) \end{align} Most lattice implementations of the Faddeev-Popov operator have closely followed the component-wise Eqs.\,(\ref{eq:FP1}-\ref{eq:defb}). But the derivation in \cite{ZWA94} shows that the Faddeev-Popov operator can also be written as a lattice divergence: \begin{align} \label{eq:FP2a} M(U) = -\frac{1}{N} \nabla\cdot \widetilde{D}(U) \end{align} where the operator $\widetilde{D}$ reads \begin{align} \label{eq:FP2b} \widetilde{D}_{\mu}(U)\eta(x) &= \frac{1}{2}\left(U_{\mu}(x)\eta(x+\hat{\mu}) -\eta(x)U_{\mu}(x) + \eta(x+\hat{\mu})U^{\dagger}_{\mu}(x) - U^{\dagger}_{\mu}(x)\eta(x) \right) \end{align} Using conversion routines between the Lie algebra and the Lie group, Eqs.~(\ref{eq:FP2a}-\ref{eq:FP2b}) allow for a very efficient lattice implementation, sketched in Table~\ref{tab:algo}, which is based on the fast routines coding the group multiplication law. \begin{table}[h] \center{\rule{10cm}{1pt}} \begin{align*} &!\ \eta_{in},\ \eta_{out}\ \text{are the ghost fields.} \\ &!\ U\ \text{is the gauge configuration.} \\ &\mathrm{type~(SUN)}\quad U(*), dU, W, W_{+}, W_{-} \\ &\mathrm{type~({\cal SU}N)}\quad \eta_{in}(*), \eta_{out}(*) \\ &\mathrm{for~all~x:}\\ &\quad dU = 0.
\\ &\quad W = \eta_{in}(x) \\ &\quad \mathrm{do}\ \mu = 1,\ 4 \\ &\quad\qquad W_{+} = \eta_{in}(x+\hat{\mu}) \\ &\quad\qquad W_{-} = \eta_{in}(x-\hat{\mu}) \\ &\quad\qquad dU = dU + U_{\mu}(x-\hat{\mu})\times W + W\times U_{\mu}(x) \\ &\quad\qquad\quad\qquad\quad - U_{\mu}(x)\times W_{+} - W_{-}\times U_{\mu}(x-\hat{\mu}) \\ &\quad \mathrm{enddo} \\ &\quad \eta_{out}(x) = dU - dU^{\dagger} - \frac{1}{N}\tr(dU- dU^{\dagger}) \end{align*} \center{\rule{10cm}{1pt}} \caption{Pseudo code of our implementation of the Faddeev-Popov operator.} \label{tab:algo} \end{table} \subsection{Inversion of the Faddeev-Popov operator} \label{sec:inversion} Constant fields are zero modes of the Faddeev-Popov operator. This operator can therefore be inverted only in the vector subspace $K^{\perp}$ orthogonal to its kernel. If the Faddeev-Popov operator has no zero modes other than the constant fields, then the non-zero Fourier modes form a basis of $K^{\perp}$: \begin{align} \label{eq:ortho} \eta(x) = \sum_{p\neq 0} c_{p}e^{ip\cdot x}\,,\quad \forall \eta\in K^{\perp} \end{align} The standard procedure has been to invert the Faddeev-Popov operator with one non-zero Fourier mode as a source \begin{align} S^{a}_{p}(x) = \delta^{ab}e^{ip\cdot x} \end{align} and to take the scalar product of $M^{-1}S^{a}_{p}$ with the source: \begin{align} \left(S^{a}_{p}\left|\right.M^{-1}S^{a}_{p}\right) &= \sum_{x,y}\left(M^{-1}\right)^{aa}_{xy}e^{-ip\cdot(x-y)} \\ \label{eq:ftp} &= V\,\widehat{G}(p) \end{align} after averaging over the gauge field configurations. This method requires one matrix inversion for each value of the ghost propagator in momentum space. It is suitable only when one is interested in a few values of the ghost propagator. However, the study of the ultraviolet behavior of the ghost propagator in the continuum requires its calculation at many lattice momenta in order to control the lattice-spacing artifacts, as we shall see in the next section. This can be done very economically by noting that \begin{align} \label{eq:delta} \delta(x,y) = \frac{1}{V} + \frac{1}{V}\sum_{p\ne 0} e^{-ip\cdot(x-y)} \end{align} and choosing as source: \begin{align} \label{eq:zero} S^{a}_{0}(x) = \delta^{ab}\left(\delta(x,0) - \frac{1}{V}\right) \end{align} The Fourier transform of $M^{-1}S^{a}_{0}$, averaged over the gauge configurations, yields: \begin{align} \nonumber \sum_{x} e^{-ip\cdot x}\left<M^{-1}S^{a}_{0}\right> &= \sum_{x} e^{-ip\cdot x}\left<\left(M^{-1}\right)^{aa}_{x0}\right> - \frac{1}{V}\sum_{x,y}e^{-ip\cdot x} \left<\left(M^{-1}\right)^{aa}_{xy}\right> \\ \nonumber &= \sum_{x} e^{-ip\cdot x}G(x) - \frac{1}{V}\sum_{x,y}e^{-ip\cdot x}G(x-y) \\ &= \widehat{G}(p) - \delta(p)\sum_{x}G(x) \label{eq:ft0} \end{align} as a consequence of the translation invariance of the ghost propagator. Therefore, with this choice of source, only one matrix inversion followed by one Fourier transformation of the solution is required to get the full ghost propagator on the lattice. There is of course a price to pay, as can be read off Eq.\,(\ref{eq:ft0}), which lacks the factor $V$ present in Eq.\,(\ref{eq:ftp}). The statistical accuracy with the source $S^{a}_{p}$ is better, especially at high momentum $p$. However, the statistical accuracy with the source $S^{a}_{0}$ turns out to be sufficient for our purpose. There is one final point we want to make, which, to the best of our knowledge, has never been raised.
It is mandatory to check, whatever the choice of sources, that rounding errors during the inversion do not destroy the condition that the solution belongs to $K^{\perp}$: \begin{align} \label{eq:kernel} \sum_{x}\left(M^{-1}S\right)(x) = 0 \end{align} Indeed, if the zero-mode component of the solution grows beyond some threshold during the inversion of the Faddeev-Popov operator on some gauge configuration, then that component starts to increase exponentially and a sizeable bias is produced in the other components as well. We have observed this phenomenon occasionally, on about one gauge configuration in every few hundred, when using the implementation of the lattice Faddeev-Popov operator based on Eqs.\,(\ref{eq:FP1}-\ref{eq:defb}). But the systematic bias which is thereby induced on the averages over gauge field configurations can be uncomfortably close to that ascribed to Gribov copies. Another virtue of the algorithm described in Table~\ref{tab:algo} is its numerical stability, which is improved by several orders of magnitude. We have never observed sizeable deviations from Eq.\,(\ref{eq:kernel}) with this algorithm. \subsection{The simulation} \label{sec:simulation} We ran simulations of the $SU(3)$ lattice gauge theory with the Wilson action in the quenched approximation on several hypercubic lattices, whose parameters are summarized in Table~\ref{tab:simulation}. All lattices have roughly the same physical volume except the $24^{4}$ lattice at $\beta=6.0$, which has been included to check for finite-volume effects. \begin{table} \centering \begin{tabular}[h]{cccc} \hline $\beta$ & $V$ & $a^{-1}$ (GeV) &\# Configurations \\ \hline $6.0$ & $16^{4}$ & $1.96$ & $1000$ \\ $6.0$ & $24^{4}$ & $1.96$ & $500$ \\ $6.2$ & $24^{4}$ & $2.75$ & $500$ \\ $6.4$ & $32^{4}$ & $3.66$ & $250$ \\ \hline \end{tabular} \caption{Run parameters. The lattice spacings are taken from Table~3 in \cite{BAL93} with a physical unit normalized by $\sqrt{\sigma}=445$ MeV.} \label{tab:simulation} \end{table} The SU(3) gauge configurations were generated using a hybrid algorithm of Cabibbo-Marinari heatbath and Creutz overrelaxation steps. 10000 lattice updates were discarded for thermalization and the configurations were analyzed every 100/200/500 sweeps on the $16^{4}/24^{4}/32^{4}$ lattices. Landau gauge fixing was carried out by minimizing the functional \begin{align} \label{eq:landau} F_{U}[g] =\text{Re}\sum_{x}\sum_{\mu} \left(1-\frac{1}{N}g(x)U_{\mu}(x)g^{\dagger}(x+\hat{\mu})\right) \end{align} by use of a standard overrelaxation algorithm driving the gauge configuration to a local minimum of $F_{U}[g]$. We did not try to reach the fundamental modular region $\Lambda$, defined as the set of absolute minima of $F_{U}[g]$ on all gauge orbits. Indeed there have been numerous studies, in SU(2) \cite{CUC97,BAK04} and in SU(3) \cite{STE05a,STE05b}, of the effect of Gribov copies on the ghost propagator. The consensus is that noticeable systematic errors, beyond statistical errors, are only found for the smallest $p^{2}$, much smaller than the squared momenta that we used to study the asymptotic behavior of the ghost propagator. The ghost propagator $G(p)$ is then extracted from Eq.\,(\ref{eq:ft0}) for all $p\neq 0$. The required matrix inversion, with a conjugate-gradient algorithm without any preconditioning, and the Fourier transform consume on average less than half the computing time of the Landau gauge fixing.
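To make this step concrete, here is a schematic transcription of the one-inversion method (Python with SciPy; the color structure is suppressed, and \texttt{apply\_M}, the Faddeev-Popov matvec for a fixed gauge-fixed configuration, is an assumed input rather than our actual production code):
\begin{verbatim}
# Schematic of Eqs. (zero)-(ft0): one CG inversion on the point-like source
# projected onto K_perp, then a single FFT for all momenta. apply_M is an
# assumed callable acting on a flattened real field (one color component).
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def ghost_propagator(apply_M, L):
    V = L**4
    S0 = -np.ones(V) / V
    S0[0] += 1.0              # source of Eq. (zero), orthogonal to the kernel
    M = LinearOperator((V, V), matvec=apply_M, dtype=np.float64)
    x, info = cg(M, S0)       # M is symmetric; tighten tolerances as needed
    assert info == 0
    # sanity check of Eq. (kernel): the zero-mode component must stay tiny
    assert abs(x.sum()) < 1e-8 * np.abs(x).sum()
    return np.fft.fftn(x.reshape(L, L, L, L))   # G(p) for all p != 0 at once
\end{verbatim}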
\section{Hypercubic artifacts} \label{sec:artifact} The ghost propagator $\widehat{G}(p)$ is a scalar invariant on the lattice, which means that it is invariant along the orbit $O(p)$ generated by the action of the isometry group $H(4)$ of hypercubic lattices on the discrete momentum $p\equiv\frac{2\pi}{La}\times(n_{1},n_{2},n_{3},n_{4})$, where the $n_{\mu}$'s are integers, $L$ is the lattice size and $a$ the lattice spacing. The general structure of polynomials invariant under a finite group is known from invariant theory. Indeed it can be shown that any polynomial function of $p$ which is invariant under the action of $H(4)$ is a polynomial function of the 4 invariants $p^{[n]} = a^{n}\sum_{\mu}p_{\mu}^{n}, n = 2, 4, 6, 8$, which index the set of orbits. Our analysis program uses these 4 invariants to average the ghost propagator over the orbits of $H(4)$ in order to increase the statistical accuracy: \begin{align} a^{2}G_{L}(p^{[2]},p^{[4]},p^{[6]},p^{[8]}) = \frac{1}{\|O(p)\|} \sum_{p\in O(p)} \widehat{G}(p) \end{align} where $\|O(p)\|$ is the cardinal number of the orbit $O(p)$. By the same token, one should always take the following \bfit{real} source \begin{align} \label{eq:source} \overline{S}^{a}_{p}(x) = \delta^{ab}\sum_{p\in O(p)} \cos(p\cdot x) \end{align} rather than a single complex Fourier mode for studies of the ghost propagator in the infrared region. Indeed, after averaging over the gauge configurations and use of the translational invariance, one gets \begin{align} \nonumber \left<\left( \overline{S}^{a}_{p}\left|\right.M^{-1}\overline{S}^{a}_{p} \right)\right> &= \sum_{p,p'\in O(p)}\sum_{x,y}\left<\left(M^{-1}\right)^{aa}_{xy}\right> e^{-ip'\cdot x+ip\cdot y} \\ \label{eq:fto} &= V\|O(p)\|\,a^{2}G_{L}(p^{[2]},p^{[4]},p^{[6]},p^{[8]}) \end{align} By analogy with the free lattice propagator \begin{align} \label{eq:free} G_{0}(p) = \frac{1}{\sum_{\mu}\widehat{p}_{\mu}^2} = \frac{a^{2}}{p^{[2]}}\left(1+\frac{1}{12}\frac{p^{[4]}}{p^{[2]}} + \cdots\right)\,, \quad\mathrm{where}\quad\widehat{p}_{\mu} = \frac{2}{a}\sin\left(\frac{ap_{\mu}}{2}\right) \end{align} it is natural to make the hypothesis that the lattice ghost propagator is a smooth function of the discrete invariants near the continuum limit, when $a\,p_{\mu}\ll 1\,,\forall\mu$, \begin{align} \label{eq:invariants} G_{L}(p^{[2]},p^{[4]},p^{[6]},p^{[8]}) \approx G_{L}(p^{[2]},0,0,0) + p^{[4]}\frac{\partial G_{L}} {\partial p^{[4]}}(p^{[2]},0,0,0) + \cdots \end{align} and $G_{L}(p^{[2]},0,0,0)$ is nothing but the propagator of the continuum in a finite volume, up to lattice artifacts which do not break $O(4)$ invariance. When several orbits exist with the same $p^{2}$, the simplest method to reduce the hypercubic artifacts is to extrapolate the lattice data towards $G_{L}(p^{[2]},0,0,0)$ by making a linear regression at fixed $p^{2}$ with respect to the invariant $p^{[4]}$, since the other invariants are of higher order in the lattice spacing. The range of validity of this linear approximation can be checked a posteriori from the smoothness of the extrapolated data with respect to $p^{2}$. Choosing the variables $\widehat{p}_{\mu}$ appropriate to the parametrization of a lattice propagator with periodic boundary conditions provides an independent check of the extrapolation, as discussed after the short sketch below.
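As an aside, the orbit decomposition used for this averaging is straightforward to implement; a minimal sketch (grouping momenta by the rounded tuple of invariants is the only assumption made here):
\begin{verbatim}
# Sketch: grouping the discrete momenta of an L^4 lattice into H(4) orbits,
# indexed by the four invariants p^[2], p^[4], p^[6], p^[8].
import itertools, collections
import numpy as np

def h4_orbits(L, n_max):
    orbits = collections.defaultdict(list)
    for n in itertools.product(range(-n_max, n_max + 1), repeat=4):
        ap = 2.0 * np.pi * np.array(n, dtype=float) / L   # a*p, lattice units
        if not ap.any():
            continue                                      # skip p = 0
        key = tuple(np.round([np.sum(ap**m) for m in (2, 4, 6, 8)], 10))
        orbits[key].append(n)
    return orbits   # average the propagator over each orbits[key]
\end{verbatim}
We now return to the alternative parametrization in terms of the variables $\widehat{p}_{\mu}$.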
Indeed we can write as well \begin{align} G_{L}(p^{[2]},p^{[4]},p^{[6]},p^{[8]}) \equiv \widehat{G}_{L}(\widehat{p}^{[2]},\widehat{p}^{[4]},\widehat{p}^{[6]}, \widehat{p}^{[8]}) \end{align} with the new invariants, again hierarchically suppressed in powers of the lattice spacing, \begin{align} \widehat{p}^{[n]} = a^{n}\sum_{\mu}\widehat{p}_{\mu}^{n} \end{align} $G_{L}$ and $\widehat{G}_{L}$ are two different parametrizations of the same lattice data, but near the continuum limit one must also have \begin{align} \label{eq:sinus} \widehat{G}_{L}(\widehat{p}^{[2]},\widehat{p}^{[4]},\widehat{p}^{[6]}, \widehat{p}^{[8]}) \approx \widehat{G}_{L}(\widehat{p}^{[2]},0,0,0) + \widehat{p}^{[4]} \frac{\partial \widehat{G}_{L}} {\partial \widehat{p}^{[4]}}(\widehat{p}^{[2]},0,0,0) + \cdots \end{align} where $G_{L}(p^{[2]},0,0,0)$ and $\widehat{G}_{L}(\widehat{p}^{[2]},0,0,0)$ are the \bfit{same} function, the propagator of the continuum, of a \bfit{different} variable (again up to lattice artifacts which do not break $O(4)$ invariance). If one wants to include in the data analysis the points with a single orbit at fixed $p^{2}$, one must interpolate the slopes extracted from Eqs.~(\ref{eq:invariants}) or (\ref{eq:sinus}). This interpolation can be done either numerically or by assuming a functional dependence of the slope with respect to $p^{2}$ based on dimensional arguments. The simplest ansatz is to assume that the slope has the same leading behavior as for a free lattice propagator: \begin{align} \label{eq:slope} \frac{\partial G_{L}} {\partial p^{[4]}}(p^{[2]},0,0,0) &= \frac{1}{\left(p^{[2]}\right)^{2}}\left( c_{1}+ c_{2}p^{[2]}\right) \end{align} The inclusion of $O(4)$-invariant lattice-spacing corrections is required to get fits with a reasonable $\chi^{2}$. The quality of such two-parameter fits to the slopes, and the extension of the fitting window in $p^{2}$, supply still another independent check of the validity of the extrapolations. We have used Eqs.~(\ref{eq:invariants}) and (\ref{eq:slope}) to extrapolate our lattice data towards the continuum and determined the range of validity in $p^{2}$ of the extrapolations from the consistency of the different checks within our statistical errors. The errors on the extrapolated points have been computed with the jackknife method. Tables~\ref{tab:cutg} and \ref{tab:cutc} summarize the cuts that have been applied to the data for the estimation of the systematic errors in the analysis of the next section. We have repeated the analysis of the gluon propagator \cite{LPTX00} to study the sensitivity of the results with respect to the window in $p^{2}$, which has been enlarged considerably in our new data. The cuts for the lattice ghost propagator are stronger than for the lattice gluon propagator because the statistical errors of the former are two to three times larger, which makes the continuum extrapolations less controllable. \begin{table}[h] \centering \begin{tabular}{c|c|c|c|c|c} \hline $\beta$ & $V$ & $N_{points}$ & $a\,p_{min}$ & $a\,p_{max}$ & $\chi^{2}$ \\ \hline $6.0$ & $16^{4}$ & $> 10$ & $\leq 1.30$ & $\leq 1.82$ & $\leq 1.4$ \\ $6.2$ & $24^{4}$ & $> 12$ & $\leq 1.30$ & $\leq 1.82$ & $\leq 1.1$ \\ $6.4$ & $32^{4}$ & $> 20$ & $\leq 1.40$ & $\leq 1.82$ & $\leq 1.3$ \\ \hline \end{tabular} \caption{Cuts on the lattice data for the gluon propagator.
$[a\,p_{min},a\,p_{max}]$ is the momentum window of a fit in lattice units and $N_{points}$ is the number of data points in that window.} \label{tab:cutg} \end{table} \begin{table}[h] \centering \begin{tabular}{c|c|c|c|c|c} \hline $\beta$ & $V$ & $N_{points}$ & $a\,p_{min}$ & $a\,p_{max}$ & $\chi^{2}$ \\ \hline $6.0$ & $16^{4}$ & $> 10$ & $\leq 1.30$ & $\leq 1.57$ & $\leq 1.0$ \\ $6.2$ & $24^{4}$ & $> 20$ & $\leq 1.30$ & $\leq 1.57$ & $\leq 1.0$ \\ $6.4$ & $32^{4}$ & $> 20$ & $\leq 1.00$ & $\leq 1.57$ & $\leq 1.0$ \\ \hline \end{tabular} \caption{Cuts on the lattice data for the ghost propagator. Columns have the same meaning as in Table\,\ref{tab:cutg}.} \label{tab:cutc} \end{table} The number of distinct orbits at each $p^{2}$ increases with the lattice size and, eventually, a linear extrapolation limited to the single invariant $p^{[4]}$ breaks down. However there is a systematic way to include higher-order invariants and to extend the range of validity of the extrapolations. A much more detailed exposition of the control of systematic errors is in preparation, since our method has been largely ignored in the literature, where very empirical recipes are still in use. \section{Data analysis} \label{sec:analysis} The evolution equation of the renormalization constants of the gluon or ghost fields in a MOM scheme, with respect to the coupling constant $h$ in an arbitrary scheme $R$ (the index $R$ is omitted but understood everywhere), can be written generically up to four-loop order: \begin{align} \label{eq:dZmom} \frac{d\ln(Z_{\Gamma,MOM})}{d \ln \mu^{2}} = \overline{\gamma}_{0} h + \overline{\gamma}_{1} h^{2} + \overline{\gamma}_{2} h^{3} + \overline{\gamma}_{3} h^{4} \end{align} and the perturbative integration of Eq.~(\ref{eq:dZmom}) yields, to the same order, \begin{align} \label{eq:Zmom} \begin{split} \ln\left(\frac{Z_{\Gamma,MOM}}{Z_{0}}\right) & = \log(h)\,\frac{\overline{\gamma}_{0}}{\beta_{0}} + h\,\frac{\left(\beta_{0}\,\overline{\gamma}_{1}-\beta_{1}\,\overline{\gamma}_{0}\right)}{\beta_{0}^{2}} \\ &+ h^{2}\,\frac{\left(\beta_{0}^{2}\,\overline{\gamma}_{2}-\beta_{0}\,\beta_{1}\,\overline{\gamma}_{1}-(\beta_{0}\,\beta_{2}-\beta_{1}^{2})\,\overline{\gamma}_{0}\right)}{2\beta_{0}^{3}} \\ &+ h^{3}\,\bigl(\beta_{0}^{3}\,\overline{\gamma}_{3}-\beta_{0}^{2}\,\beta_{1}\,\overline{\gamma}_{2}+(\beta_{0}\,\beta_{1}^{2}-\beta_{0}^{2}\,\beta_{2})\,\overline{\gamma}_{1} \\ &\qquad +(-\beta_{0}^{2}\,\beta_{3}+2\,\beta_{0}\,\beta_{1}\,\beta_{2}-\beta_{1}^{3})\,\overline{\gamma}_{0}\bigr)\frac{1}{3\beta_{0}^{4}} \end{split} \end{align} with the standard four-loop formula for the running coupling \begin{align} \label{eq:alpha} \begin{split} h(t) &= \frac{1}{\beta_{0}t} \left(1 - \frac{\beta_{1}}{\beta_{0}^{2}}\frac{\log(t)}{t} + \frac{\beta_{1}^{2}}{\beta_{0}^{4}} \frac{1}{t^{2}}\left(\left(\log(t)-\frac{1}{2}\right)^{2} + \frac{\beta_{2}\beta_{0}}{\beta_{1}^{2}}-\frac{5}{4}\right)\right) \\ &+ \frac{1}{(\beta_{0}t)^{4}} \left(\frac{\beta_{3}}{2\beta_{0}}+ \frac{1}{2}\left(\frac{\beta_{1}}{\beta_{0}}\right)^{3} \left(-2\log^{3}(t)+5\log^{2}(t)+ \left(4-6\frac{\beta_{2}\beta_{0}}{\beta_{1}^{2}}\right)\log(t)-1\right)\right) \end{split} \end{align} and $\displaystyle{t=\log\left(\frac{\mu^{2}}{\Lambda^{2}}\right)}$. We now consider in turn the three renormalization schemes $\ms$, $\momg$ and $\momc$ and fit the two parameters of Eqs.\,(\ref{eq:Zmom}) and (\ref{eq:alpha}) to our extrapolated lattice data. Figure\,\ref{fig:Zn} illustrates the typical quality of such fits.
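In practice the fit model is a direct transcription of Eqs.\,(\ref{eq:Zmom}) and (\ref{eq:alpha}); the sketch below shows one possible implementation (the coefficients $\beta_{i}$ and $\overline{\gamma}_{i}$ must be supplied for the chosen scheme, and the two-parameter fit itself can be delegated to any standard least-squares routine such as \texttt{scipy.optimize.curve\_fit}):
\begin{verbatim}
# Sketch of the fit model: the four-loop running coupling of Eq. (alpha)
# and ln Z of Eq. (Zmom). b = (beta_0..beta_3), g = (gamma_0..gamma_3)
# are the scheme-dependent coefficients to be supplied by the user.
import numpy as np

def h_running(mu2, Lam2, b):
    b0, b1, b2, b3 = b
    t = np.log(mu2 / Lam2)
    lt = np.log(t)
    h = (1.0 - b1 / b0**2 * lt / t
         + (b1 / b0**2)**2 / t**2 * ((lt - 0.5)**2 + b2 * b0 / b1**2 - 1.25)
        ) / (b0 * t)
    h += (b3 / (2.0 * b0) + 0.5 * (b1 / b0)**3
          * (-2.0 * lt**3 + 5.0 * lt**2
             + (4.0 - 6.0 * b2 * b0 / b1**2) * lt - 1.0)) / (b0 * t)**4
    return h

def lnZ(mu2, Lam2, lnZ0, b, g):
    b0, b1, b2, b3 = b
    g0, g1, g2, g3 = g
    h = h_running(mu2, Lam2, b)
    return (lnZ0 + g0 / b0 * np.log(h)
            + (b0 * g1 - b1 * g0) / b0**2 * h
            + (b0**2 * g2 - b0 * b1 * g1 - (b0 * b2 - b1**2) * g0)
              / (2.0 * b0**3) * h**2
            + (b0**3 * g3 - b0**2 * b1 * g2 + (b0 * b1**2 - b0**2 * b2) * g1
               + (-b0**2 * b3 + 2.0 * b0 * b1 * b2 - b1**3) * g0)
              / (3.0 * b0**4) * h**3)
\end{verbatim}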
\begin{figure}[h] \centering \psfig{figure=Zn.eps, width=17cm, height=9cm} \caption{Extrapolated lattice data at $\beta=6.4$ for $Z_{3}$ (left) and $\widetilde{Z}_{3}$ (right). The solid line is the fit at four-loop order in the $\ms$ scheme. The vertical dotted lines delimit the window of each fit.} \label{fig:Zn} \end{figure} \subsection{$\ms$ scheme} \label{sec:ms} The analysis in the $\ms$ scheme is summarized in Table~\ref{tab:ms}. The statistical error is at the level of 1\% for the gluon propagator and 2-3\% for the ghost propagator, whereas the systematic error due to the extrapolations is around 3-5\% and 5-10\% respectively. The values of $\Lambda_{\ms}$ extracted from the gluon and the ghost propagators are consistent within these errors and within each order of perturbation theory. \begin{table}[h] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline $\beta$ & $L$ & $a$p$_{min}$ & $a\Lambda^{(2)}_{\ms,gluon}$ & $\chi^{2}$ & $a\Lambda^{(3)}_{\ms,gluon}$ & $\chi^{2}$ & $a\Lambda^{(4)}_{\ms,gluon}$ & $\chi^{2}$ \\ \hline 6.0 & 16 & 1.111 & 0.336(3)$^{+8~}_{-4~}$ & 1.3 & 0.265(3)$^{+6~}_{-2~}$ & 1.0 & 0.225(2)$^{+4~}_{-2~}$ & 1.1 \\ \hline & 24 & 1.111 & 0.332(3)$^{+8~}_{-12}$ & 0.6 & 0.262(3)$^{+6~}_{-8~}$ & 0.5 & 0.222(2)$^{+5~}_{-8~}$ & 0.6 \\ \hline 6.2 & 24 & 0.907 & 0.240(2)$^{+6~}_{-9~}$ & 0.8 & 0.185(2)$^{+6~}_{-10}$ & 0.8 & 0.158(2)$^{+4~}_{-7~}$ & 0.8 \\ \hline 6.4 & 32 & 0.760 & 0.171(2)$^{+10}_{-11}$ & 1.4 & 0.130(2)$^{+12}_{-11}$ & 1.4 & 0.112(1)$^{+9~}_{-8~}$ & 1.4 \\ \hline \hline $\beta$ & $L$ & $a$p$_{min}$ & $a\Lambda^{(2)}_{\ms,ghost}$ & $\chi^{2}$ & $a\Lambda^{(3)}_{\ms,ghost}$ & $\chi^{2}$ & $a\Lambda^{(4)}_{\ms,ghost}$ & $\chi^{2}$ \\ \hline 6.0 & 16 & 1.039 & 0.354(7)$^{+23}_{-13}$ & 0.5 & 0.281(6)$^{+17}_{-8~}$ & 0.5 & 0.235(5)$^{+15}_{-7~}$ & 0.5 \\ \hline & 24 & 0.785 & 0.325(6)$^{+13}_{-20}$ & 0.2 & 0.259(5)$^{+10}_{-18}$ & 0.2 & 0.217(4)$^{+8~}_{-13}$ & 0.2 \\ \hline 6.2 & 24 & 0.693 & 0.254(4)$^{+20}_{-20}$ & 0.4 & 0.200(3)$^{+10}_{-23}$ & 0.4 & 0.169(3)$^{+12}_{-13}$ & 0.4 \\ \hline 6.4 & 32 & 0.555 & 0.193(2)$^{+22}_{-14}$ & 0.8 & 0.150(2)$^{+15}_{-14}$ & 0.8 & 0.128(2)$^{+13}_{-11}$ & 0.8 \\ \hline \hline \end{tabular} \caption{Fits of $\Lambda_{\ms}$ from the gluon and ghost lattice propagators. The error in parentheses is the statistical error corresponding to a window $[a\,p_{min},a\,p_{max}]$ with the $a\,p_{min}$ quoted in the Table and the upper bound for $a\,p_{max}$ quoted in Tables~\ref{tab:cutg} and \ref{tab:cutc} respectively.} \label{tab:ms} \end{table} However, the three-loop and four-loop values, which are displayed in Table~\ref{tab:ms_values} with the physical units of Table~\ref{tab:simulation}, clearly confirm our previous result \cite{LPTX99} that we are still far from asymptoticity in that scheme.
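The conversion to physical units is elementary; as a quick cross-check against the four-loop gluon column of Table~\ref{tab:ms_values} (inputs rounded as in the tables):
\begin{verbatim}
# Check of the unit conversion a*Lambda -> MeV, using the inverse lattice
# spacings of Table (simulation) and the four-loop gluon fits of Table (ms);
# reproduces 441, 435 and 410 MeV up to the rounding of the inputs.
a_inv = {6.0: 1.96, 6.2: 2.75, 6.4: 3.66}       # GeV
aLam = {6.0: 0.225, 6.2: 0.158, 6.4: 0.112}     # a*Lambda_MSbar, four loops
for beta in sorted(aLam):
    print(beta, 1000.0 * aLam[beta] * a_inv[beta], "MeV")
\end{verbatim}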
\begin{table}[h] \centering \begin{tabular}{c||c|c||c|c} \hline $\beta$ & $\Lambda^{(3)}_{\ms,gluon}$ & $\Lambda^{(3)}_{\ms,ghost}$ & $\Lambda^{(4)}_{\ms,gluon}$ & $\Lambda^{(4)}_{\ms,ghost}$ \\ \hline 6.0 & $519(6)^{+12}_{-4~}$ & $551(12)^{+33}_{-16}$ & $441(4)^{+8~}_{-4~}$ & $461(10)^{+29}_{-14}$ \\ 6.2 & $509(6)^{+17}_{-27}$ & $550(8)^{+27}_{-63}~$ & $435(6)^{+11}_{-19}$ & $465(8)^{+33}_{-36}~$ \\ 6.4 & $476(7)^{+44}_{-40}$ & $549(7)^{+55}_{-51}~$ & $410(4)^{+33}_{-29}$ & $468(7)^{+48}_{-40}~$ \\ \hline \end{tabular} \caption{Three-loop and four-loop physical values of $\Lambda_{\ms}$ in MeV extracted from Table~\ref{tab:ms}.} \label{tab:ms_values} \end{table} \subsection{$\momg$ scheme} \label{sec:momg} Table~\ref{tab:momg}, which summarizes the analysis in the $\momg$ scheme, shows that, at the lower $\beta$'s, we were not able to describe both lattice propagators at four-loop order with reasonable cuts and $\chi^{2}$. This could be interpreted as a hint that perturbation theory has some convergence problems beyond three-loop order below 3-4 GeV. \begin{table}[h] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline $\beta$ & $L$ & $a$p$_{min}$ & $a\Lambda^{(2)}_{\momg,gluon}$ & $\chi^{2}$ & $a\Lambda^{(3)}_{\momg,gluon}$ & $\chi^{2}$ & $a\Lambda^{(4)}_{\momg,gluon}$ & $\chi^{2}$ \\ \hline 6.0 & 16 & 1.039 & 0.551(3)$^{+8~}_{-8~}$ & 1.0 & 0.477(3)$^{+5~}_{-8~}$ & 1.2 & --- & --- \\ \hline & 24 & 1.014 & 0.536(4)$^{+14}_{-19}$ & 0.9 & 0.464(3)$^{+10}_{-11}$ & 0.9 & --- & --- \\ \hline 6.2 & 24 & 0.693 & 0.396(2)$^{+19}_{-12}$ & 1.0 & 0.336(2)$^{+8~}_{-15}$ & 0.9 & --- & --- \\ \hline 6.4 & 32 & 0.555 & 0.292(1)$^{+15}_{-14}$ & 1.3 & 0.246(1)$^{+7~}_{-20}$ & 1.4 & 0.253(3)$^{+5~}_{-3~}$ & 1.6 \\ \hline \hline $\beta$ & $L$ & $a$p$_{min}$ & $a\Lambda^{(2)}_{\momg,ghost}$ & $\chi^{2}$ & $a\Lambda^{(3)}_{\momg,ghost}$ & $\chi^{2}$ & $a\Lambda^{(4)}_{\momg,ghost}$ & $\chi^{2}$ \\ \hline 6.0 & 16 & 1.039 & 0.660(40)$^{+24}_{-29}$ & 0.4 & 0.475(12)$^{+29}_{-24}$ & 0.5 & --- & --- \\ \hline & 24 & 1.014 & 0.559(22)$^{+25}_{-20}$ & 0.2 & 0.438(12)$^{+14}_{-25}$ & 0.2 & 0.408(17)$^{+20}_{-18}$ & 0.9 \\ \hline 6.2 & 24 & 0.693 & 0.455(11)$^{+9~}_{-17}$ & 0.3 & 0.342(5)$^{+27}_{-34}~$ & 0.6 & 0.348(8)$^{+23}_{-17}~$ & 1.0 \\ \hline 6.4 & 32 & 0.555 & 0.333(4)$^{+36}_{-26}~$ & 1.2 & 0.261(3)$^{+33}_{-28}~$ & 0.9 & 0.279(7)$^{+14}_{-30}~$ & 0.8 \\ \hline \hline \end{tabular} \caption{Fits of $\Lambda_{\momg}$ from the gluon and ghost lattice propagators. The notations are the same as in Table~\ref{tab:ms}.} \label{tab:momg} \end{table} If we select the three-loop result as the best perturbative estimate of $\Lambda_{\momg}$ and convert it to the $\ms$ scheme with the asymptotic one-loop formula, $\Lambda_{\ms} = 0.346\,\Lambda_{\momg}$, then we get the physical values quoted in Table~\ref{tab:momg_values}, which agree completely with previous values \cite{LPTX00}. \begin{table}[h] \centering \begin{tabular}{c||c|c} \hline $\beta$ & $\Lambda^{(3)}_{\ms,gluon}$ & $\Lambda^{(3)}_{\ms,ghost}$ \\ \hline 6.0 & 324(2)$^{+2~}_{-5~}$ & 322(8)$^{+20}_{-16}$ \\ 6.2 & 320(2)$^{+8~}_{-14}$ & 326(5)$^{+26}_{-33}$ \\ 6.4 & 312(1)$^{+9~}_{-25}$ & 331(4)$^{+42}_{-35}$ \\ \hline \end{tabular} \caption{Three-loop physical values of $\Lambda_{\ms}$ in MeV extracted from Table~\ref{tab:momg}.} \label{tab:momg_values} \end{table} \subsection{$\momc$ scheme} \label{sec:momc} The results of the analysis in the $\momc$ scheme are displayed in Table~\ref{tab:momc}.
We still find that the three-loop and four-loop values of $\Lambda_{\momc}$ are very much the same, both for the gluon propagator and for the ghost propagator. Thus the perturbative series seems again to become asymptotic at three-loop order in that scheme. \begin{table}[h] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline $\beta$ & $L$ & $a$p$_{min}$ & $a\Lambda^{(2)}_{\momc,gluon}$ & $\chi^{2}$ & $a\Lambda^{(3)}_{\momc,gluon}$ & $\chi^{2}$ & $a\Lambda^{(4)}_{\momc,gluon}$ & $\chi^{2}$ \\ \hline 6.0 & 16 & 1.178 & 0.482(6)~~~~ & 1.3 & 0.408(3)$^{+4~}_{-4~}~$ & 1.0 & --- & --- \\ \hline & 24 & 1.111 & 0.468(5)$^{+6~}_{-5~}~$ & 0.5 & 0.394(3)$^{+11}_{-6~}~$ & 0.8 & 0.411(7)$^{+3~}_{-2~}~$ & 1.1 \\ \hline 6.2 & 24 & 0.907 & 0.345(3)$^{+17}_{-10}~$ & 0.9 & 0.288(2)$^{+5~}_{-6~}~$ & 0.9 & 0.292(3)$^{+8~}_{-5~}~$ & 0.8 \\ \hline 6.4 & 32 & 0.589 & 0.255(1)$^{+12}_{-15}~$ & 1.5 & 0.205(1)$^{+11}_{-7~}~$ & 1.6 & 0.212(1)$^{+9~}_{-17}~$ & 1.6 \\ \hline \hline $\beta$ & $L$ & $a$p$_{min}$ & $a\Lambda^{(2)}_{\momc,ghost}$ & $\chi^{2}$ & $a\Lambda^{(3)}_{\momc,ghost}$ & $\chi^{2}$ & $a\Lambda^{(4)}_{\momc,ghost}$ & $\chi^{2}$ \\ \hline 6.0 & 16 & 0.962 & 0.489(6)$^{+10}_{-6~}~$ & 0.8 & 0.437(11)$^{+6~}_{-2~}$ & 0.4 & --- & --- \\ \hline & 24 & 1.047 & 0.459(6)$^{+15}_{-15}~$ & 0.5 & 0.408(8)$^{+6~}_{-5~}~$ & 0.5 & 0.398(14)$^{+13}_{-20}$ & 0.3 \\ \hline 6.2 & 24 & 0.740 & 0.367(7)$^{+21}_{-33}~$ & 0.4 & 0.308(7)$^{+9~}_{-16}~$ & 0.2 & 0.303(9)$^{+7~}_{-14}~$ & 0.2 \\ \hline 6.4 & 32 & 0.589 & 0.280(5)$^{+28}_{-23}~$ & 0.6 & 0.225(5)$^{+18}_{-13}~$ & 0.6 & 0.224(5)$^{+15}_{-16}~$ & 0.6 \\ \hline \hline \end{tabular} \caption{Fits of $\Lambda_{\momc}$ from the gluon and ghost lattice propagators. The notations are the same as in Table~\ref{tab:ms}.} \label{tab:momc} \end{table} Selecting the three-loop result as the best perturbative estimate of $\Lambda_{\momc}$ and converting it to the $\ms$ scheme with the asymptotic formula, $\Lambda_{\ms} = 0.429\,\Lambda_{\momc}$, we get the physical values quoted in Table~\ref{tab:momc_values}. \begin{table}[h] \centering \begin{tabular}{c||c|c} \hline $\beta$ & $\Lambda^{(3)}_{\ms,gluon}$ & $\Lambda^{(3)}_{\ms,ghost}$ \\ \hline 6.0 & 345(3)$^{+4~}_{-4~}$ & 369(9)$^{+3~}_{-2~}$ \\ 6.2 & 341(2)$^{+6~}_{-7~}$ & 364(8)$^{+11}_{-19}$ \\ 6.4 & 323(2)$^{+17}_{-11}$ & 354(8)$^{+28}_{-20}$ \\ \hline \end{tabular} \caption{Three-loop physical values of $\Lambda_{\ms}$ in MeV extracted from Table~\ref{tab:momc}.} \label{tab:momc_values} \end{table} \subsection{Scheme dependence} The puzzling feature of Tables~\ref{tab:ms_values}, \ref{tab:momg_values} and \ref{tab:momc_values} is the rather large dependence of the $\Lambda_{\text{QCD}}$ scale upon the loop order and the renormalisation scheme, whereas, within any scheme, the values from the ghost and gluon propagators are rather consistent at each loop order and pretty independent of the lattice spacing. Let us consider again the evolution equation of the renormalisation constants of the gluon or ghost fields in a MOM scheme, with respect to the coupling $h_{R}$ in an arbitrary scheme $R$. We have \begin{align} \label{eq:dlambda} \frac{d\ln(Z_{\Gamma,MOM})}{d \ln \mu^{2}} = \overline{\gamma}_{R}(h_{R}) = -\frac{1}{2} \frac{d\ln(Z_{\Gamma,MOM})}{d \ln \Lambda_{R}} \end{align} where $ \Lambda_{R}$ is the scale in scheme $R$.
If we truncate the perturbative expansion at order $n$ \begin{align} \label{eq:Zn} \ln\left(\frac{Z_{\Gamma,MOM}}{Z_{0}}\right) = c_{R,0}\ln(h_{R}) + \sum_{k=1}^{n-1} c_{R,k}h_{R}^{k} \end{align} the change in the effective scale $\Lambda^{(n)}_{R}$ or, equivalently, the change in the coupling $h_{R}$, induced by adding the contribution at order $n+1$ is typically \begin{align} \label{eq:relative} \frac{\Delta\Lambda^{(n)}_{R}}{\Lambda^{(n)}_{R}} \approx -\frac{c_{R,n}h_{R}^{n}}{2\overline{\gamma}_{R}(h_{R})} \end{align} Now the dependence of the effective scale $\Lambda_{R}$ upon the coupling $h_{R}$ is given up to four-loop order by \begin{align} \label{eq:beta} 2\ln\Lambda^{(4)}_{R} = \ln\mu^{2} - \frac{1}{\beta_{0}h_{R}} - \frac{\beta_{1}}{\beta_{0}^{2}}\ln(\beta_{0}h_{R}) - \frac{\beta_{0}\beta_{2}-\beta_{1}^{2}}{\beta_{0}^{3}}h_{R} - \frac{\beta_{0}^{2}\beta_{3} - 2\beta_{0}\beta_{1}\beta_{2} + \beta_{1}^{3}}{2\beta_{0}^{4}}h_{R}^{2} \end{align} and, denoting the coefficient of order $h_{R}^{n-2}$ in that equation by $-\rho_{R,n-1}$, the effective scales which describe the same coupling at orders $n$ and $n+1$ are related by \begin{align} \label{eq:order} \ln\frac{\Lambda^{(n+1)}_{R}}{\Lambda^{(n)}_{R}} \equiv -\frac{1}{2} \rho_{R,n-1}h_{R}^{n-1} \end{align} Combining Eqs.\,(\ref{eq:relative}) and (\ref{eq:order}) gives the relation between the effective scales which describe the renormalisation constants of the gluon or ghost fields in a MOM scheme at orders $n$ and $n+1$: \begin{align} \label{eq:loop} \frac{\Lambda^{(n+1)}_{R}}{\Lambda^{(n)}_{R}} = \exp\left[-\frac{1}{2} \left(\rho_{R,n-1}+\frac{c_{R,n}h_{R}}{\overline{\gamma}_{R}(h_{R})}\right)h_{R}^{n-1}\right] \end{align} Figure~\ref{fig:errors} displays the behavior of this ratio for the gluon and ghost propagators in the three schemes as a function of the momentum $p$ for $n=2$ and $n=3$. The couplings are taken from the fits at $\beta=6.4$. \begin{figure}[h] \centering \psfig{figure=errors.eps, width=15cm, height=16cm} \caption{ $\frac{\Lambda^{(n+1)}_{R}}{\Lambda^{(n)}_{R}}$ for $n=2$ (dashed lines) and $n=3$ (solid lines), for the gluon propagator in the $\ms$ scheme (a), $\momg$ scheme (c) and $\momc$ scheme (e), and for the ghost propagator in the $\ms$ scheme (b), $\momg$ scheme (d) and $\momc$ scheme (f).} \label{fig:errors} \end{figure} There is pretty good qualitative agreement with Tables~\ref{tab:ms_values}, \ref{tab:momg_values} and \ref{tab:momc_values}, which confirms the overall consistency with perturbation theory of the lattice data for the gluon and ghost propagators \bfit{within} any renormalization scheme. The scheme dependence of the $\Lambda_{\text{QCD}}$ scale can also be analyzed with Eq.\,(\ref{eq:beta}): \begin{align} \label{eq:scheme} \frac{\Lambda^{(n)}_{R_{2}}}{\Lambda^{(n)}_{R_{1}}} = \exp\left\{ \frac{1}{2\beta_{0}}\left(\frac{1}{h_{R_{1}}}-\frac{1}{h_{R_{2}}}\right) + \frac{\beta_{1}}{2\beta_{0}^{2}}\ln\frac{h_{R_{1}}}{h_{R_{2}}} + \cdots \right\} \end{align} Figure~\ref{fig:scheme} shows the behavior of the ratios $\frac{\Lambda^{(n)}_{\ms}}{\Lambda^{(n)}_{\momg}}$ and $\frac{\Lambda^{(n)}_{\momc}}{\Lambda^{(n)}_{\momg}}$, as a function of the momentum $p$, at each order of perturbation theory. The couplings are taken from the fits of the gluon propagator at $\beta=6.4$. \begin{figure}[h] \centering \psfig{figure=Z3_scheme.eps, width=16cm, height=9cm} \caption{$\frac{\Lambda^{(n)}_{\ms}}{\Lambda^{(n)}_{\momg}}$ (left) and $\frac{\Lambda^{(n)}_{\momc}}{\Lambda^{(n)}_{\momg}}$ (right) for $n=2$, $n=3$ and $n=4$.
The solid lines are the plots of Eq.\,(\ref{eq:scheme}) with the fitted couplings, whereas the dashed lines are the plots with Eq.\,(\ref{eq:expand}). Horizontal lines are the asymptotic values.} \label{fig:scheme} \end{figure} Clearly, the limiting values of these ratios are not the asymptotic values. If we replace in Eq.\,(\ref{eq:scheme}) the coupling $h_{R_{2}}$ by its perturbative expansion with respect to $h_{R_{1}}$ \begin{align} h_{R_{2}} = h_{R_{1}} + \sum_{k=1}^{n-1} r_{k} h_{R_{1}}^{k+1} \label{eq:expand} \end{align} then the ratios do of course tend towards the asymptotic values $\displaystyle{\exp\left\{\frac{r_{1}}{2\beta_{0}}\right\}}$. The disagreement with respect to the perturbative expansion is not a problem with the lattice data or with the numerical analysis. Indeed the fits do a very good job at extracting a well-behaved coupling, as illustrated in Fig.~\ref{fig:lambda}, which displays the dimensionless scales $a\Lambda^{(4)}_{\ms}$, $a\Lambda^{(4)}_{\momg}$ and $a\Lambda^{(4)}_{\momc}$ as a function of the momentum $p$, using Eq.\,(\ref{eq:beta}) with the fitted couplings at $\beta = 6.4$ from the ghost and gluon propagators. $Z_{0}$, the other fitted parameter of Eq.\,(\ref{eq:Zmom}), is nearly independent, within a few percent, of the renormalisation scheme, as it should be in the absence of truncation. It follows that the difficulty in reproducing the asymptotic ratios between the scales of different renormalization schemes is mainly a consequence of the truncation of the perturbative series of the renormalization constants of the gluon and ghost propagators. \begin{figure}[h] \centering \psfig{figure=lambda.eps, width=10cm, height=10cm} \caption{$a\Lambda^{(4)}_{\ms}$, $a\Lambda^{(4)}_{\momg}$ and $a\Lambda^{(4)}_{\momc}$ from the gluon propagator (solid lines) and from the ghost propagator (dashed lines) at $\beta$ = 6.4, as a function of the momentum through Eq.\,(\ref{eq:beta}).} \label{fig:lambda} \end{figure} We can substantiate this claim, and estimate the rate of convergence, by the following exercise. We solve $h_{R_{2}}$ in terms of $h_{R_{1}}$ using Eq.\,(\ref{eq:Zn}) at four-loop order \begin{align} \ln\left(\frac{Z_{\Gamma,MOM}}{Z_{0}}\right) = c_{R_{2},0}\ln(h_{R_{2}}) + \sum_{k=1}^{3} c_{R_{2},k}h_{R_{2}}^{k} = c_{R_{1},0}\ln(h_{R_{1}}) + \sum_{k=1}^{3} c_{R_{1},k}h_{R_{1}}^{k} \end{align} Then we plug the solution into Eq.\,(\ref{eq:scheme}). Figure~\ref{fig:truncation} shows the behavior of the corresponding ratios, $\frac{\Lambda^{(4)}_{\ms}}{\Lambda^{(4)}_{\momg}}$ and $\frac{\Lambda^{(4)}_{\momc}}{\Lambda^{(4)}_{\momg}}$, as functions of the couplings $h_{\ms}$ and $h_{\momc}$ respectively. The effect of the truncation of the perturbative series is manifest for the $\ms$ scheme and gives the right order of magnitude of what is actually measured in Tables~\ref{tab:ms} and \ref{tab:momg}. \begin{figure}[h] \centering \psfig{figure=truncation.eps, width=10cm, height=10cm} \caption{$\frac{\Lambda^{(4)}_{\ms}}{\Lambda^{(4)}_{\momg}}$ (solid line) as a function of $h_{\ms}$ and $\frac{\Lambda^{(4)}_{\momc}}{\Lambda^{(4)}_{\momg}}$ (dashed line) as a function of $h_{\momc}$. The vertical lines delimit the values spanned by $h_{\ms}$ (dashed) and $h_{\momc}$ (dotted) in the fits of the gluon propagator at $\beta=6.4$.} \label{fig:truncation} \end{figure} \section{Conclusion} \label{sec:conclusion} We have shown that the lattice formulation of the ghost propagator has the expected perturbative behavior up to four-loop order from 2 GeV to 6 GeV.
\section{Conclusion} \label{sec:conclusion} We have shown that the lattice formulation of the ghost propagator has the expected perturbative behavior up to four-loop order from 2 GeV to 6 GeV. We have been able to go beyond the qualitative level and to produce quantitative results for the scale $\Lambda_{\ms}$ which are quite consistent with the values extracted from the lattice gluon propagator. We have understood the strong dependence of the effective $\Lambda_{\ms}$ scale upon the order of perturbation theory and upon the renormalisation scheme used for the parametrisation of the data. The perturbative series of the $\mom$ schemes seem to be asymptotic at three-loop order in the energy range we have probed, whereas the $\ms$ scheme converges very slowly. If we assume that all perturbative series remain well behaved beyond four loops above 4 GeV, then we get $\Lambda_{\ms} \approx 320$ MeV with a 10\% systematic uncertainty. The statistical errors are at the 1\% level. This value is also in good agreement with the values of $\Lambda_{\ms}$ extracted from the three-gluon vertex in a $\mom$ scheme at three-loop order \cite{LPTX98}, at the same $\beta$'s and with the same lattice sizes. On the other hand, it exceeds by 20\% the value obtained from the same vertex at $\beta=6.8$ on a $24^{4}$ lattice. This discrepancy motivated the introduction of power corrections, which are successful in describing the combined data of the three-gluon vertex \cite{BOU00}. We will show in a forthcoming paper how the power corrections can be unraveled from the lattice propagators alone. The value quoted above also exceeds by about 30\% the previous determinations of the QCD scale in the quenched approximation based on gauge-invariant definitions of the strong coupling constant \cite{ALPHA,BOO01} (note, for comparison purposes, that our physical unit corresponds to setting the force parameter $r_{0}$ \cite{NEC02} to approximately 0.53\,fm). However, there is also an uncertainty due to the use of the asymptotic one-loop relation between $\Lambda_{\ms}$ and the $\Lambda_{\text{L}}$'s. For illustration, let us consider the determination of $\Lambda_{\ms}$ using lattice perturbation theory up to three-loop order with the Wilson action \cite{GOC05}. It is possible to estimate the rate of convergence of the ratio $\frac{\Lambda^{(3)}_{\text{L}}}{\Lambda^{(3)}_{\ms}}$ as a function of the bare lattice coupling $h_{\text{L}} = \frac{6}{(4\pi)^{2}\beta}$ by inserting the perturbative expansion of $h_{\ms}$ in terms of $h_{\text{L}}$ into Eq.\,(\ref{eq:scheme}). Figure~\ref{fig:lattice} displays the evolution of this ratio and also of the ratio $\frac{\Lambda^{(3)}_{\square}}{\Lambda^{(3)}_{\ms}}$ for the so-called ``boosted'' lattice scheme, which re-expresses the lattice perturbative series as a function of the coupling $h_{\square}=h_{\text{L}}/\left<\text{plaq}\right>$. The mere truncation of the perturbative series introduces an uncertainty on the absolute scale of the lattice schemes which could be as large as 30\% in the range of $\beta$ studied in these simulations.
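The structure of this estimate can be sketched in a few lines (illustrative only; we keep just the first expansion coefficient, fixed here by the known asymptotic ratio $\Lambda_{\ms}/\Lambda_{\text{L}} \simeq 28.8$ for the quenched Wilson action, whereas the three-loop curves of Figure~\ref{fig:lattice} require the higher $r_{k}$ as well):
\begin{verbatim}
import numpy as np

beta0, beta1 = 11.0, 102.0
r1 = 2.0*beta0*np.log(28.8)  # from Lambda_MS/Lambda_L = exp(r1/(2 beta0))

def lambda_ratio(beta_lat):
    """Two-loop truncation of Lambda_MS/Lambda_L, Eq. (scheme), with
    h_MS expanded to first order in the bare coupling, Eq. (expand)."""
    hL = 6.0/((4.0*np.pi)**2 * beta_lat)   # bare lattice coupling h_L
    hMS = hL + r1*hL**2
    lnr = (1.0/(2.0*beta0))*(1.0/hL - 1.0/hMS) \
        + (beta1/(2.0*beta0**2))*np.log(hL/hMS)
    return np.exp(lnr)

# At beta = 6.0 this gives roughly 8, far below the asymptotic value
# 28.8: the truncated series converges very slowly towards it.
\end{verbatim}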
\begin{figure}[h] \centering \psfig{figure=lattice.eps, width=10cm, height=10cm} \caption{$\frac{\Lambda^{(3)}_{\text{L}}}{\Lambda^{(3)}_{\ms}}$ (lower solid line) as a function of $h_{\text{L}}$ and $\frac{\Lambda^{(3)}_{\square}}{\Lambda^{(3)}_{\ms}}$ (upper solid line) as a function of $h_{\square}$. The vertical lines (dotted) delimit the values spanned by $h_{\text{L}}$ and $h_{\square}$ in the simulations of \cite{GOC05} $(5.7 \leq \beta \leq 6.9)$. The dashed horizontal lines are the asymptotic values.} \label{fig:lattice} \end{figure} No strategy can fix the scale $\Lambda_{\text{QCD}}$ to an accuracy better than the uncertainty entailed by the truncation of the perturbative series in the conversion to the $\ms$ scheme. We have shown that this error can be larger than the main well-known sources of systematic error, which come from setting the scale $a^{-1}$ and from the continuum extrapolation. If we aim at reducing the error in the conversion of the $\mom$ schemes to the $\ms$ scheme below 10\%, then a look at Figure~\ref{fig:truncation} shows that we need to apply a cut at 6 GeV. Such an analysis would require simulations at $\beta=6.6$ and $\beta=6.8$ on $48^{4}$ and $64^{4}$ lattices respectively, to work at fixed physical volume and minimize finite-size effects (a rough scaling check of this matching is sketched below). The existence of several lattice observables (gluon propagator, ghost propagator, three-gluon vertex) from which one can extract independent values of the scale $\Lambda_{\text{QCD}}$, an advantage of the Green function approach, should then allow one to disentangle unambiguously the effects of the truncation of the perturbative series from the non-perturbative corrections, and to obtain a value of $\Lambda_{\ms}$ with a true 10\% accuracy.
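For the fixed-volume matching, a rough estimate based only on the asymptotic two-loop scaling of the lattice spacing (which is known to be only approximate at these couplings) compares the linear extents $L\,a$ in units of $1/\Lambda_{\text{L}}$:
\begin{verbatim}
import numpy as np

beta0, beta1 = 11.0, 102.0

def a_lambda(beta_lat):
    """Asymptotic two-loop estimate of a*Lambda_L from the bare
    coupling h_L = 6/((4 pi)^2 beta); a rough guide only."""
    h = 6.0/((4.0*np.pi)**2 * beta_lat)
    return np.exp(-1.0/(2.0*beta0*h)) * (beta0*h)**(-beta1/(2.0*beta0**2))

# Linear extents L*a in units of 1/Lambda_L:
for L, b in [(48, 6.6), (64, 6.8)]:
    print(L, b, L*a_lambda(b))
# The two extents agree to about 6% in this estimate, i.e. the two
# geometries correspond to nearly the same physical volume.
\end{verbatim}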
{ "redpajama_set_name": "RedPajamaArXiv" }